Ms. Shape says, "I am a talking shape. Thanks to Blender 3D, I can talk like this!" in the video. She was modeled, animated, and rendered in Blender version 2.70, the open source 3D graphics and animation software.
http://www.youtube.com/watch?v=CSB0WJYn2ow
And for her voice, Festival was used: a general speech synthesis system developed at the Centre for Speech Technology Research at the University of Edinburgh.
All made possible through Linux.
http://www.youtube.com/watch?v=Fg8K2BjVNAU
Remind me not to have a conversation with the pink one, she doesn't agree with anything ;D
:sign-great-job:
Both of these are at the beginner level, which uses what Blender calls "f-curves". The beginner level is limited but far less involved; it uses three different panels in Blender (2.70): the Graph Editor, the Video Sequence Editor, and the 3D View. There is an intermediate level called "drivers", which gets even more involved. That is next on the agenda, and involves some Python code.
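To give a rough idea of what that driver setup might look like, here is a minimal Blender Python (bpy) sketch for 2.7x. The object names ("MsShape" and "VolumeControl") and the scaling factor are placeholders for illustration, not the actual rig:

```python
import bpy

mouth = bpy.data.objects["MsShape"]          # mesh whose mouth opens and closes (placeholder name)
control = bpy.data.objects["VolumeControl"]  # empty whose Z location holds the baked volume curve (placeholder name)

# Add a scripted driver to the mouth object's Z location (index 2 = Z)
fcu = mouth.driver_add("location", 2)
drv = fcu.driver
drv.type = 'SCRIPTED'

# Driver variable that reads the control empty's Z location every frame
var = drv.variables.new()
var.name = "vol"
var.type = 'SINGLE_PROP'
var.targets[0].id = control
var.targets[0].data_path = "location.z"

# Remap the raw volume level into a mouth-opening amount
drv.expression = "0.5 * vol"
```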
As for Festival text-to-speech, I figured out how to insert pauses between the phrases using SoX. The voice is generated by a bash script, which does everything automatically and saves a final stereo audio file to insert into Blender, where it gets baked into an animation on the Z axis. Simply put, 3D has three axes: X, Y and Z. For the sake of our discussion, Z is the up-and-down axis, so the different volume levels in the audio file roughly map to the mouth opening and closing.
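For anyone curious, here is a rough Python sketch of that kind of Festival + SoX pipeline (my actual script is bash, so treat this as an illustration); the phrase text, pause length, and file names are placeholders:

```python
#!/usr/bin/env python3
import subprocess

phrases = ["I am a talking shape.",
           "Thanks to Blender 3D, I can talk like this!"]
pause = 0.4   # seconds of silence appended after each phrase (placeholder value)
pieces = []

for i, text in enumerate(phrases):
    wav = "phrase_%02d.wav" % i
    # Festival's text2wave reads text from stdin and writes a mono WAV file
    subprocess.run(["text2wave", "-o", wav], input=text.encode(), check=True)
    # SoX pads the end of the phrase with silence to create the pause
    padded = "phrase_%02d_pad.wav" % i
    subprocess.run(["sox", wav, padded, "pad", "0", str(pause)], check=True)
    pieces.append(padded)

# Concatenate all phrases and up-mix to stereo for the Video Sequence Editor
subprocess.run(["sox"] + pieces + ["-c", "2", "voice_stereo.wav"], check=True)
```

In Blender, the final WAV goes into the Video Sequence Editor, and its volume envelope gets baked onto the Z location f-curve via the Graph Editor's "Bake Sound to F-Curves" step.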
Mapping volume to mouth movement like this is sometimes considered an easier alternative to mapping phonemes (sounds) to visemes (lip postures). Still, it is enjoyable to play around with, and this is turning out to be a fun Blender 3D pastime because there is room to grow!
Just goes to show a lot can be done with open source software :thumbsup: