Lip Sync by Jumping to Frames
For one final example, I want to look at a great lip sync technique using a sound's amplitude. In this article, I will not examine the high-quality, labor-intensive approach of matching phonetic speech sounds with specific mouth positions. Instead, for a more practical approach suited to average needs, I will show how to play a "mouth animation" while text is being spoken. While it is not possible to match mouth positions to phonetics using FlashAmp, it is a good idea to vary the mouth positions by more than just the degree to which the mouth is open. This prevents your animated characters from having flapping jaws like the Team America marionettes. The example source file, "lipsync.fla," uses the 11-frame mouth animation shown in Figure 3.
Figure 3. Easy lip synchronization is achieved when this 11-frame sequence of mouth positions is manipulated by a sound's amplitude values.
Traditional lip sync methods call for breaking the audio file into sections of speech and pauses, playing a sequential (or perhaps randomized) animation during speech and stopping it during pauses. In addition to being tedious, however, this approach makes changes difficult: often, entire passages must be reanimated to match new audio.
Using FlashAmp, however, you can use a sound's amplitude to display different mouth positions, so a character speaks only when the voice is loud enough to hear. You can therefore play a single sound for as long as you need, with one simple script that sends the mouth animation to the frame indicated by the array. To accomplish this, we'll use a scale of 10, instead of 100, because we want to send the "mouth" MovieClip to one of the 11 positions pictured above. This example array again uses a demonstration two-second sound at Flash's default 12 fps:
amplitude = [2, 6, 6, 4, 5, 3, 1, 5, 4, 5, 3, 1, 0, 0, 0, 0, 3, 3, 3, 2, 4, 3, 3, 3]
You can see that various mouth positions are being displayed, based on how loud the character is speaking, as the sound progresses. You can even see a pause between phrases, indicated by the four consecutive zeros toward the center of the array.
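A quick sanity check makes this structure visible. The sketch below uses plain JavaScript (a stand-in for the Flash environment, since FlashAmp itself generates the array): a two-second sound at 12 fps should yield 24 entries, and the pause between phrases appears as a run of zeros.

```javascript
// FlashAmp-style amplitude array for a 2-second sound at 12 fps:
// 2 seconds * 12 fps = 24 entries, one value per movie frame.
const amplitude = [2, 6, 6, 4, 5, 3, 1, 5, 4, 5, 3, 1, 0, 0, 0, 0, 3, 3, 3, 2, 4, 3, 3, 3];

console.log(amplitude.length);     // 24 — one value per movie frame
// The pause between phrases shows up as consecutive zeros:
console.log(amplitude.indexOf(0)); // 12 — silence begins at movie frame 13
```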
The script on the mouth MovieClip in frame 1 would read something like this:
myFrame = _root.amplitude[_root._currentframe - 1] + 1;
this.gotoAndStop(myFrame);
While we still use our standard frame offset compensation to get the correct index value for the array (in this case, subtracting one because the sound starts in frame one), you may notice an additional "+ 1" at the end of the first line of the script. This is because using a scale of 10 to create our array can produce values from 0 to 10, and we're working with frame numbers, where there is no frame 0. By adding 1 to the value, a minimum amplitude of 0 yields frame 1, and a maximum of 10 yields frame 11. This gives you more control than relying on Flash's defaults, which jump to frame 1 if you specify frame 0, or to the last frame of a clip if you specify a higher frame number.
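The same offset arithmetic can be simulated outside Flash. This JavaScript sketch mirrors the ActionScript logic (the function name `mouthFrame` is illustrative, not part of FlashAmp):

```javascript
// FlashAmp amplitude array on a scale of 10 (values 0..10), one per movie frame.
const amplitude = [2, 6, 6, 4, 5, 3, 1, 5, 4, 5, 3, 1, 0, 0, 0, 0, 3, 3, 3, 2, 4, 3, 3, 3];

// Mirror of the ActionScript: subtract 1 to index the array (the sound
// starts in frame 1), then add 1 so amplitudes 0..10 map onto mouth
// clip frames 1..11 (Flash has no frame 0).
function mouthFrame(currentFrame) {
  return amplitude[currentFrame - 1] + 1;
}

console.log(mouthFrame(2));  // 7 — loud: a wide-open mouth position
console.log(mouthFrame(13)); // 1 — silence: the closed-mouth position
```

Note that because the array values never exceed 10, the result never exceeds frame 11, so the mouth clip is always sent to a valid frame.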
In part 1 of this article, I showed you how to use FlashAmp to create a simple amplitude array, and how to use amplitude and spectrum data in a variety of ways within Flash. The source code that accompanies this article includes amplitude and frequency visualizations, as well as an amplitude lip sync example. Next month, I'll discuss the remaining FlashAmp settings, how to use external as well as internal sounds in your files, how to access the FlashAmp arrays properly even when your sound playback doesn't begin in frame 1, and a few tips and tricks for optimizing your data.