Create Sound Synchronization Magic in Flash, Part 1

Remember the speaker cabinets from the '70s, with lights that throbbed to the music? Using a powerful third-party utility for Flash, you can create games and animations that use rapid audio synchronization to drive assets with sound instead of programming.


For basic sound synchronization needs, Flash offers two sound settings that give users a choice in how to match audio and graphics sequences. "Event" sounds are triggered when the Flash playback head reaches the frame in which the sound is placed. This is useful for short sounds and simple sound effects, for example a ball-bounce sound placed in the frame where a bouncing ball hits the floor. Event sounds must be fully loaded before they can play. "Stream" sounds allow Flash to synchronize sound more like video software: when an animation soundtrack is set to stream, Flash drops frames to keep the animation in sync with the sound. Stream sounds, as the name implies, can also play during download. These options handle sounds well when basic running synchronization is all you need, but what about situations requiring more control over audio sync? For example, what if you want to use programming to synchronize sound with animation, such as syncing a voice-over presentation with its accompanying visuals, using ActionScript instead of the timeline?

A common method of accomplishing this goal is to break the audio into small chunks and import them into separate frame spans that correspond to changes in the visuals. Using Flash MX or later, it is also possible to use the onSoundComplete event handler to execute code when a sound finishes playing. However, chopping sounds into small pieces this way is tedious and requires a lot of asset management. While this and similar methods solve some of the synchronization issues developers face, none of them handles rapid events (many changes per second) particularly well. Perhaps more importantly, none of these methods allows for more advanced audio sync techniques, such as linking assets to a sound's volume, or to its bass, midrange, and treble frequencies.
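For reference, here is a minimal ActionScript 2 sketch of the onSoundComplete approach. The linkage IDs ("chunk1_snd", "chunk2_snd", "chunk3_snd") and the movie clip name (slides_mc) are hypothetical placeholders, not names taken from the downloadable source:

// Play a voice-over in chunks, advancing the visuals as each chunk ends.
var chunks:Array = ["chunk1_snd", "chunk2_snd", "chunk3_snd"];
var current:Number = 0;

var narration:Sound = new Sound(this);

function playChunk(index:Number):Void {
    narration.attachSound(chunks[index]);
    slides_mc.gotoAndStop(index + 1);   // show the visual for this chunk
    narration.start();
}

narration.onSoundComplete = function():Void {
    current++;
    if (current < chunks.length) {
        playChunk(current);             // advance when the current chunk finishes
    }
};

playChunk(current);

Even this small example hints at the management problem: every new chunk means another exported sound and another visual state to keep in step.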

New Sync Possibilities
Taking that idea a step further, what if you could pre-process a sound and get back a simple array of amplitude (volume) values, or even up to 16 bands of frequency data, to react to? If this array included one value for each frame, at the frame rate your Flash movie uses, a simple enterFrame event could retrieve each corresponding value, allowing you to manipulate your file in exact sync with the sound's sonic characteristics. That's exactly what I'm going to show you how to do in this article. The ease with which this can be accomplished makes it feasible for designers with little scripting experience to code dynamic animations driven by sound. Another, equally important, point is that these same techniques give programmers who never considered themselves designers a chance to create simple but compelling sound-based designs.
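The per-frame lookup itself takes only a few lines. The sketch below assumes "amps" is a pre-generated array of amplitude values (0-100), one entry per frame of the movie, and that a movie clip named meter_mc and a library sound with the linkage ID "music_snd" exist; all of those names and the sample values are illustrative placeholders:

// Read one pre-analyzed amplitude value per frame and drive a clip with it.
var amps:Array = [0, 12, 35, 80, 64, 22, 5];   // placeholder; one value per frame

var track:Sound = new Sound(this);
track.attachSound("music_snd");
track.start();

var frameIndex:Number = 0;
this.onEnterFrame = function():Void {
    if (frameIndex < amps.length) {
        // Scale and fade the clip in step with the soundtrack's volume.
        meter_mc._xscale = meter_mc._yscale = 100 + amps[frameIndex];
        meter_mc._alpha = amps[frameIndex];
        frameIndex++;
    }
};

Because the array was built at the movie's frame rate, the enterFrame handler never has to measure anything at runtime; it just reads the next value.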



Just about any user can create beautiful sound visualizations simply by scaling shapes or changing their transparency to the beat of a soundtrack. Remember the speaker cabinets in the '70s with lights that throbbed to the music? Think of how much easier it could be to bring games or linear animations to life by driving assets with sound rather than with additional programming. Remember the electronic game from the '70s called Simon? The programming beauty of Simon is that an essentially limitless sequence of tones/shapes can be created just by adding a random number each round. What if that process of creating the tones/shapes weren't random, but instead were linked to pre-recorded music? A musical sequence with four discrete frequencies is played at whatever pitches the designer chooses; the pre-analyzed sound triggers the shape changes, and the user is left to imitate the sequence. Adding more frequencies to the mix (up to 16) gets you close to a more sophisticated game, such as Konami's arcade hit Dance Dance Revolution.
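The Simon-style idea might look something like the following sketch. It assumes "bands" is a pre-analyzed array with one entry per frame, each entry holding four frequency-band values (0-100), and that four movie clips named shape0_mc through shape3_mc sit on the Stage; the data and names are hypothetical placeholders:

// Light each shape only while its frequency band rises above a threshold.
var bands:Array = [[0, 0, 0, 0], [80, 5, 0, 10], [10, 90, 0, 0]];   // placeholder data
var threshold:Number = 50;

var f:Number = 0;
this.onEnterFrame = function():Void {
    if (f >= bands.length) {
        return;
    }
    for (var i:Number = 0; i < 4; i++) {
        this["shape" + i + "_mc"]._alpha = (bands[f][i] > threshold) ? 100 : 20;
    }
    f++;
};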

A much simpler, but still impressive, example might be something similar to the popular Nintendo game Donkey Konga. The drum sounds and synchronized visual cues can be triggered with a few lines of script simply by pre-analyzing a stereo drum track: a drum in the left channel corresponds to the left on-screen drum, and a drum in the right channel corresponds to the right on-screen drum. Playing the song triggers the audio and visual cues, giving the user a sequence to match.
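A sketch of that stereo idea follows. It assumes "leftAmps" and "rightAmps" are pre-analyzed per-frame amplitude arrays for the left and right channels, that leftDrum_mc and rightDrum_mc each contain a frame labeled "hit", and that the drum track is exported with the linkage ID "drumTrack_snd"; all of these names and values are illustrative placeholders:

// Treat a per-channel amplitude peak as a drum hit on that side of the screen.
var leftAmps:Array  = [0, 85, 10, 0, 0, 90];    // placeholder data
var rightAmps:Array = [0, 0, 0, 88, 12, 0];     // placeholder data
var hitLevel:Number = 75;

var song:Sound = new Sound(this);
song.attachSound("drumTrack_snd");
song.start();

var i:Number = 0;
this.onEnterFrame = function():Void {
    if (i >= leftAmps.length) {
        return;
    }
    if (leftAmps[i] > hitLevel) {
        leftDrum_mc.gotoAndPlay("hit");
    }
    if (rightAmps[i] > hitLevel) {
        rightDrum_mc.gotoAndPlay("hit");
    }
    i++;
};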

Author's Note: To follow along with this article, download the source code, which includes low-resolution audio files. If you want to try analyzing the sounds used here yourself, you may also want to grab the high-resolution audio files in an optional separate download.


