Create Sound Synchronization Magic in Flash, Part 2

In part 1 of this article, I showed you how to use a great third-party utility, FlashAmp, to create a simple amplitude array, and how to use amplitude and spectrum data in a variety of ways within Flash. These techniques allow you to create a bevy of neat effects that manipulate visual assets dynamically in time with audio.

In this article, I’ll build on that information by discussing the remaining FlashAmp settings, how to use external as well as internal sounds in your files, how to access the FlashAmp arrays properly even when your sound playback doesn’t begin in frame 1, and a few tips and tricks for optimizing your data.

Using FlashAmp
Having discussed three different examples of FlashAmp output in action in part 1 of this article, I’d like to show you how simple it is to use FlashAmp itself. In short, all you need to do is configure the settings in the Settings pane, choose the file you want to analyze in the Input pane, choose where you want to save your preferred text format in the Output pane, and process the file (see Figure 1).

Common to Amplitude and Spectrum Analysis
FlashAmp requires few settings to work its magic. The settings common to both amplitude and spectrum analysis are very straightforward.

1) Frame Rate: As previously discussed, for proper sync, FlashAmp must provide as many values as there are frames per second during audio playback. If you have a 1-second song, and you are using the default Flash frame rate of 12 fps, the FlashAmp array must have 12 indices. Be sure the FlashAmp frame rate setting matches your Flash fps, or poor sync is guaranteed.
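To sanity-check this setting, remember that the array length must equal the sound's duration multiplied by the frame rate. In modern JavaScript (a stand-in for the article's ActionScript-era syntax; the helper name is illustrative), that check might look like this:

```javascript
// Hypothetical helper: how many values FlashAmp must emit for a
// sound to stay in sync with the Flash timeline (one per frame).
function indicesNeeded(durationSeconds, fps) {
  return Math.ceil(durationSeconds * fps);
}

indicesNeeded(1, 12);   // a 1-second sound at 12 fps needs 12 indices
indicesNeeded(2.5, 12); // a 2.5-second sound at 12 fps needs 30
```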

2) Scale: This arbitrary range should be set to what is most helpful to your Flash needs. We’ve discussed the benefits of a scale of 100 already. As another example, if you wanted to work with color values, a scale of 255 might be useful.
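As a sketch of the color case, here is how a value on a 0-255 scale could drive a color channel directly (JavaScript stand-in; the function name and the red-channel choice are illustrative, not part of FlashAmp):

```javascript
// Map a FlashAmp amplitude on a 0-255 scale to the red channel
// of an 0xRRGGBB color value (clamping guards out-of-range input).
function amplitudeToRed(amp) {
  var r = Math.max(0, Math.min(255, Math.round(amp)));
  return r << 16; // green and blue channels stay 0
}

amplitudeToRed(255); // 0xff0000, full red at full amplitude
amplitudeToRed(0);   // 0x000000, black at silence
```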

Figure 1. Setting Amplitude: The color numbers in this screen shot of the FlashAmp interface correspond to amplitude settings (green, 6-7), spectrum settings (red, 8-9) and settings common to both (blue, 1-5).

3) Stereo: Processing a stereo sound for stereo output will generate two similar lists of values: one for the left channel and one for the right channel. This allows you to use each array discretely to show the result of stereo separation in your visuals.

4) Normalize: This instructs FlashAmp to normalize its values before creating an output file. Normalizing an audio file will increase its volume to the loudest possible point without clipping (distorting) any part of it. This setting only affects the FlashAmp data, making the visual changes more useful, but still consistent with the sound. FlashAmp does not reprocess the audio file itself. Note that normalizing raises the amplitude data of the entire output, so it can also increase noise in the data. Where you may be expecting values of zero, you may get low positive values instead. This may be obvious in the jittering of peak meters during quiet passages.
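A minimal sketch of what normalizing the data (not the audio) does, assuming a simple peak-based scaling; FlashAmp's exact math may differ:

```javascript
// Scale every amplitude value so the loudest frame reaches the
// top of the chosen scale; silence (all zeros) is left untouched.
function normalizeData(values, scale) {
  var peak = Math.max.apply(null, values);
  if (peak === 0) return values.slice();
  return values.map(function (v) {
    return Math.round((v / peak) * scale);
  });
}

normalizeData([0, 5, 10, 2], 100); // [0, 50, 100, 20]
```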

5) Cue Points: Cue points require a bit of additional explanation, so I’ll discuss them in their own section, immediately after concluding my overview of the Settings panel.

Unique to Amplitude Analysis
6) Smoothing: Use this setting when you want the changes between values to be a little smoother, rather than hyper-accurate data that may jump dramatically from frame to frame. This can help animations appear less jittery or halting. I’ve provided an example of this in the “speaker.fla” source file.
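FlashAmp's exact smoothing algorithm isn't documented here, but a simple moving average illustrates the effect: each value is replaced by the mean of itself and its neighbors (JavaScript stand-in; the function name is illustrative):

```javascript
// Smooth an amplitude array with a centered moving average;
// windowSize is the number of values averaged (3 = self + neighbors).
function smoothValues(values, windowSize) {
  var half = Math.floor(windowSize / 2);
  return values.map(function (ignored, i) {
    var start = Math.max(0, i - half);
    var end = Math.min(values.length, i + half + 1);
    var sum = 0;
    for (var j = start; j < end; j++) sum += values[j];
    return sum / (end - start);
  });
}

// The raw data alternates sharply; the smoothed data does not.
smoothValues([0, 100, 0, 100, 0], 3); // [50, 33.3..., 66.6..., 33.3..., 50]
```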

7) Return dB (decibel) Scale: This setting limits responses to those within the typical human hearing range. Because typical relative dB values are negative (as on a mixer), which is not terribly useful in Flash, FlashAmp inverts the scale so that 0 is loudest and everything around 50 or above is very quiet.


Unique to Spectrum Analysis
8) Bands: This menu allows you to choose how many divisions of frequencies FlashAmp will create values for. By default, the bands divide the frequency range of the sound into equal ranges from 20 Hz to half the sample rate of the sound. For example, a 16-band analysis for a sound with a sample rate of 22050 Hz will divide a range of 20 Hz to 11025 Hz into 16 divisions.

9) Logarithmic Bands: This option allows you to divide the total frequency range logarithmically, rather than linearly, for distribution among the bands. In most instances, this is used to achieve a more even distribution of frequency data. (The sound remains unaffected.) If you find that your analysis yielded mostly zeros in the upper frequencies, this option can help redistribute those data values a bit for a more pleasing visual effect.
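The two band layouts can be sketched as follows (a JavaScript illustration of the division described above, not FlashAmp's exact internals):

```javascript
// Band edges from 20 Hz up to half the sample rate (the Nyquist
// frequency), divided either linearly or logarithmically.
function linearBandEdges(sampleRate, bands) {
  var lo = 20, hi = sampleRate / 2;
  var step = (hi - lo) / bands;
  var edges = [];
  for (var i = 0; i <= bands; i++) edges.push(lo + i * step);
  return edges;
}

function logBandEdges(sampleRate, bands) {
  var lo = 20, hi = sampleRate / 2;
  var ratio = Math.pow(hi / lo, 1 / bands);
  var edges = [];
  for (var i = 0; i <= bands; i++) edges.push(lo * Math.pow(ratio, i));
  return edges;
}

linearBandEdges(22050, 4); // [20, 2771.25, 5522.5, 8273.75, 11025]
// logBandEdges gives far more resolution to the low end, where most
// musical energy lives: roughly [20, 97, 470, 2275, 11025]
```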

1) Frame Rate Revisited: When generating spectrum data, FlashAmp needs at least 1024 frequency samples per frame in order to perform its analysis. As such, there is a limit to the number of frames per second that can be used for each sample rate. For 44.1 kHz sounds, a maximum of 43 fps can be used in your Flash file. 21 fps is the limit for 22.050 kHz sounds, and 11.025 kHz sounds cannot exceed 10 fps. This is not likely to be an issue for higher sample rates because the Flash Player performance is not likely to achieve frame rates higher than these limits. However, if you are forced to use 11.025 kHz sounds, be sure you don’t exceed 10 fps in your animations.
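These limits follow directly from the 1024-samples-per-frame requirement; a quick check (the function name is illustrative):

```javascript
// Maximum usable Flash frame rate for spectrum analysis: FlashAmp
// needs at least 1024 audio samples per frame of playback.
function maxSpectrumFps(sampleRate) {
  return Math.floor(sampleRate / 1024);
}

maxSpectrumFps(44100); // 43
maxSpectrumFps(22050); // 21
maxSpectrumFps(11025); // 10
```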

Cue Points
In both amplitude and spectrum analysis, FlashAmp can read cue points embedded into an audio file, and can make that data available to Flash (which is great because Flash doesn’t directly support audio cue points). Within its output, FlashAmp will create a “parallel” array (a separate array with the same number of indices), so you can easily look for cue point values at the same frame rate as your Flash file plays. This means you can write a simple script that will make it possible to take advantage of these audio markers.

For simplicity, I’ve created two arrays based on a 5-second sound and a frame rate of 1 fps. (Again, this slow frame rate helps simplify the example.) The first array is an amplitude list; the second is a cue point list.

amplitude = [0, 10, 7, 5, 2]
cuePoints = ["", "about", "", "", "contact"]

By associating an array index with a Flash frame (offsetting the current frame by 1, as discussed earlier), you can see that this audio file is quiet in frames 1 and 5, and loudest in frames 2 and 3. You can also see that the sound contains cue points, “about” and “contact,” at 2 and 5 seconds, respectively. Using this information, you can easily instruct a MovieClip to navigate to a frame marker with the same name as the cue point. Assuming this sound is playing from frame 1, the code would look like this:

on (enterFrame) {
    thisCuePoint = _root.cuePoints[_root._currentframe - 1];
    // if the value is not empty, tell a movie clip to go to
    //   the frame marker of the same name
    if (thisCuePoint != "") {
        displayMC.gotoAndStop(thisCuePoint);
    }
}
Author’s Note: To follow along with this article, download the source code, including low-resolution audio files. If you want to try to analyze the sounds used herein yourself, you may also want to grab the high resolution audio files in an optional separate download.

Input File Formats
FlashAmp supports compressed or uncompressed AIFF and WAV files, as well as SWA (Shockwave Audio) and MP3 files. Both 16- and 8-bit sound files are supported, as well as the most common sample rates: 44.100, 22.050, 11.025, and 7.418 kHz. If you are using MP3 files, they must be encoded using Constant Bit Rate encoding (CBR), and stereo files must use normal stereo mode. Neither Variable Bit Rate encoding (VBR) nor Joint Stereo formats are supported at this time.


Output File Formats
FlashAmp can output data in three ways. First, it can display the data within FlashAmp so you can copy it and paste it into Flash. This is handy if your sound and data are both internal within Flash. Second, it can output the data as a text file formatted for the loadVariables() ActionScript method. This method loads data as name/value pairs of text strings, so FlashAmp will format its output this way to make it compatible. Within your scripts, you can then process the data and convert the strings into usable data types. This is useful when you wish to dynamically load data from a remote server. Third, FlashAmp can output the data as a text file formatted for the #include ActionScript compiler directive. This is the best way to handle external data and/or sounds when you do not need to load data from a remote server. The file is, essentially, an external ActionScript file, so it is easy to read, easy to update, and doesn’t clutter up your Flash scripting panel with code you are not likely to review repeatedly.
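As a sketch of the loadVariables() route, the name/value text can be converted back into numeric arrays once it arrives. The variable name and comma-separated layout below are assumptions for illustration, not FlashAmp's documented file format (JavaScript stand-in):

```javascript
// Parse "name=v1,v2,...&name2=..." text into arrays of numbers,
// the kind of string-to-data conversion the article describes.
function parseNameValuePairs(text) {
  var result = {};
  text.split("&").forEach(function (pair) {
    var eq = pair.indexOf("=");
    if (eq < 0) return;
    result[pair.slice(0, eq)] = pair.slice(eq + 1).split(",").map(Number);
  });
  return result;
}

var data = parseNameValuePairs("amplitude=0,10,7,5,2");
data.amplitude[1]; // 10, the value for frame 2
```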

Internal vs. External Sounds
As you can see by the output file format information, FlashAmp data is compatible with the use of internal or external sounds in your Flash file. Internal sounds can appear in a timeline, or be used from the library using the Flash sound object and the attachSound() method. External MP3 files can be loaded using the Flash sound object and the loadSound() method. Each approach has its pros and cons, so I’ll discuss them briefly with some code examples.

Internal Sounds, Using the Timeline
This is the old-school approach of simply importing a sound and adding it to a timeline frame span using the Property inspector. As long as the sound’s sync option is set to Stream, sync will be maintained, and you can use the timeline’s _currentframe property to reference the correct index of the FlashAmp array. Remember that ActionScript uses zero-based arrays, and that the syntax we’re familiar with from our previous discussions assumes a sound starting in frame 1. Looking at this sample line of script, you see we take the current frame of 1, subtract 1, and end up with array index 0, as we should.

this._alpha = amplitude[_root._currentframe-1]

This same technique can be used if you don’t start the sound in frame 1. For example, if you start the sound in frame 20, the script would look like this:

this._alpha = amplitude[_root._currentframe-20]

The benefits of using internal timeline sounds are that everything is internal (making it easier to distribute files without concerns about pathnames), and you can start and stop the sound without losing sync. If you place the sound in the root timeline, the Stream setting will allow the sound to play during download. (You can still use a timeline in a MovieClip, but the MovieClip will need to download fully before the sound can begin.) The disadvantages of using timeline-based sounds are that keeping everything internal makes it more difficult to make changes to the sound (the sound must be updated or re-imported), you can’t switch sounds dynamically, and you can’t use sound object properties such as volume and pan.

Internal Sounds, Using the Flash Sound Object
This approach involves attaching an imported sound to a Flash sound object using the attachSound() method and its Linkage name, and starting the sound playback when you need it. You can use a _currentframe property or a variable to walk through the FlashAmp array.


The benefits of using the sound object with internal sounds are that you have full access to the sound object properties (such as volume and pan) and, as long as other sounds with Linkage names exist in the file’s Library, you can dynamically switch sounds. The disadvantages are that you must update or re-import sounds if edits are made, and that sounds controlled by the sound object must fully download before they can play.

External Sounds, Using the Flash Sound Object
Using external MP3 sounds requires essentially the same Flash approach as using internal sounds with the Sound object, but it’s even simpler. Instead of importing the sound and assigning a Linkage name, you simply use the loadSound() method and supply a pathname to the file. Everything else remains the same on the sound playback side, including access to volume and pan through the Sound object. However, there’s one wrinkle when using the FlashAmp array.

Because the sound is external, you’ll need to use the position property of the Sound object the way you normally use a timeline’s current frame. Sound.position is measured in milliseconds, so the goal is to arrive at a frame rate equivalent value. This is accomplished by dividing the current Sound.position millisecond value by 1000, and multiplying it by your frame rate.

For example, using a 12 fps file and a sound’s elapsed time of 1 second (when the Sound.position value would be 1000), the equation would be 1000/1000*12. This equals 12, which is where you should be at one second in a 12 fps movie. Using a fraction of a second, however, will yield a decimal value. At one-tenth of a second, the equation would be (100/1000) * 12, which yields 1.2. However, you don’t typically want decimal values in these situations, so round the number to an integer. It helps to round down when dealing with sound values, because that way you’re always sure to get zero values during silences. So, instead of using Math.round(), use Math.floor(), which rounds down to the nearest integer even if the value to the right of the decimal point is .5 or higher. An entire script might look like this:

onClipEvent (load) {
    // set up a sound object
    faTrack = new Sound();
    // the movie's frame rate; this must match the FlashAmp setting
    fps = 12;
    // load the external sound (streaming, so it plays as it downloads)
    faTrack.loadSound("mySound.mp3", true);
    // play the sound
    faTrack.start();
}
onClipEvent (enterFrame) {
    // get the current frame equivalent of an external
    //   sound by dividing the current playback position
    //   of the sound by 1000, multiplying by the Flash
    //   frame rate, and rounding down
    myFrame = Math.floor((this.faTrack.position / 1000) * fps);
    // retrieve the corresponding value from the FlashAmp array
    this._alpha = amplitude[myFrame];
}

The benefits of using external sounds include access to the Sound object properties and methods, a greater ease in switching sounds on the fly, and the convenience of simply replacing the sound file if edits are required. The downside is that Flash has some problems tracking the sound position of external sounds if you stop and start playback. Sync will be maintained if the sound is played straight through, but if the sound is stopped and started again from the beginning, Flash will not automatically reset the Sound.position value and you will not be able to maintain sync with the FlashAmp array. The workaround is to simply void the Sound object when the sound is stopped and create a new Sound object when you want to play the sound again.

Much, Much More
There are many more creative, impressive examples of FlashAmp projects on the Marmalade Multimedia Web site. The Related Resources section of this article (see left column) includes links to these examples, as well as other sources of additional information. Whether you are an artist or a programmer, I hope this look at Flash sound synchronization has inspired you to use sound in creative ways. Send me links to your efforts!

