Sneaks is live at #AdobeMAX! We're rolling out the exciting projects our incredible Adobe engineers and research scientists have been cooking up behind the scenes. Keep it locked here for updates ⤵️⤵️⤵️ pic.twitter.com/8xlAtrhzwU
— Adobe Creative Cloud (@creativecloud) October 15, 2024
Adobe has unveiled Project Super Sonic, an experimental prototype that demonstrates how users could generate background audio and sound effects for video projects using text-to-audio, object recognition, and their own voice. The tool offers three modes for creating soundtracks. The first mode uses text prompts to generate audio effects.
The second mode leverages object recognition models, allowing users to click on any part of a video frame, create a prompt, and generate the corresponding sound. The third mode lets users record themselves imitating the desired sounds, timed to the video, and Project Super Sonic then generates the appropriate audio automatically.
Project Super Sonic modes explained
Justin Salamon, head of Sound Design AI at Adobe, explained that the team began with the text-to-audio model, trained only on licensed data, with the goal of giving users control over the sound generation process. For vocal control, the tool analyzes characteristics of the user's voice and the spectrum of the sounds they make, then uses that analysis to guide generation. Users can also clap their hands or play an instrument to steer the process.
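To make the voice-driven mode more concrete, here is a rough, hypothetical sketch of the kind of spectral analysis such a tool might perform on a vocal imitation: framing the recording, taking per-frame magnitude spectra (timbre), and per-frame energy (timing). This is an illustration only, not Adobe's actual method, and the function name and parameters are invented for the example.

```python
import numpy as np

def spectral_envelope(signal, sr=16000, frame_len=1024, hop=512):
    """Frame a mono signal and compute per-frame magnitude spectra.

    Hypothetical illustration of features a voice-driven sound
    generator might use as a timing and timbre guide.
    """
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    window = np.hanning(frame_len)
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    # Magnitude spectrum per frame: captures the "shape" of the sound
    mags = np.abs(np.fft.rfft(frames, axis=1))
    # Per-frame energy: captures when and how hard the sound happens
    energy = mags.sum(axis=1)
    return mags, energy

# Example: a 0.5 s, 440 Hz hum standing in for a vocal imitation
sr = 16000
t = np.arange(int(0.5 * sr)) / sr
hum = np.sin(2 * np.pi * 440 * t)
mags, energy = spectral_envelope(hum, sr=sr)
peak_hz = mags.mean(axis=0).argmax() * sr / 1024
print(round(peak_hz, 1))  # close to the 440 Hz hum
```

In a real system, features like these would condition a generative audio model so the output matches the rhythm and rough timbre of the imitation while replacing it with a realistic sound effect.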
Project Super Sonic was showcased as one of the "sneaks" at Adobe MAX, experimental features the company is working on. While there's no guarantee these projects will make it into Adobe's Creative Cloud apps, Project Super Sonic has a promising pedigree: the same team worked on the audio side of Generative Extend, the Firefly-powered feature that extends short video clips along with their audio tracks. For now, Project Super Sonic remains a demo, but it shows how AI could streamline video production by simplifying the creation of background audio and sound effects.
Cameron is a highly regarded contributor in the rapidly evolving fields of artificial intelligence (AI) and machine learning. His articles delve into the theoretical underpinnings of AI, the practical applications of machine learning across industries, ethical considerations of autonomous systems, and the societal impacts of these disruptive technologies.