Adobe Inc.

09/11/2024 | News release

Bringing generative AI to video with Adobe Firefly Video Model

We first launched Adobe Firefly in March 2023, and since then we've delivered rapid innovation with new models for imaging, design and vectors. These Firefly models have quickly grown to power some of the most popular features across Creative Cloud and Express, like Generative Fill in Photoshop, Generative Remove in Lightroom, Generative Shape Fill in Illustrator and Text-to-Template in Express. Along the way, we've received incredible feedback from the creative community and enterprise customers alike - and in total our community has generated over 12 billion images and vectors, making Firefly and the features it powers some of the fastest adopted by our community.

We all know that video is the currency of engagement today - and we're excited to share a peek at the upcoming Firefly Video Model and some of the revolutionary professional workflows it'll power in our industry-leading video tools like Premiere Pro, available starting in beta later this year.

Over the past several months, we've worked closely with the video editing community to advance the Firefly Video Model. Guided by their feedback and built with creators' rights in mind, we're developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage.

Just as with our other Firefly generative AI models, editors can create with confidence knowing the Adobe Firefly Video Model is designed to be commercially safe and is trained only on content we have permission to use - never on Adobe users' content.

We're excited to share some of the incredible progress with you today - all of which is designed to be commercially safe and available in beta later this year. To be the first to hear the latest updates and get access, sign up for the waitlist here.

The ever-increasing demand for fresh, short-form video content means editors, filmmakers and content creators are being asked to do more, and in less time. Today editors not only cut picture, they're also tasked with color correction, titling, visual effects, animation, audio mixing and more. At Adobe, we're leveraging the power of AI to help editors expand their creative toolset so they can work across these disciplines and deliver high-quality results on the timelines their clients require.

Common editorial tasks - like navigating gaps in footage, removing unwanted objects from a scene, smoothing jump cut transitions and searching for the perfect B-roll - take time. Handled well, they can make the difference between a compelling, emotional narrative and one that distracts from the story you're trying to tell.

Sometimes, sharing creative intent with your team and the stakeholders who green-light and fund the work can also be a challenge, requiring many rounds of communication. At Adobe, we provide tools such as Frame.io to streamline teamwork by enabling a uniquely integrated review and approval process. And now, we are delivering AI tools that help take the tedium out of post-production, giving editors more time to explore new creative ideas - the part of the job they love - while setting them up for successful collaboration with the larger team. Not only does Adobe facilitate streamlined work processes, but with generative AI we've made it even easier and faster for editors and motion designers to create their best work in record time. Watch below to see how we're taking video editing to new heights using Firefly.

With Firefly Text-to-Video, you can use text prompts, a wide variety of camera controls, and reference images to generate B-Roll that seamlessly fills gaps in your timeline.

All generations shown below were created with Adobe's Firefly Video Model and were generated in under 2 minutes.

The Firefly Video Model excels at generating videos of the natural world. When production misses a key establishing shot needed to set the scene, you can generate an insert - a landscape, plants or animals - complete with camera motion.

Need more complementary shots? Fill the gap in your timeline with a generated clip based on a reference frame.

Original footage:

Generated clip:

Combined sequence:

With the Firefly Video Model, you can leverage rich camera controls, like angle, motion and zoom, to create the perfect perspective on your generated video.

The more detailed your prompts, the more detail the model has to work with to generate inspirational imagery and B-roll.
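This post doesn't describe the model's actual interface, so the sketch below is purely illustrative: the class names, parameter names and camera-control values are all assumptions, meant only to make concrete how a detailed prompt, camera controls and an optional reference frame might be combined into a single generation request.

```python
from dataclasses import dataclass, field

# Hypothetical structures for illustration only - these are not the Firefly
# Video Model's real API. They simply make the ingredients of a request
# (prompt detail, camera controls, optional reference frame) concrete.

@dataclass
class CameraControls:
    angle: str = "eye level"      # e.g. "low angle", "overhead"
    motion: str = "static"        # e.g. "slow dolly in", "handheld pan left"
    zoom: str = "none"            # e.g. "slow zoom in"

@dataclass
class VideoGenerationRequest:
    prompt: str
    camera: CameraControls = field(default_factory=CameraControls)
    reference_frame: str | None = None   # path to a still that anchors the look
    duration_seconds: float = 5.0

# A vague prompt vs. a detailed one - the detailed version gives the model
# far more to work with, per the guidance above.
vague = VideoGenerationRequest(prompt="a forest")
detailed = VideoGenerationRequest(
    prompt=("Establishing shot of a misty pine forest at dawn, low fog "
            "drifting between trunks, warm backlight, cinematic B-roll"),
    camera=CameraControls(angle="low angle", motion="slow dolly in"),
    reference_frame="forest_reference.jpg",   # hypothetical file name
)
print(detailed)
```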

The Firefly Video Model supports a broad variety of use cases, including creating atmospheric elements like fire, smoke, dust particles and water against a black or green background, which can then be layered over existing content using blend modes or keying inside Adobe tools like Premiere Pro and After Effects.

Original media:

Flame overlay:

Combined sequence:
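For readers curious what the blend-mode step above amounts to outside of Premiere Pro or After Effects, here is a minimal, self-contained sketch of a screen blend in Python with NumPy - the math behind compositing an element generated on a black background over existing footage. The pixel values are invented for illustration.

```python
import numpy as np

def screen_blend(base: np.ndarray, overlay: np.ndarray) -> np.ndarray:
    """Screen-blend an element shot on black (fire, smoke, dust) over a base frame.

    Both frames are floats in [0, 1] with shape (H, W, 3). Pure-black overlay
    pixels leave the base untouched; bright overlay pixels lighten it, which is
    why elements generated against a black background composite cleanly this way.
    """
    return 1.0 - (1.0 - base) * (1.0 - overlay)

# Tiny synthetic example: a mid-grey background frame and a flame-like overlay
# that is black everywhere except a bright warm patch.
base = np.full((4, 4, 3), 0.4)
overlay = np.zeros((4, 4, 3))
overlay[1:3, 1:3] = [1.0, 0.6, 0.1]          # warm "flame" pixels on black

combined = screen_blend(base, overlay)
print(combined[0, 0], combined[1, 1])        # untouched pixel vs. blended pixel
```

For elements generated against green rather than black, you would key out the background instead; the screen blend shown here is simply the most direct route for black-background elements like fire and smoke.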

Ideate 2D and 3D animation, including Claymation, that can be shared with creative collaborators to communicate intent.

Brainstorm ideas for custom text effects to share with clients for feedback.

With Image-to-Video, you can bring an existing still shot or illustration to life by transforming it into a stunning live-action clip.

Reference image:

Galaxy far away in outer space.

And coming later this year to Premiere Pro (beta), Generative Extend allows you to extend clips to cover gaps in footage, smooth out transitions, or hold on shots longer for perfectly timed edits.

Watch how the editor uses Generative Extend powered by the Adobe Firefly Video Model in Premiere Pro to hold on a shot longer to match the crescendo of the audio.

And here you can see the final output with the generated frames creating the perfect edit.

We're excited by all the recent advancements on the Adobe Firefly Video Model and look forward to continuing to partner with the community to build generative AI into the Adobe tools and workflows you rely on.