At Adobe MAX 2025, the company unveiled a major expansion of its AI-powered creative suite, Adobe Firefly, adding new tools for video, audio, and image creation. The update brings new models, AI assistants, and editing tools designed to help creators. Adobe said Firefly has now evolved into a full-fledged AI creative studio — a single workspace where anyone can brainstorm, generate, and refine multimedia projects.
The new version of Firefly adds features like “Generate Soundtrack” and “Generate Speech” for music and voiceovers, a timeline-based AI video editor, and the Firefly Image Model 5 for photorealistic image generation and natural-language editing. The company has also opened the door to more third-party AI models from the likes of ElevenLabs and Topaz Labs, letting users mix capabilities directly inside Firefly without juggling multiple tools.
Adobe Firefly: What's new
New AI tools for audio, video, and imaging
Generate Soundtrack (public beta) — an AI-powered music generator that creates fully licensed instrumental tracks for videos. Adobe said the tool can instantly generate multiple original variations and automatically sync them with uploaded clips, making it useful for YouTubers, short-film creators, and anyone producing content for social platforms.
Generate Speech (public beta) — an AI voiceover tool said to produce lifelike speech in multiple languages. The feature is powered by Adobe's Firefly Speech Model and ElevenLabs technology, and offers adjustable tone, pacing, and emotion. Creators can fine-tune how natural or expressive the voice sounds, removing the need for separate text-to-speech software.
The company is also introducing a web-based Firefly video editor (private beta) — a timeline-style AI editing tool that combines generation and manual editing in one place. Users can import their own footage, trim clips, arrange sequences, add AI-generated voiceovers or soundtracks, and even create new shots directly inside the editor. Adobe said the editor includes style presets like claymation, anime, and 2D art, as well as a built-in transcript editor that lets users cut or rearrange scenes by editing text instead of dragging markers on a timeline.
Firefly Image Model 5
Adobe’s new Firefly Image Model 5 (public beta) is its most advanced image generation system so far. Adobe said the model can create 4MP images with realistic lighting, textures, and lifelike human details — without needing separate upscaling.
The update also introduces Prompt to Edit, a conversational editing feature that lets creators describe visual changes in plain language. For instance, users can type “make the lighting softer” or “replace the background with a sunset,” and Firefly will carry out the edit automatically.
The model also lays the groundwork for upcoming capabilities such as Layered Image Editing, which will enable context-aware tweaks while keeping elements and compositions consistent.
Firefly Boards and Custom Models
Adobe has expanded Firefly Boards, the app’s collaborative workspace, to make idea generation and sharing more fluid. A new Rotate Object tool can turn flat 2D images into 3D-like perspectives, and improved export and download options make organising creative boards easier.
For the first time, Firefly Custom Models (private beta) are coming to individual creators. Adobe said these allow users to train small, personalised AI models using images they own — enabling them to generate content in their own distinct visual style, whether that’s an illustration theme, colour palette, or photo tone. The setup process is drag-and-drop simple, and custom models remain private unless the creator chooses to share them.
Third-party models and integrations
The Firefly ecosystem now supports a wider range of third-party AI models, giving users access to new creative capabilities without switching platforms. The latest integrations include ElevenLabs Multilingual v2 for advanced voiceovers and Topaz Bloom for AI-driven image enhancement and upscaling.
They join models from Google (Veo 3.1, Imagen 4, Gemini 2.5), OpenAI (GPT Image), Runway (Gen-4), Luma AI (Ray3), and others — all accessible within Firefly’s unified interface. Adobe said this approach is meant to give creators “a wider creative toolkit in one place,” whether they’re editing visuals, sound, or motion.
Project Moonlight
Adobe also previewed Project Moonlight, a conversational assistant designed to work across its creative apps. The company said Moonlight can analyse a creator's projects or social content to suggest ideas, provide feedback, or help turn a concept into production-ready material using simple natural-language prompts.
Adobe Firefly: Availability of new features
Adobe said that Firefly Image Model 5, Generate Soundtrack, and Generate Speech are available today in public beta, while the Firefly video editor and Custom Models are in private beta, rolling out to early testers next month.
Creators can sign up for early access to upcoming tools, including Project Moonlight, through Adobe’s beta waitlists.
Until December 1, users subscribed to Creative Cloud Pro and Firefly plans can generate unlimited images and videos using both Adobe and third-party AI models integrated into the platform.