Adobe unveiled a lineup of new generative AI tools and features across its Creative Cloud suite on the first day of its MAX 2025 event. The announcements highlight Adobe’s integration of Firefly, its family of creative AI models, into apps like Photoshop, Premiere Pro, Lightroom, and Express, alongside new conversational assistants and cross-application AI agents.
This year’s event introduced Firefly Image Model 5, AI-driven audio and video tools, expanded support for partner models, and Project Moonlight, a unified agent designed to act as a personal creative director. The company also announced updates to the Firefly website, enterprise-grade creation tools, and new developer integration options.
Adobe MAX 2025: Highlights
Adobe Firefly platform
Adobe expanded Firefly into a full creative AI studio, combining its own and third-party AI models for image, video, and audio generation in one place. Firefly now functions as a single platform where creators can ideate, generate, and edit across media formats using both Adobe’s proprietary Firefly models and those from partners, including Google, OpenAI, ElevenLabs, Topaz Labs, Runway, and Luma AI.
Alongside this, Adobe introduced a new “Prompt to Edit” feature, which it said lets users describe visual edits in natural language, such as “brighten the background” or “remove the person”, and have Firefly apply the change automatically.
Additionally, Adobe said it has added Firefly Creative Production, a batch editing solution (in private beta) that allows teams to process thousands of images at once, applying consistent background replacements, colour grading, and cropping through a no-code interface.
Firefly Image Model 5
Adobe introduced Firefly Image Model 5, which it said delivers images at native 4-megapixel resolution, eliminating the need for upscaling. As per the company, the model brings stronger photorealism, improved human anatomy rendering, and better handling of lighting and texture.
Adobe said that Image Model 5 powers layered and prompt-based editing, treating image elements as separate layers for resizing, rotation, or content-aware adjustments. It also enables Layered Image Editing (in development), which automatically corrects shadows and lighting when objects are moved.
Firefly Custom Models
Now in private beta, Firefly Custom Models let creators train personalised AI models using their own artwork, illustrations, or photography, said Adobe. The feature enables consistent, brand-aligned asset generation while keeping trained models private and rights-protected by default. These custom models can be used across the Firefly app and Firefly Boards.
Firefly Boards
Firefly Boards, Adobe’s AI-powered ideation and collaboration surface, received several new features to help teams move faster from concept to composition. Here’s an overview:
- Rotate Object allows creators to turn 2D visuals into 3D-like perspectives.
- PDF export and bulk image download streamline project sharing.
- New Presets make it possible to generate images in multiple visual styles with a single click.
- Generative Text Edit lets users replace text inside photos or graphics instantly, without separate editing software.
Firefly website updates
Adobe said that the Firefly website has been redesigned to make creation faster and more accessible. Users can now switch between image and video generation directly in the main prompt box, select which AI model to use, and change aspect ratios without leaving the page. The homepage also now displays recent generation history, shortcuts to Creative Cloud apps, and access to user files.
A new word-cloud prompt helper suggests keywords and related terms, helping users build more descriptive prompts and refine their results.
Firefly Video Editor
Currently in private beta, the Firefly Video Editor introduces a multitrack, timeline-based interface for generating, trimming, and sequencing clips. It integrates AI tools for voiceover and soundtrack generation, style presets such as claymation or anime, and text-based editing through automatic transcripts. Creators can mix uploaded footage with generated content for hybrid projects, and generate new clips directly from the timeline.
Generate Soundtrack and Generate Speech
Two new Firefly Audio tools are launching in public beta:
- Generate Soundtrack creates royalty-free, fully licensed background music synced to video footage. Users can choose genres or describe moods such as “nostalgic,” “energetic,” or “cinematic.” Each prompt generates four variations up to five minutes long.
- Generate Speech converts text into lifelike voiceovers in more than 20 languages, using the Firefly Speech Model and ElevenLabs Multilingual v2. It supports pitch, pacing, and emotional adjustments, and allows manual pronunciation edits for regional words or names.
Photoshop, Lightroom, and Premiere Pro
Adobe announced several AI-driven upgrades across its flagship Creative Cloud applications:
- Photoshop now supports third-party generative models (Google Gemini 2.5 Flash, Black Forest Labs’ Flux.1 Kontext, and Firefly Image Model 5) within Generative Fill, offering multiple creative variations. A new AI Assistant (private beta) allows users to perform edits via text instructions, while Layered Image Editing ensures realistic shadow and light handling.
- Lightroom introduces Assisted Culling (beta) to automatically filter and recommend top-quality shots based on sharpness, angle, and focus.
- Premiere Pro adds AI Object Mask (public beta), which automatically detects people and objects in video frames for faster colour correction, background blurring, and visual effects.
Adobe Express and AI assistant
Adobe introduced the AI assistant (beta) in Adobe Express, enabling conversational creation and editing. Users can describe changes in natural language — such as “add a summer vibe” or “make this text stand out” — and the assistant applies those edits automatically, understanding design context and visual harmony.
The tool can modify specific layers — fonts, images, or backgrounds — while keeping other elements intact. Users can toggle between AI-guided and manual editing at any time.
Express also gains the Developer MCP Server for Add-ons, allowing developers to build integrations and new features for Express via conversational interfaces. Enterprise-grade updates are coming soon, including template locking, batch creation, and brand consistency controls for large organizations.
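MCP (the Model Context Protocol) is an open protocol that lets conversational AI clients discover and call tools exposed by a server. Adobe has not detailed the Express server’s tool set here, so the snippet below is only a minimal, generic sketch built with the open-source `mcp` Python SDK; the server name and the `create_banner` tool are hypothetical placeholders for illustration, not Adobe’s actual API.

```python
# Minimal, generic MCP server sketch using the official `mcp` Python SDK.
# The tool below is hypothetical and only illustrates the shape of a tool
# that a conversational client could discover and invoke.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("express-addon-demo")  # hypothetical server name

@mcp.tool()
def create_banner(headline: str, theme: str) -> str:
    """Hypothetical add-on action: describe the banner that would be created."""
    return f'Would create a banner titled "{headline}" with a "{theme}" theme.'

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP-aware client can call it
```

An MCP-aware assistant connected to such a server could then trigger the tool from a natural-language request like “make me a summer sale banner,” which is the kind of conversational integration the Express server is aimed at.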
Project Moonlight
Adobe previewed Project Moonlight, an agentic AI assistant designed to operate across all Adobe creative apps and social media platforms. Acting like a personal creative director, Moonlight connects to Creative Cloud libraries and social accounts to understand a creator’s visual style, tone, and ongoing projects.
The AI can:
- Generate personalised visuals, videos, and posts that match the creator’s aesthetic.
- Offer data-driven insights, analysing social media performance and suggesting growth strategies.
- Facilitate conversational creation, where users describe ideas and the agent produces finished content aligned to those directions.
Project Moonlight is entering private beta, with users able to join a waitlist for early access.
Partnerships and model ecosystem
Adobe has expanded its AI partner ecosystem, integrating external models from ElevenLabs, Google, OpenAI, Topaz Labs, Runway, Black Forest Labs, Luma AI, and others into Firefly. These models can now be accessed directly inside Firefly’s interface, giving creators flexibility to experiment with different outputs from multiple AI engines.