Adobe Firefly AI Assistant now available in public beta: What can it do
Adobe rolls out Firefly AI Assistant in public beta, enabling users to edit images and videos using prompts instead of manually navigating tools across apps
Adobe has announced that its Firefly AI Assistant, a cross-app AI agent that coordinates actions and workflows across its Creative Cloud suite from a single conversational interface, is now available in public beta. For the uninitiated, Firefly AI Assistant is built around a conversational interface where users can describe what they want to create or edit, and the system executes those tasks across apps such as Photoshop, Premiere Pro, Lightroom, Illustrator and Express — removing the need to carry out those steps manually.
The company has not yet announced when the assistant will become generally available to all users.
Firefly AI Assistant shifts how editing workflows are handled across Adobe's apps. Rather than requiring step-by-step inputs, it lets users give a broader instruction in plain language, which the assistant then breaks down into individual actions on its own and executes across Photoshop, Premiere Pro, Lightroom, Illustrator and Express.
Adobe is positioning this as a way to handle multi-step creative work more efficiently. For instance, tasks like editing visuals, adjusting audio or preparing content for different formats can be carried out within a single flow, without switching between tools manually. The assistant is also designed to retain context, meaning it can understand ongoing work and continue it across sessions or even across different applications.
To make this more usable in day-to-day workflows, Adobe has introduced pre-built “creative skills” for common tasks such as retouching images or generating content for multiple platforms. These workflows can be customised, or users can create their own depending on their needs. The assistant also works with existing assets like images, videos and brand elements, so edits are more context-aware rather than generic.
Importantly, Adobe has kept user control central to the experience. Even though the assistant can execute tasks end-to-end, users can review, refine or override changes at any stage, ensuring that the final output remains aligned with their intent.