Adobe this week began rolling out a conversational AI assistant for Photoshop on the web and in mobile apps, and expanded its Firefly image-editing toolkit with several new generative features. The updates, released in beta, aim to speed routine edits and give creators more control through natural-language prompts.
First shown at Adobe MAX last fall, the new Photoshop AI assistant lets users describe edits instead of navigating menus. On mobile and in browsers, users can ask the tool to erase subjects, alter color and light, or apply stylistic changes like softening highlights or shifting backgrounds — all by typing or speaking simple instructions.
Adobe says the assistant is available in beta and has different generation limits depending on account type: paid Photoshop subscribers can produce unlimited edits through April 9, while free users receive an initial allotment of 20 generations.
AI markup, now in public beta, introduces a visual-first workflow: draw markers over an image to indicate what you want changed — sketch a flower to add it, circle an object to remove it, or tag a region for background replacement. The marker becomes the control point for the assistant's response, blending manual selection with generative editing. Alongside the markup tool, the update brings several generative features:
- Generative Fill — insert or replace elements and have the background adapt to the new content.
- Generative Remove — remove unwanted items or people using AI-aware content filling.
- Generative Expand — intelligently increase canvas size while extending scene elements to fill the frame.
- Generative Upscale — improve image resolution using machine learning upscaling.
- One-click background removal — a simplified tool to isolate subjects from their backgrounds.
These capabilities are being added not only to Photoshop but also to Firefly, Adobe’s web-based media-creation service. Adobe previously removed usage caps for Firefly subscribers to encourage experimentation, and the company says it has integrated more than two dozen third‑party generation models into the service, including Google’s Nano Banana 2, OpenAI’s Image Generation, Runway’s Gen-4.5 and Black Forest Labs’ Flux.2 Pro.
For creators, the changes mean routine tasks — from cleaning up travel shots to producing social visuals — can be handled faster and with fewer manual steps. For teams that operate at scale, the combination of text-driven prompts and marker-based edits could streamline revisions and speed turnaround.
At the same time, wider availability of generative tools raises questions about workflow standards, asset provenance and how edited images are labeled and managed in professional settings. Adobe’s move to broaden access suggests the company is betting creators and casual users alike will adopt inline generative editing as a default part of image production.
Rollout is staggered: the Photoshop assistant arrives in beta for web and mobile users immediately, while the Firefly enhancements are rolling out to the web service. Paid Photoshop subscribers should note the temporary unlimited-generation window ends April 9; free accounts begin with a limited number of generations.
Adobe framed the updates as incremental steps toward a more conversational editing experience — shifting some decision-making from tool menus to natural language and simple on-image controls. For anyone who edits images regularly, that promises real time savings; for the broader industry, it signals another phase in the integration of generative AI into mainstream creative software.