Adobe, the company behind big creative programs like Photoshop and Premiere, just wrapped up its 2025 Adobe Max keynote, and you know what that means. That’s right: more AI. Over the course of the three-hour presentation, the company went big on automating creativity, introducing new generative AI tools for Photoshop, Lightroom, Premiere Pro, and other Creative Cloud apps. Some of these are expansions of tools that already exist, like better generative fill, while others are all new—like Firefly’s new AI audio generation.
Adobe Express can design based on vibes
Credit: Adobe
Before getting into the meatier stuff, let’s start with Adobe’s entry-level apps. While Adobe is known for professional-level programs like Photoshop, the company also has its own free, basic web editor (with a companion mobile app) to help it compete with alternatives like Canva. Called Adobe Express, the tool has been getting a steady stream of upgrades since its debut, and with the rise of generative AI, it has been quick to jump on the trend in a bid to make itself easier to use.
Enter today’s “AI Assistant in Adobe Express.” When toggled on through a switch in the app’s top-left corner, the assistant will replace your tools with a chatbox where you can instruct it to either make a new design from scratch or edit an existing one. Should you need your tools again, you can bring them back by toggling the assistant off, although Adobe’s demos for the feature also show the assistant bringing up contextual sliders when needed, like one for resizing.
While this is not Adobe Express’ first venture into generative AI, the idea is to make getting started or quickly editing a piece less intimidating, by having inexperienced users spend most of their time in a chatbox rather than having to click through a toolbar. Adobe says, like its other AI tools, it pulls from a number of “commercially safe” sources, including the company’s font and stock image libraries and its Firefly AI models.
The tool will start rolling out in public beta today, so you should be able to try it out shortly.
Adobe Premiere is getting built into YouTube Shorts
Credit: Adobe
Shorts are the next big thing over on YouTube, and to encourage more people to make them, YouTube is teaming up with Adobe. As an update to both the Premiere iPhone app and YouTube itself, Adobe’s new Create for YouTube Shorts feature allows you to upload your footage and instantly make it publish-ready with Adobe’s font overlays and a number of “exclusive” effects, transitions, and stickers. Or you can directly plug your footage into templates that already have transitions and effects included.
The feature is currently listed as “coming soon,” so it’ll be a bit before you can try it. But once it’s live, Adobe and YouTube say you’ll be able to access it either through the Premiere iPhone app or directly through YouTube, via an “Edit in Adobe Premiere” icon in YouTube Shorts.
There is no word yet on an Android or desktop release.
Adobe will add sound to your videos for you
Credit: Adobe
Sound is easy to overlook when making a new video, and I’ve had to scramble to find a decent soundtrack to add to my videos at the last second more than once. Adobe’s new Firefly AI audio features are looking to save you from that fate, by making it easy to add music and even narration to an otherwise silent video.
Rolling out in public beta today, Firefly’s new “Generate Soundtrack” and “Generate Speech” buttons use AI and a Mad Libs-style prompting system to help you quickly score your content from a number of options.
For “Generate Soundtrack,” you’ll upload your video, press the appropriate button, and the app will suggest a prompt for you and give you a palette of adjectives, genre types, and content types to refine it with. Drag your chosen terms into the prompt box, hit generate, and you’ll get four options, each cutting out at a maximum of five minutes.
It’s a bit odd that you can’t just enter your own terms into the prompt box, although Adobe generative AI head Alexandru Costin told The Verge that’s because AI audio is “a new muscle we need to develop” and that the current approach is “easier and more accessible.”
Like other Firefly generations, audio will be generated using Adobe’s own licensed content, so users won’t have to worry about copyright strikes on videos made using the feature.
“Generate Speech,” meanwhile, gives users access to 50+ text-to-speech voices, either from Adobe Firefly or licensed via ElevenLabs. There’s no Mad Libs prompting here; instead, Adobe allows for fine-grained control over factors like speed, pitch, tone, and even pronunciation. Currently, over 20 languages are supported.
Taken together, the updates seem to me like an attempt to keep up with platforms like Instagram and TikTok, which have licensed music libraries and text-to-speech built in. Whether a purely AI-powered version can keep up remains to be seen, although putting it into the editor rather than the platform does give creators more choice about where to upload.
Updates inside Photoshop, Lightroom, and Premiere
Credit: Adobe
Finally, for the more hardcore Adobe users, updates are coming to the company’s core apps as well.
First, Photoshop is also getting its own AI assistant, which will let you make edits via prompts. However, unlike Adobe Express, it’s currently in a private beta, so it’ll be some time before most users see it. It’s also limited to the web version of the app for now.
However, not in beta is the ability to choose which AI models the app works with. Previously, Generative Fill, which uses AI to fill in blank spots in backgrounds (or just generate whole canvases from nothing), was limited to Adobe’s Firefly models. Now, users will also be able to use it with Google’s Gemini 2.5 Flash model, or Black Forest Labs’ FLUX.1 Kontext model. Given how popular 2.5 Flash has gotten on social media under the name “nano banana,” that’s a big get for Adobe.
Still, Firefly isn’t getting left behind. Adobe says it’s upgraded the model with the ability to generate in a native four-megapixel resolution, and to better render people. It’s also integrating it into a new “Layered Image Editing” tool that can make contextual changes for you across layers, like futzing with shadows after you move an image.
Outside of Photoshop, Lightroom has its own private beta feature called “Assisted Culling.” I’ll admit Lightroom is probably where I have the least experience when it comes to Adobe, but the company says it’ll be able to filter through uploaded photos for you and find the most edit-friendly ones.
Finally, Premiere Pro has its own beta feature, but one that’s graciously public. Called “AI Object Mask,” it’ll automatically detect and track people and objects in your video’s background, so you can more easily add effects like blurs or color grading. It could be useful if, say, you’re shooting in a crowded area where you need to blur a lot of faces.
A little something for everyone
Overall, it was a fairly balanced Max, with a number of features for both pros and beginners. That said, I can’t ignore the focus on AI and automatic generation. On one hand, I get that Photoshop’s a bit intimidating. On the other, the more Adobe handles your edits for you, the more it runs the risk of competing with existing easy-edit apps and platforms. I’m curious to see how the industry giant will compete as platforms like TikTok and Instagram continue to offer their own built-in editing tools.
Original Source: https://lifehacker.com/tech/new-ai-features-coming-to-adobe-products?utm_medium=RSS
