Adobe's new Firefly model makes it easier to use Photoshop's AI tools – The Verge

By Jess Weatherbed, a news writer focused on creative industries, computing, and internet culture. Jess started her career at TechRadar, covering news and hardware reviews.
Adobe is adding new generative AI tools to its Photoshop creative software that aim to give users more control over the designs they generate. Powered by Adobe’s new Firefly Image 3 foundation model, the tools are available today via the Photoshop beta desktop app and will be generally available “later this year,” according to Adobe’s press release.
The most notable tool is Reference Image, which uses user-uploaded images to inspire the output generated by Adobe’s AI, matching similar elements in style and color. For example, instead of repeatedly tweaking a prompt description like “a blue vintage truck with flower decals,” users can instead provide a reference image that Photoshop will use as a guide.
“Prompting is a pain in the butt,” Ely Greenfield, chief technology officer for Digital Media at Adobe, told The Verge. “Why spend an hour trying to craft a three-paragraph prompt if you have an image that you’ve created that’s exactly the thing you want to reference? The saying ‘a picture is worth a thousand words’ applies here.”
Users are expected to have the rights to use images they want to reference. Greenfield told The Verge that a message will flag this ownership requirement when the tool is first used, and that the company is working on a universal “do not train” tag for Adobe’s Content Authenticity Initiative that will also block images from being used as a reference. Images uploaded as reference materials won’t be used to train Firefly. Despite the ownership responsibility being placed on users, Adobe says this new referencing tool is still “safe for commercial use” — one of the most notable advantages that Adobe claims Firefly has over rival generative AI models.
Additional generative AI tools available in the Photoshop beta app include Generate Background, which replaces and creates new background images for things like product photography, and Enhance Detail, which increases clarity and makes images appear sharper.
Generate Similar is also available, which uses one of the three images generated by Photoshop’s Firefly tools as a reference to produce similar-looking content, while Generate Image allows users, for the first time, to generate an entire image from a text description starting from a blank page.
Adobe’s third-generation Firefly model, which has higher-quality image generation capabilities compared to its predecessor, is also available in a public, global beta for anyone to try outside of Photoshop via the Firefly web application. Adobe says its latest Firefly model delivers “photorealistic quality like never before with better lighting, positioning, and attention to detail.” Firefly Image 3 is more capable than the previous Firefly model at understanding long, descriptive text prompts, and can produce clearer text in the images it generates.
Outside of generative AI, Adobe is also adding some new, standard tools to Photoshop that can speed up creative processes. These include an Adjustment Brush that lets Photoshop users make non-destructive changes, such as color adjustments, to specific sections of an image. There’s also a new Adjustment Presets feature that can quickly change an image using filters, and an improved Font Browser that gives users real-time access to the over 25,000 fonts in Adobe’s cloud without leaving the Photoshop application.
