It takes something special to make jaded photographers exclaim in genuine surprise when editing photos. The exclamations were rampant after Adobe recently released the latest public beta of Adobe Photoshop with a new Generative Fill feature that can create photorealistic objects, backgrounds, and scene extensions in existing photos using generative AI technology.
We’ve seen Adobe’s implementation of generative AI in Adobe Firefly, the web-based tool for creating entire synthetic images from text prompts. Generative Fill uses Firefly technology to edit existing images in a more targeted way, bringing generative AI to Photoshop as a standard feature soon. (A few third-party Photoshop plug-ins that tie into other generative AI systems have been available for a while, such as Alpaca and Stability AI.)
How the image-making machine works
A generative AI system like Firefly creates entirely original images based on what’s described in a text prompt. It doesn’t lift sections of existing images and throw them together to create a new composition. Instead, using what it has learned from ingesting millions of photos, the system invents scenes and objects to match what it understands the text to mean. Given a prompt such as ‘Antique car on a rainy street at night,’ the system assembles an image from a random mass of pixels to match what it understands a ‘car,’ ‘rain,’ ‘street,’ and ‘night’ to be. These systems usually provide multiple variations on your theme.
Generating images in Adobe Firefly from just a text prompt.
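For the technically curious, here’s a minimal sketch of that kind of text-to-image generation using the open-source Stable Diffusion model through the Hugging Face diffusers library. This is an analogue of the technique, not Adobe’s Firefly (Adobe doesn’t expose Firefly this way), and the model checkpoint and file name are just illustrative choices.

```python
# A minimal text-to-image sketch using the open-source diffusers library.
# This illustrates the general technique, not Adobe Firefly itself.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The text prompt steers a denoising process that turns random noise
# into an image matching the described scene.
image = pipe("Antique car on a rainy street at night").images[0]
image.save("antique_car.png")
```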
What if you already have a photo of a rainy street at night, and you want to add an antique car to the composition? Using Photoshop’s Generative Fill feature, you can select an area where you want the car to appear and type ‘Antique car’ to generate one (this is also known as ‘inpainting’). Or you can select objects you want to remove from an image and use Generative Fill without a specific text prompt to let the tool determine how to fill the missing area.
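Under the hood, this kind of targeted edit is what researchers call inpainting: the photo, a mask marking the selected region, and the prompt all go to the model together. Here’s a rough sketch of the technique using the same open-source diffusers library, again as an analogue rather than Photoshop’s own pipeline; the file names are hypothetical.

```python
# A rough inpainting sketch with the open-source diffusers library --
# the same general technique as Generative Fill, not Adobe's implementation.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical files: the original photo and a mask whose white pixels
# mark the area to regenerate (the "selection").
photo = Image.open("rainy_street.jpg").convert("RGB").resize((512, 512))
mask = Image.open("car_selection_mask.png").convert("RGB").resize((512, 512))

# The prompt describes what should appear inside the masked area; the model
# fills it in so it blends with the surrounding, untouched pixels.
result = pipe(prompt="Antique car", image=photo, mask_image=mask).images[0]
result.save("rainy_street_with_car.png")
```

Removing an object is essentially the same call with an empty or generic prompt, letting the model reconstruct the background instead of adding something new.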
Adobe is making this process more user friendly than other generative AI systems. It has the home-field advantage of building the tool directly into Photoshop, where Generative Fill sports a clean and direct interface. In comparison, the popular service Midjourney requires that you join a Discord server, subscribe to the service, enter a chat room set up to receive text prompts, and then type what you want to generate using commands such as ‘/imagine antique car on a rainy street at night.’ Your results appear in a scrolling discussion along with images generated by others in the same chat room.
Photoshop’s approach debuts a new Contextual Task Bar with commands such as Select Subject or Remove Background. When you make a selection using any of the selection tools, such as the Lasso tool, one option in the bar is a Generative Fill button.
Making a selection in Photoshop with the Contextual Task Bar visible.
Clicking that button reveals a text field where you can describe what should be created within the selection. Or, you can leave the field blank and click the Generate button to have Photoshop determine what will appear based on the context of the surrounding scene.
Clicking ‘Generative Fill’ displays the text prompt field.
Once you click ‘Generate,’ Photoshop produces three Firefly-generated variations and shows you the first. You can cycle through them using buttons in the Contextual Task Bar or by clicking the thumbnails in the Properties panel. If none of them look good, you can click ‘Generate’ again to get three more variations.
After generating the result, the tourists are removed. This is one of the three variations, visible as thumbnails in the Properties panel.
(By the way, if you get frustrated with the Contextual Task Bar appearing directly below every selection, you can drag it to where you want, then click the three-dot icon on the bar, and choose Pin Bar Position from the menu.)
All of the variations are contained in a new type of layer, the Generative Layer, which also includes a mask for the area you selected. If you apply Generative Fill to another area of the image, a new Generative Layer is created. All the variations are saved in those layers, so you can go back and try variations nondestructively, hide or show the layers, set the blend mode and opacity, and use all the other flexible attributes of layers.
Also note that Generative Fill creates results at the same resolution as the original photo. This is in contrast to most systems, including Firefly on the web, where the generated images are low resolution, typically 1024 by 1024 pixels.
Now let’s look at what Generative Fill can do.
Remove objects
Usually when you use tools to remove unwanted items from a photo, the software attempts to backfill the missing area using pixels from elsewhere in the image (see Remove All the Things: Using modern software to erase pesky objects). That becomes more difficult when the area to be removed is large, leading to repeating artifacts that make it obvious something has been removed.
Generative Fill instead looks at the context of the image and attempts to create what would make sense in its place. In the examples above where we removed the tourists, Photoshop recreated the lines and colors of the buildings and matched the texture of the ground.
But you can’t assume that the feature will get it right every time. Take the image of two people below. We can attempt to remove one person (the man on the left) by making a loose selection around him with the Lasso tool to define the area we want to replace, and then clicking Generate with nothing in the text box. Strangely, in multiple attempts the tool assumed that we wanted to replace the person with another random person, and the results were nightmare-inducing renditions of synthetic people.
When leaving the text field blank, Photoshop really didn’t want to leave the person on the right all alone, and generated some scary-looking replacement ‘people.’
According to Adobe, there is no need to type a command such as ‘Remove person’ as a prompt when using Generative Fill, but in the end, typing that prompt gave us the result we wanted. Note, though, that while Photoshop returned one variation without a person (see below), it also created two variations that still included people.
We can perhaps chalk this up to the feature being pre-release, although more likely it reveals that despite the amount of machine learning, the software is still just making guesses.
According to Adobe’s online advice, you shouldn’t need to write commands like ‘Remove person,’ but that’s what finally did the trick in this one variation.
Replace items and areas
Removing objects is one thing, but what about replacing them with something entirely different? With Generative Fill, you can create things that were never in the original image. For instance, in the following photo we can turn one of the desserts into a blueberry tart by making a selection around the raspberries (expanding the selection slightly helps to pick up the paper texture behind them) and typing ‘Blueberries’ in the text prompt field. It took a couple of iterations to find one that matched, but these blueberries look pretty convincing.
The original image with the raspberries selected.
Replacing the raspberries in the foreground tart with blueberries.
Or what about the drinks in the background? Instead of cold brew coffee, we can select the glass on the left and enter ‘Pint of beer’ as the prompt. Notice that not only is the glass slightly out of focus to match the depth of field of what it replaced, but it also includes a hint of a reflection of the raspberry tart in front of it and the coffee to the side.
The cold coffee in the back is replaced by a pint of beer.
Adding arbitrary items to an empty space can be more hit or miss. You’ll get better results if the selection you make is roughly the correct size and shape of the item in context. In this case, we drew a rectangular selection in the foreground and typed the prompt ‘Dog lying down,’ whereupon Photoshop created several pup variations, of which we liked the one below best. The angle of the light and shadow matches fairly well.
The original photo with a rectangular selection.
Adding a dog makes every scene better, especially when the lighting remains consistent.
In addition to replacing or adding foreground objects, we can use Select Subject, invert the selection so the background is targeted, and enter a prompt that redefines the whole context of the scene.
Generative Fill can also be used in more surgical ways, such as changing someone’s clothing. This, too, can have unpredictable results depending on what you’re looking to create. However, in some cases the rendered image looks fine.
The original photo with a selection around the clothing.
The text prompt ‘Gray turtle-neck sweater’ applied.
Keep in mind that it can take multiple prompt requests and revisions to get what you want, and people with true fashion sense are likely to quibble with Photoshop’s choices. You can see some of the different resulting outfits below.
Not the subject of a fashion spread, we assure you.
Extend the canvas
The other big feature of Generative Fill is the ability to extend the canvas and create content outside the photo’s original frame, also known as ‘outpainting.’ Using the Crop tool (and making sure the Background layer is unlocked), drag to set new dimensions for the image. Then select the empty areas, overlapping the edges of the existing image slightly so the tool has a reference.
Extend the crop area, then select the empty space, making sure the selection extends into the edges of the original image.
As before, you can click Generative Fill and then click Generate without a text prompt to let Photoshop figure it out.
The extended result.
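For the programmatically curious, outpainting can be approximated with the same open-source inpainting pipeline from the earlier sketch: pad the photo with blank canvas, then mask the padding (plus a sliver of the original image for blending) as the area to generate. Everything here, from model choice to sizes and file names, is illustrative and is not how Photoshop does it internally.

```python
# A rough outpainting sketch with the open-source diffusers library:
# extend the canvas, then treat the new blank area as an inpainting mask.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical source photo, resized so the padded canvas lands at 512x512.
photo = Image.open("landscape.jpg").convert("RGB").resize((384, 512))
pad = 128  # pixels of new canvas added on the right

# New, wider canvas with the original photo pasted on the left.
canvas = Image.new("RGB", (photo.width + pad, photo.height), "black")
canvas.paste(photo, (0, 0))

# Mask: white where content should be generated. It overlaps the photo's
# right edge slightly so the generated pixels blend with the existing image.
mask = Image.new("L", canvas.size, 0)
mask.paste(255, (photo.width - 32, 0, canvas.width, photo.height))

# An empty prompt lets the model extend the scene from context alone;
# a descriptive prompt would steer what appears in the new area.
result = pipe(prompt="", image=canvas, mask_image=mask).images[0]
result.save("landscape_extended.png")
```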
Of course, if you’re extending an image of a real location, the software doesn’t know the particulars of the landscape; it’s still only making informed machine-learning guesses about what should appear in the new areas. So especially if you’re working on photos of a well-known area, don’t expect location accuracy.
How to get the Photoshop public beta
The Generative Fill feature appears in the latest Photoshop 24.6.0 public beta release, which is available to Creative Cloud subscribers. In the Creative Cloud app, select Beta apps in the sidebar and look for Photoshop (Beta) in the list at right. Click Install. The beta and the release version of Photoshop can both exist on your computer at the same time; the beta doesn’t overwrite what you’ve already installed.
Get the Photoshop public beta in the Creative Cloud app.
Caveats abound, but the future is here
Because Generative Fill is still in beta, some results are breathtaking, while others are completely absurd. Also, the output isn’t yet intended for commercial use, according to Adobe in a note on a blog post announcing the feature: “Disclaimer: Generative Fill in the Photoshop (beta) app is available to all Creative Cloud members with a subscription or trial that includes Photoshop. Generative Fill is currently not available for commercial use, not available to people under 18, not available in China, and works with English-only text prompts.”
It’s also worth mentioning that Generative Layers can dramatically increase the size of the Photoshop file (or of the layered TIFFs created when sending a photo from Lightroom). The photo of the van above with multiple background variations ended up occupying 2.67 GB as a layered TIFF. Deleting unused Generative Layers and variations makes a big difference, as does saving the result as a TIFF with the layers discarded.
In the meantime, Creative Cloud subscribers can explore the capabilities of the beta and give Adobe feedback. Each variation thumbnail includes a three-dot menu where you can mark the result as good or bad, delete the variation, or report it as potentially harmful or offensive content.
And if you don’t have access to the Photoshop beta, you can play with these features in Adobe Firefly on the web. The service is now open to anyone and includes the Generative Fill feature that you can apply to images you upload.
Author: Jeff Carlson
Source: DPReview