AI & Robotics News

Adobe’s Photoshop Neural Filters use AI to change faces, recolor photos

Since Adobe uses its annual MAX conference to reveal new user-facing improvements to its professional creative applications, it’s no surprise that this year’s online-only event is a virtual firehose of announcements — too many to count or even list, thanks to the large collection of Creative Cloud apps that are all receiving updates. But one new feature really stands out from the rest: an AI-powered addition to Photoshop called Neural Filters, which leverage cloud-based neural processing to enable over a dozen new instant photo editing tools, all designed to “improve over time” with machine learning.

Photoshop’s Neural Filters are perhaps the largest validation yet of Adobe’s AI strategy, which relies on the Sensei cloud-based machine learning platform to do heavy computational lifting for professional apps. Here, Sensei enables Photoshop to perform tasks such as Super Zoom high-resolution upscaling, portrait editing, and black-and-white photo colorization with little more than a single click. Instead of relying solely on a user’s local AI processing capabilities for the filters, as Skylum did last year with Luminar 4, Photoshop can tap into server-class computational power for even more intriguing capabilities. As was the case with Adobe Photoshop Camera for mobile phones, that means you won’t need a massive desktop computer to achieve pro-class results.

One of the most eye-catching Neural Filters is Smart Portrait, which enables a 2D photograph of a head to be repositioned and modified in post-processing, using AI to compute what the face would look like from alternate angles or with different facial expressions. “Head Direction” and “Light Direction” sliders separately recompute the position of the head, gaze, lighting, and shadowing of a person looking straight at the camera, while separate sliders adjust their “happiness,” “surprise,” and/or “anger.” There are also Snapchat-like sliders for facial aging and hair thickness, notably applied here to a professional high-resolution image rather than low-resolution content intended for social media.

Other Neural Filters include Colorize, which can instantly recolor a black-and-white scene using machine learning to deduce correct color data across even complex images; tools to clean up and apply makeup to faces; and filters that automatically convert photos to sketches, sketches to portraits, and faces to caricatures. To the extent that Photoshop hadn’t fully blurred the lines between photography and art, the Neural Filters go even further, using cloud ML to reduce the gap between the average and best results achieved by image editing software. A separate Adobe initiative — a new Discover panel with Quick Actions — will enable users to quickly see a range of available image editing effects and instantly apply them without needing to dive deeply into Photoshop’s menus.

The ever-easing process of photo editing will only heighten existing concerns over the authenticity of images, and Adobe openly acknowledges that its software is enabling both artists and bad actors to create “photos” that aren’t what they seem. To address that concern, Adobe will release a private beta of its Content Authenticity Initiative for Photoshop and Behance, which will allow creatives to opt into adding certification metadata to their images. A pop-up panel offers checkboxes to attach a cryptographically signed, permanently embedded thumbnail; the image producer’s name; a list of edits and activity; and links to original assets used in the final image.
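Adobe hasn’t published the technical details of its certification format, but the general idea behind signed provenance metadata can be illustrated with a rough, hypothetical sketch: bind the producer’s name and edit list to the image bytes via a hash, then sign the resulting record so any tampering is detectable. (All names below are illustrative; this uses a shared-key HMAC for simplicity, whereas real provenance systems rely on public-key signatures issued to the creator.)

```python
import hashlib
import hmac
import json

# Placeholder secret; a real system would use the creator's private key
# with an asymmetric signature scheme, not a shared HMAC key.
SIGNING_KEY = b"creator-private-key"

def make_provenance_record(image_bytes: bytes, producer: str, edits: list) -> dict:
    """Build a signed record tying producer and edit metadata to the pixels."""
    record = {
        "producer": producer,
        "edits": edits,
        # Hash of the image bytes binds the metadata to this exact image.
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_record(image_bytes: bytes, record: dict) -> bool:
    """Check both the signature and that the image itself is unmodified."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["image_sha256"] == hashlib.sha256(image_bytes).hexdigest())

image = b"\x89PNG...fake image bytes"
rec = make_provenance_record(image, "Jane Doe", ["crop", "colorize"])
print(verify_record(image, rec))         # True: record and pixels intact
print(verify_record(image + b"x", rec))  # False: pixels no longer match the hash
```

A public verification site like the one Adobe describes would perform the second half of this exchange: recompute the image hash, validate the signature against the creator’s registered key, and surface the recorded edit history to the viewer.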

Backed by content providers such as the BBC, CBC Radio-Canada, and the New York Times, with Microsoft, Qualcomm, Truepic, Twitter, and other companies on the tech side, Adobe’s initiative includes a site called verify.contentauthenticity.org that members of the public can use to check content authenticity for images. Since photographers will need to opt in to registering their images, and participation will likely require use of Adobe’s Creative Cloud, it’s unclear how widely the service will be used, but it’s something.

Above: Photoshop live-streaming from an iPad.

Image Credit: Adobe

To help creatives spread their visions over social media, Adobe also announced that it’s adding some live-streaming functionality to Creative Cloud, including a Photoshop feature on the iPad that will composite the app, front camera input, and microphone input into a single feed that can be used for instructional videos. The company is also introducing shareable creators’ history feeds containing step-by-step workflows for specific projects, enabling users to learn exactly how images were created.

Additional news on Photoshop and Lightroom versions optimized for Mac and Windows ARM processors is coming “soon after MAX,” the company says, without providing additional details. Apple is expected to hold a media event in November to introduce its first Mac computers with ARM-based “Apple Silicon” chips, which would be a natural opportunity to share this news.





Author: Jeremy Horwitz
Source: VentureBeat
