
Adobe Max Sneaks feature AI photography, animation, and audio tools

Whenever an Adobe event offers Sneaks — quick previews of what its researchers are working on — there’s a caveat that the “projects” might not wind up in shipping products. But they often do, and if that happens with the company’s just-revealed collection of intriguing features, Adobe users can look forward to a lot of AI assistance in the foreseeable future, including tools for photography, animation, and audio editing.

On the photographic front, Project Light Right harnesses Adobe’s Sensei AI system to bring time- and date-appropriate lighting edits to images. Rather than applying a light source and shadows based solely on a user-selected position on a 3D globe, Light Right uses AI and multiple images to deduce the sun’s position, then adds directionally appropriate light and shadows during edits. It can also use videos and Adobe Stock photos as inputs for its lighting calculations.
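
Adobe hasn’t shared implementation details for any of these Sneaks, but the idea of “directionally appropriate” light is easy to illustrate. The Python sketch below applies simple Lambertian shading from a chosen sun direction to an image with known per-pixel surface normals; it is a toy illustration under assumed albedo and normal inputs, not Adobe’s pipeline.

```python
import numpy as np

def relight(albedo, normals, sun_dir, ambient=0.2):
    """Shade an image for a given sun direction using Lambert's cosine law.

    albedo  -- (H, W, 3) base colors in [0, 1]
    normals -- (H, W, 3) unit surface normals per pixel
    sun_dir -- (3,) unit vector pointing toward the sun
    """
    # Brightness falls off with the angle between the surface normal and the light.
    shading = np.clip(normals @ sun_dir, 0.0, None)              # (H, W)
    lit = albedo * (ambient + (1.0 - ambient) * shading[..., None])
    return np.clip(lit, 0.0, 1.0)

# The same scene lit by a high noon sun vs. a low evening sun.
albedo = np.full((4, 4, 3), 0.8)
normals = np.zeros((4, 4, 3))
normals[..., 2] = 1.0                                            # surfaces facing the camera
noon = np.array([0.0, 0.0, 1.0])
evening = np.array([0.9, 0.0, 0.1]) / np.linalg.norm([0.9, 0.0, 0.1])
print(relight(albedo, normals, noon)[0, 0])
print(relight(albedo, normals, evening)[0, 0])
```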

A more subtle application of AI is Project About Face (shown above), which detects edited images, generating an automated “Probability of Manipulation” score and a heatmap showing where edits have been made, including ones too subtle for the human eye to notice. About Face will likely contribute to Adobe’s upcoming Content Authenticity program, which promises to give photo and video viewers a sense of whether they’re seeing edited or unedited imagery, and it might even be used to reverse the edits, revealing the original image.
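
Adobe hasn’t said how About Face works under the hood. A long-standing baseline for this kind of analysis is error level analysis (ELA), sketched below with Pillow and NumPy as a rough stand-in for a learned detector; the returned score is only a crude proxy for a probability of manipulation, not Adobe’s metric.

```python
import io
import numpy as np
from PIL import Image, ImageChops

def ela_heatmap(path, quality=90):
    """Error level analysis: re-save the image as JPEG and diff it against
    the original. Pasted or retouched regions often recompress differently
    from their surroundings, so they stand out in the difference map."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)

    diff = np.asarray(ImageChops.difference(original, resaved), dtype=np.float32)
    heatmap = diff.mean(axis=2)                      # per-pixel edit signal
    peak = heatmap.max()
    score = float(heatmap.mean() / peak) if peak > 0 else 0.0
    return heatmap, score                            # (H, W) map, rough 0-1 score
```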

Last but not least, Project All In promises to solve a classic photographer’s problem: the person behind the lens can’t be part of a group photo. All In uses Sensei to automatically blend two photos, so two people can take turns shooting the same background while the other stands in frame, and the result is a composite in which both appear together in the same environment. Alternatively, All In can remove a duplicate of a person who appears in different positions across the two shots.
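
Adobe hasn’t detailed the blending step, but the final composite boils down to alpha-blending one subject into the other frame using a segmentation mask, which Sensei would presumably supply. The sketch below assumes the two frames are already aligned and a mask is given; the hard parts, alignment and clean subject masking, are exactly what the AI handles.

```python
import numpy as np

def composite_subjects(frame_a, frame_b, mask_b):
    """Paste the masked subject from frame_b into frame_a.

    frame_a, frame_b -- (H, W, 3) float arrays of the same scene, aligned
    mask_b           -- (H, W) alpha values in [0, 1], 1 where the subject
                        in frame_b should appear in the result
    """
    alpha = mask_b[..., None]                        # broadcast over color channels
    return frame_a * (1.0 - alpha) + frame_b * alpha
```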

Adobe also showed off several AI-aided animation tools. Like Samsung’s recent 3D scanning and AR avatar demos, Adobe’s Project Go Figure turns video of a real person’s movements into skeletal animations that can be exported to drive a virtual character. Project Pronto can add 3D objects to smartphone videos so that the objects naturally follow the camera’s motion path, blending live and digital elements AR-style. And Project Sweet Talk (shown above) promises to automate lip-sync animation, converting recorded audio into a mesh that can be applied to flat images, even paintings, as well as animated characters.
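
Sweet Talk’s exact approach isn’t public, but the general idea of driving facial animation from audio can be sketched in a few lines. Below, a per-frame loudness envelope stands in for the mouth-openness parameter of a character mesh; a real system works from phonemes and visemes rather than raw loudness, so this is purely a toy illustration.

```python
import numpy as np

def mouth_openness(audio, sample_rate, fps=24):
    """Map an audio signal to one mouth-openness value per video frame.

    audio -- 1-D array of samples; returns values in [0, 1], one per frame.
    """
    samples_per_frame = int(sample_rate / fps)
    n_frames = len(audio) // samples_per_frame
    frames = audio[: n_frames * samples_per_frame].reshape(n_frames, samples_per_frame)

    loudness = np.sqrt((frames.astype(np.float64) ** 2).mean(axis=1))  # RMS per frame
    peak = loudness.max() if n_frames else 0.0
    return loudness / peak if peak > 0 else loudness

# One second of a 220 Hz tone at 44.1 kHz yields 24 openness values.
t = np.linspace(0, 1, 44100, endpoint=False)
print(mouth_openness(np.sin(2 * np.pi * 220 * t), 44100).shape)        # (24,)
```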

The researchers are also using AI to speed up audio editing. Project Sound Seek automates the removal of repeated sounds, such as “um” or “ah” tics, from recordings. Reducing noise is the focus of Project Awesome Audio, which claims to “awesomize” even a mediocre internal PC microphone recording with a single button click, adjusting levels and removing background interference.
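
Adobe hasn’t said what “awesomize” does internally, but one-click background-noise removal is commonly built on spectral gating, which is easy to sketch with SciPy: estimate a per-frequency noise floor from a noise-only clip, then mute short-time spectrum bins that don’t rise clearly above it. The function below is a generic baseline, not Adobe’s processing.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_gate(audio, sample_rate, noise_clip, threshold=1.5):
    """Attenuate background noise by gating the short-time spectrum.

    audio      -- 1-D recording to clean
    noise_clip -- short 1-D clip containing only background noise
    threshold  -- keep a bin only if it exceeds threshold * noise floor
    """
    _, _, noise_spec = stft(noise_clip, fs=sample_rate)
    noise_floor = np.abs(noise_spec).mean(axis=1, keepdims=True)   # per-frequency floor

    _, _, spec = stft(audio, fs=sample_rate)
    mask = np.abs(spec) > threshold * noise_floor                  # keep only strong bins
    _, cleaned = istft(spec * mask, fs=sample_rate)
    return cleaned
```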

Whether these features will wind up in Adobe apps in the near future remains to be seen, but the company has aggressively brought some of its Sneaks, including Project Aero, into shipping apps this year. The timeline from preview to release can be a year or more, though smaller individual features may arrive sooner.


Author: Jeremy Horwitz
Source: Venturebeat
