
Microsoft unveils a new AI-powered tool for spotting deepfaked images and videos

On Tuesday, Microsoft introduced Video Authenticator, a new AI-powered tool that analyzes still images and videos to determine the likelihood that they feature digital manipulation. The tool is designed to address the growing problem of ‘deepfakes,’ a type of highly realistic manipulated content generated or modified by artificial intelligence systems.

‘Deepfake’ refers to images, videos and audio modified using AI tools. Though the technology can be used creatively, it is most closely associated with manipulating media to depict something that never actually happened. This could include, for example, a video of a politician saying something they never said or doing something they never did.

Because these deepfakes are created using machine learning algorithms, the resulting content is typically very high quality and difficult (or impossible) to distinguish from authentic media just by looking at it or listening to it. One countermeasure is another AI trained to spot the telltale changes.

Microsoft has introduced Video Authenticator under its Defending Democracy Program, pointing out that dozens of ‘foreign influence campaigns’ targeting countries around the world have been identified in the past several years. Some of these campaigns are intended to push the public toward certain beliefs or ideologies; others attempt to stir up debate and further polarize groups against each other.

Of 96 different campaigns identified (PDF) in part with support from Microsoft, 93% involved original content, which can be particularly difficult to detect. Microsoft explains that while ‘no single technology will solve the challenge of helping people decipher what is true and accurate,’ its Video Authenticator is an important tool that will help counteract disinformation by detecting subtle evidence that AI was involved in a piece of media’s creation.

Though Video Authenticator isn’t a long-term solution to what is an inevitably evolving technology, Microsoft explains that ‘in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.’

Video Authenticator works by analyzing still images and every frame of a video, assigning each a ‘percentage chance,’ also called a confidence score, that indicates the likelihood it has been manipulated. When analyzing videos, Video Authenticator presents users with a real-time percentage for each frame.
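
Microsoft hasn’t published Video Authenticator’s internals, but the per-frame scoring it describes can be illustrated with a small, hypothetical sketch. In the Python loop below, OpenCV reads a video frame by frame and a placeholder score_frame function stands in for a trained deepfake classifier; the threshold, file name and stub logic are assumptions, not Microsoft’s implementation.

# Hypothetical per-frame manipulation scoring; not Microsoft's code.
import cv2  # OpenCV, used here only to decode video frames


def score_frame(frame) -> float:
    """Return a manipulation confidence in [0, 1] for one frame (stub)."""
    # A real detector would run a trained neural network here; this stub
    # returns 0.0 so the loop remains runnable end to end.
    return 0.0


def analyze_video(path: str, threshold: float = 0.5) -> None:
    capture = cv2.VideoCapture(path)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of video reached
        confidence = score_frame(frame)
        flag = "FLAGGED" if confidence >= threshold else "ok"
        print(f"frame {frame_index}: {confidence:.0%} chance of manipulation ({flag})")
        frame_index += 1
    capture.release()


analyze_video("sample.mp4")  # "sample.mp4" is a stand-in file name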

In a sample provided by Microsoft, the tool isn’t able to detect evidence of manipulation in every frame; some frames pass without triggering the system, while others show enough telltale artifacts, such as blending boundaries, subtle fading or greyscale elements, to trigger the detection system.

Ultimately, Video Authenticator is just the start. Microsoft explains:

We expect that methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.
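
The article doesn’t detail what those stronger authenticity methods would look like, but the idea Microsoft gestures at, assuring readers that media came from a trusted source and wasn’t altered, is commonly built on hashing and digital signatures. The sketch below is a generic illustration of that approach using an Ed25519 key pair, not Microsoft’s certification technology; the file name and key handling are assumptions.

# Generic illustration of certifying that a media file hasn't been altered:
# hash the file and sign the hash with a publisher's private key, so anyone
# holding the matching public key can later verify the content.
# This is a textbook approach, not Microsoft's actual technology.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature


def file_digest(path: str) -> bytes:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()


# The publisher signs the digest at publication time.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(file_digest("report.mp4"))  # hypothetical file

# A reader's software verifies the file later.
try:
    public_key.verify(signature, file_digest("report.mp4"))
    print("Media matches the publisher's signature.")
except InvalidSignature:
    print("Media has been altered or the signature is invalid.")

In practice, a publisher would distribute the public key (or a certificate chaining to one) alongside the content so that readers’ software could perform this check automatically.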

Microsoft isn’t making Video Authenticator available to the general public, a decision intended to protect the tool against manipulation that could hamper the effort.

Video Authenticator is the latest example of a deep learning algorithm designed to counter the negative use of other AI algorithms. Last year, for example, Adobe Research and UC Berkeley introduced a method for detecting subtle face manipulations made using the Face Aware Liquify tool in Photoshop.

Conversely, we’ve also seen AI-based technologies that empower users to better protect themselves in this new digital landscape. Most recently, researchers with the University of Chicago SAND Lab released a free tool that uses AI to subtly ‘cloak’ images of one’s own face in order to poison facial recognition algorithms trained to recognize them.
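
The cloaking idea works by adding a perturbation to a photo that is small enough to be invisible to people but large enough to push the image’s machine-learned embedding away from where the real face sits, degrading any recognition model trained on the cloaked photos. The sketch below illustrates that general principle with a stand-in feature extractor (a stock ResNet-18) and made-up parameters; it is not the SAND Lab tool’s actual code.

# Hypothetical sketch of image "cloaking": add a small, bounded perturbation
# that pushes a face photo's feature embedding away from the original, so
# recognition models trained on cloaked photos learn the wrong features.
# The feature extractor (ResNet-18) and all parameters are stand-ins.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()            # use penultimate features as the embedding
model.eval()
for p in model.parameters():
    p.requires_grad_(False)

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])
img = preprocess(Image.open("face.jpg").convert("RGB")).unsqueeze(0)  # hypothetical file

original_embedding = model(img)

delta = torch.zeros_like(img, requires_grad=True)   # the "cloak" being optimized
optimizer = torch.optim.Adam([delta], lr=0.01)
epsilon = 0.03                                       # assumed per-pixel budget

for _ in range(100):
    cloaked = (img + delta).clamp(0, 1)
    embedding = model(cloaked)
    loss = -F.mse_loss(embedding, original_embedding)  # maximize embedding distance
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    delta.data.clamp_(-epsilon, epsilon)               # keep the change imperceptible

cloaked_image = (img + delta).detach().clamp(0, 1)     # share this version instead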


Author:
Brittany Hillen
Source: DPReview
