
Microsoft’s New AI ‘Authenticator’ Spots Manipulated Photos and Videos

Earlier today, Microsoft announced two new tools that will help identify manipulated photos and videos. The first is a metadata-based system, and the second is a “Video Authenticator” that will analyze photos and videos and provide a “confidence score” that tells you whether or not the media has been altered by AI.

The metadata system Microsoft describes sounds similar to Adobe’s upcoming “Authenticity” system: one program is used by the original creator to add certain tags to an image, and another checks those tags to tell you whether you’re looking at the original or a manipulated version. That seems pretty straightforward, and not terribly groundbreaking.
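To make that two-step flow concrete, here’s a minimal sketch in Python. Microsoft hasn’t published implementation details, so the key, the tag format, and the function names below are purely illustrative assumptions, not its actual scheme:

```python
import hashlib
import hmac

# Hypothetical sketch of the two-step workflow described above: a creator
# tags media with a signed hash, and a reader verifies the tag later.
# SIGNING_KEY and the tag format are illustrative assumptions only.

SIGNING_KEY = b"creator-private-key"  # placeholder secret

def create_authenticity_tag(media_bytes: bytes) -> str:
    """Creator side: derive a tamper-evident tag from the original file."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return f"{digest}:{signature}"

def verify_authenticity_tag(media_bytes: bytes, tag: str) -> bool:
    """Reader side: recompute the hash and check it against the stored tag."""
    digest, signature = tag.split(":")
    if hashlib.sha256(media_bytes).hexdigest() != digest:
        return False  # file content changed since it was tagged
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

original = b"...image bytes..."
tag = create_authenticity_tag(original)
print(verify_authenticity_tag(original, tag))            # True
print(verify_authenticity_tag(original + b"edit", tag))  # False
```

Any edit to the file changes its hash, so the stored tag no longer verifies, which is the basic property such a system needs.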

What sounds far more interesting is the so-called “Microsoft Video Authenticator,” which is actually meant for both stills and video. According to Microsoft:

Video Authenticator can analyze a still photo or video to provide a percentage chance, or confidence score, that the media is artificially manipulated. In the case of a video, it can provide this percentage in real-time on each frame as the video plays. It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.
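To illustrate what consuming a per-frame confidence score might look like, here’s a minimal Python sketch. The `score_frame` stub stands in for the actual detector, which Microsoft has not released; the 50% flagging threshold and the function names are assumptions for illustration:

```python
import random

# Illustrative sketch of scoring a video frame-by-frame, as Microsoft
# describes. score_frame() is a stand-in for the real detector; the
# random stub only shapes the output.

def score_frame(frame) -> float:
    """Placeholder for the real model: returns P(frame is manipulated)."""
    return random.random()

def scan_video(frames):
    """Score every frame and flag the ones that look manipulated."""
    for index, frame in enumerate(frames):
        confidence = score_frame(frame)
        flag = "MANIPULATED?" if confidence > 0.5 else "ok"
        print(f"frame {index:4d}: {confidence:.0%} {flag}")

scan_video(frames=[object() for _ in range(5)])  # dummy frames
```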

You can see the system in action, frame by frame, in the GIF below:

The tech was designed to fight deepfakes: manipulated photos and videos created by deep-learning algorithms (hence the name). These files, which are often used to make it seem like a prominent celebrity or political figure said or did something they never said or did, are becoming increasingly problematic: the underlying algorithms keep learning, producing ever more convincing and hard-to-detect fakes.

This is what Video Authenticator is trying to defend against, and it was trained using the latest and greatest datasets for deepfake detection.

Since Video Authenticator is itself an AI-powered, deep-learning-based system, it won’t be released broadly to the public, where bad actors could potentially train their algorithms against it. Instead, it will be available through a partnership with the AI Foundation’s Reality Defender 2020 (RD2020) initiative; anyone interested in learning more or getting access to the tech will need to reach out to RD2020 here.

(via Engadget)


Author: DL Cade
Source: Petapixel
