How GPT-4o defends your identity against AI-generated deepfakes

The surge in deepfake incidents

Deepfake incidents are surging in 2024, predicted to increase 60% or more this year and push global cases past 150,000. That makes AI-powered deepfake attacks the fastest-growing type of adversarial AI today. Deloitte predicts deepfake attacks will cause over $40 billion in damages by 2027, with banking and financial services the primary targets.

AI-generated voice and video fabrications are blurring the lines of believability and hollowing out trust in institutions and governments. Deepfake tradecraft is now so pervasive among nation-state cyberwarfare organizations that it has matured into a standard attack tactic in the cyberwar these nations wage against each other constantly.

“In today’s election, advancements in AI, such as Generative AI or deepfakes, have evolved from mere misinformation into sophisticated tools of deception. AI has made it increasingly challenging to distinguish between genuine and fabricated information,” Srinivas Mukkamala, chief product officer at Ivanti, told VentureBeat.

Sixty-two percent of CEOs and senior business executives think deepfakes will create at least some operating costs and complications for their organization in the next three years, while 5% consider them an existential threat. Gartner predicts that by 2026, attacks using AI-generated deepfakes on face biometrics will lead 30% of enterprises to no longer consider such identity verification and authentication solutions reliable in isolation.

“Recent research conducted by Ivanti reveals that over half of office workers (54%) are unaware that advanced AI can impersonate anyone’s voice. This statistic is concerning, considering these individuals will be participating in the upcoming election,” Mukkamala said.

The U.S. Intelligence Community 2024 threat assessment states that “Russia is using AI to create deepfakes and is developing the capability to fool experts. Individuals in war zones and unstable political environments may serve as some of the highest-value targets for such deepfake malign influence.” Deepfakes have become so common that the Department of Homeland Security has issued a guide, Increasing Threats of Deepfake Identities.

How GPT-4o is designed to detect deepfakes

OpenAI’s latest model, GPT-4o, is designed to identify and stop these growing threats. The system card published on Aug. 8 describes it as an “autoregressive omni model, which accepts as input any combination of text, audio, image and video.” On that card, OpenAI writes, “We only allow the model to use certain pre-selected voices and use an output classifier to detect if the model deviates from that.”

Identifying potential deepfake multimodal content is one benefit of the design decisions that together define GPT-4o. Also noteworthy is the amount of red teaming done on the model, which is among the most extensive of any recent-generation AI model release industry-wide.

All models need to train continually on attack data to keep their edge, especially when it comes to keeping up with attackers’ deepfake tradecraft, which is becoming indistinguishable from legitimate content.

[Table: How GPT-4o features help identify and stop audio and video deepfakes. Source: VentureBeat analysis]

Key GPT-4o capabilities for detecting and stopping deepfakes

Key features of the model that strengthen its ability to identify deepfakes include the following:

Generative Adversarial Networks (GANs) detection. GPT-4o can identify synthetic content created with the same technology attackers use to produce deepfakes. The model spots previously imperceptible discrepancies in the content-generation process that even GANs can’t fully mask, such as flaws in how light interacts with objects in video footage or inconsistencies in voice pitch over time. GPT-4o’s GAN detection highlights these minute flaws that are undetectable to the human eye or ear.
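
To make the voice-pitch cue concrete, here is a minimal sketch of a frame-level pitch-consistency check. It is illustrative only, not OpenAI’s implementation: the pitch_jump_ratio function, the 80 Hz threshold and the assumption of f0 estimates from an external pitch tracker (such as librosa.pyin) are all assumptions for the example.

```python
import numpy as np

def pitch_jump_ratio(f0_hz: np.ndarray, jump_hz: float = 80.0) -> float:
    """Fraction of frame-to-frame pitch jumps larger than jump_hz.

    f0_hz: per-frame fundamental-frequency estimates in Hz (NaN = unvoiced),
    e.g. from a pitch tracker such as librosa.pyin. The 80 Hz default is an
    illustrative threshold, not a published GPT-4o parameter.
    """
    f0 = f0_hz[~np.isnan(f0_hz)]  # keep voiced frames only
    if f0.size < 2:
        return 0.0
    jumps = np.abs(np.diff(f0))
    # Natural speech pitch drifts smoothly between adjacent frames;
    # spliced or synthesized audio often shows abrupt discontinuities.
    return float(np.mean(jumps > jump_hz))
```

In practice, a cue like this would be one feature among many feeding a classifier, not a verdict on its own.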

GANs typically consist of two neural networks: a generator that produces synthetic data (images, video or audio) and a discriminator that evaluates its realism. The generator’s goal is to improve the content’s quality until it deceives the discriminator. This adversarial technique creates deepfakes that are nearly indistinguishable from real content.

Source: CEPS Task Force Report, Artificial Intelligence and Cybersecurity: Technology, Governance and Policy Challenges, Centre for European Policy Studies (CEPS), Brussels, May 2021.
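
For readers unfamiliar with the architecture, the following is a minimal generator/discriminator pair in PyTorch showing the adversarial loop described above. Layer sizes, dimensions and the stand-in data batch are placeholder assumptions, not details of any production deepfake system.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps random noise to a synthetic sample."""
    def __init__(self, latent_dim: int = 100, out_dim: int = 784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores how real a sample looks (1 = real, 0 = fake)."""
    def __init__(self, in_dim: int = 784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
bce = nn.BCELoss()
real = torch.randn(32, 784)            # stand-in for a batch of real data
fake = G(torch.randn(32, 100))

# The discriminator learns to separate real from fake ...
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
# ... while the generator learns to fool it.
g_loss = bce(D(fake), torch.ones(32, 1))
```

Each side’s improvement pressures the other, which is why mature GANs produce output with so few obvious artifacts.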

Voice authentication and output classifiers. One of the most valuable features of GPT-4o’s architecture is its voice authentication filter. The filter cross-references each generated voice with a database of pre-approved, legitimate voices. What’s fascinating about this capability is how the model uses neural voice fingerprints to track over 200 unique characteristics, including pitch, cadence and accent. GPT-4o’s output classifier immediately shuts down the process if any unauthorized or unrecognized voice pattern is detected.
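
As a hedged sketch of what such a cross-reference might look like: the APPROVED_FINGERPRINTS database, the embedding dimensions and the 0.85 similarity threshold below are hypothetical stand-ins, not OpenAI’s actual voice list or parameters.

```python
import numpy as np

# Hypothetical stand-ins: real fingerprints would come from a
# speaker-embedding model, not random vectors.
rng = np.random.default_rng(0)
APPROVED_FINGERPRINTS = {
    "preset_voice_1": rng.standard_normal(200),  # ~200 tracked characteristics
    "preset_voice_2": rng.standard_normal(200),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def authorize_output(voice_embedding: np.ndarray, threshold: float = 0.85) -> str:
    """Block generated audio whose voice matches no approved preset."""
    scores = {name: cosine(voice_embedding, fp)
              for name, fp in APPROVED_FINGERPRINTS.items()}
    name, best = max(scores.items(), key=lambda kv: kv[1])
    if best < threshold:
        # Mirrors the output-classifier behavior described above:
        # unrecognized voice pattern, shut the process down.
        raise PermissionError("Unrecognized voice pattern: output blocked")
    return name
```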


Multimodal cross-validation. OpenAI’s system card comprehensively defines this capability within the GPT-4o architecture. GPT-4o operates across text, audio and video inputs in real time, cross-validating multimodal data as legitimate or not. If the audio doesn’t match the expected text or video context, the system flags it. Red teamers found this especially crucial for detecting AI-generated lip-syncing or video impersonation attempts.
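
A minimal sketch of the cross-validation idea follows. The audio transcript and lip-sync score are assumed to come from upstream ASR and audio-visual sync models (both hypothetical here), and the two thresholds are illustrative.

```python
from difflib import SequenceMatcher

def cross_validate(expected_text: str, audio_transcript: str,
                   lipsync_score: float,
                   text_threshold: float = 0.8,
                   sync_threshold: float = 0.6) -> list[str]:
    """Return red flags whenever the modalities disagree."""
    flags = []
    # Does what was heard match what the text channel says?
    text_match = SequenceMatcher(None, expected_text.lower(),
                                 audio_transcript.lower()).ratio()
    if text_match < text_threshold:
        flags.append("audio/text mismatch")
    # Do the mouth movements in the video track the audio?
    if lipsync_score < sync_threshold:
        flags.append("possible lip-sync forgery")
    return flags  # an empty list means the modalities agree

# Example: a matching transcript but poor lip sync still gets flagged.
print(cross_validate("wire the funds today",
                     "wire the funds today",
                     lipsync_score=0.3))
```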

Deepfake attacks on CEOs are growing

Of the thousands of CEO deepfake attempts this year alone, the one targeting the CEO of the world’s biggest ad firm shows how sophisticated attackers are becoming.

Another attack happened over Zoom with multiple deepfake identities on the call, including the company’s CFO: a finance worker at a multinational firm was allegedly tricked into authorizing a $25 million transfer by deepfakes of the CFO and other senior staff.

In a recent Tech News Briefing with the Wall Street Journal, CrowdStrike CEO George Kurtz explained how improvements in AI are helping cybersecurity professionals defend systems while also commenting on how attackers are using it. Kurtz spoke with WSJ reporter Dustin Volz about AI, the 2024 U.S. election and threats posed by China and Russia.

“And if now in 2024 with the ability to create deepfakes, and some of our internal guys have made some funny spoof videos with me and it just to show me how scary it is, you could not tell that it was not me in the video,” Kurtz told the WSJ. “So I think that’s one of the areas that I really get concerned about. There’s always concern about infrastructure and those sort of things. Those areas, a lot of it is still paper voting and the like. Some of it isn’t, but how you create the false narrative to get people to do things that a nation-state wants them to do, that’s the area that really concerns me.”

The critical role of trust and security in the AI era

OpenAI’s prioritization of design goals and an architectural framework that put deepfake detection of audio, video and multimodal content at the forefront reflects the future of gen AI models.

“The emergence of AI over the past year has brought the importance of trust in the digital world to the forefront,” says Christophe Van de Weyer, CEO of Telesign. “As AI continues to advance and become more accessible, it is crucial that we prioritize trust and security to protect the integrity of personal and institutional data. At Telesign, we are committed to leveraging AI and ML technologies to combat digital fraud, ensuring a more secure and trustworthy digital environment for all.”

VentureBeat expects OpenAI to expand GPT-4o’s multimodal capabilities, including voice authentication and GAN-based detection, to identify and eliminate deepfake content. As businesses and governments increasingly rely on AI to enhance their operations, models like GPT-4o are becoming indispensable in securing their systems and safeguarding digital interactions.

Mukkamala emphasized to VentureBeat that “When all is said and done, though, skepticism is the best defense against deepfakes. It is essential to avoid taking information at face value and critically evaluate its authenticity.”


Author: Louis Columbus
Source: VentureBeat
Reviewed By: Editorial Team
