AI & RoboticsNews

AI is growing faster than companies can secure it, warn industry leaders

DataGrail Summit: Navigating the Risks of AI

At the DataGrail Summit 2024 this week, industry leaders delivered a stark warning about the rapidly advancing risks associated with AI.

Dave Zhou, CISO of Instacart, and Jason Clinton, CISO of Anthropic, highlighted the urgent need for robust security measures to keep pace with the exponential growth of AI capabilities during a panel titled “Creating the Discipline to Stress Test AI—Now—for a More Secure Future.” The panel, moderated by VentureBeat’s editorial director Michael Nunez, revealed both the thrilling potential and the existential threats posed by the latest generation of AI models.

AI’s exponential growth outpaces security frameworks

Jason Clinton, whose company Anthropic operates at the forefront of AI development, didn’t hold back. “Every single year for the last 70 years, since the perceptron came out in 1957, we have had a 4x year-over-year increase in the total amount of compute that has gone into training AI models,” he explained, emphasizing the relentless acceleration of AI’s power. “If we want to skate to where the puck is going to be in a few years, we have to anticipate what a neural network that’s four times more compute has gone into it a year from now, and 16x more compute has gone into it two years from now.”

Clinton warned that this rapid growth is pushing AI capabilities into uncharted territory, where today’s safeguards may quickly become obsolete. “If you plan for the models and the chatbots that exist today, and you’re not planning for agents and sub-agent architectures and prompt caching environments, and all of the things emerging on the leading edge, you’re going to be so far behind,” he cautioned. “We’re on an exponential curve, and an exponential curve is a very, very difficult thing to plan for.”

AI hallucinations and the risk to consumer trust

At the DataGrail Summit, Dave Zhou from Instacart highlighted the urgent and complex challenges of securing vast amounts of sensitive customer data, while also managing the unpredictable behavior of large language models (LLMs). “When we consider LLMs, which are Turing complete and have memory, from a security standpoint, it’s important to recognize that even if these models are trained to respond in certain ways, with enough prompting and nudging, there might be ways to exploit them,” Zhou explained.

He offered a compelling example of how AI-generated content could have real-world consequences. “For instance, we saw initial stock images of ingredients that resembled a hot dog—but it wasn’t quite a hot dog. It looked more like an alien version of one,” Zhou noted. Such errors, he warned, could diminish consumer trust or, in extreme cases, lead to actual harm. “If a recipe generated by AI is incorrect or ‘hallucinated,’ it could result in someone creating something that may harm them.”

Throughout the summit, speakers, including Zhou and Clinton, underscored that the rapid adoption of AI technologies, driven by the pursuit of innovation, has outpaced the development of crucial security frameworks. They emphasized the need for companies to invest as much in AI safety systems as they do in AI development itself.

Zhou urged companies to balance their investments. “Please try to invest as much as you are in AI into either those AI safety systems and those risk frameworks and the privacy requirements,” he advised, highlighting the “huge push” across industries to capitalize on AI’s productivity benefits. Without a corresponding focus on minimizing risks, he warned, companies could be inviting disaster.

Preparing for the unknown: AI’s future poses new challenges

Clinton, whose company operates on the cutting edge of AI intelligence, provided a glimpse into the future—one that demands vigilance. He described a recent experiment with a neural network at Anthropic that revealed the complexities of AI behavior.

“We discovered that it’s possible to identify in a neural network exactly the neuron associated with a concept,” he said. Clinton described how a model trained to associate specific neurons with the Golden Gate Bridge couldn’t stop talking about the bridge, even in contexts where it was wildly inappropriate. “If you asked the network… ‘tell me if you know, you can stop talking about the Golden Gate Bridge,’ it actually recognized that it could not stop talking about the Golden Gate Bridge,” he revealed, noting the unnerving implications of such behavior.

Clinton suggested that this research points to a fundamental uncertainty about how these models operate internally—a black box that could harbor unknown dangers. “As we go forward… everything that’s happening right now is going to be so much more powerful in a year or two years from now,” Clinton said. “We have neural networks that are already sort of recognizing when their neural structure is out of alignment with what they consider to be appropriate.”

As AI systems become more deeply integrated into critical business processes, the potential for catastrophic failure grows. Clinton painted a future where AI agents, not just chatbots, could take on complex tasks autonomously, raising the specter of AI-driven decisions with far-reaching consequences. “If you plan for the models and the chatbots that exist today… you’re going to be so far behind,” he reiterated, urging companies to prepare for the future of AI governance.

The DataGrail Summit panels in whole delivered a clear message: the AI revolution is not slowing down, and neither can the security measures designed to control it. “Intelligence is the most valuable asset in an organization,” Clinton stated, capturing the sentiment that will likely drive the next decade of AI innovation. But as both he and Zhou made clear, intelligence without safety is a recipe for disaster.

As companies race to harness the power of AI, they must also confront the sobering reality that this power comes with unprecedented risks. CEOs and board members of DataGrail Summit must heed these warnings and ensure that their organizations are not just riding the wave of AI innovation but are also prepared to navigate the treacherous waters ahead.

 


Author: Michael Nuñez
Source: Venturebeat
Reviewed By: Editorial Team

Related posts
AI & RoboticsNews

DeepSeek’s first reasoning model R1-Lite-Preview turns heads, beating OpenAI o1 performance

AI & RoboticsNews

Snowflake beats Databricks to integrating Claude 3.5 directly

AI & RoboticsNews

OpenScholar: The open-source A.I. that’s outperforming GPT-4o in scientific research

DefenseNews

US Army fires Precision Strike Missile in salvo shot for first time

Sign up for our Newsletter and
stay informed!