
AI’s future is packed with promise and potential pitfalls



Because it’s such a young science, machine learning (ML) is constantly being redefined. 

Many are looking at autonomous, self-supervised AI systems as the next big disrupter, or potential definer, in the discipline. 

These so-called “foundation models” include DALL-E 2, BERT, RoBERTa, Codex, T5, GPT-3, CLIP and others. They’re already being used in areas including speech recognition, coding and computer vision, and they’re emerging in others. Evolving in capability, scope and performance, they’re using billions of parameters and are able to generalize beyond expected tasks. As such, they’re inspiring awe, ire and everything in between. 

“It’s quite likely that the progress that they have been making will keep going on for quite a while,” said Ilya Sutskever, cofounder and chief scientist at OpenAI, whose work on foundation models has drawn wide-sweeping attention. “Their impact will be very vast – every aspect of society, every activity.” 

Rob Reich, Stanford professor of political science, agreed. “AI is transforming every aspect of life – personal life, professional life, political life,” he said. “What can we do? What must we do to advance the organizing power of mankind alongside our extraordinary technical advances?” 

Sutskever, Reich and several others spoke at length about the development, benefits, pitfalls and implications – both positive and negative – of foundation models at the spring conference of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The Institute was founded in 2019 to advance AI research, education, policy and practice to “improve the human condition,” and its annual spring conference focused on key advances in AI.

Far from optimal, making progress 

Foundation models are based on deep neural networks and self-supervised learning, which accepts unlabeled or partially labeled raw data. Algorithms then use small amounts of labeled data to determine correlations, create and apply labels, and train the system based on those labels. These models are described as adaptable and task-agnostic. 
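To make that two-phase recipe concrete, the toy sketch below is a hypothetical PyTorch illustration, not code from any actual foundation model: a tiny encoder is pretrained by predicting masked tokens in unlabeled sequences (the labels come from the data itself), then adapted to a downstream task with only a handful of labeled examples. All names, sizes and hyperparameters here are made up for illustration.

```python
# Minimal sketch (hypothetical, not CRFM or OpenAI code): self-supervised
# pretraining on unlabeled token sequences via masked-token prediction,
# followed by adaptation to a downstream task with a small labeled set.
import torch
import torch.nn as nn

VOCAB, DIM, MASK_ID = 100, 32, 0  # toy vocabulary; id 0 reserved as [MASK]

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.mix = nn.GRU(DIM, DIM, batch_first=True)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        hidden, _ = self.mix(self.embed(tokens))
        return hidden                          # (batch, seq_len, DIM)

encoder = TinyEncoder()
mlm_head = nn.Linear(DIM, VOCAB)               # predicts the original token ids

# --- Self-supervised phase: labels come from the data itself (the masked tokens).
unlabeled = torch.randint(1, VOCAB, (256, 16)) # stand-in for a raw, unlabeled corpus
opt = torch.optim.Adam(list(encoder.parameters()) + list(mlm_head.parameters()), lr=1e-3)
for _ in range(50):
    batch = unlabeled[torch.randint(0, 256, (32,))]
    mask = torch.rand(batch.shape) < 0.15      # hide ~15% of tokens
    corrupted = batch.masked_fill(mask, MASK_ID)
    logits = mlm_head(encoder(corrupted))
    loss = nn.functional.cross_entropy(logits[mask], batch[mask])
    opt.zero_grad(); loss.backward(); opt.step()

# --- Adaptation phase: a small labeled set trains a task-specific head on top
# of the frozen pretrained encoder (the "foundation" is reused, not retrained).
task_head = nn.Linear(DIM, 2)                  # e.g., binary sequence classification
labeled_x = torch.randint(1, VOCAB, (20, 16))  # only a handful of labeled examples
labeled_y = torch.randint(0, 2, (20,))
opt2 = torch.optim.Adam(task_head.parameters(), lr=1e-3)
for _ in range(100):
    feats = encoder(labeled_x).mean(dim=1).detach()  # pooled, frozen features
    loss = nn.functional.cross_entropy(task_head(feats), labeled_y)
    opt2.zero_grad(); loss.backward(); opt2.step()
```

The point of the sketch is the division of labor: the expensive, label-free pretraining step is done once, and many cheap task-specific heads can then be adapted on top of it.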

The term “foundation models” was coined by the newly formed Center for Research on Foundation Models (CRFM), an interdisciplinary group of researchers, computer scientists, sociologists, philosophers, educators and students that formed at Stanford University in August 2021. The description is purposely double-sided: It denotes such models’ existence as unfinished but serving as the common basis from which many task-specific models are built via adaptation. It’s also intended to emphasize the gravity of such models as a “recipe for disaster” if poorly constructed, and a “bedrock for future applications” if well-executed, according to a CRFM report. 

“Foundation models are super impressive, they’ve been used in a lot of different settings, but they’re far from optimal,” said Percy Liang, director of CRFM and Stanford associate professor of computer science.

He described them as useful for general capabilities and able to provide opportunities across a vast variety of disciplines, such as law, medicine and other sciences. For instance, they could power many tasks in medical imaging, where data volumes reach the petabyte level. 

Sutskever, whose OpenAI developed the GPT-3 language model and DALL-E 2, which generates images from text descriptions, pointed out that much progress has been made with text-generating models. “But the world is not just text,” he said. 

Solving the inherent problems of foundation models requires real-world use, he added. “These models are breaking out of the lab,” Sutskever said. “One way we can think about the progression of these models is that of gradual progress. These are not perfect; this is not the final exploration.” 

Questions of ethics and action

The CRFM report starkly points out that foundation models present “clear and significant societal risks,” both in their implementation and their premise, while the resources required to train them have put them out of reach for most of the community. 

The center also emphasizes that foundation models should be grounded, should emphasize the role of people and should support diverse research. Their future development demands open discussion and should establish protocols for data management, respect for privacy, standard evaluation paradigms and mechanisms for intervention and recourse. 

“In general, we believe concerted action during this formative period will shape how foundation models are developed, who controls this development and how foundation models will affect the broader ecosystem and impact society,” Liang wrote in a CRFM blog post. 

Defining the boundary between safe and unsafe foundation models requires having a system in place to track when and for what these models are being used, Sutskever agreed. This would also include methods for reporting misuse. But such infrastructure is lacking right now, he said, and the emphasis remains on training these models. 
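No standard for such tracking exists yet, but the kind of infrastructure Sutskever describes could, in principle, look something like the hypothetical sketch below: each model call is logged with a caller and a stated purpose, and misuse reports point back at a specific logged call. The function names and log format are illustrative assumptions, not any provider’s actual API.

```python
# Hypothetical usage-tracking sketch: every model call is appended to an audit
# log with caller, purpose and timestamp, and a misuse report can reference a
# specific call id. Purely illustrative; not an existing API.
import json, time, uuid
from pathlib import Path

AUDIT_LOG = Path("model_usage.log")

def log_call(model: str, caller: str, purpose: str, prompt: str) -> str:
    """Append an audit record and return its id for later misuse reports."""
    call_id = str(uuid.uuid4())
    record = {"id": call_id, "ts": time.time(), "model": model,
              "caller": caller, "purpose": purpose, "prompt_chars": len(prompt)}
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return call_id

def report_misuse(call_id: str, reason: str) -> None:
    """Record a misuse report that points back at a logged call."""
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps({"misuse_report_for": call_id, "reason": reason,
                            "ts": time.time()}) + "\n")

# Example: log a generation request, then flag it after the fact.
cid = log_call("text-model", caller="team-a", purpose="marketing copy", prompt="...")
report_misuse(cid, "generated content violated policy")
```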

With DALL-E 2, OpenAI did advance planning before deployment to think through the many ways things can go wrong, such as bias and misuse, he contended. The company might also modify training data using filters, or perform training “after the fact” to adjust system capabilities, Sutskever said. 
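OpenAI has not published its exact filtering pipeline, but a training-data filter of the sort described could look like the generic sketch below, in which records are dropped if they hit a keyword blocklist or score too high on an external toxicity classifier. Both the blocklist terms and the classifier here are placeholders, not OpenAI’s actual mechanisms.

```python
# Hypothetical illustration only: a simple pre-training data filter of the kind
# described above. Records that trip a blocklist or exceed a toxicity score
# from an external classifier are dropped before the model ever sees them.
from typing import Callable, Iterable, Iterator

BLOCKLIST = {"example_slur", "example_banned_phrase"}   # placeholder terms

def filter_corpus(records: Iterable[str],
                  toxicity: Callable[[str], float],
                  threshold: float = 0.8) -> Iterator[str]:
    """Yield only records that pass both the blocklist and the classifier."""
    for text in records:
        lowered = text.lower()
        if any(term in lowered for term in BLOCKLIST):
            continue                      # hard rule: drop blocklisted content
        if toxicity(text) >= threshold:
            continue                      # soft rule: drop high-risk content
        yield text

# Usage with a stand-in classifier (a real one would be a trained model):
clean = list(filter_corpus(["a benign caption", "another caption"],
                           toxicity=lambda t: 0.0))
```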

Overall, though, “Neural networks will continue to surprise us and make incredible progress,” he said. “It’s quite likely that the progress that they have been making will keep going on for quite a while.” 

However, Reich is more wary about the implications of foundation models. “AI is a developmentally immature domain of scientific inquiry,” said the associate director of HAI. He pointed out that computer science has only been around, formally speaking, for a few decades, and AI for only a fraction of that. 

“I am suspicious of the idea of democratizing AI,” Reich said. “We don’t want to democratize access to some of the most powerful technologies and put them in the hands of anyone who might use them for adversarial purposes.”

While there are opportunities, there are also many risks, he said, and he questioned what counts as responsible development and what leading AI scientists are doing to accelerate the development of professional norms. He added that social and safety questions require input from multiple stakeholders and must extend beyond the purview of any technical expert or company. 

“AI scientists lack a dense institutional footprint of professional norms and ethics,” he said. “They are, to put it even more provocatively, like late-stage teenagers who have just come into a recognition of their powers in the world, but whose frontal lobes are not yet sufficiently developed to give them social responsibility. They need a rapid acceleration. We need a rapid acceleration of the professional norms and ethics to steward our collective work as AI scientists.”



Author: Taryn Plumb
Source: VentureBeat

