Former Google CEO Eric Schmidt warns against overregulation of AI

Former Google CEO Eric Schmidt urged cooperation with Chinese scientists, warned against the threat of misinformation, and advised against overregulation by governments today in a broad-ranging speech about AI ethics and regulation of big tech companies. He also talked about conflict deterrence between nation-states in the age of AI and pondered how secretaries of state might share information in the coming age of artificial general intelligence (AGI).

“What are the norms of this? This area strikes me as one that’s nascent but will become very important as general intelligence becomes more and more possible some time from now,” he said. “We haven’t had a common regime around how all that works.”

In a speech at Stanford University’s Hoover Institution today, he praised progress in AI areas such as autonomous driving and medicine, federated learning for privacy-preserving on-device machine learning, and eye scans for detecting cardiovascular issues. He said a combination of generative adversarial networks and reinforcement learning will lead to major advances in science in the years ahead.

He also urged government restraint in regulation of technology as the AI industry continues to grow.

“I would be careful of building any form of additional regulatory structure that’s extralegal,” Schmidt said when a member of the audience proposed the creation of a new federal agency to critique algorithms used by private companies.

Schmidt shared the stage with Marietje Schaake, a Stanford Institute for Human-Centered Artificial Intelligence (HAI) fellow and former Dutch member of the European Parliament who played a role in the passage of the GDPR. She countered that companies that say regulation may stifle innovation often assume technology is more important than democracy and the rule of law.

A hands-off approach to tech regulation has led to the creation of new monopolies, thrown journalism into turmoil, and allowed the balkanization of the internet, she said. Failure to act now, she added, could allow AI to accelerate and amplify discrimination. She suggested systematic impact assessments that run in parallel with AI research so that understanding of negative impacts keeps pace with progress.

“I think it’s very clear that tech companies can’t all stay on the fence in taking a position in relation to values and rights. I personally believe that a rules-based system serves the public interest as well as collective rights and liberties the companies benefit from,” she said. “I see clear momentum now between the EU and U.S. and a significant part of the democratic world, where [we] can catch up to the civil regulatory gaps platforms and other digital services … anticipating the broader use of artificial intelligence.”

She also argued that big tech self-regulation efforts have failed and emphasized the need for empowering regulators in order to defend democracy.

“Because with great power should come great responsibility, or at least modesty,” she said. “Everyone has a role to play to strengthen the resilience of our democracy.”

Schaake and Schmidt spoke for more than an hour this morning at a symposium on AI ethics, policy, and governance held by the Stanford Institute for Human-Centered AI.

The debate between the two comes at a time when regulators in the United States have increased scrutiny of tech giants. Companies like Google currently face antitrust investigations from state attorneys general, and Democratic presidential candidate Elizabeth Warren has made the breakup of tech giants a central part of her campaign.

Last month, a number of AI ethicists asked HAI to rescind its invitation to this event, citing Schmidt’s potential role in issues ranging from Google’s project to enter mainland China to its work with the Department of Defense to its $90 million payout to Andy Rubin despite sexual harassment allegations. The petition, written by Tech Inquiry founder Jack Poulson, was signed by roughly 50 people, about a dozen of whom currently work as engineers at Google.

In response to the petition, HAI published a tweet warning against the dangers of “damaging intellectual blindness.”

The Pentagon’s Defense Innovation Board AI ethics recommendations and the report from the National Security Commission on Artificial Intelligence, two committees that Schmidt oversees, are due out October 31 and November 5, respectively.

Both initiatives are aimed at helping the United States create a national AI strategy, as roughly 30 other nations around the world have done, he said. Last week, founders of the Stanford center called for $120 billion in government spending over the next decade as part of a national strategy.

Author: Khari Johnson

Source: VentureBeat

