The strange new world of AI power, politics and the ‘pause’ | The AI Beat

The noisy debates around AI risk and regulation got many decibels louder last week, while simultaneously becoming even harder to decipher.

There was the blowback from tweets by Senator Chris Murphy (D-CT) about ChatGPT, including his warning that “Something is coming. We aren’t ready.” Then there was the complaint to the FTC about OpenAI, as well as Italy’s ban on ChatGPT. And, most notably, there was the open letter signed by Elon Musk, Steve Wozniak and others that proposed a six-month “pause” on large-scale AI development. It was put out by the Future of Life Institute, an organization focused on x-risk (shorthand for “existential risk”), and, according to Eliezer Yudkowsky, it didn’t even go far enough.

Not surprisingly, the fierce debate about AI ethics and risks, both short-term and long-term, has been amped up by the mass popularity enjoyed by OpenAI’s ChatGPT since it was released on November 30. And the growing number of industry-led AI tools built on large language models (LLMs) — from Microsoft’s Bing and Google’s Bard to a slew of startups — has led to a far greater scale of AI discussion in the mainstream media, industry pubs and on social platforms.

AI debates have veered toward the political

But it seems that as AI leaves the research lab and launches, fully flowered, into the cultural zeitgeist, promising tantalizing opportunities as well as presenting real-world societal dangers, we’re also entering a strange new world of AI power and politics. No longer are AI debates just about technology, or science, or even reality. They are also about opinions, fears, values, attitudes, beliefs, perspectives, resources, incentives and straight-up weirdness.

This is not inherently bad, but it does lead to the DALL-E-drawn elephant in the room: For months now, I’ve been trying to figure out how to cover the confusing, kinda creepy, semi-scary corners of AI development. These are focused on the hypothetical possibility of artificial general intelligence (AGI) destroying humanity, with threads of what has recently become known as “TESCREAL” ideologies — including effective altruism and longtermism, with transhumanism woven in. You’ll find some science fiction sewn into this AI team sweater, with the words “AI safety” and “AI alignment” embroidered in red.

Each of these areas of the AI landscape has its own rabbit hole to go down. Some seem relatively level-headed, while others lead to articles about the paperclip-maximizing problem; a posthuman future created by artificial superintelligence; and a San Francisco pop-up museum devoted to highlighting the AGI debate, complete with a sign saying “sorry for killing most of humanity.”

The disconnect between applied AI and AI predictions

Much of my VentureBeat coverage focuses on the effects of AI on the enterprise. Frankly, you don’t see C-suite executives worrying about whether AI will extract their atoms to turn into paper clips — they’re wondering whether AI and machine learning can boost customer service, or make workers more productive.

The disconnect is that there are plenty of voices at top companies, from OpenAI and Anthropic to DeepMind and all around Silicon Valley, who have an agenda based at least partly on some of the TESCREAL issues and belief systems. That may not have mattered much 7, 10 or 15 years ago, when deep-learning research was in its infancy, but it certainly garners a lot of attention now. And it’s becoming more and more difficult to discern the agenda behind some of the biggest headline-grabbers.

That has led to suspicion and accusations. For example, last week a Los Angeles Times article highlighted the contradiction that Sam Altman, CEO of OpenAI, has declared himself “a little bit scared” of the very technology he is “currently helping to build and aiming to disseminate, for profit, as widely as possible.”

The article said: “Let’s consider the logic behind these statements for a second: Why would you, a CEO or executive at a high-profile technology company, repeatedly return to the public stage to proclaim how worried you are about the product you are building and selling? Answer: If apocalyptic doomsaying about the terrifying power of AI serves your marketing strategy.”

The shift from technology and science to politics

Over the weekend, I posted a Twitter thread. I was at a loss, I wrote, as to how to address the issues lurking beneath the AI pause letter, the information that led to Senator Murphy’s tweets, the polarizing debates about open versus closed-source AI, and Sam Altman’s biblical prophecy-style post about AGI. All of these discussions are being driven partly by people with beliefs that most of the public has no idea about — both that they hold those beliefs and what those beliefs mean.

What is a humble reporter who is trying to be balanced and reasonably objective supposed to do? And what can everyone in the AI community — from research to industry to policy — do to get a grip on what’s going on?

Former White House policy advisor Suresh Venkatasubramanian replied that the problem is that “there’s a huge political agenda behind a lot of what masquerades as tech discussion.” And others agreed that the discourse around AI has moved from the realm of technology and science to politics — and power.

Technology has always been political, of course. But perhaps it does help to acknowledge that the current AI debates have soared into the stratosphere (or sunk into the muck, depending on your take) of political discourse.

Spend time on real-world risks

There were other helpful recommendations for how we can all gain some perspective. Rich Harang, a principal security architect at Nvidia, tweeted that it’s important to talk to the people who actually build and deploy these LLMs. “Ask people going off the deep end about AI ‘x-risk’ what their practical experience is in doing applied work in the area,” he advised, adding that it’s important to “spend some time on real-world risks that exist right now that stem from ML R&D. There’s plenty, from security issues to environmental issues to labor exploitation.”

And B Cavello, director of emerging technologies at the Aspen Institute, pointed out that “predictions are often spaces of disagreement.” Cavello added that they have been working on focusing less on the disagreement and more on where people are aligned — many of those who disagree about AGI, for example, do agree on the need for regulation and for AI developers to be more responsible.

I’m grateful to all who responded to my Twitter thread, both in the comments and in direct messages. Have a great week.


Author: Sharon Goldman
Source: VentureBeat
