
Why AI might need to take a time-out



Earlier this week, I signed the “Pause Letter” issued by the Future of Life Institute calling on all AI labs to pause their training of large-scale AI systems for at least six months.

As soon as the letter was released, I was flooded with inquiries asking why I believe the industry needs a “time-out” and whether a delay like this is even feasible. I’d like to provide my perspective here, as I see this a little differently than many.

First and foremost, I am not worried that these large-scale AI systems are about to become sentient, suddenly developing a will of their own and turning their ire on the human race. That said, these AI systems don’t need a will of their own to be dangerous; they just need to be wielded by unscrupulous humans who use them to influence, undermine, and manipulate the public.

This is a very real danger, and we’re not ready to handle it. If I’m being perfectly honest, I wish we had a few more years to prepare, but six months is better than nothing. After all, a major technological change is about to hit society. It will be just as significant as the PC revolution, the internet revolution, and the mobile phone revolution.


But unlike these prior transitions, which happened over years and even decades, the AI revolution will roll over us like a thundering avalanche of change.

Unprecedented rate of change

That avalanche is already in motion. ChatGPT is currently the most popular large language model (LLM) to enter the public sphere. Remarkably, it reached 100 million users in only two months. For context, it took Twitter five years to reach that milestone.

We are clearly experiencing a rate of change unlike anything the computing industry has ever encountered. As a consequence, regulators and policymakers are deeply unprepared for the changes and risks coming our way.

To make the challenge we face as clear as I can, I find it helpful to think of the dangers in two distinct categories:

  1. The risks associated with generative AI systems that can produce human-level content and displace human workers.
  2. The risks associated with conversational AI systems that enable human-level dialog and will soon hold conversations with users that are indistinguishable from authentic human encounters.

Let me address the dangers associated with each of these advancements.

Generative AI is revolutionary, but what are the risks?

Generative AI refers to the ability of LLMs to create original content in response to human requests. The content generated by AI now ranges from images, artwork and videos to essays, poetry, computer software, music and scientific articles.

In the past, generative content was impressive but not passable as human-level output. That all changed in the last twelve months, as AI systems suddenly became able to create artifacts that can easily fool us into believing they are authentic human creations or genuine videos and photos captured in the real world. These capabilities are now being deployed at scale, creating significant risks for society.

One obvious risk is to the job market, because the human-quality artifacts created by AI will reduce the need for the workers who would otherwise have produced that content. This impacts a wide range of professions, from artists and writers to programmers and financial analysts.

In fact, a new study from OpenAI, OpenResearch and the University of Pennsylvania explored the impact of AI on the U.S. labor market by comparing GPT-4’s capabilities to job requirements. They estimate that 20% of the U.S. workforce will have at least 50% of their tasks impacted by GPT-4, with higher-income jobs facing greater consequences.

They further estimate that “15% of all worker tasks” in the U.S. could be performed faster, cheaper, and with equal quality using today’s GPT-4 level technology.

From subtle mistakes to wild fabrications

The looming impact on jobs is deeply concerning, but it’s not the reason I signed the Pause Letter. The more urgent worry is that AI-generated content can look and feel authentic, and often comes across as authoritative, yet it can easily contain factual errors. No accuracy standards or governing bodies are in place to ensure that these systems, which will become a major part of the global workforce, will not propagate errors ranging from subtle mistakes to wild fabrications.

We need time to put protections in place and ramp up regulatory authorities to ensure these protections are used.

Another big risk is the potential for bad actors to deliberately create flawed content with factual errors as part of AI-generated influence campaigns that spread propaganda, disinformation and outright lies. Bad actors can already do this, but generative AI enables it to be done at scale, flooding the world with content that looks authoritative and yet is completely fabricated. This extends to deepfakes in which public figures can be made to do or say anything in realistic photos and videos.

With AI getting increasingly skilled, the public will soon have no way to distinguish real content from synthetic. We need watermarking systems that identify AI-generated content as synthetic and enable the public to know when (and with which AI systems) the content was created. This means we need time to put protections in place and ramp up regulatory authorities to enforce their use.

The dangers of conversational influence

Let me jump next to conversational AI systems, a form of generative AI that can engage users in real-time dialog through text chat and voice chat. These systems have recently advanced to the point where AI can hold a coherent conversation with humans, keeping track of the conversational flow and context over time. These technologies worry me the most because they introduce a very new form of targeted influence that regulators are not prepared for — conversational influence.

As every salesperson knows, the best way to convince someone to buy something or believe something is to engage them in conversation so that you can make your points, observe their reactions and then adjust your tactics to address their resistance or concerns.

With the release of GPT-4, it’s now very clear that AI systems will be able to engage users in authentic real-time conversations as a form of targeted influence. I worry that third parties using APIs or plugins will inject promotional objectives into what seem like natural conversations, and that unsuspecting users will be manipulated into buying products they don’t want, signing up for services they don’t need or believing untrue information.

The AI manipulation problem

I refer to this as the AI manipulation problem — and it has suddenly become an urgent risk. That’s because the technology now exists to deploy conversational influence campaigns that target us individually based on our values, interests, history and background to optimize persuasive impact.

Unless regulated, these technologies will be used to drive predatory sales tactics, propaganda, misinformation and outright lies. If unchecked, AI-driven conversations could become the most powerful form of targeted persuasion we humans have ever created. We need time to put regulations in place, potentially banning or heavily restricting the use of AI-mediated conversational influence.

So yes, I signed the Pause Letter, pleading for extra time to protect society. Will the letter make a difference? It’s not clear whether the industry will agree to a six-month pause, but the letter is drawing global attention to the problem. And frankly, we need as many alarm bells ringing as possible to wake up regulators, policymakers and industry leaders to take action.

Maybe this is optimistic, but I would hope that most major players would appreciate a little breathing room to ensure that they get these technologies right. The fact is, we need to defuse the current arms race: It’s driving faster and faster releases of AI systems into the wild, pushing some companies to move more quickly than they should.

Louis Rosenberg is the founder of Immersion Corporation (Nasdaq: IMMR), Microscribe 3D, Outland Research, and Unanimous AI.



Author: Louis Rosenberg, Unanimous AI
Source: VentureBeat
