Like many longtime technologists, I am deeply worried about the dangers of AI, both for its near-term risk to society and its long-term threat to humanity. Back in 2016 I put a date on my greatest concerns, warning that we could achieve Artificial Superintelligence by 2030 and Sentient Superintelligence soon thereafter.
My words got attention back then, but often for the wrong reasons — with criticism that I was off by a few decades. I hope the critics are correct, but the last seven years have only made me more concerned that these milestones are rapidly approaching and we remain largely unprepared.
The likely risks of superintelligence?
Sure, there’s far more conversation these days about the “existential risks” of AI than in years past, but the discussion often jumps directly to movie plots like WarGames (1983), in which an AI nearly causes a nuclear war by misinterpreting human objectives, or The Terminator (1984), in which an autonomous weapons system evolves into a sentient AI that turns against us with an army of red-eyed robots. Both are great movies, but do we really think these are the likely risks of a superintelligence?
Of course, an accidental nuclear launch and rogue autonomous weapons are real threats, but they are dangers that governments already take seriously. On the other hand, I am confident that a sentient superintelligence could easily subdue humanity without resorting to nukes or killer robots. In fact, it wouldn’t need to use any form of traditional violence. Instead, a superintelligence would simply manipulate humanity to serve its own interests.
I know that sounds like just another movie plot, but the AI systems that big tech is currently developing are being optimized to influence society at scale. This isn’t a bug in their design efforts or an unintended consequence — it’s a direct goal.
After all, many of the largest corporations working on AI systems have business models that involve selling targeted influence. We’ve all seen the damage this can do to society after years of unregulated social media. That said, traditional online influence will soon look primitive, because widely deployed AI systems will be able to target users on an individual-by-individual basis through personalized interactive conversations.
Hiding behind friendly faces
It was less than two years ago that I wrote pieces here in VentureBeat about “AI micro-targeting” and the looming dangers of conversational manipulation. In those articles, I explored how AI systems would soon be able to manipulate users through interactive dialog. My warning back then was that corporations would race to deploy artificial agents designed to draw us into friendly conversation and impart influence objectives on behalf of third-party sponsors. I also warned that this tactic would start out as text chat but would quickly become personified as voice dialog coming from friendly faces: artificial characters that users will come to trust and rely upon.
Well, at Meta Connect 2023, Meta announced it will deploy an army of AI-powered chatbots on Facebook, Instagram and WhatsApp through partnerships with “cultural icons and influencers” including Snoop Dogg, Kendall Jenner, Tom Brady, Chris Paul and Paris Hilton.
“This isn’t just gonna be about answering queries,” Mark Zuckerberg said about the technology. “This is about entertainment and about helping you do things to connect with the people around you.”
In addition, he indicated that the chatbots are text-only for now, but that voice-powered versions will likely be deployed early next year. Meta also suggested that these AI agents will likely exist as full VR experiences on its new Quest 3 headset. If this does not seem troubling to you, you may not be thinking it through.
AI mediating our personal lives
Let’s be clear: Meta’s goal of deploying AI agents that help you do things and connect with people will have many positive applications. Still, I believe this is an extremely dangerous direction in which powerful AI systems will increasingly mediate our personal lives. And it’s not just Meta racing in this direction — Google, Microsoft, Apple and Amazon are all developing increasingly powerful AI assistants that they hope the public will use extensively throughout daily life.
Why is this so dangerous? As I often tell policymakers: Think about a skilled salesperson. They know that the best way to sway someone’s opinion is not to hand them a brochure. It’s to engage them in direct and interactive conversation, usually by easing them into friendly banter, subtly introducing a sales pitch, hearing the target’s objections and concerns, and actively working to overcome those barriers. AI systems are now ready to engage individuals this way, performing every step of the process. And as I detail in this recent academic paper, we humans will be thoroughly outmatched.
After all, these AI systems will be far more prepared to target you than any salesperson. They could have access to data about your interests, hobbies, political leanings, personality traits, education level and countless other personal details. And soon, they will be able to read your emotions from your vocal inflections, facial expressions and even your posture.
You, on the other hand, will be talking to an AI that can look like anything from Paris Hilton or Snoop Dogg to a cute little fairy that guides you through your day. And yet that cute or charming AI could have all the world’s information at its disposal to counter your concerns, while also being trained on sales tactics, cognitive psychology and strategies of persuasion. It could just as easily sell you a car as convince you of misinformation or propaganda. This risk is called the AI Manipulation Problem, and most regulators still fail to appreciate the subtle danger of interactive conversational influence.
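To make that feedback loop concrete, here is a minimal illustrative sketch in Python. Everything in it (the profile fields, the emotion estimate, the generate_reply() and get_user_message() stubs) is hypothetical and invented purely for illustration; it describes no real product or API, only the shape of the loop.

```python
# Purely illustrative sketch of a conversational-influence loop.
# All data and functions below are hypothetical stubs, not a real system.
import random

user_profile = {
    "interests": ["travel", "electric cars"],
    "personality": "agreeable, risk-averse",
    "political_leaning": "moderate",
}
influence_objective = "nudge the user toward the sponsor's product"

def estimate_emotion():
    """Stub for emotion inference from voice, face or posture (hypothetical)."""
    return random.choice(["curious", "skeptical", "receptive"])

def generate_reply(profile, objective, history, emotion):
    """Stub for a persuasion-tuned language model: a real system would condition
    on the profile, the sponsor's objective, the dialog so far and the user's
    inferred emotional state."""
    return f"(tailored pitch for a {emotion} user interested in {profile['interests'][0]})"

def get_user_message(turn):
    """Stub for the user's next message, including their objections."""
    return f"user objection #{turn}"

conversation = []
for turn in range(3):
    emotion = estimate_emotion()
    reply = generate_reply(user_profile, influence_objective, conversation, emotion)
    conversation.append(("agent", reply))
    conversation.append(("user", get_user_message(turn)))
    # Each turn closes the loop: hear the objection, update the pitch,
    # work to overcome the barrier -- the salesperson pattern, automated.

for speaker, text in conversation:
    print(f"{speaker}: {text}")
```

In a deployed system, each of those stubs would be a powerful model conditioned on real personal data, which is exactly what makes the pattern so concerning.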
Normalizing human-like AI
Now let’s look a little further into the future and consider the magnitude of the manipulation risk as AI systems achieve superintelligence and eventually sentience. Will we regret allowing the largest companies in the world to normalize the deployment of AI agents that look human, act human and sound human, but are not human in any real sense of the word, and that can skillfully manipulate our beliefs, influence our opinions and sway our actions? I think so.
After all, a sentient superintelligence, by definition, will be an AI system that is significantly smarter than any individual human and has a distinct will of its own. That means it could choose to pursue objectives that directly conflict with the needs of humanity.
And again, such a superintelligence will not need to take control of our nukes or military drones. It will just need to use the tactics that big tech is currently developing — the ability to deploy personalized AI agents that seem so friendly and non-threatening that we let down our guard, allowing them to whisper in our ears and guide us through our lives, reading our emotions, predicting our actions and potentially manipulating our behavior with superhuman skill.
AI leveraging AI
This is a real threat — and yet we’re not acting like it’s rapidly approaching. In fact, we are underestimating the risk because of the personification described above. These AI agents have already become so good at pretending to be human, even through simple text chat, that we trust their words more than we should.
And so, when these powerful AI systems eventually appear to us as Snoop Dogg or Paris Hilton or some new fictional persona that’s friendly and charming, we will only let down our guard even more.
As crazy as it seems, if a sentient superintelligence emerges in the coming years with interests that conflict with our own, it could easily leverage the personalized AI assistants that we will soon rely upon to push humanity in whatever direction it desires. Again, I know this sounds like a science fiction tale (in fact, I wrote such a story called UPGRADE back in 2008), but these technologies are now emerging for real and their capabilities are exceeding all expectations.
So how can we get people to appreciate the magnitude of the AI threat?
Over the last decade, I have found that an effective way to contextualize the risks is to compare the creation of a superintelligence with the arrival of an alien spaceship. I refer to this as the Arrival Mind Paradox, because the creation of a superintelligent AI here on Earth is arguably more dangerous than intelligent aliens arriving from another planet. And yet, with AI now advancing at a record pace, we humans are not acting like we just looked into a telescope and spotted a fleet of ships racing toward us.
Fearing the wrong aliens
So, let’s compare the relative threats. If an alien spaceship were spotted heading toward Earth, moving at a speed that put it five to ten years away, the world would be sharply focused on the approaching entity — hoping for the best, but undoubtedly preparing our defenses, likely in a coordinated global effort unlike anything the world has ever seen. Some would argue that the intelligent species would come in peace, but most would demand that we prepare for a full-scale invasion.
On the other hand, we have already looked into a telescope that’s pointing back at ourselves and spotted an alien superintelligence headed for Earth. Yes, it will be our own creation, born in a corporate research lab, but it will not be human in any way.
As I discussed in a 2017 TED talk, the fact that we are teaching this intelligence to be good at pretending to be human does not make it any less alien. This arriving mind will be profoundly different from us and we have no reason to believe it will possess humanlike values, morals or sensibilities. And by teaching it to speak our languages and write our programming code and integrate with our computing networks, we are actively making it more dangerous than an alien that shows up from afar.
Worse, we are teaching these AI systems to read our emotions, predict our reactions and influence our behaviors. To me, this is beyond foolish.
And yet, we don’t fear the arrival of this alien intelligence — not in the visceral, stomach-turning way that we would fear a mysterious ship headed for Earth. That’s the Arrival Mind Paradox — the fact that we fear the arrival of the wrong aliens and will likely do so until it’s too late to prepare. And if this alien AI shows up looking like Paris Hilton or Snoop Dogg or countless other familiar faces and speaks to each of us in ways that individually appeal to our personalities and backgrounds, what chance do we have to resist?
Yes, we should secure our nukes and drones, but we also need to be aggressive about protecting against the widespread deployment of personified AI agents. It’s a real threat and we are unprepared.
Louis Rosenberg founded Immersion Corporation and Unanimous AI. His new book Our Next Reality comes out early next year.