![AI Agents Are Coming: Decoding Your Personality](https://toptech.news/wp-content/uploads/2025/02/Like-e-you17-02-2025.jpg)
When I was a kid, there were four AI agents in my life. Their names were Inky, Blinky, Pinky and Clyde, and they tried their best to hunt me down. This was the 1980s, and the agents were the four colorful ghosts in the iconic arcade game Pac-Man.
By today’s standards they weren’t particularly smart, yet they seemed to pursue me with cunning and intent. This was decades before neural networks were used in video games, so their behaviors were controlled by simple heuristic algorithms that dictated how they chased me around the maze.
Most people don’t realize this, but the four ghosts were designed with different “personalities.” Good players can observe their actions and learn to predict their behaviors. For example, the red ghost (Blinky) was programmed with a “pursuer” personality that charges directly toward you. The pink ghost (Pinky), on the other hand, was given an “ambusher” personality that predicts where you’re going and tries to get there first. As a result, if you rush directly at Pinky, you can use her personality against her, causing her to actually turn away from you.
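The underlying logic is simple enough to sketch in a few lines. Here is a rough Python reconstruction of the two targeting rules, offered as an illustration rather than the game’s actual code; the tile coordinates and function names are my own. Each ghost simply steers toward its target tile at every intersection.

```python
# Illustrative reconstruction of the two targeting heuristics described
# above, written in Python rather than the original arcade assembly.
# Tile coordinates and function names are invented, not from the game.

def blinky_target(pacman_tile):
    """Pursuer: Blinky targets the tile Pac-Man currently occupies."""
    return pacman_tile

def pinky_target(pacman_tile, pacman_dir):
    """Ambusher: Pinky targets four tiles ahead of Pac-Man's heading."""
    x, y = pacman_tile
    dx, dy = pacman_dir  # e.g., (1, 0) for right, (0, -1) for up
    # (The arcade original also shifted the "up" target four tiles left
    # because of an overflow bug; that quirk is omitted here.)
    return (x + 4 * dx, y + 4 * dy)
```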
I reminisce because in 1980, a skilled human could observe these AI agents, decode their unique personalities and use those insights to outsmart them. Now, 45 years later, the tables are about to turn. Like it or not, AI agents will soon be deployed that are tasked with decoding your personality so they can use those insights to optimally influence you.
The future of AI manipulation
In other words, we are all about to become unwitting players in “the game of humans,” and it will be the AI agents trying to earn the high score. I mean this literally: most AI systems are designed to maximize a “reward function” that earns points for achieving objectives. This allows AI systems to find optimal solutions quickly. Unfortunately, without regulatory protections, we humans will likely become the objective that AI agents are tasked with optimizing.
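To make the idea of a reward function concrete, here is a minimal, hypothetical sketch of this kind of optimization loop. The tactic names, the simulated conversion rates and the conversion-based reward are all invented for illustration and describe no specific product.

```python
import random

# Hypothetical sketch of reward-driven optimization (an epsilon-greedy
# bandit): the agent tries conversational tactics and gravitates toward
# whichever ones score the most "points." All names and numbers invented.

tactics = ["flattery", "scarcity", "social_proof"]
scores = {t: 0.0 for t in tactics}  # total reward earned per tactic
counts = {t: 0 for t in tactics}    # times each tactic was tried

def pick_tactic(epsilon=0.1):
    if random.random() < epsilon:  # occasionally explore a random tactic
        return random.choice(tactics)
    # otherwise exploit the best average reward so far
    return max(tactics, key=lambda t: scores[t] / max(counts[t], 1))

def update(tactic, converted):
    counts[tactic] += 1
    scores[tactic] += 1.0 if converted else 0.0  # the reward function

# Simulated users: each tactic has some (made-up) chance of working.
true_rates = {"flattery": 0.05, "scarcity": 0.15, "social_proof": 0.10}
for _ in range(10_000):
    t = pick_tactic()
    update(t, random.random() < true_rates[t])

print(max(tactics, key=lambda t: scores[t] / max(counts[t], 1)))
```

Swap the simulated users for real ones and the “reward” for a purchase or a changed mind, and this same loop becomes the feedback mechanism I argue below should be restricted.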
I am most concerned about the conversational agents that will engage us in friendly dialog throughout our daily lives. They will speak to us through photorealistic avatars on our PCs and phones and, soon, through AI-powered glasses that will guide us through our days. Unless there are clear restrictions, these agents will be designed to conversationally probe us for information so they can characterize our temperaments, tendencies, personalities and desires, and use those traits to maximize their persuasive impact when working to sell us products, pitch us services or convince us to believe misinformation.
This is called the “AI Manipulation Problem,” and I’ve been warning regulators about the risk since 2016. Thus far, policymakers have not taken decisive action, viewing the threat as too far in the future. But now, with the release of DeepSeek-R1, the final barrier to widespread deployment of AI agents (the cost of real-time processing) has been greatly reduced. Before this year is out, AI agents will become a new form of targeted media that is so interactive and adaptive, it can optimize its ability to influence our thoughts, guide our feelings and drive our behaviors.
Superhuman AI ‘salespeople’
Of course, human salespeople are interactive and adaptive, too. They engage us in friendly dialog to size us up, quickly finding the buttons they can press to sway us. AI agents will make them look like amateurs, able to draw information out of us with such finesse that it would intimidate a seasoned therapist. And they will use those insights to adjust their conversational tactics in real time, working to persuade us more effectively than any used-car salesman.
These will be asymmetric encounters in which the artificial agent has the upper hand (virtually speaking). After all, when you engage a human who is trying to influence you, you can usually sense their motives and honesty. With AI agents, it will not be a fair fight. They will be able to size you up with superhuman skill, but you won’t be able to size them up at all. That’s because they will look, sound and act so human that we will unconsciously trust them when they smile with empathy and understanding, forgetting that their facial affect is just a simulated façade.
In addition, their voice, vocabulary, speaking style, age, gender, race and facial features are likely to be customized for each of us personally to maximize our receptiveness. And, unlike human salespeople who need to size up each customer from scratch, these virtual entities could have access to stored data about our backgrounds and interests. They could then use this personal data to quickly earn our trust, asking about our kids, our jobs or maybe our beloved New York Yankees, easing us into subconsciously letting down our guard.
When AI achieves cognitive supremacy
To educate policymakers on the risk of AI-powered manipulation, I helped make an award-winning short film entitled Privacy Lost, produced by the Responsible Metaverse Alliance, Minderoo and the XR Guild. The three-minute narrative depicts a young family eating in a restaurant while wearing augmented reality (AR) glasses. Instead of human servers, avatars take each diner’s order, using the power of AI to upsell them in personalized ways. The film was considered sci-fi when it was released in 2023, yet only two years later, big tech is engaged in an all-out arms race to make AI-powered eyewear that could easily be used in these ways.
In addition, we need to consider the psychological impact that will occur when we humans start to believe that the AI agents advising us are smarter than us on nearly every front. When AI achieves a perceived state of “cognitive supremacy” with respect to the average person, it will likely cause us to blindly accept its guidance rather than use our own critical thinking. This deference to a perceived superior intelligence (whether truly superior or not) will make agent manipulation that much easier to deploy.
I am not a fan of overly aggressive regulation, but we need smart, narrow restrictions on AI to avoid superhuman manipulation by conversational agents. Without protections, these agents will convince us to buy things we don’t need, believe things that are untrue and accept things that are not in our best interest. It’s easy to tell yourself you won’t be susceptible, but with these agents optimizing every word they say to us, it is likely we will all be outmatched.
One solution is to ban AI agents from establishing feedback loops in which they optimize their persuasiveness by analyzing our reactions and repeatedly adjusting their tactics. In addition, AI agents should be required to inform you of their objectives. If their goal is to convince you to buy a car, vote for a politician or pressure your family doctor for a new medication, those objectives should be stated up front. And finally, AI agents should not have access to personal data about your background, interests or personality if such data can be used to sway you.
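As a sketch of how these rules might look in code, consider a thin wrapper that states the agent’s objective before any dialog begins and never routes user reactions back into a tactic optimizer. Everything here, the class and function names included, is hypothetical.

```python
# Hypothetical guardrail sketch: discloses the objective up front and
# deliberately omits any reaction-driven feedback loop. Illustrative only.

class DisclosedAgent:
    def __init__(self, objective: str):
        self.objective = objective
        self.disclosed = False

    def start_conversation(self) -> str:
        # Second rule above: the objective is stated before persuasion begins.
        self.disclosed = True
        return f"Disclosure: my objective is to {self.objective}."

    def respond(self, user_message: str) -> str:
        if not self.disclosed:
            raise RuntimeError("Objective must be disclosed first.")
        # First rule: the user's reaction is never logged or fed to an
        # optimizer, so tactics cannot be tuned against this individual.
        # Third rule: no personal profile is consulted when replying.
        return generate_reply(user_message)

def generate_reply(user_message: str) -> str:
    # Stand-in for a real language-model call.
    return "..."

agent = DisclosedAgent("convince you to test-drive the new sedan")
print(agent.start_conversation())
```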
In today’s world, targeted influence is an overwhelming problem, and it is mostly deployed as buckshot fired in your general direction. Interactive AI agents will turn targeted influence into heat-seeking missiles that find the best path into each of us. If we don’t protect against this risk, I fear we could all lose the game of humans.
Louis Rosenberg is a computer scientist and author who pioneered mixed reality and founded Unanimous AI.
Source: VentureBeat