
No, I didn’t test the new Bing AI chatbot last week. Here’s why | The AI Beat



I’m not just another journalist writing a column about how I spent last week trying out Microsoft Bing’s AI chatbot. No, really.

I’m not another reporter telling the world how Sydney, the internal code name of Bing’s AI chat mode, made me feel all the feelings until it completely creeped me out and I realized that maybe I don’t need help searching the web if my new friendly copilot is going to turn on me and threaten me with destruction and a devil emoji. 

No, I did not test out the new Bing. My husband did. He asked the chatbot if God created Microsoft; whether it remembered that it owed him five dollars; and about the drawbacks of Starlink (to which it abruptly replied, “Thanks for this conversation! I’ve reached my limit, will you hit ‘New topic,’ please?”). He had a grand time.

From awed response and epic meltdown to AI chatbot limits

But honestly, I didn’t feel like riding what turned out to be a predictable rise-and-fall generative AI news wave that was, perhaps, even quicker than usual.

One week ago, Microsoft announced that one million people had signed up for the waitlist for the new AI-powered Bing.

By Wednesday, many of those who had been awed by Microsoft’s AI chatbot debut the previous week (which included Satya Nadella’s declaration that “The race starts today” in search) were less impressed by Sydney’s epic meltdowns — including the New York Times’ Kevin Roose, who wrote that he was “deeply unsettled” by a long conversation with the Bing AI chatbot that led to it “declaring its love” for him.

By Friday, Microsoft had reined in Sydney, limiting the Bing chat to five replies to “stop the AI from getting real weird.”

Sigh.

“Who’s a good Bing?”

Instead, I spent part of last week indulging in some deep thoughts (and tweets) about my own response to the Bing AI chats published by others.

For example, in response to a Washington Post article that claimed the Bing bot told its reporter it could “feel and think things,” Melanie Mitchell, professor at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans, tweeted that “this discourse gets dumber and dumber … Journalists: please stop anthropomorphizing these systems!”

That led me to tweet: “I keep thinking about how difficult it is to not anthropomorphize. The tool has a name (Sydney), uses emojis to end each response, and refers to itself in the 1st person. We do the same with Alexa/Siri & I do it w/ birds, dogs & cats too. Is that a human default?”

I added that asking humans to stop anthropomorphizing AI seemed a lot like asking them not to ask Fido, “Who’s a good boy?”

Mitchell referred me to the Wikipedia article on the Eliza effect, named for the 1966 chatbot Eliza, which proved surprisingly good at eliciting emotional responses from users; the term has since come to describe the tendency to anthropomorphize AI.
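If you have never looked under Eliza’s hood, it is striking how little machinery it takes. The sketch below is my own rough Python approximation of the idea, not Weizenbaum’s original code or rules: a handful of regex patterns plus some pronoun swapping is enough to produce replies that feel attentive.

```python
import random
import re

# Illustrative Eliza-style responder (not Weizenbaum's original rules):
# a few regex patterns plus pronoun "reflection" create the illusion of listening.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (r"(.*)\?", ["Why do you ask that?", "What do you think?"]),
    (r"(.*)", ["Please tell me more.", "I see. Go on."]),
]

def reflect(fragment: str) -> str:
    # Swap first- and second-person words so the echo sounds like it is about you.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(text: str) -> str:
    cleaned = text.rstrip(".!").lower()
    for pattern, replies in RULES:
        match = re.match(pattern, cleaned)
        if match:
            reply = random.choice(replies)
            return reply.format(*[reflect(group) for group in match.groups()])
    return "Please tell me more."

if __name__ == "__main__":
    print(respond("I feel like the chatbot understands me"))
    # e.g. "Why do you feel like the chatbot understands you?"
```

No model, no memory, no feelings — and yet reflecting a user’s own words back at them was enough to get people to open up to Eliza, which is exactly the effect the term describes.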

Are humans hardwired for the Eliza effect?

But since the Eliza effect is known, and real, shouldn’t we assume that humans may be hardwired for it, especially if these tools are designed to encourage it?

Look, most of us are not Blake Lemoine, declaring the sentience of our favorite chatbots. I can think critically about these systems and I know what is real and what is not. Yet even I immediately joked around with my husband, saying “Poor Bing! It’s so sad he doesn’t remember you!” I knew it was nuts, but I couldn’t help it. I also knew assigning gender to a bot was silly, but hey, Amazon assigned a gendered voice to Alexa from the get-go.

Maybe, as a reporter, I have to try harder — sure. But I wonder whether the Eliza effect will always be a significant risk with consumer apps, and less of an issue in matter-of-fact large language model (LLM)-powered business solutions. Perhaps a copilot complete with friendly verbiage and smiley emojis isn’t the best use case. I don’t know.

Either way, let’s all remember that Sydney is a stochastic parrot. But unfortunately, it’s really easy to anthropomorphize a parrot.

Keep an eye on AI regulation and governance

I actually covered other news last week. My Tuesday article on what is considered “a major leap” in AI governance, however, didn’t seem to get as much traction as Bing. I can’t imagine why.

But if OpenAI CEO Sam Altman’s tweets from over the weekend are any sign, I get the feeling that it might be worth keeping an eye on AI regulation and governance. Maybe we should pay more attention to that than whether the Bing AI chatbot told a user to leave his wife.

Have a great week, everyone. Next topic, please!



Author: Sharon Goldman
Source: Venturebeat
