
Richard Bartle and Richard Garriott: How humans should treat advanced AI characters



Pretty soon, we might all be the new citizens of the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. And if we’re going to do this right, it would be good to get some expert advice on the ethics around the metaverse.

One of the big issues that will come up — based on a conversation between gaming legends Richard Garriott de Cayeux, creator of the Ultima series, and Richard Bartle, the University of Essex professor who did seminal research on online games — is how to treat characters who have artificial intelligence.

In video games, we have no problem mowing down AI characters because they are so distinctly non-human in how they behave. But as AI technology improves and we populate a metaverse with these characters, we would do well to think about that. Bartle discusses this topic at length in his new book, How to Be a God, which is about philosophy, theology, and computer games.

In a talk at GamesBeat Summit 2022 entitled “The new citizens of the metaverse,” they discussed how AI-driven characters will replace the dumb non-player characters (NPCs) of current games. In the future, you might converse for hours with these advanced AI characters, which seemingly behave like fellow avatars, without realizing they’re not human. But if we cross that threshold, what ethics and code of citizenry should we follow? How should we treat fellow AI characters that are, for all practical purposes, as human as we are? Or should we not create them at all? It was a fascinating discussion, and it’s no longer just science fiction, given the advances we’ve seen in AI and the metaverse.

Richard Garriott went into space in 2008.

Garriott began, “As AIs get better and better and more and more realistic, what are our obligations as to how to treat these more and more sentient (Bartle calls them sapient) beings that inhabit the metaverse with us?”

Before Bartle answered, Garriott said, “For either one of our fans out there, I think it’s important to point out that I happen to be a big fan of yours. As I moved from writing my own graphical, solo-player video games into multiplayer games, it became very obvious that the person whose brain I needed to understand better to make that move was yours. So thank you very much for all of that foundational knowledge that you shared so freely with everyone in game development.”

Bartle returned the compliment: “And one of the things that I always admire (and I’m sorry to the people watching this for actually going fanboy here) is that when something new comes along, you’re not afraid to try it out. If it doesn’t work, you put it behind you. And if it does work, great: then you’ve pioneered something that the rest of the industry can launch from. Sometimes you get a bad rep for that, and sometimes you get the best rep for it. It just depends how it goes. So I suppose that’s the explorer in you.”

Garriott responded, “Yeah, exactly. Which is where I’m coming from today, as you might be able to tell by the background behind me: I happen to be serving as president of the Explorers Club at the moment, so I’m here in their headquarters in New York.”

The way it was

MUDs!

Garriott took Bartle back to his early days in gaming when he wrote his first multiuser dungeon (MUD), a text-based online game. Bartle said he was trying to create a game world that people would prefer to the real world because “from our perspective, the real world sucked.”

“We were just trying to build a world that people would want to go to, where they didn’t have all the sucky bits from the real world and they could essentially be themselves,” Bartle said. “So initially, that’s what we were aiming for: a world that was for players, not for AIs.”

But he did want NPCs in those worlds, to make them feel alive.

“As we got more and more sophisticated, we added more code. I did my PhD in AI, so I knew how to make the characters clever. Unfortunately, my first attempts made them rather too clever for the players,” he said.

That is, the NPCs had to be dumbed down so that human players could compete in the games. Those early NPCs could fight you, track you down, or steal things from you. But they didn’t have the emotions or personalities that Garriott tried to give his characters in Ultima IV through Ultima VI.

“I was trying to make those characters in my earlier games have real lives of their own, meaning they would wake up in the morning, go to work, go to lunch, go to the local pub, and then go back to work. In the evening, they’d come home, and all their family members would come home at the same time,” Garriott said.
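What Garriott describes is, in effect, a schedule-driven NPC. Here is a minimal sketch of the idea in Python; the hours, places, and the blacksmith are hypothetical stand-ins, not details from any Ultima game:

```python
# A toy schedule-driven NPC routine, in the spirit of what Garriott
# describes. Every name and hour here is invented for illustration.
SCHEDULE = [
    (6,  "home",   "wake up"),
    (8,  "shop",   "work"),
    (12, "tavern", "eat lunch"),
    (13, "shop",   "work"),
    (18, "home",   "spend the evening with family"),
]

def activity_at(hour):
    """Return the (place, activity) in effect at a given hour of day."""
    place, activity = SCHEDULE[-1][1:]   # hours before 6:00 wrap to the evening entry
    for start, p, a in SCHEDULE:
        if hour >= start:
            place, activity = p, a
    return place, activity

for hour in (7, 12, 20, 2):
    place, activity = activity_at(hour)
    print(f"{hour:02d}:00 - the blacksmith is at the {place} to {activity}")
```

Even a lookup table this small is enough to make a town feel inhabited as the game clock advances, which is presumably why the technique stuck.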

With games like Medal of Honor, Garriott began to notice that characters were starting to become more believable. They would emerge from an attack, run for cover, and then come after you.

“That was my first clue that AIs were becoming a bit more than two-dimensional,” Garriott said.

Bartle remembered Darklands, an early-1990s MicroProse game in which you could give characters a history, and they would then behave accordingly. With the later Ultima games, Garriott tried to engineer more sophisticated AI into the main villain, making that villain harder to take out. But he still wasn’t quite able to create a good AI agent.

“They’re working under their own self-interest,” Bartle said.

And they figure out ways to attack you within a certain time budget. But ultimately, we could see a cloud-based system where processing power isn’t the limit, where the AI is handled by the larger system and develops its intelligence over time.
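That “time budget” is what game AI programmers often call anytime planning: the NPC keeps refining its decision until the frame’s budget expires, then commits to the best option found so far. A minimal sketch, assuming a toy evaluate() heuristic and hypothetical candidate moves:

```python
import random
import time

CANDIDATE_MOVES = ["charge", "flank_left", "flank_right", "take_cover"]

def evaluate(world_state, move, depth):
    # Placeholder heuristic; a real game would simulate `depth` steps ahead.
    return random.random() * depth

def plan_attack(world_state, budget_seconds=0.004):
    """Keep refining the plan until the per-frame budget runs out,
    then return the best move found so far."""
    deadline = time.perf_counter() + budget_seconds
    best_move, best_score = CANDIDATE_MOVES[0], float("-inf")
    depth = 1
    while time.perf_counter() < deadline:
        for move in CANDIDATE_MOVES:          # one search pass per depth level
            score = evaluate(world_state, move, depth)
            if score > best_score:
                best_move, best_score = move, score
            if time.perf_counter() >= deadline:
                break
        depth += 1                            # search deeper on the next pass
    return best_move

print(plan_attack(world_state={}))
```

Move that loop to a server where the budget is effectively unlimited, as the panel imagines, and the same NPC can plan arbitrarily deeply.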

The future

Westworld’s sapient AIs.

Both Garriott and Bartle believe that we are still not close to AIs that can really pass the Turing test, or trick people into thinking that they are human. But chatbots can get close. Now games are getting better at tuning the AI for higher and higher levels of difficulty.

“You can always tell whether a character is real or not by just asking it something about the real world,” Garriott said. “What do you think of the current president?”

You could load up the NPCs on such knowledge, but that still probably wouldn’t make them believable, Garriott said.

You could give an NPC a real life and background, but if a third of that life is spent guarding a wall, walking back and forth all day until a player sneaks up and kills them, then that life and background are wasted. If the AI could learn not to get killed, and not to spend all that time on the wall, then it would be making progress.
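One way to read “learn not to get killed” is as a reinforcement-learning problem. Below is a deliberately toy tabular Q-learning sketch; the states, actions, and rewards are all invented for illustration and come from no shipped game:

```python
import random
from collections import defaultdict

ACTIONS = ["patrol_wall", "take_cover", "investigate_noise"]
q = defaultdict(float)               # (state, action) -> learned value
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(action):
    """Toy world: patrolling in the open risks an ambush; noticing
    the sneaking player first ends the encounter in the guard's favor."""
    if action == "patrol_wall" and random.random() < 0.3:
        return "dead", -10.0
    if action == "investigate_noise":
        return "alert", 1.0
    return "safe", 0.1

for episode in range(2000):
    state = "safe"
    for _ in range(50):              # cap the episode length
        if random.random() < epsilon:
            action = random.choice(ACTIONS)                      # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])   # exploit
        next_state, reward = step(action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if state in ("dead", "alert"):
            break

for a in ACTIONS:
    print(a, round(q[("safe", a)], 2))
```

In this toy setup the guard quickly learns to value patrolling the wall less than its alternatives, which is roughly the “stop wasting your life on the wall” behavior Garriott asks for.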

Supposing we can get there, Garriott speculated about what would happen if we reach a virtual world where AIs truly exist. One of his favorite films is The Thirteenth Floor, in which AIs that don’t realize they are AIs move through the world, and players can come in and inhabit those AIs from time to time. If you behave poorly while you’re doing that, then morality issues start to creep in, Garriott said.

“That film showcased the question: What is your moral obligation to characters once they really are, in any absolute sense of the word, sentient? Is it your responsibility not to screw up their lives? So what should we care about?” Garriott said.

Bartle said that some could argue that “merely by bringing these beings into existence, we’re going to cause them to suffer. And because suffering is bad, we shouldn’t bring them into existence. Now, the counterargument is, well, if we didn’t bring them into existence, they wouldn’t have a life at all, so we’re entitled to be cruel to them. But that argument doesn’t work on your children, so should it work on your AIs? There are also arguments that if you want your AIs to be free-thinking, then they need to be able to suffer, or to cause suffering, so they can reflect on their actions. And if they reflect on their actions, they develop their own morality: they can regret what they’ve done and develop their own thoughts accordingly. So you could argue that, well, it’s bad to have people suffer, but it’s even worse to have beings that aren’t able to talk, because that’s killing them.”

A Richard Bartle slide.

That was getting pretty deep.

“I am a believer that ethics, in fact, are rationally deducible; it’s one of the things I’ve explored in my games. If you believe something’s wrong for a logical reason, you’ll probably stand by it better than if you were told you must do something by some doctrine, without the logic of why,” Garriott said. “We all agree not to poison the well, because we all have to drink out of it. And I don’t want you to poison the well, because I have to drink out of it. So let’s have a social contract that says no poisoning. What I find appealing in what you’re saying is to let them logically deduce these same ethics, or some sense of morality. But it’s not clear to me whether what they experience in the AI reality, or in however the AI reality is connected to physical reality, will give them the same feedback mechanisms: the empathy you feel for people you accidentally hurt, or the empathy you feel when somebody dies, both for the survivor and for the dead person. These are actions we can build empathy around, and I’m not sure AIs will be on the same journey.”

Bartle asked, “If we’ve created a world full of all these artificially intelligent sapient beings who don’t even realize that they’re AIs, can we switch it off?”

“Oh, can you commit mass murder and turn the whole thing off? I think that’s a great question, and I don’t have a great answer for it. But I believe it’s the right question,” Garriott said. “Because I think we’re going to have to be there. Whether it’s in a virtual world or an AI robot in our own world that has become truly sentient, in both cases you’re killing a truly sentient being. I think that is exactly the thing we’re going to have to wrestle with in the future. I’m just not sure how far away it is.”



Author: Dean Takahashi
Source: VentureBeat

