
In an AI-driven world, effective game moderation is more urgent than ever

Even with increasingly effective moderation tools, the sheer scope of player content and communication is expanding every day, and online discourse has never been more heated. In this GamesBeat Next panel, Hank Howie, game industry evangelist at Modulate, was joined by Alexis Miller, director of product management at Schell Games, and Tomer Poran, VP of solution strategy at ActiveFence, to talk about best practices for moderating game communities of all sizes and demographics, from codes of conduct to strategy, technology and more.

It’s an especially critical conversation as privacy, safety and trust regulations take on a larger role, Poran said, and as AI grows more powerful as a tool not only for detecting but also for creating harmful or toxic content. That makes AI crucial to the evolution of content moderation strategies in gaming spaces, which have lagged behind other online arenas.

“While child safety and user safety have been incredibly important pillars in gaming, even before social media, [gaming] is a phase behind social media in its approach to content moderation,” Poran said. “One thing we’re seeing in gaming, that we saw maybe several years ago in social media, is this move toward proactivity. We’re seeing more sophisticated, proactive content moderation tools like ActiveFence and Modulate and many other vendors. We’re seeing more investment from companies in these technologies.”

Up until just a few years ago, it was nearly impossible to moderate voice content, Howie added — and even when the technology was developed, it was too expensive to implement. But once Modulate made it affordable, developers suddenly had access to everything that was said in their games.

“Every single company we’ve ever rolled out with, what they say is, I knew it was bad, but I didn’t think it was this bad,” Howie said. “The stuff that gets said online, the harm that can get done. And now the technology is there. We can stop it.”

And as the technology becomes more sophisticated, Miller said, developers will be able to fine-tune their moderation strategies, something the industry has been lacking.

“It’s an area where there’s opportunity in the industry, recognizing that there are such different audiences between games,” Miller said. “Our audience is very young. That has a lot of different implications for what should be flagged compared to, say, a casino gambling game.”

Safety by design

Safety by design makes these tools and strategies a priority right out of the gate, determining what product features will be required, and what guardrails should be set in place, from monitoring to enforcement guidelines.

“It’s asking not only what can go wrong, it’s asking, what would someone who’s looking to do harm make of this feature, of this product?” Poran said. “It’s the practice of asking those questions and putting in the right mechanisms. It does look very different between different products and different games. It’s asking, what is unique to our game? What can and will go wrong in this unique environment?”

One of the solutions ActiveFence offers as part of safety by design is what it calls safety red teaming, in which the company uses its knowledge of the bad actors it monitors around the world to mimic their behavior. It’s used to safety-test a game before launch and uncover any features that could potentially be abused.

Implementing codes of conduct

“Once your community understands that certain behaviors will no longer be tolerated, certain things can no longer be said, you’d be surprised at how quickly they snap into line,” Howie said.

That means enlisting the game’s community in moderation efforts to weed out the trolls who are hurting both the game and its players.

“We’ve seen stats on our side where if you can cut out the toxicity, we’ve been able to reduce churn 15-20 percent with new players and with returning players,” he said. “Once they see they can get into a game and not get yelled at, not get made to feel badly, they stick around. They play more. I would hazard a guess they spend more, too.”

A code of conduct evolves over time — and needs to adapt to unexpected situations, Miller said. The Schell Games team put a great deal of thought into how it would handle moderation challenges such as cheating, toxic behavior and child endangerment, and how it would protect its users, who are quite young. The team tested tools like ToxMod with its beta channel, but things did not progress as planned, she said.

“It was not a good prediction for what was going to happen when it went in the live game. We learned that our beta community was a lot nicer to each other than when it was in the live game,” she said. “We actually had to revise some of our policies around moderation because we did our first draft based on the beta. Then when we went live it was like, whew, this is — you’re right. It’s worse than you think it’s going to be.”

A code of conduct can’t be written in a vacuum, Poran agreed.

“No point in writing a 50-page code of conduct and then going out and seeing that half of it doesn’t apply to the community and it’s not the problem,” he said. “Policy and code of conduct is evolving, continually evolving based on community feedback, on your agents, your moderators that come back and tell you, hey, this is happening a lot, and the code of conduct isn’t really clear on what we need to do here. Then that needs to evolve. But it’s something that has to be done intentionally and constantly.”

“Things like radicalization, recruitment, and then there’s always racism and misogyny — it’s all coming together at the same time,” Howie said. “The technology is there to meet it. The time is excellent to really take a look at this area and say, we just need to clean it up and make it better for our players.”


Author: VB Staff
Source: VentureBeat
Reviewed By: Editorial Team

