
Riot Games and Ubisoft team up on machine learning to detect harmful game chat

Ubisoft and Riot Games have teamed up to share machine learning data so they can more easily detect harmful chat in multiplayer games.

The “Zero Harm in Comms” research project is intended to develop better AI systems that can detect toxic behavior in games, said Yves Jacquier, executive director of Ubisoft La Forge, and Wesley Kerr, director of software engineering at Riot Games, in an interview with GamesBeat.

“The objective of the project is to initiate cross-industry alliances to accelerate research on harm detection,” Jacquier said. “It’s a very complex problem to be solved, both in terms of science trying to find the best algorithm to detect any type of content. But also, from a very practical standpoint, making sure that we’re able to share data between the two companies through a framework that will allow you to do that, while preserving the privacy of players and the confidentiality.”

This is a first: a cross-industry research initiative built on shared machine learning data. Each company has developed its own deep learning neural networks, which automatically scan in-game text chat to recognize when players are being toxic toward each other.
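
Neither company has published the details of its models. As a rough illustration of what "AI that scans text chat" looks like in practice, here is a minimal sketch using the open source Detoxify classifier as a stand-in for the studios' proprietary systems (the threshold is an arbitrary choice):

```python
# Illustrative sketch only: neither Riot nor Ubisoft has published its model.
# Uses the open source Detoxify library (pip install detoxify) as a stand-in.
from detoxify import Detoxify

model = Detoxify("original")  # BERT fine-tuned on the Jigsaw toxicity dataset

messages = [
    "gg everyone, well played",
    "uninstall the game, you are worthless",
]

for msg in messages:
    scores = model.predict(msg)          # e.g. {"toxicity": 0.97, "insult": 0.91, ...}
    flagged = scores["toxicity"] > 0.9   # threshold is an arbitrary choice here
    print(f"{msg!r} -> flagged={flagged}")
```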

The neural networks get better as more data is fed into them, but any one company can only supply so much data from its own games. That is where the alliance comes in: under the research project, both companies will share non-private player comments with each other to improve the quality of their neural networks and reach more sophisticated AI sooner.
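
Mechanically, pooling might look something like the sketch below, assuming each company produces a scrubbed export in a common format (the file names and the use of the Hugging Face datasets library are my assumptions, not the project's published tooling):

```python
# Hypothetical pooling of scrubbed chat exports from two studios.
# File names and format are illustrative, not the actual project's.
from datasets import load_dataset, concatenate_datasets

riot = load_dataset("json", data_files="riot_chat_scrubbed.jsonl")["train"]
ubi  = load_dataset("json", data_files="ubisoft_chat_scrubbed.jsonl")["train"]

# One larger corpus to fine-tune a toxicity classifier on; assumes both
# exports share the same fields (e.g. "text" and "label").
pooled = concatenate_datasets([riot, ubi]).shuffle(seed=42)
print(pooled)
```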

League of Legends Worlds Championship 2022. Anyone being toxic here?

Other companies are working on this problem — like ActiveFence, Spectrum Labs, Roblox, Microsoft’s Two Hat, and GGWP. The Fair Play Alliance also brings together game companies that want to solve the problem of toxicity. But this is the first case where big game companies share ML data with each other.

Some toxic content cannot simply be passed between companies. One common form of toxicity is “doxxing” players, or publishing their personal information, such as where they live. If someone doxxes a player, the offending message itself contains personal data, so sharing its text with another company could break privacy laws, especially in the European Union. Good intentions are no defense, so the companies will have to figure out how to share scrubbed data.
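
A minimal sketch of the kind of pre-sharing scrub this implies; the patterns and function names here are my own illustration, not part of the actual “Zero Harm in Comms” framework:

```python
# Hypothetical pre-sharing scrub: patterns and names are illustrative,
# not part of the actual "Zero Harm in Comms" framework.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "ip":    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def scrub(message: str) -> str:
    """Replace obvious personal identifiers before a record leaves the company."""
    for name, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"<{name.upper()}>", message)
    return message

print(scrub("add me at 555-123-4567 or jane@example.com"))
# -> "add me at <PHONE> or <EMAIL>"
```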

“We’re hoping this partnership allows us to safely share data between our companies to tackle some of these harder problems to detect where we only have a few training examples,” Kerr said. “By sharing data, we’re actually building a bigger pool of training data, and we will be able to really detect this disruptive behavior and ultimately remove it from our games.”

This research initiative aims to create a cross-industry shared database and labeling ecosystem that gathers in-game data, which will better train AI-based preemptive moderation tools to detect and mitigate disruptive behavior.
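
The format of that shared database has not been described. Purely as a hypothetical sketch, a labeled record in such an ecosystem might carry fields like these:

```python
# Purely hypothetical record format; the actual "Zero Harm in Comms"
# schema has not been published.
from dataclasses import dataclass

@dataclass
class LabeledChatRecord:
    text: str          # the chat line, already scrubbed of personal data
    source_game: str   # title the line came from, e.g. "rainbow_six_siege"
    label: str         # moderator label, e.g. "harassment", "threat", "none"
    language: str      # chat language code, e.g. "en"
```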

Both companies are active members of the Fair Play Alliance, and both firmly believe that safe and meaningful online experiences in games can only come through collective action and knowledge sharing. This initiative continues their broader work of building game structures that foster rewarding social experiences and avoid harmful interactions.

“Disruptive player behavior is an issue that we take very seriously but also one that is very difficult to solve. At Ubisoft, we have been working on concrete measures to ensure safe and enjoyable experiences, but we believe that, by coming together as an industry, we will be able to tackle this issue more effectively,” said Jacquier. “Through this technological partnership with Riot Games, we are exploring how to better prevent in-game toxicity as designers of these environments with a direct link to our communities.”

Companies also have to learn to watch out for false reports or false positives on toxicity. If you say, “I’m going to take you out” in the combat game Rainbow Six Siege, that might simply fit into the fantasy of the game. In another context, it can be very threatening, Jacquier said.
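One plausible (and entirely hypothetical) way to encode that context-dependence is to make the flagging threshold depend on the game a message came from:

```python
# Sketch of context-aware flagging; thresholds and the per-game phrase list
# are invented for illustration, not from either company's system.
GAME_PHRASES = {
    # Ordinary combat talk in a shooter like Rainbow Six Siege
    "rainbow_six_siege": {"i'm going to take you out"},
}

def is_threatening(message: str, game: str, model_score: float) -> bool:
    """Downweight the model's threat score when the phrase is normal game talk."""
    if message.lower().strip() in GAME_PHRASES.get(game, set()):
        return model_score > 0.99  # require near-certainty for in-fantasy phrases
    return model_score > 0.9

print(is_threatening("I'm going to take you out", "rainbow_six_siege", 0.95))  # False
print(is_threatening("I'm going to take you out", "chat_app", 0.95))           # True
```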

Ubisoft and Riot Games are exploring how to lay the technological foundations for future industry collaboration, and how to build a framework that guarantees the ethics and privacy of the initiative. Between Riot Games’ highly competitive titles and Ubisoft’s diversified portfolio, the resulting database should cover a wide range of players and in-game behaviors, the better to train both companies’ AI systems.

“Disruptive behavior isn’t a problem that is unique to games – every company that has an online social platform is working to address this challenging space. That is why we’re committed to working with industry partners like Ubisoft who believe in creating safe communities and fostering positive experiences in online spaces,” said Kerr. “This project is just an example of the wider commitment and work that we’re doing across Riot to develop systems that create healthy, safe, and inclusive interactions with our games.”

Still at an early stage, the “Zero Harm in Comms” research project is the first step of an ambitious cross-industry project that aims to benefit the entire player community in the future. As part of the first research exploration, Ubisoft and Riot are committed to sharing the learnings of the initial phase of the experiment with the whole industry next year, no matter the outcome.

Jacquier said a recent survey found that two-thirds of players who witness toxicity do not report it. And more than 50% of players have experienced toxicity, he said. So the companies can’t just rely on what gets reported.

Ubisoft’s work on detecting toxic text goes back years; its first attempt at using AI for the task was about 83% effective. That number has to go up.
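
The article does not define “effective”; one plausible reading is recall against human-labeled chat. A toy measurement, assuming scikit-learn and made-up labels:

```python
# "Effective" is undefined in the article; one plausible reading is recall
# on a human-labeled test set. Labels below are made up for illustration.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 1]  # 1 = toxic (human label)
y_pred = [1, 1, 0, 0, 0, 1]  # model output

print(f"recall={recall_score(y_true, y_pred):.2f}, "
      f"precision={precision_score(y_true, y_pred):.2f}")
# -> recall=0.75, precision=1.00
```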

Kerr pointed out that many other efforts to reduce toxicity are underway, and that this collaboration addresses one relatively narrow but important facet of the problem.

“It’s not the only investment we’re making,” Kerr said. “We acknowledge it’s a very complex problem.”

Author: Dean Takahashi
Source: VentureBeat
