Facebook details the AI simulation tool it built to find bugs and vulnerabilities

Facebook today detailed Web-Enabled Simulation (WES), an approach to building large-scale simulations of complex social networks. As previously reported, WES leverages AI techniques to train bots that simulate people’s behaviors on social media, which Facebook says it hopes to use to uncover bugs and vulnerabilities.

In person and online, people act and interact with one another in ways that can be challenging for traditional algorithms to model, according to Facebook. For example, people’s behavior evolves and adapts over time and is distinct from one geography to the next, making it difficult to anticipate the ways a person or community might respond to changes in their environments.

WES ostensibly solves this by automating interactions among thousands or even millions of user-like bots. Drawing on a combination of online and offline simulation to train bots with heuristics, supervised learning, and reinforcement learning techniques, WES provides a spectrum of simulation characteristics that capture engineering concerns such as speed, scale, and realism. While the bots run on Facebook’s production codebase, which spans hundreds of millions of lines of code, they’re isolated from real users and can only interact with one another (excepting “read-only” bots that have “privacy-preserving” access to the real Facebook). Facebook asserts that because the simulation runs on real infrastructure, the bots’ actions remain faithful to the effects people using the platform would witness.
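Facebook hasn’t published WES’s code, but the setup it describes (bots with heuristic or learned policies, acting through the platform while walled off from real users) can be sketched in miniature. The Python below is purely illustrative; every name in it (`Bot`, `IsolatedPlatform`, the action list) is a hypothetical stand-in, not Facebook’s API.

```python
import random

# Illustrative sketch of a WES-style setup: bots choose platform
# actions via a policy and are only ever allowed to act on other
# bots, never on real users.

ACTIONS = ["post", "comment", "send_message", "friend_request", "search"]

class Bot:
    def __init__(self, bot_id, policy=None):
        self.bot_id = bot_id
        # Default policy: uniform random actions. A trained policy
        # (supervised or reinforcement-learned) would instead match the
        # high-level statistical properties of real-user behavior.
        self.policy = policy or (lambda observation: random.choice(ACTIONS))

    def step(self, platform):
        action = self.policy(platform.observe(self.bot_id))
        target = platform.random_bot(exclude=self.bot_id)  # bots only
        platform.execute(self.bot_id, action, target)

class IsolatedPlatform:
    """Stand-in for the real infrastructure: the real system exercises
    production code paths but registers only bot accounts, so bots
    cannot reach real users."""
    def __init__(self, num_bots):
        self.bots = [Bot(i) for i in range(num_bots)]
        self.log = []

    def observe(self, bot_id):
        return {"recent_actions": self.log[-10:]}

    def random_bot(self, exclude):
        return random.choice([b.bot_id for b in self.bots if b.bot_id != exclude])

    def execute(self, actor, action, target):
        self.log.append((actor, action, target))

platform = IsolatedPlatform(num_bots=100)
for _ in range(1000):  # one bot action per simulation tick
    random.choice(platform.bots).step(platform)
```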

WES bots are made to play out different scenarios, such as a hacker trying to access someone’s private photos. Each scenario may have only a few bots acting it out, but the system is designed to run thousands of different scenarios in parallel.
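One way to picture that fan-out is to treat each scenario as an independent job over a small bot population. This sketch reuses the hypothetical `IsolatedPlatform` above and Python’s standard library; it is an assumption about the shape of such a harness, not Facebook’s actual one.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def run_scenario(scenario_id):
    # Each scenario gets its own small, isolated set of bots, e.g. a
    # "hacker" bot probing another bot's private photos.
    platform = IsolatedPlatform(num_bots=5)
    for _ in range(500):
        random.choice(platform.bots).step(platform)
    return scenario_id, len(platform.log)

if __name__ == "__main__":
    # Thousands of independent scenarios, each with just a few bots.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_scenario, range(10_000)))
```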

“We need to train the bots to behave in some sense like real users,” Mark Harman, professor of computer science at University College London and a research scientist at Facebook, explained during a call with reporters. “We don’t have to have them model any particular user; they just have to have the high-level statistical properties that real users exhibit … But the simulation results we get are much closer, much more faithful to the reality of what real users would do.”

Facebook notes that WES remains in the research stages and hasn’t been deployed in production. But in an experiment, scientists at the company used it to create WW, a simulation built atop Facebook’s production codebase. WW can generate bots that seek to buy items disallowed on Facebook’s platform (like guns or drugs); attempt to scam each other; and perform actions like conducting searches, visiting pages, and sending messages. Courtesy of a mechanism design component, WW can also run simulations to test whether bots are able to violate Facebook’s safeguards, helping to identify statistical patterns and product mechanisms that might make it harder to behave in ways that violate the company’s Community Standards.
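Facebook hasn’t detailed WW’s internals, but the safeguard-testing loop can be pictured as replaying a violating scenario many times and measuring how often enforcement catches it, including when bots stumble onto evasions. A toy, entirely hypothetical sketch:

```python
import random

BLOCKED_TERMS = {"gun", "drugs"}  # stand-in for real policy rules

def safeguard(listing):
    """Toy enforcement check: flags listings containing blocked terms."""
    return any(term in listing for term in BLOCKED_TERMS)

def bad_actor_listing():
    """A 'bad' bot posts a disallowed listing, sometimes obfuscated to
    evade the term match (the kind of weakness a simulation can surface)."""
    listing = random.choice(["gun for sale", "cheap drugs here"])
    if random.random() < 0.3:
        listing = " ".join(listing)  # "g u n  f o r ..." evades the check
    return listing

trials = 10_000
caught = sum(safeguard(bad_actor_listing()) for _ in range(trials))
print(f"safeguard catch rate: {caught / trials:.1%}")  # ~70% in this toy
```

Running many such trials is what surfaces the statistical patterns the article describes: here, the roughly 30% of attempts that slip through all share the same obfuscation trick.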

“There’s parallels to the problem of evaluating games designed by AI, where you have to just accept that you can’t model human behaviour, and so to evaluate games you have to focus on the stuff you measure like the likelihood of a draw or making sure a more skilled agent always beats a less skilled one,” Mike Cook, an AI researcher with a fellowship at Queen Mary University of London who wasn’t involved with Facebook’s work, told VentureBeat. “Having bots just roam a copy of the network and press buttons and try things is a great way to find bugs, and something that we’ve been doing (in one way or another) for years and years to test out software big and small.”
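The bot-roaming bug hunt Cook describes amounts to randomized (“monkey”) testing: take random actions and flag anything that raises or breaks an invariant. Continuing the hypothetical sketch above (nothing fails in this toy, but against real code paths the failure log is the point):

```python
import random

def monkey_test(platform, steps=10_000):
    """Press random buttons; record any action that raises an exception.
    A real harness would also capture a reproducible action trace."""
    failures = []
    for _ in range(steps):
        bot = random.choice(platform.bots)
        try:
            bot.step(platform)
        except Exception as exc:
            failures.append((bot.bot_id, repr(exc)))
    return failures

failures = monkey_test(IsolatedPlatform(num_bots=20))
print(f"{len(failures)} failing actions found")
```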

A Facebook analysis of the most impactful production bugs indicated that as many as 25% were social bugs, of which “at least” 10% could be discovered through WES. To spur research in this direction, the company recently launched a request for proposals inviting academic researchers and scientists to contribute new ideas to WES and WW. Facebook says it has received 85 submissions to date.

WES and WW build on Facebook’s Sapienz system, which automatically designs, runs, and reports the results of tens of thousands of test cases every day across the company’s mobile app codebases, as well as its SybilEdge fake account detector. Another of the company’s systems — deep entity classification (DEC) — identifies hundreds of millions of potentially fraudulent users via an AI framework.

But Facebook’s efforts to offload content moderation to AI and machine learning have been at best uneven. In May, Facebook’s automated system threatened to ban the organizers of a group working to hand-sew masks from commenting or posting on the platform, informing them that the group could be deleted altogether. It also marked legitimate news articles about the pandemic as spam.

Facebook attributed those missteps to bugs while acknowledging that AI isn’t the be-all and end-all. At the same time, in its most recent quarterly Community Standards report, it didn’t release — and says it couldn’t estimate — the accuracy of its hate speech detection systems. (Of the 9.6 million posts removed in the first quarter, Facebook said its software detected 88.8% before users reported them.)

There’s evidence that objectionable content regularly slips through Facebook’s filters. In January, Seattle University associate professor Caitlin Carlson published results from an experiment in which she and a colleague collected more than 300 posts that appeared to violate Facebook’s hate speech rules and reported them via the service’s tools. Only about half of the posts were ultimately removed.

“AI is not the answer to every single problem,” Facebook CTO Mike Schroepfer told VentureBeat in a previous interview. “I think humans are going to be in the loop for the indefinite future. I think these problems are fundamentally human problems about life and communication, and so we want humans in control and making the final decisions, especially when the problems are nuanced. But what we can do with AI is, you know, take the common tasks, the billion scale tasks, the drudgery out.”


Author: Kyle Wiggers.
Source: VentureBeat
