
Facebook’s DEC AI identified hundreds of millions of fake accounts over 2 years

Facebook CEO Mark Zuckerberg often likes to assert that AI has substantially cut down on the amount of abuse on the platform, and he's not wrong: in its most recent Community Standards Enforcement Report, Facebook said it removed more than 3.2 billion fake accounts between April and September, compared with just over 1.5 billion during the same period last year. At least part of that uptick is attributable to a machine learning framework called deep entity classification (DEC), which Facebook detailed for the first time at its @Scale conference in October 2019. In the two years since it was deployed, DEC has driven a 20% reduction in fake accounts on the platform, which concretely amounts to "hundreds of millions" of accounts.

DEC was created to address problems Facebook encountered with its traditional approaches to automated abusive account detection. Historically, a team would identify a set of features, such as an account's age, number of friends, and location, and label each account as "abusive" or "benign," data they would then use to train an account classifier model. Because the features were hand-engineered, the feature space was relatively small, making it easier for attackers to suss out. Eventually, attackers began gaming specific features, for example by waiting until accounts matured before publishing harmful content.
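To make the contrast concrete, here is a minimal sketch of what such a hand-engineered classifier might look like. The feature names, data, and model choice are illustrative stand-ins, not Facebook's actual pipeline.

```python
# Minimal sketch of a traditional hand-engineered abuse classifier.
# Features and data are made up; Facebook's real pipeline is not public.
from sklearn.ensemble import GradientBoostingClassifier

# Hand-picked account features: [account_age_days, friend_count, region_risk]
X_train = [
    [3,    12,  0.9],   # young account, few friends, risky region
    [1500, 340, 0.1],   # mature account, many friends
    [7,    5,   0.8],
    [900,  210, 0.2],
]
y_train = [1, 0, 1, 0]  # 1 = "abusive", 0 = "benign" (human-provided labels)

clf = GradientBoostingClassifier().fit(X_train, y_train)

# With only a handful of features, attackers can probe and game each one,
# e.g., by letting an account age before it starts posting harmful content.
print(clf.predict([[400, 30, 0.7]]))
```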

In contrast, DEC extracts the "deep features" of accounts by aggregating properties and behavioral features of other, related accounts in the social graph. The process is recursive, yielding over 20,000 features per account rather than merely dozens or hundreds. And DEC employs a multi-stage, multi-task learning technique that combines large amounts of low-precision, automatically generated labels with small amounts of high-precision, human-provided labels, cutting down on the annotation work required before training.
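The following is a hedged sketch of that recursive idea on a toy social graph; the graph layout, feature names, and choice of aggregates (mean and max) are hypothetical, but they show how two hops of aggregation multiply a couple of direct features into many derived ones.

```python
# Illustrative sketch of DEC-style "deep feature" extraction: an account's
# own features are augmented with aggregates of its neighbors' features,
# applied recursively. All names and values here are hypothetical.
from statistics import mean

graph = {"A": ["B", "C"], "B": ["A"], "C": ["A", "B"]}  # account links
direct = {  # directly measurable features per account
    "A": {"age_days": 3,   "friend_count": 12},
    "B": {"age_days": 900, "friend_count": 210},
    "C": {"age_days": 7,   "friend_count": 5},
}

def deep_features(node, depth=2):
    """Return the account's direct features plus recursively aggregated
    neighbor features, one new feature per (aggregate, feature) pair."""
    feats = dict(direct[node])
    if depth == 0:
        return feats
    neighbor_feats = [deep_features(n, depth - 1) for n in graph[node]]
    for key in sorted(set().union(*(nf.keys() for nf in neighbor_feats))):
        vals = [nf[key] for nf in neighbor_feats if key in nf]
        feats[f"mean_nbr_{key}"] = mean(vals)
        feats[f"max_nbr_{key}"] = max(vals)
    return feats

# Two direct features become 14 after two hops; at Facebook's scale this
# recursion yields tens of thousands of features per account.
print(len(deep_features("A")))  # -> 14
```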


DEC first considers an account's direct features by entity type, such as age and gender (user), fan count and category (page), member count (group), operating system (device), and country and reputation (IP address). It then fans out to other entities the account interacts with, such as pages, admins, group members, users sharing a device, groups shared to, and registered accounts. After the features are extracted, aggregation is applied both numerically (e.g., the mean number of groups of friends) and categorically (e.g., the percentage of the most common category), and the results of both first-order and second-order fan-out entities are aggregated in turn.
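As a rough illustration of those two aggregation modes, the snippet below computes a numerical aggregate (a mean) and a categorical one (the share of the most common category) over made-up data for pages an account interacts with.

```python
# Illustrative aggregation over fanned-out entities (here, pages an account
# interacts with). The page data is invented for the example.
from collections import Counter
from statistics import mean

pages = [
    {"fan_count": 120,  "category": "Shopping"},
    {"fan_count": 45,   "category": "Shopping"},
    {"fan_count": 9800, "category": "News"},
]

# Numerical aggregation: e.g., the mean fan count across related pages.
mean_fan_count = mean(p["fan_count"] for p in pages)

# Categorical aggregation: e.g., the share held by the most common category.
counts = Counter(p["category"] for p in pages)
top_category_share = counts.most_common(1)[0][1] / len(pages)

print(mean_fan_count, top_category_share)  # ~3321.67 and ~0.67 (2 of 3 pages)
```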

The approach was validated using three different models and a wealth of Facebook production data: a behavioral model that took in only direct features, a DEC model with tens of thousands of features, and a more sophisticated DEC model trained on an even larger corpus. The results showed that while the basic behavioral model couldn't predict abusive accounts with greater than 95% accuracy, both DEC-based models surpassed that mark and identified a greater number of fake accounts.

“Over the past few years that DEC has been in production, we’ve seen a step reduction in the number of fake accounts on the platform,” said Facebook software engineer Sara Khodeir. “Even though attacker volumes increase, DEC catches them at pretty much the same volume.”


DEC is but one of the automated techniques Facebook is actively using to fight fake accounts and abusive behavior on its platform. Another is a language-agnostic AI model trained on 93 languages across 30 language families; it's used in tandem with other classifiers to tackle multiple language problems at once. And on the video side, Facebook says its salient sampler model, which quickly scans uploaded clips and processes only their "important" parts, enables it to recognize more than 10,000 different actions across 65 million videos.
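The salient sampler's internals aren't public, but the pattern described (score fixed-length clips, keep only the top scorers, and run the action classifier on those) might look something like the sketch below, where simple frame-to-frame motion energy stands in for a learned saliency model.

```python
# Hedged sketch of salient sampling for video: score fixed-length clips and
# forward only the top-scoring ones to an action classifier. The saliency
# heuristic here is a stand-in, not Facebook's model.
import numpy as np

def saliency_score(clip: np.ndarray) -> float:
    # Stand-in for a learned saliency model: frame-to-frame motion energy.
    return float(np.abs(np.diff(clip, axis=0)).mean())

def sample_salient_clips(video: np.ndarray, clip_len: int = 16, top_k: int = 4):
    clips = [video[i:i + clip_len]
             for i in range(0, len(video) - clip_len + 1, clip_len)]
    # Keep only the "important" parts; the rest is never fully processed.
    return sorted(clips, key=saliency_score, reverse=True)[:top_k]

video = np.random.rand(256, 64, 64)      # 256 frames of fake 64x64 footage
salient = sample_salient_clips(video)
print(len(salient), salient[0].shape)    # 4 clips, each of shape (16, 64, 64)
```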

Facebook is also broadly moving toward an AI training technique called self-supervised learning, in which unlabeled data is used in conjunction with small amounts of labeled data to improve learning accuracy. In one experiment, its researchers trained a language understanding model on just 80 hours of data that made more precise predictions than a model trained on 12,000 hours of manually labeled data.
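As a minimal sketch of that pattern, the snippet below "pretrains" a representation on abundant unlabeled data and then fits a classifier on a small labeled set; PCA and synthetic data stand in for the self-supervised encoder (which in practice learns via pretext tasks such as masked prediction) and the real corpus.

```python
# Minimal sketch of the self-supervised pattern: learn a representation
# from plentiful unlabeled data, then train on scarce labels. PCA is a
# stand-in encoder; all data here is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
unlabeled = rng.normal(size=(10_000, 50))  # abundant unlabeled examples
labeled_X = rng.normal(size=(80, 50))      # scarce labeled examples
labeled_y = rng.integers(0, 2, size=80)

# "Pretraining": fit the representation using unlabeled data only.
encoder = PCA(n_components=10).fit(unlabeled)

# "Fine-tuning": train a small classifier on the encoded labeled data.
clf = LogisticRegression().fit(encoder.transform(labeled_X), labeled_y)
print(clf.score(encoder.transform(labeled_X), labeled_y))
```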

At the company's F8 developer conference earlier this year, Facebook director of AI Manohar Paluri said that self-supervised models like these are being used to protect the integrity of elections in India, a country where people speak 22 different languages and write in 13 different scripts. "This technique of self-supervision is working across multiple modalities: text, language, computer vision, video, and speech," he said. "It's a several orders of magnitude reduction in work."


Author: Kyle Wiggers
Source: VentureBeat

