AI Weekly: Facebook’s news summarization tool reeks of bad intentions

This week, BuzzFeed News, citing sources familiar with the matter, wrote that Facebook is developing an AI tool that summarizes news articles so that users don’t have to read them. The tool — codenamed “TLDR” in reference to the acronym “too long, didn’t read” — reportedly reduces articles to bullet points and provides narration, as well as a virtual assistant to answer questions.

The media industry, which is in the midst of a historic slump in revenue, didn’t react kindly to the report. Facebook’s relationship with publishers has been strained even in the best of times, with publishers accusing the company of profiting off their work. Facebook product launches such as Instant Articles, which stripped outlets of recirculation and monetization opportunities, as well as algorithmic changes that deprioritized publisher content in favor of “meaningful interactions,” have cemented this divide. Just this week, the New York Times reported that Facebook planned to roll back a change designed to promote reliable news coverage in the aftermath of the U.S. election. That development follows reports that the company adjusted its news feed algorithm in 2017 to reduce the visibility of “liberal” outlets like Mother Jones, intending to head off accusations of bias against conservative media.

Facebook has gone so far as to say it would block the sharing of local and international news stories on its products if legislation requiring tech platforms to pay publishers for content becomes law, but tools like TLDR would eliminate the need for it to do so. Condensing news articles into bite-sized summaries that would presumably live on Facebook would likely further reduce click-through rates to publishers. Already, an estimated 43% of U.S. adults get their news from Facebook; disincentivizing visits to original sources with a tool like TLDR would only push that share higher.

Facebook might be inclined to say that summarization would lead to more informed discussion on its platform, given that around 59% of links shared on social media have never been clicked. But an enormous body of work shows that natural language processing algorithms such as the ones likely underpinning TLDR are susceptible to bias. Often, a portion of these algorithms’ training data is sourced from online communities with pervasive gender, race, and religious prejudices. AI research firm OpenAI notes that this can lead to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.” Other studies, like one published in April by researchers at Intel, MIT, and the Canadian AI initiative CIFAR, have found high levels of stereotypical bias in some of the most popular models, including Google’s BERT and XLNet, OpenAI’s GPT-2, and Facebook’s RoBERTa.
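
The kinds of associations those studies measure can be probed directly. As a rough illustration (not the methodology of any of the cited papers), this sketch uses the Hugging Face transformers library to compare a masked language model’s top completions for templates that differ only in the gendered pronoun; systematically skewed completions are one crude signal of learned bias.

```python
# Illustrative probe of a masked language model for biased associations.
# This is a rough sketch, not the method used in the studies cited above.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The two templates differ only in the pronoun; systematic differences in
# the top completions are one crude signal of learned gender associations.
for template in ["she was very [MASK].", "he was very [MASK]."]:
    predictions = fill(template, top_k=5)
    print(template, "->", [p["token_str"] for p in predictions])
```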

To be fair, some firms, OpenAI among them, have made real progress on AI summarization. In 2017, Salesforce researchers coauthored a paper describing a summarization algorithm that learns from examples of high-quality summaries and employs a mechanism called “attention” to keep it from producing overly repetitive stretches of text. More recently, OpenAI trained a reward model on a Reddit dataset to predict which summaries humans will prefer, then fine-tuned a language model to produce summaries that score highly according to that reward model; the company says the approach “significantly” improved the quality of news article summaries as evaluated by a team of human reviewers.
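
As a much-simplified illustration of the second idea, scoring candidate summaries with a learned model of human preference, the sketch below generates several candidates with an off-the-shelf summarizer and keeps the one a stubbed-out reward model scores highest. The model choice, the toy reward function, and the best-of-n selection are all assumptions made for illustration; OpenAI’s actual system fine-tunes the language model against its reward model rather than merely reranking outputs.

```python
# Simplified sketch of "generate summaries, then score them with a reward model."
# The summarization model and the toy reward function are illustrative assumptions.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def reward_model(summary: str) -> float:
    """Stand-in for a model trained to predict human preferences (assumed)."""
    # A real reward model would be trained on human comparisons of summaries;
    # this toy proxy simply prefers summaries of roughly 40 words.
    return -abs(len(summary.split()) - 40)

def best_of_n(article: str, n: int = 4) -> str:
    # Sample several candidate summaries, then keep the highest-scoring one.
    candidates = [
        summarizer(article, do_sample=True, top_p=0.9, max_length=80)[0]["summary_text"]
        for _ in range(n)
    ]
    return max(candidates, key=reward_model)
```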

But summarizing text perfectly would require genuine intelligence, including commonsense knowledge and a mastery of language. And while algorithms like OpenAI’s GPT-3 push the limits in this regard, they’re a long way from attaining human-level reasoning. Researchers affiliated with Facebook and Tel Aviv University recently observed that a pretrained language model — GPT-2, the precursor to GPT-3 — was unable to follow basic natural language instructions. In another example, scientists at Facebook and University College London found that 60% to 70% of answers given by models tested on industry-standard, open source benchmarks were embedded somewhere in the training sets, indicating that the models had merely memorized the answers.
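
The overlap finding is easy to reason about: if a benchmark answer appears verbatim in the training corpus, a model can score well by memorization alone. The sketch below shows one naive way to estimate that kind of overlap; the normalization rule and the toy data are assumptions, and the cited study’s actual matching procedure was more involved.

```python
# Naive sketch of a train/test overlap check: what fraction of benchmark
# answers appear verbatim (after light normalization) in the training text?
# The matching rule and the toy data are illustrative assumptions.
import re

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so formatting differences don't hide matches.
    return re.sub(r"\s+", " ", text.lower()).strip()

def contamination_rate(test_answers, training_docs) -> float:
    corpus = " ".join(normalize(doc) for doc in training_docs)
    leaked = sum(1 for answer in test_answers if normalize(answer) in corpus)
    return leaked / max(len(test_answers), 1)

# Toy example: two of the three answers appear in the training text.
docs = ["Paris is the capital of France.", "Water boils at 100 degrees Celsius."]
answers = ["Paris", "100 degrees Celsius", "the Eiffel Tower"]
print(contamination_rate(answers, docs))  # ~0.67
```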

That’s setting aside Facebook’s poor track record of applying AI to objectionable content, which doesn’t instill much confidence in tools like TLDR. According to BuzzFeed, one departing employee estimated earlier this month that, even with AI and third-party moderators, the company was “deleting less than 5% of all of the hate speech posted to Facebook.” (Facebook later pushed back on that claim.)

It remains to be seen what form TLDR will take, how it will be deployed, and which publishers might ultimately be impacted. But the evidence points to a problematic and potentially ill-considered rollout. BuzzFeed recently quoted Facebook CTO Mike Schroepfer as saying that the company “has to build [tools such as TLDR] responsibly” to “earn trust” and “the right to continue to grow.” So far, in the AI domain and other areas like advertising and acquisitions, Facebook is very clearly failing to earn that trust.

For AI coverage, send news tips to Khari Johnson, Kyle Wiggers, and Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers

AI Staff Writer


Author: Kyle Wiggers
Source: VentureBeat
