Google claims its AI is becoming better at recognizing breaking news and misinformation

Google says it’s using AI and machine learning techniques to more quickly detect breaking news around crises like natural disasters. That’s according to Pandu Nayak, vice president of search at Google, who revealed that the company’s systems now take minutes to recognize breaking news, compared with the roughly 40 minutes they needed a few years ago.
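
Google hasn’t published the mechanics of this detection, but one common way to surface breaking news quickly is to watch for sudden spikes in query volume around a topic. The Python sketch below is purely illustrative of that idea; the function names, window size, and threshold are assumptions, not details of Google’s actual pipeline.

```python
from collections import deque

def make_spike_detector(window: int = 60, z_threshold: float = 4.0):
    """Flag a topic when its per-minute query count spikes far above
    its recent baseline. Hypothetical sketch, not Google's system."""
    history = deque(maxlen=window)  # recent per-minute counts

    def observe(count: int) -> bool:
        spiking = False
        if len(history) >= window // 2:  # wait for a usable baseline
            mean = sum(history) / len(history)
            var = sum((c - mean) ** 2 for c in history) / len(history)
            std = max(var ** 0.5, 1.0)   # floor the std to avoid div-by-zero
            spiking = (count - mean) / std > z_threshold
        history.append(count)
        return spiking

    return observe

# Usage: feed one query count per minute for a single topic.
detect = make_spike_detector()
counts = [120, 130, 125, 118, 122, 127, 119, 124] * 5 + [950]
for minute, queries in enumerate(counts):
    if detect(queries):
        print(f"minute {minute}: possible breaking news (count={queries})")
```

In practice, a system like the one Nayak describes would presumably combine many signals, such as query trends, fresh documents, and authoritative publishers, rather than a single counter.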

Faster breaking news detection is likely to become critical as natural disasters unfold around the world and as the 2020 U.S. election day nears. Wildfires like those raging in California and Oregon can change (and have changed) course on a dime, and timely, accurate election information in the face of disinformation campaigns will be key to protecting the election’s integrity.

“Over the past few years, we’ve improved our systems to … ensure we’re returning the most authoritative information available,” Nayak wrote in a blog post. “As news is developing, the freshest information published to the web isn’t always the most accurate or trustworthy, and people’s need for information can accelerate faster than facts can materialize.”

In a related development, Google says it recently launched an update using BERT-based language understanding models to improve the matching between news stories and available fact checks. (In April 2017, Google began including publisher fact checks of public claims alongside search results.) According to Nayak, the systems now better understand whether a fact-check claim is related to the topic of a story and surface those checks more prominently in Google News’ Full Coverage feature.
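
Google’s internal models aren’t public, but the matching task itself, deciding whether a fact check addresses the same claim as a story, can be sketched with open-source sentence embeddings. The example below uses the sentence-transformers library as a stand-in for Google’s BERT-based models; the model name, sample texts, and threshold are assumptions for illustration only.

```python
from sentence_transformers import SentenceTransformer, util

# Open-source stand-in for a BERT-based matching model (illustrative only).
model = SentenceTransformer("all-MiniLM-L6-v2")

story = "Officials say the wildfire was sparked by downed power lines."
fact_checks = [
    "Claim: The California wildfire was started by arsonists. Rating: False.",
    "Claim: Mail-in ballots cause widespread voter fraud. Rating: False.",
]

story_emb = model.encode(story, convert_to_tensor=True)
check_embs = model.encode(fact_checks, convert_to_tensor=True)

# Cosine similarity between the story and each fact check.
scores = util.cos_sim(story_emb, check_embs)[0]
for check, score in zip(fact_checks, scores):
    # 0.5 is an arbitrary demo threshold, not a production value.
    related = "related" if float(score) > 0.5 else "unrelated"
    print(f"{float(score):.2f} ({related}): {check}")
```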

Nayak says these efforts dovetail with Google’s work to improve the quality of search results for topics susceptible to hateful, offensive, and misleading content. There’s been progress on that front too, he claims, in the sense that Google’s systems can more reliably spot topic areas at risk for misinformation.

For instance, in the search results panels that display snippets from Wikipedia, one of the sources fueling Google’s Knowledge Graph, Nayak says Google’s machine learning tools are now better at preventing potentially inaccurate information from appearing. When false information from vandalized Wikipedia pages does slip through, he claims the systems can detect those cases with 99% accuracy.
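
The post doesn’t say how the vandalism detector works. A conventional baseline for this kind of problem is a supervised text classifier over revision content, along the lines of the hypothetical scikit-learn sketch below; the training examples, features, and model choice are all assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: revision texts labeled vandalism (1) or not (0).
revisions = [
    "Added sourced population figures from the 2020 census.",
    "CLICK HERE FOR FREE STUFF!!! best site ever lol",
    "Fixed a broken citation and updated the infobox.",
    "this page is garbage and the subject is a total fraud",
]
labels = [0, 1, 0, 1]

# Character n-grams help catch shouting, repetition, and junk tokens.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
clf.fit(revisions, labels)

# Score new revisions for likely vandalism.
print(clf.predict(["Corrected the birth date per the cited biography.",
                   "LOL THIS GUY SUCKS!!!!!"]))
```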

The improvements have also trickled down to the systems that govern Google’s autocomplete suggestions, which automatically withhold predictions if a search is unlikely to lead to reliable content. Those systems previously protected against “hateful” and “inappropriate” predictions; they have now expanded to cover elections. Google says it will remove predictions that could be interpreted as claims for or against any candidate or political party, as well as statements about voting methods, requirements, the status of voting locations, and the integrity or legitimacy of electoral processes.

“We have long-standing policies to protect against hateful and inappropriate predictions from appearing in Autocomplete,” Nayak wrote. “We design our systems to approximate those policies automatically, and have improved our automated systems to not show predictions if we detect that the query may not lead to reliable content. These systems are not perfect or precise, so we enforce our policies if predictions slip through.”
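
To make that filtering step concrete, here is a hypothetical sketch of how candidate predictions might be screened against policy checks before being shown. The policy functions and keyword lists are invented for illustration; as the quote above notes, Google’s actual systems approximate its policies with automated (presumably learned) classifiers, not hand-written rules.

```python
from typing import Callable

# Hypothetical policy checks; a production system would use trained
# classifiers rather than keyword lists. Purely illustrative.
def looks_like_election_claim(text: str) -> bool:
    terms = ("rigged", "vote for", "vote against", "polling place closed")
    return any(t in text.lower() for t in terms)

def looks_hateful(text: str) -> bool:
    return False  # stand-in for a real toxicity classifier

POLICIES: list[Callable[[str], bool]] = [looks_like_election_claim, looks_hateful]

def filter_predictions(candidates: list[str]) -> list[str]:
    """Keep only autocomplete candidates that pass every policy check."""
    return [c for c in candidates if not any(p(c) for p in POLICIES)]

print(filter_predictions([
    "election results by state",
    "election was rigged",  # dropped by the election-claims policy
]))
```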


Author: Kyle Wiggers
Source: VentureBeat
