Salesforce’s AI navigates Wikipedia to find answers to complex questions

Wikipedia offers a wealth of knowledge on countless topics to those who know where to look, but therein lies the rub: navigating its database of more than 6 million articles requires some web-crawling finesse. In an effort to streamline the search, researchers at Salesforce developed what they call a graph-based trainable retriever-reader framework, which sequentially retrieves paragraphs from English Wikipedia articles to answer complex open-domain questions. They say it achieves state-of-the-art performance across a range of benchmarks.

As the researchers explain, open-domain question answering widely uses retrieve-read approaches: an efficient term-based retriever selects a few paragraphs for each query, and a reader then extracts an answer from the top-ranked results. These approaches are generally effective for simple, single-hop questions that a single paragraph can answer, but they often fail when confronted with complicated, multi-hop questions that require combining evidence from several paragraphs.
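To make the contrast concrete, here is a minimal sketch of that classic retrieve-read setup, assuming a TF-IDF retriever and a toy word-overlap reader standing in for a learned extractive QA model; the function names below are illustrative and do not come from Salesforce's code.

```python
# Minimal retrieve-read sketch: a term-based retriever (TF-IDF) picks the top-k
# paragraphs for a query, then a reader extracts an answer from them.
# naive_read is a toy stand-in for a learned extractive QA model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def retrieve(question: str, paragraphs: list[str], k: int = 5) -> list[str]:
    """Rank paragraphs by TF-IDF similarity to the question and keep the top k."""
    vectorizer = TfidfVectorizer()
    para_vecs = vectorizer.fit_transform(paragraphs)
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, para_vecs)[0]
    ranked = sorted(range(len(paragraphs)), key=lambda i: scores[i], reverse=True)
    return [paragraphs[i] for i in ranked[:k]]


def naive_read(question: str, paragraph: str) -> tuple[str, float]:
    """Stand-in reader: score each sentence by word overlap with the question."""
    q_words = set(question.lower().split())
    best, best_score = "", 0.0
    for sentence in paragraph.split(". "):
        overlap = len(q_words & set(sentence.lower().split()))
        if overlap > best_score:
            best, best_score = sentence, float(overlap)
    return best, best_score


def answer(question: str, paragraphs: list[str], k: int = 5) -> str:
    """Retrieve top-k paragraphs, read each, and return the best-scoring span."""
    spans = [naive_read(question, p) for p in retrieve(question, paragraphs, k)]
    return max(spans, key=lambda span: span[1])[0]
```

Because the retriever scores each paragraph against the question independently, a pipeline like this cannot reach a paragraph whose relevance only becomes apparent after reading another one, which is exactly the multi-hop failure mode the researchers target.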

The Salesforce researchers instead propose a graph-based recurrent retriever-reader framework that learns to retrieve reasoning paths over the Wikipedia graph. The intuition is that retrieval can be formulated as finding paths over a large-scale graph in which each paragraph is a node and each internal hyperlink is an edge.
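As an illustration of that formulation, the sketch below builds such a graph from a hypothetical paragraph collection; the input schema (article titles plus outgoing link lists) is an assumption made for the example, not the paper's data format.

```python
# Illustrative Wikipedia graph construction: each paragraph is a node, and an
# internal hyperlink to another article adds edges to that article's paragraphs.
from collections import defaultdict


def build_wikipedia_graph(paragraphs: dict[str, dict]) -> dict[str, set[str]]:
    """paragraphs maps a paragraph id to {"article": title, "links": [linked titles]}.

    Returns an adjacency map: paragraph id -> ids of paragraphs in linked articles.
    """
    by_article = defaultdict(list)
    for pid, para in paragraphs.items():
        by_article[para["article"]].append(pid)

    graph: dict[str, set[str]] = defaultdict(set)
    for pid, para in paragraphs.items():
        for target in para["links"]:
            graph[pid].update(by_article.get(target, []))
    return dict(graph)
```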

Given a question, the graph-based recurrent retriever walks this structure, retrieving each paragraph conditioned on the paragraphs it has already retrieved. It estimates plausible reasoning paths (i.e., sequences of paragraphs), each starting from a seed paragraph and terminating with a special end-of-evidence symbol, and it passes the top-scoring paths to the reader, which verifies which path is most plausible and extracts the answer from it.
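One rough way to picture that procedure is a beam search over the graph, sketched below; the learned recurrent scorer is replaced by a caller-supplied `score_step` callable, and details such as the `[EOE]` token string, beam size, and hop limit are illustrative rather than taken from the paper's implementation.

```python
# Beam-search sketch of the recurrent retrieval idea: reasoning paths grow one
# paragraph at a time, each step scored conditioned on the question and the path
# so far, and a path terminates when the end-of-evidence symbol wins out.
from typing import Callable

EOE = "[EOE]"  # end-of-evidence symbol terminating a reasoning path


def retrieve_reasoning_paths(
    question: str,
    graph: dict[str, set[str]],  # paragraph id -> ids of linked paragraphs
    seeds: list[str],            # seed paragraphs, e.g. from a term-based retriever
    score_step: Callable[[str, list[str], str], float],
    beam_size: int = 8,
    max_hops: int = 3,
) -> list[tuple[list[str], float]]:
    """Return the top reasoning paths (paragraph sequences) with their scores."""
    beams = [([seed], 0.0) for seed in seeds]
    finished: list[tuple[list[str], float]] = []
    for _ in range(max_hops):
        expanded = []
        for path, score in beams:
            for cand in list(graph.get(path[-1], set())) + [EOE]:
                new_score = score + score_step(question, path, cand)
                if cand == EOE:
                    finished.append((path, new_score))   # path is complete
                else:
                    expanded.append((path + [cand], new_score))
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_size]
    finished.extend(beams)  # paths that hit the hop limit without emitting EOE
    # The reader model then verifies which of these paths is most plausible
    # and extracts the final answer from it.
    return sorted(finished, key=lambda b: b[1], reverse=True)[:beam_size]
```

Conditioning `score_step` on the full path is what lets a retriever of this kind pick up paragraphs that look unrelated to the question on their own but follow naturally from evidence already gathered.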

Together, these techniques enable the framework to field both single- and multi-hop open-domain questions more robustly than existing systems, the researchers say. In experiments in which the retriever was trained on a single graphics card with 11GB of memory, they found that it outperformed the previous best model on a benchmark by 14 points.

“The discrete reasoning paths are helpful in interpreting our framework’s reasoning process,” wrote the coauthors of a paper detailing the work. “[T]he retrieved reasoning path gives us interpretable insights into the underlying entity relationships used for multi-hop reasoning. We hope this work facilitates future research focusing on the retrieval component in open-domain QA.”

Natural language processing is an area of acute interest for Salesforce, whose Einstein AI platform produces billions of predictions each day. In June 2018, it published a paper on a natural language processing model that could perform up to 10 tasks at once. And in June 2019, researchers at the tech giant proposed a corpus, Common Sense Explanations (CoS-E), for training and inference with a novel machine learning framework (Commonsense Auto-Generated Explanation, or CAGE), which they said improved performance on a question-answering benchmark.


Author: Kyle Wiggers.
Source: VentureBeat
