
Google open-sources LaserTagger, an AI model that speeds up text generation

Sequence-to-sequence AI models, which were introduced by Google in 2014, map an input sequence (usually text) to an output sequence whose length may differ from the input's. They're used in text-generation tasks including summarization, grammatical error correction, and sentence fusion, and recent architectural breakthroughs have made them more capable than before. But they're imperfect in that they (1) require large amounts of training data to reach acceptable levels of performance and (2) typically generate the output word by word, which makes them inherently slow.

That’s why researchers at Google developed LaserTagger, an open-source text-editing model that predicts a sequence of edit operations to transform a source text into a target text. They assert that LaserTagger tackles text generation in a fashion that’s less error-prone — and that’s easier to train and faster to execute.

The release of LaserTagger follows on the heels of notable contributions from Google to the field of natural language processing and understanding. This week, the tech giant took the wraps off of Meena, a neural network with 2.6 billion parameters that can handle multiturn dialogue. And earlier this month, Google published a paper describing Reformer, a model that can process the entirety of novels.

LaserTagger takes advantage of the fact that for many text-generation tasks, there’s often an overlap between the input and the output. For instance, when detecting and fixing grammatical mistakes or when fusing several sentences, most of the input text can remain unchanged — only a small fraction of words needs to be modified. LaserTagger therefore produces a sequence of edit operations instead of actual words: keep (which copies a word to the output), delete (which removes a word), and keep-AddX or delete-AddX (which add phrase X before the tagged word and optionally delete the tagged word).
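To make the mechanics concrete, here is a minimal sketch of how such edit tags could be applied to rebuild a target sentence from a source sentence. The tag encoding and the apply_tags helper are illustrative assumptions, not Google's actual implementation:

```python
# Illustrative sketch of applying LaserTagger-style edit tags to a source
# sentence. The (op, phrase) encoding and apply_tags are assumptions for
# demonstration, not Google's actual code.

def apply_tags(tokens, tags):
    """Rebuild the target text from per-token edit operations.

    Each tag is (op, phrase): op is "KEEP" or "DELETE", and phrase is an
    optional string drawn from a restricted vocabulary to insert before
    the current token ("" means insert nothing).
    """
    output = []
    for token, (op, phrase) in zip(tokens, tags):
        if phrase:            # keep-AddX / delete-AddX: insert the phrase first
            output.append(phrase)
        if op == "KEEP":      # keep: copy the source token to the output
            output.append(token)
        # DELETE: drop the source token entirely
    return " ".join(output)

# Sentence fusion example: merge two sentences into one.
tokens = ["Turing", "was", "born", "in", "1912", ".",
          "He", "died", "in", "1954", "."]
tags = [("KEEP", "")] * 5 + [
    ("DELETE", ""),           # delete: drop the first period
    ("DELETE", "and"),        # delete-Add("and"): replace "He" with "and"
] + [("KEEP", "")] * 4

print(apply_tags(tokens, tags))
# -> "Turing was born in 1912 and died in 1954 ."
```

Because each tag is tied to a position in the source, most of the output is simply copied, and the model only has to learn where the small number of edits go.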

Added phrases come from a restricted vocabulary that’s been optimized to minimize vocabulary size while maximizing the number of training examples it covers. Because insertions can come only from this vocabulary, the model is prevented from adding arbitrary words, which mitigates the problem of hallucination (i.e., producing outputs that aren’t supported by the input text). And because LaserTagger can predict edit operations in parallel with high accuracy, it achieves an end-to-end speedup compared with models that make predictions sequentially.
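One simple way to picture how such a restricted vocabulary might be derived from parallel data: count tokens that appear in targets but not in their sources, then keep only the most frequent ones. This is a rough, hypothetical sketch — the actual vocabulary optimization in LaserTagger is more involved:

```python
# Rough sketch of building a restricted insertion vocabulary from
# (source, target) pairs. Illustrative only; not LaserTagger's actual
# optimization procedure.
from collections import Counter

def build_phrase_vocab(pairs, max_size=500):
    counts = Counter()
    for source, target in pairs:
        source_tokens = set(source.split())
        # Any target token missing from the source is a candidate phrase
        # the model would have to insert.
        for token in target.split():
            if token not in source_tokens:
                counts[token] += 1
    # Keeping only the most frequent phrases bounds the vocabulary size
    # while still covering most training examples.
    return {phrase for phrase, _ in counts.most_common(max_size)}

pairs = [
    ("Turing was born in 1912 . He died in 1954 .",
     "Turing was born in 1912 and died in 1954 ."),
]
print(build_phrase_vocab(pairs))  # e.g. {"and"}
```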

Evaluated on several text-generation tasks, LaserTagger performed comparably to a strong baseline model trained on a large number of examples, while running up to 100 times faster. Even when trained on only a few hundred or a few thousand examples, it produced “reasonable” results that could be manually edited or curated.

“The advantages of LaserTagger become even more pronounced when applied at large scale, such as improving the formulation of voice answers in some services by reducing the length of the responses and making them less repetitive,” wrote the team. “The high inference speed allows the model to be plugged into an existing technology stack, without adding any noticeable latency on the user side, while the improved data efficiency enables the collection of training data for many languages, thus benefiting users from different language backgrounds.”


Author: Kyle Wiggers.
Source: VentureBeat
