Facebook taught its AI to speak math

I speak two languages, English and Bad English. My understanding of math is significantly worse. In fact, I had to redo Calculus 2A four times in college in order to graduate, mostly because I could never properly calculate how fast a ladder slides down a wall as its base slips away from it. You know, the sort of theoretical quandaries that really matter in our day-to-day lives.

My numerical idiocy aside, Facebook has trained an AI to solve the toughest of math problems. Real superstring stuff. In effect, FB has taught its neural network to view complex mathematical equations “as a kind of language and then [treat] solutions as a translation problem for sequence-to-sequence neural networks.”

This is actually quite a feat, since most neural networks operate on an approximation system: they can figure out if an image is of a dog or a marmoset or a steam radiator with a reasonable amount of certainty, but precisely calculating figures in a symbolic problem like b² – 4ac = 7 is a whole different kettle of fish. Facebook managed this by treating the equation not like a math problem but like a language problem. Specifically, the research team approached the issue using neural machine translation (NMT). In short, they taught an AI to speak math. The result was a system capable of solving equations in a fraction of the time that algebra-based systems like Maple, Mathematica, and MATLAB would take.
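
To make that framing a little more concrete, here's a rough sketch of what a single "translation" pair might look like once a problem and its solution have been flattened into sequences of tokens. This is not Facebook's code; the token names and the INT marker are my own stand-ins, and the real system trains a sequence-to-sequence model on millions of pairs like this, the same way an NMT model is trained on matched English and French sentences.

```python
# Minimal sketch of the "equations as translation" framing (my own stand-in,
# not Facebook's code). The source "sentence" is the tokenized problem; the
# target "sentence" is the tokenized solution.

# Problem: integrate x * cos(x) with respect to x
source_tokens = ["INT", "mul", "x", "cos", "x"]           # prefix-notation "sentence"

# Solution: x*sin(x) + cos(x)
target_tokens = ["add", "mul", "x", "sin", "x", "cos", "x"]

# A seq2seq model would be trained on millions of pairs shaped like this one.
training_pair = {"src": source_tokens, "tgt": target_tokens}
print(training_pair)
```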

“By training a model to detect patterns in symbolic equations, we believed that a neural network could piece together the clues that led to their solutions, roughly similar to a human’s intuition-based approach to complex problems,” the research team wrote in a blog post released today. “So we began exploring symbolic reasoning as an NMT problem, in which a model could predict possible solutions based on examples of problems and their matching solutions.”

Essentially, the research team taught the AI to unpack mathematical equations much the same way we parse complex sentences. Instead of breaking a phrase into verbs, nouns and adjectives, the system breaks an equation down into its individual variables and operators.
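
As a rough illustration (mine, not the team's), the snippet below walks a SymPy expression tree and reads off its pieces in order, which is more or less the "sentence" a model like this would consume. The function name and the token scheme are simplifications; the paper's actual tokenization handles constants, rationals and plenty of other cases this sketch ignores.

```python
# Hedged sketch: serializing an expression tree into a token sequence,
# the way a sentence is broken into words.
import sympy as sp

def to_prefix_tokens(expr):
    """Walk the expression tree depth-first, emitting each operator
    before its arguments (prefix / Polish notation)."""
    if expr.is_Symbol or expr.is_Integer:
        return [str(expr)]
    op = type(expr).__name__.lower()       # e.g. 'add', 'mul', 'pow', 'cos'
    tokens = [op]
    for arg in expr.args:
        tokens.extend(to_prefix_tokens(arg))
    return tokens

x = sp.symbols("x")
expr = x**2 * sp.cos(x)
print(to_prefix_tokens(expr))   # e.g. ['mul', 'pow', 'x', '2', 'cos', 'x']
```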

The researchers focused primarily on symbolic integration and differential equations, but because those two flavors of math don't always have a closed-form solution for a given problem, the team had to get tricky when generating training data for the machine learning system.

“For our symbolic integration equations, for example, we flipped the translation approach around: Instead of generating problems and finding their solutions, we generated solutions and found their problem (their derivative), which is a much easier task,” the team wrote, and which I vaguely understand. “This approach of generating problems from their solutions — what engineers sometimes refer to as trapdoor problems — made it feasible to create millions of integration examples.”
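
Here's a bare-bones sketch of that backwards trick, again my own approximation rather than Facebook's actual pipeline: build a random expression, differentiate it with SymPy (the easy direction), and you've got an integration problem whose answer you already know. The way I sample expressions here is deliberately simplistic; the paper generates far more varied expression trees.

```python
# Hedged sketch of generating integration training pairs "backwards":
# sample a solution F, differentiate it, and use (F', F) as (problem, solution).
import random
import sympy as sp

x = sp.symbols("x")
BUILDING_BLOCKS = [x, x**2, sp.sin(x), sp.cos(x), sp.exp(x), sp.log(x)]

def random_solution(n_terms=2):
    """Build a small random expression to act as the known solution F."""
    return sum(random.choice(BUILDING_BLOCKS) * random.choice(BUILDING_BLOCKS)
               for _ in range(n_terms))

def make_integration_example():
    solution = random_solution()
    problem = sp.diff(solution, x)    # differentiation is the easy direction
    return problem, solution          # integrating 'problem' recovers 'solution' (+ C)

for _ in range(3):
    prob, sol = make_integration_example()
    print(f"integrate: {prob}   ->   {sol}")
```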

Still, it apparently worked. The team achieved a success rate of 99.7 percent on integration problems and 94 percent and 81.2 percent, respectively, for first- and second-order differential equations, compared to Mathematica's 84 percent on the same integration problems and 77.2 percent and 61.6 percent on the differential equations. FB's program also arrived at its answers in just over half a second, rather than the several minutes existing systems can take to do the same.


Author: Andrew Tarantola
Source: Engadget
