
DeepMind unveils first AI to discover faster matrix multiplication algorithms



Can artificial intelligence (AI) create its own algorithms to speed up matrix multiplication, one of machine learning’s most fundamental tasks? Today, in a paper published in Nature, DeepMind unveiled AlphaTensor, the “first artificial intelligence system for discovering novel, efficient and provably correct algorithms.” The Google-owned lab said the research “sheds light” on a 50-year-old open question in mathematics about finding the fastest way to multiply two matrices.

Ever since the Strassen algorithm was published in 1969, computer science has been on a quest to surpass its speed at multiplying two matrices. While matrix multiplication is one of algebra’s simplest operations, taught in high school math, it is also one of the most fundamental computational tasks and, as it turns out, one of the core mathematical operations in today’s neural networks. 
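
To make that 50-year-old benchmark concrete, here is a minimal sketch in Python contrasting the textbook 2x2 matrix product, which uses eight scalar multiplications, with Strassen’s 1969 scheme, which gets by with seven; the function names are illustrative, not taken from DeepMind’s work.

```python
# A minimal sketch contrasting the textbook 2x2 matrix product (8 scalar
# multiplications) with Strassen's 1969 scheme (7 multiplications).
# The helper names m1..m7 follow the usual presentation of Strassen's method.

def textbook_2x2(A, B):
    """Standard definition: each of the 4 output entries needs 2 multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return [[a * e + b * g, a * f + b * h],
            [c * e + d * g, c * f + d * h]]  # 8 multiplications in total

def strassen_2x2(A, B):
    """Strassen's identity: 7 multiplications, at the cost of extra additions."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert textbook_2x2(A, B) == strassen_2x2(A, B)  # both give [[19, 22], [43, 50]]
```

Applied recursively to block matrices, that one-multiplication saving is what pushes the asymptotic cost below the cubic textbook bound, which is why every multiplication shaved off matters.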

Matrix multiplication is used for processing smartphone images, understanding speech commands, generating graphics for computer games, compressing data and more. Today, companies use expensive GPU hardware to boost matrix multiplication efficiency, so any extra speed would be game-changing in terms of lowering costs and saving energy.

AlphaTensor, according to a DeepMind blog post, builds upon AlphaZero, an agent that has shown superhuman performance on board games like chess and Go. This new work takes the AlphaZero journey further, moving from playing games to tackling unsolved mathematical problems. 


DeepMind uses AI to improve computer science

This research delves into how AI could be used to improve computer science itself, said Pushmeet Kohli, head of AI for science at DeepMind, at a press briefing.  

“If we’re able to use AI to find new algorithms for fundamental computational tasks, this has enormous potential because we might be able to go beyond the algorithms that are currently used, which could lead to improved efficiency,” he said. 

This is a particularly challenging task, he explained, because the process of discovering new algorithms is so difficult, and automating algorithmic discovery using AI requires a long and difficult reasoning process — from forming intuition about the algorithmic problem to actually writing a novel algorithm and proving that the algorithm is correct on specific instances. 

“This is a difficult set of steps and AI has not been very good at that so far,” he said. 

An ‘intriguing, mind-boggling problem’

DeepMind took on the matrix multiplication challenge because it’s a known problem in computation, he said.

“It’s also a very intriguing, mind-boggling problem because matrix multiplication is something that we learn in high school,” he said. “It’s an extremely basic operation, yet we don’t currently know the best way to actually multiply these two sets of numbers. So that’s extremely stimulating for us also as researchers to start to understand this better.” 

According to DeepMind, AlphaTensor discovered algorithms that are more efficient than the state of the art for many matrix sizes and outperform human-designed ones.

AlphaTensor begins without any knowledge about the problem, Kohli explained, and then gradually learns what is happening and improves over time. “It first finds this classroom algorithm that we were taught, and then it finds historical algorithms such as Strassen’s, and then at some point, it surpasses them and discovers completely new algorithms that are faster than previously known.”

Kohli said he hopes that this paper inspires others to use AI to guide algorithmic discovery for other fundamental computational tasks. “We think this is a major step in our path towards really using AI for algorithmic discovery,” he said.

DeepMind’s AlphaTensor uses AlphaZero

According to Thomas Hubert, staff research engineer at DeepMind, it is really AlphaZero running behind the scenes of AlphaTensor, treating matrix multiplication as a single-player game. “It is the same algorithm that learned how to play chess that was applied here for matrix multiplication, but that needed to be extended to handle this infinitely large space — but many of the components are the same,” he said.

In fact, according to DeepMind, this game is so challenging that “the number of possible algorithms to consider is much greater than the number of atoms in the universe, even for small cases of matrix multiplication.” Compared to Go, which was an AI challenge for decades, the number of possible moves at each step of this game is 30 orders of magnitude larger.

“The game is about basically zeroing out the tensor, with some allowed moves that are actually representing some algorithmic operations,” he explained. “This gives us two very important results: One is that if you can decompose, or zero out, the tensor perfectly, then you’re guaranteed to have a provably correct algorithm. Second, the number of steps it takes to decompose this tensor actually gives you the complexity of the algorithm. So it’s very, very clean.”
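
As a rough illustration of the tensor view Hubert describes, the sketch below, assuming only standard NumPy, builds the 4x4x4 tensor for 2x2 matrix multiplication and checks that Strassen’s seven rank-one terms reconstruct, that is, zero out, the tensor exactly; the factor matrices are transcribed from Strassen’s published formulas, not from AlphaTensor’s output.

```python
# A minimal sketch, assuming standard NumPy, of the tensor-decomposition view:
# the 2x2 matrix multiplication tensor is a 4x4x4 array of 0s and 1s, and
# Strassen's algorithm is a decomposition of it into 7 rank-one terms.
import numpy as np

# T[a, b, c] = 1 iff (A entry a) * (B entry b) contributes to (C entry c),
# with 2x2 matrices flattened row-major: index 2*i + j for entry (i, j).
T = np.zeros((4, 4, 4), dtype=int)
for i in range(2):
    for j in range(2):
        for k in range(2):
            T[2 * i + j, 2 * j + k, 2 * i + k] = 1

# Strassen's seven rank-one terms: m_r = (U[r] . vec(A)) * (V[r] . vec(B)),
# and each C entry is a signed sum of the m_r given by the columns of W.
U = np.array([[1, 0, 0, 1], [0, 0, 1, 1], [1, 0, 0, 0], [0, 0, 0, 1],
              [1, 1, 0, 0], [-1, 0, 1, 0], [0, 1, 0, -1]])
V = np.array([[1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, -1], [-1, 0, 1, 0],
              [0, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
W = np.array([[1, 0, 0, 1], [0, 0, 1, -1], [0, 1, 0, 1], [1, 0, 1, 0],
              [-1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0]])

# Summing the 7 rank-one tensors zeroes out T exactly, so the decomposition
# is a provably correct algorithm using 7 multiplications.
reconstruction = np.einsum('ra,rb,rc->abc', U, V, W)
assert np.array_equal(reconstruction, T)
print("2x2 matmul tensor decomposed into", U.shape[0], "rank-one terms")
```

Each rank-one term costs one scalar multiplication, so a decomposition with fewer terms is a faster algorithm, which is exactly the objective AlphaTensor’s game rewards.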

DeepMind’s paper also pointed out that AlphaTensor discovers a richer space of matrix multiplication algorithms than previously thought — up to thousands for each size.

According to the blog post, the authors adapted AlphaTensor to specifically find algorithms that are fast on given hardware, such as an Nvidia V100 GPU and a Google TPU v2. “These algorithms multiply large matrices 10-20% faster than the commonly used algorithms on the same hardware, which showcases AlphaTensor’s flexibility in optimizing arbitrary objectives,” the blog post said.
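
As a rough sketch of how a hardware-specific speedup like that would be measured, the snippet below times a baseline matrix multiply against a candidate routine on whatever machine it runs on; candidate_matmul is a hypothetical placeholder for an algorithm under test, not one of AlphaTensor’s discovered algorithms.

```python
# A minimal benchmarking sketch, assuming NumPy only. It compares a baseline
# matrix multiply against a candidate routine on the same hardware, which is
# the kind of head-to-head comparison a 10-20% speedup claim refers to.
import time
import numpy as np

def benchmark(matmul, A, B, repeats=10):
    """Return the best-of-N wall-clock time for matmul(A, B)."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        matmul(A, B)
        best = min(best, time.perf_counter() - start)
    return best

def candidate_matmul(A, B):
    # Hypothetical placeholder: swap in the algorithm under test here.
    return A @ B

rng = np.random.default_rng(0)
A = rng.standard_normal((2048, 2048), dtype=np.float32)
B = rng.standard_normal((2048, 2048), dtype=np.float32)

baseline = benchmark(np.matmul, A, B)
candidate = benchmark(candidate_matmul, A, B)
print(f"candidate speedup over baseline: {baseline / candidate:.2f}x")
```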

Increased AI impact on science and mathematics

Back in July, researchers showed that DeepMind’s AlphaFold tool could predict the structures of more than 200 million proteins from around a million species, covering nearly every known protein on Earth. Kohli said that AlphaTensor shows the potential that AI has not just in science but in mathematics.

“To see AI fulfill that promise to go beyond what human scientists have been able to do for the last 50 years, it is personally incredibly exciting,” said Kohli. “It just shows the amount of impact that AI and machine learning can have.”



Author: Sharon Goldman
Source: VentureBeat
