The infrastructure required to handle AI workloads is often as complex as it is sprawling, but a cottage industry of startups has emerged to build solutions for end customers. SambaNova Systems is one such startup. The Palo Alto, California-based firm, founded in 2017 by Rodrigo Liang and Stanford professors Kunle Olukotun and Chris Ré, provides systems that run AI and data-intensive apps from the datacenter to the edge. In a reflection of investors' ravenous appetite for the market, the company today announced that it has raised $250 million in Series C funding.
“Raising $250M in this funding round with support from new and existing investors puts us in a unique category of capitalization,” said CEO Liang, a veteran of Sun Microsystems and Oracle. “This enables us to further extend our market leadership in enterprise computing.”
SambaNova’s products — and its customers, for that matter — remain largely under lock and key, but the company previously revealed it’s developing “software-defined” devices inspired by DARPA-funded research in efficient AI processing. Leveraging a combination of algorithmic optimizations and custom board-based hardware, SambaNova claims it’s able to dramatically improve the performance and capability of most AI-imbued apps.
According to Olukotun, SambaNova’s platform is designed to scale from tiny electronic devices to enormous remote datacenters. “SambaNova’s innovations in machine learning algorithms and software-defined hardware will dramatically improve the performance and capability of intelligent applications,” he added. “The flexibility of the SambaNova technology will enable us to build a unified platform providing tremendous benefits for business intelligence, machine learning, and data analytics.”
One thing's for certain: SambaNova's founders are a decorated bunch. Olukotun, who recently received the IEEE Computer Society's Harry H. Goode Memorial Award, leads the Stanford Hydra Chip Multiprocessor (CMP) research project, which produced a chip design that pairs four specialized processors and their caches with a shared secondary cache. As for Ré, he's an associate professor in Stanford University's Department of Computer Science and a member of the InfoLab, as well as a MacArthur "genius grant" recipient who's also affiliated with the Statistical Machine Learning Group, the Pervasive Parallelism Lab, and the Stanford AI Lab.
Author: Kyle Wiggers.
Source: VentureBeat