AI & RoboticsNews

SambaNova raises $676M to mass-produce AI training and inference chips

SambaNova Systems, a startup developing chips for AI workloads, today announced it has raised $676 million, valuing the company at more than $5 billion post-money. SambaNova says it plans to expand its customer base — particularly in the datacenter market — as it becomes one of the most capitalized AI companies in the world with over $1 billion raised.

AI accelerators are a type of specialized hardware designed to speed up AI applications such as neural networks, deep learning, and other forms of machine learning. They typically rely on techniques like low-precision arithmetic or in-memory computing, which can boost the performance of large AI models and help deliver state-of-the-art results in natural language processing, computer vision, and other domains. That's perhaps why they're forecast to account for a growing share of edge computing processing power, reaching a projected 70% by 2025, according to a recent Statista survey.
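For a rough sense of what low-precision arithmetic buys an accelerator, here is a minimal Python sketch. The 100-million-parameter model is invented for illustration; the point is simply that halving the numeric format halves the bytes that have to be stored and moved per weight.

```python
import numpy as np

# Illustrative only: the memory-footprint win from low-precision arithmetic,
# one of the techniques the article attributes to AI accelerators in general
# (not a description of SambaNova's specific design).
params = 100_000_000  # a hypothetical 100M-parameter model

for dtype in (np.float32, np.float16, np.int8):
    size_gb = params * np.dtype(dtype).itemsize / 1e9
    print(f"{np.dtype(dtype).name:>8}: {size_gb:.2f} GB of weights")

# float32: 0.40 GB, float16: 0.20 GB, int8: 0.10 GB. Fewer bytes per weight
# means more of the model fits on-chip and more effective memory bandwidth,
# which is why accelerators lean so heavily on reduced-precision formats.
```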

SambaNova is part of a cottage industry of startups focused on building infrastructure to handle AI workloads. The Palo Alto, California-based firm, which was founded in 2017 by Oracle and Sun Microsystems veteran Rodrigo Liang and Stanford professors Kunle Olukotun and Chris Ré, provides systems that run AI and data-intensive apps from the datacenter to the edge.

Olukotun, who recently received the IEEE Computer Society's Harry H. Goode Memorial Award, leads the Stanford Hydra Chip Multiprocessor research project, which produced a chip design that pairs four specialized processors and their caches with a shared secondary cache. Ré, an associate professor in the Department of Computer Science at Stanford University's InfoLab, is a MacArthur genius award recipient who's also affiliated with the Statistical Machine Learning Group, Pervasive Parallelism Lab, and Stanford AI Lab.

SambaNova’s AI chips — and its customers, for that matter — remain largely under lock and key. But the company previously revealed it is developing “software-defined” devices inspired by DARPA-funded research in efficient AI processing. Leveraging a combination of algorithmic optimizations and custom board-based hardware, SambaNova claims it’s able to dramatically improve the performance and capability of most AI-imbued apps.

SambaNova's 40-billion-transistor Cardinal SN10 RDU (Reconfigurable Dataflow Unit), which is built on TSMC's N7 process, consists of an array of reconfigurable nodes for data, storage, and switching. It's designed to perform in-the-loop training and allow for model reclassification and optimization on the fly during inference-with-training workloads. Each Cardinal chip has six memory controllers, providing 153 GB/s of bandwidth, and in the eight-chip systems SambaNova sells, the chips are connected in an all-to-all configuration over a switching network that allows them to scale.
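The per-chip figures above come from the article; the back-of-the-envelope sketch below just works out what they imply for an eight-chip system. The link count and aggregate bandwidth are simple arithmetic, not specifications SambaNova has published.

```python
from itertools import combinations

# Arithmetic implied by the paragraph above, for an eight-chip configuration.
chips = 8
mem_bandwidth_per_chip_gbs = 153  # six memory controllers per Cardinal

links = list(combinations(range(chips), 2))  # all-to-all: one link per chip pair
print(f"all-to-all links between {chips} chips: {len(links)}")  # 28

aggregate_gbs = chips * mem_bandwidth_per_chip_gbs
print(f"aggregate memory bandwidth across the system: {aggregate_gbs} GB/s")  # 1224
```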

SambaNova isn't selling Cardinal on its own, but rather as part of a complete system to be installed in a datacenter. The basic unit of SambaNova's offering, called the DataScale SN10-8R, pairs an AMD processor with eight Cardinal chips and 12 terabytes of DDR4 memory, or 1.5 TB per Cardinal. SambaNova says it will customize its products to customers' needs, with a default set of networking and management features and the option for SambaNova to manage the systems remotely.
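A small sketch of those DataScale numbers, with the class and field names invented purely for illustration (they are not SambaNova's configuration format), confirms the per-chip memory figure quoted above:

```python
from dataclasses import dataclass

# Hypothetical representation of the DataScale SN10-8R figures in the article.
@dataclass
class DataScaleNode:
    cardinal_chips: int = 8   # eight Cardinal SN10 chips per node
    ddr4_tb: float = 12.0     # 12 TB of DDR4 across the node

    @property
    def memory_per_chip_tb(self) -> float:
        return self.ddr4_tb / self.cardinal_chips

node = DataScaleNode()
print(f"{node.memory_per_chip_tb} TB of DDR4 per Cardinal chip")  # 1.5 TB
```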

The large memory capacity ostensibly gives the SN10-8R a leg up on rival hardware like Nvidia's V100. As SambaNova VP of product Marshall Choy told The Next Platform, Cardinal's reconfigurable architecture can eliminate the need to downsample high-resolution images to lower resolutions for training and inference, preserving the information in the original image. The result is the ability to train models of arguably higher overall quality while eliminating the need for additional labeling.
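To make the contrast concrete, here is a minimal sketch (in PyTorch, with illustrative image sizes) of the conventional downsampling step Choy is describing, the one a large-memory system would let a workflow skip:

```python
import torch
import torch.nn.functional as F

# The conventional pipeline on memory-constrained hardware: shrink the input
# so it fits, discarding fine detail. The 4096x4096 size is illustrative.
hires = torch.randn(1, 3, 4096, 4096)  # e.g. a large scientific or satellite image

lowres = F.interpolate(hires, size=(224, 224), mode="bilinear", align_corners=False)
print(hires.shape, "->", lowres.shape)

# SambaNova's pitch is that a system with terabytes of memory per chip can
# train on the full-resolution tensor directly, so this lossy resize (and the
# relabeling it can force) is no longer required.
```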

On the software side, SambaNova has its own graph optimizer and compiler, letting customers who use machine learning frameworks like PyTorch and TensorFlow have their workloads recompiled for Cardinal. The company aims to support natural language, computer vision, and recommender models containing over 100 billion parameters (the parts of a model learned from historical training data), as well as models with larger memory footprints, with the goal of higher hardware utilization and greater accuracy.
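SambaNova's compiler stack is proprietary, so as a loose analogy only, the sketch below uses PyTorch's own torch.fx tracing to show the general idea: a framework-level model is captured as an operator graph that a vendor backend could then optimize and retarget to custom hardware. None of this is SambaNova's actual API.

```python
import torch
import torch.fx
import torch.nn as nn

# A toy model standing in for a customer workload written in a stock framework.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

# Capture the model as an op-by-op graph; a hardware vendor's graph optimizer
# and compiler would consume a representation like this and emit code for its
# own chips, rather than for a CPU or GPU.
graph_module = torch.fx.symbolic_trace(TinyModel())
print(graph_module.graph)
```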

SambaNova has competition in a market that’s anticipated to reach $91.18 billion by 2025. Hailo, a startup developing hardware to speed up AI inferencing at the edge, in March 2020 nabbed $60 million in venture capital. California-based Mythic has raised $85.2 million to develop custom in-memory compute architecture. Graphcore, a Bristol, U.K.-based startup creating chips and systems to accelerate AI workloads, has a war chest in the hundreds of millions of dollars. And Baidu’s growing AI chip unit was recently valued at $2 billion after funding.

SambaNova says the first generation of Cardinal taped out in spring 2019, and the first silicon samples are already running in customers' servers. In fact, SambaNova had been selling to customers for over a year before this announcement; the only publicly disclosed deployments are with the Department of Energy's Lawrence Livermore and Los Alamos national laboratories. Lawrence Livermore integrated one of SambaNova's systems with its Corona supercomputing cluster, which is primarily used for simulations of various physics phenomena.

SambaNova is also the beneficiary of a market that’s seeing unprecedented — and sustained — customer demand. Surges in car and electronics purchasing at the start of the pandemic have exacerbated a growing microchip shortage. In response, U.S. President Joe Biden recently committed $180 billion to R&D for advanced computing, as well as specialized semiconductor manufacturing for AI and quantum computing, all of which have become central to the country’s national tech strategy.

“We began shipping product during the pandemic and saw an acceleration of business and adoption relative to expectations,” a spokesperson told VentureBeat via email. “COVID-19 also brought a silver lining in that it has generated new use cases for us. Our tech is being used by customers for COVID-19 therapeutic and anti-viral compound research and discovery.”

According to Bronis de Supinski, chief technology officer at Lawrence Livermore, SambaNova's platform is being used to explore a technique called cognitive simulation, in which AI is used to accelerate the processing of portions of simulations. He claims a roughly fivefold improvement compared with GPUs running the same models.
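As a toy illustration of that cognitive-simulation idea, the sketch below trains a small neural surrogate to approximate an expensive routine inside a simulation loop. The "physics" function and network here are invented for illustration; the pattern, not the specifics, is what Lawrence Livermore is exploring.

```python
import torch
import torch.nn as nn

def expensive_kernel(x: torch.Tensor) -> torch.Tensor:
    # Stand-in for a costly numerical routine called inside a simulation loop.
    return torch.sin(3 * x) * torch.exp(-x ** 2)

# A small learned surrogate for that routine.
surrogate = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)

x = torch.linspace(-2, 2, 512).unsqueeze(1)
y = expensive_kernel(x)
for _ in range(500):  # fit the surrogate offline on sampled kernel outputs
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(x), y)
    loss.backward()
    opt.step()

# At simulation time the trained surrogate replaces the expensive kernel for
# part of the workload, which is where claimed speedups over GPU baselines
# running the same models would come from.
print(f"surrogate fit error: {loss.item():.4f}")
```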

Along with the new SN10-8R product, SambaNova is set to offer two cloud-like service options. The first, SambaNova AI Platform, is a free-to-use developer cloud that gives research institutions compute access to the hardware. The second, DataFlow as a Service, is for business customers that want the flexibility of the cloud without paying for the hardware. In both cases, SambaNova will handle management and updates.

SoftBank led SambaNova's latest funding round, a series D. The company, which has over 300 employees, previously closed a $250 million series C round led by BlackRock and preceded by a $150 million series B spearheaded by Intel Capital.

Author: Kyle Wiggers
Source: VentureBeat
