Today, Palo Alto-based SambaNova Systems unveiled a new AI chip, the SN40L, which will power its full-stack large language model (LLM) platform, the SambaNova Suite. The platform helps enterprises go from chip to model, building and deploying customized generative AI models.
Rodrigo Liang, cofounder and CEO of SambaNova Systems, told VentureBeat that SambaNova goes further up the stack than Nvidia, helping enterprises actually train their models properly.
“Many people were enthusiastic about the infrastructure that we have, but the problem they were running into is they didn’t have the expertise, so they would hand off to other companies like OpenAI to build the model,” he explained.
A ‘Linux’ moment for AI
As a result, SambaNova concluded that this is a “Linux” moment for AI, and that open-source AI models will be the big winners. So, in addition to pre-trained foundation models, the SambaNova Suite offers a curated collection of open-source generative AI models optimized for the enterprise, deployable on-premises or in the cloud.
“We take the base model and do all the cleanup for the enterprise,” Liang explained, as well as the hardware optimization, which he said most customers don’t want to deal with. “They don’t want to hunt down GPUs,” he said. “They don’t want to figure out the structure of a GPU.”
SambaNova does not stop at chip development
But even as SambaNova moves all the way up the software stack rather than stopping at chip development, Liang insists that “chip for chip, we outdo Nvidia.”
According to a press release, SambaNova’s SN40L can serve a 5-trillion-parameter model, with a sequence length of more than 256,000 tokens possible on a single system node. The company says this “enables higher quality models, with faster inference and training at a lower total cost of ownership.” In addition, “larger memory unlocks true multimodal capabilities from LLMs, enabling companies to easily search, analyze, and generate data in these modalities.”
The company also announced several new models and capabilities within the SambaNova Suite:
- Llama 2 variants (7B, 70B): state-of-the-art open-source language models that enable customers to adapt, expand, and run the best available LLMs while retaining ownership of these models
- BLOOM 176B: the most accurate multilingual foundation model in the open-source community, enabling customers to solve more problems across a wide variety of languages while also extending the model to support new, low-resource languages
- A new embeddings model for vector-based retrieval-augmented generation (RAG), enabling customers to embed their documents into vector embeddings that can be retrieved during the Q&A process to ground answers and reduce hallucinations. The LLM then analyzes, extracts, or summarizes the information from the retrieved results
- A world-leading automated speech recognition model to transcribe and analyze voice data
- Additional multi-modal and long sequence length capabilities
- Inference optimized systems with 3-tier Dataflow memory for uncompromised high bandwidth and high capacity
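The retrieval-augmented generation flow described in the embeddings bullet above can be sketched in a few lines. This is a toy illustration, not SambaNova’s implementation: the `embed()` function here is a bag-of-words stand-in for a real embeddings model, and all names are hypothetical. The idea is simply that documents and the query are mapped into vectors, the nearest documents are retrieved, and those results are handed to the LLM as grounding context.

```python
# Toy sketch of vector-based retrieval-augmented generation (RAG).
# embed() is a hypothetical stand-in for a real embeddings model;
# a production system would call a trained model instead.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words term counts over lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "The SN40L chip targets large language model workloads.",
    "Quarterly revenue grew in the retail division.",
]
# The retrieved context would be placed in the LLM prompt so the answer
# is grounded in the customer's documents rather than hallucinated.
context = retrieve("Which chip serves large language model workloads?", docs)
```

In a real deployment the embeddings would come from a trained model and the vectors would live in a vector database, but the retrieve-then-generate shape is the same.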
Author: Sharon Goldman
Source: Venturebeat
Reviewed By: Editorial Team