
Intel acquires AI chip startup Habana Labs for $2 billion

In a clear signal of its ambitions for the $91.18 billion AI chip market, Intel this morning announced that it has acquired Habana Labs, an Israel-based developer of programmable deep learning accelerators for data centers. The deal is worth approximately $2 billion, and Intel says it’ll strengthen its AI strategy as Habana begins to sample its proprietary silicon to customers.

Habana will remain an independent business unit post-acquisition and will continue to be led by its current management team, reporting to Intel’s data platforms group. Chairman Avigdor Willenz will serve as senior adviser to the business unit as well as to Intel.

“This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need — from the intelligent edge to the data center,” said Navin Shenoy, executive vice president and general manager of Intel’s data platforms group. “More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI [compute requirements].”

Habana offers two silicon products targeting AI and machine learning workloads: the Gaudi AI Training Processor and the Goya AI Inference Processor. The former, which is optimized for “hyperscale” environments, is anticipated to power data centers that deliver up to four times the throughput of systems built with an equivalent number of graphics chips, at half the energy per chip (140 watts). The Goya processor, which was unveiled in June and is commercially available, ostensibly offers superior inference performance in both throughput and latency.

Gaudi is available as a standard PCI-Express card that supports eight ports of 100Gb Ethernet, as well as a mezzanine card that is compliant with the Open Compute Project accelerator module specs. It features one of the industry’s first on-die implementations of Remote Direct Memory Access over Converged Ethernet (RoCE) on an AI chip, providing ten 100Gb or twenty 50Gb communication links and enabling deployments to scale to “thousands” of discrete cards. (A complete system with eight Gaudis, called the HLS-1, will ship in the coming months.)

On the software side of the equation, Habana offers a development and execution environment — SynapseAI — with libraries and a JIT compiler designed to help customers deploy solutions for AI workloads. Importantly, it supports all of the standard AI and machine learning frameworks (e.g., Google’s TensorFlow and Facebook’s PyTorch), as well as the Open Neural Network Exchange (ONNX) format championed by Microsoft, IBM, Huawei, Qualcomm, AMD, ARM, and others.
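Because SynapseAI ingests models through those standard frameworks and the ONNX interchange format, the developer-facing entry point is the export step those frameworks already provide. Below is a minimal, illustrative sketch of exporting a PyTorch model to ONNX; the toy model, shapes, and output file name are placeholders, and none of this is Habana-specific tooling.

# Illustrative only: export a PyTorch model to the ONNX interchange format,
# which a toolchain such as SynapseAI could then consume. The model and
# output path below are hypothetical placeholders.
import torch
import torch.nn as nn

# A small stand-in network; any trained torch.nn.Module exports the same way.
model = nn.Sequential(
    nn.Linear(224 * 224 * 3, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)
model.eval()

# ONNX export traces the model with a dummy input of the expected shape.
dummy_input = torch.randn(1, 224 * 224 * 3)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",           # hypothetical output path
    input_names=["input"],
    output_names=["logits"],
    opset_version=11,
)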

“We have been fortunate to get to know and collaborate with Intel given its investment in Habana, and we’re thrilled to be officially joining the team,” said Habana CEO David Dahan. “Intel has created a world-class AI team and capability. We are excited to partner with Intel to accelerate and scale our business. Together, we will deliver our customers more AI innovation, faster.”


Author: Kyle Wiggers
Source: VentureBeat
