Cooper Lake will deliver a 60% increase in AI inferencing and training performance

During a press conference at the 2020 Consumer Electronics Show, Intel gave an update on its AI and machine learning efforts. Details were a bit hard to come by, but the company offered a glimpse of the performance to expect from its next Xeon Scalable processor family, codenamed Cooper Lake.

Cooper Lake, which will be available in the first half of 2020, will deliver up to a 60% increase in both AI inferencing and training performance. Delivering that boost in part is DL Boost, a set of x86 technologies designed to accelerate AI vision, speech, language, generative, and recommendation workloads, which on Cooper Lake adds support for the bfloat16 (Brain Floating Point) number format. (Bfloat16 was originally developed by Google and implemented in its third-generation Tensor Processing Unit, a custom-designed AI accelerator chip.)
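To make the format concrete, here is a minimal Python sketch (not Intel's DL Boost API, just an illustration of the number format): bfloat16 keeps float32's sign bit and 8-bit exponent but truncates the mantissa from 23 bits to 7, so a float32 value can be reduced to bfloat16 simply by dropping the low 16 bits of its bit pattern.

```python
# Illustrative only: shows the bfloat16 <-> float32 relationship via bit truncation
# (round-toward-zero). Real hardware may apply round-to-nearest instead.
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Return the 16-bit bfloat16 encoding of x by truncating the float32 bits."""
    f32_bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return f32_bits >> 16  # keep sign, 8 exponent bits, top 7 mantissa bits

def bfloat16_bits_to_float32(bits: int) -> float:
    """Expand a bfloat16 encoding back to float32 by zero-filling the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", bits << 16))[0]

if __name__ == "__main__":
    for value in (3.141592653589793, 0.1, 65504.0):
        bf16 = float32_to_bfloat16_bits(value)
        restored = bfloat16_bits_to_float32(bf16)
        print(f"{value:>14.7f} -> bfloat16 0x{bf16:04x} -> {restored:.7f}")
```

The upshot is that bfloat16 preserves float32's dynamic range (same exponent width) while halving storage and memory bandwidth, at the cost of mantissa precision, which is the trade-off that makes it attractive for training and inference.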

By way of a refresher, Cooper Lake features up to 56 processor cores per socket, twice the core count of Intel's second-generation Xeon Scalable chips. The processors will also offer higher memory bandwidth and higher AI inference and training performance at a lower power envelope, and they'll be platform-compatible with the upcoming 10-nanometer Ice Lake processors.

Intel claims that more data center AI workloads run on its products than on any other platform.

The future of Intel is AI. Its books imply as much: the Santa Clara company's AI chip segments notched $3.5 billion in revenue this year, up from $1 billion in 2017, and it expects the market opportunity to grow 30% annually from $2.5 billion in 2017 to $10 billion by 2022.

In December, Intel purchased Habana Labs, an Israel-based developer of programmable AI and machine learning accelerators for cloud datacenters, for an estimated $2 billion. The deal followed the purchase of San Mateo-based Movidius, which designs specialized low-power processor chips for computer vision, in September 2016.

Intel bought field-programmable gate array (FPGA) manufacturer Altera in 2015 and a year later acquired Nervana, filling out its hardware platform offerings and setting the stage for an entirely new generation of AI accelerator chipsets. And in August 2018, Intel snatched up Vertex.ai, a startup developing a platform-agnostic AI model suite.


Author: Kyle Wiggers
Source: Venturebeat
