Intel is no stranger to AI-oriented chips, but now it’s turning its attention to chips that might be thousands of miles away. The tech firm has introduced two new Nervana Neural Network Processors, the NNP-T1000 and NNP-I1000, which are Intel’s first ASICs designed explicitly for AI in the cloud. The NNP-T chip is meant for training AIs in a ‘balanced’ design that can scale from small computer clusters through to supercomputers, while the NNP-I model handles “intense” inference tasks.
The chipmaker also unveiled a next-gen Movidius Vision Processing Unit whose updated computer vision architecture promises over 10 times the inference performance, while reportedly delivering six times the efficiency of rival chips. Those claims have yet to be proven in the real world, but it’s safe to presume that anyone relying on Intel tech for visual AI work will want to give this a look.
You’ll have to be patient for the Movidius chip, as it won’t ship until sometime in the first half of 2020. It could nonetheless represent a big leap for AI performance, at least among companies that aren’t relying on rivals like NVIDIA. Intel warned that bleeding-edge uses of AI could require performance to double every 3.5 months — that’s not going to happen if companies simply rely on conventional CPUs. And when internet giants like Facebook and Baidu lean heavily on Intel for AI, you might see practical benefits like faster site loads or more advanced AI features.
Author: Jon Fingas
Source: Engadget