Amazon debuts AWS Inf1, an AI inference instance

Amazon Web Services today debuted Inf1, an instance for AI inference that CEO Andy Jassy calls the lowest-cost inference offering available in the cloud.

“…it will have lower latency, it will have 3 times higher throughput, and up to 40% lower cost per instance compared to our G4 instance which is based on an Nvidia chip which previously was the lowest cost inference instance in the cloud,” Jassy said.

The vast majority of costs for cloud customers using AI to power their solutions comes from inference, Jassy said onstage today at the AWS re:Invent conference in Las Vegas.

The news follows the release of the Elastic Inference service and plans to release the Inferentia AI chip. Inf1 is powered by the Inferentia chip, made by Annapurna Labs, an Israeli company AWS acquired in 2015.

Inf1 instances are generally available today and integrate with PyTorch, MXNet, and TensorFlow. Inf1 will be made available for Amazon's Elastic Kubernetes Service (EKS) and the SageMaker machine learning platform in 2020.
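As a rough illustration of what general availability means in practice, the sketch below requests a single Inf1 instance through the standard boto3 EC2 API. The region, AMI ID, and key pair name are placeholders rather than values from the announcement, and inf1.xlarge is assumed to be one of the offered instance sizes.

```python
# Minimal sketch: launching an Inf1 (Inferentia-backed) instance with boto3.
# The AMI ID and key pair name are placeholders, not values from the article.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

response = ec2.run_instances(
    ImageId="ami-XXXXXXXXXXXXXXXXX",  # placeholder: e.g. a Deep Learning AMI in your region
    InstanceType="inf1.xlarge",       # assumed Inf1 instance size
    KeyName="my-key-pair",            # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)

# Print the ID of the newly launched instance
print(response["Instances"][0]["InstanceId"])
```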

The news was announced today onstage at AWS re:Invent alongside Graviton 2, a 7-nanometer, 64-bit chip made to rival Intel's x86 processors in data centers. Jassy said Graviton 2 will power m6g, r6g, and c6g instances.

Graviton 2 will deliver 4 times more compute and 40% better price performance than Intel's x86 processors, Jassy said.


Author: Khari Johnson
Source: VentureBeat
