
Nvidia expands edge AI tech for healthcare and robotics



There is a growing need for artificial intelligence (AI) at the edge to enable and support a variety of use cases, including medical, embedded and robotics applications.

At the Nvidia GTC conference today, the company announced a series of new hardware and software initiatives spanning multiple industries, all designed to accelerate the performance, capabilities and adoption of AI at the edge. Among the announcements is the launch of the Nvidia IGX platform, targeted at both industrial and medical use cases. The new platform includes the IGX Orin hardware, a small-form-factor, AI-optimized computing system. In addition to the IGX platform, Nvidia also announced the Jetson Orin Nano, a high-powered, small-form-factor device for edge AI in robotics.

“Combining imaging sensors with real-time, deep learning computer vision, we can bring robotics to healthcare,” Kimberly Powell, vice president and general manager for healthcare at Nvidia, said during a press briefing.

The robot-assisted doctor will see you now

Powell explained that IGX includes the Orin robotics processor, Ampere Tensor Core GPU hardware and the ConnectX streaming I/O processor, as well as a functional safety island and a microcontroller unit. The safety features are particularly important, as Powell noted that more robots and humans will be working side by side in the same environments in the years to come.


The Nvidia Clara Holoscan AI platform sits on top of IGX to power medical device and imaging robotics pipelines that include detection, segmentation and real-time visualization.

“Holoscan running on IGX gives the medical device industry a unified and commercial off-the-shelf platform for real-time edge processing, saving huge amounts of hardware and platform engineering,” Powell said. “Now medical devices can benefit from the same business model innovation as self-driving cars to be AI-powered and become software-defined.”
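As a rough illustration of what such a pipeline involves, the sketch below passes each incoming frame through the three stages described above: detection, segmentation and real-time visualization. This is plain Python, not the Clara Holoscan SDK; every type and function name in it is a hypothetical placeholder.

```python
# Illustrative sketch of a detection -> segmentation -> visualization pipeline.
# Not the Clara Holoscan API; all names here are hypothetical placeholders.
from dataclasses import dataclass, field
from typing import Any, Iterable, List


@dataclass
class Frame:
    pixels: Any                                   # raw image from the sensor
    detections: List[Any] = field(default_factory=list)
    masks: List[Any] = field(default_factory=list)


def detect(frame: Frame) -> Frame:
    # Placeholder: run an object/anatomy detector on the frame.
    frame.detections = []
    return frame


def segment(frame: Frame) -> Frame:
    # Placeholder: compute per-pixel masks for the detected structures.
    frame.masks = []
    return frame


def visualize(frame: Frame) -> None:
    # Placeholder: overlay detections and masks on the live display.
    pass


def process_stream(frames: Iterable[Frame]) -> None:
    # Every frame flows through the same real-time stages, in order.
    for frame in frames:
        visualize(segment(detect(frame)))
```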

IGX for medical devices is already seeing early traction from medical device manufacturers. At GTC, Nvidia announced that Activ Surgical will use IGX for its next-generation platform, which uses sensor technology to see beyond natural light and deliver real-time imaging and physical structure visualization to surgeons during surgery.

Moon Surgical is adopting Clara Holoscan on the IGX platform for its robot-assisted Maestro system, which provides surgeons with enhanced sensory and assisted scope control.

Additionally, Proximie announced that it will use IGX to help enable its operating room telepresence technology, which delivers real-time remote surgeon collaboration.

Robotics AI is about more than robots

Nvidia is also using GTC as the venue to announce the Jetson Orin Nano, the newest version of its Jetson platform for robotics and edge AI.

[Image: The Nvidia Jetson Orin Nano.]

Deepu Talla, vice president and general manager of embedded and edge computing at Nvidia, noted in a GTC press briefing that the first Jetson Nano was announced back in 2019 as an entry-level platform for AI and robotics. With the new Orin Nano update, he said, performance jumps by up to 80 times over the prior generation. The Jetson Orin Nano is based on the Nvidia Ampere architecture GPU and will be available in two versions, with 4GB and 8GB of memory.

The Jetson Orin Nano is intended to be a component of an overall approach to enabling edge AI robotics. The other components include Nvidia’s Isaac robot simulation platform as well as the software to run the actual robots, which increasingly is the open-source ROS (Robot Operating System). Talla said that Nvidia has been working with the ROS community over the past year and has made numerous code contributions that help to accelerate the software.
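As a rough illustration of the kind of software that stack targets, below is a minimal ROS 2 node of the sort one might deploy on a Jetson-class device: it subscribes to a camera topic, runs a stubbed inference step and publishes a detection count. This is a generic sketch, not Nvidia's Isaac ROS code; the topic names and the run_inference() placeholder are assumptions made for illustration, and it presumes a working ROS 2 installation with rclpy.

```python
# Minimal ROS 2 perception node sketch (generic, not Nvidia's Isaac ROS code).
# Topic names and run_inference() are hypothetical placeholders.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from std_msgs.msg import Int32


def run_inference(image_msg: Image) -> int:
    """Placeholder for an accelerated detector (e.g., a GPU inference engine)."""
    return 0  # number of objects detected in the frame


class EdgePerceptionNode(Node):
    def __init__(self):
        super().__init__("edge_perception_node")
        # Camera frames arrive on a hypothetical /camera/image_raw topic.
        self.subscription = self.create_subscription(
            Image, "/camera/image_raw", self.on_image, 10
        )
        # Publish a simple detection count for downstream consumers.
        self.publisher = self.create_publisher(Int32, "/detections/count", 10)

    def on_image(self, msg: Image) -> None:
        count = run_inference(msg)
        self.publisher.publish(Int32(data=count))


def main():
    rclpy.init()
    node = EdgePerceptionNode()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == "__main__":
    main()
```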

It’s important to note that what Nvidia broadly refers to as robotics is in fact a very large category of edge AI use cases.

“Typically, people think of robots as something that has arms and legs or wheels,” Talla said. 

While that type of mobile robot does exist, Talla emphasized that there are plenty of stationary robots as well. Nvidia also counts as robots autonomous devices that watch their environment and can understand and interpret their surroundings. That definition of edge AI robotics can cover things like edge AI in self-checkout systems, traffic analytics and even video conferencing.

“We’re not building robots ourselves, that’s not what we’re trying to do,” Talla said. “We’re providing a platform to build the robots faster.”



Author: Sean Michael Kerner
Source: VentureBeat
