Today marked the kickoff of Nvidia’s GPU Technology Conference in Suzhou, China, where CEO Jensen Huang debuted a host of products and services during his keynote address. In addition to Drive AGX Orin, the latest version of the Santa Clara-based company’s software-defined platform for self-driving vehicles and robots, Nvidia announced the open-sourcing of a suite of AI models for autonomous decision-making and visual perception. It also revealed a hardware collaboration with Didi Chuxing, one of the world’s largest transportation technology companies, with over 550 million users and tens of millions of drivers.
Drive AGX Orin
Nvidia’s Drive AGX Orin — which will come to production applications in 2022 — slots alongside the two existing Drive AGX platforms, Drive AGX Xavier and Drive AGX Pegasus. Unlike Xavier, which powers lightly automated advanced driver-assistance systems, and Pegasus, which is designed to handle highly automated and fully autonomous driving, Orin, Huang says, was engineered from the ground up to run a large number of apps and models simultaneously while meeting safety standards such as ISO 26262 ASIL-D. (ASIL, or Automotive Safety Integrity Level, is a risk classification scheme established by analyzing potential hazards and weighing their overall severity, exposure, and controllability.)
At the silicon heart of Orin is a brand-new system-on-chip comprising 17 billion transistors, which Huang says is the fruit of billions of dollars in R&D over the course of four years. It integrates Nvidia’s next-generation GPU architecture with Hercules CPU cores, the latter of which are based on Arm’s DynamIQ technology and improve power and area efficiency by 10% over the previous generation. They’re complemented by AI and machine learning accelerator cores that deliver a collective 200 trillion operations per second (TOPS), compared with the 320 TOPS and 30 TOPS of which Pegasus and Xavier are capable, respectively. Moreover, Orin can handle over 200 Gbps of data while consuming just 60 to 70 watts of power (at 200 TOPS).
Orin, like Pegasus, can scale from level 2 automation (per the Society of Automotive Engineers’ taxonomy, partial automation in which the car controls both steering and speed while a human driver supervises) to level 5 (cars fully capable of self-driving without supervision). And like Xavier, Orin is programmable through APIs and libraries in CUDA, Nvidia’s parallel computing platform and programming model, and the company’s TensorRT platform for high-performance machine learning inference.
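For a sense of what that TensorRT programmability looks like in practice, here is a minimal sketch of compiling a trained network into a TensorRT inference engine using the library’s Python API. The model file name, input format, and FP16 precision flag are hypothetical stand-ins; Nvidia’s Drive SDK wraps this flow in its own tooling, so treat this as illustrative rather than the platform’s actual workflow.

```python
# Minimal sketch: compiling an ONNX model into a TensorRT engine.
# "perception_model.onnx" is a hypothetical placeholder, not an Nvidia asset.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("perception_model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # reduced precision for higher throughput

# Serialize the optimized engine so it can be loaded at inference time.
serialized_engine = builder.build_serialized_network(network, config)
with open("perception_model.plan", "wb") as f:
    f.write(serialized_engine)
```

The key design point is that optimization happens offline: the builder fuses layers and picks kernels for the target GPU once, producing a plan file that the runtime then executes with low, predictable latency.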
Huang says that Orin will be available in a range of configurations based on a single architecture when it begins shipping to manufacturers. “Creating a safe autonomous vehicle is perhaps society’s greatest computing challenge,” he said. “The amount of investment required to deliver autonomous vehicles has grown exponentially, and the complexity of the task requires a scalable, programmable, software-defined AI platform like Orin.”
Open source AI
Alongside Drive AGX Orin, Nvidia announced that it’ll provide open source access to several of the models at the core of Drive, its full-stack autonomous car and driver assistance solution. Specifically, it plans to make available the AI systems that ship with Drive AGX, which are tailored to tasks like traffic light and sign recognition, object detection for vehicles and pedestrians, path perception, and gaze detection and gesture recognition. (One model recently spotlighted on Nvidia’s blog automatically generates control outputs for cars’ high beams using signals derived from road conditions.)
Huang noted that many of the models in question have collectively been in development for years, and that they’ve been used broadly by automakers, truck manufacturers, robo-taxi companies, software companies, and universities alike. “The AI autonomous vehicle is a software-defined vehicle required to operate around the world on a wide variety of data sets,” he said. “By providing [autonomous vehicle] developers access to our [models] and the advanced learning tools to optimize them for multiple data sets, we’re enabling shared learning across companies and countries, while maintaining data ownership and privacy. Ultimately, we are accelerating the reality of global autonomous vehicles.”
Each model can be customized and enhanced with Nvidia’s newly released suite of tools, which enable training using a number of machine learning development techniques. Some of the techniques Huang highlighted were active learning, which improves accuracy and reduces data collection costs by automating data selection using AI; federated learning, which enables the use of data sets across countries and with other parties while maintaining data privacy; and transfer learning, which leverages pre-training and fine-tuning to develop models for specific apps and capabilities.
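As a rough illustration of the transfer learning pattern Huang described — pre-training followed by fine-tuning for a specific capability — the PyTorch sketch below freezes a pretrained backbone and trains only a new task head. The backbone choice, the four-class traffic-sign task, and the dummy batch are all hypothetical stand-ins, not Nvidia’s actual models or tools.

```python
# Sketch of transfer learning: reuse a pretrained backbone, fine-tune a new head.
import torch
import torchvision

# Load an ImageNet-pretrained backbone and freeze its weights.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for a new task (e.g. 4 traffic-sign classes).
model.fc = torch.nn.Linear(model.fc.in_features, 4)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of camera-sized frames.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 4, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

Because only the small head is trained, fine-tuning like this needs far less labeled data and compute than training from scratch, which is what makes the technique attractive for adapting one model to many regional data sets.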
Didi Chuxing partnership
Lastly, Nvidia revealed that Didi Chuxing will use Nvidia graphics cards in datacenters to train the machine learning algorithms underpinning its autonomous systems. Didi — whose cars tap Nvidia’s Drive platform to fuse data from multiple sensors (including cameras, lidar, and radar), ensuring they remain aware of their surroundings — says it will build an AI infrastructure and launch virtual GPU cloud servers for computing, rendering, and gaming. Additionally, Didi says its cloud technology division, Didi Cloud, will adopt a new virtual graphics card license model to provide Nvidia-powered cloud computing services targeting transportation, AI, graphics rendering, video game, and education workloads.
“Developing safe autonomous vehicles requires end-to-end AI, in the cloud and in the car,” said Rishi Dhall, vice president of autonomous vehicles at Nvidia. “Nvidia AI will enable Didi to develop safer, more efficient transportation systems and deliver a broad range of cloud services.”
Author: Kyle Wiggers
Source: VentureBeat