
Cruise updates the AI brains behind its autonomous cars twice weekly

GM’s Cruise announced this past June that it would postpone plans for a driverless taxi service, which it had previously said would debut in 2019. But the setback hasn’t discouraged its more than 1,000 staffers from iterating toward a public launch. In fact, work on the underlying systems and infrastructure has accelerated in the intervening months, according to Hussein Mehanna, head of Cruise’s AI and machine learning division.

“We continue to demonstrate progress. The passion and drive in the last six months I’ve witnessed demonstrates that people at Cruise … want to deliver the safest, most comfortable vehicle ever,” said Mehanna, a veteran of Microsoft, Google, and Facebook who previously served as Snap’s director of engineering. “The faster we update [these systems] in the driver’s seat, the faster we can get to the point where we release a car.”

To this end, the company detailed the work conducted by its Mapping team, which creates the high-definition maps that enable its cars to orient themselves on the road. The maps comprise two asset types: 3D tiles rendered from short- and long-range lidar data, and labels that encode information about features like lane boundaries, traffic lights (and their locations), and roadway curb edges. It’s this prior information that lightens the processing load on the cars, allowing them to focus on navigation.
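
Cruise hasn’t published its map format, but the two asset types described above suggest a natural schema. Below is a minimal, hypothetical sketch of such a tile-plus-labels structure; every class and field name is an assumption for illustration, not Cruise’s actual format:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple

class FeatureType(Enum):
    LANE_BOUNDARY = "lane_boundary"
    TRAFFIC_LIGHT = "traffic_light"
    CURB_EDGE = "curb_edge"

@dataclass
class SemanticLabel:
    """A label encoding a road feature the car would otherwise infer live."""
    feature: FeatureType
    geometry: List[Tuple[float, float, float]]  # 3D points outlining the feature

@dataclass
class MapTile:
    """One tile of the HD map: lidar-derived geometry plus semantic labels."""
    tile_id: str
    version: int
    lidar_points: List[Tuple[float, float, float]]  # from short-/long-range lidar
    labels: List[SemanticLabel] = field(default_factory=list)

    def features(self, kind: FeatureType) -> List[SemanticLabel]:
        """Query prior knowledge, e.g. the positions of traffic lights."""
        return [lbl for lbl in self.labels if lbl.feature == kind]
```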


Above: A point cloud generated from the visual sensors on Cruise vehicles.

Image Credit: GM Cruise

Cruise also taps the maps to encode the expected behavior of other vehicles based on “thousands” of interactions in a given area. Its engineers test hypotheses using an in-house production system and iterate on different versions of features to assess their impact on the cars’ performance, after which they scale the new feature across the entire map.
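
The article doesn’t describe that production system, but the workflow it implies (trial a feature variant, measure its effect on performance, then promote it map-wide) resembles a standard A/B evaluation. Here’s a rough, hypothetical harness under that reading; `LoggedDrive` and its scoring hook are invented for illustration:

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable, List

@dataclass
class LoggedDrive:
    """Hypothetical record of one drive, replayable against a map version."""
    drive_id: str
    score_against: Callable[[str], float]  # map version -> performance score

def should_promote(baseline: str, variant: str,
                   drives: List[LoggedDrive], min_lift: float = 0.01) -> bool:
    """Scale a map-feature variant only if it measurably improves performance."""
    base_score = mean(d.score_against(baseline) for d in drives)
    variant_score = mean(d.score_against(variant) for d in drives)
    return (variant_score - base_score) > min_lift
```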

These maps would quickly become outdated were it not for what Cruise describes as “sophisticated” product and operational solutions, which detect real-world changes like construction and redevelopment projects and push updates to every vehicle in Cruise’s fleet within minutes.
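
Cruise hasn’t said how those updates are delivered. One plausible shape is a versioned tile push; the sketch below assumes a simple broadcast model, and every name in it is hypothetical:

```python
import time
from typing import Dict, List

class FleetMapService:
    """Hypothetical map-update service: when a real-world change (say, new
    construction) invalidates a tile, bump its version and push it fleet-wide."""

    def __init__(self, vehicles: List[object]):
        self.vehicles = vehicles              # objects exposing receive_tile(...)
        self.tile_versions: Dict[str, int] = {}

    def publish_change(self, tile_id: str, new_tile_data: bytes) -> None:
        version = self.tile_versions.get(tile_id, 0) + 1
        self.tile_versions[tile_id] = version
        for vehicle in self.vehicles:         # in practice, a pub/sub broadcast
            vehicle.receive_tile(tile_id, version, new_tile_data, time.time())
```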

Continuous learning and pose detection

A separate initiative at Cruise dubbed the “continuous learning machine” aims to update the control policy of the GM subsidiary’s cars — an AI “driver” — upwards of twice weekly. The goal is to automate the detection of edge cases encountered while driving, and to seed those examples to the AI driver so its algorithms can self-improve.
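
Cruise hasn’t published the internals of the continuous learning machine, but the description suggests a loop that flags surprising events and folds them back into training data. A minimal sketch under that reading; the event schema (confidence scores, intervention flags) is assumed, not confirmed:

```python
from typing import Dict, List

def mine_edge_cases(events: List[Dict], confidence_floor: float = 0.5) -> List[Dict]:
    """Hypothetical miner: keep events where the model was unsure or a safety
    driver intervened, i.e. the edge cases worth learning from."""
    return [e for e in events
            if e["model_confidence"] < confidence_floor or e["intervention"]]

def continuous_learning_step(train_set: List[Dict], new_events: List[Dict]) -> List[Dict]:
    """One loop iteration: fold mined edge cases back into the training data.
    A real pipeline would retrain and re-validate the driving policy here."""
    train_set.extend(mine_edge_cases(new_events))
    return train_set
```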

“Cruise in a year probably drives far more miles than a human would drive in a lifetime, and if our AI … and our systems are at a point where they can learn from all of that,” said Mehanna.

Cruise’s cars encounter myriad obstacles on the road, including gaggles of foolhardy pedestrians. In the pursuit of redundancy, the systems that analyze vision sensor data home in on people’s heads to detect when they’re making eye contact with the vehicle.
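
Assuming a perception stack that already estimates head orientation, one simple way to operationalize eye-contact detection is to compare a pedestrian’s head yaw against the bearing to the vehicle. This heuristic is purely illustrative, not Cruise’s method:

```python
def is_making_eye_contact(head_yaw_deg: float, bearing_to_vehicle_deg: float,
                          tolerance_deg: float = 15.0) -> bool:
    """Treat a pedestrian as looking at the car when their head orientation
    points within a small tolerance of the direction toward the vehicle."""
    diff = (head_yaw_deg - bearing_to_vehicle_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= tolerance_deg
```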


Above: Footage from an autonomous Cruise car.

Image Credit: GM Cruise

These vision algorithms also consider poses, from which they attempt to predict the actions people within view might take before or after they cross the road ahead. For instance, they can detect when someone is looking at their phone and infer that the person is unlikely to cross right away, or might simply be distracted.
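
A pose-based intent rule along those lines might look like the sketch below, where the inputs (head pitch, a raised wrist, body orientation) would come from a pose estimator and all thresholds and labels are invented for illustration:

```python
def predict_pedestrian_intent(head_pitch_deg: float, wrist_raised: bool,
                              facing_road: bool) -> str:
    """Map a coarse pose reading to a prediction. Negative pitch means the
    head is tilted downward; all thresholds here are invented."""
    looking_at_phone = head_pitch_deg < -20.0 and wrist_raised
    if looking_at_phone:
        return "distracted: unlikely to cross right away"
    if facing_road:
        return "may cross: prepare to yield"
    return "no crossing intent detected"
```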

They’re even perceptive enough to spot people dressed as giant palm trees, said Mehanna — one of the many unusual scenarios Cruise’s cars have encountered in the years they’ve been driving on public roads. “It’s a negotiation process,” he added. “We believe that understanding the [human] pose is extremely important for driving safely in a city like San Francisco, and that’s something I don’t think people necessarily [think] much about.”

Recent news about the pedestrian death caused a year ago by one of Uber’s self-driving cars underlines the importance of pose detection. Documents published last week by the National Transportation Safety Board revealed that Uber’s autonomous SUVs weren’t equipped to recognize humans walking outside of a crosswalk.


More broadly, Mehanna asserts that up-to-the-minute context is critical in truly driverless scenarios. Cruise revealed earlier this year that it’s testing computer vision and sound detection AI to help its cars respond to passing emergency vehicles, one of several efforts intended to bolster the cars’ situational awareness.
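
Cruise hasn’t detailed its sound detection approach. As a toy stand-in, a detector could score how much of an audio window’s spectral energy falls in the band where sirens typically sweep; a production system would use a trained audio classifier instead. The band limits below are rough assumptions:

```python
import numpy as np

def siren_likelihood(audio: np.ndarray, sample_rate: int = 16000,
                     band_hz: tuple = (500.0, 1800.0)) -> float:
    """Score how much of the window's spectral energy falls in the band where
    sirens typically sweep. Purely illustrative; not Cruise's approach."""
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    total = spectrum.sum()
    return float(spectrum[in_band].sum() / total) if total > 0 else 0.0
```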

“One of the challenges of machine learning and applying it — and one of the reasons autonomous vehicles are one of the most exciting and difficult AI problems on the planet — is handling the long tail. There’s also the fact that we expect extremely high performance levels from autonomous vehicles and their underlying models,” said Mehanna. “Unlike most machine learning applications, which are either advertisements or recommendations for movies or Amazon, no one gets hurt if you show them a bad movie recommendation or organic search results. With autonomous vehicles, the bar is much higher.”

Cruise continues to test its cars in Scottsdale, Arizona, and the metropolitan Detroit area, with the bulk of deployment concentrated in San Francisco. It has scaled up rapidly, growing its starting fleet of 30 driverless vehicles to about 130 by June 2017. Cruise hasn’t disclosed the exact total publicly, but the company has 180 self-driving cars registered with California’s DMV, and three years ago documents obtained by IEEE Spectrum suggested it planned to deploy as many as 300 test cars around the country.


Author: Kyle Wiggers
Source: VentureBeat
