
How AI is impacting the automotive world

Presented by Qualcomm Technologies, Inc.


The idea of self-driving cars has captured consumer imagination, but AI is having a much broader impact across the entire automotive industry. From design, manufacturing, and infrastructure to predictive maintenance, safety, and a slew of AI-enabled cockpit features, auto experiences are evolving and improving. The result is a foundation-disrupting shift for auto manufacturers, smart cities, and consumers alike.

Not surprisingly, on-device AI is the primary force driving this transformation. It’s enabling compute-intensive AI workloads, such as complex neural network models, to execute in real time with high accuracy. In addition, as chipset providers add advanced AI capabilities and waterfall them across product tiers, sophisticated use cases are arriving not just in luxury vehicles, but in entry-level cars as well.

Overall, the impact of AI on the auto industry can be broken down across three pillars: the in-vehicle experience, autonomous driving, and smart transportation. Here’s a look at some of the most profound advances that AI technology is powering in the automotive world.

The AI-powered cockpit

Cars have been connected for fifteen years now, dating back to when the first cellular modems were rolled out in higher-end vehicles. Now AI has transformed the entire in-vehicle experience for cars at every price point. It’s redefining every aspect of the cockpit, including personalization, in-car virtual assistants, driver behavior monitoring, and intelligent driver assistance systems.

For auto manufacturers, the digital cockpit has long been seen as an extension of their brand. Now, manufacturers are developing their own differentiated software and applications to control the entire user experience. They are working with relevant ecosystem partners to create more value for their customers, including a more interactive, personal, and intuitive experience. There’s already more AI in your car than you are probably aware of: it serves as the glue that connects in-vehicle systems and experiences, enhancing both usability and passenger safety.

AI powers personalized infotainment systems with customized car settings, content, and recommendations that learn from a user’s preferences and habits over time. For example, an AI-powered infotainment system will know the exact movie to play for your kids in the backseat as well as the playlist to soothe your nerves after a stressful day. In-car virtual assistants with an AI-powered voice UI let you engage with your car’s systems in a simple, intuitive way. Instead of swiping through screens looking for options, you simply speak: ask for directions, change the temperature, play a movie for the kids in the back seat, request a podcast, and more. Advancements in AI speech recognition, natural language processing, and text-to-speech make operating these cockpits easier than ever.
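To make the idea concrete, here’s a minimal, hypothetical sketch (in Python) of the intent-routing step that sits between speech recognition and the car’s subsystems. The intent names, keywords, and handlers are invented for illustration; production assistants use trained natural language models rather than keyword matching:

```python
# Hypothetical intent router for an in-car voice assistant.
# In a real system the transcript would come from an on-device
# speech-recognition model; here we stand it in with plain strings.

INTENT_KEYWORDS = {
    "navigation": ["directions", "navigate", "route"],
    "climate": ["temperature", "warmer", "cooler", "degrees"],
    "media": ["play", "movie", "podcast", "playlist"],
}

def classify_intent(transcript: str) -> str:
    """Very rough keyword matcher standing in for a trained NLU model."""
    words = transcript.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in words for k in keywords):
            return intent
    return "unknown"

def handle_utterance(transcript: str) -> str:
    # Dispatch to the (hypothetical) vehicle subsystem for each intent.
    handlers = {
        "navigation": "Routing request sent to navigation system",
        "climate": "Climate control adjusted",
        "media": "Media request sent to rear-seat entertainment",
        "unknown": "Sorry, I didn't catch that",
    }
    return handlers[classify_intent(transcript)]

if __name__ == "__main__":
    for said in ("Ask for directions to the office",
                 "Set the temperature to 70 degrees",
                 "Play a movie for the kids in the back seat"):
        print(f"{said!r} -> {handle_utterance(said)}")
```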

The car’s intelligent driver assistance system includes in-cabin monitoring and ultra-HD surround-view monitoring. With an inward-facing camera, the AI-powered system can monitor whether the driver is attentive and alert by tracking facial expressions, voice, gestures, body language, and more.
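One widely used drowsiness cue is the eye aspect ratio (EAR), which drops toward zero as the eyes close. The sketch below shows only this single cue with illustrative thresholds; real in-cabin systems fuse many signals, and the landmark input here is assumed to come from a separate face-tracking model:

```python
# A sketch of a single drowsiness cue: the eye aspect ratio (EAR)
# of Soukupova & Cech (2016). Landmarks are assumed to come from a
# face-tracking model; the threshold and window are illustrative.
import math
from collections import deque

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered as in the
    classic EAR formulation (corners at indices 0 and 3)."""
    vertical = math.dist(eye[1], eye[5]) + math.dist(eye[2], eye[4])
    horizontal = math.dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

EAR_THRESHOLD = 0.2  # below this, the eye is likely closed
WINDOW = 30          # roughly one second of frames at 30 fps
recent = deque(maxlen=WINDOW)

def update(eye_landmarks) -> bool:
    """Returns True when the eyes have been closed for most of the
    recent window, i.e. a drowsiness alert should fire."""
    recent.append(eye_aspect_ratio(eye_landmarks) < EAR_THRESHOLD)
    return len(recent) == WINDOW and sum(recent) > 0.8 * WINDOW

# Fabricated landmarks for a wide-open eye:
open_eye = [(0, 0), (1, 1), (2, 1.2), (3, 0), (2, -1.2), (1, -1)]
print(round(eye_aspect_ratio(open_eye), 2))  # ~0.73, well above the threshold
```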

Outside sensors, from cameras to radar and ultrasound, monitor and report driving conditions in real time while the car is in motion. As cars become capable of perceiving the world and developing contextual awareness, they can provide not only intelligent alerts but also more advanced driver assistance. For instance, if the vehicle spots an icy road ahead, it can warn the driver, engage all-wheel drive, and potentially slow the car down. These are baby steps toward autonomy, which will not happen overnight.
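This alert-then-assist escalation can be pictured as a simple policy over perceived conditions. The toy sketch below uses invented condition names and thresholds; production ADAS logic is safety-certified and far more conservative:

```python
# A toy illustration of alert-then-assist escalation. Condition
# names, thresholds, and responses are invented for this sketch.
from dataclasses import dataclass

@dataclass
class RoadCondition:
    kind: str          # e.g. "ice", "heavy_rain", "clear"
    distance_m: float  # how far ahead the hazard was perceived

def respond(cond: RoadCondition, speed_kmh: float) -> list[str]:
    actions = []
    if cond.kind == "ice":
        actions.append("warn driver: icy road ahead")
        actions.append("engage all-wheel drive")
        if speed_kmh > 60 and cond.distance_m < 200:
            actions.append("gently reduce speed")
    elif cond.kind == "heavy_rain":
        actions.append("warn driver: reduced traction")
    return actions

print(respond(RoadCondition("ice", 150.0), speed_kmh=90))
# ['warn driver: icy road ahead', 'engage all-wheel drive', 'gently reduce speed']
```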

AI is the common thread powering these new capabilities. The deep learning revolution has allowed for dramatic improvements in computer vision, voice recognition, object classification, scene understanding, and more. However, the concurrency of all these functions creates complex and compute-intensive workloads.

Platforms like the 3rd Generation Qualcomm Snapdragon Automotive Cockpit Platform are stepping up to help enable these functions. It’s the automotive industry’s first-announced scalable AI-based platform designed to support the higher levels of computing and intelligence needed for advanced capabilities.

Qualcomm Technologies platforms have also advanced to the point where new services can be introduced to connected cars more rapidly, so post-sales upgrades to the cockpit are easier than ever. Faster development cycles and easier updates mean vehicles can transition to improved AI models as they emerge, enabling rapid incremental improvements in driver experiences, from natural voice UI to more robust hands-free driving. Incremental advancements like this can help drivers gradually adjust to the new autonomous driving paradigm.

The path to autonomous driving

Consumers may be eager for the benefits and promise of a fully autonomous car in theory. But in practice, taking your hands off the wheel is unquestionably unnerving. One of the great benefits of a personalized, immersive infotainment system and digital cockpit is that it helps drivers trust the system. The more clearly you understand what the car is doing and why, such as alerting you to your surroundings and what it plans to do in response, the more you trust the system.

The gradual ramp-up in levels of automation is helping pave the way to self-driving cars, but we’re not there yet. Currently, the industry is focused on five levels of autonomy for Advanced Driver-Assistance Systems (ADAS), with Level 5 providing full autonomy in cars that do not even have steering wheels.

Next-gen cars feature Level 2 and Level 3 autonomy solutions, which improve safety, convenience, and productivity for passenger vehicles. These applications rely on the fusion of various complementary sensors: camera, radar, lidar, cellular vehicle-to-everything (C-V2X), and positioning. The input from all of these sensors allows the vehicle to perceive and understand its environment, plan its path, and take the right actions to keep the car and its passengers safe. At these levels, human override is still required, and the car can alert the driver to take control when action is needed.
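As a rough illustration of what fusion buys, here is a simplified late-fusion sketch in which detections from different sensors are clustered and merged, with cross-sensor agreement raising confidence. The gating distance and confidence math are illustrative only; real stacks use learned fusion and object tracking:

```python
# A simplified late-fusion sketch: each sensor reports candidate
# detections with its own confidence; detections that agree across
# sensors are merged, and agreement raises overall confidence.
import math
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "camera", "radar", or "lidar"
    x: float           # meters ahead of the vehicle
    y: float           # lateral offset in meters
    confidence: float  # sensor-reported score in [0, 1]

GATE_M = 2.0  # detections closer than this are treated as one object

def fuse(detections: list[Detection]) -> list[dict]:
    fused, used = [], set()
    for i, det in enumerate(detections):
        if i in used:
            continue
        cluster = [det]
        for j in range(i + 1, len(detections)):
            other = detections[j]
            if (j not in used and abs(other.x - det.x) < GATE_M
                    and abs(other.y - det.y) < GATE_M):
                cluster.append(other)
                used.add(j)
        weight = sum(c.confidence for c in cluster)
        fused.append({
            "x": sum(c.x * c.confidence for c in cluster) / weight,
            "y": sum(c.y * c.confidence for c in cluster) / weight,
            "sensors": sorted({c.sensor for c in cluster}),
            # Independent agreement compounds: 1 - prod(1 - p_i).
            "confidence": 1 - math.prod(1 - c.confidence for c in cluster),
        })
    return fused

print(fuse([Detection("camera", 40.1, 0.2, 0.7),
            Detection("radar", 40.6, 0.0, 0.6),
            Detection("lidar", 39.9, 0.1, 0.8)]))
```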

The camera is a crucial sensor: cameras in the front, sides, and back, with both near and far views, offer a surround view. These cameras, powered by deep neural networks, are the car’s eyes, capable of identifying objects, cars, pedestrians, and so on. They can read signs and understand where the lanes in the road are located. The camera is also useful for precise positioning. For example, the Qualcomm Vision Enhanced Precise Positioning (VEPP) software combines the output of multiple sensors, including Global Navigation Satellite System (GNSS), camera, Inertial Measurement Unit (IMU), and wheel sensors, to offer accurate and cost-effective global vehicle positioning, accurate to within 1 meter.
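Qualcomm has not published VEPP’s internals, but systems like it build on a standard estimation pattern: predict position from dead reckoning (IMU and wheel sensors), then correct with an absolute GNSS fix, weighting each by its uncertainty. Here is that textbook Kalman-style cycle reduced to one dimension, with illustrative numbers:

```python
# A textbook Kalman-style predict/update cycle in one dimension.
# This is the generic pattern multi-sensor positioning builds on,
# not VEPP's actual algorithm; all numbers are illustrative.
def kalman_1d(pos, var, odometry_delta, odo_var, gnss_pos, gnss_var):
    # Predict: advance the estimate using wheel-odometry motion.
    pos += odometry_delta
    var += odo_var
    # Update: blend in the GNSS measurement, weighted by uncertainty.
    gain = var / (var + gnss_var)
    pos += gain * (gnss_pos - pos)
    var *= (1 - gain)
    return pos, var

pos, var = 0.0, 1.0
for step in range(5):
    # Simulated inputs: the car moves ~1 m per step; GNSS is noisy.
    pos, var = kalman_1d(pos, var,
                         odometry_delta=1.0, odo_var=0.05,
                         gnss_pos=(step + 1) * 1.0 + 0.3, gnss_var=2.0)
    print(f"step {step}: position ~ {pos:.2f} m, variance {var:.3f}")
```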

Radar is already in many cars, usually in the bumpers to detect proximity. Research has shown that applying AI to radar can extract much more information, such as detecting and locating objects like other moving vehicles. This is particularly useful because radar works in various conditions (rain or snow, day or night), since it’s an active sensor that emits electromagnetic waves, usually in the mmWave spectrum. Lidar is also an active sensor; it emits laser pulses and receives their reflections. It offers higher resolution and more points of reference, creating a 3D point cloud of the environment. However, the downside of lidar is its cost.
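A classic pre-AI building block for pulling detections out of radar returns is the cell-averaging CFAR (constant false alarm rate) detector, which flags range bins that stand well above the local noise estimate; learned models extend this idea considerably. A small simulated example:

```python
# Cell-averaging CFAR detection on a simulated radar range profile:
# each bin is compared against the average of nearby "training"
# bins, so targets must stand well above the local noise floor.
# All numbers here are illustrative.
import random

random.seed(0)
profile = [random.expovariate(1.0) for _ in range(64)]  # simulated noise
profile[20] += 15.0  # strong injected target (e.g. a vehicle ahead)
profile[45] += 8.0   # weaker injected target

GUARD, TRAIN, SCALE = 2, 8, 4.0  # guard cells, training cells, threshold factor

detections = []
for i in range(len(profile)):
    train = [profile[j] for j in range(len(profile))
             if GUARD < abs(j - i) <= GUARD + TRAIN]
    noise_estimate = sum(train) / len(train)
    if profile[i] > SCALE * noise_estimate:
        detections.append(i)

print("detections at range bins:", detections)  # targets at 20 and 45 should appear
```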

C-V2X acts as a sensor and is designed to allow the vehicle to communicate with other devices, from other cars on the road to the smartphone in a pedestrian’s hand to the smart infrastructure in the surrounding environment, such as traffic signals and intelligent roadside units (RSUs), special wireless access points that can be connected to roadside infrastructure. It can also communicate with the cellular network and tap the collective intelligence in the cloud. The unique benefit of C-V2X compared to other sensors is that it can essentially see around corners and beyond line of sight. Automakers and roadside infrastructure providers are looking to C-V2X solutions, such as the Qualcomm 9150 C-V2X chipset, to provide enhanced capabilities for safety and autonomous driving.
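Real C-V2X messages, such as the SAE J2735 Basic Safety Message, are ASN.1-encoded and sent over the PC5 sidelink radio. The sketch below uses JSON over UDP purely to illustrate the broadcast pattern; the field names are simplified stand-ins:

```python
# Illustrative stand-in for a C-V2X broadcast. Real deployments use
# ASN.1-encoded SAE J2735 messages over PC5 sidelink, not JSON/UDP;
# the field names below are simplified for this sketch.
import json
import socket
import time

def basic_safety_message(vehicle_id, lat, lon, speed_mps, heading_deg):
    return {
        "msg_type": "BSM",  # simplified stand-in field names
        "id": vehicle_id,
        "timestamp": time.time(),
        "position": {"lat": lat, "lon": lon},
        "speed_mps": speed_mps,
        "heading_deg": heading_deg,
    }

def broadcast(msg: dict, port: int = 47000) -> None:
    """Send one message to the local broadcast address."""
    payload = json.dumps(msg).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, ("255.255.255.255", port))

broadcast(basic_safety_message("veh-42", 37.7749, -122.4194, 13.4, 90.0))
```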

Each sensor has its own strengths. It’s the combination of all these sensors, fused by AI algorithms, that creates an accurate, reliable, real-time perception and understanding of the environment. Right now, that allows the human operator to make the safest choices. In the future, as AI algorithms advance, sensors grow even more sophisticated, and cities begin to implement smart infrastructure, the vehicle itself will be able to make these choices and pilot safely.

The rise of the smart city and smart transportation

Major shifts are being made in developing next-generation city infrastructure and transportation networks, and AI is playing a vital role in that development. Over the next decade, as advances in technology come into play, transportation networks will become smarter, safer, and more efficient for pedestrians, commuters, and drivers.

Next-generation infrastructure will run AI for perception and sensor fusion to build road world models, which are essentially very accurate 3D HD maps of the environment. Much like the car, smart city infrastructure could take inputs from multiple high-resolution cameras and radars, then run neural networks, precise positioning, and sensor fusion algorithms to generate regularly updated maps. The infrastructure could then send road world models and local 3D HD map updates to cars at an intersection through C-V2X communication.

For example, intelligent roadside units with AI-powered cameras and radars will be able to detect obstacles such as road construction, traffic congestion, or lane reconfiguration during emergencies. These real-time updates allow smart infrastructure systems to keep an up-to-date model of the roadways. C-V2X direct communication can then relay this data to connected vehicles and offer solutions, instructing cars to take new or alternate routes to avoid blockages, or even to change lanes, speed up, slow down, or stop as necessary to keep traffic flowing smoothly.
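The bookkeeping behind “an up-to-date model” can be sketched simply: new detections overwrite old ones, and observations that stop being re-confirmed expire. The class and the 10-second horizon below are illustrative assumptions:

```python
# A sketch of the bookkeeping an RSU needs to keep its road model
# current: fresh detections overwrite old ones, and observations
# that stop being re-confirmed expire. Names and the 10-second
# staleness horizon are illustrative.
import time

class RoadModel:
    STALE_AFTER_S = 10.0

    def __init__(self):
        self._obstacles = {}  # obstacle_id -> (kind, lane, last_seen)

    def observe(self, obstacle_id, kind, lane):
        self._obstacles[obstacle_id] = (kind, lane, time.time())

    def current(self):
        """Drop anything the sensors have not re-confirmed recently."""
        now = time.time()
        self._obstacles = {k: v for k, v in self._obstacles.items()
                           if now - v[2] < self.STALE_AFTER_S}
        return [{"id": k, "kind": v[0], "lane": v[1]}
                for k, v in self._obstacles.items()]

model = RoadModel()
model.observe("cone-7", "road_construction", lane=2)
print(model.current())  # this snapshot is what gets broadcast over C-V2X
```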

For pedestrian detection and warning, an RSU with AI-powered camera perception can detect a pedestrian intending to cross the street. Using C-V2X direct communication, the RSU broadcasts a warning message over the air to alert cars at the intersection and provide them with the pedestrian’s approximate location. In addition, a smartphone with C-V2X enabled could send its own precise position to make cars aware of its owner’s location.
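Building on the simplified messaging sketch above, a hypothetical RSU payload for this scenario might look like the following. Real deployments use the standardized Personal Safety Message from SAE J2735; these field names are invented:

```python
# Hypothetical RSU pedestrian-warning payload, plus the relevance
# check each receiving car performs locally. Real systems use the
# SAE J2735 Personal Safety Message; these fields are invented.
import json
import time

def pedestrian_warning(ped_lat, ped_lon, heading_deg, crossing=True):
    return json.dumps({
        "msg_type": "PEDESTRIAN_WARNING",
        "timestamp": time.time(),
        "position": {"lat": ped_lat, "lon": ped_lon},
        "heading_deg": heading_deg,
        "crossing_intent": crossing,  # inferred by the RSU's camera AI
    })

def relevant(msg: str, car_lat, car_lon, radius_deg=0.001) -> bool:
    """Crude proximity gate (~100 m in latitude) for illustration."""
    p = json.loads(msg)["position"]
    return (abs(p["lat"] - car_lat) < radius_deg
            and abs(p["lon"] - car_lon) < radius_deg)

warning = pedestrian_warning(37.7750, -122.4195, heading_deg=180.0)
print(relevant(warning, 37.7749, -122.4194))  # True: alert the driver
```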

RSUs can also broadcast over-the-air messages about time-to-signal changes, leveraging Signal Phase and Timing (SPaT) messaging. This lets the car and driver know when the traffic signal will turn green or red, and it allows the city to manage traffic for the most efficient flow.
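Given a SPaT-style payload, the computation a car performs is straightforward: look up its signal group and read off the time until the next green. The field names below are simplified for illustration; actual SPaT messages encode phase states and time marks per signal group:

```python
# "How long until my lane's light turns green?" computed from a
# simplified SPaT-style payload. Field names are illustrative;
# real SAE J2735 SPaT messages are more detailed.
def seconds_until_green(spat: dict, signal_group: int, now_s: float) -> float:
    phase = spat["phases"][signal_group]
    if phase["state"] == "green":
        return 0.0
    return max(0.0, phase["next_green_s"] - now_s)

spat_msg = {
    "intersection": "5th-and-Main",
    "phases": {
        1: {"state": "red", "next_green_s": 1042.0},
        2: {"state": "green", "next_green_s": 1090.0},
    },
}

# A car approaching on signal group 1 at local time 1030 s:
wait = seconds_until_green(spat_msg, signal_group=1, now_s=1030.0)
print(f"Light turns green in {wait:.0f} s")  # could also drive eco-approach speed advice
```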

What will make smart cities work is a combination of C-V2X, next-generation RSUs, autonomous cars, and holistic city-level intelligence provided by a cloud AI based on the aggregated data. In the end, autonomy is going to save lives. It’s going to give us back a lot more time while making transportation more enjoyable for everyone. And right now, it’s opening up exponential opportunity for the auto industry.


Footnotes to the VEPP positioning comparison:
1. VEPP uses GNSS corrections for atmospheric and clock offsets from a reference Real Time Kinematic (RTK) network. These corrections are typically obtained from a fixed reference station near the vehicle.
2. DGPS is based on unfiltered positioning representing the level of GNSS multipath. The DGPS performance is not representative of the performance achievable with Qualcomm’s GNSS solutions.
3. The high-precision GNSS/INS output shown is the real-time output, without the benefit of post-processing. The large errors incurred are noteworthy despite the use of a survey-grade IMU.

Qualcomm Snapdragon, Qualcomm 9150, and Qualcomm Automotive products, including Qualcomm VEPP and Qualcomm location solutions, are products and/or solutions of Qualcomm Technologies, Inc. and/or its subsidiaries.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact sales@venturebeat.com.


Author: VB Staff
Source: VentureBeat
