Cleantech & EVs News

Hacker shows what Tesla Full Self-Driving’s vision depth perception neural net can see

A hacker managed to pull Tesla’s vision depth perception neural net from his car equipped with the “Full Self-Driving” package.

You can see how the vehicle detects depth with a point-cloud view powered by computer vision.

Tesla has recently started to move away from its radar sensor, which is useful for detecting depth, and is instead relying solely on camera-based computer vision.

This is a very different approach from the rest of the industry, which uses not only radar but also lidar sensors.

Tesla CEO Elon Musk maintains that cameras and neural nets are the keys to achieving self-driving.

He told Electrek last month:

The whole road system is designed to work with optical imagers (eyes) and neural nets (brain). That’s why cameras and silicon neural nets are the solution.

One of the problems with dropping radar is that it complicates depth perception, a task radar excels at. Tesla instead plans to detect depth with a point cloud generated from its cameras and neural nets.
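Tesla has not published the details of this pipeline, but the general idea is standard in computer vision: a neural net predicts a depth value for each camera pixel, and those per-pixel depths are then unprojected into 3D points using the camera geometry. A minimal sketch, assuming a simple pinhole camera model with hypothetical intrinsic parameters (`fx`, `fy`, `cx`, `cy` are illustrative values, not Tesla's):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Unproject a per-pixel depth map into a 3D point cloud
    using a pinhole camera model. Illustrative only; Tesla's
    actual camera parameters and pipeline are not public."""
    h, w = depth.shape
    # Pixel coordinate grids (u along width, v along height)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # horizontal offset from the optical axis
    y = (v - cy) * z / fy  # vertical offset from the optical axis
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a 4x4 depth map where everything is 10 m ahead
depth = np.full((4, 4), 10.0)
points = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(points.shape)  # (16, 3)
```

In a real system, the depth map would come from the neural net's output rather than a constant array, and the resulting points would be rendered as the kind of point-cloud view seen in the video below.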

Green, a Tesla hacker who has obtained root access to the software in Tesla vehicles, accessed the depth perception neural net in Tesla’s Full Self-Driving package and released a video of it:

As Green mentioned, this neural net can actually produce a 3D view of the vehicle’s surroundings.

But the resolution is not as high as shown by Tesla in a previous presentation:

However, this new neural net is what runs live in the vehicle as it drives around with “city driving” activated in Tesla’s Full Self-Driving package.

Green noted that this neural net runs only on the main front-facing camera, one of three front-facing cameras and one of the vehicle’s eight cameras in total.

This neural net should soon reach more Tesla vehicles as the automaker expands its FSD Beta rollout with the release of the FSD Beta v9 software update.



Author: Fred Lambert
Source: Electrek
