Cleantech & EVs News

Tesla starts pushing new Full Self-Driving Beta update with improvements

Tesla has started to push a new Full Self-Driving Beta software update with a series of improvements ahead of the v12 update, which is supposed to take the system out of beta.

FSD Beta enables Tesla vehicles to drive autonomously to a destination entered in the car’s navigation system, but the driver needs to remain vigilant and ready to take control at all times.

FSD Beta originally launched in October 2020, and Tesla has been expanding the program to more vehicles in North America with every software update since.

Tesla has been regularly releasing updates aimed at making the system safer, with the eventual goal of fulfilling its promise of true self-driving, at which point Tesla would take responsibility for the vehicle and drivers would no longer need to pay attention.

It’s unclear when that is going to happen, but in the meantime, Tesla keeps pushing updates like this one, version 11.4.8.

Release notes state that Tesla is making incremental improvements to several different aspects of FSD Beta:

  • Added option to activate Autopilot with a single stalk depression, instead of two, to help simplify activation and disengagement.
  • Introduced a new efficient video module to the vehicle detection, semantics, velocity, and attributes networks that allowed for increased performance at lower latency. This was achieved by creating a multi-layered, hierarchical video module that caches intermediate computations to dramatically reduce the amount of compute that happens at any particular time.
  • Improved distant crossing object detections by an additional 6%, and improved the precision of vehicle detection by refreshing old datasets with better autolabeling and introducing the new video module.
  • Improved the precision of cut-in vehicle detection by 15%, with additional data and the changes to the video architecture that improve performance and latency.
  • Reduced vehicle velocity error by 3%, and reduced vehicle acceleration error by 10%, by improving the autolabeled datasets, introducing the new video module, and aligning model training and inference more closely.
  • Reduced the latency of the vehicle semantics network by 15% with the new video module architecture, at no cost to performance.
  • Reduced the error of pedestrian and bicycle rotation by over 8% by leveraging object kinematics more extensively when jointly optimizing pedestrian and bicycle tracks in autolabeled datasets.
  • Improved geometric accuracy of Vision Park Assist predictions by 16%, by leveraging 10x more HW4 data, tripling resolution, and increasing overall stability of measurements.
  • Improved path blockage lane change accuracy by 10% due to updates to static object detection networks.
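The release notes describe a hierarchical video module that caches intermediate computations so that a sliding temporal window only pays the per-frame cost once. Tesla has not published its implementation; the following is a minimal, purely illustrative Python sketch of that general caching idea (all class and method names are hypothetical, and the "encoder" is a stand-in for an expensive neural network pass):

```python
from collections import deque

class HierarchicalVideoModule:
    """Illustrative sketch only, not Tesla's architecture: each frame's
    features are computed once and cached, so temporal aggregation over a
    sliding window reuses cached results instead of re-encoding old frames."""

    def __init__(self, window=4):
        self.window = window
        self.cache = deque(maxlen=window)  # cached intermediate computations
        self.frame_encodes = 0             # counts expensive per-frame passes

    def encode_frame(self, frame):
        # Stand-in for an expensive per-frame network forward pass.
        self.frame_encodes += 1
        return sum(frame) / len(frame)

    def step(self, frame):
        # Only the newest frame is encoded; older features come from cache.
        self.cache.append(self.encode_frame(frame))
        # Temporal aggregation over the cached window (cheap).
        return sum(self.cache) / len(self.cache)

module = HierarchicalVideoModule(window=4)
for t in range(10):
    out = module.step([t, t + 1, t + 2])

# 10 frames cost 10 per-frame passes; naively re-encoding the whole
# 4-frame window each step would cost roughly 4x as many.
print(module.frame_encodes)  # 10
```

The latency claim in the release notes follows from this structure: per-step cost stays constant in the window size because only the incremental computation is new.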

While the update looks significant based on the release notes, it’s still not v12, which is going to include neural net-based vehicle controls, nor the true full self-driving that CEO Elon Musk said would happen by the end of the year.


Author: Fred Lambert
Source: Electrek

