Tesla is starting to release a new Full Self-Driving (FSD) Beta software update that includes many high-level changes intended to improve performance.
FSD Beta enables Tesla vehicles to drive autonomously to a destination entered in the car’s navigation system, but the driver needs to remain vigilant and ready to take control at all times.
Since the responsibility lies with the driver and not Tesla’s system, it is still considered a level-two driver-assist system despite its name. The program has been something of a “two steps forward, one step back” effort, as some updates have regressed driving capabilities.
Tesla has been frequently releasing new software updates to the FSD Beta program and adding more owners to it.
The company now has over 100,000 people in the FSD Beta program and plans to expand it to everyone who buys access in North America by the end of the year through a few more software updates to refine the system.
Considering we are already in November and it generally takes Tesla at least a month to deliver a new FSD Beta update, Tesla is likely only one or two updates away from the promised wider release.
Today, the automaker started pushing a new FSD Beta update (v10.69.3) to employees for internal testing, which generally means it will expand to beta testers in the customer fleet soon.
According to the release notes below, the update doesn’t add any new features, but it does include many high-level updates to Tesla’s neural nets aimed at improving the overall performance of the system.
Tesla Full Self-Driving Beta v10.69.3 release notes, via Not a Tesla App:
– Upgraded the Object Detection network to photon count video streams and retrained all parameters with the latest autolabeled datasets (with a special emphasis on low visibility scenarios).
– Improved the architecture for better accuracy and latency, higher recall of far away vehicles, lower velocity error of crossing vehicles by 20%, and improved VRU precision by 20%.
– Converted the VRU Velocity network to a two-stage network, which reduced latency and improved crossing pedestrian velocity error by 6%.
– Converted the non-VRU Attributes network to a two-stage network, which reduced latency, reduced incorrect lane assignment of crossing vehicles by 45%, and reduced incorrect parked predictions by 15%.
– Reformulated the autoregressive Vector Lanes grammar to improve the precision of lanes by 9.2%, recall of lanes by 18.7%, and recall of forks by 51.1%. Includes a full network update where all components were retrained with 3.8x the amount of data.
– Added a new “road markings” module to the Vector Lanes neural network which improves lane topology error at intersections by 38.9%.
– Upgraded the Occupancy Network to align with road surface instead of ego for improved detection stability and improved recall at hill crest.
– Reduced runtime of candidate trajectory generation by approximately 80% and improved smoothness by distilling an expensive trajectory optimization procedure into a lightweight planner neural network.
– Improved decision-making for short-deadline lane changes around gores by richer modeling of the trade-off between going off-route versus trajectory required to drive through the gore region.
– Reduced false slowdowns for pedestrians near crosswalks by using a better model for the kinematics of the pedestrian.
– Added control for more precise object geometry as detected by the general occupancy network.
– Improved control for vehicles cutting out of our desired path by better modeling of their turning/lateral maneuvers thus avoiding unnatural slowdowns.
– Improved longitudinal control while offsetting around static obstacles by searching over feasible vehicle motion profiles.
– Improved longitudinal control smoothness for in-lane vehicles during high relative velocity scenarios by also considering relative acceleration in the trajectory optimization.
– Reduced best-case object photon-to-control system latency by 26% through adaptive planner scheduling, restructuring of trajectory selection, and parallelizing perception compute. This allows us to make quicker decisions and improves reaction time.
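The relative-acceleration point in the longitudinal control note above can be illustrated with a short kinematics sketch. This is a hypothetical example, not Tesla’s code: under a constant-acceleration model, the gap to a lead vehicle evolves as gap(t) = gap0 + v_rel·t + ½·a_rel·t², so a planner that only looks at relative velocity underestimates how quickly a braking lead car closes the gap.

```python
def predicted_min_gap(gap0, v_rel, a_rel, horizon=5.0, steps=50):
    """Smallest predicted gap (m) to a lead vehicle over the horizon (s).

    Hypothetical illustration, not Tesla's planner. v_rel and a_rel are
    lead-minus-ego velocity (m/s) and acceleration (m/s^2); negative
    values mean the gap is closing.
    """
    min_gap = gap0
    for i in range(steps + 1):
        t = horizon * i / steps
        # Constant-acceleration kinematics for the gap at time t.
        gap = gap0 + v_rel * t + 0.5 * a_rel * t * t
        min_gap = min(min_gap, gap)
    return min_gap
```

With a 50 m gap and the lead car 5 m/s slower, a velocity-only model (a_rel = 0) predicts a 25 m minimum gap over 5 seconds; factoring in a modest lead-car braking of 2 m/s² predicts the gap shrinking to zero, which is the kind of case where considering relative acceleration lets the planner start a smoother, earlier slowdown.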
Author: Fred Lambert
Source: Electrek