
What is Autonomous AI?

Autonomous artificial intelligence is defined as routines designed to allow robots, cars, planes and other devices to execute extended sequences of maneuvers without guidance from humans. The revolution in artificial intelligence (AI) has reached a stage at which current solutions can reliably complete many simple, coordinated tasks. Now the goal is to extend this capability by developing algorithms that can plan ahead and build a multistep strategy for accomplishing more.

Thinking strategically requires a different approach from the one behind many successful, well-known AI applications. Machine vision or speech recognition algorithms, for instance, focus on a particular moment in time and have access to all of the data that they might need. Many machine learning applications work with training sets intended to cover the full range of situations they will encounter.

Autonomous operation often requires imagining a number of potential outcomes for the future, anticipating possible problems and then setting a course of action that minimizes the dangers while maximizing some other factor like speed or reliability. Learning to play the game of chess is good training for these tasks, both for computers and humans. 
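
To make the tradeoff concrete, here is a minimal sketch, in Python, of planning as scoring candidate maneuvers. The maneuvers, risk estimates and weighting below are invented for illustration and are not drawn from any real system.

    # Minimal sketch: strategic planning as scoring candidate maneuvers.
    # The maneuvers, risk estimates and weights are made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        expected_speed: float   # average speed along this course, in m/s
        collision_risk: float   # estimated chance of trouble, 0.0 to 1.0

    def choose_maneuver(candidates, risk_weight=50.0):
        """Pick the maneuver that maximizes speed while penalizing risk."""
        return max(candidates,
                   key=lambda m: m.expected_speed - risk_weight * m.collision_risk)

    options = [
        Maneuver("stay in lane", expected_speed=25.0, collision_risk=0.01),
        Maneuver("overtake now", expected_speed=31.0, collision_risk=0.15),
        Maneuver("slow and wait", expected_speed=18.0, collision_risk=0.002),
    ]
    print(choose_maneuver(options).name)  # prints "stay in lane" with these numbers

Real planners search over far richer spaces of trajectories, but the basic balance between speed and danger is the same.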

Autonomous devices can already rely upon a number of mature technologies that were developed to help humans. Elaborate digital maps of roads exist, along with well-tested tools for finding the best route through them, and sonar sensors and cameras already warn of potential collisions.

Much of the work of creating autonomy involves strategic algorithms as well as building better sensors and interpreting their results. Some companies are pushing for better cameras with active lighting from lasers to provide more precise information about the world. Others are trying to use improved mathematical models to squeeze more information out of standard sensors.

What are the important parts of Autonomous AI? 

The field is still very new, and researchers are continually refining their algorithms and approaches, but it is common to break the job into these layers:

  • Sensing — Building a model of the constantly shifting world requires a collection of sensors, usually cameras, often paired with controlled lighting from lasers or other sources. The sensors usually also include position information from GPS or some other independent mechanism. 
  • Fusion — The details from the various sensors must be organized into a single, coherent view of what’s happening around the vehicle. Some views may be occluded, some sensors may be failing, and the readings may not always be consistent. The sensor fusion algorithms must sort through the details and construct a reliable model that can be used in later stages for planning. 
  • Perception — Once the model is constructed, the system must identify important features such as roads, paths and moving objects. 
  • Planning — Finding the best path forward requires studying the model and also importing information from other sources like mapping software, weather forecasts, traffic sensors and more. 
  • Control — After a path is chosen, the device must ensure that the motors and steering move it along that path without being diverted by bumps or small obstacles. 

In general, information flows from the top layer of the sensors down to the control layer as decisions are made. There are feedback loops, though, that bring information from the lower layers back up to the top to improve sensing, planning and perception. 
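
As a rough illustration of how those layers might be wired together in code, here is a highly simplified sketch; every class and method name is hypothetical and stands in for far more complex components.

    # Highly simplified sketch of the layered pipeline described above.
    # Every class and method name here is hypothetical, not any vendor's API.
    class Pipeline:
        def __init__(self, sensors, fusion, perception, planner, controller):
            self.sensors = sensors          # cameras, lidar, GPS receivers, ...
            self.fusion = fusion            # merges readings into one world model
            self.perception = perception    # labels roads, paths and moving objects
            self.planner = planner          # picks a path using maps, traffic, weather
            self.controller = controller    # drives motors and steering along the path

        def step(self, feedback=None):
            readings = [sensor.read() for sensor in self.sensors]
            world = self.fusion.merge(readings, hints=feedback)  # feedback can refine fusion
            scene = self.perception.label(world)
            plan = self.planner.plan(scene)
            feedback = self.controller.follow(plan)
            return feedback  # fed back into the next cycle, closing the loop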

The systems also bring in data from external sources. One big advantage of autonomous systems appears when the devices communicate with each other, exchanging information in a process sometimes called “fleet learning.” Fusing those shared readings allows a device to make smarter decisions using historical data from other devices that may have passed through the same position earlier. Detecting moving objects like pedestrians can be difficult with only a few seconds of video because people may be standing still, but it becomes easier when the sensor data can be compared with similar images taken earlier in the day.
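
A toy illustration of that idea, assuming a shared log of positions and labels reported by other vehicles (the data and distance threshold below are invented), might look like this:

    # Toy illustration of fleet learning: checking a fresh detection against
    # observations other vehicles reported near the same spot earlier in the day.
    from math import dist

    fleet_history = [
        # (x, y) position in meters and the label another vehicle assigned there
        ((120.4, 48.2), "pedestrian"),
        ((121.0, 47.9), "pedestrian"),
        ((300.5, 12.0), "mailbox"),
    ]

    def corroborated(position, label, history, radius=2.0):
        """Return True if an earlier vehicle saw the same kind of object nearby."""
        return any(dist(position, spot) <= radius and seen == label
                   for spot, seen in history)

    print(corroborated((120.7, 48.0), "pedestrian", fleet_history))  # True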

What are some ways to simplify the job? 

Many autonomous systems work quite well by simplifying the environment and limiting the options. For example, autonomous shuttle trains have operated for years in amusement parks, airports and other controlled settings. Their routes are predetermined, limited and often kept free of moving obstacles, which simplifies each stage of the algorithm.

Many plans for working autonomous systems depend upon this kind of limited environment. Some, for example, speak of autonomous vehicles that circulate on industrial campuses. Others concentrate on warehouses. Minimizing random obstacles is key.

Another potential solution is to arrange for human override and minimize the amount of time it is needed. Some imagine that the cars might gently pause or freeze if the scene becomes too complex to interpret. Either the passenger or a remote operator in a central mission control facility can then take over until the issue is resolved.
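
One way to express that fallback, assuming the system produces some confidence score for how well it understands the scene (the score and threshold below are purely illustrative), is a simple rule:

    # Sketch of a pause-and-hand-off fallback. The confidence score and the
    # threshold are illustrative, not taken from any production system.
    def decide_control(scene_confidence, threshold=0.7):
        if scene_confidence >= threshold:
            return "autonomous"        # the AI keeps driving
        return "pause_for_human"       # stop gently, alert a passenger or remote operator

    for confidence in (0.95, 0.40):
        print(confidence, decide_control(confidence))
    # 0.95 autonomous
    # 0.4 pause_for_human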

What are the levels of autonomous AI vehicle guidance?

To simplify the progression to fully autonomous AIs guiding vehicles, some AI scientists break the transition from human to machine guidance into levels. This allows a legal framework to evolve and gives people a way to categorize their tools. The frameworks are not fixed; some, for instance, break the hierarchy into five levels and others into six. The distinctions are not firm, and some algorithms may exhibit behavior from two or three levels at the same time.

The levels are the following: 

  • Level 0 — The human makes all decisions except, perhaps, for some automatic systems like windshield wipers or heating. 
  • Level 1 — The human is able to start delegating responsibility for either braking or lane following to the car. 
  • Level 2 — The car will handle several major tasks like braking, acceleration or lane following, but the human must remain ready to take control at all times. Some systems may even require the human to keep their hands on the steering wheel. 
  • Level 3 — The human may turn away from the road occasionally for a short amount of time but must be ready to respond to an alarm in case they’re needed. The car is able to handle control over well-defined and mapped routes like freeways but not roads or paths that aren’t studied and mapped in advance. 
  • Level 4 — The human can turn to other tasks but can take control at any point. In some cases where the paths aren’t well understood by the AI, the human may be required to take over. 
  • Level 5 — The human can treat the service like a taxi and surrender all control. 

The levels are not precise because the success of the AI may depend upon the route. A particular set of algorithms may deliver close to full autonomy on well-defined paths like following freeway lanes with little traffic but may fail in unusual or undefined situations. 
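
For readers who want to tag or compare systems programmatically, the levels can be encoded in a few lines of Python; the one-line summaries below simply paraphrase the list above.

    # Compact encoding of the autonomy levels described above.
    from enum import IntEnum

    class AutonomyLevel(IntEnum):
        L0 = 0  # human makes all decisions
        L1 = 1  # car handles either braking or lane following
        L2 = 2  # car handles several tasks; human stays ready at all times
        L3 = 3  # human may briefly look away on well-mapped routes
        L4 = 4  # human can do other tasks but may still be asked to take over
        L5 = 5  # full autonomy; treat the service like a taxi

    def requires_attentive_driver(level):
        return level <= AutonomyLevel.L2

    print(requires_attentive_driver(AutonomyLevel.L3))  # False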

How are the giants approaching the challenge? 

Cruise Automation is a startup owned by General Motors. It’s been building fully autonomous versions of Chevrolet’s Bolt and deploying them in cities like San Francisco to sell rides. It’s also running the same cars in Phoenix to deliver goods for Walmart.

Apple has not announced any public products but there have been numerous reports that they’re hiring engineers with expertise in this area. One of the developers of Tesla’s autopilot software, for instance, jumped to Apple. 

Alphabet’s Waymo division is building a module called the Waymo Driver that can be installed on top of a traditional car and integrated with its control hardware. The effort is one of the first seen on public streets, and the company boasts of millions of miles of testing. Waymo is also running a ride-hailing service called Waymo One in Phoenix with the technology and working with long-haul trucking companies to test the software carrying goods on long trips.

Microsoft’s public work is more general and experimental. Their research group is, for instance, sharing the Moab code base under the MIT License to allow anyone to experiment with the higher-order challenges of sensing, planning and acting. This is part of a bigger low-code tool called Bonsai, which can guide any industrial process, not just drive a truck. Pepsi, for example, is using the technology to improve the quality of its Cheetos snacks.

Oracle is also using the word “autonomous” in the name of the latest version of its flagship database, which uses AI algorithms to tune performance, saving staff time.

IBM is applying their AI technology to guiding ships. Their AI captain is being built to avoid collisions while making intelligent decisions about wind, weather and tides. 

How are startups impacting autonomous AI?

Some startups are building complete, vertically integrated transportation systems. Pony.ai, for instance, is building a sensor array that sits on top of existing car models and passes control instructions to guide them. They’ve created versions for a number of models from manufacturers like Lexus, Hyundai and Lincoln. They also run a robotaxi service in Guangzhou and Beijing, as well as Irvine and Fremont in California, sending autonomous cars to riders who hail them with a phone app.

Wayve is focusing on packaging agile machine learning algorithms into a similar module. The company emphasizes a model in which the car is constantly improving and adjusting to the neighborhood while sharing information with others in the fleet. It routinely tests cars on London streets and is exploring autonomous delivery fleets.

Argo is building a platform that bundles together lidar-based sensor hardware, guidance software and the mapping information needed to run fully autonomous vehicles. They’ve integrated their autonomous platform with cars from Ford and Volkswagen. They’re also partnering with Walmart to create local delivery vehicles.

Many of the startups are tackling portions of the challenge, from designing better sensors to creating better planning algorithms. AEye is building 4Sight, an adaptive sensing system built around lidar. They currently make two products, known as M and A, optimized for industrial and automotive applications respectively.

Author: Peter Wayner
Source: Venturebeat
