
Inside Boston Dynamics’ project to create humanoid robots



Boston Dynamics is known for the flashy videos of its robots doing impressive feats. Among Boston Dynamics’ creations is Atlas, a humanoid robot that has become popular for showing unrivaled ability in jumping over obstacles, doing backflips, and dancing. The videos of Boston Dynamics robots usually go viral, accumulating millions of views on YouTube and generating discussions on social media.

And the robotics company’s latest video, which shows Atlas successfully running a parkour track, is no exception. Within hours of its release, it received hundreds of thousands of views and became one of the top ten trends on U.S. Twitter.

But the more interesting video was an unprecedented behind-the-scenes account of how Boston Dynamics’ engineers developed and trained Atlas to run the parkour track. The video shows some of Atlas’s failures and is a break from the company’s tradition of showing highly polished results of this work. The video and an accompanying blog post provide some very important insights into the challenges of creating humanoid robots.

Research vs commercial robots

Officially, Boston Dynamics is a for-profit organization. The company wants to commercialize its technology and sell products. But at its heart, Boston Dynamics is a research lab filled with engineers and scientists who want to push the limits of science regardless of the commercial benefits. Aligning these two goals is very difficult; a testament to that difficulty is that Boston Dynamics has changed ownership several times in the past decade, passing from Google to SoftBank to Hyundai.

The company is looking to create a successful business model, and it has already released a few commercial robots, including Spot, a multi-purpose robo-dog, and Stretch, a mobile robo-arm that can move boxes. Both have found interesting applications in different industries, and with Hyundai’s manufacturing capacity, Boston Dynamics might be able to turn them into profitable ventures.

Atlas, on the other hand, is not one of Boston Dynamics’ commercial projects. The company describes it as a “research platform.”

This is not because humanoid biped robots are not commercially useful. We humans have designed our homes, cities, factories, offices, and objects to accommodate our physique. A biped robot that could walk the same surfaces and handle objects as we do would have nearly unlimited utility, and it could represent one of the most lucrative business opportunities for the robotics industry. It would have a great advantage over current mobile robots, which are restricted to specific environments (flat ground, uniform lighting, flat-sided objects, etc.) or require their environments to be modified to accommodate their limits.

However, biped robots are also really hard to create. Even Atlas, which is by far the most advanced biped robot, is still a long way from reaching the smooth and versatile motor skills of humans. And a look at some of the failures in the new Atlas video shows the gap that remains to be filled.

The challenges of biped robots

In animals and humans, growth and learning happen together. You learn to crawl, stand, walk, run, jump, and do sports as your body and brain develop.

But growing robots is impossible (at least for the foreseeable future). Robotics engineers start with a fully developed robot (which is iteratively adjusted as they experiment with it) and must teach it all the skills it needs to use its body efficiently. In robotics, as with many other fields of science, engineers seek ways to avoid replicating nature in full detail by taking shortcuts, creating models, and optimizing for goals.

In the case of Atlas, the engineers and scientists of Boston Dynamics believe that optimizing the robot for parkour performance will help them achieve all the nuances of bipedal motor skills (and create sensational videos that get millions of views on YouTube).

As Boston Dynamics put it in its blog post: “A robot’s ability to complete a backflip may never prove useful in a commercial setting … But it doesn’t take a great deal of imagination or sector-specific knowledge to see why it would be helpful for Atlas to be able to perform the same range of movements and physical tasks as humans. If robots can eventually respond to their environments with the same level of dexterity as the average adult human, the range of potential applications will be practically limitless.”

So, the basic premise is that if you can get a robot to do backflips, jump across platforms, vault over barriers, and run on very narrow paths, you would have taught it all the other basic movement and physical skills that all humans possess.

The blog further states: “Parkour, as narrow and specific as it may seem, gives the Atlas team a perfect sandbox to experiment with new behaviors. It’s a whole-body activity that requires Atlas to maintain its balance in different situations and seamlessly switch between one behavior and another.”

The evolution of Atlas has been nothing short of impressive. Aside from the flashy moves, it is showing some very interesting fundamental capabilities, such as adjusting its balance when it makes an awkward landing. According to Boston Dynamics, the engineers have also managed to generalize Atlas’s behavior by providing it with a set of template behaviors such as jumping and vaulting, and letting it adjust those behaviors to new scenarios it encounters.
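The “template behavior” idea can be sketched in a few lines of code. Everything below is a hypothetical illustration, not Boston Dynamics’ actual controller: the template names, heights, and speeds are invented, and a real system would adapt full whole-body trajectories rather than a single scalar.

```python
# Hypothetical sketch of template behaviors: a small library of
# parameterized motions, each adapted to the obstacle the robot
# actually perceives. All names and numbers here are invented.

TEMPLATES = {
    "jump":  {"max_height": 0.6, "takeoff_speed": 2.0},   # clears low boxes
    "vault": {"max_height": 1.2, "takeoff_speed": 2.6},   # clears tall barriers
}

def adapt_behavior(obstacle_height):
    """Pick the cheapest template that can clear the obstacle, then
    scale its takeoff speed to the height actually measured."""
    for name, t in sorted(TEMPLATES.items(),
                          key=lambda kv: kv[1]["max_height"]):
        if obstacle_height <= t["max_height"]:
            scale = obstacle_height / t["max_height"]
            return name, round(t["takeoff_speed"] * scale, 2)
    raise ValueError("no template can clear this obstacle")

print(adapt_behavior(0.4))   # a low box selects the 'jump' template, slowed down
print(adapt_behavior(1.0))   # a tall barrier selects the 'vault' template
```

The key point the sketch captures is that the robot does not need a separate behavior for every obstacle it might meet; a handful of templates plus an adaptation rule covers a continuum of situations.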

But the robot still struggles with some very basic skills seen in all primates. For example, in some cases, Atlas falls flat on its face when it misses a jump or loses its balance. In such cases, primates instinctively stretch out their arms to soften the blow of the fall and protect their head, neck, eyes, and other vital parts. We learn these skills long before we start running on narrow ledges or jumping on platforms.

A complex environment such as the parkour track helps discover and fix these challenges much faster than a flat and simple environment would.

Simulation vs real-world training

One of the key challenges of robotics is physical-world experience. The Atlas video displays this very well. A team of engineers must regularly repair Atlas after it gets damaged. This cycle drives up costs and slows down training.

Training robots in the physical world also has a scale problem. The AI systems that guide the movements of robots such as Atlas require a huge amount of training, orders of magnitude more than a human would need. Taking Atlas through the parkour track thousands of times doesn’t scale and would take years of training and incur huge costs in repairs and adjustments. Of course, the research team could slash training time by using multiple prototypes in parallel on separate tracks. But this would significantly increase the costs and would require huge investments in gear and real estate.

[Image: Engineers at Boston Dynamics must regularly repair Atlas]

An alternative to real-world training is simulated learning. Software engineers create three-dimensional environments in which a virtual version of the robot can undergo training at a very fast pace and without the costs of the physical world. Simulated training has become a key component of robotics and self-driving cars, and there are several virtual environments for the training of embodied AI.
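To make the appeal of simulation concrete, here is a deliberately toy sketch of a simulated policy search. Nothing here resembles Boston Dynamics’ actual pipeline: the “physics” is a one-line stand-in, and the search is a crude parameter sweep. The point is only that a simulator lets you run hundreds of thousands of trials at essentially zero cost in hardware and repairs.

```python
import random

def simulate_episode(policy, noise=0.05):
    """One episode of a toy 'jump across a gap' task. The physics here
    is a made-up stand-in for a real simulator: the robot jumps a
    distance proportional to the gap, plus sensor/actuator noise."""
    gap = random.uniform(0.5, 1.5)                     # gap width in meters
    jump = policy["gain"] * gap + random.gauss(0, noise)
    return abs(jump - gap) < 0.1                       # land within 10 cm

def train():
    """Sweep candidate gains and keep the best performer -- a crude
    stand-in for the large-scale policy optimization a real
    controller would need."""
    best_policy, best_score = None, -1.0
    for gain in [0.5 + 0.05 * i for i in range(21)]:   # gains 0.5 .. 1.5
        policy = {"gain": gain}
        score = sum(simulate_episode(policy) for _ in range(200)) / 200
        if score > best_score:
            best_policy, best_score = policy, score
    return best_policy, best_score

policy, success_rate = train()
print(f"learned gain={policy['gain']:.2f}, success={success_rate:.0%}")
```

Running 4,200 simulated jumps like this takes a fraction of a second; running them on a physical robot would take weeks and a repair crew.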

But virtual worlds are just an approximation of the real world. They always miss small details that can have a significant impact, and they don’t obviate the need for training robots in the physical world.

The physical world exposes some of the challenges that are very hard to simulate in a virtual environment, such as slipping off an unstable ledge or the tip of the foot getting stuck in a crevice.

The Atlas video shows several such cases. One notable example takes place when Atlas reaches a barrier and uses its arm to vault over it. This is a simple routine that doesn’t require great physical strength. But although Atlas manages the feat, its arm shakes awkwardly.

“If you or I were to vault over a barrier, we would take advantage of certain properties of our bodies that would not translate to the robot,” according to Scott Kuindersma, Atlas Team Lead. “For example, the robot has no spine or shoulder blades, so it doesn’t have the same range of motion that you or I do. The robot also has a heavy torso and comparatively weak arm joints.”

These kinds of details are hard to simulate and need real-world testing.

Perception in robots

According to Boston Dynamics, Atlas uses “perception” to navigate the world. The company’s website states that Atlas uses “depth sensors to generate point clouds of the environment and detect its surroundings.” This is similar to the technology used in self-driving cars to detect roads, objects, and people in their surroundings.
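The core of the point-cloud step is standard geometry: each depth-image pixel is back-projected into a 3D point using the camera’s intrinsic parameters. The sketch below is a generic illustration of that math, not Boston Dynamics’ perception code, and the intrinsics (`fx`, `fy`, `cx`, `cy`) and toy depth values are invented.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an Nx3 point cloud
    using the standard pinhole-camera model. Illustrative only."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx        # pixel column -> lateral offset
    y = (v - cy) * z / fy        # pixel row -> vertical offset
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# A toy 2x2 "depth image" with one missing reading (depth 0)
depth = np.array([[1.0, 2.0],
                  [0.0, 1.5]])
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=1.0, cy=1.0)
print(cloud.shape)  # (3, 3): three valid pixels, each an (x, y, z) point
```

From a cloud like this, a navigation system can fit planes to find ledges and platforms, estimate distances to obstacles, and plan footsteps, all without ever classifying what the objects actually are.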

[Image: Atlas uses depth sensors to map its surroundings]

This is another shortcut that the AI community has been taking. Human vision doesn’t rely on depth sensors. We use stereo vision, parallax motion, intuitive physics, and feedback from all our sensory systems to create a mental map of the environment. Our perception of the world is not perfect and can be duped, but it’s good enough to make us excellent navigators of the physical world most of the time.

It will be interesting to see whether vision and depth sensors alone will be enough to bring Atlas on par with human navigation or whether Boston Dynamics will develop a more complicated sensory system for its flagship robot.

Atlas still has a long way to go. For one thing, the robot will need hands if it is to handle objects, and that in itself is a very challenging task. Atlas probably won’t be a commercial product anytime soon, but it is providing Boston Dynamics and the robotics industry with a great platform to learn about the challenges that nature has solved.

“I find it hard to imagine a world 20 years from now where there aren’t capable mobile robots that move with grace, reliability, and work alongside humans to enrich our lives,” Boston Dynamics’ Kuindersma said. “But we’re still in the early days of creating that future. I hope that demonstrations like this provide a small glimpse of what’s possible.”

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.


Author: Ben Dickson
Source: VentureBeat
