
Scientists Mimic a Camera with Ear-Based Sonar to Image a Human Face

Traditionally, camera-based technology has been used to track facial movements in real time. But now, researchers at Cornell University have developed a wearable earphone device — or “earable” — that bounces sound off a human’s cheeks and transforms the echoes into an avatar of that person’s entire moving face.

The device performs as well as camera-based face tracking technology but uses less power, offers more privacy, and is less vulnerable to hackers, according to the researchers.

Devices that track facial movements using a camera are “large, heavy and energy-hungry, which is a big issue for wearables,” lead researcher Cheng Zhang tells the Cornell Chronicle.

“Also importantly, they capture a lot of private information,” adds Zhang.

Zhang says that facial tracking through acoustic technology — rather than camera technology — can offer better privacy, affordability, comfort and battery life.

The system, called EarIO, transmits facial movements to a smartphone in real time and is compatible with commercially available headsets for hands-free video conferencing. The EarIO works like a ship sending out pulses of sonar. A speaker on each side of the earphone sends acoustic signals to each side of the face, using a microphone to pick up the echoes. As wearers talk, smile or raise their eyebrows, the skin moves and stretches, changing the echo profiles. A deep learning algorithm developed by the researchers uses artificial intelligence (AI) to continually process the data and translate the shifting echoes into complete facial expressions.
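The article does not publish EarIO’s actual signal design or model, but the sonar principle it describes can be sketched simply: emit a known acoustic pulse, record what the microphone hears, and cross-correlate the two to obtain an echo profile whose peaks shift as the reflecting skin moves. The following is a minimal illustrative sketch of that idea; the chirp parameters, sample rate, and function names are assumptions, not the researchers’ implementation, and the deep learning stage that maps echo profiles to facial expressions is omitted.

```python
import numpy as np

SAMPLE_RATE = 48_000  # Hz; an assumed, earphone-typical rate

def make_chirp(duration=0.005, f0=16_000, f1=20_000):
    """A short high-frequency chirp to use as the emitted pulse.

    Parameters here are illustrative, not EarIO's actual signal."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    k = (f1 - f0) / duration  # linear frequency sweep rate
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

def echo_profile(chirp, recording):
    """Cross-correlate the emitted chirp with the microphone recording.

    Peaks in the result correspond to reflections; their positions
    (delays in samples) encode round-trip travel time to the skin,
    so skin movement shows up as shifting peaks."""
    return np.abs(np.correlate(recording, chirp, mode="valid"))

# Simulated example: a single attenuated echo arriving after a
# 20-sample round-trip delay.
chirp = make_chirp()
delay = 20
recording = np.zeros(len(chirp) + 200)
recording[delay:delay + len(chirp)] = 0.5 * chirp  # the echo
profile = echo_profile(chirp, recording)
print(int(np.argmax(profile)))  # peak sits at the echo's delay
```

In a real system, successive echo profiles (one per emitted pulse) would form the time series that the researchers’ deep learning model translates into facial expressions.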

By using sound instead of data-intensive images, the “earable” can communicate through Bluetooth with smartphones, keeping a user’s information private. With images, the device would need to connect to a Wi-Fi network and send data back and forth to the cloud, potentially making it vulnerable to hackers.

Using acoustic signals also takes less energy than recording images; the EarIO uses 1/25 of the energy of comparable camera-based systems. Currently, the earable lasts around three hours on a wireless earphone battery, but future research will focus on extending the use time.

One limitation of the technology is that before the first use, the EarIO must collect 32 minutes of facial data to train the algorithm, but Zhang says they hope to “make this device plug and play” in the future.


Image credits: Header photo by Cornell University.


Author: Pesala Bandara
Source: Petapixel

