
AI could tell smart speakers what direction your voice is coming from

AI might soon be an ally in the quest to banish wake words from voice assistants. Carnegie Mellon University researchers have developed a machine learning model that estimates the direction a voice is coming from, indicating your intent without the need for a special phrase or gesture. The approach relies on the inherent properties of sound as it bounces around a room.

The system recognizes that the first, loudest and clearest copy of your voice is the one traveling directly toward whatever you're facing; sound that reaches a device indirectly, after bouncing off walls and other surfaces, tends to be quieter, delayed and muffled. The model also exploits the fact that the frequency content of speech varies with the direction you're facing: higher frequencies are more directional, while lower frequencies radiate more omnidirectionally.
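The article describes these cues only at a high level, so the following Python snippet is a rough illustration rather than the researchers' published pipeline. It computes two hypothetical features a facing-direction classifier could consume: a direct-to-reverberant energy ratio and a high-to-low frequency energy ratio. The sampling rate, onset heuristic, window length and filter cutoff are all assumptions.

```python
# Illustrative sketch only: feature definitions and parameters are assumptions,
# not the CMU team's actual method.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 16_000  # Hz, assumed microphone sampling rate


def direct_to_reverberant_ratio(x: np.ndarray, direct_ms: float = 10.0) -> float:
    """Energy of the first (direct-path) portion of the signal after onset,
    divided by the energy of the later, reflected portion. Speech aimed at
    the device arrives louder and earlier, so this ratio should be higher
    when the talker is facing the microphone."""
    onset = int(np.argmax(np.abs(x) > 0.1 * np.max(np.abs(x))))
    split = onset + int(SAMPLE_RATE * direct_ms / 1000)
    direct, tail = x[onset:split], x[split:]
    eps = 1e-12
    return float(np.sum(direct**2) / (np.sum(tail**2) + eps))


def high_to_low_band_ratio(x: np.ndarray, cutoff_hz: float = 1000.0) -> float:
    """Energy above vs. below a cutoff frequency. Low frequencies radiate
    more omnidirectionally from the mouth, so the high-band share should
    drop when the talker faces away from the device."""
    sos_hi = butter(4, cutoff_hz, btype="highpass", fs=SAMPLE_RATE, output="sos")
    sos_lo = butter(4, cutoff_hz, btype="lowpass", fs=SAMPLE_RATE, output="sos")
    hi, lo = sosfilt(sos_hi, x), sosfilt(sos_lo, x)
    eps = 1e-12
    return float(np.sum(hi**2) / (np.sum(lo**2) + eps))


def facing_features(x: np.ndarray) -> np.ndarray:
    """Two-element feature vector, computed per utterance, that a lightweight
    classifier could use to decide whether the device is being addressed."""
    return np.array([direct_to_reverberant_ratio(x), high_to_low_band_ratio(x)])
```

In practice such features would be extracted for each captured utterance and handed to whatever small model the device runs locally.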

This method is “lightweight,” software-based and doesn’t require sending audio data to the cloud, the researchers added.
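As a companion sketch, here is how features like the ones above could feed a small classifier that runs entirely on the device, producing only a yes/no "addressed" decision with no audio leaving the machine. The synthetic training data and the choice of logistic regression are illustrative assumptions; the article says only that the model is lightweight and software-based.

```python
# Minimal on-device sketch: train a tiny classifier on the two acoustic cues
# and evaluate it locally. Data and model choice are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical feature distributions: facing the device yields higher
# direct-path and high-band ratios; facing away yields lower values.
facing = rng.normal(loc=[3.0, 1.2], scale=0.5, size=(200, 2))
away = rng.normal(loc=[1.0, 0.4], scale=0.5, size=(200, 2))
X = np.vstack([facing, away])
y = np.array([1] * 200 + [0] * 200)

clf = LogisticRegression().fit(X, y)


def is_addressed(features: np.ndarray) -> bool:
    """Runs entirely on the local device; only the decision is produced."""
    return bool(clf.predict(features.reshape(1, -1))[0])


print(is_addressed(np.array([2.8, 1.1])))  # likely True: talker facing the device
print(is_addressed(np.array([0.9, 0.3])))  # likely False: talker facing away
```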

It could be a while before you see the technology in use, although the team has publicly released code and data to help others build on their work. It’s easy to see where this might lead, at least. You could tell a smart speaker to play music without using a wake word or setting off a horde of other connected devices. It might also help with privacy by requiring your physical presence while eschewing the need for gaze-detecting cameras. In other words, it would be closer to that Star Trek vision of voice assistants that always know when you’re talking to them.


Author: Jon Fingas, @jonfingas
November 3, 2020

Source: Engadget
