AI & Robotics News

Thought-detection: AI has infiltrated our last bastion of privacy

Our thoughts are private – or at least they were. New breakthroughs in neuroscience and artificial intelligence are changing that assumption, while at the same time inviting new questions around ethics, privacy, and the horizons of brain/computer interaction.

Research published last week by Queen Mary University of London describes a deep neural network that can determine a person’s emotional state by analyzing wireless signals used like radar. Participants in the study watched a video while radio signals were transmitted toward them, and the reflections were measured as they bounced back. Analysis of subtle body movements in those reflections revealed “hidden” information about an individual’s heart and breathing rates. From these signals, the algorithm determines one of four basic emotion types: anger, sadness, joy, and pleasure. The researchers propose this work could help with the management of health and wellbeing, for example by detecting depressive states.
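The pipeline above can be sketched in miniature. The actual study trains a deep neural network on raw radar waveforms; the toy below assumes (hypothetically) that heart-rate and breathing-rate features have already been extracted and reduced to two scores on the classic arousal/valence plane, where the four basic emotions occupy one quadrant each. The function name and the quadrant assignments are illustrative assumptions, not the paper’s method.

```python
# Toy stand-in for the study's final classification step.
# Assumption: radar-derived vital signs have been reduced to two
# normalized scores, arousal (calm vs. excited) and valence
# (negative vs. positive feeling).

def classify_emotion(arousal: float, valence: float) -> str:
    """Map an (arousal, valence) pair to one of the four basic
    emotion types used in the study."""
    if arousal >= 0:
        # High-arousal emotions: joy (positive) vs. anger (negative)
        return "joy" if valence >= 0 else "anger"
    # Low-arousal emotions: pleasure (positive) vs. sadness (negative)
    return "pleasure" if valence >= 0 else "sadness"

print(classify_emotion(0.8, 0.5))    # high arousal, positive -> joy
print(classify_emotion(-0.6, -0.4))  # low arousal, negative -> sadness
```

The real system replaces these hand-drawn quadrant boundaries with boundaries learned from labeled training data, but the input-to-label shape of the problem is the same.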

Ahsan Noor Khan, a PhD student and first author of the study, said: “We’re now looking to investigate how we could use low-cost existing systems, such as Wi-Fi routers, to detect emotions of a large number of people gathered, for instance in an office or work environment.” Among other things, this could be useful for HR departments to assess how new policies introduced in a meeting are being received, regardless of what the recipients might say. Outside of an office, police could use this technology to look for emotional changes in a crowd that might lead to violence.

The research team plans to examine public acceptance and ethical concerns around the use of this technology. Such concerns would not be surprising, as they conjure up the very Orwellian idea of the “thought police” from 1984. In the novel, the Thought Police are expert at reading people’s faces to ferret out beliefs unsanctioned by the state, though they never master learning exactly what a person is thinking.

This is not the only thought technology on the horizon with dystopian potential. “Crocodile,” an episode of Netflix’s series Black Mirror, portrayed a memory-reading technique used to investigate accidents for insurance purposes. The “corroborator” device used a square node placed on a witness’s temple, then displayed their memories of an event on screen. The investigator explains that the memories “may not be totally accurate, and they’re often emotional. But by collecting a range of recollections from yourself and any witnesses, we can help build a corroborative picture.”

Above: Black Mirror, “Crocodile”

If this seems farfetched, consider that researchers at Kyoto University in Japan developed a method to “see” inside people’s minds using an fMRI scanner, which detects changes in blood flow in the brain. Using a neural network, they correlated these changes with the images shown to participants and projected the results onto a screen. Though far from polished, the output was essentially a reconstruction of what the participants were thinking about. One prediction estimates this technology could be in use by the 2040s.

Brain-computer interfaces (BCIs) are making steady progress on several fronts. In 2016, research at Arizona State University showed a student wearing what looked like a swim cap containing nearly 130 sensors connected to a computer to detect his brain waves.

Above: An Arizona State University PhD student demos a mind-controlled drone flight in 2016.

The student is controlling the flight of three drones with his mind. The device lets him move the drones simply by thinking directional commands: up, down, left, right.

Advance a few years to 2019 and the headgear is far more streamlined. Now there are brain-drone races.

Above: Flying drones with your brain in 2019. Source: University of South Florida

Besides the flight examples, BCIs are being developed for medical applications. MIT researchers have developed a computer interface that can transcribe words the user verbalizes internally but does not actually speak aloud. A wearable device with electrodes picks up neuromuscular signals in the jaw and face that are triggered by internal verbalizations, also referred to as subvocalizations. The signals are fed to a neural network that has been trained to correlate them with particular words. The idea behind this development is to meld humans and machines “such that computing, the internet, and AI would weave into human personality as a ‘second self.’” Those who cannot speak could use the technology to communicate, as the subvocalizations could be routed to a synthesizer that speaks the words.
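The correlation step described above can be illustrated with a deliberately simplified stand-in. The MIT system uses a trained neural network over electrode signals; the sketch below instead assumes each electrode reading has already been reduced to a small feature vector and matches it to the vocabulary word whose stored training template is nearest. The vocabulary, feature values, and function name are all made up for illustration.

```python
import math

# Hypothetical templates: one averaged feature vector per vocabulary
# word, as might be computed from labeled training recordings.
TEMPLATES = {
    "yes":  [0.9, 0.1, 0.3],
    "no":   [0.1, 0.8, 0.2],
    "stop": [0.4, 0.4, 0.9],
}

def decode_word(features):
    """Return the vocabulary word whose template is closest (by
    Euclidean distance) to the incoming feature vector."""
    return min(TEMPLATES, key=lambda w: math.dist(TEMPLATES[w], features))

print(decode_word([0.85, 0.15, 0.25]))  # closest to the "yes" template
```

A learned model generalizes far better than nearest-template matching, but both reduce to the same idea: mapping a measured signal pattern onto a fixed vocabulary of words, which could then drive a speech synthesizer.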

Above: Interfacing with devices through silent speech. Source: MIT Media Lab

Chip implants could be coming soon

The ultimate BCI could be the one proposed by Neuralink, founded by Elon Musk. Unlike the previous examples, Neuralink promises direct implants into the brain. The near-term goal of Neuralink and others is to build a BCI that can cure a wide variety of diseases. Longer term, Musk has a grander vision: He believes this interface will be necessary for humans to keep pace with increasingly powerful AI. Just last week, Musk announced that human trials of the implants could begin later this year. He claims the company already has a monkey with “a wireless implant in [his] skull with tiny wires who can play video games with his mind.”

The advancements being made in BCI are beginning to match what science fiction authors have dreamed up. In a new novel by Gish Jen, a “RegiChip” is implanted at birth into all of those deemed “Surplus,” meaning there will be no work for them in the aftermath of mass automation. Instead, they are issued a universal basic income and have no responsibilities but to consume, keeping the automated economy operating at an efficient level. Among other things, the RegiChip is used to track everyone, recording not only their physical location but also their activities, completing a surveillance society. Of course, the RegiChip, like all digital technologies, has the potential to be hacked.

Cognitive scientists have said that the mind is the software of the brain. Increasingly, actual software has the capacity to meld with and augment the human mind. If today’s AI-enabled BCI achievements already seem unbelievable, it stands to reason that BCI breakthroughs in the not-too-distant future could be truly momentous. Will the technology be harnessed for positive uses, such as curing disease, or for mind control? As with most technology, there will likely be both good and bad. Software is poised to eat the mind. For now, our unexpressed thoughts remain private, but that may not be true for much longer.

Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.


Author: Gary Grossman, Edelman
Source: Venturebeat

