HAL in 2001: A Space Odyssey. Ava in Ex Machina. Scarlett Johansson’s sultry-voiced Samantha in the movie Her. For decades, pop culture has promised us a future where artificial intelligence (AI) is evolved enough to form relationships with humans. But in every take, that future predicted by authors, film directors and actors has missed the mark.
Pop culture’s first AI-human relationship was the brainchild of Mary Shelley, who created Frankenstein in 1818. In doing so, she set readers dreaming of a day in which robots imbued with empathy could meet humans’ desire for real connection.
Today, thanks to remarkable innovations in artificial intelligence, that day has come. Humans now form relationships with AI on a daily basis, and machine learning is woven so deeply into everyday life that it would be nearly impossible to untangle our relationship with it from our relationships with other sentient beings. Every time we use Google Assistant to call a loved one, rely on facial recognition to upload photos to the internet, or browse an online store to pick up a gift for a friend or family member, we are integrating AI into our social, familial and romantic lives.
But while the future has arrived, it is entirely different from what the movies predicted. When it comes to pop culture and AI-human interactions, Hollywood got it all wrong. Just a quick glance at recent pop culture interpretations of the AI-human relationship shows serious deficiencies. In 2001: A Space Odyssey, HAL 9000, the onboard computer of Discovery One, goes rogue when Dave and Frank decide to disconnect him, and his computerized glitch translates into the AI version of a murderous mental break.
In Her, the operating system named Samantha has pristine language skills and an impeccable gift for conversation, so much so that her user, Theodore, falls in love with her. She is not just a virtual assistant; she is an unseen seductress.
In Ex Machina, the so-called gynoid, Ava, has a human face on her robotic body, but she also has an entirely human emotion — hate — in her computer-generated heart.
In each of these dramatic interpretations, AI is pushed beyond algorithms into emotion. Hate, love, psychosis: these are human experiences that the flesh-and-blood writers of these dramas imposed on their computerized characters. Anyone watching cinema in hopes of gaining a true understanding of AI's potential will be left with little to go on.
There is no doubt that AI has made impressive leaps in recent years, but despite its evolution, the technology remains in its early stages. The truth about AI has very little to do with the dramatic takes provided by popular culture. Here’s what we actually know.
Yes, people form bonds with AI. But even as they do, they know AI isn’t human.
ElliQ, a voice-operated care companion, has improved the lives of many seniors by keeping them engaged and active in their own homes. She's digital and AI-powered, but nevertheless, the older adults who use her have reported feeling less lonely, especially during the long lockdown periods of COVID-19. She tells jokes, prods people to exercise, reminds them to drink water and offers conversation as an antidote to loneliness.
But despite her skills and sense of humor, all of ElliQ’s users say they know that she is not a real person. The bonds they form with her are different from their relationships with the people who form their support circle.
We have seen this markedly different human-robot relationship in how people interact with Jenny, the AI sales coach at the center of our immersive sales simulations. At companies like Zoom, Jenny is considered a team member and was even given her own HR profile. She holds live conversations with salespeople to help them improve their performance.
But while she is friendly and approachable like a human team member, our research indicates that her appeal actually derives from the fact that she is not human: she delivers an unemotional assessment without embarrassing her practice partner. Because she is AI-powered, shame and inhibition are removed from her coaching sessions. A computer can only assess against set criteria, so those who use her services are able to improve with fewer negative feelings.
Beware: For AI to be successful, humans want to know they’re speaking to a computer from the very first moment.
As AI continues to expand its emotional range, businesses must remember that deception is the number one deterrent to AI success. When humans are deceived into believing they're speaking to a person and later learn it was an AI, the discovery lets them down and severs any emotional bond. But when humans know from the outset that they're speaking to a robot, they unconsciously adjust their communication: they don't argue, and they don't get overly personal.
In the future, this knowledge of AI will open significant channels for emotional healing, mental health treatment, and social and professional growth. The story of Joshua Barbeau, who held conversations with his dead fiancée via an AI to help cope with grief, is a striking indicator of the potential that exists when AI is embraced without deception.
Of course, we must proceed with caution. A shortage of therapists and a mental health crisis in the wake of the pandemic have pushed chatbot-based therapy, offered through apps like Talkspace, into the mainstream. Yet it remains extremely risky. AI has shown great promise as a frontline tool for combating the growing mental health crisis, particularly in suicide prevention, but the technology is young and testing data is scarce. There are no quick fixes or Hollywood endings, even with the most cutting-edge technology.
There is no doubt that when we speak of a future where humans and robots converse and form emotional bonds, that future has already arrived. But unlike the dramatic foreshadowing of movies and literature, it has arrived with significantly less fanfare. AI technology is still very new, and its ability to help humans develop and grow is promising. But if you think you know anything about AI from watching the movies, think again.
Ariel Hitron is cofounder and CEO of Second Nature.