AI & RoboticsNews

Why AI is teetering on the edge of a disillusionment cliff | The AI Beat

This will be an unpopular take: I believe AI is teetering on the edge of a disillusionment cliff. Whether that goes beyond Gartner’s famous “trough of disillusionment,” and whether it lasts, remain to be seen. But something is shifting — and it’s not just the slight early-autumn chill in the air. 

I know what you’re thinking. How could that be? After all, AI is booming. The past couple of weeks in AI news have been two of the most exciting, exhilarating and jam-packed since I began covering the space for VentureBeat in April 2022. 

Just last week, I sat in a press-filled audience at Meta’s Menlo Park headquarters watching Mark Zuckerberg tout Meta’s new AI chatbot and image generator, integrated with Facebook, Instagram and WhatsApp. The week before, I was in Washington DC as Amazon announced the new generative AI-powered Alexa, and in New York City when Microsoft announced it would integrate its Copilot into Windows 11 — including OpenAI’s DALL-E 3, which was announced the same week. 

But hear me out: Along with the fast pace of compelling, even jaw-dropping AI developments, AI also faces a laundry list of complex challenges. Hot startups like Jasper are cutting their internal valuations. Big Tech AI companies are facing multiple lawsuits over copyright issues. Hollywood unions have pushed back on generative AI. Deepfakes are proliferating to the point that even Tom Hanks was forced to post a warning about a dental ad using an AI-generated deepfake of his likeness. And many Americans believe AI will negatively impact the 2024 elections. 

The bottom line is that AI may have incredible positive potential for humanity’s future, but I don’t think companies are doing a great job of communicating what that is. Where is the “why” — as in, why are we going through all the angst of building all of this? What is the current and future value of generative AI to individuals, workers, enterprises, and society at large? How do the benefits outweigh the risks? 

Humanity needs to get on board with AI

OpenAI says it is developing AI to “benefit all of humanity,” while Anthropic touts its bona fides in making sure AI doesn’t destroy humanity (gee, thanks!). Cerebral Valley hackers log sixteen-hour days in a Bay Area mansion in search of the holy grail of AI “innovation” that humanity will go hog-wild for, while VCs reportedly “dangle AI chips to woo founders” — founders who claim to know what humanity will go hog-wild for. 

That’s not enough. We need more than these vague pronouncements. For AI to succeed, humanity needs to get on board. And after decades of experience with technology developments, from email and the internet to social media — and the hype and fallout that has accompanied each one — making that happen might not be the societal slam-dunk that Silicon Valley thinks it is. 

I admit that I started thinking about this after spending the weekend walking underneath a grove of hundreds-of-feet-tall, centuries-old California Redwoods. As I gazed at the morning fog settling over the giant trees, I wondered: Why are we really spending billions on GPUs and data centers, fueling a spike in water consumption during a drought? For products consumers may or may not want or need? For applications enterprises will spend years developing, betting on an elusive ROI? 

AI is powerful. AI is exciting. But none of it compares to the wonder and awe I felt walking in Armstrong Redwoods State Natural Reserve. I literally walked around hugging massive trunks and yelling “Good morning, trees!” These are the non-humans worth anthropomorphizing — not OpenAI’s ChatGPT, Anthropic’s Claude or Meta’s crazy AI grandpa chatbot going off the rails. 

I think AI might need a messaging overhaul that even ChatGPT would find hard to handle. 

AI might need a messaging overhaul

Take the nascent consumer market for generative AI tools, for example. There is plenty of excitement around large language models, of course, but there is little hard evidence yet that the fickle masses will latch on — at scale — to applications like AI characters, AI wearables and an AI Alexa that, to work optimally, must listen in to your conversations. 

At the same time, many consumers have little knowledge of generative AI tools or have never tried them — but there is plenty of worry about how generative AI can disrupt job functions, be used to commit fraud, manipulate people and spread misinformation. According to a recent Pew Research Center poll, 52% of respondents said they were more concerned than excited about AI. According to Eden Zoller, chief analyst at Omdia, AI adoption is not guaranteed: “Consumers need to be shown that generative AI has value and can be trusted,” she said. 

The workforce is also a prime audience for generative AI, as Big Tech and startups woo employees with heavenly AI-powered copilots and workflows. But messaging is messy there, too. Is it clear that employees will embrace these enterprise-level tools wholeheartedly, without pushback? And are they even using them in ways that employers want? More than half of US employees are already using generative AI tools, at least occasionally, to accomplish work-related tasks. Yet some three-quarters of companies still lack an established, clearly communicated organizational AI policy. 

In addition, the potential enterprise business market for generative AI is massive right now, but also demanding and fickle. Certainly every CEO has suffered from generative AI FOMO over the past year. But it could take years to wring ROI from the massive AI investments companies are making. It seems inevitable that disillusionment will set in — and AI vendors need to be ready to reassure enterprises that there really is a pot of gold at the end of the AI rainbow. 

The AI landscape needs to stand on firmer ground

In my opinion, AI needs to put its house on firmer ground if it wants to stop teetering on the edge of the disillusionment cliff. We need more than vague pronouncements like the one from You.com’s Richard Socher, who posted recently that “I do think that eventually AI – and the technological and scientific breakthroughs it enables – will help us spread light, intelligence and knowledge into an otherwise mostly dark universe.” Umm…what? 

And we need more than the words of JPMorgan CEO Jamie Dimon, who told Bloomberg TV yesterday that “your children are going to live to 100 and not have cancer because of technology, and literally they’ll probably be working three-and-a-half days a week.” Really? Does that reassure employees about to get laid off due to AI? 

Giving AI a reality check, I think, could offer the technology its greatest chance for success. 

By managing expectations and choosing the most appropriate and trustworthy use cases for AI, AI companies can provide what people really want and need. By remembering that AI and data are about human beings, AI companies can focus on maintaining human dignity, security and consent. And by helping people understand the real future potential of AI — potential that goes beyond silly apps, productivity tools and generated content, and isn’t a sci-fi fantasy the average person doesn’t care about — AI companies can communicate what it is we should all be so excited about. 



Author: Sharon Goldman
Source: VentureBeat
Reviewed By: Editorial Team
