How ChatGPT and generative AI could bring the Star Trek holodeck to life

For Star Trek fans and tech nerds, the holodeck concept is a kind of geek holy grail. The idea is an entirely realistic simulation in which simply speaking a request (a prompt) brings to life an immersive environment populated by role-playing, AI-powered digital humans. As several Star Trek series envisioned, a multitude of scenes and narratives could be created, from New Orleans jazz clubs to private eye capers. Not only did this imagine an exciting future for technology, but it also delved into philosophical questions such as the humanity of digital beings.

Digerati dreams

Since it first emerged on screen in 1987, the holodeck has been a mainstay quest of the Digerati. Over the years, several companies, including Microsoft and IBM, have created labs in pursuit of building the underlying technologies. Yet the technical challenges have been daunting for both software and hardware. Perhaps AI, and in particular generative AI, can advance these efforts. That is just one vision for how generative AI might contribute to the next generation of technology.

Sam Altman, CEO of OpenAI, believes that ChatGPT could be the interface technology for a holodeck that responds naturally to our verbal commands. It provides an interface that feels “fundamentally right,” he said in a recent interview with Time. Could a holodeck and other futuristic scenarios emerge over the next 12 months? If the last year is any indication, then possibly. In this post, we will look back and project forward.
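
As a thought experiment, here is a minimal Python sketch of how such a conversational interface might be wired up: the language model's only job is to turn a spoken request into structured scene parameters that a simulation engine could consume. The `llm` callable and the rendering engine mentioned in the comments are hypothetical placeholders, not anything described here.

```python
import json

def prompt_to_scene(llm, user_request: str) -> dict:
    """Ask a language model (hypothetical `llm` callable) to convert a
    spoken request into structured scene parameters for a simulator."""
    instructions = (
        "Convert the user's request into JSON with the keys "
        "'setting', 'era', 'characters' and 'mood'. Return only JSON."
    )
    reply = llm(system=instructions, user=user_request)  # hypothetical signature
    return json.loads(reply)

# Illustrative usage (all names hypothetical):
# scene = prompt_to_scene(my_llm, "A New Orleans jazz club, 1947, moody lighting")
# holodeck_engine.render(scene)
```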

The generative AI whirlwind of the last 12 months

Generative AI — before the term became widely known — first captured the world’s imagination almost exactly a year ago. That is when a now former Google engineer went public with his view that a chatbot based on the LaMDA large language model (LLM) was sentient. This led to hundreds of articles discussing the claim and the views of most technology experts, who countered that an LLM was not and could not be sentient. The viral debate was a watershed moment that heralded the arrival of generative AI. Over the ensuing 12 months, there has been an almost non-stop whirlwind of dramatic technological advances, a profusion of opinions, palpable excitement, and escalating worries.

The sentience debate was followed two months later by another viral story, this time about an image created using Midjourney. Jason Allen’s AI-generated work “Théâtre D’opéra Spatial” was entered into an art competition and took first place in the digital category at the Colorado State Fair, much to the consternation of digital artists and graphic designers.

Although AI had already been incorporated into artists’ tools, this image was controversial because it was generated entirely by AI and won an art competition. Many viewed the moment as a tipping point at which technology could displace creative professionals. It also set off a firestorm of controversy over copyright, since generated images are based on material scraped from the internet without permission, some percentage of which is covered by copyright protection. Lawsuits are now in progress attempting to halt the inclusion of these images in model training datasets.

A time of AI magic

These stories seemed almost tame after the introduction of ChatGPT in late November, only five months after claims of LLM sentience. Unlike the Google experience, ChatGPT was made available to the public. Within five days, the new chatbot had a million users, making it the fastest-growing consumer application ever.

ChatGPT is startlingly conversant: it can answer questions, create plays and articles, write and debug code, take tests, translate languages, manipulate data, and provide advice and tutoring. Using ChatGPT and the image generation tools felt to me like magic, which reminded me of a now 60-year-old quote from science fiction writer and futurist Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.”

The success of ChatGPT set off a chatbot frenzy. LLMs and chatbots are now on the market from Microsoft, Google, Meta, Databricks, Cohere, Anthropic, Nvidia and many others. Some of the LLMs and image generators are now open source, meaning that anyone can download the technology not only to use it but also to adapt it to their own needs. In March, OpenAI released GPT-4, an even more powerful LLM.
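
To make the open-source point concrete, the sketch below shows how anyone can download and run a freely available text-generation model locally using the Hugging Face transformers library. The model name is only an illustrative placeholder, and truly adapting a model to a specific need would also require fine-tuning on your own data.

```python
# A minimal sketch of running an open text-generation model locally.
# Requires `pip install transformers torch`; the model name is just an
# illustrative placeholder for whichever openly licensed model you choose.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The holodeck flickered to life, and"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```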

Concerns about bad actors proliferate

While the upside of the technology is astronomically high, worries proliferated as fast as the software throughout the spring. This was, and remains, especially true of concerns about bad actors who could use these tools to create and spread toxic misinformation or worse. Because there are effectively no limits on who can access and modify open-source models, the worry is that such models could prove impervious to regulatory efforts and that society will be flooded with AI-powered misinformation, deepfakes and phishing scams.

In April, another generative AI tool was used to simulate the music of pop stars Drake and The Weeknd. The song “Heart on My Sleeve” went viral across social media platforms. This led to a widespread mixture of excitement and consternation. Even Paul McCartney has recently jumped in, saying AI was helping the surviving Beatles produce a song featuring vocals by John Lennon, who was killed in 1980.

Artists facing obsolescence?

While AI is helping some recording artists, many now share the same worries about looming obsolescence as graphic artists. So do actors, some of whom worry that they too could be replaced in movies and television shows. Generative AI is now an issue in the strike by the Writers Guild of America, whose members believe that Hollywood studios could use chatbots to develop scripts for movies and television shows.

Arguably adding to those worries, Vimeo just launched AI tools that it says will transform professional video production. The company is introducing a script generator and an automated video editor. Vimeo views this as democratizing the creation of video content.

Not surprisingly, there has been a backlash against AI implementations. This response is consistent with Newton’s third law of motion, which states that for every action, there is an equal and opposite reaction. As part of that reaction, the Future of Life Institute published an open letter in March signed by thousands of technology and business leaders calling for a pause in AI development.

In May, the Center for AI Safety (CAIS) released a one-sentence statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” Governments around the world are now actively attempting to understand AI technology and its implications and to develop useful regulations. Meanwhile, McKinsey issued a report this month predicting that generative AI would contribute up to $4.4 trillion annually to the world economy.

What will the next year bring?

In my view, the next 12 months could be at least as wild, though efforts to rein in the pace of technological advances through regulation could have some effect. Even the U.S. could pass legislation over that period — although likely less onerous to the industry than what is currently proposed in the EU.

Beyond regulation, we can expect enterprises to further implement generative AI-based tools and solutions across their organizations, although, as Deloitte noted in a recent report, myriad challenges must be overcome before generative AI can be deployed at scale.

Nevertheless, within the next 12 months we can expect most Fortune 500 companies to have incorporated the technology into at least a portion of their business. There will be expanded use of chatbots and of AI tools for content creation and software development, along with increased use of AI in media, entertainment and education.

Promising innovation, strong concern

Open-source models and tools will continue to appear and spread, allowing more people to leverage generative AI for personal and professional use, as well as for innovation. This openness will spur both promising innovation and reasons for concern. More effective cyber attacks are a likely outcome. As The New York Times columnist Thomas Friedman notes, open source code can be exploited by anyone. He asks: “What would ISIS do with the code?”

Generative AI’s economic impact will lead to both job losses and new types of jobs, sources of income, skills development, and business opportunities. Worries about significant job market disruption will grow and become a central issue in the 2024 U.S. presidential campaign.

International competition over AI will further intensify, as will calls for more cooperation in managing risks. An international conference to discuss this will struggle to find common ground. Views on governance vary globally, as “human values” are not consistent across cultures, and a unified approach will remain elusive as the pressure to “win” — and profit — beats all.

What about the unexpected?

These projections are reasonable, even normative, given the pace of AI development. What is more difficult — although perhaps just as likely — is the unexpected, whether from innovation or due to a black swan event.

In the next 12 months, we could see generative AI used to create a Hollywood-level, feature-length film. This could be from a major U.S. studio, but just as likely from abroad. Avengers: Infinity War and Endgame co-director Joe Russo is already on record saying he believes this could happen within the next two years, likely championed by younger filmmakers. The success of this film could herald a new era of AI-generated media and entertainment and exacerbate concerns over the impact of generative AI on human creative work.

Perhaps, too, a single software developer or a small group of developers could create, in just a few months, a huge new system that would normally have required dozens or even hundreds of programmers years to construct. This could come from automated coding at massive scale through the use of programming co-pilots and recursive agents. Such accelerated development would send shockwaves through the software and technology sector.
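
One way to picture such a recursive agent is a simple generate-test-refine loop, sketched below under the assumption of a hypothetical `llm` callable and a pytest-based test suite; a production system would add sandboxing, human review and many other safeguards.

```python
import subprocess

def coding_agent(llm, task: str, max_rounds: int = 5) -> str:
    """Illustrative generate-test-refine loop (the `llm` callable is a
    hypothetical text-in, text-out language model interface)."""
    code = llm(f"Write a Python module that does the following:\n{task}")
    for _ in range(max_rounds):
        with open("candidate.py", "w") as f:
            f.write(code)                      # save the current draft
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:             # tests pass: accept the draft
            return code
        code = llm(                            # otherwise ask for a revision
            f"The tests failed with this output:\n{result.stdout}\n"
            f"Revise the following code accordingly:\n{code}"
        )
    raise RuntimeError("Agent did not converge on passing code")
```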

The holodeck and the unknown: Reinvention of every industry

These shockwaves might only be surpassed by a functional holodeck. Generative AI might be ready to do its part, although it could take several more years for the hardware to catch up.

The last 12 months for generative AI have provided a wild, society-changing ride. As venture capital firm Sequoia Capital said: “Every industry that requires humans to create original work — from social media to gaming, advertising to architecture, coding to graphic design, product design to law, marketing to sales — is up for reinvention.”

Even though there are real concerns about the safety of AI systems, their implications for the workforce and the need for reasonable regulation, that reinvention, with an entire universe of possibilities, is now well underway.

Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.

Author: Gary Grossman, Edelman
Source: VentureBeat
