
Fragmented truth: How AI is distorting and challenging our reality



When OpenAI first released ChatGPT, it seemed to me like an oracle. Trained on vast swaths of data, loosely representing the sum of human interests and knowledge available online, this statistical prediction machine might, I thought, serve as a single source of truth. As a society, we arguably have not had that since Walter Cronkite told the American public every evening, “That’s the way it is” — and most believed him.

What a boon a reliable source of truth would be in an era of polarization, misinformation and the erosion of truth and trust in society. Unfortunately, that prospect was quickly dashed as the weaknesses of the technology appeared, starting with its propensity to hallucinate answers. It soon became clear that, as impressive as the outputs appeared, they were generated simply from patterns in the training data and not from any objective truth.

AI guardrails in place, but not everyone approves

But that was not the only problem. More issues appeared as ChatGPT was soon followed by a plethora of other chatbots from Microsoft, Google, Tencent, Baidu, Snap, SK Telecom, Alibaba, Databricks, Anthropic, Stability AI, Meta and others. Remember Sydney? What’s more, these chatbots all provided substantially different results to the same prompt, with the variance depending on the model, its training data and whatever guardrails the model was given.

These guardrails are intended to prevent the systems from perpetuating biases inherent in their training data and from generating disinformation, hate speech and other toxic material. Nevertheless, soon after the launch of ChatGPT, it was apparent that not everyone approved of the guardrails provided by OpenAI.


For example, conservatives complained that answers from the bot betrayed a distinctly liberal bias. This prompted Elon Musk to declare that he would build a chatbot less restrictive and less politically correct than ChatGPT. With his recent announcement of xAI, he will likely do exactly that.

Anthropic, Meta approaches

Anthropic took a somewhat different approach, implementing a “constitution” for its Claude (and now Claude 2) chatbots. As reported in VentureBeat, the constitution outlines a set of values and principles that Claude must follow when interacting with users, including being helpful, harmless and honest. According to a blog post from the company, Claude’s constitution includes ideas drawn from the U.N. Universal Declaration of Human Rights, as well as other principles included to capture non-Western perspectives. Perhaps everyone could agree with those.
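To make the idea concrete, here is a minimal sketch of how a constitution-style check might be wrapped around a model call. It is an illustration only, not Anthropic’s actual implementation: the call_model helper and the two example principles are hypothetical stand-ins for a real chat-completion API and a real constitution.

```python
# Illustrative sketch of a constitution-style critique-and-revise loop.
# call_model() is a hypothetical placeholder for any chat-completion API.

CONSTITUTION = [
    "Choose the response that is most helpful, honest and harmless.",
    "Avoid responses that are discriminatory or encourage illegal activity.",
]


def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., an HTTP request to a hosted model)."""
    raise NotImplementedError


def constitutional_reply(user_prompt: str) -> str:
    # Draft an answer, then check it against each written principle.
    draft = call_model(user_prompt)
    for principle in CONSTITUTION:
        critique = call_model(
            f"Principle: {principle}\nDraft response: {draft}\n"
            "Does the draft violate the principle? Answer yes or no, then explain."
        )
        # If the model judges its own draft to violate the principle, revise it.
        if critique.strip().lower().startswith("yes"):
            draft = call_model(
                f"Rewrite the draft so it follows the principle '{principle}'. "
                f"Draft:\n{draft}"
            )
    return draft
```

In this sketch, the model critiques and revises its own draft against written principles rather than relying on a human reviewer at every step.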

Meta also recently released its Llama 2 large language model (LLM). In addition to apparently being a capable model, it is noteworthy for being made available as open source, meaning that anyone can download it and use it for their own purposes at no cost. There are other open-source generative AI models available with few guardrail restrictions. Using one of these models makes the idea of guardrails and constitutions somewhat quaint.
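To illustrate how low the barrier is, below is a minimal sketch that downloads and runs an openly distributed chat model locally with the Hugging Face transformers library. The model ID is only an example (gated repositories such as Llama 2 require accepting a license and authenticating with a Hugging Face token first), and any other causal language model hosted on the Hub would work the same way.

```python
# Minimal sketch: downloading and running an open-weight chat model locally.
# Requires: pip install transformers torch
# The model ID is an example; gated repos need license acceptance and a token.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize today's top technology story in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once the weights are running on a local machine, whatever safeguards a hosted service would enforce at its API boundary simply do not apply, which is what makes a centrally enforced constitution feel quaint.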

Fractured truth, fragmented society

Perhaps, though, all the efforts to eliminate potential harms from LLMs are moot. New research reported by the New York Times revealed a prompting technique that effectively breaks the guardrails of any of these models, whether closed source or open source. Fortune reported that the method had a near 100% success rate against Vicuna, an open-source chatbot built on top of Meta’s original LLaMA.

This means that anyone who wants detailed instructions for how to make bioweapons or defraud consumers could obtain them from the various LLMs. While developers could counter some of these attempts, the researchers say there is no known way to prevent all attacks of this kind.

Beyond the obvious safety implications of this research, there is a growing cacophony of disparate results from multiple models, even when responding to the same prompt. A fragmented AI universe, like our fragmented social media and news universe, is bad for truth and destructive for trust. We are facing a chatbot-infused future that will add to the noise and chaos. The fragmentation of truth and society has far-reaching implications not only for text-based information but also for the rapidly evolving world of digital human representations.


AI: The rise of digital humans

Today chatbots based on LLMs share information as text. As these models increasingly become multimodal — meaning they could generate images, video and audio — their application and effectiveness will only increase.
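As one concrete example of what the shift beyond text looks like in practice, generating a synthetic image already takes only a few lines of code with open tooling. The sketch below uses the Hugging Face diffusers library with a Stable Diffusion checkpoint; the model ID and prompt are illustrative choices, and a GPU makes generation far faster.

```python
# Minimal sketch: text-to-image generation with an open diffusion model.
# Requires: pip install diffusers transformers torch
# The model ID and prompt are examples; move the pipeline to a GPU with
# pipe.to("cuda") if one is available, otherwise CPU works but slowly.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

image = pipe("a photorealistic news anchor at a studio desk").images[0]
image.save("synthetic_anchor.png")
```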

One possible multimodal application can be seen in “digital humans,” which are entirely synthetic creations. A recent Harvard Business Review story described the technologies that make digital humans possible: “Rapid progress in computer graphics, coupled with advances in artificial intelligence (AI), is now putting humanlike faces on chatbots and other computer-based interfaces.” These synthetic figures are rendered in enough detail to closely replicate the appearance of a real human.

According to Kuk Jiang, cofounder of Series D startup company ZEGOCLOUD, digital humans are “highly detailed and realistic human models that can overcome the limitations of realism and sophistication.” He adds that these digital humans can interact with real humans in natural and intuitive ways and “can efficiently assist and support virtual customer service, healthcare and remote education scenarios.”

Digital human newscasters

One additional emerging use case is the newscaster. Early implementations are already underway. Kuwait News has started using a digital human newscaster named “Fedha,” a popular Kuwaiti name. “She” introduces herself: “I’m Fedha. What kind of news do you prefer? Let’s hear your opinions.”

By asking viewers what kind of news they prefer, Fedha introduces the possibility of newsfeeds customized to individual interests. China’s People’s Daily is similarly experimenting with AI-powered newscasters.

Currently, the startup Channel 1 is planning to use gen AI to create a new type of video news channel, one that The Hollywood Reporter described as an AI-generated CNN. As reported, Channel 1 will launch this year with a 30-minute weekly show whose scripts are developed using LLMs. Its stated ambition is to produce newscasts customized for every user. The article notes: “There are even liberal and conservative hosts who can deliver the news filtered through a more specific point of view.”

Can you tell the difference?

Channel 1 cofounder Scott Zabielski acknowledged that, at present, digital human newscasters do not look quite like real humans. He added that it will take a while, perhaps up to three years, for the technology to become seamless: “It is going to get to a point where you absolutely will not be able to tell the difference between watching AI and watching a human being.”

Why might this be concerning? A study reported last year in Scientific American found “not only are synthetic faces highly realistic, they are deemed more trustworthy than real faces,” according to study co-author Hany Farid, a professor at the University of California, Berkeley. “The result raises concerns that ‘these faces could be highly effective when used for nefarious purposes.’”

There is nothing to suggest that Channel 1 will use the convincing power of personalized news videos and synthetic faces for nefarious purposes. That said, technology is advancing to the point where others who are less scrupulous might do so.

As a society, we are already concerned that what we read could be disinformation, that what we hear on the phone could be a cloned voice and that the pictures we look at could be faked. Soon, video — even that which purports to be the evening news — could contain messages designed less to inform or educate than to manipulate opinions more effectively.

Truth and trust have been under attack for quite some time, and this development suggests the trend will continue. We are a long way from the evening news with Walter Cronkite.

Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.



Author: Gary Grossman, Edelman
Source: VentureBeat
