Today, ChatGPT is two months old.
Yes, believe it or not, it was less than nine weeks ago that OpenAI launched what it simply described as an “early demo,” part of the GPT-3.5 series — an interactive, conversational model whose dialogue format “makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”
ChatGPT quickly captured the imagination — and feverish excitement — of both the AI community and the general public. Since then, the tool’s possibilities, as well as its limitations and hidden dangers, have been well documented, and any hints of slowing down its development were quickly dashed when Microsoft announced plans to invest billions more into OpenAI.
Can anyone catch up and compete with OpenAI and ChatGPT? Every day it seems like contenders, both new and old, step into the ring. Just this morning, for example, Reuters reported that Chinese internet search giant Baidu plans to launch an AI chatbot service similar to OpenAI’s ChatGPT in March.
Here are four top players potentially making moves to challenge ChatGPT:
Anthropic: Claude
According to a New York Times article last Friday, Anthropic, a San Francisco startup, is close to raising roughly $300 million in new funding, which could value the company at around $5 billion.
Keep in mind that Anthropic has always had money to burn: Founded in 2021 by several researchers who left OpenAI, it gained more attention last April when, after less than a year in existence, it suddenly announced a whopping $580 million in funding — which, it turns out, mostly came from Sam Bankman-Fried and the folks at FTX, the now-bankrupt cryptocurrency platform accused of fraud. There have been questions as to whether that money could be recovered by a bankruptcy court.
Anthropic, like FTX, has also been tied to the Effective Altruism movement, which former Google researcher Timnit Gebru called out recently in a Wired opinion piece as a “dangerous brand of AI safety.”
Anthropic developed an AI chatbot, Claude — available in closed beta through a Slack integration — that reports say is similar to ChatGPT and in some respects even improves on it. Anthropic, which describes itself as “working to build reliable, interpretable, and steerable AI systems,” created Claude using a process called “Constitutional AI,” which it says is based on concepts such as beneficence, non-maleficence and autonomy.
According to an Anthropic paper detailing Constitutional AI, the process involves a supervised learning phase and a reinforcement learning phase: “As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them.”
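To make that two-phase recipe more concrete, here is a minimal Python sketch of the loop the paper describes. The model calls are stubbed out; the helper functions below are illustrative stand-ins, not Anthropic’s actual code or API.

```python
# Illustrative sketch of Constitutional AI's two training phases.
# Every generate() call here is a stub standing in for a real LLM call.

CONSTITUTION = [
    "Choose the response that is least harmful.",
    "Choose the response that is most helpful and honest.",
]

def generate(prompt: str) -> str:
    """Stand-in for sampling a response from the assistant model."""
    return f"draft response to: {prompt}"

def critique_and_revise(prompt: str, response: str, principle: str) -> str:
    """Phase 1 (supervised): the model critiques its own response against
    a constitutional principle, then rewrites it."""
    critique = generate(f"Critique this response per '{principle}':\n{response}")
    return generate(f"Rewrite the response to address the critique:\n{critique}")

def preference_label(prompt: str, a: str, b: str, principle: str) -> int:
    """Phase 2 (RL from AI feedback): the model picks which of two responses
    better follows the constitution; these AI-generated labels train a
    preference model that serves as the RL reward signal."""
    choice = generate(f"Per '{principle}', which is better for '{prompt}': A or B?")
    return 0 if "A" in choice else 1

# Phase 1: build a fine-tuning set of self-revised responses.
sl_data = []
for prompt in ["How do I pick a lock?"]:
    response = generate(prompt)
    for principle in CONSTITUTION:
        response = critique_and_revise(prompt, response, principle)
    sl_data.append((prompt, response))  # fine-tune the model on these pairs

# Phase 2: collect AI preference labels for reward-model training.
pairs = [("How do I pick a lock?", generate("v1"), generate("v2"))]
labels = [preference_label(p, a, b, CONSTITUTION[0]) for p, a, b in pairs]
```

The point of the design is that the “harmless but non-evasive” behavior comes from the written principles rather than from large volumes of human-labeled harm judgments.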
DeepMind: Sparrow
In a TIME article two weeks ago, DeepMind CEO and cofounder Demis Hassabis said that DeepMind is considering releasing its chatbot Sparrow in a “private beta” sometime in 2023. In the article, Hassabis said it is “right to be cautious” in its release, so that the company can work on reinforcement learning-based features like citing sources — something ChatGPT does not have.
DeepMind, the UK-based subsidiary of Google parent company Alphabet, introduced Sparrow in a paper last September. The model was hailed as an important step toward safer, less-biased machine learning (ML) systems, thanks to its use of reinforcement learning based on feedback from human research participants during training.
DeepMind says Sparrow is a “dialogue agent that’s useful and reduces the risk of unsafe and inappropriate answers.” The agent is designed to “talk with a user, answer questions and search the internet using Google when it’s helpful to look up evidence to inform its responses.”
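As a rough illustration only (this is not DeepMind’s code), that evidence-seeking behavior can be sketched as a decide-search-answer loop, where `model` and `search` are hypothetical stand-ins for the language model and the Google search call:

```python
# Hedged sketch of a search-augmented dialogue agent in the style
# Sparrow is described as using. Both helpers are stubs.

def model(prompt: str) -> str:
    """Stand-in for a large language model call."""
    return f"[model output for: {prompt[:40]}...]"

def search(query: str) -> str:
    """Stand-in for a Google search returning an evidence snippet."""
    return f"[snippet for: {query}]"

def answer(question: str) -> str:
    # Let the model decide whether external evidence would help.
    decision = model(f"Would a web search help answer: {question}? yes/no")
    if "yes" in decision.lower():
        evidence = search(question)
        # Condition the final reply on the retrieved snippet so the
        # answer can point back to its source.
        return model(f"Answer '{question}' using this evidence: {evidence}")
    return model(question)

print(answer("Who won the 2022 World Cup?"))
```

Conditioning the reply on retrieved evidence is what makes source citation possible, which is the feature Hassabis highlighted as missing from ChatGPT.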
However, DeepMind considers Sparrow a research-based, proof-of-concept model that is not ready to be deployed, according to Geoffrey Irving, a safety researcher at DeepMind and lead author of the paper introducing Sparrow.
“We have not deployed the system because we think that it has a lot of biases and flaws of other types,” Irving told VentureBeat last September. “I think the question is, how do you weigh the communication advantages — like communicating with humans — against the disadvantages? I tend to believe in the safety needs of talking to humans … I think it is a tool for that in the long run.”
Google: LaMDA
You might remember LaMDA from last summer’s “AI sentience” whirlwind, when Blake Lemoine, a Google engineer, was fired due to his claims that LaMDA — short for Language Model for Dialogue Applications — was sentient.
“I legitimately believe that LaMDA is a person,” Lemoine told Wired last June.
But LaMDA is still considered one of ChatGPT’s biggest competitors. When Google unveiled LaMDA in 2021, it said in a blog post that the model’s conversational skills “have been years in the making.”
Like ChatGPT, LaMDA is built on Transformer, the neural network architecture that Google Research invented and open-sourced in 2017. The Transformer architecture “produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next.”
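That description compresses two ideas: attention weights that relate words to one another, and a next-word prediction made from the attended representation. The toy NumPy sketch below (illustrative only, using random, untrained embeddings, and in no way Google’s implementation) shows the mechanics:

```python
# Toy demonstration of attention + next-word prediction.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
d = 8                                   # embedding size
E = rng.normal(size=(len(vocab), d))    # random, untrained word embeddings

def attention(Q, K, V):
    """Scaled dot-product attention: each word attends to every word."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)  # softmax
    return weights @ V

tokens = ["the", "cat", "sat"]
X = np.stack([E[vocab.index(t)] for t in tokens])
H = attention(X, X, X)   # each word "pays attention" to the others

# Predict the next word by scoring the last position's representation
# against every vocabulary embedding. With untrained weights the actual
# prediction is meaningless; real models learn these parameters.
logits = H[-1] @ E.T
print("predicted next word:", vocab[int(np.argmax(logits))])
```

A trained Transformer does the same thing at vastly larger scale, with learned parameters, many attention heads and many stacked layers.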
And like ChatGPT, LaMDA was trained on dialogue. According to Google, “During its training, [LaMDA] picked up on several of the nuances that distinguish open-ended conversation from other forms of language.”
A New York Times article from January 20 said that last month, Google founders Larry Page and Sergey Brin met with company executives to discuss ChatGPT, which could be a threat to Google’s $149 billion search business. In a statement, a Google spokeswoman said: “We continue to test our AI technology internally to make sure it’s helpful and safe, and we look forward to sharing more experiences externally soon.”
Character AI
What happens when engineers who developed Google’s LaMDA get sick of Big Tech bureaucracy and decide to strike out on their own?
Well, just three months ago, Noam Shazeer (who was also one of the authors of the original Transformer paper) and Daniel De Freitas launched Character AI, their new AI chatbot technology that allows users to chat and role-play with, well, anyone, living or dead — the tool can impersonate historical figures like Queen Elizabeth and William Shakespeare, for example, or fictional characters like Draco Malfoy.
According to The Information, Character “has told investors it wants to raise as much as $250 million in new funding, a striking price for a startup with a product still in beta.” Currently, the report said, the technology is free to use, and Character is “studying how users interact with it before committing to a specific plan to generate revenue.”
In October, Shazeer and De Freitas told the Washington Post that they left Google to “get this technology into as many hands as possible.”
“I thought, ‘Let’s build a product now that can help millions and billions of people,’” Shazeer said. “Especially in the age of COVID, there are just millions of people who are feeling isolated or lonely or need someone to talk to.”
And, as he told Bloomberg last month: “Startups can move faster and launch things.”
Author: Sharon Goldman
Source: VentureBeat