As organizations around the world look to rapidly evaluate, test, and deploy generative AI into their workflows, whether on the back end, the front end (customer-facing), or both, many decision-makers remain rightfully concerned about lingering issues, among them the problem of AI hallucinations.
But a new startup, Gleen AI, has burst onto the scene claiming to “solve hallucination,” according to Ashu Dubey, CEO and co-founder of Gleen, who spoke to VentureBeat exclusively in a video call interview.
Today, Gleen AI announces a $4.9 million funding round from Slow Ventures, 6th Man Ventures, South Park Commons, Spartan Group, and other venture firms and angel investors, including former Facebook/Meta Platforms VP of product management Sam Lessin. The funding will go toward building out Gleen’s anti-hallucination data layer for enterprises, aimed initially at helping them configure AI models to provide customer support.
The problem with hallucinations
Generative AI tools, including popular, commercially available large language models (LLMs) such as ChatGPT, Claude 2, LLaMA 2, and Bard, are trained to respond to human-entered prompts and queries by producing text associated with the words and ideas the user has entered.
But gen AI models don’t always get it right. In many cases, they produce information that is inaccurate or irrelevant simply because the model’s training has previously associated it with something the user said.
One good recent example: ChatGPT, asked “when has the Earth eclipsed Mars?”, provided a convincing-sounding explanation that is entirely inaccurate (the very premise of the question is flawed; the Earth can’t eclipse Mars).
While these inaccurate responses can at times be humorous or interesting, for businesses trying to rely on them to produce accurate information for employees or users, the results can be hugely risky, especially for highly regulated, life-and-death information in healthcare, medicine, heavy industry, and other fields.
What Gleen does to prevent hallucinations
“What we do is, when we send data [from a user] to an LLM, we give facts that can create a good answer,” Dubey said. “If we don’t believe we have enough facts, we won’t send the data to the LLM.”
Specifically, Gleen has created a proprietary AI and machine learning (ML) layer that is independent of whatever LLM its enterprise customers want to deploy.
This layer securely sifts through an enterprise’s own internal data, turning it into a vector database, and uses that data to improve the quality of the AI model’s answers.
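Gleen has not published the details of its stack, but the ingestion step it describes resembles a standard embedding pipeline. A minimal sketch of that kind of pipeline, assuming the open-source sentence-transformers and FAISS libraries purely for illustration, might look like this:

```python
# Illustrative only: Gleen's actual implementation is unpublished. This
# sketch embeds enterprise documents and indexes them for similarity search.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

# Enterprise knowledge pulled from help docs, FAQs, wikis, chat logs, etc.
documents = [
    "Refunds are processed within 5 business days.",
    "The API rate limit is 100 requests per minute.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works
embeddings = model.encode(documents, normalize_embeddings=True)

# With normalized vectors, inner product equals cosine similarity.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype=np.float32))
```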
Gleen’s layer does the following (a simplified code sketch of the flow appears after the list):
- Aggregates structured and unstructured enterprise knowledge from multiple sources like help documentation, FAQs, product specs, manuals, wikis, forums and past chat logs.
- Curates and extracts key facts, eliminating noise and redundancy. Dubey said this “allows us to glean the signal from the noise.” (Also the origin of Gleen’s name.)
- Constructs a knowledge graph to understand relationships between entities. The graph aids in retrieving the most relevant facts for a given query.
- Checks the LLM’s response against the curated facts before delivering the output. If evidence is lacking, the chatbot will say “I don’t know” rather than risk hallucination.
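Continuing the illustrative sketch above, the confidence gate Dubey describes might look like the following. The retrieval threshold, prompt wording, and the call_llm stand-in are all assumptions for illustration, not Gleen’s disclosed method:

```python
# Hypothetical confidence gate, reusing model, index, and documents from the
# previous sketch. The 0.75 threshold is an arbitrary illustration.
CONFIDENCE_THRESHOLD = 0.75

def answer(query: str) -> str:
    query_vec = np.asarray(
        model.encode([query], normalize_embeddings=True), dtype=np.float32
    )
    scores, ids = index.search(query_vec, 3)  # top-3 nearest facts

    facts = [documents[i] for i, score in zip(ids[0], scores[0])
             if i >= 0 and score >= CONFIDENCE_THRESHOLD]
    if not facts:
        # Be transparent rather than letting the LLM guess.
        return ("I don't have enough information to answer that. "
                "Could you share more detail?")

    prompt = ("Answer using ONLY the facts below. If they are insufficient, "
              "say so.\nFacts:\n"
              + "\n".join(f"- {fact}" for fact in facts)
              + f"\n\nQuestion: {query}")
    return call_llm(prompt)  # call_llm is a stand-in for any LLM API
```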
The AI layer acts as a checkpoint, cross-checking the LLM’s response before it is delivered to the end user. The company says this eliminates the risk of the chatbot providing false or fabricated information; it’s like having a quality control manager for chatbots.
“We only engage the LLM when we have high confidence the facts are comprehensive,” Dubey explained. “Otherwise we are transparent that more information is needed from the user.”
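The article’s description leaves open how that cross-check works. One simple approach, offered purely as an assumption rather than Gleen’s disclosed technique, is to embed the draft response and confirm it stays close to the retrieved facts:

```python
# Hypothetical post-hoc check, reusing the embedding model from the first
# sketch. Gleen has not disclosed how its cross-check actually works.
def grounded(reply: str, facts: list[str], min_sim: float = 0.6) -> bool:
    vecs = model.encode([reply] + facts, normalize_embeddings=True)
    reply_vec, fact_vecs = vecs[0], vecs[1:]
    # Vectors are normalized, so the dot product is cosine similarity.
    return float((fact_vecs @ reply_vec).max()) >= min_sim

# In the pipeline: draft = call_llm(prompt)
# return draft if grounded(draft, facts) else an "I don't know" fallback.
```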
Gleen’s software also enables users to quickly create customer-support chatbots for their customers and adjust the bots’ “personality” depending on the use case.
Gleen’s solution is AI model-agnostic and can support any of the leading models that offer application programming interface (API) integrations.
For those customers wanting the most popular LLM, it supports OpenAI’s GPT-3.5 Turbo model. For those concerned about data being sent to the LLM’s host company, it also supports LLaMA 2 run on private servers (though OpenAI has repeatedly said it does not collect or use customer data to train its models, except when the customer expressly allows it).
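Model-agnosticism of this kind typically comes down to a thin adapter over each provider’s API. As a rough illustration (not Gleen’s code), the call_llm stand-in from the earlier sketch could dispatch either to OpenAI’s hosted GPT-3.5 Turbo or to a self-hosted LLaMA 2 endpoint:

```python
# Illustrative adapter; Gleen's integration layer is not public. Uses the
# pre-1.0 openai SDK, which was current when this article was written.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def call_llm(prompt: str, backend: str = "openai") -> str:
    if backend == "openai":
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # keep the answer close to the supplied facts
        )
        return resp["choices"][0]["message"]["content"]
    if backend == "llama2":
        # Hypothetical self-hosted endpoint; adjust to however the private
        # LLaMA 2 deployment is actually served.
        import requests
        r = requests.post(os.environ["LLAMA2_URL"], json={"prompt": prompt})
        return r.json()["text"]
    raise ValueError(f"unknown backend: {backend}")
```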
For some security-sensitive customers, Gleen offers the option to use a proprietary LLM that never touches the open internet. But Dubey believes LLMs themselves are not the source of hallucination.
“LLMs will hallucinate when not given enough relevant facts to ground the response,” said Dubey. “Our accuracy layer solves that by controlling the inputs to the LLM.”
Early feedback is promising
Right now, the end result for a customer using Gleen is a custom chatbot that can be plugged into their own Slack workspace or surfaced as an end-user-facing support agent.
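The article doesn’t describe Gleen’s Slack integration, but wiring a bot like this into Slack is straightforward with Slack’s Bolt SDK. A minimal sketch, assuming the hypothetical answer function from earlier:

```python
# Minimal Slack hookup using the official slack_bolt SDK. The answer()
# function is the hypothetical pipeline sketched earlier, not Gleen's code.
import os
from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

@app.event("app_mention")
def handle_mention(event, say):
    # Strip the leading @bot mention, then run the question through the layer.
    question = event["text"].split(">", 1)[-1].strip()
    say(answer(question))

if __name__ == "__main__":
    app.start(port=int(os.environ.get("PORT", 3000)))
```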
Gleen AI is already being used by customers spanning quantum computing, crypto and other technical domains where accuracy is paramount.
“Implementing Gleen AI was close to no effort on our side,” said Estevan Vilar, community support at Matter Labs, a company dedicated to making the cryptocurrency Ethereum more enterprise-friendly. “We just provided a few links, and the rest was smooth.”
Gleen is offering prospective customers a free “AI playground” where they can create their own custom chatbot using their company’s data.
As more companies look to tap into the power of LLMs while mitigating their downsides, Gleen AI’s accuracy layer may offer them the path to deploying generative AI at the level of accuracy they and their customers demand.
“Our vision is every company will have an AI assistant powered by their own proprietary knowledge graph,” said Dubey. “This vector database will become as important an asset as their website, enabling personalized automation across the entire customer lifecycle.”
Author: Carl Franzen