Galileo offers new tools to explain why your AI model is hallucinating

Why is a specific generative AI model producing hallucinations when given a seemingly typical prompt? It’s a perplexing question that is often difficult to answer.

San Francisco-based artificial intelligence startup Galileo aims to help its users better understand and explain the output of large language models (LLMs) with a series of new monitoring and metrics capabilities announced today. The new features are part of an update to the Galileo LLM Studio, which the company first announced in June. Galileo was founded by former Google employees and raised an $18 million funding round to help bring data intelligence to AI.

Galileo Studio now allows users not only to evaluate the prompts and context of all the inputs, but also to observe the outputs in real time. With the new monitoring capabilities, the company claims it can provide better insight into why model outputs are being generated, with new metrics and guardrails to optimize LLMs.

“What’s really new here in the last couple of months is we have closed the loop by adding real time monitoring, because now you can actually observe what’s going wrong,” Vikram Chatterji, co-founder and CEO of Galileo, told VentureBeat in an exclusive interview. “It has become an end-to-end product for continuous improvement of large language model applications.”

How LLM monitoring works in Galileo

Modern LLM-powered applications typically make API calls from the application to the LLM to get a response.

Chatterji explained that Galileo intercepts those API calls, both for the input going into the LLM and now also for the generated output. With that intercepted data, Galileo can provide users with near real-time information about the model’s performance as well as the accuracy of its outputs.
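The article doesn’t detail Galileo’s interception layer, but the general pattern is simple to illustrate: wrap the call to the model so the prompt, the generated output and basic timing data are recorded for later analysis. The sketch below is a minimal, hypothetical example of that pattern, not Galileo’s API; `monitored_call`, the `llm_fn` callable and the JSONL log file are stand-ins.

```python
import json
import time
from typing import Callable

# Hypothetical monitoring wrapper: captures the prompt going into the model
# and the generated output, then appends both (with latency) to a log file.
# Illustrates the general interception pattern only, not Galileo's product.
def monitored_call(llm_fn: Callable[[str], str], prompt: str,
                   log_path: str = "llm_log.jsonl") -> str:
    start = time.time()
    output = llm_fn(prompt)                      # the actual call to the LLM
    record = {
        "timestamp": start,
        "latency_s": round(time.time() - start, 3),
        "prompt": prompt,
        "output": output,
    }
    with open(log_path, "a") as f:               # one JSON record per call
        f.write(json.dumps(record) + "\n")
    return output

# Usage with any callable that takes a prompt string and returns text:
# answer = monitored_call(my_llm_client, "Summarize this mortgage clause...")
```

In a real deployment, records like these would feed the dashboards and metrics described below rather than a local file.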

Measuring the factual accuracy of generated AI output often leads to a discussion of hallucination, which occurs when a model generates output that is not accurately based on facts.

Transformer-based generative AI models for text work by predicting what the next word should be in a sequence of words. Those predictions are driven by model weights and probability scores, which are typically hidden completely from the end user.

“Essentially what the LLM is doing is it’s trying to predict the probability of what the next word should be,” he said. “But it also has an idea for what the next alternative words should be and it assigns probabilities to all of those different tokens or different words.”

Galileo hooks into the model itself to get visibility into exactly what those probabilities are, then uses them as the basis for additional metrics to better explain model output and understand why a particular hallucination occurred.
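The article doesn’t describe how Galileo computes its metrics, but the underlying signal, per-token probabilities, can be shown with an open model. The sketch below assumes the Hugging Face `transformers` library and GPT-2: it scores each token in a piece of text by the probability the model assigned to it and flags low-confidence tokens. The 0.05 threshold is arbitrary, and this is not Galileo’s hallucination metric.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Illustrative only: score each token by the probability the model assigned
# to it, and flag the tokens the model was least sure about. This sketches a
# confidence-based signal in the spirit of the article, not Galileo's metric.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The mortgage rate for a 30-year fixed loan is typically"
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits                     # [1, seq_len, vocab_size]

probs = torch.softmax(logits[0, :-1], dim=-1)      # next-token distributions
next_ids = ids[0, 1:]                              # tokens that actually follow
token_probs = probs[torch.arange(next_ids.size(0)), next_ids]

for tok_id, p in zip(next_ids, token_probs):
    flag = "  <-- low confidence" if p.item() < 0.05 else ""  # arbitrary cutoff
    print(f"{tokenizer.decode([int(tok_id)])!r}: {p.item():.3f}{flag}")
```

A per-word view like this is what makes it possible to point at the specific spans a model was unsure about rather than a single pass/fail score.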

By providing that insight, Chatterji said, the goal is to help developers better adjust and fine-tune models to get the best results. He noted that where Galileo really helps is by not just telling developers that the potential for hallucination exists, but also visually explaining, on a per-word basis, which words or prompts the model was confused by.

Guardrails and grounding help developers to sleep at night

The risk of an LLM-based application providing a response that could lead to trouble, whether through inaccuracy, inappropriate language or confidential information disclosure, is one that Chatterji said will keep some developers up at night.

Being able to identify why a model hallucinated and providing metrics around it is helpful, but more is needed.

So, the Galileo Studio update also includes new guardrail metrics. For AI models, a guardrail is a limitation on what the model can generate, in terms of information, tone and language.

Chatterji noted that for organizations in financial services and healthcare, there are regulatory compliance concerns about the information that can be disclosed and the language that is used. With guardrail metrics, Galileo users can set up their own guardrails, then monitor and measure model output to make sure the LLM never goes off the rails.
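As a toy illustration of what such a check might look like, not Galileo’s guardrail metrics, the sketch below scans model output for a pattern a regulated organization would likely never want disclosed (a US Social Security number format) and for a couple of hypothetical banned phrases.

```python
import re

# Toy guardrail check (illustrative only): flag output that appears to leak
# sensitive data or use language the organization has banned.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
BANNED_PHRASES = ["guaranteed returns", "cannot lose"]  # hypothetical terms

def check_guardrails(output: str) -> list[str]:
    violations = []
    if SSN_PATTERN.search(output):
        violations.append("possible SSN disclosed")
    for phrase in BANNED_PHRASES:
        if phrase in output.lower():
            violations.append(f"banned phrase: {phrase}")
    return violations

print(check_guardrails("Your SSN 123-45-6789 qualifies you for guaranteed returns."))
# -> ['possible SSN disclosed', 'banned phrase: guaranteed returns']
```

In practice, checks like these would run on every response before it reaches the user, with violations logged as metrics rather than simply printed.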

Another metric Galileo is now tracking is one Chatterji referred to as “groundedness”: the ability to determine whether a model’s output is grounded in, or within the bounds of, the training data it was provided.

For example, Chatterji explained that if a model is trained on mortgage loan documents but then provides an answer about something completely outside of those documents, Galileo can detect that through the groundedness metric. This lets users know if a response is truly relevant to the context the model was trained on.

While groundedness might sound like another way to determine whether a hallucination has occurred, there is a nuanced difference.

Galileo’s hallucination metric analyzes how confident a model was in its response and identifies specific words it was unsure about, measuring the model’s own confidence and potential confusion.

In contrast, the groundedness metric checks whether the model’s output is grounded in, or relevant to, the actual training data that was provided. Even if a model seems confident, its response could be about something completely outside the scope of what it was trained on.
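One common way to approximate this kind of check, though the article does not say this is how Galileo computes groundedness, is to embed the model’s answer and the reference documents and compare them. The sketch below assumes the `sentence-transformers` library, the `all-MiniLM-L6-v2` model and an arbitrary 0.3 similarity threshold.

```python
from sentence_transformers import SentenceTransformer, util

# Rough stand-in for a groundedness check (not Galileo's metric): embed the
# answer and the reference documents, and treat low similarity to every
# document as a sign the answer is outside the scope of the provided data.
model = SentenceTransformer("all-MiniLM-L6-v2")

reference_docs = [
    "A fixed-rate mortgage keeps the same interest rate for the life of the loan.",
    "Closing costs typically include appraisal, title and origination fees.",
]
answer = "The best way to train for a marathon is to increase mileage gradually."

doc_emb = model.encode(reference_docs, convert_to_tensor=True)
ans_emb = model.encode(answer, convert_to_tensor=True)
scores = util.cos_sim(ans_emb, doc_emb)[0]         # similarity to each document

groundedness = float(scores.max())
print(f"groundedness score: {groundedness:.2f}")
if groundedness < 0.3:                             # threshold for illustration only
    print("answer appears ungrounded in the provided documents")
```

Here the marathon answer would score low against the mortgage documents, mirroring the out-of-scope example Chatterji described.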

“So now we have a whole host of metrics that the users can now get a better sense for exactly what’s going on in production,” Chatterji said.



Author: Sean Michael Kerner
Source: VentureBeat
Reviewed By: Editorial Team
