AI & Robotics News

How iGenius’s GPT for numbers is evolving language models to give enterprise data a voice

Uljan Sharka, founder and CEO of iGenius, has spent the last seven years working on language models and generative AI. To this point, it’s been all about the technology, from the size of the model to how much training data it uses to inference times. And what he’s learned over the past seven years, and three different development cycles, is that it’s not about the technology – it’s about how we serve human needs. And that takes a whole new way of looking at LLMs.

At VB Transform 2023, Sharka spoke with VB CEO Matt Marshall about why enterprise LLMs are a particularly complex nut to crack, and why they’ve taken a GPT-for-numbers approach with their virtual advisor for data intelligence called crystal. In other words, enabling generative AI to respond to data-related queries, not just content.

That’s the foundational principle for designing a solution that ensures even teams with low data literacy have the ability to make better, faster data-driven decisions on a daily basis.

“What’s happening right now in enterprise is that we got obsessed with language models, and we’re right. Language is without a doubt the best way to humanize technology,” he said. “But the way we’re implementing it is still to evolve. First of all, we’re thinking of language models exclusively, when at the enterprise level we still need to deal with a lot more complexity.”

Changing the LLM paradigm from the ground up

Every company has the data it needs in its databases and business intelligence tools to optimize decision-making, but not every team can access those tools, and many lack the skills to ask for what they need or to interpret the data once they have it.

“We started with the idea of helping organizations maximize the value of their goldmine of data that they already possess,” Sharka said. “Our vision is to use language as the future of the interface. Language was the starting point. We didn’t come up with this idea of the composite AI, but as we started building and started talking to companies out there, we were challenged continuously.”

The interface is only a small part of what’s required to make a sophisticated, complex database certified and accessible to users at any level of technical skill.

“We’re innovating the user experience with language, but we’re still keeping the core of numbers technology — data science, algorithms — at the heart of the solution,” he said.

iGenius needed to solve the major issues that plague most gen AI systems — including hallucinations, outdated answers, security, non-compliance and validity. So, to make the model successful, Sharka said, they ended up combining several AI technologies with a composite AI strategy.

Composite AI combines data science, machine learning and conversational AI in one system.

“Our GPT for numbers approach is a composite AI that combines a data integration platform, which includes permissioning, integrating all the existing data sources, with a knowledge graph technology so we could leverage the power of generative AI,” he explained. “First of all, to build a custom data set, we need to help companies actually transform their structured data in a data set that is then going to result in a language model.”
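The idea of turning structured data into a dataset a language model can be tuned on can be sketched in a few lines. This is a hypothetical illustration only, not iGenius’s actual pipeline; the table name, columns, and values are invented:

```python
# Hypothetical sketch: flattening structured rows into natural-language
# records that could feed a language-model training set. Schema and
# helper names are invented for illustration.

def row_to_text(row: dict, table: str) -> str:
    """Turn one database row into a plain-language statement."""
    facts = ", ".join(f"{col} is {val}" for col, val in row.items())
    return f"In table '{table}': {facts}."

rows = [
    {"region": "EMEA", "quarter": "Q2", "revenue_eur": 1_200_000},
    {"region": "APAC", "quarter": "Q2", "revenue_eur": 950_000},
]

# Each record is now text a language model can consume directly.
dataset = [row_to_text(r, "sales") for r in rows]
for record in dataset:
    print(record)
```

A real system would layer permissioning and a knowledge graph on top of this step, as Sharka describes, but the core move is the same: numbers become language.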

crystal’s AI engine, or business knowledge graph, can be used in any industry since it uses transfer learning, meaning that crystal transfers its pre-trained knowledge base, and then incorporates only new industry-related training or language on top of its base. From there, its incremental learning component means that rather than retraining from scratch every time new information is added, it only adds new data on top of its consistent base.
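The incremental-learning idea, adding new knowledge on top of a stable base rather than retraining from scratch, can be shown with a toy stand-in. This is not crystal’s real mechanism; the class and facts below are invented:

```python
# Toy illustration of incremental updates: new facts are merged into an
# existing knowledge base instead of rebuilding it from scratch.

class KnowledgeBase:
    def __init__(self, base_facts: dict):
        self.facts = dict(base_facts)   # the consistent, pre-trained base
        self.updates = 0                # count of incremental merges

    def incremental_update(self, new_facts: dict) -> None:
        """Add only the new facts; the existing base stays untouched
        unless an update explicitly overrides an entry."""
        self.facts.update(new_facts)
        self.updates += 1

kb = KnowledgeBase({"churn": "customer attrition rate"})
kb.incremental_update({"ARPU": "average revenue per user"})
print(len(kb.facts), kb.updates)  # prints "2 1"
```

The point of the pattern is cost: each industry-specific addition touches only the delta, never the base.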

And with a user’s usage data, the system self-trains to tailor its functions to an individual’s needs and wants, putting them in charge of the data. It also offers suggestions based on profile data and continuously evolves.

“We actually make this a living and breathing experience which adapts based on how users interact with the system,” Sharka explained. “This means we don’t just get an answer, and we don’t just get visual information in addition to the text. We get assistance from the AI, which is reading that information and providing us with more context, and then updating and adapting in real-time to what could be the next best option.”

As you click each suggestion, the AI adapts, so that the whole scenario of the user experience is designed around the user in real time. This is crucial because one of the major barriers to less tech-literate users is not understanding prompt engineering.
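A click-adaptive suggestion loop like the one described above can be sketched with a simple ranking over click counts. This is a toy model of the idea, not crystal’s recommendation engine; the prompts are invented:

```python
# Toy click-adaptive suggestions: each click nudges the ranking so the
# prompts a user actually follows surface first next time.

from collections import Counter

class SuggestionEngine:
    def __init__(self, prompts: list):
        # Start every candidate prompt at zero clicks.
        self.clicks = Counter({p: 0 for p in prompts})

    def record_click(self, prompt: str) -> None:
        self.clicks[prompt] += 1

    def top(self, n: int = 3) -> list:
        """Return the n most-clicked prompts, best first."""
        return [p for p, _ in self.clicks.most_common(n)]

engine = SuggestionEngine([
    "Show Q2 revenue by region",
    "Compare churn vs last year",
    "Forecast next quarter",
])
engine.record_click("Forecast next quarter")
print(engine.top(1))  # prints "['Forecast next quarter']"
```

Because the user only clicks rather than writes prompts, no prompt-engineering skill is required, which is exactly the barrier the design is meant to remove.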

“This is important because we’re talking a lot about AI as the technology that is going to democratize information for everyone,” he said. He went on to point out that this is critical because the majority of users in organizations are not data-skilled and don’t know what to ask.

Customers like Allianz and Enel also pushed iGenius from the start toward the idea that a language model should not serve every possible use case, but instead serve a company’s specific domain and private data.

“Our design is all about helping organizations to deploy this AI brain for a dedicated use case, which can be totally isolated from the rest of the network,” he said. “They can then, from there, connect their data, transform it to a language model, and open it with ready-to-use apps to potentially thousands of users.”

Designing LLMs of the future

As enterprise gen AI platforms evolve, new design components will be crucial to consider when implementing a solution that’s user-friendly.

“Recommendation engines and asynchronous components are going to be key to close the skills gap,” Sharka explained. “If we want to democratize AI for real, we need to make it easy for everyone on par. No matter if you know how to prompt or don’t know how to prompt, you need to be able to take all the value from that technology.”

This includes adding components that have succeeded in the consumer space, the kinds of features that users have come to expect in their online interactions, like recommendation engines.

“I think recommendation engines are going to be key to support these models, to hyper-personalize the experience for end users, and also guide users toward a safe experience, but also to avoid domain-based use cases failing,” he said. “When you’re working on specific domains, you really need to guide the users so that they understand that this is technology to help them work, and not to ask about the weather or to write them a poem.”

An asynchronous component is also essential, to make it possible for users to not just talk with the technology, but have the technology talk back to them. For example, iGenius has designed what they call asynchronous data science.

“Now, with gen AI, you can have a business user that has never worked with this type of technology just normally speak to the technology as they do with people, as they do with a data scientist,” Sharka explained. “Then the technology is going to take that task, go into the background, execute, and when the result is ready it will reach the user at their best possible touch point.”
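The round-trip Sharka describes (ask, execute in the background, notify the user when the result is ready) can be sketched with a background thread. This is a minimal stand-in for the pattern, not iGenius’s implementation; all names are invented:

```python
# Minimal sketch of "asynchronous data science": the user's question is
# handed to a background worker, and a callback delivers the answer to
# the user's touch point once the job finishes.

import threading
import time

def run_async_analysis(question: str, notify) -> threading.Thread:
    """Run a (mock) long analysis off the main thread, then push the
    result to the user via the `notify` callback."""
    def worker():
        time.sleep(0.1)  # stand-in for a slow data-science job
        notify(f"Result for '{question}': analysis complete")
    t = threading.Thread(target=worker)
    t.start()
    return t

results = []
t = run_async_analysis("Why did EMEA revenue dip in Q2?", results.append)
t.join()  # in a real system the user would simply be messaged later
print(results[0])
```

In production the callback would hit a messaging channel rather than an in-memory list, which is what lets the system initiate the conversation instead of waiting to be asked.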

“Imagine having crystal message you and initiate the conversation about something important that’s laying in your data.”



Author: Jen Larsen
Source: Venturebeat
