Observe.ai unveils 30-billion-parameter contact center LLM and a generative AI product suite


Conversation intelligence platform Observe.ai today introduced a 30-billion-parameter large language model (LLM) built for contact centers, along with a generative AI suite designed to enhance agent performance. The company claims that, in contrast to general-purpose models like GPT, its proprietary LLM is trained on a vast dataset of real-world contact center interactions.

Although a few similar offerings have been announced recently, Observe.ai emphasized that its model’s distinctive value lies in the calibration and control it provides users. The platform allows users to fine-tune and customize the model to suit their specific contact center requirements.

The company said that its LLM has undergone specialized training on multiple contact center datasets, equipping it to handle various AI-based tasks (call summarization, automated QA, coaching, etc.) customized for contact center teams.

Built on the LLM’s capabilities, Observe.ai’s generative AI suite aims to boost agent performance across all customer interactions: the phone calls, chats, queries, complaints and daily conversations that contact center teams handle.


Observe.AI believes these features will empower agents to provide better customer experiences.

“Our LLM has undergone extensive training on a domain-specific dataset of contact center interactions. The training process involved utilizing a substantial corpus of data points extracted from the hundreds of millions of conversations Observe.ai has processed over the last five years,” Swapnil Jain, CEO of Observe.AI, told VentureBeat.

Jain emphasized the importance of quality and relevance in the instruction dataset, which comprised hundreds of curated instructions across various tasks directly applicable to contact center use cases.

This meticulous approach to dataset curation, he said, improved the LLM’s ability to deliver the accurate and contextually appropriate responses the industry requires.
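Observe.ai has not published its dataset schema, but an instruction-tuning record for a contact center task generally pairs a task directive with a transcript and a target response. The sketch below is purely illustrative; the field names and example content are assumptions, not the company's actual format.

```python
# Hypothetical instruction-tuning record for a contact center task such as
# call summarization. Field names and content are illustrative assumptions,
# not Observe.ai's actual schema.
example_record = {
    "task": "call_summarization",
    "instruction": "Summarize the customer's issue and its resolution in two sentences.",
    "input": (
        "Agent: Thanks for calling, how can I help?\n"
        "Customer: My last invoice was charged twice.\n"
        "Agent: I see the duplicate charge and have issued a refund."
    ),
    "output": (
        "The customer reported a duplicate charge on their latest invoice. "
        "The agent confirmed the error and issued a refund."
    ),
}

# A curated dataset would contain hundreds of such records spanning
# summarization, automated QA, coaching and other contact center tasks.
print(example_record["task"], "->", example_record["output"])
```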

According to the company, its contact center LLM has outperformed GPT-3.5 in initial benchmarks, showing a 35% boost in accuracy in conversation summarization and a 33% improvement in sentiment analysis. Jain said these figures are projected to improve further through continuous training.

Moreover, the LLM was trained exclusively on redacted data, which the company says ensures the absence of personally identifiable information (PII). Observe.AI says it relies on these redaction techniques to protect customer data privacy while harnessing the capabilities of generative AI.
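The article does not describe how that redaction is performed. As a generic illustration of the technique, common PII patterns can be replaced with typed placeholders before a transcript ever enters a training corpus; the patterns and tokens below are simplified assumptions, not Observe.AI's pipeline.

```python
import re

# Simplified PII-redaction sketch: replace common patterns with typed
# placeholders before a transcript is used for training. The patterns and
# placeholder tokens are illustrative assumptions, not Observe.ai's system.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),
}

def redact(transcript: str) -> str:
    """Return the transcript with matched PII replaced by typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        transcript = pattern.sub(f"[{label}]", transcript)
    return transcript

print(redact("My card is 4111 1111 1111 1111 and my email is jane@example.com."))
# -> "My card is [CARD] and my email is [EMAIL]."
```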

Eliminating hallucinations to provide accurate insights and context 

According to Jain, the widespread adoption of generative AI has spurred approximately 70% of businesses from diverse industries to explore its potential benefits, particularly in areas such as customer experience, retention and revenue growth. Contact center leaders are among the enthusiastic adopters eager to take advantage of these transformative technologies.

However, despite their promise, Jain believes that generic LLMs face challenges that impede their effectiveness in contact centers.

These challenges include a lack of specificity and control, an inability to distinguish between correct and incorrect responses, and limited proficiency in understanding human conversation and real-world contexts. Consequently, he said, these generic models, including GPT, often yield inaccuracies and confabulations, also known as “hallucinations,” rendering them unsuitable for business settings.

“Generic models are trained on open internet data. Therefore, these models don’t learn the nuances of spoken human conversation (think disfluencies, repetitions, broken sentences, etc.) and also contend with transcription errors due to speech-to-text models,” said Jain. “So they might be good for general tasks like summarizing a conversation but miss the relevant context for conversations within the contact center.”
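To illustrate the kind of noise Jain is referring to, a raw speech-to-text transcript is typically full of fillers and repeated words that written web text rarely contains. The cleanup sketch below is a generic example of handling such disfluencies, not part of Observe.AI's product.

```python
import re

# Generic sketch of cleaning up spoken-language disfluencies in a transcript.
# The filler list and rules are illustrative assumptions, not Observe.ai's
# speech-processing pipeline.
FILLERS = r"\b(?:um+|uh+|erm|you know|i mean)\b,?\s*"

def normalize_utterance(text: str) -> str:
    """Strip common fillers and collapse immediate word repetitions."""
    text = re.sub(FILLERS, "", text, flags=re.IGNORECASE)
    # Collapse repetitions such as "I I I want" -> "I want".
    text = re.sub(r"\b(\w+)(\s+\1\b)+", r"\1", text, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", text).strip()

print(normalize_utterance("Um, so I I I want to, you know, cancel my my plan"))
# -> "so I want to, cancel my plan"
```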

Jain explained that his company has tackled these challenges by incorporating five years of well-processed and pertinent data into its model. It gathered this data from hundreds of millions of customer interactions to train the model on contact center-specific tasks.

“We have a nuanced and accurate understanding of what ‘successful’ customer experiences look like in real-world contexts. Our customers can then further refine and tailor this to the unique needs of their business,” Jain said. “Our approach provides a full framework for contact centers to calibrate the machine and verify that the actual outputs align with their expectations. This is the nature of a ‘glass box’ AI model that offers complete transparency and engenders trust in the system.”

The company’s new generative AI suite empowers agents throughout the entire customer interaction lifecycle, he added.

The Knowledge AI feature facilitates quick and accurate responses to customer inquiries by eliminating manual searches across numerous internal knowledge bases and FAQs. The Auto Summary feature, meanwhile, lets agents concentrate on the customer, reducing post-call work while ensuring the quality and consistency of call notes.

The Auto Coaching tool delivers personalized, evidence-based feedback to agents immediately after concluding a customer interaction. This facilitates skill improvement and aims to enhance the learning experience for agents, supplementing their regular supervisor-based coaching sessions.
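Observe.ai has not detailed how Knowledge AI retrieves answers. A common way to build this kind of feature is to score knowledge-base entries against the agent's query and surface the closest match, as in the simplified sketch below; the FAQ entries and bag-of-words scoring are illustrative assumptions (production systems would typically use neural embeddings).

```python
import re
from collections import Counter
from math import sqrt

# Generic retrieval sketch for a Knowledge AI-style lookup: score every
# knowledge-base entry against the agent's query and return the best match.
# The FAQ entries and bag-of-words scoring are illustrative assumptions, not
# Observe.ai's implementation; production systems typically use embeddings.
FAQ = {
    "How do I reset my password?": "Send the customer a reset link from the account portal.",
    "How do I dispute a duplicate charge?": "Open a billing ticket and flag the transaction for refund.",
    "How do I change my shipping address?": "Update the address under account settings before the order ships.",
}

def _vector(text: str) -> Counter:
    """Lowercased term counts for a piece of text."""
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_answer(query: str) -> str:
    """Return the answer whose question scores highest against the query."""
    question = max(FAQ, key=lambda q: _cosine(_vector(query), _vector(q)))
    return FAQ[question]

print(best_answer("customer was charged twice, how do I dispute the duplicate?"))
```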

A new benchmark for contact center LLMs

Observe.ai claims that its proprietary model surpasses GPT in consistency and relevance, which it describes as a significant advancement.

“Our LLM only trains on data that is completely redacted of any sensitive customer information and PII. Our redaction benchmarks for this are exemplary for the industry — we avoid over-redaction of sensitive information in 150 million instances across 100 million calls with fewer than 500 reported errors,” explained Jain. “This ensures sensitive information is protected and privacy and compliance are upheld while retaining maximum information for LLM training.”

He also said that the company has implemented a robust protocol for storing all customer data, including data generated by the LLM, in full compliance with regulatory requirements. Each customer account is allocated a dedicated storage partition with its own encryption and unique identifier.
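The article gives only a high-level description of that storage protocol. As a rough sketch of the general pattern, each account can be mapped to its own storage prefix and a key derived from a master secret; the bucket layout and key derivation below are purely illustrative assumptions, not Observe.AI's implementation.

```python
import hashlib
import hmac

# Illustrative sketch of per-account data partitioning: each account gets a
# dedicated storage prefix and a key identifier derived from a master secret.
# This is a generic pattern, not Observe.ai's actual storage protocol; the
# secret, bucket layout and key handling are assumptions for illustration.
MASTER_SECRET = b"replace-with-a-secret-from-a-key-management-service"

def tenant_partition(account_id: str) -> dict:
    """Return a dedicated storage prefix and derived key id for one account."""
    derived = hmac.new(MASTER_SECRET, account_id.encode(), hashlib.sha256).hexdigest()
    return {
        "account_id": account_id,
        "storage_prefix": f"s3://contact-center-data/{account_id}/",  # hypothetical layout
        "key_id": derived[:16],  # in practice, keys live in a KMS, not in code
    }

print(tenant_partition("acct-42"))
```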

Jain said the industry is at a crucial juncture as generative AI flourishes. He emphasized that contact center work is rife with repetitive tasks and believes generative AI will let human talent do their jobs with far greater efficiency and speed, potentially ten times their current capabilities.

“I think the successful disruptors in this industry will focus on creating a generative AI that is fully controllable; trustworthy with complete visibility into outcomes; and secure,” said Jain. “We’re focusing on building trustworthy, reliable and consistent AI that ultimately helps human talent do their jobs better. We aim to create AI that allows humans to focus more on creativity, strategic thinking, and creating positive customer experiences.”


Author: Victor Dey
Source: VentureBeat
