
Merlyn Mind launches education-focused LLMs for classroom integration of generative AI

Merlyn Mind, an AI-powered digital assistant platform, announced the launch of a suite of large language models (LLMs) tailored specifically for the education sector and released under an open-source license.

Merlyn said its LLMs, developed with an emphasis on education workflows and safety requirements, are meant to let teachers and students work with generative models that operate on user-selected curricula.

The LLMs, part of the company’s generative AI platform designed for educational purposes, can interact with specific collections of educational content.

“No education-specific LLMs have been announced to date, i.e., at the actual modeling level. Some education services use general-purpose LLMs (most integrate with OpenAI), but these can encounter the drawbacks we’ve been discussing (hallucinations, lack of ironclad safety, privacy complexities, etc.),” Satya Nitta, CEO and cofounder of Merlyn Mind, told VentureBeat. “By contrast, our purpose-built generative AI platform and LLMs are the first developed and tuned to the needs of education.”


According to Nitta, typical LLMs are trained on vast amounts of internet data, resulting in responses generated from that content. These responses may not align with educational requirements. In contrast, Merlyn’s LLMs rely solely on academic corpora chosen by users or institutions, without accessing the broader internet.

“As education institutions, school leaders and teachers make thoughtful strategic choices on the content and curriculum they use to best help students, Merlyn’s AI platform is built for this reality with a solution that draws from the school’s chosen corpora to overcome hallucinations and inaccuracies with a generative AI experience,” added Nitta.  

Teachers and students can use the education-focused generative AI platform through the Merlyn voice assistant. In the classroom, users can ask Merlyn questions verbally or request it to generate quizzes and classroom activities based on the ongoing conversation. 

The platform also lets teachers generate content such as slides, lesson plans and assessments tailored to their chosen curriculum and aligned materials.

Eliminating hallucinations to provide accurate educational insights

Nitta noted that existing state-of-the-art LLMs often generate inaccurate responses, referred to as hallucinations. OpenAI’s GPT-4, for instance, is an improvement over its predecessors but still hallucinates roughly 20% of the time, he said.

He emphasized that education demands precise, accurate responses grounded in specific content sources, and said the company uses several techniques to keep answers reliable and minimize hallucinations.

When a user submits a request, such as asking a question or issuing a command to generate assessments, the LLM begins by retrieving the most relevant passages from the content used by the school district or educator for teaching. This content is then presented to the language model.

The model generates responses solely based on the provided content and does not draw from its pretraining materials. To verify the accuracy of the response, it undergoes an additional check by an alternate language model to ensure alignment with the original request.

Merlyn said it has fine-tuned the primary model so that when it cannot produce a high-quality response it admits the failure, rather than generating a false response.
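
The workflow described here resembles a standard retrieval-augmented generation loop with a secondary verification pass. The sketch below is only an illustration of that general pattern, not Merlyn's implementation; the function names (answer_from_corpus, retrieve, generate, verify), the prompt wording and the refusal messages are hypothetical.

```python
from typing import Callable, List

def answer_from_corpus(
    question: str,
    corpus: List[str],
    retrieve: Callable[[str, List[str], int], List[str]],  # passage retriever (hypothetical)
    generate: Callable[[str], str],                         # primary LLM call (hypothetical)
    verify: Callable[[str, str, str], bool],                # second-model check (hypothetical)
    top_k: int = 3,
) -> str:
    """Illustrative retrieval-grounded QA loop; not Merlyn's actual code.

    1. Pull the passages most relevant to the request from the school-selected corpus.
    2. Ask the primary model to answer using only those passages.
    3. Have a second model check that the answer is supported by the passages
       and actually addresses the request.
    4. Decline rather than guess if either step fails.
    """
    passages = retrieve(question, corpus, top_k)
    if not passages:
        return "I can't answer that from the selected curriculum."
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using ONLY the passages below. "
        "If they do not contain the answer, say so.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    answer = generate(prompt)
    if not verify(question, context, answer):
        return "I can't answer that reliably from the selected curriculum."
    return answer
```

Passing the retriever, generator and verifier in as callables keeps the grounding-and-check pattern independent of any particular model or vector store.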

“Hallucination-free responses, with attribution to the source material, are commensurate with the need to preserve the sanctity of information during teaching and learning,” said Nitta. “Our approach is already showing that we hallucinate less than 3% of the time, and we are well on our way to nearly 100% hallucination-free responses, which is our goal.”

Privacy, compliance and efficiency

The company said it adheres to rigorous privacy standards, ensuring compliance with legal, regulatory and ethical requirements specific to educational environments. These include the Family Educational Rights and Privacy Act (FERPA), the Children’s Online Privacy Protection Act (COPPA), GDPR, and relevant student data privacy laws in the United States. Merlyn explicitly guarantees that personal information will never be sold.

“We screen for and delete personally identifiable information (PII) we detect in our conversational experiences and transcripts. Our policy is to delete text transcripts of voice audio within six months of creation or within 90 days of termination of our customer contract, whichever is sooner,” said Nitta. “We only retain and use de-identified data derived from text transcripts to improve our services and for other lawful purposes.”
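
As a rough illustration of the “whichever is sooner” retention rule in that policy, the snippet below computes a deletion deadline. The six-month approximation (182 days), the dates and the function itself are assumptions for illustration, not Merlyn's tooling.

```python
from datetime import date, timedelta
from typing import Optional

def transcript_deletion_deadline(created: date, contract_end: Optional[date]) -> date:
    """Illustrative reading of the stated retention rule (not Merlyn's code):
    delete within ~6 months of creation or within 90 days of contract
    termination, whichever comes first."""
    six_months_after_creation = created + timedelta(days=182)  # ~6 months
    if contract_end is None:
        return six_months_after_creation
    return min(six_months_after_creation, contract_end + timedelta(days=90))

# Example with assumed dates: a transcript created 2023-06-01 under a contract
# ending 2023-07-01 is due for deletion by 2023-09-29 (90 days after
# termination), well before the six-month mark around 2023-11-30.
print(transcript_deletion_deadline(date(2023, 6, 1), date(2023, 7, 1)))
```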

The company said its education-focused LLMs are smaller and more efficient than generalist models: Merlyn’s models range from 6 billion to 40 billion parameters, while mainstream general-purpose models run to 175 billion or more.

Nitta also highlighted that the LLMs are more efficient to train and run (inference) than general-purpose models.

“Merlyn’s LLMs’ average latency is around 90 milliseconds per [generated] word compared to 250+ milliseconds per generated word for the larger models. This becomes an enormous advantage if an LLM or multiple LLMs have to be used sequentially to respond to a user query,” he explained. “Using a 175-billion-parameter [model] three times in succession can lead to unreasonably long latencies, poor user experience, much less efficient use of computing resources — leaving a much larger environmental footprint than Merlyn’s LLMs.”
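
A quick back-of-the-envelope calculation shows why per-word latency compounds across a multi-call pipeline. The per-word figures come from Nitta's quote; the 150-word response length and the three sequential calls are illustrative assumptions, not published Merlyn benchmarks.

```python
# Per-word latencies from Nitta's quote; response length and call count are
# illustrative assumptions.
WORDS_PER_RESPONSE = 150
SEQUENTIAL_CALLS = 3

for label, ms_per_word in [("Merlyn-class (~90 ms/word)", 90),
                           ("175B-class (250+ ms/word)", 250)]:
    one_call = WORDS_PER_RESPONSE * ms_per_word / 1000   # seconds per response
    chained = one_call * SEQUENTIAL_CALLS                # three calls in a row
    print(f"{label}: {one_call:.1f}s per response, {chained:.1f}s for a 3-call chain")

# With these assumptions: 13.5s vs 37.5s for a single response,
# 40.5s vs 112.5s when three models run in succession.
```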

A future of opportunities for LLMs in education

Nitta said that generative AI has enormous potential to transform education. But it has to be used correctly, with safety and accuracy paramount. 

“We hope that the developer community will download the models and use them to check the safety of their LLM responses as part of their solutions. In addition to our voice assistant, Merlyn is available in a familiar chatbot interface which responds multimodally (including aligned images), and we are also being requested to make Merlyn available through an API,” he said. “For technically oriented users, we are also contributing some of our education LLMs to open source.”

He said that, as with other AI advances, the most impactful solutions in a specific industry such as education emerge when teams purposefully develop AI technologies for that domain.

“These platforms and solutions will be imbued with a deep awareness of domain-specific workflows and needs and will understand specific contexts and domain-specific data,” Nitta said. “When these conditions are met, generative AI will utterly transform industries and segments, ushering in untold gains in productivity and enabling humans to reach our highest potential.”



Author: Victor Dey
Source: VentureBeat
