
AI21 Labs debuts Contextual Answers, a plug-and-play AI engine for enterprise data

AI21 Labs, the Tel Aviv-based NLP major behind the Wordtune editor, has announced the launch of a plug-and-play generative AI engine to help enterprises drive value from their data assets.

Dubbed Contextual Answers, the offering comes as a dedicated API that can be embedded directly into digital assets to apply large language model (LLM) technology to select organizational data. It can enable employees or customers to retrieve the information they need through a conversational experience, without engaging different teams or software systems.

“This is the first time that this type of technology is offered as a solution that works out of the box and doesn’t require significant effort and resources. We built the entire solution as a plug-and-play capability and optimized each component, allowing our client to get the best results in the industry without investing the time of AI, NLP or data science practitioners,” said Tel Delbari, who leads the API team at AI21 Labs.

Meeting demand for enterprise-specific generative AI

Since the rise of ChatGPT, enterprises of all sizes have been looking for ways to implement LLMs into their data stack and provide internal teams and customers with a faster and more seamless way to interact with accurate, useful information. The usual approach here is to fine-tune existing models and make them work in specific enterprise scenarios, but that method demands significant engineering — something not every company can afford.


With the new Contextual Answers API, AI21 Labs aims to provide a solution that brings grounded, enterprise-specific question answering to life from day one.

“Enterprises can get started by uploading documents to our Studio using the web GUI or by uploading the documents via our API and SDK. After loading the files, they can send questions and get answers via the API. Our API is easy to use and every developer, even without being an NLP or AI expert, can start using it,” Delbari told VentureBeat.

Once the AI engine is up and running, business customers or internal employees can ask any free-form question, be it for internal support, checking policies, or searching for information in large documents or manuals. The model takes that query and delivers a concise answer drawn from the context of the uploaded knowledge base. It works for both structured and unstructured information.

“The model is specifically optimized for adapting to internal jargon, acronyms, project names, etc. As long as the documents contain the information, the model will be able to automatically learn it and its meaning. In addition, it will not mix the organizational knowledge with external knowledge, jargon or information that it learned from the internet, keeping it grounded and truthful to the organizational data and internal language,” Delbari explained.

Efforts toward data access control, security

Since the AI engine supports unlimited uploads of internal corporate data, Delbari said the product accounts for access control and information security.

For access control and role-based content separation, he said, the model can be limited to a specific file, a set of files, a particular folder, or documents matching certain tags or metadata. For security and data confidentiality, he claimed that the company’s AI21 Studio provides a secure, SOC 2-certified environment.

“We promise a separated and protected environment that is already trusted by companies from various industries, including banks and pharmaceutical companies,” he said. He also noted that the AI engine can be used via Amazon SageMaker JumpStart and Amazon Bedrock, allowing enterprises to run the core capability of the product within their virtual private clouds (VPCs).

Moving ahead, the company plans to embed the feature into its writing platform Wordtune, allowing users to quickly retrieve select information from uploaded documents.

Leading data ecosystem players Databricks and Snowflake have been working on similar projects. The former recently announced LakehouseIQ, which uses LLM technology to answer specific queries on lakehouse data with context, while the latter has launched Document AI, a purpose-built multimodal LLM that extracts insights from unstructured documents.



Author: Shubham Sharma
Source: VentureBeat
