Pinecone leads ‘explosion’ in vector databases for generative AI

Vector databases, a relatively new type of database that can store and query unstructured data such as images, text and video, are gaining popularity among developers and enterprises who want to build generative AI applications such as chatbots, recommendation systems and content creation.

One of the leading providers of vector database technology is Pinecone, a startup founded in 2019 that has raised $138 million and is valued at $750 million. The company said Thursday it has “way more than 100,000 free users and more than 4,000 paying customers,” reflecting an explosion of adoption among developers at small companies and large enterprises alike, which Pinecone said are experimenting like crazy with new applications.

By contrast, the company said that in December it had free users numbering only in the low thousands, and fewer than 300 paying customers.

Pinecone held a user conference on Thursday in San Francisco, where it showcased some of its success stories and announced a partnership with Microsoft Azure to speed up generative AI applications for Azure customers.

Bob Wiederhold, the president and COO of Pinecone, said in his keynote talk that generative AI is a new platform that has eclipsed the internet platform and that vector databases are a key part of the solution to enable it. He said the generative AI platform is going to be even bigger than the internet, and “is going to have the same and probably even bigger impacts on the world.”

Vector databases: a distinct type of database for the generative AI era

Wiederhold explained that vector databases allow developers to access domain-specific information that is not available on the internet or in traditional databases, and to update it in real time. This way, they can provide better context and accuracy for generative AI models such as ChatGPT or GPT-4, which are often trained on outdated or incomplete data scraped from the web.

Vector databases enable semantic search: data of any kind is converted into vector embeddings, which can then be queried with “nearest neighbor” search. The retrieved results can be used to enrich the context window of a prompt. This way, “you will have far fewer hallucinations, and you will allow these fantastic chatbot technologies to answer your questions correctly, more often,” Wiederhold said.
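
The workflow Wiederhold describes can be sketched in a few lines. This is purely illustrative: the documents, the hand-made 3-dimensional vectors, and the function names are all stand-ins (real systems embed text with a model and store the vectors in a database such as Pinecone), but the core lookup — nearest neighbor by cosine similarity, then stuffing the hit into the prompt — is the same shape.

```python
import math

# Toy corpus. In practice each document would be embedded by a model;
# the 3-d vectors below are hand-made stand-ins so the sketch stays
# self-contained. All names here are illustrative, not a real API.
documents = [
    "Pinecone raised $138 million",
    "Vector databases power semantic search",
    "Chatbots sometimes hallucinate answers",
]
embeddings = [
    [0.9, 0.1, 0.0],
    [0.1, 0.9, 0.1],
    [0.0, 0.1, 0.9],
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_neighbor(query_vec, vectors):
    """Index of the stored vector most similar to the query --
    the core lookup a vector database performs."""
    return max(range(len(vectors)), key=lambda i: cosine(query_vec, vectors[i]))

# A query embedded "near" the database-flavored document retrieves it,
# and the retrieved text enriches the prompt's context window.
query = [0.2, 0.8, 0.1]
context = documents[nearest_neighbor(query, embeddings)]
prompt = f"Answer using this context:\n{context}\n\nQuestion: what do vector databases enable?"
```

A production vector database replaces this linear scan with approximate nearest-neighbor indexes so the lookup stays fast across billions of vectors.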

Wiederhold’s remarks came after he spoke Wednesday at VB Transform, where he explained to enterprise executives how generative AI is changing the nature of the database, and why at least 30 vector database competitors have popped up to serve the market.

Bob Wiederhold, COO of Pinecone, right, speaks with investor Tim Tully of Menlo Ventures at VB Transform on Wednesday

Wiederhold said that large language models (LLMs) and vector databases are the two key technologies for generative AI.

Whenever new data types and access patterns appear, assuming the market is large enough, a new subset of the database market forms, he said. That happened with relational databases and NoSQL databases, and that’s happening with vector databases, he said. Vectors are a very different way to represent data, and nearest neighbor search is a very different way to access data, he said.

He explained that vector databases have a more efficient way of partitioning data based on this new paradigm, and so are filling a void that other databases, such as relational and NoSQL databases, are unable to fill.

He added that Pinecone has built its technology from scratch, without compromising on performance, scalability or cost. He said that only by building from scratch can you have the lowest latency, the highest ingestion speeds and the lowest cost of implementing use cases.

He also said that the winning database providers are going to be the ones that have built the best managed services for the cloud, and that Pinecone has delivered there as well.

However, Wiederhold also acknowledged Thursday that the generative AI market is going through a hype cycle and that it will soon hit a “trough of reality” as developers move on from prototyping applications that have no ability to go into production. He said this is a good thing for the industry as it will separate the real production-ready, impactful applications from the “fluff” of prototyped applications that currently make up the majority of experimentation.

Signs of cooling off for generative AI, and the outlook for vector databases

Signs of the tapering off, he said, include a decline in June in the reported number of ChatGPT users, as well as Pinecone’s own adoption trends, which show that an “incredible” pickup from December through April has since leveled off. “In May and June, it settled back down to something more reasonable,” he said.

Wiederhold responded to questions at VB Transform about the market size for vector databases. He said it’s a very big or even enormous market, but that it’s still unclear whether it will be a $10 billion market or a $100 billion market. He said that question will get sorted out as best practices get worked out over the next two or three years.

He said that there is a lot of experimentation going on with different ways to use generative AI technologies, and that one big question has arisen from a trend toward larger context windows for LLM prompts. If developers could stick more of their data, perhaps even their entire database, directly in a context window, then a vector database wouldn’t be needed to search data. 
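
A quick back-of-envelope calculation suggests why whole-database stuffing is hard at any real scale. The figures below are illustrative assumptions, not from the article: roughly 4 characters per token, a 32,000-token context window, and a modest 1 GB text corpus.

```python
# Back-of-envelope sketch of the context-window question raised above.
# Assumptions (illustrative, not from the article): ~4 characters per
# token and a 32,000-token context window.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW_TOKENS = 32_000

corpus_bytes = 1 * 1024**3  # a modest 1 GB corpus of text
corpus_tokens = corpus_bytes // CHARS_PER_TOKEN          # ~268 million tokens
prompts_needed = corpus_tokens / CONTEXT_WINDOW_TOKENS   # thousands of full windows
```

Even at this modest corpus size, the data overflows a single context window by a factor in the thousands, which is why retrieval — selecting only the few most relevant passages per query — remains the practical approach.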

But he said that is unlikely to happen. He drew an analogy with humans who, when swamped with information, can’t come up with better answers. Information is most useful when it’s manageably small so that it can be internalized, he said. “And I think the same kind of thing is true [with] the context window in terms of putting huge amounts of information into it.” He cited a Stanford University study that came out this week that looked at existing chatbot technology and found that smaller amounts of information in the context window produced better results. (Update: VentureBeat asked for a specific reference to the paper, and Pinecone provided it here).

Also, he said some large enterprises are experimenting with training their own foundation models, and others are fine-tuning existing foundation models, and both of these approaches can bypass the need for calling on vector databases. But both approaches require a lot of expertise, and are expensive. “There’s a limited number of companies that are going to be able to take that on.”

Separately, at VB Transform on Wednesday, this question about building models or simply piggybacking on top of GPT-4 with vector databases was a key question for executives across the two days of sessions. Naveen Rao, CEO of MosaicML, which helps companies build their own large language models, also spoke at the event, and acknowledged that a limited number of companies have the scale to pay $200,000 for model building and also have the data expertise, preparation and other infrastructure necessary to leverage those models. He said his company has 50 customers, but that it has had to be selective to reach that number. That number will grow over the next two or three years, though, as those companies clean up and organize their data, he said. That promise, in part, is why Databricks announced last week that it will acquire MosaicML for $1.3 billion.


Author: Matt Marshall
Source: Venturebeat
