DataStax CEO: 2025 will be the year we see true AI transformation

DataStax and the Future of AI: Insights from CEO Chet Kapoor

As enterprise leaders grapple with the complexities of implementing generative AI, DataStax CEO Chet Kapoor offers a reassuring perspective: the current challenges are a normal part of technological revolutions, and 2025 will be the year when AI truly transforms business operations.

Kapoor has a front-line view of how enterprise companies are implementing AI, because DataStax offers an operational database that companies use when they take AI applications to production. Customers include Priceline, Capital One and Audi.

Speaking in a recent interview with VentureBeat, Kapoor draws parallels between the current state of generative AI and previous tech revolutions such as the web, mobile and cloud. “We’ve been here before,” he says, noting that each wave typically starts with high enthusiasm, followed by a “trough of disillusionment” as companies encounter implementation challenges.

For IT, product and data science leaders in mid-sized enterprises, Kapoor’s message is clear: While GenAI implementation may be challenging now, the groundwork laid in 2024 will pave the way for transformative applications in 2025.

The path to AI transformation

Kapoor outlines three phases of GenAI adoption that companies typically progress through:

  1. Delegate: Companies start by seeking 30% efficiency gains, or cost cutting, often through tools like GitHub Copilot or internal applications.
  2. Accelerate: The focus shifts to becoming 30% more effective, not just efficient, which means building apps that allow productivity gains.
  3. Invent: This is where companies begin to reinvent themselves using AI technology.

“We think 2024 is a year of production AI,” Kapoor states. “There’s not a single customer that I talk to who will not have some project that they have actually implemented this year.” However, he believes the real transformation will begin in 2025: that’s when we’ll see apps that “will actually change the way we live,” he says.

Overcoming implementation challenges

Kapoor identifies three key areas that companies need to address for successful AI implementation:

  1. Technology Stack: A new, open-source based architecture is emerging. “In 2024, it has to be open-source based, because you have to have transparency, you have to have meritocracy, you have to have diversity,” Kapoor emphasizes.
  2. People: The composition of AI teams is changing. While data scientists remain important, Kapoor believes the key is empowering developers. “You need 30 million developers to be able to build it, just like the web,” he says.
  3. Process: Governance and regulation are becoming increasingly important. Kapoor advocates for involving regulators earlier than in past tech revolutions, while cautioning against stifling innovation.

Looking ahead to 2025

Kapoor strongly advocates for open-source solutions in the GenAI stack, and urges companies to align around them as they ramp up their AI efforts next year. “If the problem is not being solved in open source, it’s probably not worth solving,” he asserts, highlighting the importance of transparency and community-driven innovation for enterprise AI projects.

Jason McClelland, CMO of DataStax, adds that developers are leading the charge in AI innovation. “While most of the world is out there figuring out what is AI, is it real, how does it work,” he says, “developers are building.” McClelland notes that the rate of change in AI is unprecedented, with technology, terminology and audience understanding shifting by maybe 20% a month.

McClelland also offers an optimistic timeline for AI maturation. “At some point over the next six to 12 to 18 months, the AI platform is going to be baked,” he predicts. This perspective aligns with Kapoor’s view that 2025 will be a transformative year and that enterprise leaders have a narrow window to prepare their organizations for the impending shift.

Addressing challenges in generative AI

At a recent event in NYC called RAG++, hosted by DataStax, experts discussed the current challenges facing generative AI and potential solutions. The consensus was that future improvements in large language models (LLMs) are unlikely to come from simply scaling up the pre-training process, which has been the primary driver of advancements so far.

Instead, experts highlighted several innovative approaches they believe will take LLMs to the next level:

  1. Increasing context windows: This allows LLMs to access more precise data related to user queries.
  2. “Mixture of experts” approach: This involves routing questions or tasks to specialized sub-LLMs (see the sketch after this list).
  3. Agentic AI and industry-specific foundation models: These tailored approaches aim to improve performance in specific domains.
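To make the “mixture of experts” routing idea concrete, here is a minimal sketch of sending each query to a specialized sub-model. The expert model names, routing keywords and heuristic below are illustrative placeholders, not any vendor’s API; production systems typically use a small classifier model rather than keyword matching.

```python
# Minimal sketch of "mixture of experts"-style routing: a lightweight router
# picks a specialized sub-model for each query. All names are hypothetical.

EXPERTS = {
    "code":    "code-expert-llm",     # placeholder model tuned for programming
    "finance": "finance-expert-llm",  # placeholder model tuned for finance
    "general": "general-llm",         # fallback model for everything else
}

ROUTING_KEYWORDS = {
    "code":    ("python", "function", "bug", "compile"),
    "finance": ("revenue", "invoice", "forecast", "pricing"),
}

def route(query: str) -> str:
    """Return the expert whose keywords match the query (toy heuristic)."""
    q = query.lower()
    for expert, keywords in ROUTING_KEYWORDS.items():
        if any(keyword in q for keyword in keywords):
            return EXPERTS[expert]
    return EXPERTS["general"]

if __name__ == "__main__":
    for question in ("Why does my Python function crash?",
                     "Forecast next quarter's revenue.",
                     "What's the capital of France?"):
        print(f"{question!r} -> {route(question)}")
```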

OpenAI, a leader in the field, recently released a new series of models called o1, which incorporates chain-of-thought reasoning. This innovation allows the model to approach problems step by step and even self-correct, resulting in significant improvements in complex problem-solving. OpenAI views this as a crucial step in enhancing the “reasoning” capabilities of LLMs, potentially addressing the mistakes and hallucinations that have plagued the technology.
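OpenAI has not published o1’s internals, but the general chain-of-thought idea can be illustrated with ordinary prompting. Below is a minimal sketch, assuming the official `openai` Python package and an `OPENAI_API_KEY` environment variable; the model name is a placeholder, and this is not how o1 works internally.

```python
# Sketch of chain-of-thought prompting: ask the model to reason step by step
# and check its own arithmetic before giving a final answer.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

PROMPT = (
    "A train travels 120 km in 1.5 hours, then 80 km in 1 hour. "
    "What is its average speed for the whole trip?\n"
    "Think step by step, double-check your arithmetic, "
    "and finish with 'Answer: <value>'."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[{"role": "user", "content": PROMPT}],
)
print(response.choices[0].message.content)
```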

While some AI critics remain skeptical about these improvements, studies continue to demonstrate the technology’s impact. Ethan Mollick, a professor at Wharton specializing in AI, has conducted research showing 20-40% productivity gains for professionals using GenAI. “I remain confused by the ‘GenAI is a dud’ arguments,” Mollick tweeted recently. “Adoption rates are the fastest in history. There is value.”

For enterprise leaders navigating the complex landscape of AI implementation, Kapoor’s message is one of optimism tempered with realism. The challenges of today are laying the groundwork for transformative changes in the near future. As we approach 2025, those who have invested in understanding and implementing AI will be best positioned to reap its benefits and lead in their industries.


Author: VB Staff
Source: VentureBeat
Reviewed By: Editorial Team