
IBM sees enterprise customers using ‘everything’ when it comes to AI; the challenge is matching the LLM to the right use case

Over the last 100 years, IBM has seen many different tech trends rise and fall. What tends to win out are technologies where there is choice. At VB Transform 2025 today, Armand Ruiz, VP of AI Platform at IBM, detailed how Big Blue is thinking about generative AI and how its enterprise users are actually deploying the technology. A key theme Ruiz emphasized is that, at this point, it’s not about choosing a single large language model provider or technology.

Increasingly, enterprise customers are systematically rejecting single-vendor AI strategies in favor of multi-model approaches that match specific large language models to targeted use cases.

IBM has its own open-source AI models with the Granite family, but it is not positioning that technology as the only choice, or even the right choice for all workloads. This enterprise behavior is driving IBM to position itself not as a foundation model competitor, but as what Ruiz referred to as a control tower for AI workloads.

“When I sit in front of a customer, they’re using everything they have access to, everything,” Ruiz explained. “For coding, they love Anthropic and for some other use cases like for reasoning, they like o3 and then for large language models customization, with their own data and fine tuning, they like either our Granite series or Mistral with their small models, or even Llama…it’s just matching the large language model to the right use case. And then we help them as well to make recommendations.”

The multi-LLM gateway strategy

IBM’s response to this market reality is a newly released model gateway that provides enterprises with a single API to switch between different large language models while maintaining observability and governance across all deployments.

The technical architecture allows customers to run open-source models on their own inference stack for sensitive use cases while simultaneously accessing public APIs like AWS Bedrock or Google Cloud’s Gemini for less critical applications.

“That gateway is providing our customers a single layer with a single API to switch from one LLM to another and add observability and governance all throughout,” Ruiz said.
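The gateway pattern Ruiz describes can be sketched in a few lines: one entry point fronts many model backends, with logging (observability) and a policy check (governance) applied uniformly. This is an illustrative sketch only; the backend names, routing table, and governance rule are invented for the example and do not reflect IBM's actual implementation.

```python
import logging
from typing import Callable, Dict

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("gateway")

# Stub backends standing in for real inference endpoints.
def granite_backend(prompt: str) -> str:
    return f"[granite] {prompt[:20]}..."

def claude_backend(prompt: str) -> str:
    return f"[claude] {prompt[:20]}..."

ROUTES: Dict[str, Callable[[str], str]] = {
    "granite-3": granite_backend,  # self-hosted stack, for sensitive data
    "claude":    claude_backend,   # public API, e.g. for coding tasks
}

# Hypothetical governance rule: sensitive workloads stay on self-hosted models.
SENSITIVE_ALLOWED = {"granite-3"}

def complete(model: str, prompt: str, sensitive: bool = False) -> str:
    """Single API; switching models is just a string change."""
    if sensitive and model not in SENSITIVE_ALLOWED:
        raise PermissionError(f"{model} not approved for sensitive workloads")
    # Uniform observability: every call is logged the same way.
    log.info("model=%s chars=%d sensitive=%s", model, len(prompt), sensitive)
    return ROUTES[model](prompt)

print(complete("claude", "Refactor this function to be iterative"))
print(complete("granite-3", "Summarize this patient record", sensitive=True))
```

Because every request flows through `complete()`, swapping providers or tightening the governance rule touches one place rather than every calling application.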

The approach directly contradicts the common vendor strategy of locking customers into proprietary ecosystems. IBM is not alone in taking a multi-vendor approach to model selection. Multiple tools have emerged in recent months for model routing, which aim to direct workloads to the appropriate model.

Agent orchestration protocols emerge as critical infrastructure

Beyond multi-model management, IBM is tackling the emerging challenge of agent-to-agent communication through open protocols.

The company has developed ACP (Agent Communication Protocol) and contributed it to the Linux Foundation. ACP is a competing effort to Google’s Agent2Agent (A2A) protocol, which Google likewise contributed to the Linux Foundation just this week.

Ruiz noted that both protocols aim to facilitate communication between agents and reduce custom development work. He expects that eventually, the different approaches will converge, and currently, the differences between A2A and ACP are mostly technical.

The agent orchestration protocols provide standardized ways for AI systems to interact across different platforms and vendors.

The technical significance becomes clear when considering enterprise scale: some IBM customers already have over 100 agents in pilot programs. Without standardized communication protocols, each agent-to-agent interaction requires custom development, creating an unsustainable integration burden.
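The integration math is the point: with a shared message envelope, each of N agents implements one serializer instead of a custom bridge for every pair. The sketch below illustrates the idea with a minimal envelope; the field names are hypothetical and are not the actual ACP or A2A schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AgentMessage:
    """A minimal, shared envelope every agent agrees to speak."""
    sender: str      # originating agent identifier
    recipient: str   # target agent identifier
    intent: str      # e.g. "request" or "response"
    payload: dict    # task-specific body

def send(msg: AgentMessage) -> str:
    # One serializer per agent, regardless of how many peers exist.
    return json.dumps(asdict(msg))

def receive(raw: str) -> AgentMessage:
    # Any agent can parse any other agent's output without custom glue.
    return AgentMessage(**json.loads(raw))

wire = send(AgentMessage("hr-router", "comp-agent", "request",
                         {"question": "What is my bonus target?"}))
msg = receive(wire)
print(msg.recipient)
```

For a fleet of 100 pilot agents, pairwise custom integrations would mean up to 9,900 directed bridges; a common protocol reduces that to 100 implementations of one envelope.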

AI is about transforming workflows and the way work is done

Asked how AI is impacting enterprises today, Ruiz argued that it needs to be about more than just chatbots.

“If you are just doing chatbots, or you’re only trying to do cost savings with AI, you are not doing AI,” Ruiz said. “I think AI is really about completely transforming the workflow and the way work is done.”

The distinction between AI implementation and AI transformation centers on how deeply the technology integrates into existing business processes. IBM’s internal HR example illustrates this shift: instead of employees asking chatbots for HR information, specialized agents now handle routine queries about compensation, hiring, and promotions, automatically routing to appropriate systems and escalating to humans only when necessary.

“I used to spend a lot of time talking to my HR partners for a lot of things. I handle most of it now with an HR agent,” Ruiz explained. “Depending on the question, if it’s something about compensation or it’s something about just handling separation, or hiring someone, or doing a promotion, all these things will connect with different HR internal systems, and those will be like separate agents.”

This represents a fundamental architectural shift from human-computer interaction patterns to computer-mediated workflow automation. Rather than employees learning to interact with AI tools, the AI learns to execute complete business processes end-to-end.

The technical implication: enterprises need to move beyond API integrations and prompt engineering toward deep process instrumentation that allows AI agents to execute multi-step workflows autonomously.
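The HR pattern described above can be sketched as a simple router: classify the query, hand it to a specialized agent, and escalate to a human only when no agent matches. This is a hedged illustration under invented assumptions; the keyword rules and agent names are hypothetical, and a real deployment would classify with a model rather than substrings.

```python
# Specialized agents, each wrapping a different internal HR system (stubs here).
def comp_agent(q: str) -> str:
    return "Routed to compensation system"

def hiring_agent(q: str) -> str:
    return "Routed to requisition system"

# Illustrative topic -> keyword rules; a production router would use an LLM classifier.
KEYWORDS = {
    "compensation": ("salary", "bonus", "pay"),
    "hiring": ("hire", "requisition", "offer"),
}
HANDLERS = {"compensation": comp_agent, "hiring": hiring_agent}

def handle(query: str) -> str:
    q = query.lower()
    for topic, words in KEYWORDS.items():
        if any(w in q for w in words):
            return HANDLERS[topic](q)
    # Escalation path: humans handle only what no agent can.
    return "Escalated to human HR partner"

print(handle("What is my bonus target?"))  # matches "bonus" -> compensation
print(handle("Dispute about my visa"))     # no match -> human escalation
```

The structural point is the fallback: the agent executes the complete workflow end-to-end, and the human step survives only as an explicit escalation branch rather than the default interface.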

Strategic implications for enterprise AI investment

IBM’s real-world deployment data suggests several critical shifts for enterprise AI strategy:

Abandon chatbot-first thinking: Organizations should identify complete workflows for transformation rather than adding conversational interfaces to existing systems. The goal is to eliminate human steps, not improve human-computer interaction.

Architect for multi-model flexibility: Rather than committing to single AI providers, enterprises need integration platforms that enable switching between models based on use case requirements while maintaining governance standards.

Invest in communication standards: Organizations should prioritize AI tools that support emerging protocols like MCP, ACP, and A2A rather than proprietary integration approaches that create vendor lock-in.

“There is so much to build, and I keep saying everyone needs to learn AI and especially business leaders need to be AI first leaders and understand the concepts,” Ruiz said.


Author: Sean Michael Kerner
Source: Venturebeat
Reviewed By: Editorial Team
