
AI agent benchmarks are misleading, study warns


AI agents are emerging as a promising new research direction with potential real-world applications. These agents use foundation models such as large language models (LLMs) and vision language models (VLMs) to take natural language instructions and pursue complex goals autonomously or semi-autonomously. They can use tools such as browsers, search engines and code compilers to verify their actions and reason about their goals.

However, a recent analysis by researchers at Princeton University has revealed several shortcomings in current agent benchmarks and evaluation practices that hinder their usefulness in real-world applications.

Their findings highlight that agent benchmarking comes with distinct challenges, and we can’t evaluate agents in the same way that we benchmark foundation models.

Cost vs accuracy trade-off

One major issue the researchers highlight in their study is the lack of cost control in agent evaluations. AI agents can be much more expensive to run than a single model call because they typically make many calls to stochastic language models, which can produce different results when given the same query.


To increase accuracy, some agentic systems generate several responses and use mechanisms like voting or external verification tools to choose the best answer; sampling hundreds or thousands of responses can push accuracy even higher. While this approach can improve performance, it comes at a significant computational cost. Inference costs are not always a problem in research settings, where the goal is to maximize accuracy.
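
As a rough illustration, the sketch below shows what repeated sampling with majority voting might look like; `call_model` is a hypothetical stand-in for a single LLM call, and the returned call count is a crude proxy for the inference cost that grows with every extra sample.

```python
import collections

def call_model(prompt: str) -> str:
    """Hypothetical single call to a stochastic LLM (provider-specific in practice)."""
    raise NotImplementedError

def majority_vote(prompt: str, n_samples: int = 5) -> tuple[str, int]:
    """Sample the model several times and return the most common answer,
    along with the number of calls made (a rough proxy for inference cost)."""
    answers = [call_model(prompt) for _ in range(n_samples)]
    best_answer, _ = collections.Counter(answers).most_common(1)[0]
    return best_answer, n_samples
```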

However, in practical applications, there is a limit to the budget available for each query, making it crucial for agent evaluations to be cost-controlled. Failing to do so may encourage researchers to develop extremely costly agents simply to top the leaderboard. The Princeton researchers propose visualizing evaluation results as a Pareto curve of accuracy and inference cost and using techniques that jointly optimize the agent for these two metrics.
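
A Pareto curve of this kind keeps only the agents that are not dominated, meaning no other agent is both cheaper and at least as accurate. The sketch below computes such a frontier from (cost, accuracy) pairs; the agent names and numbers are made up for the example and are not results from the paper.

```python
def pareto_frontier(agents):
    """Keep only non-dominated agents: no other agent is both cheaper
    and at least as accurate. `agents` holds (name, cost_per_query, accuracy)."""
    frontier = []
    for name, cost, acc in sorted(agents, key=lambda a: (a[1], -a[2])):
        if not frontier or acc > frontier[-1][2]:
            frontier.append((name, cost, acc))
    return frontier

# Illustrative numbers only, not measurements from the study.
agents = [
    ("single call", 0.002, 0.61),
    ("5-sample voting", 0.010, 0.66),
    ("retry until verified", 0.050, 0.64),  # dominated: pricier and less accurate
    ("100-sample voting", 0.200, 0.67),
]
print(pareto_frontier(agents))
```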

The researchers evaluated the accuracy-cost trade-offs of several prompting techniques and agentic patterns introduced in different papers.

“For substantially similar accuracy, the cost can differ by almost two orders of magnitude,” the researchers write. “Yet, the cost of running these agents isn’t a top-line metric reported in any of these papers.”

The researchers argue that optimizing for both metrics can lead to “agents that cost less while maintaining accuracy.” Joint optimization can also enable researchers and developers to trade off the fixed and variable costs of running an agent. For example, they can spend more on optimizing the agent’s design but reduce the variable cost by using fewer in-context learning examples in the agent’s prompt.
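
As a back-of-the-envelope illustration (with made-up numbers, not figures from the study), the total cost of operating an agent splits into a one-time fixed cost and a per-query variable cost, and a larger upfront investment in agent design can pay off when it lowers the per-query cost at scale.

```python
def total_cost(fixed_cost: float, cost_per_query: float, n_queries: int) -> float:
    """One-time design/optimization cost plus cumulative inference cost."""
    return fixed_cost + cost_per_query * n_queries

# Hypothetical numbers: a pricier optimization pass that trims the prompt
# (fewer in-context examples) wins once query volume is high enough.
print(total_cost(fixed_cost=500.0, cost_per_query=0.004, n_queries=1_000_000))  # 4500.0
print(total_cost(fixed_cost=50.0, cost_per_query=0.012, n_queries=1_000_000))   # 12050.0
```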

The researchers tested joint optimization on HotpotQA, a popular question-answering benchmark. Their results show that joint optimization formulation provides a way to strike an optimal balance between accuracy and inference costs.
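
The paper's exact formulation isn't reproduced here, but the idea of joint optimization can be sketched as scoring each candidate agent design on accuracy minus a cost penalty, where the penalty weight encodes how much an application is willing to pay per accuracy point. The candidate configurations and numbers below are hypothetical.

```python
def score(accuracy: float, cost_per_query: float, cost_weight: float = 20.0) -> float:
    """Scalar objective that rewards accuracy and penalizes per-query cost.
    The weight is application-specific; the value here is arbitrary."""
    return accuracy - cost_weight * cost_per_query

# (accuracy, dollars per query) for hypothetical agent configurations.
candidates = {
    "8 in-context examples": (0.68, 0.012),
    "2 in-context examples": (0.66, 0.004),
}
best = max(candidates, key=lambda name: score(*candidates[name]))
print(best)  # "2 in-context examples" wins under this weighting
```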

“Useful agent evaluations must control for cost—even if we ultimately don’t care about cost and only about identifying innovative agent designs,” the researchers write. “Accuracy alone cannot identify progress because it can be improved by scientifically meaningless methods such as retrying.”

Model development vs downstream applications

Another issue the researchers highlight is the difference between evaluating models for research purposes and developing downstream applications. In research, accuracy is often the primary focus, with inference costs largely ignored. However, when building real-world applications on top of AI agents, inference costs play a crucial role in deciding which model and technique to use.

Evaluating inference costs for AI agents is challenging. Different model providers can charge different amounts for the same model, API prices change regularly, and costs might vary based on developers' decisions; on some platforms, for example, bulk API calls are charged differently.

To address this issue, the researchers created a website that adjusts model comparisons based on token pricing.
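
A minimal sketch of how such a comparison can be normalized, assuming hypothetical model names and per-million-token prices (real prices differ by provider and change over time):

```python
# Hypothetical per-million-token prices; real prices vary by provider and over time.
PRICES_PER_M_TOKENS = {
    "provider_a/model_x": {"input": 5.00, "output": 15.00},
    "provider_b/model_y": {"input": 0.50, "output": 1.50},
}

def query_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single call given token counts and the price table above."""
    price = PRICES_PER_M_TOKENS[model]
    return (input_tokens * price["input"] + output_tokens * price["output"]) / 1_000_000

# Re-pricing the same benchmark run under different providers keeps comparisons adjustable.
print(query_cost("provider_a/model_x", input_tokens=3_000, output_tokens=500))  # 0.0225
```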

They also conducted a case study on NovelQA, a benchmark for question-answering tasks on very long texts. They found that benchmarks meant for model evaluation can be misleading when used for downstream evaluation. For example, the original NovelQA setup makes retrieval-augmented generation (RAG) look much worse relative to long-context models than it actually is in a real-world scenario. Their findings show that RAG and long-context models were roughly equally accurate, while long-context models were about 20 times more expensive.

Overfitting is a problem

In learning new tasks, machine learning (ML) models often find shortcuts that allow them to score well on benchmarks. One prominent type of shortcut is overfitting, where the model exploits patterns specific to the benchmark tests and produces results that do not translate to the real world. The researchers found that overfitting is a serious problem for agent benchmarks, which tend to be small, typically consisting of only a few hundred samples. This issue is more severe than data contamination in training foundation models, because knowledge of test samples can be directly programmed into the agent.

To address this problem, the researchers suggest that benchmark developers should create and keep holdout test sets that are composed of examples that can’t be memorized during training and can only be solved through a proper understanding of the target task. In their analysis of 17 benchmarks, the researchers found that many lacked proper holdout datasets, allowing agents to take shortcuts, even unintentionally. 

“Surprisingly, we find that many agent benchmarks do not include held-out test sets,” the researchers write. “In addition to creating a test set, benchmark developers should consider keeping it secret to prevent LLM contamination or agent overfitting.”
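
As a minimal sketch of what carving out a held-out split could look like for a benchmark maintainer, the function and split ratio below are purely illustrative; in practice the held-out examples would be kept private rather than shipped with the public set.

```python
import random

def split_benchmark(samples: list, holdout_fraction: float = 0.3, seed: int = 0):
    """Split benchmark samples into a public development set and a held-out
    test set that stays secret from agent developers."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_holdout = int(len(shuffled) * holdout_fraction)
    return shuffled[n_holdout:], shuffled[:n_holdout]  # (public, held-out)

public_set, holdout_set = split_benchmark([{"id": i} for i in range(100)])
```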

They also note that different types of holdout samples are needed, depending on the desired level of generality of the task the agent accomplishes.

“Benchmark developers must do their best to ensure that shortcuts are impossible,” the researchers write. “We view this as the responsibility of benchmark developers rather than agent developers, because designing benchmarks that don’t allow shortcuts is much easier than checking every single agent to see if it takes shortcuts.”

The researchers examined WebArena, a benchmark that evaluates the performance of AI agents at solving problems on different websites. They found several shortcuts in the training datasets that allowed agents to overfit to tasks in ways that would easily break with minor changes in the real world. For example, an agent could make assumptions about the structure of web addresses without considering that they might change in the future or that they would not hold on other websites.
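
As a toy illustration (not taken from WebArena itself), the difference between a shortcut and a more robust behavior might look like this: the first function hard-codes an assumed URL pattern, while the second resolves the link from the page the agent is actually looking at.

```python
# Brittle shortcut: assumes every site exposes products at /product/<id>,
# which breaks if the URL scheme changes or the agent visits a different site.
def product_url_brittle(base_url: str, product_id: int) -> str:
    return f"{base_url}/product/{product_id}"

# More robust: look the link up on the current page instead of assuming its shape.
# `page_links` maps link text to URLs, e.g. as scraped from the rendered page.
def product_url_from_page(page_links: dict[str, str], product_name: str) -> str | None:
    return page_links.get(product_name)
```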

These errors inflate accuracy estimates and lead to over-optimism about agent capabilities, the researchers warn.

With AI agents being a new field, the research and developer communities still have much to learn about how to test the limits of these systems, which might soon become an important part of everyday applications.

“AI agent benchmarking is new and best practices haven’t yet been established, making it hard to distinguish genuine advances from hype,” the researchers write. “Our thesis is that agents are sufficiently different from models that benchmarking practices need to be rethought.”


Author: Ben Dickson
Source: VentureBeat
Reviewed By: Editorial Team
