Google study shows LLMs abandon correct answers under pressure, threatening multi-turn AI systems
July 16, 2025
A new study by researchers at Google DeepMind and University College London examines how large language models (LLMs) form, maintain and lose confidence in their answers. The findings reveal striking similarities between the cognitive biases of LLMs and humans, while also highlighting stark differences.
The research shows that LLMs can be overconfident in their own answers yet quickly lose that confidence…