
Standards for Belief Representations in LLMs

As large language models (LLMs) continue to demonstrate remarkable abilities across various domains, computer scientists are developing methods to understand their cognitive processes, particularly concerning how (and if) LLMs internally represent their beliefs about the world. However, this field currently lacks a unified theoretical foundation to underpin the study of belief in LLMs. This article begins filling this gap by proposing adequacy conditions for a representation in an LLM to count as belief-like. We argue that, while the project of belief measurement in LLMs shares striking features with belief measurement as carried out in decision theory and formal epistemology, it also differs in ways that should change how we measure belief. Thus, drawing from insights in philosophy and contemporary practices of machine learning, we establish four criteria that balance theoretical considerations with practical constraints. Our proposed criteria include accuracy, coherence, uniformity, and use, which together help lay the groundwork for a comprehensive understanding of belief representation in LLMs. We draw on empirical work showing the limitations of using various criteria in isolation to identify belief representations.

Keywords: LLMs, Belief, Decision theory, Formal epistemology, AI, Radical Interpretation, Explainable AI, Interpretability

Daniel A. Herrmann, Benjamin A. Levinstein


Faculty of Philosophy, University of Groningen, Groningen, The Netherlands

University of Illinois at Urbana-Champaign, Champaign, USA

2025

Minds and Machines

ISSN: 0924-6495
Year, Volume (Issue): 2025, 35(1)