Google, UK researchers defend mental state claims in AI

Tech in Asia·2025-06-19 17:00

🔍 In one sentence

Researchers argue that attributing mental states to Large Language Models (LLMs) can be valid under certain conditions, challenging the deflationist view that such attributions are always mistaken.

🏛️ Paper by:

Institute of Philosophy, University of London; Google, Paradigms of Intelligence Team; Leverhulme Centre for the Future of Intelligence, University of Cambridge

✏️ Authors:

Alex Grzankowski et al.

🧠 Key discovery

The study critiques two common deflationary arguments against attributing mental states to LLMs and introduces a moderate alternative, 'modest inflationism', which allows such attributions when they are understood in less metaphysically demanding terms. This opens space for a more nuanced understanding of how LLMs relate to users.

📊 Surprising results

Key stat: Users often interact with LLMs as if they possess thoughts and emotions, raising the question of whether such attributions are entirely unjustified.

Breakthrough: The paper proposes that attributions of mental states like beliefs or desires can be legitimate if interpreted without strong metaphysical commitments.

Comparison: This contrasts with deflationist views, which reject all mental state attributions to LLMs as fundamentally flawed.

📌 Why this matters

The study challenges broad skepticism toward AI by suggesting that limited mental state attributions can improve interactions with LLMs. Users may engage more thoughtfully if they perceive the system as having a form of agency, which can be useful in contexts like support services or education.

💡 What are the potential applications?

AI in Customer Service: Designing LLMs to reflect basic mental state attributions may improve user interaction.

Mental Health Support: LLMs could take on supportive roles if users feel more at ease treating them as responsive entities.

Education: Educational tools may become more effective when conversational AI is framed as having goal-directed behavior.

⚠️ Limitations

The research notes that while basic mental state attributions may be defensible, ascribing complex states like consciousness is more contentious and not clearly supported.

👉 Bottom line:

LLMs may not have minds like humans, but treating them as having limited mental states could help improve how people interact with these systems.

📄 Read the full paper: Deflating Deflationism: A Critical Perspective on Debunking Arguments Against LLM Mentality
