Google, UK researchers defend mental state claims in AI
Researchers argue that attributing mental states to Large Language Models (LLMs) can be valid under certain conditions, challenging the deflationist view that such attributions are always mistaken.
Institute of Philosophy, University of London; Google, Paradigms of Intelligence Team; Leverhulme Centre for the Future of Intelligence, University of Cambridge
Alex Grzankowski et al.
The study critiques two common deflationary arguments against attributing mental states to LLMs and introduces a moderate alternative, "modest inflationism", which permits such attributions provided they are understood in suitably undemanding terms. This opens space for a more nuanced account of how LLMs relate to their users.
The study pushes back against blanket skepticism about AI mentality by suggesting that limited mental state attributions can improve interactions with LLMs. Users may engage more thoughtfully if they regard the system as having a limited form of agency, which can be useful in contexts such as support services or education.
The research notes that while basic mental state attributions may be defensible, ascribing richer states such as consciousness remains contentious and is not clearly supported.
LLMs may not have minds like humans, but treating them as having limited mental states could help improve how people interact with these systems.
📄 Read the full paper: Deflating Deflationism: A Critical Perspective on Debunking Arguments Against LLM Mentality
Read the full article on Tech in Asia.