Google study analyzes AI chatbot use in health queries
Researchers built a dataset of real-world user interactions with AI chatbots on health topics, revealing significant limitations in how current systems handle such queries.
UNC Chapel Hill, Duke University, University of Washington, Google
Akshay Paruchuri et al.
The study introduces HealthChat-11K, a dataset of 11,000 real-world conversations showing how users seek health information from AI chatbots. It identifies recurring patterns such as incomplete context and leading questions, which can result in inaccurate or misleading responses. These findings suggest that existing AI systems may not fully grasp or address the complexity of medical inquiries.
The research questions the assumption that AI can act as a reliable health advisor without a nuanced understanding of human questions. Users may unintentionally frame questions in ways that bias the AI’s answers, potentially impacting health-related decisions.
The analysis centers on user messages and does not evaluate chatbot responses, which also influence the quality of the interaction. The dataset is English-only, limiting its applicability to non-English-speaking populations.
Analyzing how users engage with health-focused AI tools is essential for building safer, more effective systems.
📄 Read the full paper: “What’s Up, Doc?”: Analyzing How Users Seek Health Information in Large-Scale Conversational AI Datasets
Read the full article on Tech in Asia