
Oxford Study Warns AI Chatbots Are Unsafe for Medical Advice

OXFORD: AI chatbots pose risks to people seeking medical advice despite excelling at standardized medical knowledge tests, according to a study published in Nature Medicine by researchers at the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences.

The randomized trial, involving 1,298 UK participants, found that those using LLMs (GPT-4o, Llama 3, and Command R+) made no better medical decisions than control groups relying on internet searches or their own judgment.

Study co-author Dr. Rebecca Payne, a GP and Clarendon-Reuben Doctoral Scholar, stated: “Despite all the hype, AI just isn’t ready to take on the role of physician. Patients need to be aware that asking a large language model about their symptoms can be dangerous, giving wrong diagnoses and failing to recognize when urgent help is needed.”

Participants assessed ten medical scenarios developed by doctors, ranging from a severe headache after a night out to a new mother feeling constantly exhausted. For each, they identified potential conditions and recommended a course of action, such as visiting a GP or attending A&E.

Researchers identified three key challenges: users didn’t know what information the LLMs needed, the models gave markedly different answers to slight variations in a question, and users struggled to separate good information from bad when both appeared in the same response.

Lead author Andrew Bean noted: “In this study, we show that interacting with humans poses a challenge even for top LLMs.”

Senior author Dr. Adam Mahdi emphasized, “The disconnect between benchmark scores and real-world performance should be a wake-up call. We cannot rely on standardized tests alone. AI systems need rigorous testing with diverse, real users.”

The research was supported by Prolific, Oxford’s AI Government and Policy Research Programme (funded by the Dieter Schwarz Stiftung), the Royal Society, UKRI, and the NIHR Oxford Biomedical Research Centre.

Amita Parul

Amita Parul is an independent journalist with experience in reporting and commentary on current events and sociopolitical developments. She contributes original reporting and analysis that aligns with Tea4Tech’s editorial standards for accuracy, transparency, and context, focusing on business and technology trends. Amita covers emerging news stories and provides explanatory insights that help readers understand both the events and their implications.
