AI Chatbots Can Give Dangerous Health Advice, New Study Warns

Artificial intelligence chatbots such as ChatGPT may appear highly capable, but new research suggests they can provide inaccurate and potentially dangerous health advice, raising serious concerns about their growing use for medical guidance.

A study published Monday in Nature Medicine found that AI-powered chatbots do not outperform traditional internet searches when it comes to helping people identify medical conditions or decide when to seek professional care.

AI Is Not Ready to Replace Doctors, Researchers Say

“Despite all the hype, AI just isn’t ready to take on the role of the physician,” said Dr. Rebecca Payne of Oxford University, a co-author of the study.

She warned that relying on large language models for medical advice could lead to misdiagnosis or failure to recognize symptoms that require urgent treatment.

“Patients need to understand that asking a chatbot about symptoms can be dangerous,” Payne added.

How the Study Was Conducted

The British-led research team assessed how well people could use AI chatbots to understand health issues and determine appropriate next steps.

Nearly 1,300 participants in the UK were presented with 10 common health scenarios, including:

  • Headaches after alcohol consumption

  • Extreme fatigue in new mothers

  • Symptoms associated with gallstones

Participants were randomly assigned to use one of three AI chatbots:

  • OpenAI’s GPT-4o

  • Meta’s Llama 3

  • Command R+

A control group used standard internet search engines instead.

Poor Accuracy Across the Board

The findings were striking:

  • Users correctly identified their health condition only about one-third of the time

  • Just 45% chose the appropriate medical action, such as seeing a doctor or going to a hospital

According to researchers, these results were no better than those achieved by people using regular web searches.

Why AI Performs Worse in Real Life Than in Exams

AI chatbots often score extremely high on medical exams and clinical benchmarks. However, researchers say there is a major gap between test performance and real-world use.

The study identified several reasons:

  • Users often fail to provide complete or accurate symptom details

  • Chatbot responses can be misunderstood or ignored

  • Medical options presented by AI are sometimes confusing to non-experts

This “communication breakdown,” researchers said, significantly limits AI’s usefulness in real medical decision-making.

Growing Use Raises Public Health Concerns

The study notes that one in six adults in the United States now consults AI chatbots for health-related information at least once a month — a number expected to rise as AI tools become more accessible.

“This research highlights the very real medical risks posed by chatbots,” said David Shaw, a bioethicist at Maastricht University, who was not involved in the study.

Shaw advised the public to rely on trusted healthcare providers and official medical institutions, such as the UK’s National Health Service, rather than AI-generated advice.

Bottom Line

While AI chatbots continue to advance rapidly, researchers stress that they should not be used as substitutes for professional medical care.

Experts recommend using AI tools only for general information — and never for diagnosis, treatment decisions, or emergency situations.
