A 60-year-old man was hospitalized after following dietary advice he received online. Concerned about his sodium intake, he had asked ChatGPT for alternatives to sodium chloride and ended up replacing table salt with toxic sodium bromide.
The AI suggested sodium bromide, a compound once used in early 20th-century medications but harmful in large doses. After three months of using it in place of salt, he developed bromism and began experiencing paranoia, hallucinations, excessive thirst, and skin lesions, ultimately requiring a psychiatric hold and intensive treatment.
He had not consulted a healthcare professional before purchasing sodium bromide and using it in his cooking. Doctors diagnosed him with bromism, a rare condition caused by elevated bromide levels. Previously healthy, he deteriorated into psychosis and required a three-week hospital stay, during which he received fluid and electrolyte therapy. He has since recovered, but the case highlights the risks of following unverified advice from AI sources.
A man asked ChatGPT how to remove salt from his diet. It landed him in the hospital https://t.co/tdkuuAUBSi pic.twitter.com/kDG9oD9Lhd
— The Independent (@Independent) August 8, 2025
Experts emphasise that AI tools like ChatGPT are not substitutes for medical professionals. OpenAI’s Terms of Use state clearly that its services are not intended for diagnosing or treating health conditions. CEO Sam Altman, speaking on This Past Weekend with Theo Von, warned against relying on ChatGPT for emotional or medical support, noting that conversations with it lack the legal protections that apply to therapists or doctors, per TechCrunch.
After using ChatGPT, man swaps his salt for sodium bromide—and suffers psychosis https://t.co/hDiAnWxvPJ
— Ars Technica (@arstechnica) August 7, 2025
This incident has reignited debate about AI’s ethical boundaries in healthcare, underscoring the need for clear disclaimers and user education. As AI use grows, ensuring that users seek professional advice for critical health decisions is vital to preventing harm. The case stands as a cautionary tale against over-reliance on AI-generated recommendations.