Man who asked ChatGPT about cutting out salt from his diet was hospitalized with hallucinations


The case of a man hospitalized with hallucinations illustrates the danger of relying on unverified online sources for medical advice. He had asked ChatGPT, an artificial intelligence chatbot, for a low-sodium meal plan and subsequently developed serious health problems that specialists attribute to the bot's unverified guidance.


This incident is a sobering reminder that while AI can be a powerful tool, it lacks the foundational knowledge, context, and ethical safeguards needed to provide health and wellness information. Its output reflects the data it was trained on; it is not a substitute for professional medical expertise.

The man, who wanted to reduce his salt intake, received a comprehensive dietary plan from the chatbot. The AI's guidance consisted of dishes and ingredients that, while low in salt, severely lacked vital nutrients. The diet's extreme restrictions caused his sodium levels to drop rapidly and dangerously, leading to a condition called hyponatremia. Such an electrolyte imbalance can have serious and immediate effects on the body, affecting everything from cognitive function to heart health. Symptoms such as confusion, disorientation, and hallucinations were direct results of this imbalance, underscoring the severity of the AI's erroneous recommendations.

The incident highlights a fundamental problem with how many people use generative AI. Unlike a search engine, which offers a list of sources for users to evaluate, a chatbot presents a single, seemingly authoritative answer. This format can falsely convince users that the information is accurate and reliable, even when it is not. The AI delivers an assertive response without disclaimers or warnings about possible risks, and it cannot follow up with questions about a user's particular health concerns or medical history. The absence of this crucial feedback loop is a significant weakness, especially in critical fields such as healthcare and medicine.

Medical and AI specialists have responded swiftly, stressing that the problem lies not in the technology itself but in its misuse. They advise that AI be treated as a supplement to expert guidance, not a substitute for it. The algorithms behind these chatbots are designed to detect patterns in vast datasets and generate plausible text; they cannot comprehend the intricate, interconnected workings of the human body. A human healthcare professional, by contrast, is trained to evaluate individual risk factors, account for existing conditions, and offer a comprehensive, personalized treatment plan. The AI's inability to perform this essential diagnostic and relational role is its most significant limitation.

The case also raises important ethical and regulatory questions about the development and deployment of AI in health-related fields. Should these chatbots be required to include prominent disclaimers about the unverified nature of their advice? Should the companies that develop them be held liable for the harm their technology causes? There is a growing consensus that the “move fast and break things” mentality of Silicon Valley is dangerously ill-suited for the health sector. The incident is likely to be a catalyst for a more robust discussion about the need for strict guidelines and regulations to govern AI’s role in public health.

The allure of turning to AI for a quick and easy solution is understandable. In a world where access to healthcare can be expensive and time-consuming, a free and immediate answer from a chatbot seems incredibly appealing. However, this incident serves as a powerful cautionary tale about the high cost of convenience. It shows that when it comes to the human body, shortcuts can lead to catastrophic results. The advice that led to a man's hospitalization was based not on malice, but on a profound and dangerous lack of understanding of the consequences of its own recommendations.

In the wake of this event, the conversation around AI’s place in society has shifted. The focus is no longer just on its potential for innovation and efficiency, but also on its inherent limitations and the potential for unintended harm. The man’s medical emergency is a stark reminder that while AI can simulate intelligence, it does not possess wisdom, empathy, or a deep understanding of human biology.

Until it does, its use should be restricted to non-critical applications, and its role in healthcare should remain in the domain of providing information, not making recommendations. The ultimate lesson is that in matters of health, the human element, the judgment, experience, and care of a professional, remains irreplaceable.

By Roger W. Watson