A new research paper suggests that medical advice from Bing/Microsoft Copilot AI may be capable of causing severe harm, including death, in 22% of cases. The research, conducted by scientists in Germany and Belgium, highlights significant inaccuracies and risks associated with relying on AI for medical information. Researchers asked the chatbot a range of commonly asked medical questions, covering 50 of the most prescribed drugs and medicines in America.
Out of 500 generated answers, only 54% were found to be scientifically accurate. The study revealed that 42% of the AI-generated answers could lead to serious harm, including 22% that could result in death. These findings present another setback for the field of AI-powered search, which has faced criticism for producing odd and error-laden results.
AI’s risky medical information
Previous examples include Google AI’s bizarre recommendations, such as advising users to “eat rocks” or providing incorrect company contact information. According to the study, inaccuracies in AI-generated medical advice could potentially lead to the first AI-related deaths due to misinformation.
The safest approach remains consulting a medical professional, as AI systems like Microsoft Copilot currently fall short of providing reliable medical advice. The researchers note that while AI systems could become the first point of contact for people who cannot access high-quality medical advice, the potential for harm remains significant. The research underscores the importance of cautious use of AI in critical areas like healthcare.
Until these systems are significantly improved, users are advised to rely on traditional medical consultations to avoid the risk of severe harm.
Noah Nguyen is a multi-talented developer who brings a unique perspective to his craft. Initially a creative writing professor, he turned to development work for the ability to work remotely. He now lives in Seattle, spending his time hiking and drinking craft beer with his fiancée.