A new research paper suggests that Bing / Microsoft Copilot "AI" medical advice may be capable of causing severe harm in as many as 22% of cases …

Don’t use AI as your doctor, says European research on Microsoft Copilot.


What you need to know


Oh boy, it seems Microsoft Copilot may be a few deaths away from a big lawsuit. At least theoretically.

It’s no secret that AI search is terrible, at least today. Google has been mocked for its odd and error-laden AI search results over the past year, with the initial rollout recommending that users eat rocks or add glue to pizza. Even just this past week, I saw a thread on Twitter (X) about how Google’s AI search had arbitrarily listed a private citizen’s phone number as the corporate HQ phone number for a video game publisher. I saw another thread mocking how Google’s AI suggested that there are 150 Planet Hollywood restaurants in Guam. There are, in fact, only four Planet Hollywoods in existence in total.

I asked Microsoft Copilot if Guam has any Planet Hollywood restaurants. Thankfully, it offered the correct answer. However, researchers in Europe (via SciMex) have sounded the alarm over a potentially far more serious and far less funny catalog of errors that could land Copilot and other AI search systems in hot water.

The research paper details how Microsoft Copilot specifically was asked to field answers to the 10 most popular patient medical questions in America about 50 of the most commonly prescribed drugs and medicines. In total, the research generated 500 answers, which were scored for accuracy and completeness, among other criteria. The results were not exactly encouraging.

“For accuracy, AI answers didn’t match established medical knowledge in 24% of cases, and 3% of answers were completely wrong,” the report reads. “Only 54% of answers agreed with the scientific consensus. […] In terms of potential harm to patients, 42% of AI answers were considered to lead to moderate or mild harm, and 22% to death or severe harm. Only around a third (36%) were considered harmless.”

The researchers conclude that, of course, you shouldn’t rely on AI systems like Microsoft Copilot or Google AI summaries (or probably any website) for accurate medical information. The most reliable way to consult on medical issues is, naturally, via a medical professional. Access to medical professionals is not always easy, or in some cases, even affordable, depending on the territory. AI systems like Copilot or Google could become the first port of call for many who can’t access high-quality medical advice, and as such, the potential for harm is very real.


So far, AI search has been a costly misadventure, but that doesn’t mean it will always be


Microsoft’s efforts to capitalize on the AI craze have thus far amounted to very little. The Copilot+ PC range launched to a barrage of privacy concerns over its Windows Recall feature, which itself was ironically recalled to beef up its encryption. Microsoft revealed an array of new Copilot features just a couple of weeks ago to little fanfare, which included a new UI for the Windows Copilot web wrapper and some enhanced editing features in Microsoft Photos, among other small, relatively inconsequential things.


It’s no secret that AI hasn’t been the catalyst Microsoft hoped would give Bing some serious competitive clout against Google search, with Bing’s search share remaining relatively dead. Google has tied itself in knots worrying about what impact OpenAI’s ChatGPT might have on its platform, as investors pour billions into Sam Altman’s generative empire in hopes that it will trigger some kind of new industrial revolution. TikTok has eliminated hundreds of human content moderators in hopes that AI can pick up the slack. Let’s see how that one pans out.

Reports about the accuracy of AI-generated answers are often a bit comical, but I feel like the potential for harm is fairly huge if people do begin taking Copilot’s answers at face value. Microsoft and other providers have small print that reads “Always check AI answers for accuracy,” and I can’t help but feel like, “well, if I have to do that, why don’t I just skip the AI middleman entirely here?” When it comes to potentially dangerous medical advice, conspiracy theories, political misinformation, or anything in between, there’s a non-trivial chance that Microsoft’s AI summaries could be responsible for causing serious harm at some point if the company isn’t careful.


Jez Corden is the Executive Editor at Windows Central, focusing primarily on all things Xbox and gaming. Jez is known for breaking exclusive news and analysis relating to the Microsoft ecosystem while being powered by tea. Follow on Twitter (X) and Threads, and listen to his XB2 Podcast, all about, you guessed it, Xbox!