Medical device companies are rapidly embedding artificial intelligence into surgical and diagnostic equipment. While marketing that intelligence as a growth driver, they are also introducing new failure modes and liability risks into hospitals, and reports of suspected injuries and malfunctions submitted to regulators are on the rise.
According to a Reuters review of safety and legal records, along with interviews with doctors, nurses, scientists, and regulators, reports received in recent years by the U.S. Food and Drug Administration (FDA) have described issues such as misleading surgical navigation, failure to flag abnormal heart rhythms, and prenatal ultrasounds "misidentifying body parts."
In one case, Acclarent, a medical device company then owned by healthcare giant Johnson & Johnson, announced in 2021 that it had integrated a machine learning algorithm into its TruDi navigation system for sinus surgeries.
Citing unconfirmed FDA reports, Reuters states that after the AI integration, reports of malfunctions and adverse events related to TruDi reached at least 100, up from single digits before AI was added, and multiple lawsuits alleging injury have followed.
Even as these risks surface, regulatory capacity is under strain. Citing five current and former FDA scientists, Reuters reported that as applications for AI medical devices surge, the FDA is finding it harder to "keep pace" following staff reductions in key teams.
The TruDi Case: Surge in Reports Post-AI, Lawsuits Allege "Misleading Navigation"

Johnson & Johnson's Acclarent introduced machine learning into the TruDi Navigation System in 2021 to assist ear, nose, and throat surgeons in sinus-related procedures.
Reuters noted that the AI functionality was added approximately three years after the device's initial market release. Before AI integration, the FDA received 7 unconfirmed reports of device malfunctions and 1 report of patient injury. After AI was added, the FDA received at least 100 unconfirmed reports of malfunctions and adverse events.
Citing related reports, Reuters stated that at least 10 people were injured between late 2021 and November 2025, with most incidents allegedly linked to TruDi incorrectly indicating the position of surgical instruments within the skull. Reported consequences included cerebrospinal fluid leaking from the nose, accidental penetration of the skull base, and strokes resulting from unexpected damage to major arteries.
Two stroke patients filed lawsuits in Texas alleging that the TruDi system's AI contributed to their injuries. One lawsuit claimed the product "may have been safer" before AI integration. Integra LifeSciences, which acquired Acclarent from Johnson & Johnson in 2024, stated there is "no credible evidence" of a causal link between the TruDi system or its AI technology and the alleged injuries.
Signals Like "Misidentifying Body Parts": FDA Reports Point to Various AI-Enhanced Devices

The FDA emphasizes that adverse event and malfunction reports have inherent limitations: they may lack detail, be redacted to protect commercial secrets, or cover a single incident multiple times, and the reports alone cannot establish causation.
Nevertheless, a Reuters tally shows that of the reports submitted to the FDA between 2021 and October 2025, at least 1,401 involved products on the FDA's inventory of 1,357 AI-enabled devices (a list the agency itself notes is incomplete). At least 115 of those reports mentioned software, algorithm, or programming issues.
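Counts like these can in principle be reproduced against the FDA's public data. The Python sketch below is a minimal illustration, not Reuters' methodology: it queries the openFDA device-event API (the public interface to the MAUDE reporting database) for the number of reports naming a given device brand. The helper name, brand name, and date window are placeholders drawn from the figures in this article.

    # Minimal sketch: tally FDA device adverse-event/malfunction reports
    # via the public openFDA API. Illustrative only; not the methodology
    # Reuters used. Brand name and dates are placeholders from the article.
    import requests

    OPENFDA_DEVICE_EVENTS = "https://api.fda.gov/device/event.json"

    def count_reports(brand_name: str, start: str, end: str) -> int:
        """Count reports the FDA received for a device brand within a
        date window (dates given as YYYYMMDD strings)."""
        query = (
            f'device.brand_name:"{brand_name}"'
            f" AND date_received:[{start} TO {end}]"
        )
        resp = requests.get(
            OPENFDA_DEVICE_EVENTS,
            params={"search": query, "limit": 1},
            timeout=30,
        )
        resp.raise_for_status()
        # openFDA returns the total matching records in meta.results.total
        return resp.json()["meta"]["results"]["total"]

    if __name__ == "__main__":
        print(count_reports("TruDi", "20210101", "20251031"))

As the FDA's own caveats above imply, such raw counts can include duplicates and unverified claims, so they indicate reporting volume rather than confirmed harm.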
Reuters cited one example: a report submitted to the FDA in June 2025 said that in the Sonio Detect prenatal ultrasound software, the algorithm "mislabeled fetal structures and associated them with the wrong body parts"; the report did not claim patient injury. The manufacturer, Samsung Medison, said the report "does not indicate any safety issues."
Another set of signals comes from heart rhythm monitoring. Reuters reported that at least 16 reports alleged that Medtronic's AI-assisted cardiac monitoring devices failed to identify abnormal rhythms or asystole (cardiac arrest); these reports did not mention injury.
Medtronic told Reuters that upon review, the device missed detecting an abnormal event only once, which "did not lead to patient injury." The company stated that some events were related to data display issues rather than the AI itself but declined to elaborate on each case individually.
Recall Study: AI Device Recall Rate Double the Overall Rate, Defects Emerge Faster

Beyond individual reports, recall data is also heightening investor focus on "post-market risk profiles."
Citing a research letter published in JAMA Health Forum in August 2025, Reuters reported that researchers from Johns Hopkins, Georgetown, and Yale found that 60 AI-enabled medical devices authorized by the FDA were associated with 182 product recalls. Among these, 43% of the recalls occurred less than a year after authorization.
The study found this incidence of recalls to be roughly double the rate for all devices authorized under similar FDA rules.
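Taken at face value, the quoted figures support some back-of-the-envelope arithmetic. In the sketch below, the 1,357-device denominator is an assumption borrowed from the FDA inventory cited earlier in this article, not a number from the JAMA letter, so the resulting shares are rough indications only.

    # Back-of-the-envelope arithmetic on the recall figures quoted above.
    # The 1,357 denominator is the FDA's (incomplete) AI-device inventory
    # cited earlier in this article, not a figure from the JAMA letter.
    ai_devices_recalled = 60     # AI-enabled devices associated with recalls
    total_recalls = 182          # product recalls tied to those devices
    early_share = 0.43           # share of recalls within a year of authorization
    ai_devices_listed = 1357     # FDA inventory of AI-enabled devices

    print(f"{ai_devices_recalled / ai_devices_listed:.1%} of listed devices recalled")  # ~4.4%
    print(f"{total_recalls / ai_devices_recalled:.1f} recalls per recalled device")     # ~3.0
    print(f"~{round(total_recalls * early_share)} recalls within a year")               # ~78

The roughly three recalls per affected device suggests the same products were pulled back repeatedly, which is consistent with the study's point that defects in AI devices tend to emerge quickly after authorization.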
Approval Pathways and Guardrails: Most AI Devices Approved Without Patient Testing, Traditional Framework Questioned

Reuters pointed out that while the FDA typically requires clinical trials for new drugs, medical devices are subject to different review pathways.
Dr. Alexander Everhart, a lecturer at Washington University School of Medicine in St. Louis and a medical device regulation expert, told Reuters that most AI-enabled devices entering the market do not require testing on patients. Instead, they often meet regulatory requirements by referencing previously authorized devices that lacked AI capabilities.
Everhart believes the uncertainties AI introduces are challenging established practice. He told Reuters that the FDA's traditional regulatory approach for medical devices is "ill-equipped" to ensure that AI devices are safe and effective, and expressed concern that, in practice, oversight leans heavily on manufacturer self-policing, raising questions about whether the regulatory guardrails are sufficient.
Regulatory Capacity Under Pressure: Authorizations Double, Workload Rises After Key Team Cuts

Reuters reported that there are currently at least 1,357 FDA-authorized medical devices using AI, double the number prior to 2022.
Citing informed sources, Reuters reported that in early 2025 the Trump administration began dismantling a key AI team as part of cost-cutting efforts led by Elon Musk. Approximately 15 of the roughly 40 AI scientists in the FDA's Division of Imaging, Diagnostics and Software Reliability (DIDSR) were laid off or chose to leave, and the Digital Health Center of Excellence, responsible for AI device policy, lost about one-third of its staff, roughly 30 people.
Some former employees said that after the staff reductions, workloads for some reviewers nearly doubled, noting that "when resources are insufficient, problems are more likely to be missed."
An HHS spokesperson, Andrew Nixon, told Reuters that the FDA applies the same rigorous standards to AI-assisted medical devices, including those using machine learning, as it does to other products. He said patient safety is the highest priority and that the FDA continues to recruit and develop talent in digital health and AI.