
New study shows risks in how AI is used in medicine

Digital systems help with diagnoses, but harbor risks due to human error
AI supports doctors in everyday clinical practice, but the right approach remains crucial. © pixabay/Tung Nguyen
By: Wissensland
AI is increasingly helping in hospitals. However, a study from Dresden shows that errors are often caused by the way the technology is used. A guide aims to change this.

We have long relied on digital helpers in our everyday lives. Navigation apps and voice assistants lead the way, and artificial intelligence is now finding its way into medicine as well. But how safe are these systems when they interact with humans?

Researchers at the Technische Universität Dresden (TUD) show that this is precisely where risks arise. Their guidelines are intended to help recognize and avoid these risks more effectively.

Risks arise in interaction

The team led by Stephen Gilbert from the Else Kröner Fresenius Center for Digital Health at TUD has now published its findings in NEJM AI. The key finding is that it is not only the technology that determines safety, but also how people use it.

AI systems today support radiologists in diagnosing cancer, for example, or help with treatment decisions. However, errors can occur. One example is so-called automation bias: specialists adopt AI recommendations too quickly and no longer check them critically. Another problem is misplaced trust. Some rely too heavily on the technology; others ignore it. Both can worsen treatment outcomes. Stress caused by complex systems, or a gradual loss of specialist knowledge, can also increase risks.

Such effects are known internationally. Studies from the USA and Europe show similar patterns. Misinterpretations or overconfidence occur time and again, particularly with decision-making software. Experts therefore speak of a central problem in human-machine interaction.


Guidelines for greater safety

The Dresden researchers have developed seven specific recommendations aimed at manufacturers and testing bodies. The goal is to identify risks at an early stage. One important point is a clear allocation of roles: it must be evident what the AI does and what the human decides. Results should be presented in an understandable way. Training is also needed, as are fallback solutions in the event of system failures. Even after approval, the work does not end. Use should continue to be monitored, and misuse or false confidence must be identified and corrected.

The research team worked together with partners from Oxford and Geneva. In the long term, the participants want to ensure that human factors play a greater role in the approval of AI medical devices. The study shows a clear trend: AI can improve medicine, but it also brings with it new risks. The human factor remains crucial. Patients will only really benefit if technology and users work well together.


Original publication:
Rebecca Mathias, Anne Schmitt, Mateo Campos, Baptiste Vasey, Sebastian Lorenz, Peter McCulloch, Stephen Gilbert: Evaluation of Human Factors-Related Risks in AI-Enabled Medical Devices: A Practical Guide, NEJM AI, 2026.

