Digital helpers have long been part of our everyday lives: navigation apps and voice assistants lead the way. Artificial intelligence is now finding its way into medicine as well. But how safe are these systems when they interact with humans?
Researchers at Technische Universität Dresden (TUD) show that this is precisely where risks arise. Their guidelines are intended to help recognize and avoid them.
Risks arise in interaction
The team led by Stephen Gilbert of the Else Kröner Fresenius Center for Digital Health at TUD has now published its findings in NEJM AI. The central message: safety is determined not only by the technology itself but also by how people use it.
AI systems today support radiologists in diagnosing cancer, for example, or assist with treatment decisions. But errors can occur. One example is so-called automation bias: specialists adopt AI recommendations too readily and stop checking them critically. Another problem is miscalibrated trust. Some clinicians rely too heavily on the technology, while others ignore it; both can worsen treatment outcomes. Stress caused by complex systems, or a gradual erosion of specialist knowledge, can also increase risk.
These effects are documented internationally: studies from the USA and Europe show similar patterns. Misinterpretation and overconfidence recur time and again, particularly with decision-support software. Experts therefore describe this as a central problem of human-machine interaction.