Intervention through technology in decisions with significant societal impact
Cutting-edge research investigates: (How) can we trust artificial intelligence?
Participants of the TRUST workshop in Vienna (Prof. Dr. Dr. René Riedl from the University of Applied Sciences Upper Austria, 4th person from the left in the first row) Photo credit: FH OÖ
'Trust, but verify', or so the old saying goes. And from a scientific perspective, it's not entirely wrong. Trust acts as a social glue that holds people and societies together; without it, interpersonal interactions become significantly more difficult. In recent years, artificial intelligence (AI) has stepped onto the stage as a new actor, one with the potential to mislead humans. This raises an important question for an international research team co-led by René Riedl from the Steyr Campus of the University of Applied Sciences Upper Austria: how strong is our trust in AI?
Humans naturally seek out those they can trust. This enables collective achievements such as democratic governance and economic cooperation, things individuals could not accomplish alone. Artificial intelligence (AI) is also a collective achievement. But do we trust it?
A high level of opportunities, but also of risks
AI has the potential to improve our lives in many ways. It can efficiently generate all kinds of textual and visual content and automate tasks, all of which increases productivity. But there are also significant risks. Algorithms used in processes such as recruitment can contain hidden biases, just like humans do. And when it comes to disinformation, AI poses a serious threat. Recent examples from several countries have shown just how strongly AI can influence political opinion-forming.
Trust in AI, trust in its creators?
Ultimately, trust is at stake: not only trust in AI systems themselves, but also in the people, companies, and institutions that develop, deploy, and regulate them. As AI increasingly influences decisions with far-reaching consequences, from healthcare to warfare, trust and accountability must become central priorities.
Article published in a renowned international journal
These topics are examined in detail in a recently published open-access article in the renowned journal Humanities and Social Sciences Communications. The article was written by an international team co-led by Prof. Dr. Dr. René Riedl from the Steyr Campus of the University of Applied Sciences Upper Austria, who heads the joint master's program 'Digital Business Management' offered by the University of Applied Sciences Upper Austria and Johannes Kepler University Linz.
The ideas for the article emerged during a TRUST workshop in Vienna, co-initiated by the University of Applied Sciences Upper Austria. At this event, René Riedl and Prof. Dr. Frank Krueger from George Mason University (USA) brought together interdisciplinary experts, including researchers from the University of Oxford (UK) and Stanford University (USA), to explore trust in technological contexts.
Boost for 'Trust and AI' as a research field
One thing is certain: AI will change our world. To shape this transformation responsibly, all stakeholders, including developers, investors, regulators, and researchers, must work together to ensure that trust remains at the heart of the AI era or is restored where necessary.
In their newly published article, the authors present a transdisciplinary approach intended to stimulate further international research on trust in AI. This approach, referred to as the TrustNet Framework, describes a dynamic network of technical systems, institutions, and human relationships. It frames trust in AI not as an isolated property but as a collectively created, context-dependent phenomenon.