RETRAI Workshop

Requirements Engineering for Trustworthy Artificial Intelligence

Panel: Defining Trustworthiness for Intelligent Agents

Roundtable · Q&A with audience · Interdisciplinary

This panel brings together experts across psychology, machine learning, and software/requirements engineering to unpack what trustworthiness means for human-facing AI agents. The discussion will cover how the stakes in different domains shape priorities, how to balance precision and verifiability with values and obligations, where the main governance and methodological gaps lie, and what role the research community and society more broadly should play in shaping and implementing trustworthy AI behavior.

Panelists


Amel Bennaceur

School of Computing at the Open University, UK

Dr. Amel Bennaceur is an associate professor and director of research at the School of Computing at the Open University, UK. Her research focuses on formally grounded, practice-informed software engineering methods and techniques for ensuring the trustworthiness and resilience of intelligent systems. She has published this work in 60+ papers in top journals and conferences (TOSEM, TSE, Middleware, and ECSA) in areas such as software engineering and distributed systems, and has contributed to several EU and EPSRC research projects.


Nikita Dvornik

Palona AI, Montréal, Canada

Dr. Nikita Dvornik is a Lead Research Scientist working on AI agents for e-commerce. He has 10 years of experience across computer vision, robotics, and autonomous driving, and holds a PhD from INRIA, France. His current work focuses on testing and benchmarking LLM agents, with an emphasis on trustworthy and reliable AI systems.


Reem Ayad

Department of Psychology, University of Toronto

Reem Ayad is a PhD candidate and SSHRC Doctoral Fellow whose research examines the moral consequences of human–AI relationships and aims to codify socio‑relational norms unique to human–AI interaction. She holds a BSc from the University of Toronto and an LLB from University College London and is a Graduate Affiliate at the Schwartz Reisman Institute for Technology and Society.

Moderator


Isobel Standen

Department of Philosophy, University of York

Isobel Standen is a third-year PhD student in the Department of Philosophy and a researcher at the Centre for Assuring Autonomy. Her work sits at the intersection of philosophy and computer science, where she contributes to projects that bring multidisciplinary researchers together. Isobel's research explores human understanding and decision making, specifically human 'common sense', and how the absence of this capacity may be a leading cause of errors made by AI systems.