Workshop on Human Behavioral Aspects of (X)AI

Venue: Birkbeck Clore Management Centre

Interest in explainable AI has increased significantly in recent years. Explanations provide transparency for what are often black-box procedures, which is viewed as critical for fostering trust in AI in real-world practice. Explainable Artificial Intelligence (XAI) research aims to improve users' understanding of the decision-making processes of AI systems, which in turn should build trust in those systems.

Research on XAI from the machine learning (ML) perspective focuses on developing methods for the automated production of explanations of AI decisions, without sufficient investigation into how these explanations affect human recipients' beliefs. On the other hand, research in psychology and cognitive science on explanations and trust has only just begun to explore the domain of AI explainability. Bringing these two communities together is crucial if we are to build explainable AI systems that are helpful to human recipients and that influence people's beliefs in the ways we intend.

The goal of the workshop is to bring these two communities together, to facilitate communication, and to raise awareness of how people process explanations. This would mark an important step towards building tools that better communicate AI prediction processes to human recipients.


For further details, please see 

Contact name:

  • Dr Marko Tesic