UCL School of Management

Research seminar

Professor Tamer Boyaci, ESMT Berlin

Date

Wednesday, 22 May 2024
11:00 – 12:30
Location
Research Group
Operations and Technology
Description

UCL School of Management is delighted to welcome Professor Tamer Boyaci, who will host a research seminar discussing “Beyond the Black Box: Unraveling the Role of Explainability in Human-AI Collaboration”.

Abstract

While AI-based decision tools are increasingly employed for their ability to enhance collaborative decision-making processes, challenges such as overreliance or underreliance on AI outputs undermine their ability to achieve complementary team performance. To address these concerns, explainable AI models have been studied increasingly. Despite the promise of bringing transparency and an enhanced understanding of algorithmic decision-making processes, evidence from recent empirical studies has been quite mixed. In this talk, we bring a theoretical perspective to the pivotal role of AI explainability in mitigating these challenges. We present an analytical model that incorporates the defining features of human and machine intelligence, capturing the limited but flexible nature of human cognition alongside imperfect machine recommendations, with explanations that reflect the quality of those predictions. We then systematically investigate the multifaceted impact of explainability on decision accuracy, underreliance, overreliance, and users’ cognitive loads. Our results indicate that while low explainability levels have no impact on decision accuracy or reliance levels, they lessen the cognitive burden on the decision-maker. Higher explainability levels, on the other hand, enhance accuracy by mitigating overreliance, but at the expense of higher underreliance. Furthermore, the incremental impact of explainability relative to a black-box system is greater when the decision-maker is more cognitively constrained, the task is more difficult, or the stakes are lower. Surprisingly, we find that higher explainability levels can escalate the overall cognitive burden, especially when the decision-maker is particularly pressed for time and initially doubts the machine’s quality, precisely the scenarios where explanations are expected to be most needed. By delineating the comprehensive effects of explainability on decision outcomes and cognitive effort, our study contributes to the understanding of how to design effective human-AI systems in diverse decision-making environments.

Open to
Staff
Cost
Free