Expert-Informed Human-AI Collaboration in Radiology

Problem: Healthcare professionals find current explainable artificial intelligence (XAI) systems for diagnostic radiology confusing and hard to trust. Further, different providers have different needs.

Solution: Develop the foundational knowledge necessary to inform the design of XAI explanations that can meet the needs of stakeholders at varying levels of expertise.

Funding provided by DARPA via NRL.


XAI ‘explanations’ like these are inadequate and do not meet the needs of radiology stakeholders.

Study 1: Expert-Informed Explainable AI for Radiology

Why do explanations from radiology AI systems fail to earn human trust, despite the systems' remarkable accuracy?

A core problem with relying on deep learning for diagnosis is that radiologists often do not know how the AI arrives at its conclusions, making those conclusions hard to trust. Justifications need to be given in human terms and be understandable, in context, to all of the stakeholders who interact with the system.

We use a mix of experiments, ethnography, interviews, and surveys to uncover the methods by which radiology stakeholders of different expertise levels communicate and justify their diagnoses.

We find that XAI explanations which mirror human processes of reasoning and justification with evidence may be more useful and trustworthy than traditional visual explanations like heat maps. By delineating these communication strategies, our research can inform XAI explanations for radiology that are sensitive to the knowledge, needs, and goals of radiology practitioners.


Study 2: Enhancing Explainable AI for Image Classification

How can we use human-centered design approaches to improve XAI explanations for image classification?

By producing AI-based explanations modeled after the explanations given by human experts, we can make AI systems that are more trustworthy and human-understandable.

In several online experiments, we test the efficacy of XAI explanations modeled after experts and based on cognitive theories of information transfer.

We find significant user-experience improvements (trust, learning, and preference) when the areas an explanation focuses on are shifted to align with those humans find most informative. Specifically, our novel averaged and contrastive explanations outperform a base explanation. These insights can be used to improve interactions with complex AI systems regardless of expertise level.

Contrastive Local Interpretable Model-Agnostic Explanations (LIME) for bird identification.
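As an illustration, here is a minimal Python sketch of how a contrastive LIME explanation of this kind could be assembled with the open-source lime package. The stand-in image, the two-class placeholder classifier, and the helper name contrastive_mask are illustrative assumptions, not the implementation used in the study.

import numpy as np
from lime import lime_image

# Stand-in inputs so the sketch runs end to end; swap in a real bird photo
# and a trained classifier in practice.
rng = np.random.default_rng(0)
bird_image = rng.random((224, 224, 3))  # placeholder RGB image in [0, 1]

def predict_fn(images):
    # Placeholder classifier returning probabilities for two "species";
    # replace with e.g. model.predict(images) from a trained network.
    images = np.asarray(images)
    p = images.mean(axis=(1, 2, 3))
    return np.stack([p, 1.0 - p], axis=1)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    bird_image,
    predict_fn,
    top_labels=2,       # keep the two most likely species
    num_samples=1000,   # perturbed images used to fit LIME's local model
)
predicted, runner_up = explanation.top_labels[:2]

# Base explanation: superpixels supporting the top prediction.
_, base_mask = explanation.get_image_and_mask(
    predicted, positive_only=True, num_features=5, hide_rest=False)

# Contrastive explanation (sketch): superpixels that support the predicted
# species but not the runner-up, i.e. the evidence distinguishing the two.
_, alt_mask = explanation.get_image_and_mask(
    runner_up, positive_only=True, num_features=5, hide_rest=False)
contrastive_mask = np.logical_and(base_mask > 0, alt_mask == 0)

The contrastive mask keeps only the regions that favor the predicted species over its closest competitor, the kind of "why this class rather than that one" framing the study compares against a base explanation.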


Publications that have come out of this research agenda:

Kaufman, R. A., Kirsh, D. (2023). Explainable AI and Visual Reasoning: Insights from Radiology. Conference on Computer-Human Interaction (CHI) Human-Centered Explainable AI Workshop. PDF

Kaufman, R. A., Kirsh, D. (2022). Cognitive Differences in Human and AI Explanation. In Proceedings of the Annual Meeting of the Cognitive Science Society. PDF

Soltani, S., Kaufman, R. A., Pazzani, M. (2022). User-Centric Enhancements to Explainable AI Algorithms for Image Classification. In Proceedings of the Annual Meeting of the Cognitive Science Society. PDF

Pazzani, M., Soltani, S., Kaufman, R., Qian, S., & Hsiao, A. (2022). Expert-Informed, User-Centric Explanations for Machine Learning. Thirty-Sixth AAAI Conference on Artificial Intelligence. PDF