Expert-informed human-AI collaboration in Radiology
Problem: Current Explainable Artificial Intelligence (XAI) systems for diagnostic radiology are confusing and hard to trust. Further, different providers have different needs.
Solution: Develop the foundational knowledge needed to design XAI explanations that meet the needs of radiology stakeholders at varying levels of expertise.
Funding provided by DARPA via NRL.
XAI ‘explanations’ like these are inadequate and do not meet the needs of radiology stakeholders.
Study 1: Expert-Informed Explainable AI for Radiology
Why do AI explanations in radiology fail to gain human trust, even when the underlying models are remarkably accurate?
A key problem with relying on deep learning for diagnosis is that radiologists often do not know how the AI arrives at its conclusions, making those diagnoses hard to trust. Justifications need to be made in human terms and be understandable, in context, to all of the stakeholders who interact with the system.
We use a mix of experiments, ethnography, interviews, and surveys to uncover how radiology stakeholders at different levels of expertise communicate and justify their diagnoses.
We find that XAI explanations that mirror human processes of reasoning and justification may be more useful and trustworthy than traditional XAI explanations such as heat maps (a minimal sketch of a heat-map explanation follows below). By delineating these communication strategies, our research can inform XAI explanations for radiology that are sensitive to the knowledge, needs, and goals of radiology practitioners.
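The heat maps referenced above are pixel-attribution overlays: they mark where a classifier's confidence depends on the image. As a minimal, hedged illustration (not the method used in our studies), the sketch below computes an occlusion-based heat map; `predict_proba`, the patch size, and the stride are assumed placeholders for whatever classifier and image resolution are in use.

```python
# Minimal sketch of a traditional heat-map ("saliency") explanation via occlusion.
# `predict_proba` is an assumed placeholder: any image classifier that takes a batch
# (N, H, W, C) and returns class probabilities; it is not the model from our studies.
import numpy as np

def occlusion_heatmap(image, predict_proba, target_class, patch=16, stride=8):
    """Slide a gray patch over the image and record how much the target-class
    probability drops; larger drops mean the occluded region mattered more."""
    h, w = image.shape[:2]
    baseline = predict_proba(image[np.newaxis])[0, target_class]
    heat = np.zeros((h, w), dtype=float)
    counts = np.zeros((h, w), dtype=float)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()  # gray out one patch
            drop = baseline - predict_proba(occluded[np.newaxis])[0, target_class]
            heat[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(counts, 1)  # average overlapping contributions
```

A map like this shows where the model looked, but not why those regions justify a diagnosis, which is the gap that the human justification strategies uncovered in Study 1 aim to fill.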
Study 2: Enhancing Explainable AI for Image Classification
How can we use human-centered design approaches to improve explainable AI for image classification?
By producing AI-based explanations that are modeled after human experts, we can make AI systems more trustworthy and understandable. In several online experiments, we tested the efficacy of different XAI explanations based on cognitive theories of information transfer.
We find significant user experience improvements (trust, learning, and preference) when explanations are aligned with the information humans find most informative. Specifically, our novel ‘averaged and contrastive’ explanations outperformed traditional explanations (a rough sketch of the contrastive idea appears after the figure below). These insights can be used to improve interactions with complex AI systems regardless of expertise level.
Contrastive Local Interpretable Model-Agnostic Explanations (LIME) for bird identification.
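As a rough illustration of the contrastive idea shown in the figure, the sketch below uses the open-source `lime` package rather than the exact pipeline from our experiments; `classifier_fn` (a batch-of-images-to-probabilities function) and the uint8 image scaling are assumptions.

```python
# Hedged sketch: a contrastive LIME-style explanation for image classification.
# Assumes the open-source `lime` package; `classifier_fn` maps a batch of images
# (N, H, W, 3) to class probabilities and is a placeholder, not our study's model.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def contrastive_lime(image, classifier_fn, num_samples=1000, num_features=5):
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image, classifier_fn, top_labels=2, hide_color=0, num_samples=num_samples)
    predicted, runner_up = explanation.top_labels[:2]

    # Superpixels supporting the predicted class ("why this bird?")
    _, mask_pred = explanation.get_image_and_mask(
        predicted, positive_only=True, num_features=num_features, hide_rest=False)
    # Superpixels supporting the most confusable alternative ("why not that bird?")
    _, mask_alt = explanation.get_image_and_mask(
        runner_up, positive_only=True, num_features=num_features, hide_rest=False)

    # Contrastive evidence: regions that support the prediction but not the runner-up.
    contrast = np.logical_and(mask_pred.astype(bool), ~mask_alt.astype(bool))
    # Assumes a uint8 RGB image; rescale to [0, 1] before drawing region boundaries.
    return mark_boundaries(image / 255.0, contrast.astype(int)), predicted, runner_up
```

The key design choice is to show not just where evidence for the prediction lies, but which evidence distinguishes it from the class a viewer would most plausibly confuse it with.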
Publications that have come out of this research agenda:
Kaufman, R. A., Kirsh, D. (2023). Explainable AI and Visual Reasoning: Insights from Radiology. ACM Conference on Human Factors in Computing Systems (CHI), Human-Centered Explainable AI Workshop. PDF
Kaufman, R. A., Kirsh, D. (2022). Cognitive Differences in Human and AI Explanation. In Proceedings of the Annual Meeting of the Cognitive Science Society. PDF
Soltani, S., Kaufman, R. A., Pazzani, M. (2022). User-Centric Enhancements to Explainable AI Algorithms for Image Classification. In Proceedings of the Annual Meeting of the Cognitive Science Society. PDF
Pazzani, M., Soltani, S., Kaufman, R. A., Qian, S., Hsiao, A. (2022). Expert-Informed, User-Centric Explanations for Machine Learning. In Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence. PDF