Building trust in autonomous vehicles
Problem: Trust and adoption of AVs are limited by one-size-fits-all approaches to human-AI communication.
Solution: Improve human-autonomous vehicle interaction by tailoring communication to specific people, contexts, and goals.
For autonomous vehicles (“self-driving cars”) to engender appropriate levels of trust, they must be responsive to the pragmatic, informational, and emotional needs of their passengers.
Study 1: Predicting Trust From Personal Traits Using Machine Learning (CHI, 2025)
Can we use machine learning to predict a person’s trust in AVs, their concerns, and their adoption attitudes based on their traits and experiences?
In this study, we surveyed 1500+ people and built comprehensive profiles based on 130 distinct features. Then, we used machine learning and AI explainability (SHAP value regression) to understand the most important factors predicting a person’s trust in AVs.
Our models were highly accurate (90%+) and highlighted several factors that can inform the design of trustworthy, personalized vehicles and related policies.
Example factors used as input into the ML model.
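To make the approach concrete, below is a minimal sketch of how a trait-based trust predictor with SHAP explainability could be assembled. This is not the study’s actual pipeline: the data file, feature names, label, and model choice are illustrative assumptions.

```python
# Illustrative sketch only: the dataset, label column, and model choice are
# hypothetical stand-ins, not the study's actual materials or pipeline.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical survey table: one row per respondent, ~130 trait/experience
# features plus a binary label indicating high vs. low trust in AVs.
df = pd.read_csv("av_trust_survey.csv")
X = df.drop(columns=["trusts_avs"])
y = df["trusts_avs"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Fit a tree-based classifier on the trait/experience features.
model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Attribute predictions to individual features with SHAP, then rank features
# by mean absolute SHAP value to surface the strongest predictors of trust.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
importance = (
    pd.DataFrame(shap_values, columns=X.columns)
    .abs()
    .mean()
    .sort_values(ascending=False)
)
print(importance.head(10))
```

Ranking features by mean absolute SHAP value is one common way to summarize which respondent traits drive a model’s trust predictions; the study’s reported analysis may differ in model family and aggregation details.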
Custom-built VR Driving Simulator.
Study 2: Autonomous Vehicle Errors, Context, and Personalization (CHI, 2025)
What happens when AV communications contain errors? What is the effect of driving context?
We developed a custom, state-of-the-art driving simulator and tested how AV communication errors, driving context characteristics, and personal traits affect a person’s comfort relying on the AV, preference for control, confidence in the AV's ability, and explanation satisfaction.
Results emphasize the need for accurate, contextually adaptive, and personalized AV communications to foster trust, reliance, satisfaction, and confidence. We conclude with design, research, and deployment recommendations for trustworthy AV systems.
Study 3: Learning to Race From an AI Driving Coach - A Study of Information Content and Modality (Scientific Reports, 2024)
Can we build an AI driving coach that can teach novices to drive a race car?
Full-motion driving simulator at Toyota Research Institute in Los Altos, CA.
In a pre-post, mixed-methods experiment, we built an AI coach to teach race car driving to novices and then tested its effectiveness in a full-motion driving simulator. We compared four variations of the coach's communications to understand how different explanatory techniques affected our outcomes: driving performance, cognitive load, confidence, expertise, and trust. Through interviews and surveys, we gathered feedback and developed an understanding of participants’ learning processes.
Results show that AI coaching can effectively teach performance driving skills to novices. Designers of coaching communications should opt for efficient, modality-appropriate explanations that instruct without overwhelming. Differences in participant learning are attributed to how information directs attention, mitigates uncertainty, and influences the cognitive overload participants experience.
Theoretical Framework: Supporting Human-AV Communication with Cognitive Theory (available on arXiv)
We proposed a systems framework that integrates cognitive theories of joint action and situational awareness as a basis for tailoring AI communications to help people meet their goals. This framework can be used to design future AI communications that succeed with different people (personalization) and in different driving contexts (context-awareness).
High-level overview of the proposed human-AV system, based in theories of situational awareness (SA) and joint action.
Publications that have come out of this research agenda:
Kaufman, R., Lee, E., Bedmutha, M., Kirsh, D., Weibel, N. (2025). Predicting Trust In Autonomous Vehicles: Modeling Young Adult Psychosocial Traits, Risk-Benefit Attitudes, And Driving Factors With Machine Learning. CHI Conference on Human Factors in Computing Systems (CHI ’25). PDF
Kaufman, R., Broukhim, A., Kirsh, D., Weibel, N. (2025). What Did My Car Say? Impact of Autonomous Vehicle Explanation Errors and Driving Context On Comfort, Reliance, Satisfaction, and Driving Confidence. CHI Conference on Human Factors in Computing Systems (CHI ’25). PDF
Kaufman, R., Costa, J., Kimani, E. (2024). Effects of Multimodal Explanations for Autonomous Driving on Driving Performance, Cognitive Load, Expertise, Confidence, and Trust. Scientific Reports. PDF
Kaufman, R., Kirsh, D., Weibel, N. (2024). Developing Situational Awareness for Joint Action With Autonomous Vehicles. arXiv. PDF