
Designing for Trust: A Multi-Factor Investigation of Optometrists’ Perspectives on AI-Based Glaucoma Screening Systems

Advisor

Burns, Catherine

Publisher

University of Waterloo

Abstract

Although glaucoma screening AI models show strong performance, their integration into clinical practice remains limited. Clinicians often face barriers rooted in technological acceptance, with trust emerging as a key determinant of adoption. Prior research has emphasized explainability, but a broader exploration of the factors affecting trust is needed. This study investigates multiple factors shaping trust in AI and translates them into design requirements for next-generation glaucoma screening clinical decision support systems (CDSS). In a previous study, two real-world glaucoma patient cases, each comprising three visits at different times, were presented under both unimodal conditions (fundus images only) and multimodal conditions (fundus images, optical coherence tomography, visual fields, and medical history) through a mock interface simulating an AI-based glaucoma screening support system. During these simulated visits, nineteen licensed optometrists interacted with the system and participated in follow-up interviews, where they were asked whether they trusted the system and to explain their reasoning. The objective of this thesis is to identify the factors influencing optometrists’ trust in an AI-powered glaucoma screening tool and to propose design recommendations that can enhance trust in future iterations. The interview data were analyzed using Braun and Clarke’s thematic analysis approach. The themes that emerged indicate that trust in the AI system is shaped by multiple factors: (1) alignment with clinicians’ expectations of the AI’s role, as a flagging tool vs. a consultant; (2) completeness of information; (3) communication of performance metrics, including accuracy, sensitivity, confidence scores, perceived consistency, and perceived quality of training data; (4) clinical relevance of outputs (trends, actionable recommendations, differential diagnosis); (5) transparency in risk factor weighting, exclusions, and considered variables; (6) decision alignment between optometrists and the AI, assessed across decision inputs, identified risk factors, their relative importance, recommended actions, and the gradient of concordance in final decisions; (7) optimization of the AI for cautious screening so that it captures all potential cases; (8) interface usability supporting timely decisions; (9) users’ self-perceived expertise, occasionally leading to overreliance; (10) onboarding and training that highlighted the system’s features and limitations; and (11) increasing familiarity over time, which helped calibrate trust. Based on these findings, 17 design principles were proposed to guide the development of the next iteration of a trust-supportive interface for glaucoma screening decision support systems.

Description

Keywords

LC Subject Headings

Citation