Evaluating the quality of assessment in one Canadian ophthalmology residency program as an early adopter of CBME
Authors: Rachel Curtis, Christine Moon, Tessa Hanmore, Wilma Hopman, Stephanie Baxter.
Author Disclosure Block: R. Curtis: None. C. Moon: None. T. Hanmore: None. W. Hopman: None. S. Baxter: None.
Purpose: To investigate the quality of feedback in ophthalmology resident trainee assessments since the introduction of competency-based medical education (CBME) at Queen’s University. Specifically, this study explored the utility of the Quality of Assessment for Learning (QuAL) scoring method for evaluating narrative feedback. This scoring tool was applied to assessments before and after faculty development initiatives to determine which interventions improved feedback quality, and whether the quality of feedback improved over time. The QuAL score was also used to determine whether evaluator comments associated with the Canadian Ophthalmology Assessment Tool for Surgery (COATS) procedural assessment forms were of higher value than those on other Entrustable Professional Activity (EPA) forms.
Study Design: Retrospective cohort study.
Methods: Ophthalmology resident assessment data from July 2017 to December 2020 were retrieved from Elentra (Integrated Teaching and Learning Platform) and anonymized. Each assessment was organized by form type and stage-specific EPA. Written feedback was assigned a QuAL score out of 5 based on the previously validated rubric. All individual assessments were scored by an ophthalmology faculty member, and a randomized sample of 10% was independently rescored by an ophthalmology resident. Intra-class correlation coefficients (ICCs) were used to determine inter-rater agreement. A linear-by-linear association test was applied to assess QuAL score improvement over time. Independent-samples t-tests were used to compare feedback quality before versus after faculty development interventions, and COATS versus EPA forms.
Results: A total of 2617 individual assessments were graded. The ICC between the two independent graders was 0.90 (95% CI 0.88-0.92, p<0.001) for total QuAL scores. The linear-by-linear association test indicated a significant change over time (p<0.001). QuAL scores significantly increased after a targeted departmental grand rounds presentation aimed at improving coaching behaviors in feedback delivery, with pre- and post-intervention means of 1.64 and 2.61, respectively (p<0.001). COATS forms (n=483) demonstrated significantly higher QuAL scores than EPA forms (n=2134), with mean values of 3.48 vs. 2.26 (p<0.001).
Conclusions: This study demonstrates that the QuAL score is a reliable and useful tool for evaluating narrative feedback quality in the context of CBME, with the potential to assess the impact of specific faculty development initiatives. Procedural COATS forms with structured feedback prompts may be more effective in guiding evaluators to deliver meaningful comments on resident performance.