GP Synergy staff and collaborators are proud to have had a paper published recently in Medical Teacher (an international journal of education in the health sciences). The paper examines the content validity and internal consistency of the General Practice Registrar Competency Assessment Grid (GPR-CAG) used by GP Synergy in GP training.
Alison Fielding, Katie Mulquiney, Rosa Canalese, Amanda Tapley, Elizabeth Holliday, Jean Ball, Linda Klein & Parker Magin (2019): A general practice workplace-based assessment instrument: Content and construct validity, Medical Teacher, DOI: 10.1080/0142159X.2019.1670336
Why do we need workplace-based assessment tools in general practice training?
Workplace-based assessment (WBA) tools that are valid, reliable, useful, cost-effective, and feasible are essential for the provision of effective feedback to General Practice (GP) registrars for their reflection and learning.
Within GP training, registrars participate in five Clinical Teaching Visits (CTVs). A CTV is a workplace-based formative assessment of a GP registrar undertaken by a medical educator or experienced GP clinical teaching visitor. CTVs augment in-practice teaching and are unique to Australian general practice vocational training. Find out more about formative assessment of registrars.
What is the General Practice Registrar Competency Assessment Grid?
During a CTV, CTV visitors assess registrars on a number of general practice competencies. Each competency is recorded in the General Practice Registrar Competency Assessment Grid (GPR-CAG). The GPR-CAG allows the registrar’s achievement of individual competencies to be tracked over time. Competency items are assessed on a four-point scale: performing below expected level; working towards expected level; performing at expected level; and performing above expected level. A fifth option of ‘not assessed’ is also available.
Why have we assessed the GPR-CAG?
Relatively few GP workplace-based assessment instruments have been psychometrically evaluated. We designed and undertook a study to establish the content validity and internal consistency of the GPR-CAG.
How did we undertake our assessment?
Data collection was undertaken between 2014 and 2016 during routine CTVs for registrars undertaking their first term (GPT1) or second term (GPT2) of GP training. Our analysis included data from 555 GPT1 registrars and 537 GPT2 registrars.
What did we find in our assessment?
A four-factor, 16-item solution was identified for GPT1 competencies (Cronbach’s alpha range: 0.71–0.83) and a seven-factor, 27-item solution for GPT2 competencies (Cronbach’s alpha range: 0.63–0.84). The emergent factor structures were clinically characterisable and resonant with existing medical education competency frameworks.
Our study provides evidence for the content validity and internal consistency of the GPR-CAG. This is a valuable first step towards establishing its overall validity.
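For readers unfamiliar with the statistic, Cronbach’s alpha summarises how consistently a set of items measures the same underlying construct, with values closer to 1 indicating higher internal consistency. A minimal sketch in Python (not from the paper; the function name and toy ratings are illustrative only):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data (illustrative only): four respondents rating two items on a 1-4 scale.
ratings = [[1, 2], [2, 1], [3, 4], [4, 3]]
print(cronbach_alpha(ratings))  # → 0.75
```

As a sanity check, a matrix in which every item gives identical scores for each respondent yields an alpha of exactly 1.0.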
Where to next?
The study has already informed an evidence-based reduction in the number of items that CTV visitors complete in their assessment.
Continued research into the grid’s other psychometric properties and performance will further strengthen its validity evidence and utility. We are currently examining whether the ‘factors’ (groupings of items) identified capture clinically and educationally important characteristics, and investigating their predictive value.
Do you want to know more?
You can find the published paper in Medical Teacher. Alternatively, please contact the authors for further information: email@example.com or 1300 477.