Competency-Based Training in Robotic Surgery: Benchmark Scores for Virtual Reality Robotic Simulation.
Buffi N;
2017-01-01
Abstract
OBJECTIVES: To develop benchmark scores of competency for use within a competency-based virtual reality (VR) robotic training curriculum.

SUBJECTS AND METHODS: This longitudinal, observational study analysed results from nine EAU hands-on training courses in VR simulation. In total, 223 participants, ranging from novice to expert robotic surgeons, completed 1565 exercises. Competency was set at 75% of the mean expert score. Benchmark scores were calculated for all general performance metrics generated by the simulator. Assessment exercises were selected by expert consensus and through learning-curve analysis; three basic-skill and two advanced-skill exercises were identified.

RESULTS: Benchmark scores based on expert performance offered viable targets for novice and intermediate trainees in robotic surgery. Novice participants met the competency standards for most basic-skill exercises; the advanced exercises, however, were significantly more challenging. Intermediate participants performed better across the seven metrics but still fell short of the benchmark standard in the more difficult exercises.

CONCLUSION: Benchmark scores derived from expert performance offer relevant and challenging targets for trainees to achieve during VR simulation training. Objective feedback allows both participants and trainers to monitor educational progress and ensures that training remains effective. Furthermore, the well-defined goals set through benchmarking give trainees clear targets and enable training to move to a more efficient competency-based curriculum. This article is protected by copyright. All rights reserved.
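As a concrete illustration, the competency criterion described above is simple arithmetic: a benchmark equal to 75% of the mean expert score on a given metric, which a trainee must reach. The following is a minimal Python sketch, not the study's actual scoring code; it assumes higher simulator scores are better (for metrics where lower is better, such as task time or instrument path length, the comparison would be inverted), and all function names and numbers are hypothetical.

    import statistics

    def benchmark_score(expert_scores, proportion=0.75):
        # Benchmark = fixed proportion of the mean expert score
        # (the study uses 75%). Assumes higher scores are better.
        return proportion * statistics.mean(expert_scores)

    def meets_benchmark(trainee_score, expert_scores):
        # True if the trainee reaches the competency benchmark.
        return trainee_score >= benchmark_score(expert_scores)

    # Illustrative numbers only, not data from the study.
    experts = [88.0, 92.5, 85.0, 90.0]
    print(round(benchmark_score(experts), 2))  # 66.66
    print(meets_benchmark(70.0, experts))      # True

In practice, one such benchmark would be computed per simulator metric and per exercise, so a trainee's progress can be reported against a vector of thresholds rather than a single composite score.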