Education
A valid method of laparoscopic simulation training and competence assessment¹,²

https://doi.org/10.1016/S0022-4804(03)00315-9

Abstract

Background

The purpose of our study was to evaluate the construct validity of laparoscopic technical performance measures and the face validity of three laparoscopic simulations.

Materials and methods

Subjects (N = 27) of varying levels of surgical experience performed three laparoscopic simulations, representing appendectomy (LA), cholecystectomy (LC), and inguinal herniorrhaphy (LH). Five laparoscopic surgeons, blinded to the identity of the subjects, rated the subjects on procedural competence on a binary scale and in four skills categories on a 5-point scale: clinical judgment, dexterity, serial/simultaneous complexity, and spatial orientation. Using a task-specific checklist, non-clinical staff assessed the technical errors. The level of surgical experience was correlated with the ratings, the technical errors, and the time for each procedure. Subject responses to a survey regarding the utility of the inanimate models were evaluated.
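To make the assessment scheme concrete, the sketch below shows one way the per-simulation scores described above (a binary competence rating, four 5-point skill ratings, checklist errors, and time) might be recorded and aggregated per subject. The Assessment class, its field names, and the summarize helper are illustrative assumptions, not the study's actual data handling.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Assessment:
    """One rater's assessment of one subject on one simulation (hypothetical record)."""
    procedure: str                               # "LA", "LC", or "LH"
    competent: bool                              # binary procedural-competence rating
    skills: dict = field(default_factory=dict)   # 5-point ratings, e.g. {"dexterity": 4, ...}
    error_count: int = 0                         # technical errors from the task-specific checklist
    time_seconds: float = 0.0                    # time to complete the simulation

def summarize(assessments: list[Assessment]) -> dict:
    """Aggregate one subject's performance across the three simulations."""
    return {
        "mean_skill_rating": mean(r for a in assessments for r in a.skills.values()),
        "total_errors": sum(a.error_count for a in assessments),
        "total_time_seconds": sum(a.time_seconds for a in assessments),
        "proportion_rated_competent": mean(a.competent for a in assessments),
    }

# Illustrative usage with made-up numbers:
lc = Assessment("LC", True,
                {"clinical_judgment": 4, "dexterity": 3,
                 "serial_simultaneous_complexity": 4, "spatial_orientation": 4},
                error_count=2, time_seconds=410)
print(summarize([lc]))
```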

Results

Years of experience directly correlated with the skills ratings (all P < 0.001) and with the competence ratings across the three procedures (P < 0.01). Experience inversely correlated with the time for each procedure (P < 0.01) and the technical error total across the three models (P < 0.05). Nearly all subjects agreed that the corresponding procedures were well represented by the simulations (LA 96%, LC 96%, LH 100%).

Conclusion

The laparoscopic simulations demonstrated both face and construct validity. Regardless of the level of surgical experience, the subjects found the models to be suitable representations of actual laparoscopic procedures. Task speed improved with surgical experience. More importantly, the quality of performance increased with experience, as reflected in the higher skills ratings assigned by the expert laparoscopic surgeon evaluators.

Introduction

The rapid expansion of minimally invasive surgery (MIS) and the concordant adoption of technological advances have created both opportunities and challenges in surgical education. The introduction of laparoscopic surgery, with its inherent non-intuitive movements, new instrumentation, and two-dimensional visualization, has in part limited the applicability of the apprenticeship model of surgical education. Other forces putting pressure on this model include the financial and ethical issues of training residents in the operating room, a growing concern for patient safety and the reduction of medical errors, the shift in focus from process to outcome evidence of training success by the Accreditation Council for Graduate Medical Education (ACGME), and changing standards in patient care and resident training. While the traditional apprenticeship method of education is well established and will continue to be a mainstay of surgical training, it is evident that instruction in laparoscopic surgery should not begin in the operating room. Rather, as Cuschieri has outlined, endoscopic training should be integrated into residency programs in a step-wise fashion, with early instruction in a laparoscopic laboratory [1].

In response to the burgeoning growth of MIS, traditional methods of instruction have been questioned, leading to the use of innovative educational tools. Virtual reality simulations, with the potential to provide haptic feedback and automated assessment in a protected learning environment, represent a clear advancement in surgical education. Although in their infancy, such simulations have been partially validated and, in limited studies, have resulted in the transfer of skills to actual laparoscopic training tasks and to operating room performance [2–5]. As the fidelity, or degree of realism, of these systems improves, the role of virtual reality simulations as an educational tool will expand.

However, high-technology virtual reality simulations do not meet an important criterion for adoption into a surgical education program: the availability, ease of use, and cost-effectiveness of such systems currently cannot match those of inexpensive, inanimate, mechanical laparoscopic simulations. It comes as no surprise that these more accessible bench-training exercises (e.g., pegboard manipulation and suturing) have been more widely used in surgical residencies [6–8].

At the University of Kentucky, a feasible method of independent, objective assessment has been developed using three cost-effective, inanimate laparoscopic models. The assessment by faculty evaluators focuses on fundamental, global skills that have previously been identified as basic components of laparoscopic surgery [9]. The inter-rater reliability of this assessment program has been demonstrated, with measures generally exceeding accepted standards of reliability (intra-class correlation coefficients ranging from 0.74 to 0.89) [10,11]. In this study, we aim to establish the face validity and construct validity of performance measures derived from these simulations.
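For readers unfamiliar with the reliability statistic cited above, the sketch below computes a two-way random-effects, single-measure intraclass correlation, ICC(2,1), from a subjects-by-raters matrix using standard ANOVA mean squares. The choice of ICC form and the example data are assumptions made here for illustration; the paper does not specify which ICC variant was used.

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_subjects, n_raters) matrix of scores.
    """
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    ss_total = ((ratings - grand_mean) ** 2).sum()
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()   # between raters
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Illustrative only: 5 raters scoring 4 subjects on a 5-point skill scale.
example = np.array([
    [2, 3, 2, 2, 3],
    [3, 3, 4, 3, 3],
    [4, 4, 4, 5, 4],
    [5, 5, 4, 5, 5],
])
print(round(icc2_1(example), 2))
```

Coefficients in the 0.74–0.89 range reported above are generally interpreted as good to excellent agreement among raters.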

Section snippets

Materials and methods

The study was conducted with the approval of the Investigational Review Board at the University of Kentucky. Twenty-seven subjects (10 students, 10 PGY-1 to 2 surgery residents, 2 PGY-3 to 5 surgery residents, 5 laparoscopic surgeons) performed three laparoscopic simulations using cost-effective, inanimate models developed at the University of Kentucky Center for Minimally Invasive Surgery to represent the selected key segments of laparoscopic inguinal hernia repair (LH), laparoscopic …

Results

The subjects' perception of the realism of the models (face validity) was assessed by survey. Among the 25 subjects who responded completely to the survey (93% response rate), nearly all agreed that the corresponding procedures were well represented by the simulations (LA 96%, LC 96%, LH 100%). Two laparoscopic surgeons did not answer the questionnaire completely.

The level of experience was scaled according to level of training (1 = student, 2 = PGY-1 to 2, 3 = …
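Given this ordinal coding of experience, a rank-based statistic such as Spearman's rho is one natural way to relate experience level to the skills ratings, task times, and error totals. The sketch below is purely illustrative: the data are made up, levels 3 and 4 are assumed (from the subject groups listed in the Methods) to denote PGY-3 to 5 residents and laparoscopic surgeons, and the paper does not state which correlation statistic was actually used.

```python
import numpy as np
from scipy.stats import spearmanr

# Made-up data: ordinal experience level for ten hypothetical subjects
# (1 = student, 2 = PGY-1 to 2; levels 3 and 4 assumed to be PGY-3 to 5
# and laparoscopic surgeons) alongside illustrative performance summaries.
experience  = np.array([1, 1, 1, 2, 2, 2, 3, 3, 4, 4])
skill_mean  = np.array([2.1, 2.4, 2.0, 3.0, 3.2, 2.9, 3.8, 4.0, 4.6, 4.7])
time_min    = np.array([34, 31, 36, 27, 25, 28, 21, 20, 15, 14])
error_total = np.array([9, 8, 10, 6, 7, 6, 4, 3, 2, 1])

for name, values in [("skills rating", skill_mean),
                     ("total time (min)", time_min),
                     ("total errors", error_total)]:
    rho, p = spearmanr(experience, values)
    print(f"experience vs {name}: rho = {rho:+.2f}, p = {p:.4f}")
```

On data like these, the skills correlation comes out positive and the time and error correlations negative, mirroring the direct and inverse relationships reported in the abstract.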

Discussion

The success of surgical education, whether it focuses on students, residents, or practicing surgeons, depends on the commitment of a competent teacher to offer expertise in an unbiased manner in an environment conducive to learning. Another critical component of an effective educational program is evaluation, a necessary tool to provide feedback for the learner and to identify any areas of weakness. Such evaluation allows subsequent instruction to be tailored to the learner’s needs. Evaluations …

References (14)


Cited by (43)

  • Years of experience is more effective in defining experts in the gaze analysis of laparoscopic suturing task than task duration

    2021, Applied Ergonomics
    Citation Excerpt:

    For example, time trial training has been implemented assuming that time reduction leads to improved technique (Chikazawa et al., 2018). In other laparoscopic simulation training, the combination of task duration and error score is considered as one of the objective performance assessment parameters (Adrales et al., 2003; Grantcharov et al., 2004; MacDonald et al., 2003; McNatt and Smith, 2001; Smith et al., 2001; Vassiliou et al., 2006). Furthermore, in clinical practice, task duration has been adopted as an assessment parameter because shorter operative times are associated with fewer complications (Ieiri et al., 2013; Jackson et al., 2011).

  • Objective assessment of obstetrics residents’ surgical skills in caesarean: Development and evaluation of a specific rating scale

    2021, Journal of Gynecology Obstetrics and Human Reproduction
    Citation Excerpt:

    Colthart et al. gave an operational definition to self-assessment as being a “personal evaluation of one’s characteristics and professional abilities in relation to perceived standards” [37]. Several authors demonstrated high validity and reliability of the OSATS scale [15,35,38,39]. The correlation between self and hetero-assessment was high for the global score, but the ICC was between 0.5 and 0.75 for the technical and behaviour scores.

¹ Presented at the 36th Annual Meeting of the Association for Academic Surgery, Boston, MA, November 7–9, 2002.

² Supported in part by an educational grant from Tyco/U.S. Surgical Corporation.
