Article Category: Research Article
Online Publication Date: 01 Nov 2009

Athletic Training Approved Clinical Instructors' Reports of Real-Time Opportunities for Evaluating Clinical Proficiencies

EdD ATC,
PhD ATC FNATA, and
PhD ATC
Page Range: 630 – 638
DOI: 10.4085/1062-6050-44.6.630

Abstract

Context:

Appropriate methods for evaluating clinical proficiencies are essential to ensuring entry-level competence in athletic training.

Objective:

To identify the methods Approved Clinical Instructors (ACIs) use to evaluate student performance of clinical proficiencies.

Design:

Cross-sectional design.

Setting:

Public and private institutions in National Athletic Trainers' Association (NATA) District 4.

Patients or Other Participants:

Approved Clinical Instructors from accredited athletic training education programs in the Great Lakes Athletic Trainers' Association, which is NATA District 4 (N  =  135).

Data Collection and Analysis:

Participants completed a previously validated survey instrument, the Methods of Clinical Proficiency Evaluation in Athletic Training, which consisted of 15 items, including demographic characteristics of the respondents and Likert-scale items (1  =  strongly disagree to 5  =  strongly agree) regarding methods of clinical proficiency evaluation, barriers, educational content areas, and clinical experience settings. We used analyses of variance and 2-tailed, independent-samples t tests to assess differences across ACI demographic characteristics in the methods, barriers, educational content areas, settings, and opportunities for feedback regarding clinical proficiency evaluation. Qualitative analysis of respondents' comments was also completed.

Results:

The ACIs (n  =  106 of 133 respondents, 79.7%) most often used simulations to evaluate clinical proficiencies. Only 59 (55.1%) of the 107 ACIs responding to a follow-up question reported that they felt students engaged in a sufficient number of real-time evaluations to prepare them for entry-level practice. An independent-samples t test revealed that no particular clinical experience setting provided more opportunities than another for real-time evaluations (t(119) range, −0.909 to 1.796, P ≥ .05). The occurrence of injuries not coinciding with the clinical proficiency evaluation timetable (4.00 ± 0.832) was a barrier to real-time evaluations. Respondents' comments indicated much interest in opportunities and barriers regarding real-time clinical proficiency evaluations.

Conclusions:

Most clinical proficiencies are evaluated via simulations. The ACIs should maximize real-time situations to evaluate students' clinical proficiencies whenever feasible. Athletic training education program administrators should develop alternative methods of clinical proficiency evaluations.

Clinical education is a critical component of athletic training education.1 These experiences help students acquire, develop, and master the clinical proficiencies of entry-level practice.2 The clinical proficiencies delineate specific clinical skills expected of a student before entering the profession and guide decision making and skill integration.3 Becoming clinically proficient must represent a major focus of the athletic training student's clinical experience.4 Current standards for the accreditation of an athletic training education program (ATEP) clearly indicate that Approved Clinical Instructors (ACIs) now have more accountability in the teaching, documentation, and evaluation of clinical proficiencies than in previous years.4

Accurately assessing students' clinical skills is a key issue for health profession educators. Student clinicians must perform the skills correctly and safely on real patients before they can begin entry-level practice, where consumer expectations are high.5 Performance-based assessment is intimately linked to professional practice; the performance being assessed must reflect the real practice of the profession.6 Therefore, athletic training clinical proficiencies also should be a measure of real-life application.3 In athletic training education, these evaluations also provide the ACI with the information necessary to design additional quality learning experiences and to modify existing ones.7

A recent survey aimed at administrators (eg, program directors, clinical education coordinators) of ATEPs revealed that simulations were the most prevalent method of clinical proficiency evaluation.8 These simulations may have questionable quality for real-life application. However, ACIs may actually use other methods of clinical proficiency evaluation (eg, real time) more often than the program administrators indicated. With this in mind, we designed a study to follow up the study by Walker et al.8 The purpose of our follow-up study was to identify the various methods that ACIs use to evaluate athletic training students' clinical proficiencies. In particular, this included identifying which content areas and clinical education settings tend to be conducive to real-time evaluation of clinical proficiencies and exploring barriers to such evaluation. We hoped that the results of this study would assist ATEP program directors, clinical education coordinators, and ACIs in developing better strategies for evaluating students' clinical proficiencies.

METHODS

Participants

We invited (via telephone and e-mail) the program directors of all ATEPs accredited as of April 2007 in the Great Lakes Athletic Trainers' Association, which is National Athletic Trainers' Association District 4, excluding our institution (n  =  81), to participate in this study. They distributed the Methods of Clinical Proficiency Evaluation in Athletic Training (MCPEAT) survey to all of the athletic training ACIs in their ATEPs. A total of 135 ACIs from 44 ATEPs completed the MCPEAT survey, but not all participating ACIs responded to all questions. Most respondents were from the college or university setting (n  =  71, 53.0%) and represented all levels of the National Collegiate Athletic Association (NCAA) and the National Association of Intercollegiate Athletics (NAIA). Many of the other respondents represented the corporate/industrial (n  =  26, 19.4%) and secondary school (n  =  17, 12.7%) settings. Experience as an ACI at the current institution ranged from 1 year (n  =  24, 17.8%) to 5 or more years (n  =  63, 46.7%), and total experience as an ACI or a clinical instructor (CI) ranged from 1 to 2 years (n  =  31, 23.3%) to more than 20 years (n  =  1, 0.8%). Respondent demographics are presented in Table 1.

Table 1 Approved Clinical Instructor Demographics

Procedures

The institutional review board approved the study. Each program director received the following materials either electronically or in hard copy, based on his or her preference: a cover letter providing instructions and the purpose of the study, MCPEAT surveys for all ACIs, and a postage-paid return envelope for all surveys (if applicable). The MCPEAT survey was distributed by the program director to all ACIs associated with his or her ATEP, and informed consent was implied upon completion and return of the survey. The surveys were coded to track participating institutions, and a reminder e-mail was sent to each program director at the beginning of the week in which the surveys were to be returned. We followed up with e-mails and telephone calls to nonrespondent institutions for 2 more weeks. All principal investigators were blinded to who had returned completed surveys. All data entry, coding, and follow-up e-mails and telephone calls were completed by someone who was not directly associated with this investigation.

Instrumentation

The reliability and validity of the MCPEAT instrument used in this follow-up study have been established.8 The 15-item MCPEAT survey consisted of 6 items covering demographic characteristics of the respondent (eg, primary title, years as an ACI) and 3 items covering common clinical proficiency evaluation methods, including definitions (eg, real time, simulation, standardized patient [SP]) (Table 2). In addition, 4 Likert-scale items (1  =  strongly disagree to 5  =  strongly agree) assessed respondents' perceptions regarding opportunities for real-time clinical proficiency evaluations in various clinical education settings (eg, collegiate athletic competition, corporate/industrial setting, high school athletic practice) relative to the educational content areas3 (eg, Risk Management and Injury Prevention, Pharmacology, Conditioning and Rehabilitative Exercise) and barriers to real-time clinical proficiency evaluation (eg, inadequate volume of injuries, insufficient number of ACIs, patient health care is often a priority). Note that at the time this study was conducted, ATEPs were using the third edition of the Athletic Training Educational Competencies.9

Table 2 Definitions for the Methods of Clinical Proficiency Evaluation in Athletic Training Survey

Respondents were invited to address 2 open-ended items: (1) Do you feel that your students engage in a sufficient number of real-time clinical evaluations to adequately prepare them as entry-level ATs? and (2) List other barriers that may hinder real-time evaluation in your ATEP.

Data Analysis

Descriptive statistics were computed for all items on the MCPEAT survey. Analyses of variance (ANOVAs) and 2-tailed, independent-samples t tests were used to analyze differences among ACI demographic characteristics and the methods, settings, and opportunities for feedback regarding clinical proficiency evaluation. The α level was set a priori at .05. Bonferroni corrections were used for multiple comparisons. The minimum target sample size of respondents was 30, which yielded a power of 0.92 for detecting a large effect. Sample sizes of 25 and 20 yielded powers of 0.86 and 0.76, respectively. Data analysis was performed using SPSS (version 13.0; SPSS Inc, Chicago, IL).
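The power figures above can be sketched numerically. The article does not state which statistical test, effect-size convention, or software produced its power values, so the function below is an illustrative assumption only: it approximates the power of a 2-tailed, independent-samples t test for a large effect (Cohen d = 0.8, by convention) using a normal approximation to the noncentral t distribution, and its results will not exactly match the published figures.

```python
from math import sqrt
from statistics import NormalDist

def ttest_power(d, n_per_group, alpha=0.05):
    """Approximate power of a 2-tailed, independent-samples t test.

    d            Cohen effect size (0.8 is "large" by convention)
    n_per_group  number of participants in each of the two groups
    alpha        2-tailed significance level
    """
    nd = NormalDist()
    # Noncentrality parameter: d * sqrt(n1*n2 / (n1+n2)) with n1 = n2
    ncp = d * sqrt(n_per_group / 2)
    z_crit = nd.inv_cdf(1 - alpha / 2)  # 2-tailed critical value
    # Probability that the test statistic falls in either rejection region
    return nd.cdf(ncp - z_crit) + nd.cdf(-ncp - z_crit)

# Power rises with sample size, as the article's 20/25/30 comparison shows
for n in (20, 25, 30):
    print(n, round(ttest_power(0.8, n), 2))
```

Exact values require the noncentral t (or F) distribution itself, as implemented in dedicated power software; the normal approximation here slightly understates power at small samples.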

Although this was not a qualitative study, we had a sufficient number of comments to warrant qualitative analysis of the 2 open-ended survey items. All qualitative data were analyzed using interpretative coding.10 This process involved taking each comment (coding) and developing categories of concepts, which focused on respondents' perspectives, issues, and concerns. We organized the concept categories into themes using pattern analysis10 and assigned labels to capture their meanings. Three analysts evaluated the data to ensure trustworthiness and accurate interpretation.

RESULTS

Of 133 responding ACIs, 114 (85.7%) reported that the ATEP first required evaluation of clinical proficiencies in a controlled classroom or laboratory setting. In addition, 57.8% of 135 responding ACIs (n  =  78) reported that their students were required to have these same clinical proficiencies reevaluated during clinical experiences within the same semester (11.6% within a week and 5.8% within a month of the initial evaluation), whereas 17.4% of 121 responding ACIs (n  =  21) reported that the proficiencies had to be reevaluated by the end of the next semester. Furthermore, 22 ACIs reported that clinical proficiencies had to be reevaluated by the end of the last semester of the professional phase of the ATEP or indicated other timetables.

Descriptive statistics for the frequency of the simulated, real-time, and SP methods of clinical proficiency evaluation reported by ACIs are presented in Table 3. Of 133 responding ACIs, 106 (79.7%) reported that they most often completed simulated clinical proficiency evaluations. Of the 116 ACIs who responded to the follow-up question, 62 (53.4%) reported that these simulated evaluations were used more than half the time, and 102 (87.9%) specifically used simulations in which students integrate knowledge and skills to solve clinical problems.

Table 3 Methods of Clinical Proficiency Evaluations

Most of the 133 responding ACIs (n  =  99, 74.4%) also reported conducting real-time clinical proficiency evaluations. However, only 39 (36.4%) of the 107 ACIs responding to the follow-up question evaluated more than 50% of clinical proficiencies in real time; thus, other methods were used more than half the time to evaluate clinical proficiencies. In addition, 59 (55.1%) of these 107 ACIs felt that students engaged in a sufficient number of real-time clinical proficiency evaluations to prepare them for entry-level practice. A 1-way ANOVA revealed no differences among years of experience as a CI or an ACI in the frequency of real-time clinical proficiency evaluations (F(3,123) range, 0.008 to 1.245, P ≥ .05).

Two themes with 3 subthemes each emerged from the representative comments provided for the open-ended items about whether ACIs felt their students engaged in a sufficient number of real-time clinical experiences (Figure 1). Theme 1, “Opportunities for real-time evaluations,” described how students regularly engaged in real-time clinical proficiency evaluations. The first subtheme, need for more opportunities, described how students were completing real-time proficiency evaluations but needed more real-time encounters to adequately prepare for entry-level practice. The second subtheme, supporting clinical setting characteristics, demonstrated that certain characteristics of the clinical setting increased opportunities for real-time evaluations. The third subtheme, contrary student characteristics, described that, although opportunities for real-time evaluations existed, certain characteristics of the student precluded real-time clinical proficiency evaluation.

Figure 1 Conceptual framework of qualitative data: student engagement in real-time clinical proficiency evaluations. AT indicates athletic trainer.

Citation: Journal of Athletic Training 44, 6; 10.4085/1062-6050-44.6.630

Theme 2, “Insufficient opportunities for real-time evaluations,” described how students did not regularly engage in real-time clinical proficiency evaluations. The first subtheme, insufficient occasions for real-time evaluations, addressed the lack of opportunities for real-time evaluations. The second subtheme, insufficient time, described that time constraints of students and ACIs prevented real-time proficiency evaluations. The third subtheme, simulated proficiency evaluations, demonstrated that simulated proficiency evaluations became the predominant method for evaluating clinical proficiencies.

Regarding other methods of clinical proficiency evaluations, 62 (46.6%) of the 133 respondents reported that they used SPs to evaluate clinical proficiencies. Of the 64 ACIs responding to the follow-up question, 23 (35.9%) used this method to conduct clinical proficiency evaluations more than 50% of the time.

The ACIs reported that sufficient opportunities existed to provide feedback during and after real-time, simulated, and SP clinical proficiency evaluations. Two-tailed, independent-samples t tests revealed no differences by ACI demographic characteristics (eg, years of experience as an ACI; t(109) range, −1.638 to 1.973, P ≥ .05) or by the athletics affiliation of the institution (eg, NCAA Division I, NAIA; t(114) range, 0.25 to 2.073, P ≥ .05) in opportunities to provide meaningful feedback during or after clinical proficiency evaluations.

Educational Content Areas and Clinical Proficiency Evaluations

Descriptive statistics for ACIs' perceptions of whether sufficient opportunities existed for real-time clinical proficiency evaluations in the 12 educational content areas are presented in Table 4. The Orthopedic Clinical Examination and Diagnosis (4.36 ± 0.847), Therapeutic Modalities (4.30 ± 0.769), Conditioning and Rehabilitative Exercise (4.29 ± 0.812), and Acute Care of Injuries and Illnesses (4.28 ± 0.918) educational content areas scored the highest, with more than 90% of ACIs agreeing or strongly agreeing that sufficient opportunities existed in these content areas for real-time clinical proficiency evaluations. The Nutritional Aspects of Injuries and Illnesses (3.07 ± 0.970), Pharmacology (3.02 ± 0.972), and Psychosocial Intervention and Referral (2.76 ± 1.025) educational content areas scored the lowest, with approximately 30% of ACIs disagreeing or strongly disagreeing that sufficient opportunities existed in these content areas for real-time clinical proficiency evaluations.

Table 4 Educational Content Areas That Provide Sufficient Opportunity for Real-Time Clinical Proficiency Evaluations

Clinical Experience Settings and Clinical Proficiency Evaluation

Descriptive statistics for ACIs' perceptions of clinical experience settings and their abilities to provide sufficient opportunities for real-time clinical proficiency evaluations are presented in Table 5. The collegiate or high school athletic training room (4.34 ± 0.819), high school athletic practice (4.06 ± 0.908), and collegiate athletic practice (4.01 ± 1.015) scored the highest, with more than 75% of ACIs agreeing or strongly agreeing that these settings provided sufficient opportunities for real-time clinical proficiency evaluations.

Table 5 Clinical Experience Settings That Provide Sufficient Opportunity for Real-Time Clinical Proficiency Evaluations

A 2-tailed, independent-samples t test revealed no differences among the various clinical experience settings in respondents' opinions regarding their abilities to provide more opportunities for real-time clinical proficiency evaluations (t(119) range, −0.808 to 2.959, P ≥ .05). A 1-way ANOVA indicated that ACIs from NCAA Division II and NAIA institutions perceived that more opportunities existed for real-time evaluations at their collegiate athletic practices (F(3,105)  =  2.979, P  =  .044) and at their collegiate athletic competitions (F(3,104)  =  9.335, P < .001) than at their other clinical education settings.

Barriers to Real-Time Clinical Proficiency Evaluations

Descriptive statistics for ACIs' levels of agreement regarding barriers to real-time clinical proficiency evaluations are presented in Table 6. Most ACIs (n  =  104, 79.4%) either agreed or strongly agreed that a barrier to real-time clinical proficiency evaluation was that the actual occurrence of an injury or condition does not conveniently coincide with the evaluation timetable established for a particular clinical proficiency. In addition, 78.6% (n  =  103) of the ACIs agreed or strongly agreed that an inadequate volume of injuries or conditions was a barrier to real-time evaluation. Some ACIs (n  =  34, 26.2%) agreed or strongly agreed that a coach or administrator who provided minimal support for clinical education was a barrier to real-time evaluation. Two-tailed, independent-samples t tests revealed no differences by ACI demographic characteristics (eg, years of experience as an ACI; t(119) range, −0.327 to 2.028, P ≥ .05) or by athletics affiliation (t(114) range, 0.240 to 2.035, P ≥ .05) in barriers to real-time clinical proficiency evaluations.

Table 6 Barriers to Real-Time Clinical Proficiency Evaluation

The ACIs were also instructed to comment about other barriers they believed hindered real-time evaluations of clinical proficiencies in their ATEPs. Three themes, 1 of which included 2 subthemes, emerged from the representative comments (Figure 2). Theme 1, “Contrary student characteristics,” described how certain characteristics of the students (eg, lack of motivation to complete real-time proficiencies, lack of self-confidence to complete real-time proficiencies) were barriers to the real-time evaluations of clinical proficiencies. Theme 2, “Insufficient time,” described time as a barrier to the real-time evaluation of clinical proficiencies and included 2 subthemes. The first subtheme, insufficient ACI time, described how some ACIs lacked adequate time to dedicate to student evaluation. The second subtheme, insufficient student time, demonstrated that ACIs felt students were involved in more activities than ever before and spent less time in clinical education. Theme 3, “Contrary clinical setting characteristics,” described that certain characteristics of the clinical setting (eg, new clinical setting, large volume of students) made real-time evaluations difficult.

Figure 2 Conceptual framework of qualitative data: barriers to real-time clinical proficiency evaluation. ACI indicates Approved Clinical Instructor; AT, athletic trainer.


DISCUSSION

Methods of Clinical Proficiency Evaluation

It is important to appreciate the value of real-time patient encounters and to make such encounters as available as possible for our students. Ideally, clinical education provides opportunities for practicing and applying skills on real patients in real situations rather than on fellow students.11 Students' confidence regarding their clinical abilities and mastery of clinical practice is enhanced through real-time encounters with patients.12 Clinical reasoning is also enhanced by appropriate organization of knowledge. Problem-solving ability cannot simply be transferred from one clinical problem to another. Rather, clinical reasoning is context dependent, that is, specific to a presenting situation.13 Therefore, although an athletic training student may be becoming an expert in one kind of clinical situation, he or she may remain a novice in unfamiliar situations.14 Clinical reasoning incorporates both knowledge and cognitive processes. Organization of knowledge is crucial because, although we are able to hold only a limited number of units or chunks of information in immediate memory, the amount of information can be increased by incorporating it into larger chunks.15 Clinical reasoning requires considering many pieces of information about a clinical situation that are organized for efficient recall and use. Students can better develop their clinical expertise if they are assisted in learning and experiencing information in a way that parallels the way in which that information will be used and retrieved in the future. For example, students are best able to recall and use information about clinical evaluation and diagnosis if they learn and experience real-time situations in which they perform evaluations in clinical practice.

Consequently, real-time clinical evaluation is valued as a hallmark process for professional growth,12 because these evaluations are performed in unpredictable environments while students are actively engaged in clinical experiences.12 Clinical evaluation should include the observation of a student's performance of clinical skills and behaviors that are expected in professional practice.12 Students should also have ample opportunities to apply theory to clinical practice, including critical-thinking and decision-making processes,12 and evaluation of athletic training clinical proficiencies should include these theoretical applications (eg, decision making, critical thinking, and skill integration).3 When evaluating students' clinical proficiencies, ACIs need to use methods that allow students to make clinical decisions that depend on identifying and understanding the clinical situation.12 If students are solely focused on themselves (eg, getting a clinical proficiency approved), they are not able to focus on the patient or the clinical situation and understand the importance of the patient's needs.16

Real-time clinical proficiency evaluations are important because they allow students to make decisions based on the clinical situation at hand. For this study, we defined real-time clinical proficiency evaluation as the ACI's evaluation of a student's clinical skills as demonstrated on an actual patient or athlete. Although most ACIs (74.4%) evaluated clinical proficiencies in real time, only 36.4% used real-time evaluations more than half the time, indicating that most ACIs rely primarily on other evaluation methods. Almost half of the ACIs (44%) noted that they would prefer more real-time clinical proficiency evaluations to prepare their students for entry-level practice. These findings are consistent with related research in which ATEP administrators reported that real-time clinical proficiency evaluations are used but that more real-time evaluations would be preferable.8

The comments provided by ACIs in our study were both similar to and different from comments provided by ATEP administrators in related research8 regarding reasons that students may not engage in a sufficient number of real-time clinical proficiency evaluations. Both ACIs and ATEP administrators8 commented that insufficient opportunities exist for real-time clinical proficiency evaluations and that more real-time encounters would better prepare students for entry-level practice. Whereas some ACIs and ATEP administrators8 reported that students engaged in a sufficient number of real-time clinical proficiency evaluations, many commented that more opportunities for real-time clinical proficiency evaluations are still needed. The ATEP administrators commented that ACI role strain and insufficient opportunities for real-time evaluations were primary factors for why students do not engage in a sufficient number of real-time clinical proficiency evaluations.8 However, the ACIs commented that student characteristics (eg, student relying too heavily on ACIs, student choosing not to use his or her time wisely) and overreliance on simulated clinical proficiency evaluations often were factors preventing real-time evaluations.

However, sufficient overall opportunities for real-time clinical proficiency evaluations are unlikely to occur because injuries and illnesses do not always happen at the “right” place and at the “right” time. This supports the need for standardized and authentic alternative methods of clinical proficiency evaluation to help ensure that athletic training students are adequately prepared for entry-level practice. Simulations attempt to provide opportunities for structuring and applying knowledge in a context-specific way.

Although clinical assignments are intended to provide students with a range of experiences, certain opportunities for learning do not occur for all students. Gaps in learning are a concern because of the importance of prototypical cases for clinical reasoning. Proficient and expert clinicians have developed clusters of prototypical cases that they use in making judgments about particular clinical problems.17 Simulations could be used to supplement and generate experiences that students would not otherwise have and to function as prototypical cases.

We identified the use of simulations as an alternative method of clinical proficiency evaluation. A simulation was defined as a scenario or clinical situation in which a student evaluates a mock patient or athlete who portrays a mock injury or pathologic condition (eg, shoulder pain, acute cervical spine injury). The mock patient or athlete is an individual (typically a student peer or ACI) who has had no training to portray the injury or condition in a standardized and consistent fashion. Most ACIs (79.7%) reported using simulations to evaluate clinical proficiencies. Furthermore, more than half of the ACIs used simulations more than 50% of the time to evaluate clinical proficiencies. In previous related research with ATEP administrators, the researchers8 also found that simulated proficiency evaluations were the predominant (95%) method for evaluating athletic training students' clinical proficiencies. Simulated clinical proficiency evaluations should not be limited to checklists, which risk restricting assessment to psychomotor skills and omitting important student communication skills and professional behaviors.12 The checklist of psychomotor skills should be complemented by a more complex assessment of clinical reasoning or problem solving, which are important components of clinical practice.12 If simulations are poorly designed and implemented, these higher-level cognitive skills can easily be overlooked or excluded entirely, leaving the evaluation focused solely on psychomotor skills.

An SP encounter is different from a simulation because a case must be carefully developed and the SP must be trained to accurately and consistently portray that case. A case template or uniform document is used most often in medical schools to develop the cases an SP will portray (eg, migraine headache due to domestic violence; hypertension; or receipt of bad news, such as a cancer diagnosis).18 Each SP case, optimally derived from a real-life condition, is developed by a team of individuals (eg, physician, faculty member, SP trainer). When the case is developed, ideally an SP who fits the age, sex, and physical characteristics needed for the case is recruited. That person participates in individual or group training with an SP trainer, who is an individual experienced or trained to work with SPs. This SP and SP trainer review a script or written document, which explains the case and how the SP should answer certain questions (eg, Have you had this condition before? Are you married?). Any physical findings, such as pain, fear, or anxiety, that need to be portrayed are practiced. If the SP is also going to evaluate the student (eg, Did the student palpate the abdomen? Did the student ask your name?), then proper procedures for completing the written evaluation are also included in the training.

The medical literature provides substantial evidence that SPs are widely accepted for assessing the clinical competence and performance of medical students.15,17 In a recent literature review, Boulet et al19 commented on the realism of SP encounters. Based on research in which SPs were sent into physicians' offices unannounced, the authors concluded that well-trained SPs are difficult to differentiate from real patients.19 During the past 30 years, SPs have been used in medical education to evaluate (and teach) students' clinical skills.18,20 Their use allows students to experience a variety of clinical situations accurately and realistically before practicing on actual patients. Researchers in other allied health care professions, such as nursing and physical therapy, are beginning to study the effect of SPs in their professional preparation programs.

We identified the use of SPs for clinical proficiency evaluations. An SP was defined as an individual who has undergone special training to portray an injury or condition formally and consistently to multiple students. Nearly half (46.6%) of the ACIs reported using SPs. Of those, more than one-third (35.9%) reported using them more than 50% of the time. Consistent with the ACIs, ATEP administrators in previous related research also reported that SPs were used to evaluate clinical proficiencies.8 Given the apparent resemblance of SPs to simulations, we suspect that, although definitions were provided for both evaluation methods on the MCPEAT survey instrument, the ACIs and ATEP administrators confused these 2 different evaluation methods.8 Athletic training educators apparently needed more explanation of the differences between SPs and simulations; we hope that our elaboration on SPs provides clarification.

Educational Content Areas and Clinical Proficiency Evaluations

Similar to ATEP administrators in related research, ACIs in our study reported that the educational content areas of Orthopedic Clinical Examination and Diagnosis and Therapeutic Modalities provided the most sufficient opportunities for real-time clinical proficiency evaluation.8 Interestingly, ACIs reported that the educational content areas of Conditioning and Rehabilitative Exercise and Acute Care of Injuries and Illnesses, respectively, provided the next most sufficient opportunities for real-time evaluations; however, ATEP administrators ranked the Acute Care of Injuries and Illnesses educational content area as the next most sufficient opportunity.8 Both ACIs and ATEP administrators8 believed that the Risk Management and Injury Prevention educational content area provided sufficient opportunities for real-time clinical proficiency evaluations.

Clinical Education Settings and Clinical Proficiency Evaluations

Students' perceptions regarding positive clinical education experiences may be shaped by the clinical environment in which they are placed.12 This may suggest that students would have better clinical experiences if they were placed in clinical education settings that could provide more opportunities for real-time clinical proficiency evaluations. The ACIs in our study reported that collegiate or high school athletic training rooms, high school athletic practices, and collegiate athletic practices provided sufficient opportunities for real-time clinical proficiency evaluations. Similarly, ATEP administrators in a related study reported that these same clinical settings provided sufficient opportunities for real-time clinical proficiency evaluations.8 Clinical education settings that allow students to practice skills in real time foster improved confidence in the student's abilities and mastery of the clinical proficiencies.12

Barriers to Real-Time Clinical Proficiency Evaluations

In our study, several barriers appeared to hinder real-time clinical proficiency evaluation. As ATEP administrators indicated in related research,8 the ACIs in our study indicated that injuries do not necessarily coincide with the timetable for evaluation of related clinical proficiencies and that an inadequate volume of injuries and conditions represents the most prominent barrier to real-time clinical proficiency evaluations. More than 40% of the ACIs (42%) disagreed or strongly disagreed that insufficient numbers of ACIs were available to spend adequate time with students who needed to complete clinical proficiency evaluations. This indicates that, although a sufficient number of ACIs appear to be available to some, the timely occurrence of an injury or condition continues to be a barrier to real-time clinical proficiency evaluation regardless of how many ACIs are available.

The comments provided by the ACIs regarding barriers in our study were both similar to and different from the comments provided by ATEP administrators in related research.8 The ATEP administrators perceived that ACIs were strained and unwilling to complete, or uninterested in completing, real-time clinical proficiency evaluations.8 These administrators also perceived that the clinical education setting (particularly intercollegiate athletics) could pose barriers to real-time evaluations. However, the ACIs in our study reported that, although the time associated with real-time clinical proficiency evaluations could be a barrier, student and clinical setting characteristics were more prominent barriers. The ACIs reported that student characteristics (eg, inadequate initiative, skills, or confidence to complete a real-time evaluation; lack of motivation; busy schedules) often interfered with real-time clinical proficiency evaluations. In research with nursing students, Radwin21 also indicated that students sometimes feel incapable of performing, or hesitant to perform, a particular clinical skill. Regarding characteristics of the clinical setting, ACIs noted that too many students in the clinical setting at one time can diminish the opportunities to perform real-time clinical proficiency evaluations. In related research in respiratory therapy, Cullen12 identified that clinical experience settings that do not supply quality clinical instruction, such as by failing to provide sufficient feedback, are detrimental to student learning.

Recommendations for Clinical Proficiency Evaluations

Reducing the use of simulated clinical proficiency evaluations in athletic training would aid in developing proficient and sensitive practitioners for the profession. The ACIs should use real-time situations to evaluate students' clinical proficiencies whenever feasible. Because real-time evaluations occur more often in collegiate and high school athletic training rooms and athletic practices than in other settings, clinical placements should favor these settings. Recognizing that real-time evaluation is not always feasible, ATEP administrators should develop valid and reliable alternative methods of clinical proficiency evaluation (eg, SPs). We recommend starting with content areas (including on-the-field situations) in which real-time opportunities for certain clinical proficiency evaluations may be more limited (eg, Nutritional Aspects of Injuries and Illnesses, Pharmacology, and Psychosocial Intervention and Referral). We also recommend exploring the literature regarding SPs; joining the Association for Standardized Patient Educators; and attending the “Training and Using Standardized Patients for Teaching and Assessment” workshop that the Southern Illinois University Medical School, Springfield, Illinois, offers annually.

CONCLUSIONS

We investigated the methods that ACIs use to evaluate athletic training students' clinical proficiencies, identified the content areas and clinical education settings that tend to be conducive to real-time evaluation of clinical proficiencies, and explored barriers to real-time evaluation. We found that most ACIs use simulations. The ACIs should maximize real-time situations to evaluate students' clinical proficiencies whenever feasible. Several barriers hindered real-time clinical proficiency evaluation, including the occurrence of injuries and conditions not coinciding with the clinical proficiency evaluation timetable and an inadequate volume of injuries and conditions. The ATEP administrators should develop alternative methods of valid and reliable clinical proficiency evaluation (eg, SPs) for use when real-time clinical proficiency evaluations are not possible.

ACKNOWLEDGMENTS

This research was supported through a grant from the Great Lakes Athletic Trainers' Association Research Assistance Program.

REFERENCES

Copyright: the National Athletic Trainers' Association, Inc
Figure 1

Conceptual framework of qualitative data: student engagement in real-time clinical proficiency evaluations. AT indicates athletic trainer.


Figure 2

Conceptual framework of qualitative data: barriers to real-time clinical proficiency evaluation. ACI indicates Approved Clinical Instructor; AT, athletic trainer.


Contributor Notes

Kirk J. Armstrong, EdD, ATC; Thomas G. Weidner, PhD, ATC, FNATA; and Stacy E. Walker, PhD, ATC, contributed to conception and design; acquisition and analysis and interpretation of the data; and drafting, critical revision, and final approval of the article.

Address correspondence to Kirk J. Armstrong, EdD, ATC, Assistant Professor, Department of Kinesiology, Georgia College & State University, Milledgeville, GA 31061. Address e-mail to kirk.armstrong@gcsu.edu