Online Publication Date: 01 May 2014

Evaluating Evidence-Informed Clinical Reasoning Proficiency in Oral Practical Examinations

Page Range: 43–48
DOI: 10.4085/090143

Clinical reasoning is the specific cognitive process health care practitioners use to formulate accurate diagnoses for complex patient problems and to establish and carry out effective care. Athletic training students and practitioners need to develop and display effective clinical reasoning skills in the assessment of injury and illness as a first step toward evidence-based functional outcomes. In addition to the proper storage of and access to appropriate biomedical knowledge, an equally important component of effective clinical reasoning is the ability to select and interpret various conclusions from the mounting quantity of evidence-based medicine (EBM) sources. In assessing injury and illness, this competency relies particularly on experience, skill execution, and the available evidence pertaining to the diagnostic accuracy and utility of various special tests and physical examination procedures. To both develop and assess our students' ability to integrate EBM into their clinical reasoning processes, we have designed exercises and evaluations pertaining to evidence-based clinical decision making during oral practical examinations in our assessment of athletic injury labs. These integrated oral practical examinations are designed to challenge our students' thinking and clinical performance by providing select key features of orthopaedic case pattern presentations and asking students to select the diagnostic tests best suited to each particular case. Students must not only match the appropriate special/functional tests to the case's key features, but also choose and explain how useful the chosen tests are for the differential diagnosis process, relative to the best diagnostic evidence. This manuscript presents a brief theoretical framework for our model and discusses the process we use to evaluate our students' ability to properly select, perform, and explain various orthopaedic examination skills and the relevant available evidence. Specific examples of oral practical exam modules are also provided for elucidation.

INTRODUCTION

Clinical reasoning is the nuanced, advanced cognitive process whereby health care practitioners quickly and safely formulate accurate diagnoses for complex patient problems. Central to this cognitive mechanism is the ability to consider and rule out relevant and competing differential diagnoses in order to act upon subsequent management decisions and to develop safe and effective plans of care for patients.1 Athletic training is no different from its medical and health care contemporaries—both students and practitioners need to develop and display effective clinical reasoning skills in the assessment of injury and illness in order to provide evidence-based functional outcomes. In clinical practice, a sound and accurate diagnosis (Dx) is indeed the first step toward favorable and efficient outcomes; thus, the thinking process that initiates each and every patient encounter must be sound and sophisticated.

Whereas novice clinicians favor the slower, analytic hypothetico-deductive approach to reasoning, characterized by the formation of multiple hypotheses followed by prolonged, unorganized, and time-intensive evaluation procedures, more experienced practitioners display more streamlined and accurate thinking by using case pattern presentation (CPP) as their primary mode of thinking through complex patient problems.1 Central to the ability to formulate, store, and recall specific CPPs is the capable clinician's readiness to identify key features (KFs) that fit within a known and experienced pattern, while equally recognizing features that do not fit known or previously experienced patterns—recognition that, in cyclical fashion, leads to new case patterns. Akin to an experienced criminal detective, an experienced clinician is able to quickly and adeptly discern relevant from irrelevant data points and, better yet, accurately connect the dots between the various features (signs, symptoms, mechanism of injury, physical exam results, etc) presented before him or her.

Further, a critical component of effective clinical reasoning is the ability to properly select and interpret various conclusions from the mounting quantity of evidence-based medicine (EBM), particularly as the evidence pertains to the diagnostic accuracy and predictive value of various special tests and physical examination procedures.2 It is essential for both developing and practicing clinicians to learn to judiciously and effectively integrate the best available evidence for the diagnostic utility of each clinical test they are taught if they intend to develop the level of competence needed to (1) accurately diagnose illness and injury and (2) subsequently implement effective and efficient treatment plans. To reach this level of evidence-informed clinical competence, students and clinicians alike must be made aware of the diagnostic properties of tests and measures and know which of them have clinical utility and which do not—or at least which are limited.3 Specifically, practicing clinicians need to be adept at wading through the various and intimidating statistics behind the plethora of special tests and other physical examination procedures put forth in academic texts, professional journals, and educational programs, and, at some point in their development, they need to know which tests have usable properties and which do not.4 The sometimes daunting numbers behind sensitivity, specificity, likelihood ratios, predictive values, Quality Assessment of Diagnostic Accuracy Studies, and utility scores must become KFs in the clinical mind; they must become a natural and usable part of the clinical decision-making process if sound diagnoses are to be made with accuracy and efficiency.
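For illustration only (this arithmetic is standard and is not itself part of our examination model), the following minimal sketch, written in Python with hypothetical counts, shows how the statistics named above are derived from a standard 2 × 2 contingency table:

```python
# Minimal sketch: deriving common diagnostic accuracy measures from a
# 2 x 2 contingency table. The counts below are hypothetical and serve
# only to illustrate the arithmetic, not to characterize any special test.

def diagnostic_stats(tp, fp, fn, tn):
    """Return common diagnostic accuracy measures from 2 x 2 table counts.

    tp: condition present, test positive   fp: condition absent, test positive
    fn: condition present, test negative   tn: condition absent, test negative
    """
    sensitivity = tp / (tp + fn)   # proportion with the condition who test positive
    specificity = tn / (tn + fp)   # proportion without the condition who test negative
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return {
        "sensitivity": sensitivity, "specificity": specificity,
        "LR+": lr_pos, "LR-": lr_neg, "PPV": ppv, "NPV": npv,
    }

# Hypothetical counts for a special test compared against a reference standard
print(diagnostic_stats(tp=43, fp=6, fn=7, tn=44))
```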

As clinical reasoning and evidence-informed practice are the centerpieces of our institution's athletic training education philosophy, we have intentionally sought mechanisms for teaching and evaluating our students' aptitude for developing and appreciating this complex and nuanced faculty of the mind. For the oral practical examinations in our Assessment of Injuries laboratory class, we have integrated clinical reasoning with evidence-based practice (EBP) by developing evaluation models capable of addressing the critical interconnections between the 2 concepts of mind and practice. As this class is taught to our second-semester sophomore athletic training majors, this pedagogical strategy represents our students' initial exposure to the integration of clinical reasoning and diagnostic accuracy. From a programmatic perspective, this initiative is a central component of our attempts to weave both clinical reasoning and EBP throughout our entire curriculum, both didactically and clinically. As extensive reviews of clinical reasoning and the components of diagnostic accuracy are available elsewhere, the point herein is not to cover those concepts in depth, but rather to share 1 example of how our program attempts to introduce and assess these higher order thinking skills in our more novice students.

ORAL PRACTICAL EXAMINATIONS

The assessment-based oral practical examinations we have developed are designed to challenge and assess our students' ability to understand and apply both clinical reasoning and EBP in a mock-clinical context. These examinations were born out of, and represent a modification of, the significant work done in medical education on the use of KF examinations for assessing clinical decision-making skills by Page et al.5 In brief, the patient-based scenarios are role played by seasoned clinicians who present a series of “macro evaluations” for the student to address. Each final oral practical examination is made up of a collection of small scenarios presented either as a definitive Dx or as a short list of KFs from known orthopaedic CPPs. When given an actual Dx to work from, students are in effect working backwards from that Dx and are asked to demonstrate mastery of certain components of an evaluation relative to that Dx, a macro evaluation in that it is not a complete A to Z evaluation of the condition (see Figure 1). Because the Dx is known in these sections of the overall exam, we intend to address what might be called lower-level clinical reasoning, which is befitting of the novice status of our younger students.

Figure 1. Example of a simple case whereby a definitive diagnosis (Dx) is provided to the student. Here, the student is not required to form a differential Dx; thus, a lower level of clinical reasoning is assessed. However, the student must adequately rule out competing Dx (fracture [Fx], syndesmosis sprain, etc) as the evaluation is performed in order to get full credit for a complete and systematic evaluation. Notice also that students in this section are not expected to do a complete evaluation; this macro evaluation only highlights selected portions—just 1 element of an overall examination.


In contrast, presenting only select KFs of known CPPs and asking students to work through certain components of an evaluation requires higher-level clinical reasoning because it forces students to immediately generate a differential Dx before proceeding with their physical examination (see Figure 2). This more sophisticated thinking requirement is more typical of advanced, experienced clinicians and is representative of what Bordage6 calls elaborated knowledge, in that it requires students to more clearly define the patient problem early in the process and to demonstrate a deeper understanding of the signs/symptoms and other data presented (as KFs). In elaborated knowledge, signs and symptoms are never fully understood in isolation, but rather are considered in relation to their position within the total set of signs and symptoms being presented.

Figure 2. More advanced macro scenario that requires instant differential diagnosis processing and systematic and orderly evaluation relative to the key features (KFs) presented in order to discern relative diagnoses. Again, note that this component does not require a complete evaluation on the student's part, only an evaluation of selected components, and represents a different joint, patient demographic, and category of injury (chronic/insidious versus acute). In this scenario, the athletic training student (ATS) should be able to differentiate between patellofemoral joint dysfunction/instability, varus joint strain, lateral meniscal pathology, and iliotibial band compression syndrome.


In the final examination version, each student receives a combination of macro evaluations—1 or 2 known diagnoses to evaluate, and 1 or 2 KFs of a known CPP—with a mix of simple and complex cases to address across multiple joints. After performing the required macro evaluation steps (some ask for a history, some for an observation, etc) and other contextually relevant physical exam steps (relevant palpation, range of motion [ROM], manual muscle testing [MMT], etc), students are then required to select the most fitting and accurate diagnostic tests for the CPP being portrayed by the mock patient. Figure 1 is an example of a simple lower extremity known-Dx macro evaluation, and Figure 2 is an example of a more complex lower extremity KF presentation; each represents 1 part of an overall final examination. In designing these sorts of evaluations, we are attempting to foster more streamlined, organized, and clinically relevant assessment behaviors and habits in our novice students, and to break them of the age-old habit of doing every special test they know for the joint under investigation. We directly intend to foster, develop, and assess a level of elaborated knowledge in our students by getting them to clearly define the patient problems that we have covered in class via the use of differential Dx tables, and we are equally trying to avoid the “jaundiced textbook” phenomenon, whereby students can be seen highlighting entire book chapters and then attempting to do everything on a subsequent practical evaluation.6 Throughout the practical exam, students must not only match the appropriate special/functional tests to the definitive Dx given or the KFs provided (thus demonstrating their mastery of the deeper linkages between the various signs and symptoms that are central to the ability to compare and contrast among competing diagnoses), but also choose and explain how useful the chosen test(s) are for the differential Dx process being entertained, relative to the best evidence (sensitivity/specificity, likelihood ratios, etc) available (thus demonstrating their understanding of how and when to use the “best evidence available”).

For example, if the mock patient presented KFs such as a valgus rotation mechanism of injury, a report of hearing/feeling a loud pop, and a sensation of the knee shifting at the moment of injury, the successful student would develop a differential Dx of sprains to the medial collateral ligament, the anterior cruciate ligament (ACL), and possibly the medial meniscus, and perhaps a subluxed/dislocated patella. Because the differential Dx includes a ruptured ACL, the student should first choose (and perform) the Lachman test as the best test to confirm and/or rule out this Dx, based on its documented utility score of 1.7 Priority points are given for performing the Lachman test as the best choice to confirm/rule out ACL damage, and lower point values are given to students who choose the anterior drawer as the best choice (with a utility score of 2, based on the lower specificity and sensitivity values reported in the literature). Thus, a student who accurately formulated an ACL sprain as his/her number 1 Dx and chose to execute and explain the Lachman test first, followed by the anterior drawer, would score more points than a student who formulated the same Dx yet chose to do the anterior drawer first, followed by the Lachman test. Fundamentally, both students would have demonstrated sound clinical reasoning in recognizing the KFs presented and formulating a likely differential for the given scenario, and both would have chosen applicable special tests specific to the decision making for the ACL element of the Dx, but one student would have demonstrated better mastery of the evidence in choosing the Lachman as the best test for the Dx under consideration.
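To make the priority-point logic concrete, the following is a minimal sketch of how such scoring could be encoded; the utility scores follow the convention of Cook and Hegedus,7 but the test list, point values, and function are hypothetical illustrations, not our actual grading key:

```python
# Hypothetical sketch of the priority-point idea described above: tests with
# better documented utility scores (1 = strongest evidence, per the Cook and
# Hegedus convention) earn more points when chosen first. Test names, scores,
# and point values are illustrative assumptions only.

UTILITY_SCORES = {          # lower score = stronger diagnostic evidence
    "lachman": 1,
    "anterior drawer": 2,
}

PRIORITY_POINTS = {1: 3, 2: 2, 3: 1}   # points awarded by utility score

def score_first_test_choice(chosen_test: str) -> int:
    """Award priority points for the FIRST special test the student selects."""
    utility = UTILITY_SCORES.get(chosen_test.lower())
    if utility is None:
        return 0            # test not indicated for this differential Dx
    return PRIORITY_POINTS.get(utility, 0)

# The student who leads with the Lachman test outscores the one who leads
# with the anterior drawer, even though both tests address the ACL.
print(score_first_test_choice("Lachman"))          # 3
print(score_first_test_choice("Anterior drawer"))  # 2
```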

SCORING AND FEEDBACK

In general, the scoring/grading of each section is somewhat fluid in that each instructor/mock patient is allowed some latitude to “go where the student goes” in the evaluation (given the clinical context). Not only is this element of grading more or less required because of the time-intensive nature of these exams and limited personnel, but it is also consistent with prior research on KF examinations. Page et al5 point out that flexible scoring is required to accommodate the complexity and configurations of actions often required in the resolution of real-world clinical problems. We believe in the very real notion of clinical and learning context, and that not all things can be controlled for in teaching and learning environments—especially in the realm of clinical learning and the impact and role of clinical experience (for both the instructor doing the grading and the student attempting to demonstrate mastery). Thus, we follow the recommendations put forth by Page et al5 by allowing our various instructors to grade students as they see fit while providing specific and constructive feedback that is contextually relevant to each student. In the 2 examples provided herein (Figures 1 and 2), you will also notice differentiated point values for various items. In keeping with the blueprint put forth by Page et al,5 we recognize that not all steps in the resolution of a problem are of equal importance, and so we subscribe to their idea that time is better spent focusing on what we feel are the critical steps of certain evaluations, while being respectful of the clinical context at the same time. To help the reader better use and understand our macro exams, we offer a few guidelines (with a brief illustrative sketch after the list), while fully realizing that programmatic differences and pedagogical style will largely influence how one chooses to implement what we are offering:

  1. Points are not deducted when students do something not listed on the evaluation form. For example, if a student palpates medial bony structures and performs a valgus stress test for the lateral knee pain scenario in Figure 2, the student is not penalized any points. The student simply does not get points added, and feedback is given that those actions are not necessarily wrong in this scenario, but that they can distract from the important steps in an effective and streamlined evaluation of this nature.

  2. There is room to award points when a student chooses to perform something appropriate that is not listed on the evaluation form, something we had not thought of. For example, if the student wanted to evaluate the patient's gait in order to identify the source of the patient's pain/dysfunction in the scenario in Figure 1, that would be an appropriate strategy, and the student would be rewarded. This is an example of the flexible scoring we refer to.

  3. Each instructor can award from 0 to 3 points for the execution of tests and evaluation skills (ROM, MMT, etc) based on the instructor's assessment of the student's performance. Before this oral practical scenario, all of our students will have been evaluated on each subskill with a more discerning and detailed rubric and in multiple formats; because the details of psychomotor execution have already been addressed, we are fully comfortable with a more relaxed scoring mechanism that focuses on context and decision making at this point.

  4. Differentiated point values, and what is included as required in each scenario, are based on various points of emphasis in our program, in both what we teach and how we teach it, and on our concept of differential Dx making and the use of KFs. Other instructors may choose to modify the content or point values, or to prioritize other elements of evaluation strategies relative to particular joints and/or diagnoses.
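As referenced above, here is a minimal sketch of how these 4 guidelines might be encoded as a simple rubric; the structure, items, and point values are hypothetical illustrations of the policy, not our actual evaluation forms:

```python
# Hypothetical sketch of the flexible scoring guidelines above: no deductions
# for extra steps (guideline 1), discretionary credit for sound unlisted
# actions (guideline 2), and 0-3 points per skill execution (guideline 3).
# Items and numbers are illustrative only.

from dataclasses import dataclass, field

@dataclass
class MacroExamScore:
    earned: int = 0
    feedback: list[str] = field(default_factory=list)

    def score_listed_item(self, item: str, execution_points: int) -> None:
        """Guideline 3: instructor awards 0-3 points per listed skill."""
        self.earned += max(0, min(execution_points, 3))

    def note_unlisted_item(self, item: str, appropriate: bool) -> None:
        """Guidelines 1 and 2: extra steps never cost points; appropriate
        but unlisted steps may earn discretionary credit."""
        if appropriate:
            self.earned += 1   # flexible bonus, at instructor discretion
            self.feedback.append(f"Good addition: {item}")
        else:
            self.feedback.append(
                f"Not wrong, but {item} distracts from a streamlined exam")

score = MacroExamScore()
score.score_listed_item("special test execution", 3)
score.note_unlisted_item("gait evaluation", appropriate=True)
score.note_unlisted_item("valgus stress test", appropriate=False)
print(score.earned, score.feedback)
```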

SUMMARY

Using these exam formats forces novice students to match various KFs (eg, history, quality of pain) of either a mock CPP or a definitive Dx, and then to select and demonstrate the best steps for evaluation and decision making, thus requiring them to demonstrate proficiency in cognitive processing and skill execution as well as the application and interpretation of those skills. The purpose of this short paper was to present 1 method for evaluating the ability of students to properly select, perform, and interconnect various physical examination skills and the relevant available evidence, all as interconnected components of their developing clinical reasoning aptitude.

Since the introduction of the fifth edition of the Athletic Training Education Competencies,8 the integration of EBM into established athletic training curricula has continued to pose several challenges for students, educators, and clinical preceptors alike.9,10 The 14 new EBM competencies consist of fairly advanced concepts and skills that must be taught, learned, and, most importantly, practiced in all educational programs. Essentially, this requires athletic training faculty members to develop and implement effective pedagogical strategies for turning EBM into EBP in neophyte clinicians. To complicate matters, many practicing clinicians were not formally taught the principles of EBM during their educational preparation, and many have limited resources and time for learning and doing EBM on a regular basis, forcing ATEP faculty members and administrators to make the time to learn the concepts of EBM and then become confident enough to teach this information to students and their clinical educators simultaneously.9 Ensuring that students understand these concepts and how (and when) to apply relevant elements of EBM in clinical settings and contexts can be difficult for educators unfamiliar with the ideas and principles of EBM. Additionally, faculty members face many challenges in keeping up with constantly emerging and evolving evidence and in finding effective ways to incorporate new evidence and ideas into their pedagogy and across their curriculum.

Beyond the challenges facing athletic training educators, students face considerable challenges of their own in absorbing these new concepts and learning to apply these relatively complicated processes to their clinical experiences in meaningful ways. Regarding diagnostic reasoning, athletic training students must take the evidence being taught to them, such as sensitivity and specificity statistics or utility scores, determine what makes a test valuable, and then figure out exactly what to do with that information in the clinical context. For example, although a test may have a low utility score, its sensitivity or specificity may still make the test clinically relevant in a given context. As educators, we must constantly help students digest the profusion of information and form clinical judgments that position the patient at the center of the process and lead to sound and safe decisions. This process is challenging for seasoned clinicians alone, never mind undergraduate, entry-level students who are simultaneously learning and developing the bioscientific knowledge base (anatomy, physiology, pathophysiology, biomechanics, etc) and psychomotor skill sets required to understand and perform athletic training.
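For instance, a brief sketch of the standard pretest-to-post-test probability arithmetic (with hypothetical numbers, not drawn from any cited source) can show students why a test with a modest overall rating may still be clinically informative:

```python
# Minimal sketch of the point above: converting a pretest probability to a
# post-test probability with a likelihood ratio (standard Bayes arithmetic
# via odds). The numbers are hypothetical, chosen only to show how a test
# that looks modest overall can still meaningfully shift the probability
# of a diagnosis.

def post_test_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Convert pretest probability to post-test probability via odds."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Suppose a clinician estimates a 30% pretest probability for a given Dx, and
# a positive result carries a hypothetical LR+ of 4 (high specificity despite
# mediocre sensitivity):
print(round(post_test_probability(0.30, 4.0), 2))   # ~0.63: probability roughly doubles
```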

We have been using these formats for the last 2 years with pleasing success, while also recognizing that, like most pedagogical approaches, this is not yet a perfect system; we continue to modify and amend our scenarios, the formatting, and our grading criteria/procedures to better fit our needs and intent. Student response has been overwhelmingly positive: students report liking the real clinical context and the concerted emphasis on thinking and doing. They also appreciate the instant and direct feedback we provide, not only on their skill execution but, more importantly, on their thinking—why they chose to perform certain tests and physical exam procedures. Developing, conducting, and scoring these types of evaluations is indeed time- and energy-intensive on our part, yet to date we are happy not only with what the students tell us about the challenges associated with this thinking-based microexam format, but also with what these exams tell us about our teaching and our direct attempts to integrate clinical reasoning and EBP across our curriculum. We feel strongly that this has been an instrumental step in our efforts to move our students toward more elaborated knowledge as they embark on the more meaningful and challenging clinical experiences that lie ahead.

REFERENCES

  1. Geisler PR, Lazenby TS. Clinical reasoning in athletic training education: modeling expert thinking. Athl Train Educ J. 2009;4(2):2–14.
  2. Valovich McLeod T. Evidence-based practice. In: Hillman SK, ed. Core Concepts in Athletic Training & Therapy. Champaign, IL: Human Kinetics; 2012:559–567.
  3. Cleland JA, Koppenhaver S. Netter's Orthopaedic Clinical Examination: An Evidence-Based Approach. 2nd ed. Philadelphia, PA: Saunders Elsevier; 2011.
  4. Denegar CR, Cordova ML. Application of statistics in establishing diagnostic certainty. J Athl Train. 2012;47(2):233–236.
  5. Page G, Bordage G, Allen T. Developing key-feature problems and examinations to assess clinical decision-making skills. Acad Med. 1995;70(3):194–201.
  6. Bordage G. Elaborated knowledge: a key to successful diagnostic thinking. Acad Med. 1994;69(11):883–885.
  7. Cook CE, Hegedus EJ. Orthopedic Physical Examination Tests: An Evidence-Based Approach. 2nd ed. Boston, MA: Pearson; 2012.
  8. National Athletic Trainers' Association. Athletic Training Education Competencies. 5th ed. Dallas, TX: NATA; 2011.
  9. Hankemeier DA, Van Lunen BL. Approved clinical instructors' perspectives on implementation strategies in evidence-based practice for athletic training students. J Athl Train. 2011;46(6):655–664.
  10. Manspeaker S, Van Lunen BL. Overcoming barriers to implementation of evidence-based practice concepts in athletic training education: perceptions of select educators. J Athl Train. 2011;46(5):514–522.
Contributor Notes

Please address all correspondence to Paul R Geisler, EdD, ATC, Athletic Training Education, Department of Exercise & Sport Sciences, Ithaca College, Ithaca, NY 14850. pgeisler@ithaca.edu. About the Column Editor: Dr W David Carr is currently an assistant professor at Missouri State University, Springfield.