Prediction of Core and Lower Extremity Strains and Sprains in Collegiate Football Players: A Preliminary Study
Context: Poor core stability is believed to increase vulnerability to uncontrolled joint displacements throughout the kinetic chain between the foot and the lumbar spine.
Objective: To assess the value of preparticipation measurements as predictors of core or lower extremity strains or sprains in collegiate football players.
Design: Cohort study.
Setting: National Collegiate Athletic Association Division I Football Championship Subdivision football program.
Patients or Other Participants: All team members who were present for a mandatory physical examination on the day before preseason practice sessions began (n = 83).
Main Outcome Measure(s): Preparticipation administration of surveys to assess low back, knee, and ankle function; documentation of knee and ankle injury history; determination of body mass index; 4 different assessments of core muscle endurance; and measurement of step-test recovery heart rate. All injuries were documented throughout the preseason practice period and 11-game season. Receiver operating characteristic analysis and logistic regression analysis were used to identify dichotomized predictive factors that best discriminated injured from uninjured status. The 75th and 50th percentiles were evaluated as alternative cutpoints for dichotomization of injury predictors.
Results: Players with ≥2 of 3 potentially modifiable risk factors related to core function had 2 times greater risk for injury than those with <2 factors (95% confidence interval = 1.27, 4.22), and adding a high level of exposure to game conditions increased the injury risk to 3 times greater (95% confidence interval = 1.95, 4.98). Prediction models that used the 75th and 50th percentile cutpoints yielded results that were very similar to those for the model that used receiver operating characteristic-derived cutpoints.
Conclusions: Low back dysfunction and suboptimal endurance of the core musculature appear to be important modifiable football injury risk factors that can be identified on preparticipation screening. These predictors need to be assessed in a prospective manner with a larger sample of collegiate football players.
Prediction of Core and Lower Extremity Strains and Sprains in Collegiate Football Players: A Preliminary Study
Prevention of injuries is an important domain of athletic training, but very little high-quality research evidence is available to guide the administration of specific injury-prevention practices.1–5 Our current understanding of injury causation is limited, particularly at the individual level, but must be improved in order to develop effective interventions for modifying injury risk.3,5–7 Intrinsic risk factors are characteristics of an individual athlete that increase injury predisposition, whereas injury susceptibility results from the exposure of a predisposed athlete to extrinsic risk factors (eg, body contact with opposing players during participation in a contact sport).6–9 Conversely, an athlete who is repeatedly exposed to potentially injurious conditions without experiencing injury may possess intrinsic characteristics that counteract the extrinsic factors that would otherwise induce injury. Injury-prevention efforts have historically focused on identifying and reducing general risk factors, rather than developing protective neuromuscular adaptations that are specific to an individual athlete's intrinsic characteristics.9–11
In recent years, the concept of core stability has been advocated as an important consideration for maintaining dynamic joint stability throughout the kinetic chain that extends from the foot to the lumbar spine.12–18 The core has been defined as the lumbopelvic-hip complex, which is composed of the lumbar vertebrae, pelvis, and hip joints and the active and passive structures that either produce or restrict movements of these segments.19 Core stability has been defined as “the ability to control the position and motion of the trunk over the pelvis and leg to allow optimum production, transfer, and control of force and motion to the terminal segment in integrated kinetic chain activities.”17(p190) The abdominal, paraspinal, and gluteal muscles are the focus of core stability training programs, which are believed to enhance performance capabilities and reduce injury risk.20 Poor core stability could be either the cause or the result of low back dysfunction.22–24 Furthermore, weakness and alteration of neuromuscular activation patterns in the lumbopelvic-hip complex and the lower extremity have been documented in patients with joint injuries that are distant from the affected musculature.25–30 The injury-related neural effect on muscle activation can apparently occur in either a distal-to-proximal or proximal-to-distal direction. Lower extremity dysfunction increases susceptibility to low back injury,31 and susceptibility to lower extremity injury appears to be increased by low back dysfunction.30,32–34 A history of low back injury at the beginning of a sport season has been reported to present a 6-fold increase in the risk for sustaining another low back injury during participation in collegiate sports.35 Traditional strength training may be inadequate for optimal development of neuromuscular control of the core,23 and rapid fatigue of the core musculature appears to indicate poor core stability.36,37 Thus, the amount of time an individual can maintain a static body position that involves loading of the core musculature may be valuable for quantifying the risk for injury to either the core or the lower extremity.
Injury-prevention researchers must address interrelated biomechanical, physiologic, behavioral, and medical factors that influence the manner in which forces of different magnitudes, rates, and frequencies affect the body tissues.38,39 A prospective cohort study involves quantifying factors believed to present a risk for injury before the cohort members are exposed to the potential for injury occurrence. Injury cases are documented for a predetermined period of time, after which the prospectively identified risk factors are analyzed separately and collectively for their predictive value.3,6,40 Such a multifactorial approach to assessing sport injury causation has been underutilized.8,11
The purpose of our prospective cohort study was to assess the value of potential predictors of core and lower extremity sprains and strains in collegiate football players. A secondary purpose was to develop a set of strongest predictors for categorizing individual players as having high-risk versus low-risk status, which could be used in future studies for refining and validating a clinical prediction rule to identify individual players with a high level of potentially modifiable injury risk.
METHODS
Cohort Characteristics and Screening Procedures
A National Collegiate Athletic Association Division I Football Championship Subdivision football program provided administrative coordination for acquisition of preparticipation data from team members, and its athletic training staff documented all injuries that occurred over the course of a 4-week preseason practice period and an 11-game season. The cohort consisted of 83 football players (age = 20 ± 1.5 years, height = 185.0 ± 5.4 cm, mass = 99.7 ± 19.0 kg) who voluntarily completed surveys and performed physical tests on the day before preseason practice began and who were members of the team throughout the entire season. Data for 10 additional players who were either unavailable for the preparticipation assessment or left the team before the end of the season for reasons unrelated to injury were excluded from analysis. All study procedures were approved by the Institutional Review Board of the University of Tennessee at Chattanooga.
To quantify self-perceptions of preparticipation functional status of the low back, knees, and ankles and feet, we administered 3 surveys with well-established psychometric properties: the Oswestry Disability Index (ODI),41 the subjective knee function scale developed by the International Knee Documentation Committee (IKDC),42 and the sports component of the Foot and Ankle Ability Measure (FAAM).43 Players were instructed to rate the status of symptoms and capabilities as perceived on the day of survey completion. The ODI section for sex life was replaced with a section for work and sports activities, which was structured to correspond with the employment and homemaking section used by Fritz and Irrgang.44 Among 23 patients with low back pain who were classified as having stable conditions, test-retest reliability for this version of the ODI over a 4-week interval was 0.90.44 To obtain values for overall functional status, both the IKDC and FAAM surveys included the following instructions: “If both knees (or ankles-feet) are about the same in terms of function and symptoms, rate the status of both knees (or ankles-feet). If one knee (or ankle-foot) presents greater functional limitations and symptoms, rate the status of the worst knee (or ankle-foot).” To obtain information about the severity of any previous lower extremity joint injury, the following question was added to the IKDC and FAAM surveys: “Have you previously sustained a knee (or ankle-foot) injury that required use of crutches and/or prevented participation in work or sports activities for 2 or more days?” Responses were not included in calculating the IKDC and FAAM scores.
To assess endurance of the core musculature, the maximum amount of time that a static body position could be maintained against gravity was quantified by 4 tests that were performed in the same sequence by each player: horizontal back-extension hold, sitting 60° trunk-flexion hold, side-bridge hold, and bilateral wall-sit hold. The back-extension hold, trunk-flexion hold, and side-bridge hold tests have been reported to provide highly reliable measurements of core musculature performance capabilities.45 These 3 tests were administered according to the procedures reported by McGill,46 with the following modifications: rather than using straps to stabilize the pelvis and lower extremities for the back-extension hold test, manual stabilization was provided by another player, and the side-bridge hold test was performed on the dominant arm with the top foot resting on the lower foot, rather than with both feet in contact with the floor. The wall-sit hold test is widely used to assess endurance of the pelvic and thigh musculature, but we found no documentation of its test-retest reliability in the literature.
Aerobic capacity was assessed with the 3-minute step test developed by McArdle et al,47,48 which has been reported to provide reliable values. Players performed a 4-step cycle (ie, up-up-down-down) on a 40.6-cm bench in rhythm to an auditory signal that established a rate of 96 steps per minute. An electronic pulse monitor (Polar Pacer; Polar Electro, Inc, Lake Success, NY) was used to determine recovery heart rate (HR) at 15 seconds after completion of the stepping task. Other documented potential predictors of core and lower extremity injury were body mass index (BMI), position category, and level of exposure to potentially injurious circumstances (ie, games started and games played).
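As a minimal illustration of the BMI determination, assuming only the standard mass/height-squared formula (the function and variable names below are ours, not part of the study protocol), the calculation can be sketched as follows:

def body_mass_index(mass_kg, height_cm):
    # BMI = mass (kg) divided by height (m) squared
    height_m = height_cm / 100.0
    return mass_kg / height_m ** 2

# Using the cohort means reported above (mass 99.7 kg, height 185.0 cm)
print(round(body_mass_index(99.7, 185.0), 1))  # approximately 29.1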
An injury was defined as a core or lower extremity strain or sprain that required the attention of an athletic trainer and that limited football participation to any extent for at least 1 day after its occurrence. Fractures, dislocations, contusions, lacerations, abrasions, and overuse syndromes were excluded from the analysis. All sport-related injuries that resulted from participation in practice sessions, conditioning sessions, or games were recorded from the start of the preseason practice period until the end of the season. If a player sustained more than 1 core or lower extremity injury, the injury that imposed the greatest restriction on football participation was designated as primary.
Data Analysis
Players were dichotomously categorized as injured or uninjured for data analysis. Receiver operating characteristic (ROC) analysis was used to establish cutpoints for dichotomization of potential predictive variables, and the Fisher exact 1-sided test was used to identify those that best discriminated injured status from uninjured status (1-sided P < .10). We used backward stepwise logistic regression analysis to identify the best combination of predictors and ROC analysis to identify the optimal number of positive factors to distinguish injured status from uninjured status. Because game exposure cannot be quantified until after a season has ended, it was excluded from subsequent analyses that were limited to potentially modifiable injury risk factors. Also, because ROC analysis cannot be used to establish cutpoints until after a season has ended, the 75th and 50th percentiles were evaluated as alternative cutpoints for preparticipation dichotomization of potentially modifiable injury risk factors. Data were analyzed using SPSS (version 17.0; SPSS, Inc, Chicago, IL).
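The Python sketch below illustrates the sequence of analytic steps described above for a single predictor (ROC-derived cutpoint, Fisher exact 1-sided test, logistic regression). It is offered only as a conceptual outline: the data are simulated, the Youden index is assumed as the cutpoint criterion, the backward stepwise selection procedure is not reproduced, and the original analysis was performed in SPSS 17.0.

import numpy as np
from scipy.stats import fisher_exact
from sklearn.metrics import roc_curve
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
injured = rng.integers(0, 2, size=83)                   # 0 = uninjured, 1 = injured (simulated)
hold_time = rng.normal(60, 20, size=83) - 10 * injured  # eg, trunk-flexion hold time (s), simulated

# ROC-derived cutpoint: threshold maximizing sensitivity + specificity - 1 (Youden index)
fpr, tpr, thresholds = roc_curve(injured, -hold_time)   # negated because low hold times predict injury
cutpoint = -thresholds[np.argmax(tpr - fpr)]
low_hold = (hold_time < cutpoint).astype(int)

# Univariate screen: Fisher exact 1-sided test on the dichotomized factor
table = [[np.sum((low_hold == 1) & (injured == 1)), np.sum((low_hold == 1) & (injured == 0))],
         [np.sum((low_hold == 0) & (injured == 1)), np.sum((low_hold == 0) & (injured == 0))]]
_, p = fisher_exact(table, alternative="greater")

# Multivariable step: logistic regression on the retained dichotomous factor(s)
model = LogisticRegression().fit(low_hold.reshape(-1, 1), injured)
print(f"cutpoint = {cutpoint:.1f} s, Fisher P = {p:.3f}, coefficient = {model.coef_[0][0]:.2f}")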
RESULTS
A total of 46 core and lower extremity injuries were documented in the course of the season, which represented 7.6 injuries per 1000 player-exposures. At least 1 core or lower extremity injury was sustained by 39 of the 83 players (47%). Primary injuries included 7 low back strains, 10 hip-groin strains, 2 hamstring strains, 8 knee sprains, 3 syndesmotic ankle sprains, 7 lateral ankle sprains, and 2 midfoot sprains. Season-ending injuries (ie, anterior cruciate ligament tear, syndesmotic ankle sprain, midfoot sprain) were sustained by 3 of the 39 injured players.
Means and standard deviations for survey-derived function scores, tests of physical capabilities, and BMI are presented in Table 1. Because of low values for ROC area under the curve (AUC) and a lack of clearly definable cutpoints to distinguish higher-risk cases from lower-risk cases, we excluded the FAAM (AUC = 0.49), back-extension hold (AUC = 0.51), side-bridge hold (AUC = 0.51), and step-test HR (AUC = 0.42) results from further analysis. Injury incidence did not differ between linemen (offensive and defensive linemen, including defensive ends) and nonlinemen (ie, offensive and defensive backs, wide receivers, tight ends, linebackers, kickers, and punters) by the Fisher exact 1-sided test (P = .29), nor did an alternative dichotomous categorization that grouped linebackers and tight ends with the linemen demonstrate a difference (P = .30). Results of univariate analyses of variables that were found to be strong potential predictors of injury (P < .10) are shown in Table 2.


The logistic regression results, which identified high game exposure, low trunk-flexion hold time, high ODI score, and low wall-sit hold time as the best combination of injury predictors, are provided in Table 3. Follow-up ROC analysis of the 4-factor prediction model identified ≥3 positive factors as the best standard for discrimination of injured cases from uninjured cases. Injury incidence for each possible combination of the 4 predictor variables is presented in Figure 1. No significant interactions between variables were evident.49 Although the graph in Figure 1D depicts an effect for high ODI score that is clearly different for high versus low trunk-flexion hold time, the logistic regression coefficient for the interaction between ODI and trunk-flexion hold time was not different from zero (P = .458). The 4-factor prediction model provided a high degree of accuracy in correctly categorizing players as injured or uninjured. Only 10% of injured players were positive for all 4 factors (4/39), but none of the uninjured players were positive for all 4 factors (0/44). Two or more positive factors identified 92% of injured players (36/39), whereas 48% of uninjured players (21/44) were positive for only 1 factor or no factors. Three or more positive factors provided the best balance between sensitivity and specificity at 62% and 91%, respectively. Injury risk was 3 times greater for players with ≥3 positive factors than for those who had <3 positive factors. A follow-up logistic regression analysis that excluded game exposure yielded a 3-factor model that included the same potentially modifiable factors as the primary analysis (Table 3).
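To make the accuracy statistics above concrete, the sketch below computes sensitivity, specificity, and relative risk from a 2 × 2 table for the ≥3-positive-factor criterion. The cell counts are approximate reconstructions from the percentages reported in the text (roughly 24 of 39 injured and 40 of 44 uninjured players classified correctly), not values taken from Table 4.

# 2 x 2 table for the ">= 3 positive factors" criterion (approximate reconstruction)
tp, fn = 24, 15   # injured players: >=3 factors vs <3 factors (24/39, about 62%)
fp, tn = 4, 40    # uninjured players: >=3 factors vs <3 factors (40/44, about 91%)

sensitivity = tp / (tp + fn)                   # proportion of injured players identified
specificity = tn / (tn + fp)                   # proportion of uninjured players identified
risk_positive = tp / (tp + fp)                 # injury risk among players with >=3 factors
risk_negative = fn / (fn + tn)                 # injury risk among players with <3 factors
relative_risk = risk_positive / risk_negative  # about 3, consistent with the text

print(f"Sensitivity = {sensitivity:.2f}, Specificity = {specificity:.2f}, RR = {relative_risk:.1f}")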




The ROC-derived cutpoint for ODI score corresponded with the 75th percentile value, and the ROC-derived cutpoints for the 2 core muscle endurance tests were both less than the 75th percentile but greater than the 50th percentile. For follow-up analyses, ODI score was classified as high when greater than the 75th percentile, and performance on the 2 core muscle endurance tests was classified as low when hold time was less than either the 75th percentile or the 50th percentile, which were evaluated as alternative cutpoints. Accuracy statistics are shown in Table 4, and ROC curves for the 6 different prediction models are shown in Figure 2.
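A brief sketch of the percentile-based dichotomization is shown below; the hold times are illustrative values only, and in practice a team's own preparticipation measurements would define the distribution.

import numpy as np

# Illustrative trunk-flexion hold times (seconds) for a hypothetical squad
trunk_flexion_hold = np.array([45., 62., 88., 30., 120., 75., 54., 97., 41., 66.])

for pct in (75, 50):
    cutpoint = np.percentile(trunk_flexion_hold, pct)
    low = trunk_flexion_hold < cutpoint    # "low" hold time = positive risk factor
    print(f"{pct}th percentile cutpoint = {cutpoint:.0f} s; "
          f"{low.sum()} of {low.size} players classified as low")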




DISCUSSION
The high-force collisions associated with American football make some injuries inevitable, but others may be prevented through improved screening procedures for identifying modifiable injury risk. Strains and sprains affecting the core and lower extremities have been estimated to account for 6.1 injuries per 1000 player-exposures, which represent more than one-half of all collegiate football injuries that require attention from an athletic trainer (10.5 injuries per 1000 player-exposures).50 Both suboptimal endurance of the core musculature and low back dysfunction have been associated with impaired neuromuscular control of the body's center of mass, inhibition of lower extremity muscles, and elevated risk for lower extremity injury.15,27,30,32–34 Diminished perception of rapid body core displacements during sport-specific maneuvers may interfere with the ability to generate adequate corrective responses of the core musculature, which requires lower extremity joints to displace to a greater extent to maintain postural stability.18 Any delay in activating muscles that span the displaced joints is likely to increase injury susceptibility. Our results clearly support the core stability concept as an important consideration for preventing core and lower extremity injuries in collegiate football players.
Consistent with the findings of previous investigations,51,52 game exposure was the strongest predictor of injury risk. In addition to exposure to high-intensity game collisions, players who are relied upon for team success in competitive events are involved in practice drills and scrimmage sessions to a greater extent than players who have a low level of game exposure. Among players in the high game-exposure group (38/83), 100% of those who were positive for high ODI score were injured (8/8) and 88% with low trunk-flexion hold times were injured (22/25). Although injury incidence was greater for those with high levels of game exposure, stratified analyses demonstrated that the relative effects of the other 3 risk factors on elevating injury risk were almost identical for the high game-exposure and low game-exposure groups (Figure 1A, B, and C). For the entire cohort, those who were positive for high ODI score (20/83) had a 3 times greater injury incidence with low trunk-flexion hold times (12/16) than those with high trunk-flexion hold times (1/4). The number of cases is too small to draw a definitive conclusion, but the relationship depicted in Figure 1D suggests that high trunk-flexion hold times may provide a protective effect that reduces the injury risk for players with a high ODI score. The finding that an ODI score as low as 6 on a 0-to-100 scale was associated with increased risk for core or lower extremity injury occurrence suggests that a relatively low level of self-reported low back dysfunction may be clinically important in collegiate football players.
Large body mass has been associated with elevated injury risk in high school football players,53–55 but at least 1 group52 did not confirm such a relationship. Previous injury has also been identified as an important predictor of football injury risk.51,52,54,55 Univariate analyses demonstrated that high BMI, history of both knee and ankle sprains, and low IKDC score had predictive value, but these variables did not contribute a substantial amount of unique information when combined with the other 4 variables that were entered into the multivariate logistic regression analysis for predicting core or lower extremity injury. Including any muscle strain or joint sprain between the low back and foot in the operational definition of injury is an important consideration for interpreting the results of this study. If the dichotomized dependent variable had represented a more specific type of injury to 1 body part (eg, low back strain, groin strain, knee sprain, ankle sprain), the relative predictive strength of each of the independent variables would almost certainly differ. Contrary to findings reported for high school football players,52 position category did not have predictive value.
The methods used to develop the primary prediction model (4-factor model A) ensured a high degree of precision for correct identification of both injured cases (ie, sensitivity) and uninjured cases (ie, specificity) within the cohort,40 but the ROC-derived cutpoints used to generate this model have limited applicability to injury prevention. Variations in core muscle performance capabilities among different football teams, variations in team membership from 1 year to the next, and changes in the performance capabilities of returning players over time severely limit the predictive value of any absolute physical test performance standard for discrimination of high-risk versus low-risk status. Low trunk-flexion hold time, low wall-sit hold time, and self-reported low back dysfunction represent potentially modifiable injury risk factors that can be identified by preparticipation screening, but some criterion is needed for each measurement to identify those players who are likely to derive the greatest benefit from a risk-reduction intervention. In an effort to develop a clinical prediction rule that would provide a high degree of utility for preventing injuries in teams with different physical performance capabilities, the 50th and 75th percentiles were evaluated as alternative cutpoints for dichotomous classifications of potentially modifiable risk factors (4-factor models B and C). Because of the 2-point intervals between successive ODI scores, the ROC-derived ≥6 cutpoint was identical to the lowest score that exceeded the 75th percentile (ie, lowest ODI score >4 = 6). For the 2 core endurance tests, cutpoints based on the 75th percentile were slightly higher than those derived from ROC analysis, and cutpoints based on the 50th percentile were slightly lower than those derived from ROC analysis. Overall, the modified prediction models generated remarkably similar results to those derived from the primary prediction model (Figure 2A).
Another impractical aspect of the primary prediction model for injury prevention is the inclusion of the nonmodifiable game-exposure variable, which cannot be quantified until a season has ended. The major influence of game exposure on injury likelihood warrants its inclusion for accurate multifactorial assessment of injury risk, but preseason speculation about the extent to which an individual player will be exposed to game conditions may be highly inaccurate. Excluding the game-exposure variable dramatically weakened specificity of injury prediction, but sensitivity of the 3-factor models was greater than that of the corresponding 4-factor models, and the relative risk for injury was still about 2 times greater for players who had ≥2 positive factors. High sensitivity is desirable for identifying a large proportion of the players most likely to sustain an injury, but high specificity is necessary to avoid unnecessary allocation of time and resources for the players who are least likely to be injured. Cutpoints for core endurance test values that were based on the 75th percentile (3-factor model B) provided better sensitivity than ROC-derived cutpoints (3-factor model A), whereas 50th percentile cutpoints provided better specificity (3-factor model C).
Although the prediction models provide strong evidence for a relationship between core stability and injury risk, numerous factors could adversely affect the validity of predictions for other cohorts of collegiate football players. For example, individual players on a football team may exhibit a change in risk-factor profile within a relatively short period of time as performance capability increases or decreases.9 Furthermore, motivation clearly affects the amount of time an athlete will sustain a body position against gravity as fatigue-related discomfort progressively increases.22,30,36 Because normative values for a large sample of collegiate football players are not available and because considerable variation in performance capabilities almost certainly exists among football teams, cutpoints for core endurance test values should be based on a selected percentile value for a given cohort. Although the values for sensitivity and relative risk were greater for prediction models that used the 75th percentile as a cutpoint for core endurance tests (4-factor model B and 3-factor model B), the models that used the 50th percentile (4-factor model C and 3-factor model C) provided greater specificity. For the purpose of preparticipation selection of players for a targeted injury-prevention intervention, a prediction model that provides a high degree of specificity minimizes inclusion of players who do not really need the focused attention (ie, those who are unlikely to sustain an injury). Thus, the 50th percentile cutpoint may have greater utility than the 75th percentile cutpoint for administering an injury risk-reduction program.
The predictors of elevated injury risk identified by this study need to be further assessed in a prospective manner with a larger and more diverse sample of collegiate football players to validate our findings. Until specific cutpoint values for core muscle endurance tests have been well established by other researchers, individual collegiate football players at elevated risk for core or lower extremity sprains or strains may be identified by the existence of any 2 or more of the following: trunk-flexion hold time of less than the 50th percentile for the team, ODI score of 6 or greater, or wall-sit hold time of less than the 50th percentile for the team. Players who have ≥2 of these 3 factors appear to have about twice as much risk for injury as those who have 1 or none. If there is a reasonable expectation that a player with 2 or more factors will have high game exposure (ie, starting in 3 or more games or playing in all games), the probability for injury appears to increase to a level that is about 3 times greater than that for players with <3 risk factors. Core and lower extremity injury predisposition is probably related more closely to suboptimal neuromuscular control than to fatigue of the core musculature, but tests for screening a large number of players must be simple to administer. Conceivably, a strong emphasis on core stability training could improve a team's overall performance capabilities to a level that eliminates the value of the 50th percentile of hold time as a criterion for classification of injury risk. Ultimately, sufficient performance and injury data may be accumulated to establish absolute hold-time cutpoints that have broad applicability to different cohorts of football players.
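As a hedged sketch only, the provisional screening rule described above might be implemented as follows; the function name, parameter names, and the game-exposure flag are our own labels, and the cutpoints mirror those stated in the text (ODI score of 6 or greater, hold times below the team's 50th percentile).

import numpy as np

def screen_player(odi, trunk_hold, wall_sit_hold,
                  team_trunk_holds, team_wall_sit_holds,
                  expect_high_game_exposure=False):
    # Count the potentially modifiable risk factors described in the text
    factors = int(odi >= 6)
    factors += int(trunk_hold < np.percentile(team_trunk_holds, 50))
    factors += int(wall_sit_hold < np.percentile(team_wall_sit_holds, 50))

    if factors >= 2:
        # Roughly 2 times greater risk; roughly 3 times if high game exposure is also expected
        return "elevated risk (about 3x)" if expect_high_game_exposure else "elevated risk (about 2x)"
    return "lower risk"

# Illustrative usage with simulated team distributions (not study data)
rng = np.random.default_rng(1)
team_trunk = rng.normal(70, 15, 83)
team_wall = rng.normal(90, 20, 83)
print(screen_player(odi=8, trunk_hold=50, wall_sit_hold=60,
                    team_trunk_holds=team_trunk, team_wall_sit_holds=team_wall,
                    expect_high_game_exposure=True))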
A major benefit of the proposed clinical prediction rule is providing a preparticipation screening mechanism to identify players who are likely to derive greatest benefit from a targeted training program, which has the potential to maximize the effectiveness of an injury-prevention program.3,5,9 Some football programs may overemphasize development of muscle power to the extent that high-load weightlifting leads to lumbar spine dysfunction, which can then increase predisposition to core and lower extremity injury. At present, evidence in the literature is insufficient to recommend a specific set of core stability exercises for injury risk reduction, but the success of any program depends largely on the philosophical approach to training that is embraced by the coaches and the extent to which individual players comply with the guidance provided for performance improvement.3,11 Our results suggest that development of neuromuscular control and endurance of the core musculature should not be neglected, particularly among football players with multiple risk factors.
The primary limitation of this study was the relatively small number of injuries sustained by members of a single football team during a surveillance period that was limited to a single season. Although a larger cohort would provide greater statistical power, the results confirm that moderate to strong associations can be detected with as few as 30 to 40 injury cases.6 Another important limitation is the lack of reliability analysis of values for the wall-sit hold test. More research is needed to establish specific protocols for the trunk-flexion hold and wall-sit hold tests to maximize consistency of effort among players and test-retest reliability. Future work in this area should include analysis of psychosocial factors,56,57 neurocognitive performance,58 and biomechanical malalignments, which may further establish an injury-risk profile to guide individualized prevention program elements.
CONCLUSIONS
Although the exact nature of cause-effect relationships among low back dysfunction, poor endurance of the core musculature, altered neuromuscular activation patterns, and injury occurrence has not been established, our results strongly support the combination of preparticipation injury-risk screening and individualized core stability training regimens as a strategy for preventing core and lower extremity injuries in collegiate football players. Assessment of the preliminary prediction rule's accuracy for a different cohort is needed to validate its use for quantifying individualized injury risk.

Figure 1. Graphic depiction of core and lower extremity sprain and strain incidence for stratified pairs of dichotomized variables (relative risk of injury). Independent variable categories are designated as both high or low and positive (+) or negative (−) on the basis of cutpoints derived from receiver operating characteristic analysis. Each possible combination of game exposure, Oswestry Disability Index (ODI), trunk-flexion hold, and wall-sit hold is presented. A, Game exposure X trunk-flexion hold. B, Game exposure X wall-sit hold. C, Game exposure X ODI. D, ODI X trunk-flexion hold. E, ODI X wall-sit hold. F, Wall-sit hold X trunk-flexion hold.

Figure 2. Receiver operating characteristic curves depicting the discriminatory power of alternative injury-prediction models derived from different methods for determining predictor cutpoints. A, 4-Factor models A (solid line), B (dashed line), and C (dotted line). B, 3-Factor models A (solid line), B (dashed line), and C (dotted line).