Global Positioning System–Derived Workload Metrics and Injury Risk in Team-Based Field Sports: A Systematic Review
Objective
To evaluate the current literature regarding the utility of global positioning system (GPS)–derived workload metrics in determining musculoskeletal injury risk in team-based field-sport athletes.
Data Sources
PubMed entries from January 2009 through May 2019 were searched using terms related to GPS, player workload, injury risk, and team-based field sports.
Study Selection
Only studies that used GPS metrics and had injury as the main outcome variable were included.
Data Extraction
Total distance, high-speed running, and acute : chronic workload ratios were the most common GPS metrics analyzed, with the most frequent sports being soccer, rugby, and Australian rules football.
Data Synthesis
Many distinct workload metrics were associated with increased injury risk in individual studies performed in particular sport circumstances; however, the body of evidence was inconclusive as to whether any specific metrics could consistently predict injury risk across multiple team-based field sports.
Conclusions
Our results were inconclusive in determining if any GPS–derived workload metrics were associated with an increased injury risk. This conclusion is due to a myriad of factors, including differences in injury definitions, workload metrics, and statistical analyses across individual studies.
Over the past 10 years, the field of athlete workload monitoring has accelerated rapidly, mostly due to the introduction of global positioning system (GPS) technology in field sports. Broadly, athlete monitoring describes the quantification of stresses incurred by athletes both inside and outside of practices and competitions with the purposes of enhancing athletic performance, determining athlete readiness, and mitigating injury risk. Training and match workloads are generally quantified in terms of external and internal loads. External load refers to a player's cumulative locomotor movements and can be measured using GPS and accelerometers. External load is quantified using individual player movement distance, velocity, accelerations, and decelerations.1,2 Internal load refers to the physiological response of a player to an external load and can be determined using measures such as heart rate and rating of perceived exertion (RPE).1,3,4 The relationships of external and internal load indicators with injury risk have been examined in various elite team sports such as Australian rules football, rugby, and soccer.5
Understanding the risk factors of injury is an underlying theme of sports medicine research. Injuries create both physical and psychological burdens for athletes, as well as competitive and economic burdens for sports teams.6,7 Historically, in athletic training, injury risk factors have been evaluated by taking an individual's preseason measurements (eg, strength, flexibility, injury history) and then monitoring injuries throughout the season. Retrospectively, the data would be analyzed to look for associations between single preseason measurements and injury occurrence. The introduction of athlete monitoring methods allows the incorporation of daily workload metrics, leading to a more temporal-based approach to injury risk analysis. For athletic trainers, a greater understanding of the daily sporting demands placed on an athlete could assist in decisions regarding injury prevention, rehabilitation, and return to play.
The emergence of workload monitoring technology in team-based sports has been accompanied by literature surrounding the use of sensor data for predicting or being associated with an increased injury risk. The purpose of our systematic review was to evaluate and summarize the original research literature on GPS–derived workload measures for predicting or explaining injury risk in team-based field-sport athletes.
METHODS
We conducted a PubMed literature search using the search terms shown in Table 1. Search filters were set to include only articles written in English and published between January 1, 2009, and May 1, 2019. A 10-year restriction was placed on the included articles because GPS technology in sport was introduced over the past decade. Other inclusion criteria were (1) report of original research, (2) injury as the main outcome metric, (3) use of GPS–derived movement variables as independent variables, and (4) team-based field-sport athletes were the only study participants.

Methodologic Quality
Two independent raters graded each study for level of evidence according to the National Institutes of Health Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies8 (Supplementary Table 1).
Data Extraction and Synthesis
For all included articles, the following data were extracted: study design, participants (sample size, sex, sport), GPS–derived metrics collected, and key findings, including quantitative point estimates and confidence intervals (CIs) of increased or decreased injury risk. We provided CIs when they were reported in the original article. When CIs are not presented in our “Results,” they were not reported in the original article. Additionally, we supplied exact P values or dichotomous findings of statistical significance (eg, P < .05) in our “Results” based on how they were reported in the included articles. Similarly, if the authors used magnitude-based inference descriptors (eg, likely harmful), we used the same terminology (in quotation marks) in our summary. Meta-analysis was not possible because of the heterogeneity of study methods.
RESULTS
Search Results and Methodologic Quality
We identified 499 articles in the initial search and selected 22 original research articles for inclusion (Figure). The study design characteristics are detailed in Table 2. Injury risk was assessed in 10 studies of Australian rules football, 6 of soccer, 5 of rugby, and 1 study each of Gaelic football and American football. The methodologic quality assessments and individual study findings are summarized in Supplementary Tables 1 and 2, respectively.



Figure. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram.
Total Distance
Total distance (TD) of player movement during a game or practice session was evaluated as an injury risk factor in 8 articles.9–16 The authors most often divided the data into 1-, 2-, 3-, and 4-week timeframes; however, within these timeframes, TD was reported differently, using either total cumulative distance11,13,14,16 or team-based Z scores.9,12
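Several of these studies standardized TD to the team distribution before binning it. As a minimal sketch of that team-based Z-score step (the squad values below are hypothetical, not data from any included study):

```python
import numpy as np

# Hypothetical 1-week total distances (m) for a 10-player squad.
weekly_td = np.array([21500, 24800, 19200, 30100, 26400,
                      23900, 28700, 17800, 25600, 22300], dtype=float)

# Team-based Z score: each player's weekly load relative to the squad
# mean, in units of the squad's standard deviation for that week.
z_scores = (weekly_td - weekly_td.mean()) / weekly_td.std(ddof=1)

# Studies such as those by Bowen et al then binned these scores, eg,
# "low" = -1.99 to -1.00, "moderate high" = 0.00 to 0.99, "very high" >= 2.00.
for td, z in zip(weekly_td, z_scores):
    print(f"TD = {td:7.0f} m -> Z = {z:+.2f}")
```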
One-Week Total Distance
In evaluating 1-week cumulative TD, Windt et al14 found that a 1-SD increase in TD from the team mean decreased the injury risk in the current (odds ratio [OR] = 0.64; 95% CI = 0.46, 0.90; P < .05) but not the subsequent (OR = 0.86; 95% CI = 0.61, 1.22) week of the season. In contrast, Ehrmann et al11 reported a large η2 effect size (ES) when comparing 1-week TD between injury blocks (average individual values leading up to injury) and season blocks (individual's average data leading up to the injury block; η2 = 0.30, P = .06), with increased TD being associated with increased injury risk. When injury likelihoods across TD groups were compared, the results were mixed. Bowen et al9 reported a reduced noncontact injury risk (relative risk [RR] = 0.31; 95% CI = 0.11, 0.86; P = .02) and overall injury risk (RR = 0.27; 95% CI = 0.12, 0.60; P = .002) for soccer players in the low TD group (Z scores = −1.99 to −1) versus all other TD groups. Additionally, multiseason data for another soccer sample revealed an increased contact injury risk (RR = 2.09; 95% CI = 1.1, 4.0; P = .03) among the “moderate to high” TD group (Z scores = 0–0.99) when compared with all other TD groups.10 Jaspers et al16 found “likely harmful” effects for a “high” 1-week TD (>31 161 m; OR = 1.42; 90% CI = 0.92, 2.21) when using the “low” group as a reference.
Hulin et al12 and Murray et al13 reported conflicting findings in “high” and “very high” TD groups. Murray et al13 noted no significant relationship between 1-week TD and preseason or current-week injuries; however, for subsequent-week injuries, the 1-week high TD (>20 000 m) group was associated with a decreased likelihood of injury (RR = 0.27; 90% CI = 0.17, 0.41; P = .03) versus the “moderate” TD group (10 000–15 000 m), whereas Hulin et al12 found an increased injury risk (RR = 1.9–13.9) for contact injuries in the current week among the “very high” TD (Z score ≥ 2.0) group when compared with all other groups.
Two-Week TD
Five groups10,16–19 examined the effect of 2-week TD on injury risk, with minimal overlap in findings. In dividing the 2-week cumulative TD into 3 groups, Colby et al18 observed that the middle TD group had an increased in-season injury risk (OR = 0.426, P < .05) compared with the low TD group in Australian rules football. Jaspers et al16 reported similar results: “likely harmful” effects were present for the “medium” group (48 050–59 185 m) versus the “low” group (OR = 1.93; 90% CI = 0.93, 4.02). In another Australian rules football study by Colby et al,19 TD was split into 5 categories from “very low” to “very high,” and no differences in injury risk were identified among groups. Similarly, Bowen et al9,10 evaluated soccer players in 2 studies using Z scores to dichotomize TD groups. In their 2017 study9 of 1 season of data, no relationship between TD and injury risk was present. In their 2019 study10 of similar data over 3 seasons, overall injury risk decreased (RR = 0.40, P < .05) in players with a “high” TD Z score compared with all other groups. Lastly, Bacon and Mauger17 did not find any differences in injury risk among “low,” “medium,” and “high” groups for 2-week TD.
Three-Week TD
Most of the same researchers9,10,16,17,19 who evaluated 2-week TD also evaluated 3-week TD, and 2 groups16,19 found differences in injury risk. Colby et al19 reported increased injury risk with “very low” 3-week TD (incidence rate ratio [IRR] = 2.15; 95% CI = 1.15, 4.01) compared with the “moderate” TD group, whereas Jaspers et al16 demonstrated a “likely harmful” effect for the “high” TD group (>86 422 m) compared with the “low” TD group (OR = 1.88; 90% CI = 1.08, 3.26).
Four-Week TD
Four-week cumulative metrics are often referred to as the chronic load, and 9 studies9–14,16,17,20 addressed the effect of 4-week TD on injury risk. Four sets of investigators12,14,16,17 did not find relationships between 4-week TD and injury risk. Of those who did find significant relationships, Ehrmann et al11 observed a large η2 (ES) of 0.30 (P = .09) in 4-week TD between a 4-week season block and a 4-week injury block, indicating that increased TD was associated with injury. As with the shorter TD block lengths, the results of Bowen et al9,10 varied between their 1- and 3-season studies. In their 1-season study,9 4-week “high” TD was associated with the greatest risk for overall injury (RR = 1.64; 95% CI = 1.05, 2.58; P = .03). Interestingly, a 4-week “very high” TD was not related to overall injury risk (RR = 1.29; 95% CI = 0.34, 4.99; P = .71). In their 3-season study,10 “low” 4-week TD increased the risk of noncontact injury (RR = 2.18; 95% CI = 1.0, 4.6; P = .04) versus all other groups. Similarly, Colby et al19 determined that the “very low” TD group (<71 059 m) was associated with increased injury risk (IRR = 2.32; 95% CI = 1.19, 4.52) compared with the “moderate” TD group (78 627–84 879 m). In addition, Murray et al13 showed that, during the Australian rules football in-season, players with a 4-week TD of ≥20 000 m had a lower risk of current-week injury (RR = 0.15; 90% CI = 0.08, 0.29; P = .03) than those with a TD of <5000 m.
High-Speed Running Distance
Fifteen groups of authors9–11,13–19,21–25 reported on relationships between the quantity of high-speed running distance (HSD) and injury risk. The minimum velocity threshold for “high speed” varied from 14.4 to 24 km/h across these studies.
Week-to-Week Changes
Five articles13,14,22,25,26 examined how week-to-week changes in HSD affected injury risk; however, each relied on unique HSD metrics. In 3 studies,14,25,26 a large change in HSD was a possible injury risk factor. Malone et al26 found that, in Gaelic football, at a threshold of ≥14.4 km/h, absolute weekly changes in HSD greater than the reference change of ≤100 m were associated with increased odds of lower extremity injury. For the greatest change group of 351–455 m, the odds of injury were 3.02 times greater (90% CI = 2.03, 5.18; P = .01); in the remaining 2 groups, the ORs were 1.20 (90% CI = 1.05, 3.93; P = .03) and 2.27 (90% CI = 1.93, 4.44; P = .002), respectively, for changes of 101–205 m and 206–350 m. Windt et al14 noted that higher percentages of distance covered at high speed (>5 m/s) increased the injury likelihood for the current (OR = 1.34; 95% CI = 1.03, 1.73; P < .05) and subsequent (OR = 1.07; 95% CI = 1.06, 1.08; P < .05) week, whereas absolute HSD was not associated with injury risk in current (OR = 0.83; 95% CI = 0.58, 1.20) or subsequent (OR = 0.83; 95% CI = 0.57, 1.19) weeks. Specific to the hamstrings injury risk in the subsequent week, Ruddy et al25 described an increased relative risk of injury for players who had an absolute change in distance of >2524 m above 10 km/h (RR = 2.2; 95% CI = 1.0, 4.8).
One-Week HSD
Of the 8 groups9–11,15,16,22,25,26 who reported on 1-week HSD, none used the same speed threshold or dichotomization scheme. Four of the 8 articles reported significant results. Malone et al26 reported an increase in injury risk (OR = 5.02; 90% CI = 1.33, 6.19; P = .006) for their highest HSD group (750–1025 m) and a decrease in injury risk (OR = 0.12; 90% CI = 0.08, 0.94; P = .03) for the second-highest HSD group (701–750 m), both compared with the lowest HSD group (≤674 m). Using logistic regression, Duhig et al22 found that the largest effect of HSD on hamstrings injury risk was in the week before injury (OR = 6.44; 95% CI = 2.99, 14.41; P < .001), with increased HSD being associated with injury. Similarly, Ruddy et al25 demonstrated an increased relative risk of hamstrings injury with a 1-week HSD of 13 312 m (RR = 2.4; 95% CI = 1.1, 5.3). Jaspers et al16 recorded a “likely harmful” effect in the “medium” group (634–1028 m; OR = 1.56; 90% CI = 0.99, 2.46) versus the “low” group. Finally, Bowen et al9 showed an increase in noncontact (RR = 1.73, 95% CI not reported; P < .05) and overall (RR = 1.73; 95% CI = 1.06, 2.84; P < .05) injury risk for their 1-week “moderate high” HSD group (Z scores = 0.00–0.99) and a decrease in overall injury risk (RR = 0.38, P < .05) for their 1-week “low” HSD group (Z scores = −1.99 to −1.00). In contrast, Bowen et al,10 reviewing 3 seasons of data, found no relationships between 1-week HSD and injury risk.
Two- and 3-Week HSD
Six sets of researchers9,10,16,17,22,25 described 2- and 3-week HSD. Of these, the only significant findings were from Duhig et al,22 who determined that relative HSD summed over weeks 1 and 2 (OR = 3.06; 95% CI = 2.03, 4.75; P < .001) and over weeks 1, 2, and 3 (OR = 2.22; 95% CI = 1.66, 3.04; P < .001) before injury was greater in injured players. Of note, Bacon and Mauger17 identified a nonsignificant association between 2-week “high” HSD and a decrease in injury risk compared with “normal” HSD (OR = 0.58; 95% CI = 0.33, 1.02; P = .06).
Four-Week HSD
The largest number of HSD results was for the 4-week timeframe, likely because this window is commonly labeled the chronic workload. Among the 9 articles9–11,14–17,22,25 that addressed this area, no distinct trends were present. Of significance, Bowen et al9 indicated that a 4-week “moderate-high” HSD (Z scores = 0–0.99) increased the relative risk of noncontact (RR = 2.14; 95% CI = 1.31, 3.50; P < .05) and overall (RR = 1.56, 95% CI not reported; P < .05) injuries. Moreover, with respect to hamstrings injury, Duhig et al22 observed a greater relative HSD in the 4 weeks leading up to a hamstrings injury (OR = 1.96; 95% CI = 1.54, 2.51; P < .001).
Full Season
Gabbett and Ullah24 examined the distance covered at various speeds during National Rugby League practices and its effect on transient, time-loss, and match-loss injury risk. Covering >542 m at “very low” intensity or >2342 m at “low” intensity was associated with a reduced risk of time-loss injury (RR = 0.4; 95% CI = 0.2, 0.9; P < .05, and RR = 0.5; 95% CI = 0.2, 0.9; P < .05, respectively). Alternatively, covering >9 m at “very high” intensity increased the RR for transient (non–time-loss) injury 2.7 times (95% CI = 1.2, 6.5; P < .05).
Sprint-Running Distance
A handful of author groups10,11,15,18,25,26 specifically evaluated sprint-running distance (SRD) by further classifying high-speed running. Due to methodologic differences among articles, sprint running was sometimes grouped with high-speed running for analysis and other times analyzed separately. The definition of sprint-running speed differed among studies: speeds varied from 19.8 km/h11,26 to 25.3 km/h,10 and sprinting was also defined as speed above 75% of an athlete's maximum.18,20
Week-to-Week Changes
Three sets of investigators18,25,26 explored the effect of week-to-week changes in SRD on injury and found conflicting results. Colby et al18 did not report any significant findings, whereas Malone et al26 revealed that SRD changes between 75 and 105 m (highest group) in 1 week carried the greatest increase in injury risk (OR = 6.12; 90% CI = 4.66, 8.29; P = .001) compared with a change of ≤50 m (lowest group). In addition, the middle groupings (51–64 and 65–75 m) were also associated with increased injury risk (OR = 3.12; 90% CI = 2.86, 6.13; P = .03, and OR = 4.12; 90% CI = 3.86, 7.84; P = .002, respectively). According to Ruddy et al,25 both absolute (>218 m; RR = 3.3; 95% CI = 1.5, 7.2) and relative (>2.00; RR = 3.6; 95% CI = 1.7, 7.9) week-to-week changes in distance covered above 24 km/h had the largest significant influence on the risk of hamstrings injury in the subsequent week.
One-Week SRD
Malone et al26 indicated that, at a 1-week SRD of 201 to 350 m, injury risk was decreased compared with the lowest distance group of ≤165 m (OR = 0.54; 90% CI = 0.41, 0.85; P = .005); however, at an SRD of 350–525 m, injury risk was increased versus the same reference group (OR = 3.44; 90% CI = 2.98, 4.84; P = .004). Ruddy et al25 showed an increased injury risk for an absolute 1-week SRD >653 m (RR = 3.4; 95% CI = 1.6, 7.2). In their 3-season study, Bowen et al10 found no differences in injury risk across SRD groups.
Two-, Three-, and Four-Week SRD
The authors10,19 evaluating 2-, 3-, and 4-week SRD described no differences in injury risk. In contrast, Colby et al18 reported a decrease in injury risk for the “very high” group for distance covered at 75% of the athlete's maximum sprint speed versus the “moderate” group at the 2-week (IRR = 0.48; 95% CI = 0.24, 0.97) and 4-week (IRR = 0.45; 95% CI = 0.25, 0.84) timeframes. Ruddy et al25 reported increases in hamstrings injury risk with 3-week (>1495 m; RR = 2.5; 95% CI = 1.2, 5.5) and 4-week (>197 m; RR = 2.5; 95% CI = 1.1, 5.7) absolute distances covered at >24 km/h. Lastly, Ehrmann et al11 calculated a large ES of 0.35 (P = .07) between the 4-week injury and season blocks, indicating that increased SRD during the injury block was associated with an increased risk of injury.
Maximal-Velocity Exposure
Malone et al23 examined the injury risk associated with maximal-velocity sprinting metrics in Gaelic football players. Players who ran at more than 95% of maximal velocity during training and match play had a lower risk of injury in the subsequent week (OR = 0.12; 95% CI = 0.01, 0.92; P = .001) than those who did not exceed 85% of their maximal sprinting velocity.
In addition to univariate comparisons, Malone et al23 addressed the interaction of maximal-velocity exposure with low and high chronic training loads. Among athletes with a higher chronic training load (≥4750 AU), a protective effect was evident from increased exposure to maximal-velocity events (10–15 exposures) compared with fewer (≤5) exposures (OR = 0.22; 95% CI = 0.10, 1.22; P = .03). Players with a lower chronic training load (≤4650 AU) were at increased injury risk (OR = 3.38; 95% CI = 1.60, 6.75; P = .001) when exposed to ≥15 maximal-velocity events versus fewer exposures (≤5). In addition, athletes with a higher chronic training load (≥4750 AU) were somewhat likely to be at reduced risk of injury when they covered weekly maximal-velocity distances of 90 to 120 m compared with the reference group of <60 m (OR = 0.23; 95% CI = 0.10, 1.33; P = .06). Conversely, players with low chronic training loads (≤4650 AU) who covered the same distance of 90 to 120 m at maximal velocity were at higher risk of injury versus the reference group of <60 m (OR = 1.72; 95% CI = 1.05, 2.47; P = .02).
Colby et al21 assessed individual players' exposure to >85% maximal speed during both 4-week and 8-week timeframes. During the latter, “very low” (0–8), “high” (13–15), and “very high” (>15) exposures were associated with increased injury risk (“very low”: IRR = 5.76; 95% CI = 1.69, 19.66; “high”: IRR = 3.03; 95% CI = 1.01, 9.10; “very high”: IRR = 4.70; 95% CI = 1.49, 14.87). Analysis of the 4-week data did not reveal any significant findings.
Accelerations and Decelerations
Taking acceleration distance into consideration, Gabbett and Ullah24 indicated that the distances covered in mild (0.55–1.11 m/s2), moderate (1.12–2.78 m/s2), and maximal (≥2.79 m/s2) accelerations were related to non–time-loss injuries: the greater the distance covered while accelerating, the lower the risk of non–time-loss injury (RR = 0.2; 95% CI = 0.1, 0.4 for mild; RR = 0.3; 95% CI = 0.1, 0.6 for moderate; and RR = 0.4; 95% CI = 0.2, 0.8 for maximal accelerations).
Bowen et al9,10 evaluated 1-, 2-, 3-, and 4-week timeframes by dividing the number of accelerations into 5 groups based on Z scores. Several relationships were identified in 1 season of soccer data.9 At the 1-week mark, the overall relative risk of injury (RR = 0.35, P < .05) was decreased in athletes who produced a “low” number of accelerations (Z score = −1.99 to −1.00). A “high” number of accelerations (Z score = 1.00–1.99) increased the overall injury risk in weeks 1 (RR = 1.83, P < .05) and 4 (RR = 1.66, P < .05), whereas for a “very high” number of accelerations (Z score >2.00), the relative risk of overall injury increased across 1 (RR = 3.06, P < .05), 2 (RR = 3.19, P < .05), and 3 (RR = 3.84, P < .05) weeks versus all other groups. In addition, the noncontact injury risk was elevated for a “very high” number of accelerations in weeks 3 (RR = 5.11; 95% CI = 1.75, 14.96; P = .003) and 4 (RR = 4.25, 95% CI not reported; P < .05). Conversely, a “low” number of accelerations over 3 weeks (744–2861) reduced the noncontact (RR = 0.21; 95% CI = 0.05, 0.87; P = .03) and overall (RR = 0.31; 95% CI = 0.13, 0.76; P = .01) injury risks. Analysis of 3 seasons of data10 revealed no relationships between the number of accelerations and injury across any of the weekly timeframes. The same researchers10 examined deceleration counts in the same manner. The only significant result was an increase in contact injury risk for a 1-week “moderate-low” (Z score = −0.99 to 0) number of decelerations (RR = 2.04; 95% CI = 1.0, 4.0; P = .04).
Jaspers et al16 assessed the numbers of accelerations and decelerations in soccer athletes. “Likely harmful effects” existed for “high” 2-week decelerations (>1462, OR = 1.49; 90% CI = 0.92, 2.42), “high” 3-week decelerations (>2140, OR = 1.68; 90% CI = 1.08, 2.63), and “high” 4-week decelerations (>2813, OR = 1.73; 90% CI = 1.00, 2.99) when compared with the “low” deceleration group (<2227).
Acute : Chronic Workload Ratio
Thirteen groups* explored acute : chronic workload ratio (ACWR) metrics in their analyses of injury risk factors. The ACWR divides the acute workload (typically 1 week) by the chronic workload (typically 3 to 6 weeks). The theory1,30 is that chronic training loads are analogous to a state of fitness, whereas acute training loads are analogous to a state of fatigue. Currently, 2 methods are available for calculating the ACWR. The first is a rolling average ACWR, which is simply the rolling average of the acute phase divided by the rolling average of the chronic phase; the workload metrics from all days within a phase are equally weighted in this model. The second method, proposed by Williams et al,31 is an exponentially weighted moving average (EWMA) ACWR, which includes a time-decay constant that weights recent workload values more heavily than values from earlier in the chronic timeframe. In the next paragraphs, we highlight the results of studies with significant findings.
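Both calculations can be made concrete in a few lines. The sketch below assumes 7-day acute and 28-day chronic windows and the λ = 2/(N + 1) decay constant described by Williams et al31; the daily load values are hypothetical.

```python
import numpy as np

def rolling_acwr(daily_load, acute=7, chronic=28):
    """Rolling average ACWR: the mean of the last `acute` days divided by
    the mean of the last `chronic` days, with every day weighted equally."""
    return np.mean(daily_load[-acute:]) / np.mean(daily_load[-chronic:])

def ewma(daily_load, n_days):
    """EWMA with lambda = 2 / (N + 1): each day's value blends that day's
    load with the previous EWMA, so recent days count more than older ones."""
    lam = 2 / (n_days + 1)
    value = daily_load[0]
    for load in daily_load[1:]:
        value = load * lam + (1 - lam) * value
    return value

def ewma_acwr(daily_load, acute=7, chronic=28):
    recent = daily_load[-chronic:]
    return ewma(recent, acute) / ewma(recent, chronic)

# Hypothetical 28 days of total distance (m) with a spike in the final week.
rng = np.random.default_rng(1)
loads = np.concatenate([rng.normal(5000, 500, 21), rng.normal(7500, 500, 7)])

print(f"rolling ACWR = {rolling_acwr(loads):.2f}")  # >1.0: acute spike
print(f"EWMA ACWR    = {ewma_acwr(loads):.2f}")     # weights the spike more
```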
Malone et al26 used a 3- to 21-day ratio when calculating a rolling average ACWR for HSD and found that players with ratios above 0.85 were at increased risk of injury. When the ACWR was between 0.86 and 1.00, the OR of lower extremity injury was 1.20 (90% CI = 1.10, 2.03; P = .21); between 1.00 and 1.25, the OR was 2.27 (90% CI = 2.13, 3.04; P = .001); and for an ACWR >1.25, the OR was 3-fold greater (3.02; 90% CI = 2.53, 4.98; P = .001). When the same analysis was conducted on SRD, the OR decreased for ACWRs between 0.71 and 0.85 (0.85; 90% CI = 0.33, 0.95; P = .04). However, for ACWRs between 0.86 and 1.25 and >1.35, the OR increased (OR = 1.14; 90% CI = 1.11, 2.14; P = .02, and OR = 5.00; 90% CI = 3.01, 7.38; P = .02, respectively).
Using absolute TD to calculate the 1 : 4-week rolling average ACWRs and interpreting them through a lens of magnitude-based inferences, Hulin et al12 determined that, in the current week, a “very high” ACWR (≥2.11) was associated with an injury risk 6.9 times greater than a “very low” ACWR of ≤0.30 (RR = 6.9; 90% CI = 5.2, 8.6), 3.4 times greater than a “low” ACWR of 0.31 to 0.66 (RR = 3.4; 90% CI = 1.4, 5.4), 2.3 times greater than a “moderate” ACWR of 1.03 to 1.38 (RR = 2.3; 90% CI = −2.3, 5.7), and 2 times that of a “high” ACWR of 1.75 to 2.10 (RR = 2.0; 90% CI = −15.2, 37.2). In addition, a “very high” 2-week ACWR (≥1.88) was associated with a risk of injury that was 2.2 times greater than a “low” ACWR of 0.46 to 0.74 (RR = 2.2; 90% CI = −2.7, 7.1), 1.9 times greater than a “moderate-low” ACWR of 0.75 to 1.01 (RR = 1.9; 90% CI = −3.6, 7.4), and 2.4 times greater than a “moderate” ACWR of 1.02 to 1.30 (RR = 2.4; 90% CI = −0.6, 5.4). For the subsequent week, a “very high” ACWR had a 10-fold increase in injury risk compared with a “very low” ratio (RR = 9.8; 90% CI = 6.2, 13.4).
Murray et al13 used the ACWRs for TD, HSD, and player load (operationally defined in Supplementary Table 2) to examine injury risk. For injuries occurring in the current week, players with an ACWR of >2.0 for TD were 5 to 8 times more likely to sustain an injury than players with an ACWR of <0.49 (RR = 7.98; 90% CI = 5.86, 10.88; P = .015) or between 0.5 and 0.99 (RR = 5.04; 90% CI = 4.16, 6.11; P = .012). For HSD, an ACWR of >2.0 was associated with a 6 to 12 times greater injury risk than ACWRs of <0.49 (RR = 11.62; 90% CI = 10.04, 13.45; P = .006), 0.50 to 0.99 (RR = 9.63; 90% CI = 9.21, 10.07; P = .002), and 1.0 to 1.49 (RR = 6.54; 90% CI = 6.19, 6.92; P = .003). Similarly, athletes with an ACWR of >2.0 for player load had a greater risk of injury than those with an ACWR of 0.50 to 0.99 (RR = 6.27; 90% CI = 5.62, 6.00; P = .006) and 1.0 to 1.49 (RR = 7.72; 90% CI = 7.57, 7.88; P = .001). For the player load comparison with an ACWR of 0.50 to 0.99, we recognize that the point estimate lies outside the reported CI; we contacted the authors about this discrepancy, but they did not respond to our query.
For injuries occurring in the subsequent week, Murray et al13 indicated that, during the preseason, athletes with an ACWR of >2.0 had an increased likelihood of injury versus players with an ACWR of 1.0 to 1.49 for TD (RR = 4.87; 90% CI = 2.33, 10.21; P = .05) and player load (RR = 12.46; 90% CI = 8.35, 18.59; P = .02). Similarly, an ACWR of >2.0 for HSD compared with an ACWR of 0.50 to 0.99 was associated with an increased likelihood of injury (RR = 6.46; 90% CI = 4.63, 9.02; P = .02). During the in-season period, findings were similar. Specifically, when the ACWR exceeded 2.0, compared with an ACWR between 1.0 and 1.49, the likelihood of injury increased 4-fold to 7-fold for TD (RR = 5.49; 90% CI = 4.19, 7.20; P = .02), HSD (RR = 4.36; 90% CI = 3.50, 5.43; P = .02), and player load (RR = 5.80; 90% CI = 4.62, 7.27; P = .01).
Over a single season, Bowen et al9 evaluated TD, HSD, number of accelerations, and total load (operationally defined in Supplementary Table 2) in the context of ACWR using rolling averages and a 1 : 4-week ratio. Relative risk was determined by comparing injury risk with all other ACWR categories. The RR of contact injury was increased to 4.98 (P < .05) for a “very high” TD ACWR (≥2.0). Conversely, a decrease in overall injury risk (RR = 0.47, P < .05) was present for athletes with a “low” HSD ACWR (Z score = −1.99 to −1.00). For accelerations, an increased RR of contact injury of 4.98 occurred at a “very high” ACWR (≥2.0; P < .05). Lastly, for the total load ACWR, the RR of contact injury was 1.92 among players with a “moderate-low” ACWR (P < .05), and the RR of noncontact injury was 1.87 among players with a “moderate-high” ACWR (P < .05).
Another study by Bowen et al10 spanned 3 soccer seasons and demonstrated significant findings for TD, low-intensity distance, and accelerations in relation to injury when using a rolling average ACWR method. For TD, a “very high” ACWR (≥2.0) increased the RR of noncontact and overall injury (3.67, P < .05, and 2.40, P < .05, respectively), a “moderate to high” ACWR (0.00–0.99) increased the RR of contact injury (2.03, P < .05), and a “low” ACWR (Z score = −1.99 to −1.00) decreased the overall injury risk (RR = 0.19, P < .05). For low-intensity distance, a “moderate to high” ACWR (0.00–0.99) was associated with increased RRs for both contact and overall injuries (2.60, P < .05, and 1.91, P < .05, respectively), and a “very high” ACWR (≥2.0) had increased RRs for both noncontact and overall injuries (3.93, P < .05, and 2.56, P < .05, respectively). For the numbers of accelerations and decelerations, both the “moderate to high” ACWR (0.00–0.99) and “very high” ACWR (≥2.0) categories were associated with increased injury risk. For a “moderate to high” ACWR, the RR of overall injury increased to 1.57 (P < .05) for accelerations, and the RR of contact injury increased to 1.99 (P < .05) for decelerations. In the “very high” ACWR category, noncontact (RR = 3.86, P < .05) and overall (RR = 2.52, P < .05) injury risk increased for accelerations, and noncontact and overall injury risk also increased for decelerations (RR = 3.73, P < .05, and RR = 2.44, P < .05, respectively).
Jaspers et al16 investigated the average 1 : 4-week ACWR for the workload metrics of TD, HSD, accelerations, and decelerations. A “likely harmful” effect for a “high” ACWR was present for HSD (>1.18, OR = 1.71; 90% CI = 0.90, 3.26), a “very likely beneficial” effect occurred for a “medium” ACWR for decelerations (0.86–1.12, OR = 0.38; 90% CI = 0.20, 0.72), and “likely beneficial effects” existed for a “medium” ACWR for accelerations (0.87–1.12, OR = 0.49; 90% CI = 0.24, 1.02).
Acute : Chronic Workload Ratio With High or Low Chronic Workloads
Bowen et al9 observed that the “low” TD ACWR group, when combined with a 4-week low chronic workload, had an associated decrease in overall injury risk (RR = 0.28, P < .05). Similar results were noted in the “low” acceleration group (RR = 0.29, P < .05). Regarding HSD, the noncontact injury risk increased in the “high” group (RR = 2.55, P < .05). Evaluating the ACWR with high chronic loads demonstrated an increased RR of injury in the “moderate-high” HSD group (RR = 2.09; 95% CI = 1.06, 4.12; P = .02). In this study,9 each group was compared with all other groups.
Hulin et al27 combined the TD ACWR with short and long between-match recovery times and showed that a “high” ACWR (1.23–1.61) during short between-match recovery times was linked with a risk of match injury 2.88 times greater than a “moderate-high” ACWR combined with short between-match recovery times (RR = 2.88; 90% CI = 0.97, 8.66). The risk of match injury with a “very high” ACWR (≥1.62) combined with short recovery between matches was (1) 5.80 times greater (90% CI = 1.75, 9.91) than with a “moderate-high” ACWR and (2) 3.41 times greater (90% CI = 1.17, 9.91) than with a “low” ACWR. With respect to long between-match recovery times, a “very high” ACWR (≥1.50) had a risk of match injury 4.46 times greater (90% CI = 0.91, 21.91) than a “moderate-high” ACWR. Recovery time between matches did not independently affect injury risk in the subsequent match.
Hulin et al12 examined TD ACWR alongside high and low chronic workloads. Of note, a high chronic workload (>16 095 m) combined with a “very high” 2-week average ACWR (≥1.54) was associated with a greater risk of injury than a “high” chronic workload combined with the following workload ratios: “low” (0.67–0.84, RR = 3.0), “moderate-low” (0.85–1.02, RR = 3.8), “moderate” (1.02–1.18, RR = 4.6), “moderate-high” (1.19–1.35, RR = 4.0), and “high” (1.36–1.53, RR = 2.4). Additionally, a low chronic workload (<16 095 m) combined with a “very high” 2-week average ACWR (≥2.17) was associated with greater injury risk than a low chronic workload combined with the following workload ratios: “low” (0.31–0.66, RR = 2.3), “moderate-low” (0.67–1.02, RR = 1.8), “moderate” (1.03–1.37, RR = 2.0), and “high” (1.75–2.16, RR = 3.1).
Additionally, Bowen et al10 explored both “high” and “low” chronic loads in relation to the ACWR. “High” and “low” chronic loads were defined using the median of the 4-week total for the given metric. For combinations with “high” chronic loads, only overall injuries were reported, and the only significant finding was an increase in injury risk with a “moderate-to-high” low-intensity distance ACWR (0.96–1.18; RR = 2.08, P < .05). When the ACWR was combined with low chronic load, significant findings included an increase in noncontact injuries (RR = 4.50, P < .05) for a “very high” TD ACWR (2.14); increases in noncontact and overall injuries (RR = 5.39 and 2.76, respectively; P values < .05) for a “very high” low-intensity distance ACWR (2.15); increases in noncontact and overall injuries (RR = 5.90, P < .001, and RR = 3.18, P < .05, respectively) for a “very high” acceleration ACWR (2.30); and increases in noncontact and overall injuries (RR = 6.58, P < .001, and RR = 3.47, P < .05, respectively) for a “very high” deceleration ACWR (2.32).
Rolling Average ACWR and EWMA ACWR
Two groups28,29 evaluated injury risk using different methods to calculate ACWR. Murray et al29 used 7 : 28-day rolling averages and found that, in the preseason period, players with ACWRs of >2.0 for TD were at increased risk of injury compared with those who had an ACWR of 1.0 to 1.49 (RR = 8.41; 95% CI = 1.09, 64.93; P = .048). No other relationships were observed between the rolling average ACWR and injury risk during the preseason period. According to the EWMA ACWR model with a 7 : 28-day timeframe in the preseason, several relationships were present between an ACWR of >2.0 and an increased injury risk versus lower ACWR ranges. Specifically, compared with an ACWR of 1.0 to 1.49, the risk of injury was increased 6-fold to 9-fold for TD (RR = 8.74; 95% CI = 7.35, 10.39; P = .002), moderate-speed distance (RR = 6.03; 95% CI = 2.21, 16.47; P = .03), and player load (RR = 9.53; 95% CI = 5.3, 17.11; P = .01).29
During the in-season period, a rolling average ACWR of >2.0 was associated with an increased risk of injury versus a lower ACWR for a number of training metrics.29 When compared with an ACWR of 1.0 to 1.49, an ACWR of >2.0 was linked with an increase in injury risk for TD (RR = 6.52; 95% CI = 4.83, 8.80; P = .008), HSD (RR = 4.66; 95% CI = 4.12, 5.27; P = .004), and player load (RR = 5.87; 95% CI = 4.12, 8.36; P = .01). Via the EWMA calculation, athletes with an ACWR of >2.0 experienced an injury risk 13 to 21 times greater than those who maintained an ACWR of 1.0 to 1.49 for TD (RR = 21.28; 95% CI = 20.02, 22.62; P = .001), moderate-speed distance (RR = 18.19; 95% CI = 17.17, 19.27; P = .001), and player load (RR = 13.43; 95% CI = 12.75, 14.14; P = .001).29
Sampson et al28 compared a rolling average ACWR with an EWMA ACWR in American football players. With an EWMA ACWR and a 3-day injury lag (an injury reported within 3 days of the evaluated ratio was associated with that ratio), the risk of injury with a high ACWR (>1.30) was “very likely” increased relative to both a moderate ACWR (0.8–1.30; RR = 3.33; 90% CI = 1.35, 8.19) and a low ACWR (<0.8; RR = 3.05; 90% CI = 1.38, 6.76). When the authors compared the EWMA ACWR with the rolling average ACWR, the former had an R2 = 0.54 (3-day injury lag) in modeling using a 7 : 21-day comparison. Across the other comparisons (ie, 7 : 14-day, 7 : 21-day, and 7 : 28-day) and the rolling average ACWRs, the next highest R2 was for an EWMA ACWR at R2 = 0.19.
Multivariate Models
Bacon and Mauger17 used simple linear regression to predict the incidence of overuse injuries based on TD and HSD assignment to “low,” “normal,” or “high” groups. A significant regression equation contained only the TD variable (F1,39 = 6.482, P = .02, R2 = 0.14). Injury incidence per 1000 hours decreased by 5.835 with each upward move from one TD loading group to the next, meaning that a higher TD loading group was associated with a lower risk of overuse injury.
Malone et al26 also assessed HSD in combination with chronic training loads. Players who had higher 21-day chronic training loads (≥2584 AU) were at reduced risk of injury when they covered a 1-week HSD of 701 to 750 m compared with the reference group of <674 m (OR = 0.65; 90% CI = 0.25, 0.89; P = .024). Conversely, athletes who had low chronic training loads (≤2584 AU) and covered the same distance of 701 to 750 m were at greater risk of injury versus the reference group of <674 m (OR = 3.12; 90% CI = 2.99, 4.54; P = .04). Similar trends were observed for SRD: with higher 21-day chronic training loads, players could pursue greater high-speed and sprint-running distances with reduced injury risk.26
Colby et al20 used a mixed-model generalized estimating equation to analyze the relationship between weekly data and injury in the subsequent week. In their multivariate model with the highest predictive accuracy, a “low” chronic distance coupled with a “very high” distance ACWR was associated with an increased risk (adjusted incidence rate ratio [IRR] = 2.60; 95% CI = 1.07, 6.34) compared with an above-average chronic load and a moderate ACWR. In addition to the workload variables, playing experience, heavy nonfootball activity, and a history of lower limb pain retained significance in the multivariate model, as in the univariate models (adjusted IRR = 2.02–2.25; 95% CI = 1.02, 4.95). Colby et al20 noted that the predictive accuracy of the multivariate model (area under the curve [AUC] = 0.70; 95% CI = 0.64, 0.75) was better (P < .001) than that of all univariate models (AUC = 0.52–0.60) when tested on in-sample data. In addition, cross-fold validation results of simulated data indicated a very similar fit (k = 10: univariate root mean square error = 0.16 ± 0.02 versus multivariate root mean square error = 0.16 ± 0.02) on out-of-sample data.
In another study, Colby et al,19 using a multivariate model across the full in-season phase of Australian rules football, demonstrated that a “very low” (<108 km) late preseason (January to mid-February) distance placed players at greater injury risk compared with moderate (125–164 km) loads (OR = 5.6; 95% CI = 1.4, 22.8; P = .02). Similarly, “low” (76–88 km) precompetition (mid-February to mid-March) distances compared with moderate (89–112 km) distances increased the injury risk (OR = 6.0; 95% CI = 1.6, 23.3; P = .01). In addition, a “very high” distance covered (>170 km) during early preseason (November and December) was associated with greater in-season injury risk (OR = 3.2; 95% CI = 1.3, 8.5; P = .02) versus a moderate distance (95–143 km).
To advance the multivariate modeling, Colby et al21 published another paper in 2018 that addressed high-risk workload scenarios. Of the proposed high-risk scenarios, exposure to maximum speed (the number of times a player exceeded 85% of maximum speed) had the most interesting findings. The authors divided the scenario into 8- and 4-week timeframe subcategories, labeling them “most significant” and “most practical,” respectively. These designations were selected because, although 8 weeks had the best predictive features, in real-life application, 8 weeks may be too long to collect and apply the data, depending on the setting. The exposure to maximum velocity was divided into 5 categories (“very low” to “very high”), with the “moderate” category serving as the reference group. For both the 8- and 4-week timeframes, the “low,” “moderate,” and “very high” categories had good specificity (ability to identify players who were not injured) but low sensitivity (inability to identify players who were injured; 8 weeks: “low”: sensitivity = 0.13, specificity = 0.83; “moderate”: sensitivity = 0.05, specificity = 0.80; “very high”: sensitivity = 0.18, specificity = 0.86; 4 weeks: “low”: sensitivity = 0.11, specificity = 0.83; “moderate”: sensitivity = 0.11, specificity = 0.84; “very high”: sensitivity = 0.11, specificity = 0.90).
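For reference, sensitivity and specificity come directly from the 2 × 2 table of flagged versus injured players. The counts below are hypothetical but yield values in the range reported by Colby et al21.

```python
# Hypothetical counts for players flagged by a "very high" exposure category.
true_pos, false_neg = 4, 18    # injured players flagged vs missed
true_neg, false_pos = 120, 14  # uninjured players passed vs wrongly flagged

sensitivity = true_pos / (true_pos + false_neg)  # share of injured identified
specificity = true_neg / (true_neg + false_pos)  # share of uninjured identified

# Prints approximately 0.18 and 0.90: good specificity, low sensitivity.
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```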
Windt et al14 created 2 multivariate models to quantify the effect of preseason participation on injury risk while controlling for training-load variables. The first model evaluated the likelihood of injury in the current week; preseason participation was associated with reduced odds of injury, but the relationship was not significant (OR = 0.85; 95% CI = 0.70, 1.02). Similarly, when preseason participation and acute distance were controlled, a greater percentage of distance run at high speeds appeared to be associated with an increased injury risk (OR = 1.27; 95% CI = 0.99, 1.63). Finally, as in the univariate models, greater acute distance was associated with a reduced likelihood of injury (OR = 0.56; 95% CI = 0.3, 0.87). The second model evaluated the likelihood of injury in the subsequent week using preseason participation, acute distance, and acute percentage of distance run at high speeds. In this model, when distance and percentage of distance at high speed were controlled, increased preseason participation (at least 10 full preseason sessions completed) was associated with a reduced likelihood of injury (OR = 0.83; 95% CI = 0.70, 0.99), whereas neither acute distance nor percentage of distance run at high speeds was significantly associated with injury risk.
A few researchers furthered attempts at predictive modeling by using analytic methods such as random forests32,33 and support vector machines33 along with generalized estimating equation (GEE) modeling. Thornton et al32 used GEE and random forest models to determine which training load variables were most important in understanding injury among the positional groups of a rugby team (hit-up forwards, adjustables, wide-running forwards, and outside backs). Specifically, for the adjustables group (in order of importance), the total load (TL) variables recognized were 7-day TD, 7-day high metabolic power distance, high metabolic power distance ratio, 28-day HSD, and 21-day high metabolic power distance, for which the quasilikelihood under the independence model criterion (QIC) was 566.5 and the statistical significance of these measures ranged from P = .001 to .091. For the hit-up forwards group, the identified variables were session (s)RPE-TL ratio, 14-day TD, 14-day high metabolic power distance, 7-day sRPE-TL, and 28-day high metabolic power distance, for which the QIC was 441.7 and the significance of these measures ranged from P = .006 to .138. The outside-back models indicated that 21- and 28-day sRPE-TL, 7-day HSD, high metabolic power distance ratio, and TD ratio were most associated with injury risk, with a QIC of 406.6 and significance ranging from P = .092 to .225. For the wide-running forwards group, sRPE-TL ratio, HSD ratio, 14-day high metabolic power distance, 14-day TD, and 7-day sRPE-TL were the most likely contributors to injury risk, with a QIC of 410.5 and significance ranging from P = .068 to .830.

The random forest models of Thornton et al32 indicated that, for the adjustables group, the relative importance of the TL variables was similar; the mean ± SD area under the receiver operating characteristic (ROC) curve of these models was 0.74 ± 0.24. For the hit-up forwards group, 7-day sRPE-TL and 14-day high metabolic power distance had the greatest importance for injury, with a mean model ROC curve area of 0.65 ± 0.06. Variables recognized for the wide-running forwards group varied substantially among athletes (mean model ROC curve area = 0.64 ± 0.05). Similarly, for the outside-back group, the importance of the TL variables varied between players, with a large discrepancy for TD ratio and 28-day sRPE-TL; these models had a mean ROC curve area of 0.64 ± 0.04.
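As a generic illustration of the random forest portion of such an analysis (this is not the authors' actual pipeline; the features, data, and injury mechanism below are hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400  # hypothetical player-weeks

# Hypothetical weekly workload features of the kind ranked by Thornton et al.
X = np.column_stack([
    rng.normal(25000, 4000, n),  # 7-day total distance (m)
    rng.normal(1200, 300, n),    # 7-day high-speed distance (m)
    rng.normal(1.0, 0.3, n),     # total-distance ACWR
    rng.normal(2500, 400, n),    # 7-day sRPE training load (AU)
])
# Hypothetical injury labels, loosely tied to the ACWR column.
y = (rng.random(n) < 0.08 + 0.15 * (X[:, 2] > 1.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

# Out-of-sample discrimination and relative variable importances.
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"ROC AUC = {auc:.2f}")
print("feature importances:", model.feature_importances_.round(2))
```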
DISCUSSION
Our aim in this review was to evaluate the current evidence for using GPS–derived metrics for injury prediction in team-based field-sport athletes. Many distinct workload metrics were associated with an increased injury risk in individual studies performed in particular sport circumstances; however, the body of evidence was inconclusive as to whether any specific metrics could consistently assess the injury risk across multiple team-based field sports. Areas of concern that led to this conclusion included disparate methods of injury tracking and statistical analysis, as well as overly ambitious claims about a relatively new technology studied in small samples of athletes. The heterogeneity in study methods not only precluded meta-analysis in our review but also limits the generalizability of many of the reported results to teams and athletes not involved in the original studies.
Injury Tracking
Possibly the simplest issue to resolve is the variety of injury definitions used. A mix of acute, overuse, chronic, upper body, lower body, time-loss, and match-loss injury definitions was used, with minimal uniformity across studies. Given that GPS–derived metrics monitor total workload over time, and generally the workload applied to the lower extremity, these metrics appear best suited to forecasting lower extremity overuse injuries. One exception is monitoring changes in HSD and their effects on acute hamstrings injury, as this injury has previously been linked to the eccentric loads that occur during sprinting.34–36
Regarding injury variables, a binary (injured or not injured) classification based on a time-loss metric is easy to understand and collect uniformly. However, given the breadth of the GPS metrics and the nuances of injury definitions in sport, especially overuse injury, these data may be best captured on a daily ranking scale of player availability based on ongoing musculoskeletal concerns, even if those concerns do not rise to the level of an injury report or time-loss injury. The current dichotomous reporting of injury status is convenient and easy to record, yet it misses the nuances of contemporary sports medicine, in which athletes are often partial participants in practices due to their injury status. Our recommendation in this area is 2-fold. First, within the realm of practical sports science, in which data are collected daily by sports medicine and sports performance staff and then analyzed retrospectively, we advise the use of the 4-tier injury and participation classification scheme described by Ahmun et al37 in response to the recent consensus statement on injury surveillance in cricket.38 The 4 tiers are (1) fully available for training and matches, with no injury or illness; (2) fully available for training and matches but with an injury or illness; (3) available for selection in a major match but with modified activity due to injury or illness; and (4) unavailable for selection in a major match due to injury or illness. These categories mirror injury reports commonly used in the clinical practice of sports medicine within organized sport and allow high-level tracking of both transient and time-loss injuries. Second, from a research and modeling perspective, knowing the injury mechanism, cause, and tissue type is important, as it may lead to an understanding of how to best model different injury types using the collected data. To accomplish this, studies must be conducted prospectively and may require more daily engagement of the researchers to avoid burdening the sports medicine staff.
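A minimal sketch of how such a daily 4-tier status could be recorded alongside GPS workload data (the field names below are hypothetical):

```python
from enum import IntEnum

class Availability(IntEnum):
    """Daily 4-tier injury and participation status (after Ahmun et al37)."""
    FULL = 1             # fully available, no injury or illness
    FULL_WITH_ISSUE = 2  # fully available but with an injury or illness
    MODIFIED = 3         # selectable, but activity modified by injury or illness
    UNAVAILABLE = 4      # unavailable for selection due to injury or illness

# Hypothetical daily record pairing GPS workload with availability status.
record = {"player_id": 7, "date": "2019-03-14",
          "total_distance_m": 8420, "status": Availability.MODIFIED}
print(record["status"].name, int(record["status"]))
```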
Statistical Analysis
Stratifying workload metrics into binned categories (ie, low, medium, high) allows for easier modeling and interpretation of results in an individual study; however, these groupings are typically based on the Z scores of 1 team. Differences in injury risk may have occurred within specific workload groupings, but the generalizability of these results to teams and athletes other than those studied is limited. For example, a difference in SRD between workload groups of approximately 200 m over 1 week, or in maximum-velocity exposures that differ by fewer than 5 exposures, is most likely not practically meaningful; when workload metrics are grouped in this fashion, understanding the injury risk as a player moves 1 level on the scale is impossible.20,21 Valuable information may be lost with stratification,39 especially without research-based cut points. These models treat all data within a group the same and look for differences between groups when, this early in the study of GPS variables, researchers do not yet know whether the upper threshold of a “low” group and the lower threshold of a “medium” group are truly different enough to be separated. Using workload metrics on a continuous scale, rather than in stratified categories, is likely a more appropriate way to apply these measures to assess injury risk. If groupings are needed, basing them on population-based metrics (ie, “high-speed” running velocity for professional soccer players across an entire league, not just a single team) or on groupings specific to each individual in the analysis is encouraged. Further analysis of how groupings of various levels affect injury risk calculations is indicated.
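A minimal sketch of the continuous alternative, assuming hypothetical 1-week HSD values and a simulated injury process:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
hsd = rng.normal(1000, 250, 300)             # hypothetical 1-week HSD values (m)
p = 1 / (1 + np.exp(-(-3.5 + 0.002 * hsd)))  # hypothetical underlying injury model
injured = rng.binomial(1, p)

# Modeling the workload metric on its continuous scale yields one
# interpretable odds ratio per unit of load, with no information
# discarded at arbitrary bin boundaries.
fit = sm.Logit(injured, sm.add_constant(hsd)).fit(disp=0)
print(f"OR per additional 100 m of HSD = {np.exp(fit.params[1] * 100):.2f}")
```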
Magnitude-based inferences are another statistical choice that, in theory, attempts to make results more practical when the sample is small40,41; however, their mathematical grounding has been refuted.42,43 Researchers and practitioners should be cautious when considering the use of magnitude-based inferences in their work and when interpreting the results of others. In our review, the authors of 5 studies12,13,15,16,29 used magnitude-based inferences in their analyses. We cited their interpretation thresholds in quotation marks to signify that these are the interpretations of the original investigators, and we clearly designated this analysis approach in Supplementary Table 2. We believed it was important to include these papers because they are pertinent to this body of literature, although the original authors' interpretations may not be valid.
Other statistical issues that require attention in this body of research moving forward include appropriate sample-size calculations for the number of predictors that researchers intend to use in their models.44 Using pseudo-R2 statistics, Lolli et al,45 in their 2019 paper on perceived exertion, session duration, and hamstrings injury, determined that sample sizes of 329, 583, or 1166 players were needed for 1, 5, or 10 load-related predictors, respectively. None of the studies in this review came close to those sample sizes.
In assessing injury risk, it is also important that both authors and readers understand and interpret the differences between injury rates and injury risk. From afar, the nuances of the 2 calculations can easily be overlooked, but the interpretation differences are critical, especially when comparing results. Additionally, when discussing injury risk, researchers should always report CIs to convey the precision of the estimate and to aid the interpretation of the results.
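A short worked example (with hypothetical numbers) makes the distinction concrete:

```python
# Hypothetical season: 30 athletes, 12 injuries, 9 first-time injured players,
# and 4500 total hours of training and match exposure.
athletes, injuries, injured_players, exposure_hours = 30, 12, 9, 4500

# Injury risk: proportion of athletes injured over the period (unitless).
risk = injured_players / athletes

# Injury rate: injuries per unit of exposure time (here, per 1000 hours).
rate = injuries / exposure_hours * 1000

print(f"risk = {risk:.0%} of players injured")
print(f"rate = {rate:.1f} injuries per 1000 exposure hours")
```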
The variations in statistical analysis among the papers reviewed were notable and reflect the larger conversation in sports medicine statistics regarding a shift from conventional, reductionist methods to complex approaches. It is becoming widely accepted that injuries occur as a result of complex and nonlinear interactions among multiple variables and that conventional approaches, even multivariable ones, are unlikely to capture the dynamic and complicated nature of injuries.46,47 A reductionist approach assumes that the parts of the model can be broken down, examined individually, and then summed to represent the system as a whole.47 Even multivariable approaches can be limited by the assumption that a system is equal to the sum of its parts.48 Reductionist approaches should be used to inform the creation of complex models, yet their standalone utility in prediction (if this is the ultimate goal) is questionable. With all models, a validation study on unseen data should be completed before causation with injury is implied. Without validation, these models can only be considered to offer descriptive associations rather than true prediction. Implementing complex models may be difficult in a practical setting; however, the advantages of more accurate injury risk models should outweigh the implementation concerns. The authors of 2 studies25,32 in this review attempted to use complex approaches when analyzing their data and may serve as exemplars as the field advances.
Assertions Made by Authors
From its inception, the novelty of GPS technology to monitor athlete workload was captivating to sports performance and sports medicine researchers. As work in this area began, many of the methods used to analyze these data and interpret results linked to injury prevention were published in editorials or commentary articles rather than in original research papers with quantitative analyses.1,49,50 More recently, others51 have authored editorials expressing concerns regarding the use of these claims in research without further validation.
One highly debated metric that originated in research but was propagated by editorials is the ACWR.31,52–54 This metric is prominent in the papers we reviewed, with many claims about its ability to predict injury. The relative simplicity of the ACWR has been complicated by the various ways investigators calculate it (different numbers of days for the acute and chronic periods, rolling averages versus EWMA, etc) and by the broad array of metrics used to define player workload. Additionally, using chronic load as a modifier of the ACWR presents a potential mathematical error, as the ratio already uses the chronic load value to normalize the acute load.55 Due to these variations, directly comparing articles and understanding what specific ACWR values may be signaling are challenging. Others56,57 have refuted the utility of the ACWR in injury prediction, suggesting that the metric may be signaling another factor that confounds injury risk, such as match congestion during a season.
Additionally, when interpreting the results of a univariate analysis, authors and readers should be cautious in attempting prediction, as variables may be associated with injury without meeting the threshold for direct causation. Difficulty understanding the nuances of association versus prediction may result in practitioners concluding that a factor associated with injury risk can be used to predict, and ultimately prevent, injury.58 In turn, this may lead to incorrect inferences from spurious data. In the context of injuries, association can help identify individual factors in the overall puzzle of why injuries occur but only at a theoretical level.59
Limitations of Existing Research and Suggestions for Future Research
We evaluated 22 original research articles involving 1136 athletes who sustained 2045 injuries over 40 team-seasons. The vast majority of this research was based on analyses of relationships for a single team over 1 season. Troublingly, none of the investigations included female athletes. Furthermore, without standardization of injury definitions and GPS–derived workload metrics, replication and meta-analysis are impossible. A prediction model based on a single team also has decreased generalizability to other populations.
A focus on hypothesis-driven research would help to overcome many of the concerns addressed in this review. Studies of GPS technology often occur when a team is collecting data for a performance-related reason, and the dataset is provided to a researcher post facto to assess the relationships of workload metrics and injury. By identifying the research questions, variables (especially injury-related variables), and the best statistical approaches a priori, data can be collected in a manner that lends itself to more complex and appropriate statistical modeling in order to identify true cause and effect.
As methods in this field become more robust and uniform, critically appraising the research with different scales will become possible. We elected to use the National Institutes of Health Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies; however, other methodologic quality-assessment instruments, such as the Prediction Model Risk Of Bias Assessment Tool60 and Quality in Prognosis Studies,61 may be more specific to injury risk research.
CONCLUSIONS
Our results were inconclusive in determining if any specific GPS–derived workload metrics were associated with increased injury risk. This conclusion is due to a myriad of factors, including differences in injury definitions, workload parameters, and statistical analyses employed across studies. Global positioning system technology in sport is still in its infancy, especially with regard to sports medicine research. As researchers and practitioners gain knowledge about how sensor-based wearable technology can inform injury risk and athlete wellness, more consistent approaches to data aggregation and modeling need to be at the forefront.

