Context:
The behaviors and beliefs of recreational runners with regard to hydration maintenance are not well elucidated.
Objective:
To examine which beverages runners choose to drink and why, negative performance and health experiences related to dehydration, and methods used to assess hydration status.
Design:
Cross-sectional study.
Setting:
Marathon registration site.
Patients or Other Participants:
Men (n = 146) and women (n = 130) (age = 38.3 ± 11.3 years) registered for the 2010 Little Rock Half-Marathon or Full Marathon.
Intervention(s):
A 23-item questionnaire was administered to runners when they picked up their race timing chips.
Main Outcome Measure(s):
Runners were separated into tertiles (Low, Mod, High) based on z scores derived from training volume, expected performance, and running experience. We used a 100-mm visual analog scale with anchors of 0 (never) and 100 (always). Total sample responses and comparisons between tertile groups for questionnaire items are presented.
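As a point of reference, the tertile assignment described above can be reproduced with a composite z-score approach. The following sketch is illustrative only; the variable names, the averaging of the 3 z scores, and the equal-thirds split are assumptions rather than the authors' exact procedure.

```python
# Illustrative sketch (assumed procedure): composite z scores split into tertiles
import numpy as np

def tertile_groups(training_volume, expected_performance, running_experience):
    """Label each runner Low/Mod/High from the mean z score of 3 training metrics."""
    data = np.column_stack([training_volume, expected_performance, running_experience])
    z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)   # standardize each metric
    composite = z.mean(axis=1)                                   # one composite score per runner
    cuts = np.percentile(composite, [100 / 3, 200 / 3])          # equal-thirds cut points
    return np.select([composite <= cuts[0], composite <= cuts[1]],
                     ["Low", "Mod"], default="High")
```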
Results:
The High group (58 ± 31 mm) reported greater consumption of sport beverages in exercise environments than the Low (42 ± 35 mm) and Mod (39 ± 32 mm) groups (P < .05) and perceived sport beverages to be superior to water in meeting hydration needs (P < .05) and improving performance during runs longer than 1 hour (P < .05). Seventy percent of runners experienced 1 or more incidents in which they believed dehydration resulted in a major performance decrement, and 45% perceived dehydration to have resulted in adverse health effects. Twenty percent of runners reported monitoring their hydration status. Urine color was the method most often reported (7%), whereas only 2% reported measuring changes in body weight.
Conclusions:
Greater attention should be paid to informing runners of valid techniques to monitor hydration status and developing an appropriate individualized hydration strategy.
Context:
Precooling is the pre-exercise reduction of body temperature and is an effective method of improving physiologic function and exercise performance in environmental heat. A practical and effective method of precooling suitable for application at athletic venues has not been demonstrated.
Objective:
To confirm the effect of pre-exercise ingestion of cold fluid, without fluid ingestion during exercise, on pre-exercise core temperature and to determine whether pre-exercise ingestion of cold fluid alone, without continued provision of cold fluid during exercise, can improve exercise performance in the heat.
Design:
Randomized controlled clinical trial.
Setting:
Environmental chamber at an exercise physiology laboratory, maintained at 32°C and 60% relative humidity with a facing air velocity of 3.2 m/s.
Patients or Other Participants:
Seven male recreational cyclists (age = 21 ± 1.5 years, height = 1.81 ± 0.07 m, mass = 78.4 ± 9.2 kg) participated.
Intervention(s):
Participants ingested 900 mL of cold (2°C) or control (37°C) flavored water in three 300-mL aliquots over 35 minutes of pre-exercise rest.
Main Outcome Measure(s):
Rectal temperature and thermal comfort before exercise and distance cycled, power output, pacing, rectal temperature, mean skin temperature, heart rate, blood lactate, thermal comfort, perceived exertion, and sweat loss during exercise.
Results:
During rest, a greater decrease in rectal temperature was observed with ingestion of the cold fluid (0.41 ± 0.16°C) than the control fluid (0.17 ± 0.17°C) between 35 and 5 minutes before exercise (t6 = −3.47, P = .01). During exercise, rectal temperature was lower after ingestion of the cold fluid at 5 to 25 minutes (t6 range, 2.53–3.38, P ≤ .05). Distance cycled was greater after ingestion of the cold fluid (19.26 ± 2.91 km) than after ingestion of the control fluid (18.72 ± 2.59 km; t6 = −2.80, P = .03). Mean power output also was greater after ingestion of the cold fluid (275 ± 27 W) than the control fluid (261 ± 22 W; t6 = −2.13, P = .05). No differences were observed for pacing, mean skin temperature, heart rate, blood lactate, thermal comfort, perceived exertion, or sweat loss (P > .05).
Conclusions:
We demonstrated that pre-exercise ingestion of cold fluid is a simple, effective precooling method suitable for field-based application.
Context:
A lack of published comparisons between measures from commercially available computerized posturography devices and the outcome measures used to define the limits of stability (LOS) makes meaningful interpretation of dynamic postural stability measures difficult.
Objectives:
To compare postural stability measures between and within devices to establish concurrent and construct validity and to determine test-retest reliability for LOS measures generated by the NeuroCom Smart Balance Master and the Biodex Balance System.
Design:
Cross-sectional study.
Setting:
Controlled research laboratory.
Patients or Other Participants:
A total of 23 healthy participants with no vestibular or visual disabilities or lower limb impairments.
Intervention(s):
The LOS were assessed during 2 laboratory test sessions 1 week apart.
Main Outcome Measure(s):
Three NeuroCom LOS variables (directional control, endpoint excursion, and movement velocity) and 2 Biodex LOS variables (directional control, test duration).
Results:
Test-retest reliability ranged from high to low across the 5 LOS measures (intraclass correlation coefficient [2,k] = 0.82 to 0.48). Pearson correlations revealed 4 significant relationships (P < .05) between and within the 2 computerized posturography devices (r = 0.42 to −0.65).
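For readers unfamiliar with the notation, ICC(2,k) is conventionally computed from a two-way random-effects analysis of variance. The expression below is the standard Shrout-Fleiss form, given for orientation only and not necessarily the exact implementation the authors used:

\mathrm{ICC}(2,k) = \frac{BMS - EMS}{BMS + (JMS - EMS)/n}

where BMS, JMS, and EMS are the between-subjects, between-sessions, and error mean squares, respectively, and n is the number of participants.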
Conclusions:
Based on the wide range of intraclass correlation values we observed for the NeuroCom measures, clinicians and researchers alike should establish the reliability of LOS testing for their own clinics and laboratories. The low to moderate reliability outcomes observed for the Biodex measures were not of sufficient magnitude for us to recommend using the LOS measures from this system as the gold standard. The moderate Pearson interclass correlations we observed suggest that the Biodex and NeuroCom postural stability systems provided unique information. In this study of healthy participants, the concurrent and construct validity of the Biodex and NeuroCom LOS tests were not definitively established. We recommend that this study be repeated with a clinical population to further explore the matter.
Context:
Fatigue of the gluteus medius (GMed) muscle might be associated with decreases in postural control due to insufficient pelvic stabilization. Men and women might have different muscular recruitment patterns in response to GMed fatigue.
Objective:
To compare postural control and quality of movement between men and women after a fatiguing hip-abduction exercise.
Design:
Descriptive laboratory study.
Setting:
Controlled laboratory.
Patients or Other Participants:
Eighteen men (age = 22 ± 3.64 years, height = 183.37 ± 8.30 cm, mass = 87.02 ± 12.53 kg) and 18 women (age = 22 ± 3.14 years, height = 167.65 ± 5.80 cm, mass = 66.64 ± 10.49 kg) with no history of low back or lower extremity injury participated in our study.
Intervention(s):
Participants followed a fatiguing protocol that involved a side-lying hip-abduction exercise performed until a 15% shift in electromyographic median frequency of the GMed was reached.
Main Outcome Measure(s):
Baseline and postfatigue measurements of single-leg static balance, dynamic balance, and quality of movement, assessed with center-of-pressure measurements, the Star Excursion Balance Test, and the lateral step-down test, respectively, were recorded for the dominant lower extremity (as identified by the participant).
Results:
We observed no differences in balance deficits between sexes (P > .05); however, we found main effects for time with all of our postfatigue outcome measures (P ≤ .05).
Conclusions:
Our findings suggest that postural control and quality of movement were affected negatively after a GMed-fatiguing exercise. At similar levels of local muscle fatigue, men and women had similar measurements of postural control.
Context:
Knee braces and neoprene sleeves are commonly worn by people with anterior cruciate ligament reconstructions (ACLRs) during athletic activity. How knee braces and sleeves affect muscle activation in people with ACLRs is unclear.
Objective:
To determine the effects of knee braces and neoprene knee sleeves on the quadriceps central activation ratio (CAR) before and after aerobic exercise in people with ACLRs.
Design:
Crossover study.
Patients or Other Participants:
Fourteen people with a history of ACLR (9 women, 5 men: age = 23.61 ± 4.44 years, height = 174.09 ± 9.82 cm, mass = 75.35 ± 17.48 kg, months since ACLR = 40.62 ± 20.41).
Intervention(s):
During each of 3 sessions, participants performed a standardized aerobic exercise protocol on a treadmill. The independent variables were condition (brace, sleeve, or control) and time (baseline, pre-exercise with brace, postexercise with brace, postexercise without brace).
Main Outcome Measure(s):
Normalized torque during a maximal voluntary isometric contraction (TMVIC) and the CAR were measured by a blinded assessor using the superimposed burst technique. The CAR was expressed as a percentage of full muscle activation. The quadriceps CAR and TMVIC were measured 4 times during each session: baseline, pre-exercise with brace, postexercise with brace, and postexercise without brace.
Results:
Immediately after the application of the knee brace, TMVIC decreased (P = .01), but no differences between bracing conditions were observed. We noted reduced TMVIC and CAR (P < .001) after exercise, both with and without the brace. No differences were seen between bracing conditions after aerobic exercise.
Conclusions:
The decrease in TMVIC immediately after brace application was not accompanied by differences between bracing conditions. Wearing a knee brace or neoprene sleeve did not seem to affect the deterioration of quadriceps function after aerobic exercise.
Context:
The ability to accurately estimate quadriceps voluntary activation is an important tool for assessing neuromuscular function after a variety of knee injuries. Various stimulating electrode types and configurations have been used to assess quadriceps volitional activation, yet the optimal electrode type and configuration for depolarizing motor units when assessing muscle activation are unknown.
Objective:
To determine whether stimulating electrode type and configuration affect quadriceps central activation ratio (CAR) and percentage-of-activation measurements in healthy participants.
Design:
Crossover study.
Setting:
Research laboratory.
Patients or Other Participants:
Twenty participants (13 men, 7 women; age = 26 ± 5.3 years, height = 173.85 ± 7.3 cm, mass = 77.37 ± 16 kg) volunteered.
Intervention(s):
All participants performed 4 counterbalanced muscle activation tests incorporating 2 different electrode types (self-adhesive, carbon-impregnated) and 2 electrode configurations (vastus, rectus).
Main Outcome Measure(s):
Quadriceps activation was calculated with the CAR and percentage-of-activation equations, which were derived from superimposed burst and resting torque measurements.
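For orientation, the 2 activation equations named above are usually written in the following forms; these are the commonly used definitions, stated here as an assumption rather than as the authors' exact expressions:

\mathrm{CAR} = \frac{T_{\mathrm{MVIC}}}{T_{\mathrm{MVIC}} + \Delta T_{\mathrm{SIB}}} \times 100\%, \qquad \%\mathrm{Activation} = \left(1 - \frac{\Delta T_{\mathrm{SIB}}}{T_{\mathrm{rest}}}\right) \times 100\%

where T_MVIC is the torque during the maximal voluntary isometric contraction, ΔT_SIB is the additional torque evoked by the superimposed burst during that contraction, and T_rest is the torque evoked by the same burst in the resting muscle.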
Results:
No differences were found between conditions for CAR and percentage-of-activation measurements, whereas resting twitch torque was higher in the rectus configuration for both self-adhesive (216 ± 66.98 Nm) and carbon-impregnated (209.1 ± 68.22 Nm) electrodes than in the vastus configuration (209.5 ± 65.5 Nm and 204 ± 62.7 Nm, respectively) for these electrode types (F1,19 = 4.87, P = .04). In addition, resting twitch torque was greater for both electrode configurations with self-adhesive electrodes than with carbon-impregnated electrodes (F1,19 = 9.33, P = .007). Bland-Altman plots revealed acceptable mean differences for agreement between electrode type and configuration for CAR and percentage of activation, but limits of agreement were wide.
Conclusions:
Although these electrode configurations and types might not be fully interchangeable, differences in electrode type and configuration did not seem to affect CAR and percentage-of-activation outcome measures.
Context:
Community-acquired methicillin-resistant Staphylococcus aureus (MRSA) is becoming more prevalent in healthy athletic populations. Various preventive measures have been proposed, but few researchers have evaluated the protective effects of a prophylactic application of a commercially available product.
Objective:
To compare the persistent antimicrobial properties of a commercially available antimicrobial product containing 4% chlorhexidine gluconate (Hibiclens) with those of a mild, nonmedicated soap (Dr. Bronner's Magic Soap).
Design:
Cross-sectional study.
Setting:
Microbiology laboratory, contract research organization.
Patients or Other Participants:
Twenty healthy human volunteers.
Intervention(s):
The test and control products were randomly assigned and applied to both forearms of each participant. Each forearm was washed for 2 minutes with the test or control product, rinsed, and dried. At 1, 2, and 4 hours after application, each forearm was exposed to MRSA for approximately 30 minutes.
Main Outcome Measure(s):
Differences in numbers of MRSA recovered from each forearm, test and control, at each postapplication time point were compared.
Results:
Fewer MRSA (P < .0001) were recovered from the forearms treated with the test product (4% chlorhexidine gluconate) than from the forearms treated with the control product (nonmedicated soap).
Conclusions:
The 4% chlorhexidine gluconate product demonstrated persistent bactericidal activity versus MRSA for up to 4 hours after application.
Context:
To our knowledge, no authors have assessed health-related quality of life (HR-QOL) in participants with functional ankle instability (FAI). Furthermore, the relationships between measures of ankle functional limitation and HR-QOL are unknown.
Objective:
To use the Short Form–36v2 Health Survey (SF-36) to compare HR-QOL in participants with or without FAI and to determine whether HR-QOL was related to functional limitation.
Design:
Cross-sectional study.
Setting:
Sports medicine research laboratory.
Patients or Other Participants:
Sixty-eight participants with FAI (defined as at least 1 lateral ankle sprain and 1 episode of giving way per month) or without FAI were recruited (FAI group: n = 34, age = 25 ± 5 years, height = 1.71 ± 0.08 m, mass = 74.39 ± 12.78 kg, Cumberland Ankle Instability Tool score = 19.3 ± 4; uninjured [UI] group: n = 34, age = 23 ± 4 years, height = 1.69 ± 0.08 m, mass = 67.94 ± 11.27 kg, Cumberland Ankle Instability Tool score = 29.4 ± 1).
Main Outcome Measure(s):
All participants completed the SF-36 as a measure of HR-QOL and the Foot and Ankle Ability Measure (FAAM) and the FAAM Sport version (FAAMS) as assessments of functional limitation. To compare the FAI and UI groups, we calculated multiple analyses of variance followed by univariate tests. Additionally, we correlated the SF-36 summary component scale and domain scales with the FAAM and FAAMS scores.
Results:
Participants with FAI had lower scores on the SF-36 physical component summary (FAI = 54.4 ± 5.1, UI = 57.8 ± 3.7, P = .005), physical function domain scale (FAI = 54.5 ± 3.8, UI = 56.6 ± 1.2, P = .004), and bodily pain domain scale (FAI = 52.0 ± 6.7, UI = 58.5 ± 5.3, P < .005). Similarly, participants with FAI had lower scores on the FAAM (FAI = 93.7 ± 8.4, UI = 99.5 ± 1.4, P < .005) and FAAMS (FAI = 84.5 ± 8.4, UI = 99.8 ± 0.72, P < .005) than did the UI group. The FAAM score was correlated with the physical component summary scale (r = 0.42, P = .001) and the physical function domain scale (r = 0.61, P < .005). The FAAMS score was correlated with the physical function domain scale (r = 0.47, P < .005) and the vitality domain scale (r = 0.36, P = .002).
Conclusions:
Compared with UI participants, those with FAI had lower HR-QOL and greater functional limitations. Furthermore, positive correlations were found between HR-QOL and functional limitation measures. These findings suggest that ankle impairment may reduce overall HR-QOL.
Context:
Active muscle stiffness might protect the unstable shoulder from recurrent dislocation.
Objective:
To compare strength and active stiffness between the stable and unstable shoulders of participants with unilateral anterior shoulder instability and to examine the relationship between active stiffness and functional ability.
Design:
Cross-sectional study.
Setting:
University research laboratory.
Patients or Other Participants:
Participants included 16 males (age range, 16–40 years; height = 179.4 ± 6.1 cm; mass = 79.1 ± 6.8 kg) with 2 or more episodes of unilateral traumatic anterior shoulder instability.
Main Outcome Measure(s):
Active stiffness and maximal voluntary strength were measured bilaterally in participants. In addition, quality of life, function, and perceived instability were measured using the Western Ontario Stability Index, American Shoulder and Elbow Surgeons Standardized Shoulder Assessment Form, and Single Alpha Numeric Evaluation, respectively.
Results:
We found less horizontal adduction strength (t15 = −4.092, P = .001) and less stiffness at 30% (t14 = −3.796, P = .002) and 50% (t12 = −2.341, P = .04) maximal voluntary strength in the unstable than stable shoulder. Active stiffness was not correlated with quality of life, function, or perceived instability (r range, 0.0–0.25; P > .05).
Conclusions:
The observed reduction in stiffness in the unstable shoulder warrants inclusion of exercises in the rehabilitation program to protect the joint from perturbations that might lead to dislocation. The lack of association between active stiffness and quality of life, function, or perceived instability might indicate that stiffness plays a less direct role in shoulder stability.
Context:
Participation in high school sports has grown 16.1% over the last decade, but few studies have compared the overall injury risks in girls' softball and boys' baseball.
Objective:
To examine the incidence of injury in high school softball and baseball players.
Design:
Cohort study.
Setting:
Greenville, South Carolina, high schools.
Patients or Other Participants:
Softball and baseball players (n = 247) from 11 high schools.
Main Outcome Measure(s):
Injury rates, locations, types; initial or subsequent injury; practice or game setting; positions played; seasonal trends.
Results:
The overall injury incidence rate was 4.5/1000 athlete-exposures (AEs), with more injuries overall in softball players (5.6/1000 AEs) than in baseball players (4.0/1000 AEs). Baseball players had a higher initial injury rate (75.9/1000 AEs) than softball players (66.4/1000 AEs): rate ratio (RR) = 0.88, 95% confidence interval (CI) = 0.4, 1.7. The initial injury rate was higher than the subsequent injury rate for the overall sample (P < .0001) and for softball (P < .0001) and baseball (P < .001) players. For both sports, the injury rate during games (4.6/1000 AEs) was similar to that during practices (4.1/1000 AEs), RR = 1.22, 95% CI = 0.7, 2.2. Softball players were more likely to be injured in a game than were baseball players (RR = 1.92, 95% CI = 0.8, 4.3). Most injuries (77%) were mild (3.5/1000 AEs). The upper extremity accounted for the highest proportion of injuries (63.3%). The incidence of injury for pitchers was 37.3% and for position players was 15.3%. The rate of injury was highest during the first month of the season (7.96/1000 AEs).
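For orientation, the exposure-based rates and rate ratios reported above follow standard arithmetic. The sketch below uses made-up counts (not the study data) to show how a rate per 1000 athlete-exposures and a rate ratio with a log-based 95% CI are typically computed.

```python
# Illustrative sketch with hypothetical counts (not the study data)
import math

def rate_per_1000(injuries, athlete_exposures):
    """Injury rate per 1000 athlete-exposures (AEs)."""
    return 1000 * injuries / athlete_exposures

def rate_ratio_ci(inj_a, ae_a, inj_b, ae_b, z=1.96):
    """Rate ratio of group A to group B with a log-normal 95% confidence interval."""
    rr = (inj_a / ae_a) / (inj_b / ae_b)
    se = math.sqrt(1 / inj_a + 1 / inj_b)        # SE of ln(RR) for Poisson event counts
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Example: 28 injuries in 5000 AEs versus 20 injuries in 5000 AEs
print(rate_per_1000(28, 5000))            # 5.6 injuries per 1000 AEs
print(rate_ratio_ci(28, 5000, 20, 5000))  # RR = 1.4 with its 95% CI
```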
Conclusions:
The incidence of injury was low for both softball and baseball. Most injuries were minor and affected the upper extremity. Injury rates were highest in the first month of the season, so prevention strategies should focus on minimizing injuries and monitoring players early in the season.
Context:
Understanding implementation strategies of Approved Clinical Instructors (ACIs) who use evidence-based practice (EBP) in clinical instruction will help promote the use of EBP in clinical practice.
Objective:
To examine the perspectives and experiences of ACIs using EBP concepts in undergraduate athletic training education programs to determine the importance of using these concepts in clinical practice, clinical EBP implementation strategies for students, and challenges of implementing EBP into clinical practice while mentoring and teaching their students.
Design:
Qualitative study.
Setting:
Telephone interviews.
Patients or Other Participants:
Sixteen ACIs (11 men, 5 women; experience as a certified athletic trainer = 10 ± 4.7 years, experience as an ACI = 6.8 ± 3.9 years) were interviewed.
Data Collection and Analysis:
We interviewed each participant by telephone. Interview transcripts were analyzed and coded for common themes and subthemes regarding implementation strategies. Established themes were triangulated through peer review and member checking to verify the data.
Results:
The ACIs identified EBP implementation as important for validation of the profession, for advancing a paradigm shift in clinical practice, for improving patient care, and for improving student educational experiences. They promoted 3 methods of implementing EBP concepts with their students: self-discovery, promoting critical thinking, and sharing information. They assisted students with the steps of EBP and often faced challenges in implementation of the first 3 steps of EBP: defining a clinical question, literature searching, and literature appraisal. Finally, ACIs indicated that modeling the behavior of making clinical decisions based on evidence was the best way to encourage students to continue using EBP.
Conclusions:
Athletic training education program directors should encourage and recommend specific techniques for EBP implementation in the clinical setting. The ACIs believed that role modeling is a strategy that can be used to promote the use of EBP with students. Training of ACIs should include methods by which to address the steps of the EBP process while still promoting critical thinking.
Context:
Previous researchers have indicated that athletic training education programs (ATEPs) appear to retain students who are motivated and well integrated into their education programs. However, no researchers have examined the factors leading to successful persistence to graduation of recent graduates from ATEPs.
Objective:
To determine the factors that led students enrolled in a postprofessional education program accredited by the National Athletic Trainers' Association (NATA) to persist to graduation from accredited undergraduate ATEPs.
Design:
Qualitative study.
Setting:
Postprofessional education program accredited by the NATA.
Patients or Other Participants:
Fourteen graduates (12 women, 2 men) of accredited undergraduate entry-level ATEPs who were enrolled in an NATA-accredited postprofessional education program volunteered to participate.
Data Collection and Analysis:
We conducted semistructured interviews and analyzed data through a grounded theory approach. We used open, axial, and selective coding procedures. To ensure trustworthiness, 2 independent coders analyzed the data. The researchers then negotiated over the coding categories until they reached 100% agreement. We also performed member checks and peer debriefing.
Results:
Four themes emerged from the data. Decisions to persist to graduation from ATEPs appeared to be influenced by students' positive interactions with faculty, clinical instructors, and peers. The environment of the ATEPs also affected their persistence. Participants thought they learned much in both the clinic and the classroom, and this learning motivated them to persist. Finally, participants could see themselves practicing athletic training as a career, and this greatly influenced their eventual persistence.
Conclusions:
Our study gives athletic training educators insight into the reasons students persist to graduation from ATEPs. Specifically, athletic training programs should strive to develop close-knit learning communities that stress positive interactions between students and instructors. Athletic training educators also must work to present the athletic training field as exciting and dynamic.
Context:
Didactic proficiency does not ensure clinical aptitude. Quality athletic health care requires clinical knowledge and affective traits.
Objective:
To develop a grounded theory explaining the constructs of a quality certified athletic trainer (AT).
Design:
Delphi study.
Setting:
Interviews in conference rooms or business offices and by telephone.
Patients or Other Participants:
Thirteen ATs (8 men, 5 women) stratified across the largest employment settings (high school, college, clinical) in the 4 largest districts of the National Athletic Trainers' Association (2, 3, 4, 9).
Data Collection and Analysis:
Responses to open-ended interview questions were audio recorded, transcribed, and reviewed before being condensed. Two member checks ensured trustworthiness. Open coding reduced the text to descriptive adjectives.
Results:
We grouped adjectives into 5 constructs (care, communication, commitment, integrity, knowledge) and grouped these constructs into 2 higher-order constructs (affective traits, effective traits).
Conclusions:
According to participants, ATs who demonstrate the ability to care, show commitment and integrity, value professional knowledge, and communicate effectively with others can be identified as quality ATs. These abilities facilitate the creation of positive relationships. These relationships allow the quality AT to interact with patients and other health care professionals on a knowledgeable basis that ultimately improves health care delivery. Our resulting theory supported the examination of characteristics not traditionally assessed in an athletic training education program. If researchers can show that these characteristics develop ATs into quality ATs (eg, those who work better with others, relate meaningfully with patients, and improve the standard of health care), then these characteristics must be cultivated in the educational setting.
Context:
Our previous research determined the frequency of participation and perceived effect of formal and informal continuing education (CE) activities. However, actual preferences for and barriers to CE must be characterized.
Objective:
To determine the types of formal and informal CE activities preferred by athletic trainers (ATs) and barriers to their participation in these activities.
Design:
Cross-sectional study.
Setting:
Athletic training practice settings.
Patients or Other Participants:
Of a geographically stratified random sample of 1000 ATs, 427 ATs (42.7%) completed the survey.
Main Outcome Measure(s):
As part of a larger study, the Survey of Formal and Informal Athletic Training Continuing Education Activities (FIATCEA) was developed and administered electronically. The FIATCEA consists of demographic characteristics and Likert scale items (1 = strongly disagree, 5 = strongly agree) about preferred CE activities and barriers to these activities. Internal consistency of survey items, as determined by Cronbach α, was 0.638 for preferred CE activities and 0.860 for barriers to these activities. Descriptive statistics were computed for all items. Differences in preferred CE activities and barriers to these activities across respondent demographic characteristics were determined via analysis of variance and dependent t tests. The α level was set at .05.
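For reference, Cronbach α takes its standard textbook form; this definition is provided for context and is not a detail reported in the survey itself:

\alpha = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_{\mathrm{total}}^{2}}\right)

where k is the number of scale items, \sigma_i^{2} is the variance of item i, and \sigma_{\mathrm{total}}^{2} is the variance of the summed scale score.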
Results:
Hands-on clinical workshops and professional networking were the preferred formal and informal CE activities, respectively. The most frequently reported barriers to formal CE were the cost of attending and travel distance, whereas the most frequently reported barriers to informal CE were personal and job-specific factors. Both the cost of CE and the travel distance to CE differed from all other barriers to CE participation (F1,411 = 233.54, P < .001).
Conclusions:
Overall, ATs preferred formal CE activities. The same barriers (eg, cost, travel distance) to formal CE appeared to be universal to all ATs. Informal CE was highly valued by ATs because it could be individualized.
Context:
Factors that affect food choices include the physical and social environments, quality, quantity, perceived healthfulness, and convenience. The personal food choice process was defined as the procedures used by athletes for making food choices, including the weighing and balancing of activities of daily life, physical well-being, convenience, monetary resources, and social relationships.
Objective:
To develop a theoretical model explaining the personal food choice processes of collegiate football players.
Design:
Qualitative study.
Setting:
National Collegiate Athletic Association Division II football program.
Patients or Other Participants:
Fifteen football players were purposefully sampled to represent various positions, years of athletic eligibility, and ethnic backgrounds.
Data Collection and Analysis:
For text data collection, we used predetermined, open-ended questions. Data were analyzed using the constant comparison method. The athletes' words were used to label and describe their interactions and experiences with the food choice process. Member checks and an external audit were conducted by a qualitative methodologist and a nutrition specialist, and the findings were triangulated with the current literature to ensure trustworthiness of the text data.
Results:
Time emerged as the core category and yielded a cyclic graphic representation of a theoretical model of the food choice system. Planning hydration, macronutrient strategies, snacks, and healthful food choices emerged as themes.
Conclusions:
The athletes planned meals and snacks around their academic and athletic schedules while attempting to consume foods identified as healthful. Healthful foods were generally lower in fat but high in preferred macronutrients. High-protein foods were the players' primary goal; carbohydrate consumption was secondary. The athletes had established plans to maintain hydration. Professionals may use these findings to implement educational programs on food choices for football players.