Swallow Screening
Evaluation of swallowing may be performed on individuals of any age and with any disease, and may include a screening, a clinical exam, and/or an instrumental exam. If a screening is employed, its goal is to identify the risk of dysphagia. If the risk exists, then a referral is made for a dysphagia evaluation (Boxes 3.1 and 3.2). This evaluation may include both clinical and instrumental components, but not all patients will receive (or need) both. The goal of the dysphagia evaluation is to identify the presence and nature of the swallow deficit, as well as to inform the therapeutic process. Currently, there is no standard of care for a dysphagia evaluation. The dysphagia clinician is skilled in both clinical and instrumental exams and typically engages some combination of approaches to provide maximal benefit for the patient.
Box 3.1: Schematic relationship between screening and evaluation
Box 3.2: Comparison between screening and evaluation
Adult Screening
Screening procedures are used throughout medicine to identify individuals who are at risk for a condition or disease. Hence, a dysphagia screening is used to detect individuals who are at risk for dysphagia. Ideally, dysphagia screening tools should be valid and reliable, easy and quick to administer, and provide a pass/fail format, thus allowing for a quick determination of the likelihood of a swallow disorder. Using a pass/fail determination, a fail would prompt a comprehensive dysphagia evaluation (Swigert, Steele, & Riquelme, 2007). A screening may also elucidate the need for referral(s) to other health care professionals. Box 3.3 schematically depicts various screening formats and possible red flags that may indicate a risk of dysphagia.
Box 3.3: Screening formats and associated red flags for dysphagia
Clinical Note
A screening may be an: (1) interview, questionnaire, or medical chart review, (2) observation or report of clinical signs or symptoms associated with dysphagia, (3) administration of liquid or other oral intake, or (4) presence of a specific disease (such as CVA), medical status (such as presence of tracheotomy), or specific events in the medical history (such as history of pneumonia) (Daniels, Ballo, Mahoney, & Foundas, 2000; Logemann, Veis, & Colangelo, 1999).
Although The Joint Commission's standards recommend the completion of a dysphagia screening in certified stroke centers, dysphagia screenings are not the standard of care at all medical facilities (Masrur et al., 2013). Some medical facilities incorporate a dysphagia screening as part of their routine admission process. Medical facilities that use a formal dysphagia screening procedure have lower rates of aspiration pneumonia (Hinchey et al., 2005). When facilities use a dysphagia screening, an SLP or another medical professional may perform the procedure. When the screening is not performed by the SLP, the SLP may be involved in designing the program and training the professionals who will carry out the screening protocol.
The effectiveness of a screening tool is measured by its sensitivity and specificity. Ideally, a screening tool has both high sensitivity and high specificity. That is, the screening has a high likelihood of identifying individuals at risk for a disease (true positives) without flagging individuals who do not have the disease (false positives). As a general rule, however, screening tests in many fields of medicine do not meet these stringent criteria. When a compromise is required, screening tools are geared toward high sensitivity. The false-positive rate that accompanies low specificity is considered acceptable in order not to miss any at-risk individuals. The assumption of a screening is that early identification of at-risk individuals reduces the chance of long-term health consequences and, in some cases, death (Hinchey et al., 2005; Yeh et al., 2011; Masrur et al., 2013).
Terminology Revisited
Sensitivity— A test’s ability to designate as positive an individual with a specific disease. There are few false negatives.
Specificity— A test’s ability to designate as negative an individual who does not have a disease. There are few false positives.
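The relationship between these two metrics can be made concrete with a small calculation. The sketch below, using purely hypothetical screening counts, computes sensitivity and specificity from a screening's true/false positives and negatives; the numbers are chosen to illustrate the trade-off described above, in which a high false-positive rate is tolerated so that few at-risk individuals are missed.

```python
# Sensitivity and specificity from a screening's 2x2 outcome table.
# All counts below are hypothetical and for illustration only.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of truly at-risk individuals the screen flags (true-positive rate)."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of not-at-risk individuals the screen passes (true-negative rate)."""
    return true_neg / (true_neg + false_pos)

# Hypothetical cohort: 100 patients later confirmed to have dysphagia, 100 without.
tp, fn = 95, 5    # the screen failed 95 of the 100 patients with dysphagia
tn, fp = 60, 40   # the screen passed only 60 of the 100 patients without it

print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.95 -> few missed cases
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.60 -> many false alarms
```

A tool with these numbers would over-refer (40 unnecessary evaluations per 100 healthy patients) but would miss only 5 of 100 at-risk patients, which is the compromise most dysphagia screenings accept.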
One of the first published dysphagia screenings was a checklist published by Dr. Logemann (1983). This screening was completed by interviewing the nursing staff or reviewing the medical record, and included a patient observation, preferably during mealtime. Checking yes to a certain number of factors indicated an increased risk of dysphagia and led to a complete dysphagia assessment. The contents of this checklist still serve as a solid reference for "red flags" of dysphagia. Since 1983, multiple formal and informal screening procedures have been used. Box 3.4 provides a cursory overview of some currently used screening protocols. Most formal screenings have been developed for patients post-stroke, yet screening procedures have been applied to other clinical populations, such as patients with head and neck cancer or pediatric patients. There is no clear "one-size-fits-all" screening, and to date it is unclear which screening best serves a large patient population (Daniels, Anderson, & Willson, 2012).
Box 3.4: Samples of screening tests currently available and their ability to identify dysphagia risk
In a popular screening technique, sometimes referred to as the water swallow challenge, individuals drink a measured amount of water. During these various water swallow protocols, the clinician's main goal is to observe the individual during and after the swallow for the occurrence of cough, throat clearing, or a change in vocal quality. Any of these observations would result in a failed screening and would trigger a dysphagia evaluation (DePippo, Holas, & Reding, 1994; Kidd, Lawson, Nesbitt, & MacMahon, 1993; Suiter, Sloggy, & Leder, 2013). Screening results are communicated to the appropriate medical personnel, family, and caregivers, as well as the patient undergoing the screening.
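The pass/fail logic common to these water swallow protocols can be sketched as a simple decision rule. The sign names below are illustrative placeholders, not items from any validated protocol; any observed red-flag sign produces a fail and a referral.

```python
# Minimal sketch of the pass/fail format of a water swallow screening.
# Sign labels are hypothetical and for illustration only; this is not
# a validated clinical tool.
RED_FLAG_SIGNS = ("cough", "throat_clear", "wet_or_changed_voice")

def water_swallow_screen(observed_signs: set) -> str:
    """Fail (and refer for evaluation) if any red-flag sign is observed
    during or after the measured water swallow."""
    if any(sign in observed_signs for sign in RED_FLAG_SIGNS):
        return "fail: refer for comprehensive dysphagia evaluation"
    return "pass: no overt signs during or after the swallow"

print(water_swallow_screen({"cough"}))
print(water_swallow_screen(set()))
```

Note that the rule is deliberately one-sided: a single sign is enough to fail, mirroring the high-sensitivity bias of screening tools discussed earlier.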
Screening Individuals with Tracheostomy Tubes
Tracheal secretions have been used to estimate the risk of aspiration when screening individuals who have tracheostomy tubes. Various procedures have been employed, including the use of glucose oxidase strips and blue dye. Although these procedures estimate the presence of aspirated material in the tracheal secretions, they do not reveal the physiology underlying the aspiration event. Hence, these approaches, when used alone, serve only as a screening of dysphagia risk.
Blue Dye Test
Over the past 40 years, three protocols have employed blue dye to estimate the risk of aspiration in patients with tracheostomy tubes: dye mixed with saliva, with tube feedings, and with food or liquid presented orally. Two of these protocols (dye mixed with saliva and dye mixed with food and liquid) are still in use today. In the Evans Blue Dye Test (EBDT), as originally described by Cameron and colleagues (1973), four drops of dye are placed on the individual's tongue every 4 hours. Tracheal secretions are monitored across two days for the presence of blue dye, which serves as a positive indication of aspiration. According to these authors, aspiration of oral secretions was correctly detected in 69% of the patients tested (Cameron, Reynolds, & Zuidema, 1973).
The Modified Evans Blue Dye Test (MEBDT) was adopted to provide more specific detail on the impact of consistency on aspiration. In this approach, the dye is mixed with test foods including ice chips, liquid, and puree. Suctioned tracheal secretions are compared before and after each food item; the presence of dye indicates aspiration. This procedure identifies copious amounts of aspiration, but it is not a reliable choice for patients with mild or trace aspiration (Brady, Hildner, & Hutchins, 1999; Donzelli, Brady, Wesling, & Craney, 2001). The sensitivity and specificity of this procedure for predicting aspiration range from 82% to 95% and from 38% to 100%, respectively, with higher values noted when the procedure is used in individuals with head and neck cancer (Belafsky, Blumenfeld, LePage, & Nahrstedt, 2003; Winklmaier, Wüst, Plinkert, & Wallner, 2007).
Although not currently supported, the use of blue dye in enteral feeds was a common practice in intensive care units at the end of the last century (Metheny, Aud, & Wunderlich, 1999). If blue dye was present in the tracheal secretions, aspiration was suspected. This procedure fell out of favor due to the risk of toxicity, as well as its low sensitivity for aspiration (Potts, Zaroukian, Guerrero, & Baker, 1993; Liu, McIntyre, & Waters, 1989). In fact, in 2003, the Food and Drug Administration posted a health advisory against the use of blue dye in enteral feeds.
When choosing to use a blue dye protocol, special consideration needs to be given to the choice of dye. In the original procedure proposed by Cameron and colleagues (1973), Evans blue dye was used. Perhaps for reasons of cost or convenience, many facilities switched to commercially available blue food dye, which came in large bottles that could be used across multiple individuals. Eventually these multi-dose bottles were implicated in the spread of infection due to cross contamination (File, Tan, Thomson, Stephens, & Thompson, 1995). To eliminate this risk, it is now recommended that only sterile, single-use dye be used.
The use of blue dye is contraindicated in patients with sepsis or leaky gut because these conditions allow the blue dye to be absorbed into the blood stream. In certain amounts, absorption of the dye is toxic to some individuals and has been linked to several deaths (Gaur, Sorg, & Shukla, 2003; Lucarelli, Shirk, Julian, & Crouser, 2004; Maloney et al., 2001; Maloney et al., 2000; Walker & Hamlin, 1997). Blue dye is considered safe for most individuals when used in small doses.
Clinical Note
Individuals who are status post cardiac bypass or who have renal failure may be at greater risk for sepsis or intestinal hyperpermeability ("leaky gut"), and therefore may not be ideal candidates for blue dye testing.
Glucose Oxidase Strip Method
Glucose dipsticks are designed to identify sugar in the urine. Because food breaks down into sugar, these same dipsticks can be applied to tracheal secretions: a high sugar concentration in the secretions indicates potentially aspirated material. The basic premise is that tracheobronchial secretions normally contain less than 5 mg of glucose per deciliter; sugar concentrations higher than that may indicate aspiration. Simply stated, the dipstick is inserted into the tracheal secretions, taking care not to contact any blood in the region. The procedure is typically used in patients with tracheostomy tubes who are receiving enteral nutrition. Limitations include the fact that blood contains glucose; blood at the tracheotomy site is not uncommon and may yield inaccurate results (Metheny & Clouse, 1997). Further, the procedure has poor sensitivity when individuals are being fed low-glucose formulas; if additional glucose is added to the feeding, sensitivity increases (Hussain, Roy, & Young, 2003; Young, 2001). Finally, detection of aspiration is an off-label use of these strips, and their accuracy in measuring glucose in tracheal secretions has not been evaluated.
Clinical Note
Although glucose is not typically present in endotracheal secretions, it is measurable in patients with hyperglycemia or epithelial inflammation (Philips, Meguer, Redman, & Baker, 2003; Metheny et al., 2005).
Although earlier reports found relatively high sensitivity for this procedure (Potts, Zaroukian, Guerrero, & Baker, 1993; Montejo-Gonzalez, Perez-Cardenas, Fernandez-Hernandez, & Conde-Alonso, 1994), this support has not been replicated in recent years. Thus, glucose oxidase dipsticks should be used sparingly as a screen for the presence of aspiration, or reserved for select patient populations whose feedings contain added glucose. Current research is evaluating the use of markers other than glucose, such as pepsin, as indicators of endotracheal aspiration (Sole et al., 2014).
Clinical Note
When sampling secretions to dip the strip, avoid blood, as it is higher in glucose than tracheal secretions and may alter the accuracy of the results.
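The interpretation of a dipstick reading described above reduces to a threshold check against the normal glucose concentration of tracheobronchial secretions, plus a validity check for blood contamination. The sketch below is a hypothetical illustration of that logic, not a validated clinical tool; the 5 mg/dL threshold comes from the premise stated earlier in this section.

```python
# Hypothetical decision rule for the glucose oxidase strip screening.
# Tracheobronchial secretions normally contain < 5 mg/dL of glucose,
# so higher readings may indicate aspirated (glucose-containing) material.
# This is an illustration only, not a validated clinical protocol.
GLUCOSE_THRESHOLD_MG_DL = 5.0  # normal upper limit cited in the text

def interpret_dipstick(secretion_glucose_mg_dl: float,
                       blood_in_sample: bool) -> str:
    """Interpret a single tracheal-secretion dipstick reading."""
    if blood_in_sample:
        # Blood contains glucose and invalidates the reading.
        return "invalid sample: repeat without blood contamination"
    if secretion_glucose_mg_dl >= GLUCOSE_THRESHOLD_MG_DL:
        return "possible aspiration: refer for dysphagia evaluation"
    return "pass: glucose within normal range for tracheal secretions"

print(interpret_dipstick(12.0, blood_in_sample=False))
print(interpret_dipstick(2.0, blood_in_sample=False))
```

As the text notes, a "pass" from this rule is unreliable in patients fed low-glucose formulas, which is why sensitivity improves only when glucose is added to the feeding.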
Pediatric Screening
While many screenings have been developed for use with individuals status post-stroke, dysphagia screenings are also used, albeit with less frequency, in the pediatric population. Most public schools utilize a formal dysphagia screening process prior to an evaluation. In addition to checklists focused on identifying red flags that indicate a risk of dysphagia, pediatric dysphagia screenings have been performed using the 3 oz water swallow challenge (Yale Swallow Protocol, Suiter & Leder, 2008). Although the screening does not address areas beyond swallow safety, such as swallow efficiency, the 3 oz water challenge test accurately identifies the presence of aspiration in pediatric individuals about 75% of the time (Suiter, Leder, & Karas, 2009).
Due to the impact of medical events on developmental milestones, as well as the risks associated with feeding or swallowing deficits, the pediatric population requires special consideration when deciding to perform a screening. Box 3.5 lists variables for consideration when choosing to screen (as opposed to evaluate) a pediatric individual. Consideration should be given to the age of the individual, as well as to their anatomical, motor, mental, and social development. Signs and symptoms of dysphagia in infants and children may differ from those in adults. Rejection of food, food selectivity, gagging with or without items in the mouth, or use of an open mouth posture during play or rest may all be indicators of dysphagia.
Box 3.5: Variables to consider when choosing to screen or evaluate a pediatric individual
Screening of pediatric patients may also be performed by trained parents and caregivers. In this scenario, parents or caregivers are taught to observe the incidence of clinical signs such as stridor, wheezing, or rattling chest sounds. Benfer and colleagues (2015) propose the development of online trainings to improve parent report of dysphagia. Parents of "at-risk" children can be trained and then serve as reporters when further investigation is required.