According to ASHA, evidence-based practice (EBP) in speech and language proceeds in a four-step process:
- Step 1: Framing the Clinical Question
- Step 2: Finding the Evidence
- Step 3: Assessing the Evidence
- Step 4: Making the Clinical Decision
The examples ASHA provides to illustrate this four-step process highlight its use in determining the best intervention for a given situation. ASHA stresses the need to bring evidence into your practice, but doesn’t really illustrate your practice as a whole. Intervention is neither the beginning nor the end of your interaction with a client. EBP as illustrated in ASHA’s materials skips past your initial determination of an individual client’s issues and stops short of your actually engaging in an intervention.
So how, as SLPs, are we supposed to actually integrate EBP into our practice? Where do we start? I suggest that we need to draw attention to the role that objective and measurable assessment plays as a necessary part of the EBP process. Fortunately for us, we can actually bring the tools of evidence-based practice to bear on assessment decisions as well.
Let’s create an example here to illustrate EBP’s role in assessment. Everyone, meet Jacqueline, a third-grader who has been referred to us by her teacher. Her teacher reports that Jacqueline struggles to respond to questions in class even when she seems to know the answer.
So where do we start? What is our first ‘clinical question’? ASHA’s examples usually start with some variation of: ‘Should I use intervention A or intervention B?’ But we are clearly not at that point yet. We are at the point of asking: ‘How do I assess this child’s language difficulties?’ Framing the initial clinical question involves carefully describing Jacqueline’s communication struggles and their implications for the family, school, and community. We are looking for correspondence between several factors:
- What parents, teachers, and other stakeholders report as communication challenges;
- Our own initial visit with the client;
- Research-supported assessment techniques.
Look familiar? It corresponds nicely with ASHA’s favored diagram of the EBP process (image to the left).
Let’s imagine that we asked Jacqueline to complete a narrative story retell task where we as the listeners already know the content. This task revealed that Jacqueline used very frequent filled pauses, repetitions, revisions, and non-specific vocabulary throughout her story retell. We can tell that much from our clinical impressions. But EBP requires documenting the language production in detail, bringing to bear best practice as established in the scientific literature. So how do we come up with an objective and measurable assessment?
Well, you are reading the SALT blog, so the answer shouldn’t surprise you very much: it’s time for language sample analysis! Transcribing the story retell, capturing the vocabulary as well as filled pauses, repetitions, and revisions (i.e., mazes) provides the data necessary to assess if Jacqueline’s speech is within typical expectations for her age and grade.
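To make the idea of quantifying mazes concrete, here is a minimal sketch of counting maze words versus fluent words, assuming SALT’s convention of enclosing mazes in parentheses (the utterances below are invented for illustration; this is not SALT’s implementation):

```python
import re

def maze_stats(utterances):
    """Count maze words (enclosed in parentheses, per SALT convention)
    and fluent words (everything outside the parentheses)."""
    maze_words = 0
    fluent_words = 0
    for utt in utterances:
        # Words inside parentheses are mazes: filled pauses, repetitions, revisions
        mazes = re.findall(r"\(([^)]*)\)", utt)
        maze_words += sum(len(m.split()) for m in mazes)
        # Strip the mazes and count what remains as the fluent utterance body
        body = re.sub(r"\([^)]*\)", " ", utt)
        fluent_words += len(body.split())
    return maze_words, fluent_words

sample = [
    "(um the) the girl (w* w*) went to the store.",
    "she bought (uh) some stuff.",
]
print(maze_stats(sample))  # -> (5, 10): 5 maze words against 10 fluent words
```

A ratio of maze words to fluent words like this is the kind of objective figure that can then be compared against normative expectations.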
So, using SALT software’s Standard Measures Report and SALT’s dynamic norms for comparison, we are able to document that Jacqueline’s use of mazes was 2 standard deviations above expectations for her age and grade.
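“2 standard deviations above expectations” is simply a standard score. A tiny sketch of the arithmetic, using invented numbers (the 22% maze rate and the normative mean and SD below are hypothetical, not SALT’s norms):

```python
def z_score(value, norm_mean, norm_sd):
    """How many standard deviations the client's value sits from the normative mean."""
    return (value - norm_mean) / norm_sd

# Hypothetical figures: client at 22% maze words vs. an assumed norm of 10% (SD 6)
print(z_score(22.0, 10.0, 6.0))  # -> 2.0, i.e., two SDs above expectations
```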
But what is at the root of these problems? Can we find more specific evidence to support our impressions? Back to the language sample analysis! Is the problem one of limited vocabulary? Jacqueline’s vocabulary diversity was within normal limits in the language sample, as noted by Number of Different Words (NDW), Type-Token Ratio (TTR), and Moving-Average Type-Token Ratio (MATTR). Other standardized tests corroborated strong vocabulary comprehension and use. So that’s probably not it.
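For readers unfamiliar with these diversity measures, here is a rough sketch of how they are computed (simplified versions of my own, not SALT’s implementation; MATTR is conventionally computed over a 50-token moving window):

```python
def ndw(tokens):
    """Number of Different Words: count of unique word types in the sample."""
    return len(set(tokens))

def ttr(tokens):
    """Type-Token Ratio: unique types divided by total tokens."""
    return len(set(tokens)) / len(tokens)

def mattr(tokens, window=50):
    """Moving-Average TTR: mean of TTRs over a sliding window,
    which reduces TTR's well-known sensitivity to sample length."""
    if len(tokens) <= window:
        return ttr(tokens)
    ttrs = [ttr(tokens[i:i + window]) for i in range(len(tokens) - window + 1)]
    return sum(ttrs) / len(ttrs)

words = "the girl went to the store and the girl bought some stuff".split()
print(ndw(words), round(ttr(words), 2))
```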
Is the problem difficulty formulating sentences? Her sentences were typical in length and complexity, as noted by her Mean Length of Utterance (MLU) and Subordination Index scores. So, again, probably not it.
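MLU is classically counted in morphemes; a word-based proxy is easy to sketch (again a simplified illustration, not SALT’s implementation):

```python
def mlu_words(utterances):
    """Mean length of utterance in words: total words divided by number of
    utterances. (Morpheme-based MLU would additionally split bound morphemes.)"""
    counts = [len(u.split()) for u in utterances]
    return sum(counts) / len(counts)

print(mlu_words(["the girl went home", "she ran"]))  # -> 3.0
```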
How about trouble retrieving specific labels? Her sample included frequent use of non-specific vocabulary, such as “thing” and “stuff”, and she tended to use descriptions in place of a target word. SALT’s Database Maze Summary gives a breakdown of the content of Jacqueline’s mazes. The data reveal that she produced a high number of whole- and part-word repetitions and revisions, as well as a high number of filled pauses. The grammatical categories analysis revealed even more frequent use of non-specific vocabulary than our listening impressions had suggested. So we may theorize that Jacqueline has word-finding difficulty.
So far we have determined that our initial clinical impression matches parent and teacher reports of the problem, quantified the extent of the deficit, and identified the likely root of the difficulty. This gives us a scientific basis for documenting that these behaviors pose significant challenges for communication and learning. We have firmly established the need for intervention.
But we have done more than just that. We have also set ourselves up to begin the EBP process all over again. We have gathered the evidence that we need in order to frame the next clinical question: ‘Should I use intervention A or intervention B?’ ASHA suggests using the PICO format for clinical questions, in which you initially propose two possible interventions (or the comparison between an intervention and non-intervention) and then turn to the scientific literature to determine which is best practice in addressing your client’s situation. This presumes that you know what the problem is and that you have an idea of how to address it. In effect, it assumes that you have already done a rigorous assessment.
In addition, the PICO format has us specify a desired outcome. This again assumes that you have done a rigorous assessment and that you will continue to assess as the intervention proceeds. How else could you determine whether progress is being made toward the desired outcome?
So now that we know that we have a third-grader with a significant word-finding disorder that impedes her learning in the classroom, what would an appropriate clinical question look like?
Let’s look at each element of the PICO format in turn:
- Population: 3rd graders
- Intervention: Semantic therapy
- Comparison: Word finding accommodation in the classroom
- Outcome: Functional communication in a classroom setting
Question: For 3rd graders with word finding disorders, is semantic therapy a more effective means of achieving functional communication in the classroom than word finding accommodation?
To sum up: EBP requires evidence from client assessment in order to document the problem and show evidence for the effectiveness of the proposed intervention. Language sample analysis provides the tool to quantify your listening impressions and match them to parent and teacher reports. Quantifying your clinical impressions validates the EBP process.
Jacobs et al. (2012). “Tools for Implementing an Evidence-Based Approach in Public Health Practice.” Medscape: Preventing Chronic Disease. Accessed 11 June 2017: http://www.medscape.com/viewarticle/766812_7
Jensen-Doss and Hawley (2010). “Understanding Barriers to Evidence-Based Assessment: Clinician Attitudes Toward Standardized Assessment Tools.” Journal of Clinical Child & Adolescent Psychology, 39(6). Accessed 11 June 2017: http://www.tandfonline.com/doi/abs/10.1080/15374416.2010.517169
Ebbels et al. (2012). “Effectiveness of Semantic Therapy for Word-Finding Difficulties in Pupils With Persistent Language Impairments: A Randomized Control Trial.” International Journal of Language & Communication Disorders (February 2012).