ASHA has a great article on dynamic assessment that offers a clear definition:
“Dynamic assessment (DA) is a method of conducting a language assessment which seeks to identify the skills that an individual child possesses as well as their learning potential. The dynamic assessment procedure emphasizes the learning process and accounts for the amount and nature of examiner investment. It is highly interactive and process-oriented.”
(ASHA, 2021, “Dynamic Assessment”)
Dynamic assessment allows the SLP to take an active role in the assessment in order to determine the student’s current skills, as well as their potential for learning.
Dynamic assessment is particularly useful for students from diverse cultural or linguistic backgrounds because it is NOT static or standardized, yet it can still be used to differentiate between a language disorder and a language difference.
The basic premise is that students who are able to make rather large changes (e.g., better performance) in a short amount of instructional time likely have a language difference, not a disorder. Children who are more likely to have language disorders take longer to learn new skills despite instruction, or they may have difficulty generalizing skills.
So how do we go about distinguishing a disorder from a difference?
In general, the framework of dynamic assessment is test → teach → retest. This is a familiar framework for most SLPs: we gather our baseline data, provide intervention and/or a mediated learning experience, then measure the same skill again, hopefully seeing growth after intervention. The fluid part of dynamic assessment is that the SLP can adjust therapy and/or strategies during the teaching portion of the dynamic assessment.
There are a couple of ways language sample analysis can be used, depending on the target skill. One example is narrative skills: learning how to tell a good story. In this case the language sample could be used as the test and retest parts of dynamic assessment. We are assuming that the student’s narrative skills are weak, so narratives are the target skill.
Baseline data could be collected using the SALT reference database story retell elicitations (test), with the same elicitation repeated after the mediated learning experience (retest). The Narrative Scoring Scheme (NSS) could be used to measure progress in narrative macrostructure after teaching sessions. The great part about the NSS is that SALT has normative data, so scores can be compared to age-matched peers. Alternatively, SLPs may have another way to measure narrative skills using a different scoring rubric. Maybe the SLP is only interested in seeing whether the “Five Ws” were covered (e.g., who, what, where, when, why).
Here is an example of test-retest using a Doctor De Soto (Steig, 1983) narrative story retell.
The story retell was used as the test and retest. The target was narrative skills, as this was identified as a possible area of challenge during the referral process. Looking at the composite scores from the “test” and “retest” language samples, we can see that the student increased his overall NSS score by 8 points, with improvements in the categories of “Introduction,” “Character Development,” “Mental States,” “Referencing,” and “Cohesion.” So with instruction and increased opportunity to use narrative skills in a mediated learning experience with scaffolding and supports, the student was able to improve his narrative skills significantly in a short period of time. In this case the use of dynamic assessment suggests that the student may NOT have a language impairment: the progress made within the teaching phase suggests that the student may just need additional supports (e.g., use of a graphic organizer, additional time to organize thoughts, etc.) in the classroom when relaying information using narrative language.
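For readers who like to see the arithmetic spelled out, the test/retest comparison above boils down to summing category scores and comparing composites. Here is a minimal sketch in Python; the category names follow the NSS, but the individual scores are hypothetical numbers chosen to mirror the 8-point gain described above, not actual SALT output.

```python
# Hypothetical NSS category scores for a test/retest comparison.
# Each NSS category is scored on a 1-5 scale; these numbers are
# illustrative only, not real student data or SALT output.
test_scores = {
    "Introduction": 2, "Character Development": 2, "Mental States": 2,
    "Referencing": 3, "Conflict Resolution": 3, "Cohesion": 2, "Conclusion": 3,
}
retest_scores = {
    "Introduction": 4, "Character Development": 4, "Mental States": 3,
    "Referencing": 4, "Conflict Resolution": 3, "Cohesion": 4, "Conclusion": 3,
}

def composite(scores):
    """The composite NSS score is the sum of the category scores."""
    return sum(scores.values())

gain = composite(retest_scores) - composite(test_scores)
improved = [c for c in test_scores if retest_scores[c] > test_scores[c]]

print(f"Composite gain: {gain}")   # 8-point gain, as in the example above
print("Improved categories:", improved)
```

The same pattern works with any rubric: score the baseline sample, score the post-teaching sample with the identical rubric, and look at both the composite change and which individual categories moved.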
Language sample analysis is useful for a plethora of other language skills in dynamic assessment. Here are a couple more examples:
- Grammatical morpheme use. Perhaps your data from standardized testing indicated errors in, or little use of, grammatical morphemes. SALT easily generates reports on morphemes, so again, a language sample could be collected as a baseline (test) and again after the mediated learning experience (retest). A conversational language sample would be an ideal way to look at morphemes: it can be collected across different environments, which helps in assessing whether the skill is transferring to different settings and situations. The key here is flexibility!
- Vocabulary. Vocabulary usage always seems like a huge moving target that is hard to tackle, for me at least, because it is so dependent on exposure and background. However, language sample analysis can look at the number of different words, or pull up grammatical categories of word types (e.g., verbs, pronouns). I recall one student I was working with who rarely used verbs in her sentences. It was an odd phenomenon for an upper elementary student NOT to use verbs. However, it was easy to track progress using the Grammatical Categories Report! Here’s an example of the Grammatical Categories Report, listing all the verbs in two samples. It’s easy to link two language samples together and then generate reports with side-by-side data.
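SALT produces number-of-different-words and grammatical-category reports automatically, but the underlying idea is simple enough to sketch. The tiny verb list and the sample utterances below are hypothetical, for illustration only; a real analysis would rely on SALT's transcription conventions and word lists rather than this toy lookup.

```python
# Rough sketch of comparing two samples on number of different words (NDW)
# and verb use. The verb list and utterances are hypothetical examples,
# NOT SALT's actual method or data.
VERBS = {"is", "went", "eat", "run", "said", "want", "see", "like"}

def analyze(utterances):
    """Return simple word counts for a list of utterance strings."""
    words = [w.lower().strip(".,!?") for u in utterances for w in u.split()]
    return {
        "total_words": len(words),
        "ndw": len(set(words)),  # number of different words
        "verbs": sorted({w for w in words if w in VERBS}),
    }

sample_1 = ["The dog run fast.", "I see the dog."]
sample_2 = ["The dog went home.", "I want to eat lunch.", "She said hi."]

# Side-by-side comparison across the two linked samples
for name, sample in (("Sample 1", sample_1), ("Sample 2", sample_2)):
    print(name, analyze(sample))
```

Comparing the two result dictionaries side by side mirrors what the linked-sample reports do: the same measures, computed on the baseline and post-teaching samples, so growth in vocabulary diversity or verb use is easy to spot.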
As we head into the new school year, keep dynamic assessment in your toolkit as part of your informed assessment! Language sample analysis can help simplify the process and provide great data for differentiating a language disorder from a language difference!
Dynamic Assessment Resources:
Here are a few research articles on dynamic assessment. If you have access to ASHAWire publications through your ASHA membership, you can read these free of charge.