Published at: 2022-01-17
When including language sample analysis as part of a comprehensive evaluation to report language skills across settings and contexts, I always used to format my diagnostic reports with the following headings: “Observations and Parent/Teacher Report,” “Standardized Testing,” “Informal Assessments,” and of course, “Conclusions.”
Usually, I included my data on language sample analysis under the “Informal Assessments” section. Maybe it’s time to change that.
Typically, we consider standardized testing to come from measures like the Clinical Evaluation of Language Fundamentals (CELF) or the Comprehensive Assessment of Spoken Language (CASL), which measure fragments of language skills independently through multiple subtests. However, after reading through a newly published research article, I am rethinking how I can best include information on language samples that were elicited and analyzed using the SALT elicitation protocols and SALT reference databases in my reports (Tucci et al., 2021).
So, is SALT standardized or not? Well, that’s always been a difficult question to answer. Certainly, SALT’s elicitation protocols are standard. That is, there is a strict sample collection procedure for each language sampling context (conversation, narration, expository, etc.).
But having standard procedures is only half of the challenge of standardization. We also need to have a standard mechanism for interpreting individual results. In the past, SALT (and other methods of language sample analysis) had not yet demonstrated this type of standardization. In particular, the dynamic norming procedure that SALT utilizes was untested.
SALT uses dynamic norming, which is “the process by which clinicians select a subset of normative database samples matched to their individual client’s demographic characteristics” (Tucci et al., 2021, p. 1). By using dynamic norming, SLPs can build a more precise comparison set from an age band centered on the individual speaker, so the comparison set changes for each client.
As an example, a language sample elicited from a child age 5;0 can be compared to children ages 4;10 to 5;2, a more precise comparison than if that same child were compared to children ages 5;0 to 5;11. The standardized tests we are used to giving have static norms with pre-selected normative age bands, often in year-long intervals. Those static bands would compare the 5-year-old student to all children ages 5;0 to 5;11 or, at narrowest, 5;0 to 5;6. The smaller comparison set used in the dynamic method may more accurately capture the speaker’s skills relative to age-matched peers. Dynamic norming also allows for more precision in tracking change over time, especially in young speakers.
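To make the idea concrete, here is a minimal sketch of what dynamic norming looks like in principle, assuming a toy reference database and a hypothetical dynamic_comparison_set helper. This is an illustration of the concept only, not the actual SALT software or its reference data.

```python
from statistics import mean, stdev

def dynamic_comparison_set(samples, client_age_months, band_months):
    """Select reference samples whose speaker age falls within
    +/- band_months of the client's age (hypothetical illustration)."""
    return [s for s in samples
            if abs(s["age_months"] - client_age_months) <= band_months]

# Hypothetical reference database: each entry is one transcript's
# speaker age (in months) and a single measure, e.g., mean length of utterance.
reference = [
    {"age_months": 58, "mlu": 4.1},
    {"age_months": 60, "mlu": 4.4},
    {"age_months": 61, "mlu": 4.6},
    {"age_months": 63, "mlu": 4.9},
    {"age_months": 71, "mlu": 5.4},
]

# A client aged 5;0 (60 months) with a +/- 4-month band is compared only
# to the 56-64 month samples; the 71-month sample is excluded.
subset = dynamic_comparison_set(reference, client_age_months=60, band_months=4)
client_mlu = 3.2
z = (client_mlu - mean(s["mlu"] for s in subset)) / stdev(s["mlu"] for s in subset)
print(f"Comparison set size: {len(subset)}, client z-score: {z:.2f}")
```

The point of the sketch is simply that the comparison set is rebuilt around each individual client’s age rather than pulled from a fixed, pre-printed norms table.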
Statistically, is it valid?
The researchers addressed two basic questions: 1) how many comparison samples are needed for stable results, and 2) whether dynamic norms are stable enough (accounting for the standard error of measurement) to be used for clinical decision making. Six transcript measures were calculated.
The data were analyzed from the SALT Narrative Student Selects Story and English Conversational databases. These two elicitation protocols are the least restrictive in terms of content. Conversations are open to a variety of topics, and in the Narrative Student Selects Story context, the speaker chooses a familiar story to relay to the clinician.
Regarding the first research question, the authors determined that “Comparison samples of at least 35 transcripts were adequate for stable clinical comparison,” with a ±4-month age-match band for conversational samples and a ±8-month age-match band for Narrative Student Selects Story (Tucci et al., 2021, p. 11). What this means clinically is that SALT can provide statistically stable data without a large comparison group, as measured by the SEM on six clinically relevant transcript metrics. The authors state that “the more homogenous age bands likely offset the need for the larger numbers that are sometimes recommended” (Tucci et al., 2021, p. 12).
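As a rough illustration of what “accounting for the standard error of measurement” means in practice, the sketch below applies the textbook SEM formula to a hypothetical measure. The study’s actual stability analysis is more involved than this, and every number here is made up for illustration.

```python
import math

def standard_error_of_measurement(sd, reliability):
    """Classic SEM formula: SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

def score_band(observed, sd, reliability, z=1.96):
    """Approximate 95% band around an observed score, using the SEM
    (illustrative values only, not the study's data)."""
    sem = standard_error_of_measurement(sd, reliability)
    return observed - z * sem, observed + z * sem

# Hypothetical numbers for a single transcript measure:
low, high = score_band(observed=4.2, sd=0.9, reliability=0.85)
print(f"Observed 4.2 falls roughly between {low:.2f} and {high:.2f} once SEM is considered")
```

The clinical takeaway is that a score is only interpretable if the measurement error band around it is narrow enough to support a decision, which is the kind of stability the authors evaluated for the dynamic comparison sets.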
So, what does this mean for my clinical practice?
I take the results of this research to mean that language samples collected using these SALT elicitation protocols (Conversation or Narrative Student Selects Story) can be considered a standardized assessment. Additionally, I can confidently report precise data based on narrower age bands that better capture change in language skills over time. In the next diagnostic report I write, I just may move my language sample analysis data to the “Standardized Assessments” section of my report!
If you have access to ASHA Wire, log on and check out the complete research article:
https://doi.org/10.1044/2021_JSLHR-21-00227
Source:
Tucci, A., Plante, E., Heilmann, J., & Miller, J. (2021). Dynamic norming for Systematic Analysis of Language Transcripts. Journal of Speech, Language, and Hearing Research, 1-14.