I can’t use LSA to qualify students because it isn’t standardized

Is LSA using SALT a standardized assessment? No. But we think you should be taking language samples anyway. We frequently hear from SLPs that they can’t – or don’t – use LSA because it isn’t standardized. We recognize that standardized tests are a necessary part of the process for qualifying students for services. We also recognize the debate over using standardized tests to make substantial decisions about programming and services. And, finally, we recognize that LSA using SALT Software does not fully meet the definition of “standardized.” But we’re close. And in our opinion, with your clinical expertise and the data SALT produces to support your clinical impressions, the standardized process behind SALT’s LSA is a strong argument for including it in your assessments.


From Merriam-Webster, definition of STANDARDIZED TEST:

a test (as of intelligence, achievement, or personality) whose reliability has been established by obtaining an average score of a significantly large number of individuals for use as a standard of comparison


Each of the processes SALT follows to elicit, transcribe, and analyze samples adheres to the guidelines for standardization, ensuring the process has rules and thus consistency across assessments. Where we fall short is that our databases aren’t large enough and don’t cover all the geographic areas and demographics typically represented in the more commonly used standardized tools in our field. In a perfect world, we would win big in the lottery. We would round up participants from every geographic corner of the U.S. and every demographic, as laid out by the U.S. Census. We would pay the participants handsomely for their time. We would pay the transcribers. And we would pay for a second pass of transcription on each sample to establish reliability, which we know would be excellent. But we haven’t won the lottery – yet!

So, for now, let us share the processes of LSA and why we believe those processes make an argument for using it in your diagnostic, regardless of whether your school district will accept it for qualifying/exiting criteria. When spoken language is in question, LSA is an essential part of the assessment process, designed to describe language production in everyday communication contexts. It is the only assessment that evaluates authentic communication from real-life events. LSA evaluates a broad range of expressive language achievements rather than the narrow range of skills targeted by many tests, e.g., expressive vocabulary. Cultural and linguistic bias can be entirely absent from LSA, as the examiner is in control of that realm. SALT has focused on standardizing the assessment process, from elicitation through transcription and analysis, while overcoming some of the major issues associated with standardized testing.

The elicitation process is standardized with explicit protocols for capturing conversation, narrative, expository, and persuasive language samples. This accomplishes two goals: recording consistent samples across speakers and allowing those samples to be compared with databases of typical speakers’ samples collected using the same protocol.

The transcription process is standardized with detailed conventions for coding words, utterances, morphemes, errors, pauses, speaking rate, and fluency to allow each feature of the language to be analyzed and reported consistently. The need for consistency is critical. For example, utterance boundaries are defined to ensure accurate mean length of utterance (MLU) calculations. And word roots are distinguished from inflected words to avoid counting words plus their inflections as “different” words for the number of different words (NDW) calculation. Standardizing the transcription process ensures that the software calculates every measure accurately and without confusion. Eliciting and transcribing language samples in the same “standardized” way brings us to the heart of SALT, the analysis.
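To make the NDW idea concrete, here is a minimal sketch (our own illustration, not SALT’s actual code) of counting different words by root. It assumes the SALT-style convention of marking a bound morpheme with a slash, as in “dog/s” or “walk/ed,” so an inflected word and its root count as the same word.

```python
# Illustrative sketch, not SALT's implementation: number of different
# words (NDW) counted from word roots. Slash-marked inflections
# (e.g. "dog/s", "walk/ed") are stripped so "dog" and "dog/s" are
# treated as the same word.

def word_root(token: str) -> str:
    """Strip a slash-marked inflection, leaving the root word."""
    return token.split("/")[0].lower()

def ndw(utterances: list[str]) -> int:
    """Count the number of different word roots across all utterances."""
    roots = set()
    for utterance in utterances:
        for token in utterance.split():
            roots.add(word_root(token))
    return len(roots)

sample = ["the dog/s bark/ed", "the dog run/s"]
print(ndw(sample))  # "dog/s" and "dog" count once: 4 different roots
```

Without the root convention, “dog” and “dog/s” would be tallied as two different words, inflating NDW.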

The analysis process is standardized through the accurate calculation of measures of syntax, semantics, verbal facility, intelligibility, and errors – each presented in statistical tables as well as in narrative format. At this point we should look into the analysis process in more detail to get a glimpse of how SALT ensures the accuracy of each measure. There are several settings which affect how each measure is calculated. The “transcript cut” specifies the portion of the transcript to use for all calculations, e.g., start to end, start to a specified timing line, or start to a specified number of words or utterances. Within the transcript cut, some measures are calculated using all utterances while others are based on a subset of the utterances (the “analysis set”). The default analysis set, for example, contains the complete and intelligible, verbal utterances – excluding abandoned and interrupted utterances, utterances with unintelligible segments, and nonverbal utterances. The combination of transcript cut and analysis set determines the utterances included in the analyses. But it doesn’t stop there. Within each utterance some measures, such as speaking rate, are calculated using all the words while other measures, such as MLU, exclude words within mazes (filled pauses, repetitions, and revisions). SALT calculates all language measures quickly and accurately following algorithms which define the utterances and words used for each language measure.
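The maze-exclusion rule can be sketched in a few lines. This is a simplified toy version (again, not SALT’s actual code): it assumes the SALT-style convention of enclosing mazes in parentheses, strips them before counting, and counts morphemes as the root plus each slash-marked inflection.

```python
import re

# Toy sketch of MLU in morphemes over an "analysis set" of utterances.
# Assumptions (illustrative, not SALT's implementation):
#  - mazes (filled pauses, repetitions, revisions) appear in parentheses
#  - each slash marks one bound morpheme, so "walk/ed" = 2 morphemes

def strip_mazes(utterance: str) -> str:
    """Remove parenthesized maze material before counting."""
    return re.sub(r"\([^)]*\)", " ", utterance)

def morphemes(token: str) -> int:
    """Root counts as 1 morpheme, plus 1 per slash-marked inflection."""
    return 1 + token.count("/")

def mlu(utterances: list[str]) -> float:
    """Mean length of utterance in morphemes, excluding maze words."""
    total_morphemes, utterance_count = 0, 0
    for utt in utterances:
        words = strip_mazes(utt).split()
        if not words:
            continue  # skip utterances that were entirely maze
        total_morphemes += sum(morphemes(w) for w in words)
        utterance_count += 1
    return total_morphemes / utterance_count

sample = ["(um I I) I want the ball/s", "he walk/ed home"]
print(mlu(sample))  # (5 + 4) morphemes / 2 utterances = 4.5
```

Note how the maze “(um I I)” contributes nothing to MLU; including it would overstate utterance length.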

Standardizing the elicitation, transcription, and analysis processes allows for comparison of your transcript to age and grade-matched samples selected from SALT’s reference databases. To ensure meaningful comparison, your sample must be elicited and transcribed following the same processes used to elicit and transcribe the database samples. Although SALT ensures accurate analysis through the use of built-in algorithms, there is an additional consideration which comes into play any time you compare samples of differing lengths. Some measures, such as NDW and number of errors, are affected by the number of words or utterances they are based on. To ensure comparable measures, your sample and the database samples must be equated by length for these calculations.

How does it work? First you select the database containing samples of appropriate speakers and elicitation protocol. Then you specify the age and/or grade range. Finally, you specify how you want the samples to be equated by length (same number of words, same number of utterances, same amount of time, or entire sample/task). When SALT calculates measures based on the same length, longer samples are cut at the specified length. Using the raw transcript values as the basis for comparison reflects the dynamic nature of the database comparison process: SALT selects a unique set of samples to compare against each transcript. This process offers a distinctive approach to creating the normative data set, ensuring the best possible match without resorting to statistical manipulation.
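The length-equating step can be illustrated with a short sketch. Everything here is our own illustration under stated assumptions, not SALT’s code or output format: samples are cut at a common word count, a length-sensitive measure (NDW) is computed on each cut sample, and the target is compared to the database values as a z-score.

```python
# Illustrative sketch of equating samples by length before comparison.
# Length-sensitive measures such as NDW are computed after cutting
# every sample at the same word count. The z-score comparison and all
# names here are assumptions for illustration, not SALT's output.
from statistics import mean, stdev

def ndw_at_length(tokens: list[str], n: int) -> int:
    """NDW computed over only the first n words of a sample."""
    return len({t.lower() for t in tokens[:n]})

def compare_to_database(target: list[str], database: list[list[str]]) -> float:
    """z-score of the target's NDW against length-equated database samples."""
    # Equate at the length of the shortest sample involved.
    cut = min(len(target), *(len(s) for s in database))
    target_ndw = ndw_at_length(target, cut)
    db_values = [ndw_at_length(s, cut) for s in database]
    return (target_ndw - mean(db_values)) / stdev(db_values)

target = "the big dog ran fast".split()
database = [
    "the cat sat on the mat today".split(),
    "a dog and a cat played here".split(),
    "we went to the park today ok".split(),
]
z = compare_to_database(target, database)
print(round(z, 2))  # 1.15
```

Cutting at a common length matters because a longer sample gets more chances to produce new words; comparing raw NDW across unequal lengths would confound vocabulary diversity with sample size.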

Can SALT’s approach to language sample analysis be considered standardized? The approach? Yes. And that’s an extremely important construct in LSA. We believe that the standardized process, paired with the clinical expertise of SLPs, outweighs any argument for eliminating LSA from the assessment of spoken language. Assessing connected speech from everyday communication events reveals where the speaker struggles and where he or she has facility. The outcomes of LSA can support the findings of standardized tests, can home in on areas of concern, can track therapy progress, and should be part of every assessment of spoken language.




July 2nd, 2018 | Solving barriers to implementation
