

The History of SALT Software

by Jon F. Miller, Ph.D., CCC-SLP
Emeritus Professor, University of Wisconsin-Madison
CEO, SALT Software LLC 

In the fall of 1980 the Blue Book was published: Assessing Language Production in Children: Experimental Procedures, which detailed the language assessment methods we had developed to document the communication abilities of children with a variety of developmental disabilities. At that time, computers were making the transition from big mainframe machines to more accessible personal computers with the introduction of the Apple II and the competing IBM PC. The language sample analysis process, as detailed in Chapter 3 of the Blue Book, required recording a sample of language, transcribing it by hand, categorizing, counting, and summarizing various linguistic features by hand, and then interpreting the results. This took a great deal of time, 20 to 30 hours depending on the number and complexity of analyses, the size of the sample, and the severity of the child's language problems. It was clear at this point that, while the process yielded a detailed description of children's language performance that could be compared with the developmental trajectories of typical children, the effort required limited the utility of the methodology. The time commitment demanded a solution. Computers were the obvious first choice, since what computers did best was counting; perhaps an analysis like MLU (mean length of utterance) could be done automatically. Once a transcript was entered, it appeared possible to calculate a number of measures.

That same fall, an undergraduate student stopped by my office at the Waisman Center to inquire about a job. He looked like a football player at 6'4" and about 220 lbs. I asked him what he was interested in, and he replied that he was a computer science major interested in any kind of research work I might have available. The usual work for a computer person was formatting data sets for statistical analysis, which might involve database construction or writing special code to create data arrays of specific variables. "Would it be possible," I asked, "to develop a program to count words and morphemes from a transcript of children talking?" I brought out several handwritten transcripts of children talking with parents and examiners from our clinic, and he said he would like to give it a try if I would tell him exactly how the calculations were to be done. I said, "I can do that, since they have just been written down in the book," handing him a copy of the brand-new Blue Book. "Read Chapter 3 and see what you can do with it." I didn't hear from him for three weeks and was beginning to think I would never hear from him again when he turned up with a big printout (remember the 14-inch-wide printer paper with the tractor-feed holes on each side?). He handed me the paper with a printed transcript and a table listing the MLU, number of different words, and TTR (type-token ratio) for the child and the adult speaker. Needless to say, I was stunned. These measures, it turns out, were relatively easy to calculate, but the details of the calculations (the MLU was based on words, not morphemes) were the focus of this work for the next several years. The focus returned to the transcription process: developing a coding system for what constitutes a word, a morpheme, and an utterance with the most transparent system possible.
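In spirit, the three measures on that first printout reduce to simple counting over a tokenized transcript. The sketch below is only an illustration of that idea, not SALT's actual method: SALT's real conventions (morpheme segmentation, maze exclusion, utterance segmentation rules) are far more detailed, and the example utterances are invented.

```python
# Hypothetical sketch of the three measures on the first printout:
# MLU in words, number of different words (NDW), and type-token ratio (TTR).
# SALT's actual transcription and counting conventions are far more elaborate.

def analyze(utterances):
    """utterances: list of strings, one per utterance by a single speaker."""
    tokens = [w.lower() for u in utterances for w in u.split()]
    types = set(tokens)
    mlu = len(tokens) / len(utterances)  # mean length of utterance, in words
    ndw = len(types)                     # number of different word types
    ttr = len(types) / len(tokens)       # type-token ratio
    return mlu, ndw, ttr

# Toy example (invented utterances, not from a real transcript):
mlu, ndw, ttr = analyze(["the dog ran", "he ran fast", "dog go"])
```

Here the eight tokens across three utterances give an MLU of 8/3, six distinct word types, and a TTR of 6/8 = 0.75.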
At that time, word processing was not readily available, so we relied on computer programming editors and text formatting programs to enter transcripts. This was not a user-friendly operation and was every bit as daunting as the hand-calculation process. It appeared that our solution to the burden of hand calculation had produced an equal burden of entering the transcript into the computer.

We were confident that training and practice would be the solution to this dilemma, and to some extent this was true. Our students, however, did not agree, as I recall. We were able to reduce the time required for these analyses enough to embolden me to contact the Madison School District to propose a collaborative effort to improve language assessment procedures. A number of the school SLPs were so excited about the possibilities that we organized a series of Saturday training sessions on the Waisman Center mainframe computer to learn the editor and our initial analysis programs. These turned out to be very frustrating sessions for everyone, given the complexity of the editors and the need to enter all format commands by hand just to create a viable transcript. The possibilities of the process were sufficiently exciting, however, to motivate a core of committed individuals to work on the project. We began to meet monthly to consider how to move the process forward and create improved assessment procedures. We started by exploring a transcription service in which university students would enter the transcripts into the computer and produce the analyses for the therapists in the group. This produced a debate that would last for over a year: "Could a third party produce an accurate transcript?" and "Could transcription be done by a non-professional?" About this time, one of our students came up with a name for the project: "Systematic Analysis of Language Transcripts," or SALT.

At the same time, development of the software proceeded on two fronts: improving transcript entry, including advances in the transcript format that would allow expanded analyses, and evaluating the new personal computers to see if they could be made to do our analyses. Did these small computers have sufficient power to analyze a transcript, and what tools were available to enter a transcript into the computer? The PCs had an immediate impact on our transcript entry problems by bringing word processing programs for text entry. The difficulty with these programs was that their file formats were not compatible with what the programming languages used for the analysis routines required. At this time we were lucky to hire Ann Nockerts as our programmer for the project, and she has been a major force in it ever since. Our strategy was to work out the transcription conventions on the Waisman mainframe and then see if they would work in the PC formats. Advances were complicated by the lack of standards in the computing field for text formats, significant differences in operating system requirements for text, and the tools available for programming, e.g., the programming languages available across operating systems. Early SALT programs were written in C for the Harris mainframe, in Apple Pascal for the Apple II, and for DOS on the IBM PC. Advances in the SALT analyses and ease of transcript entry were tied to advances in each of these areas.

Over the past 20 years we have produced countless versions of SALT, each one geared to a particular computer and each one easier to use than the last. The major problems encountered in this process are the time required for transcript entry and analysis, the standardization of the transcript itself so that the relevant features can be identified by the computer, and the development of appropriate comparison data to determine the level of performance of school-age children on all aspects of productive language. We believe we have made significant progress in these areas and that we can continue to overcome the remaining barriers to using language sample analysis efficiently and effectively in identifying and monitoring change in children with disordered language performance.

© Salt Software LLC. All Rights Reserved.