Is it possible for students with significant cognitive disabilities (SCD) to learn to read? This question was asked by Lemons, Zigmond, Kloo, Hill, Mrachko, Paterra, Bost, and Davis in the article Performance of Students With Significant Cognitive Disabilities on Early-Grade Curriculum-Based Measures of Word and Passage Reading Fluency (2013, p. 409).

Lemons and colleagues (2013) state that “although evidence regarding effective practices for teaching reading to children with SCD has increased…there have been relatively few studies of whether literacy goals for these students can be accomplished” (p. 409).

Reading these lines, I was baffled.

What do they mean, I thought, when they say that evidence for an educational practice has increased, but then say that they don’t know whether the goals linked to those practices “can be accomplished”?

If something is shown to be effective, wouldn’t it have to result in accomplishment of the goals? Otherwise, how can it be considered effective?

Lemons and his colleagues are researchers at the University of Pittsburgh, in the United States, where there is legislation similar to Manitoba’s Bill 13, Appropriate Education Programming.

The American legislation, the No Child Left Behind Act of 2001, ultimately has the same requirements as ours for providing equal opportunity in education to all students. It requires that “all students – including those with SCD…participate in an accountability system that holds schools responsible for teaching academic content to everyone” (p. 409).

The goal of this system was to determine whether a sufficient number of students were meeting outcomes at their grade level (Lemons et al., 2013, p. 409).

When this legislation was brought in, however, it was clear that some students (those with significant disabilities) needed to be tested against different standards than those used with the general school population. To meet this need, each state created the AA-AAS: “alternate assessments based on alternate academic achievement standards” (p. 409).

Lemons and colleagues reflected that, even though a different assessment tool could be used to measure growth for various students with disabilities, academic content still had to be taught to these students, including literacy instruction. Teachers also had to be able to show measurable progress among students in this population towards becoming readers if they were to fulfill their obligations under the No Child Left Behind Act.

Upon reading the Lemons article a second time, I have come to a hypothesis as to the meaning of the authors’ statement that I had found so confusing.

I have come to see that, essentially, it has been difficult for educators to create realistic literacy goals for students with disabilities, and also to measure the growth that does occur when students with SCD are provided with evidence-based reading interventions.

The authors stated that not enough is known about the academic skills of students with various disabilities. They reasoned that teachers would be better equipped to create realistic, appropriate goals if they had better data on the existing skills of this population of students and could measure progress more accurately and frequently (Lemons et al., 2013, p. 410).

One difficulty is that when reading tests designed for the typical student population are used with students with disabilities, it is not clear which tests to use or at what grade level: “although there are established benchmarks or targets for performance…for students who are performing at or near grade level, it is much less clear how to evaluate the performance of students with SCD when they are assessed with early grade-measures (e.g., an eighth grader’s score on a second-grade oral reading fluency passage)” (p. 411).

Essentially, it is hard to tell whether student progress has occurred when the assessment tools aren’t sensitive enough to measure the smaller gains that occur over time for students with significant disabilities (p. 423).

Thus, there are interventions that are shown to work, to be effective with this population; however, sometimes the data may indicate, incorrectly, that no progress has been made.

This observation of the mismatch between improvement in early literacy skills in students with SCD and the tools that measure reading achievement rang true for me. I remember having to report our school’s literacy data to our divisional superintendent following a year of intense reading intervention with many students in the school.

In October of that year, I had used the Jerry Johns Basic Reading Inventory, which is an informal reading inventory (IRI) used by teachers to “evaluate a number of different aspects of students’ reading performance” (International Reading Association, n.d.).

The majority of the students I had worked with daily over the course of the school year had scored at the Pre-Primer (Kindergarten) level when I tested their reading levels in October. When I assessed them again in May, I discovered to my dismay that they scored at the Kindergarten level again! I was certain that these students had made substantial gains, yet the data did not reflect that. How could this be?

At the start of the year, the students in question had not yet developed the alphabetic principle, i.e., the ability to match letters with their corresponding sounds. By the end of the year they knew the sounds of all of the letters of the alphabet and were beginning to decode two- and three-letter words accurately. I knew that the students had made excellent progress, and I had considered my intervention to be successful. I had seen the students develop the ability to read decodable three-letter words! How could it be that when it came time to report on progress, the data showed almost no growth?

I suppose this must be what Lemons and colleagues (2013) had observed themselves when they sought to find out which assessment tools would be most effective in measuring the reading skills of the 7,440 students with significant cognitive disabilities (SCD), learning disabilities (LD), and autism whom they assessed (p. 412).

Their conclusion was that a better understanding of the reading skills of this population of students is needed. Additionally, they found a need for further studies into establishing “which measures are most appropriate to capture growth” (p. 423). These studies could “evaluate the frequency at which the measures should be administered to most efficiently capture growth, and provide a better understanding of expected rates of progress for students with SCD” (p. 423).

Allor, Mathes, Roberts, Cheatham, and Al Otaiba (2014) share the desire for an assessment tool that would do justice to the growth that does occur, however minimal, over the course of many months when working with students with SCD. They conclude, in their study of the effectiveness of scientifically based reading instruction with students with SCD, that “more sensitive measures are needed to determine when small amounts of progress are being made so teachers and students will recognize the results of their hard work and continue instruction that is effective” (p. 304).

I would also find it especially helpful to have a trajectory of skills identified that could be used to support lesson planning and the progression of skills along a predetermined track, if such a thing could be created. This article made me question how the current trajectories for literacy development could be further broken down and stratified, and how this could benefit both special education teachers and the students with whom they work.

Do you follow a specific trajectory or sequence when teaching students with SCD to read?  What system do you use?

References

Allor, J. H., Mathes, P. G., Roberts, J. K., Cheatham, J. P., & Al Otaiba, S. (2014). Is scientifically based reading instruction effective for students with below-average IQs? Exceptional Children, 80(3), 287-306.

Lemons, C. J., et al. (2013). Performance of students with significant cognitive disabilities on early-grade curriculum-based measures of word and passage reading fluency. Exceptional Children, 79(4), 408-426.

International Reading Association. (n.d.). Reading Rockets. Retrieved February 8, 2020, from https://www.readingrockets.org/article/critical-analysis-eight-informal-reading-inventories