Are IQ Tests Necessary Anymore, & What is a Good Source when Carrying out Research, After All?

Have you had the experience of working closely with a student who has average intelligence, but is not able to read?  One student whom I will never forget, whom I will call Andy, was wise beyond his years.  In eighth grade, Andy chatted easily with adults about current events.  Yet Andy could not read.  He needed to have his books and assignments read aloud to him, and when it was time for tests and assignments, an adult recorded his responses.  In fact, his ELA teacher once told me that, at test time, she had to make sure that other students did not sit too close to him, for fear they would hear his answers and copy them.  He loved novel studies, but was not able to read above the Grade One level.

What is it like for a person to advance to high school without the ability to read early-elementary level text?  How different is it for a student with average intelligence, as opposed to someone with a significant cognitive disability, to face this reality?

Dr. Freeze (2020) describes the ability to read as existential.  If a person cannot read, it impacts their sense of self.  His description of the struggle to read, for kids like Andy, is insightful: “…The stakes are raised for non-readers when they enter school. More than any other variable, knowing how to read predicts academic and social competence, confidence, engagement and positive outcomes throughout the school years. If children do not learn to read at elementary school, they face an existential crisis. This crisis first appears when children fail to make the transition from “learning to read” in Grades 1, 2 and 3 to “reading to learn” in Grades 4, 5 and 6. During this transition, non-readers’ personal self-doubts, frustrations and reluctance to read evolve into blaming the teacher or the book, resistance to reading activities, pretend reading and learned helplessness. By high school, non-readers feel shame, inferiority and anger…It should be noted that some non-readers find success in work and life. However, I have never met a non-reading adult, successful or otherwise, who did not lament the fact that he or she had never learned to read” (Chapter 5, pp. 2-3).

Costello, Foss, King, Mann, Schupack & Wilkins (2015) concur:   “One’s language is very important to one’s identity.  When children with dyslexia struggle with reading and writing in their native language, it damages their perception of themselves.  A student’s frustration with the tasks of language may contribute to low self-esteem and the mistaken belief that he or she is simply unable to learn” (p. 1, Lesson 7).

Truthfully, had it not been for the fact that our school psychologist assessed Andy and identified a reading disability, and explained to me that, by definition, a person with a reading disability has average intelligence, I would likely have had an entirely different perception of Andy, and my expectations for him would have been drastically different.  I might have assumed that he had very low intelligence, due to his inability to read, and would have made inaccurate statements and recommendations to his teachers.  I am grateful to our school psychologists for their ability to reveal so much about how the students we work with think and learn, and what is feasible for our students in terms of academic and life goals.

Additionally, if I had known more about teaching reading to struggling readers back then, I would likely have been able to do more for Andy in teaching him to read.  Unfortunately, in my own growth as a teacher, I had not yet focused on reading research, and was learning instead about supporting teenagers with significant disabilities in transitioning to the workplace and to assisted living communities.  Andy was an anomaly to me.  The other students I worked with from day to day had very low intelligence, and were not able to read as a result.  These two types of readers are addressed in the article that is the focus of today’s post, a research article entitled IQ is Not Strongly Related to Response to Reading Instruction: A Meta-Analytic Interpretation (Stuebing, Barth, Molfese, Weiss & Fletcher, 2009).

Now, in the USA at least, it will become much more difficult for teachers to have students assessed for reading disabilities or dyslexia.  Reasons for this change are explained below, but I have to say, first of all, that I personally feel that not having this information puts teachers and students at a disadvantage in some ways.  It allows teachers, like myself, to jump to conclusions about a student’s intelligence and abilities, and removes the ability to check in with a psychologist to learn the true nature of a student’s strengths and weaknesses.  It allows for huge misperceptions on the part of everyone involved with the student.  I have heard arguments in opposition to the use of IQ testing, which warn us against labelling students, but I find that a label would not be as detrimental as the alternative.  What if, for lack of an IQ test, we completely misunderstand a student’s cognitive ability, and set the bar much too low, or impossibly high?  What are your thoughts on the issue?

Prior to the revisions to the Individuals with Disabilities Education Act (IDEA) in 2004, schools used something called the “Discrepancy Model” to determine who received special education supports.  Essentially, this model involved checking whether there was a difference, or discrepancy, between a child’s IQ scores and how well the student was doing in school, or whether they could meet grade-level outcomes.  A student who scored in the average range on an IQ test, but was struggling to read at grade level, could be diagnosed with a specific learning disability in the area of reading.  Once given this diagnosis, students were provided with special education services (Rosen, n.d.).
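To make the logic concrete, here is a minimal sketch of the kind of decision rule the Discrepancy Model implied.  This is my own illustration, not an official formula: the 85-point IQ floor, the 22-point gap, and the function itself are invented for the example, and actual criteria varied by state and district.

```python
# A schematic of the Discrepancy Model's decision logic (illustrative only;
# real criteria varied by state and district).
def meets_discrepancy_criterion(iq_standard_score: int,
                                reading_standard_score: int,
                                cutoff_points: int = 22) -> bool:
    """Flag a possible specific learning disability in reading when
    achievement falls well below what IQ would predict.
    22 points is roughly 1.5 SD on scales with mean 100 and SD 15."""
    average_or_better_iq = iq_standard_score >= 85   # assumed IQ floor
    large_gap = (iq_standard_score - reading_standard_score) >= cutoff_points
    return average_or_better_iq and large_gap

# A student like Andy: average IQ, far-below-average reading score.
print(meets_discrepancy_criterion(102, 74))  # True -> eligible under this model
# A student whose low reading matches low measured ability: no "discrepancy".
print(meets_discrepancy_criterion(68, 60))   # False -> not flagged as SLD
```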

Now, in place of this model, many schools in the USA use the Response to Intervention (RTI) model.  This is the model that is commonly used here in Manitoba, as well.  In this model, there is no longer a significant need for IQ tests and specific diagnoses when students are found to be struggling with reading.  Students are provided with support as soon as they are seen to be experiencing challenges.  The Response to Intervention (RTI) model “looks at all students’ reading, writing and math skills early in the school year. Then it provides targeted support to those who are struggling.  Children who don’t respond to increasing support may then be considered for special education. The benefits of RTI: Students get help early. And they don’t have to wait to prove eligibility in order to get support” (Rosen, n.d., section 5, paras 1-2).

Teachers who work with struggling readers usually work with two different types of students, according to Joseph (2002): those with “IQ-reading achievement discrepancies” (those described above, like my student Andy), and “students with a combination of low ability and low reading achievement” (para 5).  “Low ability readers make up the largest number of poor readers.  They tend to have lower than average IQ and have below grade level listening comprehension, word recognition, and reading comprehension performance” (para 6).

Many research studies have been conducted over the past decades that look at how effective IQ scores are in predicting student reading achievement, and how responsive students will be to reading intervention (Stuebing et al., 2009, pp. 31-32).

Stuebing et al. (2009) looked at the results from a large number of studies of this kind, some of which indicated that IQ was helpful in predicting student outcomes, and some of which determined that there was very little difference in the learning outcomes of struggling readers with and without diagnoses of reading disabilities.  The goal of the Stuebing meta-analysis was to look at the multiple studies that had been carried out over time, to determine whether IQ scores could tell us how students would respond to reading interventions.

Sometimes when meta-analysts gather data, they do not include all of the data that is available to them, even if that data is valid.  If a study shows a small effect, or if the results are considered “nonsignificant,” that study may never be submitted for publication, and the results may be ignored.  When only studies that show significant effects are included in a meta-analysis, the researchers are not seeing the whole picture.  When this happens, a positive bias can occur in the research (Stuebing et al., 2009, p. 36).

The meta-analysis carried out by Stuebing et al. was unique in that it included data that is sometimes left out, for the reasons described above.  The methods they used involved estimation.

When reading this, I asked myself, “Since when is it more accurate to make an estimation?  Aren’t estimates by definition less precise?”

Yet the authors state that this method results in more accurate data, since a larger number of studies, and therefore more participants, are included and analyzed.  It is an approach that was advocated by researchers Lipsey and Wilson (2002, as cited in Stuebing et al.).  It is a more difficult approach that involves looking closely at studies that were unreasonably ignored in previous meta-analyses.

Apparently, in many meta-analyses, the researchers take the easy road and look only at published studies that show a significant impact or difference.  It is simpler to leave out studies in which the data is less easy to analyze, or studies which are not as easily retrieved, since they were never published.  This is so common, in fact, that it has been termed the “file drawer problem” by Rosenthal (as cited in Stuebing et al., 2009, p. 36).  Again, a red flag rose in my mind, as I have been taught that the way to determine whether a source is reputable is whether or not it has been published by a respected journal.
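To see for myself why leaving nonsignificant studies in the file drawer skews a meta-analysis, I tried a toy simulation.  This is purely my own illustration, not an analysis from the Stuebing paper: it generates many small studies of the same modest true effect, “publishes” only the ones that reach significance, and compares the two averages.

```python
# A toy simulation of the "file drawer problem" (my own illustration,
# not from Stuebing et al.). Many small studies measure the same modest
# true effect; only studies finding a significant positive effect are
# "published" and therefore visible to a meta-analyst.
import math
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.1    # small true standardized effect
N_PER_GROUP = 30     # participants per group in each study
N_STUDIES = 500

all_effects, published_effects = [], []
for _ in range(N_STUDIES):
    treatment = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_GROUP)]
    control = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    effect = statistics.mean(treatment) - statistics.mean(control)
    all_effects.append(effect)
    standard_error = math.sqrt(2 / N_PER_GROUP)
    if effect / standard_error > 2:   # roughly p < .05, positive direction
        published_effects.append(effect)

print(f"True effect:               {TRUE_EFFECT:.2f}")
print(f"Average of ALL studies:    {statistics.mean(all_effects):.2f}")
print(f"Average of published only: {statistics.mean(published_effects):.2f}")
```

Run as written, the average of all 500 studies sits close to the true effect, while the “published” average comes out several times larger: exactly the positive bias the authors describe, and the reason that estimating the missing effects can give a more accurate picture than relying on the published record alone.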

The researchers found, after looking at this much larger amount of data, including studies previously left out, that there were only very minor differences between the two groups of struggling readers.  Thus, there was very little basis for requiring IQ tests (p. 44).  If the IQ test tells us that a learning disability exists or does not exist, but fails to impact how we teach reading to kids, I can see why people may question the need for IQ tests.

Essentially, it was concluded that there is no difference in how these two groups of students learn to read:  IQ scores do not have a role in planning interventions, or in matching interventions to readers, since there is no important difference in how we should approach teaching reading to the two types of students.  There is no difference in “growth patterns” and “no significant differences between these two groups of readers on how they develop reading precursor skills” (Wristers, Francis, Foorman, Fletcher, & Swank, as cited in Joseph, 2002).

What do we use, then, if we do not use IQ scores when planning reading interventions?  Instead, teachers can use tests of phonological awareness or reading comprehension, and be guided by the data they collect on their own.  An excellent article on how to support struggling readers with and without reading disabilities can be found at the following link:

https://www.readingrockets.org/article/best-practices-planning-interventions-students-reading-problems

I suppose that leads me to conclude that requesting an IQ test to determine which reading intervention to use with a particular student may not be necessary.  I do, however, see the value in IQ tests when it comes to long-range planning for students who struggle to read and learn.  These tests help teachers and parents to understand how an individual child’s brain works, what methods might be especially effective in supporting a specific learner, and what to expect in terms of progress over time.  I would like to invite you to share your own thoughts on IQ tests, below.

I would also like to invite those of you, colleagues of mine, who may have more experience with research and meta-analyses, to weigh in on whether the methods used in this study are in fact sound, or whether the results should be questioned due to the use of estimation and unpublished studies.  I am tempted to believe that the methods used by Stuebing et al. are suitable and appropriate, considering that their results were published in a well-respected journal, even if the data from some of the studies they consulted was not.  Also, it makes sense to me that leaving out data can skew results, especially if that data was properly acquired.  What do you think?  Please share your thoughts and knowledge on the topic below!

References

Costello, S., Foss, J. M., King, D. H., Mann, M., Schupack, H., & Wilkins, A. (2015). AOGPE subscriber course – Lesson 7: History of the English language, part 1. Academy of Orton-Gillingham Practitioners and Educators. Retrieved February 13, 2020 from http://courses.ortonacademy.org/

Freeze, R. (2020). Precision Reading: Instructors’ Handbook (3rd Edition). Winnipeg, MB: D. R. Freeze Educational Publications (www.precisionreading.com).

Joseph, L. (2002). Best practices in planning interventions for students with reading problems. Reading Rockets. Retrieved February 22, 2020 from https://www.readingrockets.org/article/best-practices-planning-interventions-students-reading-problems

Rosen, P. (n.d.). The discrepancy model: What you need to know. Retrieved February 22, 2020 from https://www.understood.org/en/school-learning/evaluations/evaluation-basics/the-discrepancy-model-what-you-need-to-know

Stuebing, K. K., Barth, A. E., Molfese, P. J., Weiss, B., & Fletcher, J. M. (2009). IQ is not strongly related to response to reading instruction: A meta-analytic interpretation. Exceptional Children, 76(1), 31-51.

The IRIS Center. (n.d.). Star legacy modules. Retrieved February 22, 2020 from http://www.ideapartnership.org/documents/IQ-RTI.pdf

Baffling Observations Made by Our American Colleagues in Special Education

Is it possible for students with significant cognitive disabilities (SCD) to learn to read?  This question was asked by Lemons, Zigmond, Kloo, Hill, Mrachko, Paterra, Bost & Davis in the article Performance of Students With Significant Cognitive Disabilities on Early-Grade Curriculum-Based Measures of Word and Passage Reading Fluency (2013, p. 409).

Lemons and colleagues (2013) state that “although evidence regarding effective practices for teaching reading to children with SCD has increased…there have been relatively few studies of whether literacy goals for these students can be accomplished” (p. 409).

Reading these lines, I was baffled.

What do they mean, I thought, when they say that evidence for an educational practice has increased, but then say that they don’t know whether the goals linked to those practices “can be accomplished”?

If something is shown to be effective, wouldn’t it have to result in accomplishment of the goals?  Otherwise, how can it be considered effective?

Lemons and his colleagues are researchers at the University of Pittsburgh, in a jurisdiction with legislation similar to Manitoba’s Bill 13, Appropriate Educational Programming.

The American legislation, the No Child Left Behind Act of 2001, ultimately has the same requirements as ours for providing equal opportunity in education for all students.  It requires that “all students – including those with SCD…participate in an accountability system that holds schools responsible for teaching academic content to everyone” (p. 409).

The goal of this system was to determine whether a sufficient number of students were meeting the outcomes for their grade level (Lemons et al., 2013, p. 409).

When this legislation was brought in, however, it was clear that some students (those with significant disabilities) needed to be tested against different standards than those used with the general school population.  To meet this need, each state created something called the AA-AAS, which are “alternate assessments based on alternate academic achievement standards” (p. 409).

Lemons and colleagues reflected that, even though a different assessment tool could be used to measure growth for various students with disabilities, academic content still had to be taught to these students.  This included literacy instruction.  Teachers also had to be able to show measurable progress toward becoming readers among the students in this population, if they were to fulfill their obligations under the No Child Left Behind Act.

Upon reading the Lemons article a second time, I have come to a hypothesis as to the meaning of the authors’ statement that I had found so confusing.

Essentially, it has been difficult for educators to create realistic literacy goals for students with disabilities, and also to measure the growth that does occur when students with SCD are provided with evidence-based reading interventions.

The authors stated that not enough is known about the academic skills of students with various disabilities.  They reasoned that teachers would be better equipped to create realistic, appropriate goals if they had better data on the existing skills of this population of students, and could measure progress more accurately and frequently (Lemons et al., 2013, p. 410).

One difficulty is that, when reading tests designed for the typical student population are used with students with disabilities, it is not clear which tests to use, or how to interpret the results at a given grade level:  “although there are established benchmarks or targets for performance…for students who are performing at or near grade level, it is much less clear how to evaluate the performance of students with SCD when they are assessed with early grade-measures (e.g., an eighth grader’s score on a second-grade oral reading fluency passage)” (p. 411).

Essentially, it is hard to tell whether student progress has occurred when the assessment tools aren’t sensitive enough to measure the smaller gains that occur over time for students with significant disabilities (p. 423).

Thus, there are interventions that are shown to work, to be effective with this population; however, sometimes the data may indicate, incorrectly, that no progress has been made.

This observation of the mismatch between improvement in early literacy skills in students with SCD and the tools that measure reading achievement rang true for me.  I remember having to report our school’s literacy data to our divisional superintendent, following a year of intense reading intervention with many students in the school.

In October of that year, I had used the Jerry Johns Basic Reading Inventory, which is an informal reading inventory (IRI) that is used by teachers to  “evaluate a number of different aspects of students’ reading performance” (International Reading Association, n.d.).

The majority of the students I had worked with daily, over the course of the school year, had scored at the Pre-Primer (Kindergarten) level when I tested their reading levels in October.  Yet when I assessed them again in May, I discovered to my dismay that they scored at the very same level!  I was certain that these students had made substantial gains, yet the data did not reflect that.  How could this be?

At the start of the year, the students in question had not yet developed the alphabetic principle, i.e., the ability to match letters with their corresponding sounds.  By the end of the year they knew the sounds of all of the letters of the alphabet and were beginning to decode two- and three-letter words accurately.  I knew that the students had made excellent progress, and had considered my intervention to be successful.  I had seen the students develop the ability to read decodable three-letter words!  How could it be that, when it came time to report on progress, the data showed almost no growth?
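Looking back, I can sketch what was happening with a toy example.  Everything below is made up for illustration (the thresholds are not from the Jerry Johns inventory or from any of the studies cited); the point is simply that a coarse, leveled measure registers nothing until a student crosses its lowest threshold, no matter how much underlying skill has grown.

```python
# A made-up illustration of a floor effect: the underlying skill grows all
# year, but a coarse leveled measure reports no change until the student
# can read enough connected text to cross its lowest threshold.

def leveled_score(letter_sounds_known: int, words_decoded: int) -> str:
    """Map raw early-literacy skills onto a coarse reading level.
    The 50-word threshold is invented for this example."""
    if words_decoded >= 50:
        return "Primer"
    return "Pre-Primer"  # everything below the threshold looks identical

october = {"letter_sounds_known": 4, "words_decoded": 0}
may = {"letter_sounds_known": 26, "words_decoded": 12}

for month, skills in (("October", october), ("May", may)):
    print(month, skills, "->", leveled_score(**skills))
# Both assessments report "Pre-Primer", even though the student went from
# knowing 4 letter sounds to all 26 and began decoding short words.
```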

I suppose this must be what Lemons and colleagues (2013) had observed themselves when they sought to find out which assessment tools would be most effective in measuring the reading skills of the 7 440 students with Significant Cognitive Disabilities (SCD), Learning Disabilities (LD) and Autism whom they assessed (p. 412).

Their conclusion was that a better understanding of the reading skills of this population of students is needed.  Additionally, they found a need for further studies into establishing “which measures are most appropriate to capture growth” (p. 423).  These studies could “evaluate the frequency at which the measures should be administered to most efficiently capture growth, and provide a better understanding of expected rates of progress for students with SCD” (p. 423).

Allor, Mathes, Roberts, Cheatham & Otaiba (2014) share the desire for an assessment tool that would do justice to the growth that does occur, however minimal, over the course of many months, when working with students with SCD.  They conclude, in their study of the effectiveness of using scientifically based reading instruction with students with SCD, that “more sensitive measures are needed to determine when small amounts of progress are being made so teachers and students will recognize the results of their hard work and continue instruction that is effective” (p. 304).

I would also find it especially helpful to have a trajectory of skills identified that could be used to support lesson planning and the progression of skills along a predetermined track, if such a thing could be created.  This article made me question how the current trajectories for literacy development could be further broken down and stratified, and how this could benefit both special education teachers and the students with whom they work.

Do you follow a specific trajectory or sequence when teaching students with SCD to read?  What system do you use?

References

Allor, J. H., Mathes, P. G., Roberts, J. K., Cheatham, J. P., & Al Otaiba, S. (2014). Is scientifically based reading instruction effective for students with below-average IQs? Exceptional Children, 80(3), 287-306.

Lemons, C. J. et al. (2013).  Performance of students with significant cognitive disabilities on early-grade curriculum-based measures of word and passage fluency.  Council for Exceptional Children, 79 (4), 408-426.

International Reading Association. (n.d.). Critical analysis of eight informal reading inventories. Reading Rockets. Retrieved February 8, 2020 from https://www.readingrockets.org/article/critical-analysis-eight-informal-reading-inventories

A “Sobering Reality”

Have you been tempted to use a certain reading intervention with the students you work with who are diagnosed with Significant Cognitive Disabilities (SCD), only to find that the intervention is intended exclusively for students with IQ scores of 70 and above?

I am curious to find out what happens when special education teachers, like myself, teach students with SCD, using reading interventions designed for use with the general population.  What problems would we encounter when trying to implement these interventions with students who have very low IQ?  Would we see results?

If a strategy is proven to be effective in teaching reading to struggling readers, does that make it a good option for teaching students with SCD (Allor, Mathes, Roberts, Cheatham & Otaiba, 2014, p. 289)?  Allor and her colleagues carried out a long-term study of the impact of daily, intensive reading instruction on students with IQ scores in the 40-70 range, that is, students with intellectual disability (ID) or SCD.  Students in the borderline range for ID were also included, that is, students who scored between 70 and 80 on IQ tests (p. 288).

This was a “4-year longitudinal study examining the effectiveness of comprehensive, research-based reading instruction for students with low IQs” (Allor et al., 2014, p. 288).  Results were published in the journal Council for Exceptional Children, under the title Is Scientifically Based Reading Instruction Effective for Students with Below-Average IQs?

The intervention used was an evidence-based reading intervention that had been proven effective with struggling readers (not including those with ID), called Early Interventions in Reading (Mathes & Torgesen, as cited in Allor et al., 2014, p. 293).  As was the case for the intervention that I had been considering for the students with whom I work, many students in the study did not have the skills that were identified as prerequisites for the intervention, and so additional lessons had to be created and taught before the students could begin (p. 293).

The intervention had many features considered essential in teaching students with ID, including direct instruction.  Additionally, the lessons were “highly detailed to make instruction explicit and…fast paced in order to maximize student engagement and motivation.  All skills [were] modelled and cumulative review [was] ample to ensure mastery and maintenance of skills” (p. 293).  Very “consistent, explicit, and repetitive routines, focusing on key-skills” were used (p. 303).  Individual pacing and behavioral supports were put in place, and the groups were kept small, with only one to four students per teacher (p. 303).  A number of measures were implemented to ensure that the teachers were well trained, and that they applied the intervention with fidelity (p. 294).

When summarizing the results of the study, the authors concluded:  “students with low IQ do benefit from comprehensive reading programs that were designed for struggling readers, and readers with LD, but progress is slower” (p. 303).

Essentially, the data showed that the students did gain improved scores in reading words and passages, over the course of the study.  The verdict was that students with ID “should be provided with evidence-based reading instruction” (p. 302).

However, Allor et al. (2014) also noted that it took between one and three years for students to achieve reading scores in the average range on Grade 1 reading passages.

They concluded the following:   “The sobering reality is that a typical student in our treatment group with an IQ of 75 (borderline range) would require 52 weeks of intervention to move from 20 words per minute (wpm) to 60 wpm on first grade passages. Thus, based on our data …students with IQs in the moderate range (40-55) would require approximately three and a half years to move from 0 wpm to 20 wpm [on first grade passages]” (italics mine, p. 302).
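To put those figures side by side, here is a quick back-of-the-envelope check of the growth rates the quote implies.  The 36-instructional-weeks-per-year figure is my own assumption for converting “three and a half years” into weeks; the wpm figures come from the quote above.

```python
# Rough growth rates implied by the Allor et al. (2014) quote.
# The 36-weeks-per-school-year figure is an assumption, not from the study.

borderline_gain_wpm = 60 - 20            # from 20 to 60 wpm on Grade 1 passages
borderline_weeks = 52                    # weeks of intervention, per the quote
borderline_rate = borderline_gain_wpm / borderline_weeks
print(f"Borderline range (IQ ~75): {borderline_rate:.2f} wpm gained per week")

moderate_gain_wpm = 20 - 0               # from 0 to 20 wpm
weeks_per_school_year = 36               # assumed instructional weeks per year
moderate_weeks = 3.5 * weeks_per_school_year
moderate_rate = moderate_gain_wpm / moderate_weeks
print(f"Moderate range (IQ 40-55): {moderate_rate:.2f} wpm gained per week")
# ~0.77 vs ~0.16 wpm per week: roughly a fifth of the pace.
```

Figured this way, students in the moderate range were gaining fluency at roughly a fifth of the pace of their borderline peers, which is what makes the comparison so sobering.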

When looking critically at this study and the conclusions that the researchers came to, I am left questioning the time and effort invested over the course of four years, when the result was less than one year of growth for the students involved.

At the same time, when working with students with SCD personally, I find it to be incredibly rewarding to see progress, especially when it is so long in coming, and to see students filled with pride as they gain new skills.  In fact, I find myself spending a disproportionate amount of time with one to five different students at the school where I work.

I celebrate their small successes with leaps of joy, and as much as I debate this in my mind, I can’t bring myself to spend my time elsewhere.  This minute growth in students with SCD, who begin to believe in themselves and who glow with satisfaction when they learn something new, is what makes my heart sing.

However, I am reminded of the advice given to me by one of my professors in my post-baccalaureate program, when I was working toward special-education certification.  Dr. Freeze advised our class that it is best practice to “seek interventions that result in more than one year’s growth per year, so that students eventually catch up” (Rick Freeze, personal communication to author, February 23, 2020).

Spending four years working toward improved reading skills, and seeing only one year of growth, as occurred for many students in the study by Allor et al., is not defensible, Dr. Freeze would likely argue.  His advice is posted next to my computer at work, as a reminder of how I should run my days, what I should keep as priorities, and how I should allocate my time:

“Spend your time with students who have the greatest potential for growth,” he would say.  “Put your focus where there will be the biggest pay-off.  For a student who does not progress when you do put lots of effort in, there is little difference when you do less.” (Rick Freeze, personal communication, February 17, 2017)

My intent is to use his advice like a ship’s binnacle, to steer me in the right direction in the raging ocean of resource teaching.  However, despite the important guidance from Dr. Freeze, I find that yet again I am off course, spending my time in a whirlpool, seeing very little change.

Recently, Dr. Freeze’s advice has been on my mind a lot.  I question my practice of spending such large periods of time each day on teaching reading to students who, in all likelihood, could still be reading at Kindergarten or Grade 1 level in three years.  One of the students I work with was recently diagnosed with SCD, and is making very minimal progress.  I celebrate her successes, and those of the other five students whom I work with almost daily, who are the students with the lowest reading scores by grade in the school.

At the same time, as February moves full speed ahead, I see that these students are still reading at Kindergarten level, far below their grade placement level, despite intense, research-based intervention, and will likely not have gained even a year’s growth in reading, over the past year.

What do I do when June rolls around and my students have barely skinned their chins on Grade 1 level reading?  I can’t see myself dropping the intervention, yet I question this.  What is the right thing to do?

Before you decide that I must keep on with these five students into the next year and the next, I am obliged to include here the other side of the story, the Whole Picture.  I am sorry to say that, with all this time spent on my chosen five, I have not met my goal of working with other students diagnosed with reading disabilities, with IQ scores above 70, for whom my research-proven interventions are designed.  These students might make quick progress, if given the opportunity to learn using a method honed specifically to their needs.

It has been my intent to make time for them, but my days are taken up with the five lowest-achieving students, who bring me joy and challenge me, and week after week I see that I have not started my planned intervention with the students at my school diagnosed with reading disabilities.

What would happen if I ended my reading intervention with the five weakest readers, an intervention that has resulted in less than one year of reading growth for any of them over the past two years, and worked instead with one or two students who have a greater likelihood of making considerable gains?  Lemons et al. (2013) found that students with learning disabilities (IQ scores in the 70-80 range) made the largest gains as a result of their intervention, when compared to those with intellectual disabilities and autism (p. 415).

Should I change course, and focus on the students who have the highest potential for growth?  If I do, what will become of the students whose reading intervention is terminated?

I am curious to hear your thoughts.  What advice do you follow, with regards to how you spend your time as a resource / student support / literacy-lead teacher?

Do you follow any specific rules in terms of intervention length?  Does your school or school division provide guidelines as to who can receive reading intervention, and for how long?  How do you feel about the advice you have been given with regards to this?

How should resource teachers spend their time?  Which students should receive intensive reading intervention, and for how long?  How do you determine this?

What interventions have you used?  Are you careful to choose interventions that have been empirically validated to work for struggling readers without SCD?  How have you modified or adjusted the interventions to suit the students’ needs?  Please share your thoughts and experiences!

References

Allor, J. H., Mathes, P. G., Roberts, J. K., Cheatham, J. P., & Al Otaiba, S. (2014). Is scientifically based reading instruction effective for students with below-average IQs? Exceptional Children, 80(3), 287-306.

Freeze, R. (2020). Precision Reading: Instructors’ Handbook (3rd Edition). Winnipeg, MB: D. R. Freeze Educational Publications (www.precisionreading.com).

Lemons, C. J., Zigmond, N., Kloo, A. M., Hill, D. R., Mrachko, A. A., Paterra, M. F., Bost, T. J., & Davis, S. M. (2013). Performance of students with significant cognitive disabilities on early-grade curriculum-based measures of word and passage reading fluency. Exceptional Children, 79(4), 408-426.