A Lethal Elixir: Copious Testing and Misdirected Teaching

Under the guise of accountability—and as a condition for Palm Lane Elementary School (and other public schools) to receive federal funds—No Child Left Behind (NCLB) required that each state create or adopt tests to assess students’ annual progress. The content of these tests for elementary school students varied widely among states. Some states adopted more difficult tests, while other states (e.g., Georgia) selected easy assessments. Adequate yearly progress (AYP) required higher and higher test scores each subsequent year. The process was analogous to setting a time limit for running 100 yards and then cutting that limit by several seconds with each subsequent run.

Following is an example of the variation in testing under NCLB. During a previous school year, students in Georgia and South Carolina were administered the same standardized reading test. Students’ scores in both states were almost equal. During the same year, each state also administered its self-selected state reading test to fulfill NCLB mandates. In South Carolina, approximately 35% of fourth graders were rated as proficient in reading. But almost 90% of fourth graders in Georgia were rated proficient in reading on Georgia’s reading test. (Would supporters of a charter at Palm Lane have concluded that South Carolina schools failed that year—and regarded public schools in Georgia as a huge success?)

Testing under NCLB revealed little useful information; it did not even allow students’ scores to be compared among states. Nor did testing yield diagnostic data that could be used to improve student achievement, beyond identifying test content to teach so that scores would increase the following year. Overall, testing had no significant effect: Reading and math scores have remained substantially unchanged for decades (see figure above and http://tinyurl.com/n9pymaw).

Most recently (2013), students at Palm Lane were administered the STAR Reading Test. Its publisher, Renaissance Learning (RL), asserts that its “research-based test items meet the highest standards for reliability and validity.” Reliable? Yes. Valid? Perhaps not. Reliability refers to whether test scores are consistent: when the same students earn similar scores on retaking the same test, reliability is high. Validity, however, refers to whether a test assesses the content it purports to test. In this case, RL’s assertions of validity are suspect.
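To make the distinction concrete, here is a minimal sketch using entirely hypothetical, simulated scores (nothing below comes from RL’s materials): a test can be highly reliable yet only weakly valid, because scores are nearly identical across two administrations while tracking an independent measure of comprehension only loosely.

```python
# Hypothetical illustration of reliability vs. validity with simulated scores.
# The numbers are invented for demonstration and do not describe any real test.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# A decoding skill (pronouncing words) and a distinct comprehension skill,
# assumed here to be only weakly related.
decoding = rng.normal(0, 1, n)
comprehension = 0.3 * decoding + rng.normal(0, 1, n)

# A test that mainly measures decoding, given twice with a little retest noise.
test_first = decoding + rng.normal(0, 0.2, n)
test_retake = decoding + rng.normal(0, 0.2, n)

# An independent measure of what we actually care about: comprehension.
criterion = comprehension + rng.normal(0, 0.2, n)

reliability = np.corrcoef(test_first, test_retake)[0, 1]  # test-retest consistency
validity = np.corrcoef(test_first, criterion)[0, 1]       # agreement with comprehension

print(f"Reliability (test-retest): {reliability:.2f}")    # high (about 0.96)
print(f"Validity (vs. comprehension): {validity:.2f}")    # low (about 0.3)
```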

STAR Reading from RL is a program based upon purported foundational reading skills, which are then tested by the STAR Reading Test. What is the effectiveness of this reading program? If the skills taught were truly foundational, and if students’ reading were to improve after using STAR Reading, it would be reasonable to conclude that these skills and reading are related—that STAR Reading works. According to RL, the content in STAR Reading is justified because it is “aligned to curriculum standards at the state and national levels—[now] including the Common Core State Standards” (p. 22). Many of the purported foundational reading skills in STAR Reading, however, refer to factors that have nothing to do with reading: blending, counting, and segmenting syllables; distinguishing between long and short vowel sounds; isolating initial, final, and medial phonemes; adding and substituting phonemes. Reading is the comprehension of written language, making sense of print; reading is not the accurate pronunciation of words. Regrettably, many teachers and parents misconstrue reading as barking at print.

Renaissance Learning also sells Accelerated Reader, a reading program for K–12 students. Students select and read books, then take a computer-based quiz on each one. The software provides teachers with data on readers’ scores, which allows them to track students’ progress. What is the effectiveness of Accelerated Reader? Since 2002, the What Works Clearinghouse (WWC), part of the U.S. Department of Education, has reviewed and published findings regarding the effectiveness of instructional practices and reading and math programs. The WWC reviewed 318 studies regarding the effectiveness of Accelerated Reader. The results: Only one study met the WWC standard for evidence, and only one study met the WWC standard for evidence “with reservations”; thus, 316 studies did not meet either of these two standards. WWC’s conclusion: Accelerated Reader resulted in “no discernible effects in reading fluency and comprehension for adolescent learners.”

Testing is not teaching, and public schools have been fettered by NCLB’s legal mandates to test instead of teach, which has interfered with reading, learning to read, instruction, and students’ progress. This writer and researcher believes that in recent years students have learned less about reading than the public and Congress have learned about the limitations of testing. Yet Congress is poised to continue testing in the next iteration of NCLB, ignoring the results of a decade of meaningless testing. (A bill in the House of Representatives failed to pass this week, lacking enough votes.) To name a few important findings:

  1. Setting impossible goals never makes sense. Mandating that all students shall read and do math proficiently by 2014 was impossible. On any measure, half of the population always scores at or below the median, a statistical certainty.
  2. When the content and difficulty of state reading and math tests differ, comparing students’ scores is meaningless.
  3. When the content and concepts assessed on state reading tests are arbitrary and unrelated to reading, no improvement in reading achievement should be expected.
  4. When the underlying assumptions and content of reading tests lack validity, instructional programs based on these assumptions will not result in increasing reading achievement.
  5. In the absence of a national assessment program for public school students, some states select easy tests so schools appear more successful, and other states select more rigorous tests, making schools seem less successful.

Palm Lane Elementary School (and hundreds of other public schools) has been mislabeled as failing, harshly criticized because its students did not earn scores high enough to demonstrate AYP on state-adopted tests as required by law. (It’s a good thing all students were not required to high jump 6 feet.) Setting impossible goals made no sense, yet NCLB did exactly that, mandating that all students score average or better on reading and math tests by 2014—and that scores rise higher and higher each subsequent year.

NCLB was the failure, not schools. It is never a good idea to believe that students strive to excel simply because rigorous tests are administered. It is teachers who potentially make a significant difference in the lives of students. Search Google for “John Hattie” and review his important findings about effective schools, teachers, and programs. Hattie’s most important finding: It is teachers who make the difference. Student performance at Palm Lane and at other schools will reach its maximum when teachers are given the freedom to teach instead of being burdened by orders to test. There may be an undiscovered silver bullet to solve education’s problems, but research findings are conclusive: Testing is a dud.
