Book Review — Are Education Leaders Mismeasuring Schools’ Vital Signs?

Two years ago, students at a charter school in East Los Angeles were learning at 1.5 to two times the pace of their grade-level peers around the state, based on three years of standardized test scores. But the California Department of Education labeled the school a “low performer,” which put it at risk of closure. Why? Because the state’s rating system gives far more weight to students’ current test scores than to how quickly they are improving.

I have written before in these pages about the importance of accurate and balanced methods of measuring school quality. In the same spirit, I recommend a new book by Steve Rees and Jill Wynns, Mismeasuring Schools’ Vital Signs.

Wynns spent 24 years on the San Francisco school board, while Rees spent just as long running a company that helped school districts measure and report on the quality of their schools. Both have seen their share of mistakes, many of which lead to real pain: teachers reassigned and principals removed based on faulty data; English learners held back from entering the mainstream academic program even after they have become fluent; charter schools closed due to inadequate measurement of growth; even students denied graduation based on flawed interpretation of test results.




Rees and Wynns have now authored a highly readable guide that superintendents, principals, school board members, education reporters, teachers, and advocates can use to avoid these kinds of errors. They highlight the four most common flaws:

Growth v. Proficiency

The first is using children’s current test scores — rather than a measure of their academic growth — to judge the quality of schools and teachers. In high-poverty schools, students often arrive several years behind grade level. Few of them are “proficient” in math or reading. But too often, states and districts give the greatest weight to students’ current test scores, not their rate of improvement.

Consider a middle school whose sixth-grade students arrived three years behind grade level. If they are only one year behind grade level at the end of sixth grade, that would be spectacular progress: roughly three years of learning packed into a single school year. But in California, to use but one example, the school’s academic score would be in one of the two lowest categories.

Apples v. Oranges 

The second major flaw Rees and Wynns point out is related: when trying to measure academic growth, some states and districts fail to measure the same students over time. Instead, they compare a school’s or grade level’s average from one year to the next. But in a middle school, a third of the students each year are new arrivals, and another third from last year have departed. In four-year high schools, a quarter leave each year and another quarter arrive. So annual school or grade-level averages are measuring different kids.

The solution is obvious: Measure the same cohort of students over time, following them from one grade level to the next. Even better, remove from your measure students who have departed or recently arrived at the school.
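
For readers who want to try this on their own district’s data, here is a minimal sketch of the difference between the two approaches. It is my illustration, not the authors’: the scores are made up, and the column names (student_id, year, scale_score) are hypothetical.

```python
import pandas as pd

# Hypothetical scores for a tiny school: student 1 departs after 2021,
# student 4 arrives in 2022, and students 2 and 3 are tested both years.
scores = pd.DataFrame({
    "student_id":  [1,    2,    3,    2,    3,    4],
    "year":        [2021, 2021, 2021, 2022, 2022, 2022],
    "scale_score": [480,  420,  450,  445,  475,  400],
})

# Naive approach: compare annual averages. Different students each year,
# so the change mixes real growth with enrollment churn.
naive_change = (scores.loc[scores.year == 2022, "scale_score"].mean()
                - scores.loc[scores.year == 2021, "scale_score"].mean())

# Cohort approach: keep only students tested in both years, then average
# each student's own year-over-year change.
wide = scores.pivot(index="student_id", columns="year", values="scale_score").dropna()
cohort_growth = (wide[2022] - wide[2021]).mean()

print(f"Naive average change:  {naive_change:+.1f}")   # -10.0: looks like decline
print(f"Matched-cohort growth: {cohort_growth:+.1f}")  # +25.0: real growth
```

With these invented numbers, the school’s average drops 10 points even though every student enrolled in both years gained 25. That is the apples-to-oranges trap in miniature.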

Ignoring the Imprecision of Test Results

The third common flaw is failure to acknowledge the imprecision of test scores. “When we test kids, we’re trying to gather evidence of something that exists out of sight, somewhere between their ears,” Rees and Wynns write. “Whatever their test scores reveal, it can only be an estimate of what they know.”

Standardized tests are often used to rate children—typically into four categories, which might be summarized as advanced, proficient, needing improvement, and far behind grade level. But imprecision means some of these classifications are dubious. “The major test publishers include what they call classification error rates in their technical manuals,” the authors explain. “It is common to find a 25–30 percent classification error rate in the middle bands of a range of test scores—and that’s for a standardized assessment with 45–65 questions.”

“In Texas, Illinois, Maryland, California, Ohio, Indiana, Florida and many other states,” they add, “the parent reports make no mention of imprecision.” Yet these reports tell parents whether a child is on grade level. Some states use a standardized test called the Smarter Balanced Assessment. Its “technical manual reveals that the classification accuracy rate in these middle two bands (Levels 2 and 3) is about 70 percent. In other words, just seven out of every 10 kids whose scores land in the middle two bands will be classified correctly as having either met the standard or scored below the standard.”
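
To make that 70 percent figure concrete, here is a small simulation of my own (not from the book): one student whose true ability sits just below a proficiency cut score, retested many times with typical measurement error. The cut score and standard error are invented numbers chosen only to show the mechanics.

```python
import random

random.seed(0)
CUT = 2500           # hypothetical "meets standard" cut score
TRUE_ABILITY = 2490  # true score just below the cut
SEM = 25             # hypothetical standard error of measurement

# Simulate 10,000 retests: observed score = true ability + random error.
trials = 10_000
above_cut = sum(TRUE_ABILITY + random.gauss(0, SEM) >= CUT for _ in range(trials))

print(f"Share classified as meeting the standard: {above_cut / trials:.0%}")
# With these numbers, roughly a third of retests land on the wrong side
# of the cut line, even though the student's true ability never changed.
```

The closer a student’s true score sits to a cut line, the less the reported category means.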

Lack of Context

The fourth major flaw Rees and Wynns discuss is “disregarding context when analyzing gaps in achievement.” Often, a school is compared to the statewide average, when its students are anything but average. They might be affluent, or poor, or recent immigrants. If so, do we learn anything about the quality of their school by comparing them to a state average?

Rees and Wynns urge school and district leaders to compare their students to schools or districts with demographically similar children. “If you can identify other schools with kids very much like your own who are enjoying success where your students are lagging, you can call the site or district leaders and see how their approach to teaching reading differs from your own,” they suggest. “That last step, compare-and-contrast with colleagues who are teaching students very similar to your own, is where your analytic investment will pay off.”

The authors point a finger of blame at schools of education, which rarely teach future teachers or administrators about data, assessment, or statistics. “Schools of education simply must stop sending data- and assessment-illiterate educators into the field,” they declare.

They also urge state departments of education to disclose the imprecision of test scores whenever they report results, to do more to communicate the meaning of those results, and to create help desks that district and school leaders can turn to with data and assessment questions.

Perhaps their most novel recommendation is that we begin measuring “opportunities to learn,” to draw attention to yawning gaps. Some districts assign students to the school closest to their home, for instance, while others offer significant choices — hence greater opportunity. Most districts give teachers with seniority more ability to choose their schools, leaving the schools in low-income neighborhoods to settle for rookie teachers or those no one else wants — creating a huge opportunity gap for low-income students. Some schools offer the opportunity to take more advanced courses or more career-oriented courses.

A few districts work hard to match their supply of courses and schools to what students and their families want, but most don’t. The result: yet another opportunity gap. “If 90 percent of your sections are dedicated to college-level course work, and 50 percent of your graduating seniors have chosen a path to the workforce or the military, then your master schedule constrains the opportunities to learn that your students care most about,” the book explains. “Work force prep courses and multiple pathways toward work-related professions would be a needed addition for that school. The question for those leading or governing districts is how actively you listen to students when they tell you what future they’re aiming for, and the extent to which you direct your budget and staff to meet their desires.”

A brief article cannot begin to suggest the depth and detail the authors plumb in this volume. In addition, every chapter of Mismeasuring Schools’ Vital Signs includes questions people can ask to uncover data and measurement problems — and methods to solve them — in their own districts and schools. There is even a companion website, which includes interactive data visualizations and resources such as a glossary of statistical terms and a “visual glossary” showing the types of charts and graphs you can use to communicate meaningful data.

There’s an old saying in the management world: What gets measured gets done. As Rees and Wynns demonstrate, in public education we too often measure the wrong things, in the wrong ways. If we’re going to improve the lives of children, we have to learn how to measure what matters, accurately, and then understand what it means. Mismeasuring Schools’ Vital Signs is a good place to start.

Opinion: Review: Why You Should Buy into the ‘Sold a Story’ Podcast

Let me get this hard sell on the table right up front: You should listen to “Sold a Story,” a podcast about reading instruction in U.S. schools. After all, you can be concerned that 1 in 3 American fourth graders struggle to read and still not want a deep dive into how literacy is taught. But “Sold a Story” is about more than a national problem; it’s about a deeply personal struggle experienced by families of all kinds.

In the hands of adept reporter and storyteller Emily Hanford, that deep dive unfolds with crystal clarity, emotional anchors and dramatic cliffhangers to spotlight why many students struggle to read: It is because many schools don’t teach them the specific skills they need to successfully do so.

The podcast’s basic premise is that extremely popular approaches to teaching young kids to read — to decode written words — give short shrift to explicit lessons that connect letters in words to the sounds they represent. In many schools, this explicit phonics instruction is sprinkled into reading lessons, but in woefully inadequate amounts, crowded out by other strategies, including “three-cueing,” which coaches students to use context or pictures to guess what unknown words are. Research, much of it decades old and now collectively called the Science of Reading, shows that systematic phonics instruction is key to helping students become fluent readers. But these other approaches have largely ignored it.




Why? In six episodes, Hanford and her colleague Christopher Peak deftly stitch together the complete picture: an overview of those popular approaches to reading instruction, the national political battle over how to teach literacy and the reading guru whose three apostles, with their billion-dollar publishing company, championed this flawed approach.

The podcast focuses on the idea, established by reading guru Marie Clay, that children can become readers by leaning on context clues instead of sounding out words. Two very popular curricula from celebrated authors — “Units of Study for Teaching Reading” from Lucy Calkins and “Leveled Literacy Intervention” from Irene Fountas and Gay Su Pinnell — became the primary vehicles for this flawed idea in school districts and education schools across the country, generating millions of dollars for the authors and their publisher, Heinemann.

Throughout, Hanford and Peak ground these episodes not in who should be blamed, but in who bears the consequences. The fallout is hitting students struggling to learn to read, parents flummoxed by their children’s lack of progress and teachers who keep saying something like, “If only I had known. …”

Of course, the significance of that fallout hinges on whether Hanford and Peak’s provocative claims about the scope and quality of these curricula are actually correct. There are compelling reasons to believe they are.

Regarding its scope, a 2019 nationally representative Education Week survey found that “Leveled Literacy Intervention” was used by 43% of K-2 early reading and special education teachers, while “Units of Study” was used by 16%. These curricula are Heinemann’s biggest sellers. Hanford and Peak found Heinemann brought in over $233 million in the past decade from just the 100 largest districts. Imagine their business across the remaining 13,000 smaller school districts.

As to the quality, EdReports, a nonprofit reviewer of K-12 instructional materials, last year found both curricula lacking, rating each “does not meet expectations.” However, you need not lean on expert reviews to see the disconnect in this curricular approach. In a tacit admission, Calkins revised her “Units of Study” curriculum to incorporate the Science of Reading. The disconnect is even plainer in Fountas and Pinnell’s defense of their approach that encourages guessing words from context. They write, “If a reader says ‘pony’ for ‘horse’ because of information from the pictures. … His response is partially correct, but the teacher needs to guide him to stop and work for accuracy.”

That response lays bare how detached their approach is from teaching students to actually read text. Getting “pony” from the word “horse” can be “partially correct” only if the goal is something other than teaching students to read accurately, because it rewards children for learning to do something other than read the word. It rewards guessing. Such a strategy might get students partial meaning in the short run, but it will produce struggling readers over time. Indeed, it has.

Hanford deserves credit for her work championing the Science of Reading and pressing the case against predominant approaches to literacy used in many schools across a nation of struggling readers. Fortunately, some states and districts are changing course: several states have recently outlawed three-cueing, and New York City has taken steps to increase phonics instruction. But it will take time and deliberate effort to change instruction in schools. In the interim, “Sold a Story” gives frustrated parents of struggling readers good questions to ask and the courage to demand better instruction. Clear, engaging and, yes, enraging reporting like this can help policymakers, teachers and families ensure that they are not sold a story that might hold their young readers back.
