Testing, Testing, Testing

Missouri's MAP is a fine, thoughtful standardized test. So how come it's wreaking havoc in the schools?

To keep up with the MAP, educators devise their own tricks. "They have a level called Not Determined that means you didn't complete all three test sessions, so you don't get averaged into your school's score," explains an administrator at a St. Louis public school. "In years past, that's how schools cheated. It was just so coincidental that the lowest-scoring kids didn't finish. But now you have a four-week window to finish, and the state got a little wise; they set a limit."

"Ah yes, the old classic method for raising your scores -- send the bottom 25 percent to the zoo," chuckles DESE's director of assessment, James Friedebach. "Now a district can't average more than 10 percent Level Not Determined, and no more than 14 percent, over the year; it's part of the accreditation process. If you have higher percentages, you get no performance points."

The harder it is to cheat, the more pressure there is to outthink the test, drill the format, second-guess the content. "Not every standard is tested every year," notes the administrator, "so the next year you get smart and teach to the standards they haven't tested yet. Well, you might bump up scores that year -- might -- but the following year, when they could have had two years of a broad-based curriculum to prepare them, you've lost a whole year. We are going to shoot ourselves in the foot like we always do, because we are teaching the test and not teaching content."

Fourth-grade teacher Arthur Howard says the MAP brought focus. (Photo: Jennifer Silverberg)

According to Friedebach, the only part of the MAP one really could "teach to" is the multiple-choice exam, the Terra Nova, which stays the same every year. But to get a nice round bell curve in its scores, the Terra Nova throws in questions about material a child in that grade wouldn't have studied yet. The whizzes with math-professor parents solve the problems, the rest don't, and the curve normalizes. "That's why trying to teach to a norm-referenced test doesn't work," Friedebach concludes. "The hardest items aren't even in the curriculum, and if you try to teach them, they'll be jumbled, because they're out of order. If teachers are trying to teach to the Terra Nova, they're missing the point."

Cannon says the real trick isn't to "teach the test" but to weight the concepts. Pulling down a thick binder, she flips to the "item analysis" data for the MAP and points at the tiny type. "If I see that they have eight questions on geology and two on biology, I know my children are going to need some biology but more geology." She closes the binder and folds her arms on top of it, leaning forward: "My question wasn't 'How do we fix the MAP?' but 'How do we approach it?' We don't like the idea of someone saying we are achieving at a lower level than anyone else. So it became a matter of pride."


The central irony of the accountability-obsessed MAP is that it's absolutely useless as instructional feedback. Not only is it a "rolling test," with questions and emphases changing every year, but each year it's given to a new batch of kids with markedly different abilities, levels and backgrounds. Unless your students score so low they have to take the February retest, you'll never know whether they eventually grasped the concepts they missed, or whether you've found a better way to teach them. "You can't measure improvement," explains Laura Schmink, principal of the Dewey School of International Studies, a magnet school. "Yet that's exactly what they are asking us to measure."

The more optimistic teachers talk about the new ways data will be made available to them, but the frustration still flashes in their eyes. All this effort, all this time and money, and the test still doesn't give enough feedback to get the students interested, let alone help their teachers fix the gaps.

"We're not insensitive to that," says Friedebach. "This June, we're going to initiate some in-state scoring, in five centers, and if it works well, we'll have 12 sites next year, and we'll be able to score much quicker. We still won't be able to get the information back to teachers before they leave for summer, but it's possible we can get it to administrators by mid-July so they can organize it and make it more useful."

Teachers would rather see the same group of kids tested before and after, or at least have results in time to teach what wasn't clear before the year ends. But testing experts say you can't expect that kind of feedback from an instrument like the MAP. "Asking an accountability assessment to be also instructional is asking too much," insists Schafer, the consultant hired by the state NEA. "The test scores really are building-level information; they're a check on the school's program, but they're not for instructional decision-making about students."

He's speaking the precise, quantitative language of institutional assessment, a language possible only in a quiet, air-conditioned office. Teachers glance at the numbers, but then they walk inside that building and confront warm, noisy students with complicated lives. Some are "pencil-biters" who agonize on tests but glide brilliantly through real life. Some scrunch their eyes almost shut, trying to push past a learning disability. Some can't focus at all because their mom's hooked on crack cocaine and doesn't know her boyfriend beats them. All those scores average into the MAP results, and if a particular class is stacked with kids with developmental delays, explosive tempers or special needs, the scores will make it look like the teacher's fault.
