The following piece appeared in the Wall Street Journal.
How Low Can We Go?
SAT scores dropped significantly this year. Blame the schools, not the test.
BY DAVID S. KAHN
Friday, May 26, 2006 12:01 a.m. EDT
Colleges across the country are reporting a drop in SAT scores this year. I've been tutoring students in New York City for the SAT since 1989, and I have watched the numbers rise and fall. This year, though, the scores of my best students dropped about 50 points total in the math and verbal portions of the test (each on a scale of 200 to 800). Colleges and parents are wondering: Is there something wrong with the new test? Or are our children not being taught what they should know?
Before 1994, the verbal section of the SAT was about 65% vocabulary (55 out of 85 questions) and 35% reading comprehension. Then the Educational Testing Service shortened and reworked the test, devoting half of the 78 questions to each area. Last year ETS changed the test again, and now it is heavily skewed toward reading: 49 of the 68 items require students to read, synthesize and answer questions.
In this way, ETS has increased the penalty for not reading throughout one's school years. Studying vocabulary lists before the test--a long-favored shortcut to lifting scores--just won't cut it anymore. Students who read widely and often throughout their elementary and high-school years develop the kinds of reading skills measured by the new SAT. Students who avoid reading don't--and can't develop them in a cram course.
The math section of the test also got more challenging. The SAT used to test algebra, geometry and arithmetic. Students weren't allowed to use calculators on the original SAT, so some of those problems were simply difficult arithmetical calculations (fractions, decimals and percentages). In 1994, calculators were allowed, and the questions got a bit easier--and I watched my students' math scores jump. But last year ETS made it harder by adding pre-calculus questions, and my students have struggled.
Now there are also fewer math questions--each of which counts for more. The 54 math questions count for about 11 points each now (on the 200 to 800 scale); before, there were 60 questions that counted for 10 each. So if a student gets 20 questions wrong, he effectively loses 222 points instead of the former 200.
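The arithmetic behind those figures can be sketched as follows, assuming the 600 raw points of the 200-to-800 scale are spread evenly across the questions (an assumption for illustration; actual SAT scaling is more involved):

```python
# Sketch of the point-loss comparison described above.
# Assumes 600 scaled points (200 to 800) divided evenly among questions.

def points_per_question(num_questions, scale_range=600):
    """Points each question is worth under an even spread."""
    return scale_range / num_questions

old_value = points_per_question(60)  # 10.0 points per question
new_value = points_per_question(54)  # about 11.1 points per question

wrong = 20
old_loss = wrong * old_value  # 200 points lost on the old test
new_loss = wrong * new_value  # about 222 points lost on the new test

print(round(old_loss), round(new_loss))  # prints "200 222"
```

Under this even-spread assumption, the same 20 errors cost about 22 more scaled points on the new test, which matches the 222-versus-200 comparison above.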
Quite simply, this is not the same SAT. Students, anxious parents and college admissions officers can't really equate this new test with those of previous years--or their results. We need to adjust our expectations accordingly. It used to be that a 650 was a really high score. After a batch of adjustments, 700 became a really high score. Now it's probably around 670. Or to use the new SAT scale--a scale that includes a writing sample graded for another possible 800 points--you've done pretty well if you eke out a score of 2000, out of a possible 2400.
Some colleges attribute the drop in numbers to the fact that the test is longer--the writing sample has added about an hour to the test's time. If so, then students should do worse on the last sections of the test than they do on the earlier parts. (There are 10 sections to the new SAT.) That hasn't been the case for my (admittedly small) sample of students.
The colleges also think that the lowering of average test scores may have something to do with the cost of taking the test. It is now $41.50, up from $28 in 2004, and so they suggest that fewer students are taking the test twice (which generally results in a slightly higher score). But that theory is weak. Colleges also report that potential students are sending them more applications than ever before, from an average of five per student to 10 or more. Given that college applications usually cost between $50 and $75 per school, the colleges are saying that students are paying an extra $250 and more to apply to additional colleges but won't pay to take the SAT a second time because the price of the test has gone up $13.50. I'm skeptical.
The explanation is much more straightforward. The average American receives a pretty mediocre education. The average SAT score drifted down from 1000 in the 1960s to 880 in 1993. Education activists attributed this plummet to cultural factors, a change in the testing pool and other matters. The blame was placed everywhere but on schools. That the quality of education in America declined from the 1960s to the 1990s was hardly noted in debates over the SAT.
And then the test was "recentered." Thanks to the change in the SAT scale and the change in the kinds of questions that were asked on the test, scores went up and people were able to ignore the fact that most students are not well-educated. Indeed, parents compared their children's scores with their own and concluded that their children were brilliant. Now ETS has made it a little harder to get away with not knowing your three R's.
People complain that the SAT is biased and that the bias explains why students don't do well. That's true--it is biased. It's biased against people who aren't well-educated. The test isn't causing people to have bad educations, it's merely reflecting the reality. And if you don't like your reflection, that doesn't mean that you should smash the mirror.
That the new SAT tests more reading comprehension than the old test did is a good thing. Colleges complain that their incoming students don't have sufficient skills to read and analyze the kind of material that their professors will assign them. I hope that the new SAT's emphasis will make students realize that you can't get much of an education if you can't read.
Maybe the decline in SAT scores will force people to notice that their children are not getting good educations. If your children don't read or do math, why would you think that they would do well on the SAT? I would love to get into a time machine and go back to 1960 and give this new SAT to high-school students back then. I suspect that they would do much better than today's students. If we want people to get good scores on the SAT, I have a suggestion. Stop complaining about how unfair the test is and do your homework.