Giving the SAT a failing grade

June 15, 1994|By Thomas V. DiBacco

THE announcement last week by the College Board that it will change the way Scholastic Assessment Tests are scored -- so scores are higher, but test takers no smarter -- is one more reason why this academic rite of passage ought to be given a failing grade. According to the College Board, beginning in April 1995, the scores will be raised to give students a better idea of what their scores mean. But students need no additional insight into the scores; before they ever sit for the exam, they know precisely the ranges that various institutions expect for admission. The College Board's action appears to be just the latest move in a defensive strategy designed to appease critics, just as its 1990 changes (reducing the number of questions with multiple-choice answers, permitting the use of calculators) were directed at educators who believe that the test does more harm than good.

For example, SAT results often become self-fulfilling prophecies for those who obtain high scores. A good percentile ranking means entrance to a good university -- at least, that's the perception of students and academic administrators. So professors believe that their students are good; ergo, they don't have to be great teachers to such a clientele. They concentrate on research; the kids get a lousy education, rarely flunk out, and believe they have a corner on the professions as a result of an experience set in motion by a four-hour, pre-college exam.

In the 1960s, when I began my full-time teaching career, academic institutions relied more heavily on open admissions policies on the grounds that a democratic nation ought to concern itself more with output (the student after four years of schooling) than with input (the credentials at the time of admissions). To be sure, there would always be the prestigious institutions that would rely on meritocracy, but the philosophy of the 1960s was antithetical to such elitism.

The problems of the decade -- grade inflation, "relevant," as opposed to traditional, courses -- then seemed to become identified with the monster of open admissions, something that had to be eradicated from the academic land.

The result was that some institutions began to employ the logic of businesses that produced expensive products: (1) raise the tuition of the school and (2) tout rigorous admissions standards. By mimicking the Ivy League model, such institutions (1) would overcome the likely drop in enrollments in the 1980s as a result of decreasing numbers of college-age students and (2) could consider themselves better institutions.

What such colleges were loath to admit was that the range of increase in the SAT scores of admitted students was modest (putting most students in the wide middle) and thus was of dubious significance in designating the institution's movement into the meritocracy. To boast that the increase in scores was indicative of an improved student body was tantamount to passing off Thom McAn shoes as Guccis.

There are other sound reasons for dropping such exams: because they are under the purview of academicians rather than good writers, the reading and vocabulary sections are less an indication of a test-taker's ability than they are a reflection of the author's mode of expression, however wordy and unclear that might be.

Moreover, if the exams measure the ability of a student to develop skills or acquire knowledge in college -- qualities that allegedly can't be learned -- then the impressive growth in prep courses would suggest that an enormous fraud is being perpetrated or that the test-makers' secrets have, in fact, been discovered.

Perhaps most important, entrance exam scores make it too easy for admissions committees to arrive at decisions on students. A decision to admit should be as thoughtful and searching an experience as the examination a good professor would give to a class. That might involve interviews, discussions with former teachers or employers, as well as reviews of the student's previous work -- methods that do not lend themselves to a machine-scored raw total.

Theologians in the Middle Ages did much to ruin their profession by making mountains out of molehills ("How many angels," they argued, "can stand on the end of a pin?").

Teachers in ancient times did the same thing to truth by putting pebbles in the mouth of an alleged liar. In the contemporary quest for credentials and educational methods that rival those in the hard sciences, academic institutions should be wary of any litmus test that is likely to become more important than the value that can be added to a student's learning experience.

Thomas V. DiBacco is a historian at The American University in Washington.
