
ISR Issue 57, January–February 2008



CRITICAL THINKING

IQ, genetics, and racism

PHIL GASPER discusses how the myths about biology and intelligence refuse to disappear

SCIENTIFIC RACISM—the attempt to develop a spurious scientific justification for the claim that some racial groups are superior to others—is an intellectual corpse that refuses to stay buried. In October 2007, James Watson, who shared a Nobel Prize in 1962 for discovering the structure of DNA molecules, told a British newspaper that he is “inherently gloomy about the prospect of Africa” on the grounds that “all our social policies are based on the fact that their intelligence is the same as ours—whereas all the testing says not really.” Watson continued that while we would like to think that all humans are equal, “people who have to deal with black employees find this not true.”

Watson’s comments rightly caused an uproar, and a tour of England to promote his latest book was called off. A speech Watson was scheduled to give at London’s Science Museum was cancelled and an official announcement described his remarks as “beyond the point of acceptable debate.” In the U.S., the Federation of American Scientists issued a statement declaring that it was “outraged by the noxious comments of Dr. James Watson…[who] chose to use his unique stature to promote personal prejudices that are racist, vicious, and unsupported by science.”

The 79-year-old Watson soon apologized “unreservedly” for suggesting that Blacks are less intelligent than whites, claiming, “This is not what I meant.” But Watson’s comments were no off-the-cuff remarks, since in his new book he writes, “There is no firm reason to anticipate that the intellectual capacities of peoples geographically separated in their evolution should prove to have evolved identically. Our wanting to reserve equal powers of reason as some universal heritage of humanity will not be enough to make it so.” As the controversy broke, Watson was suspended from his position as chancellor of the Cold Spring Harbor Laboratory on Long Island, and he resigned in disgrace a few days later.

That, however, has been far from the end of the controversy. A few weeks after Watson’s resignation, the New York Times reported that, “fervid debates about race, genes and I.Q. have sprung up on the Web, in publications and in conference rooms.” Far-right groups, of course, had a field day with Watson’s remarks, but soon after he made them, the influential libertarian Cato Institute hosted a debate on “The IQ Conundrum” in its online journal, in which Linda Gottfredson, a prominent University of Delaware sociologist, defended the view that differences in IQ scores between Blacks and whites may have a genetic basis. Perhaps most shockingly, in November William Saletan, a science writer for the liberal Slate.com Web site, published a series of articles in which he argued that Watson’s original comments were correct. According to Saletan:

Tests do show an IQ deficit, not just for Africans relative to Europeans, but for Europeans relative to Asians. Economic and cultural theories have failed to explain most of the pattern, and there’s strong preliminary evidence that part of it is genetic. It’s time to prepare for the possibility that equality of intelligence, in the sense of racial averages on tests, will turn out not to be true.

Saletan couched his argument in the context of much liberal hand-wringing about the possibility of maintaining a commitment to political equality in the face of biological inequality, ignoring the fact that the view he was defending shifts the blame for very real social and economic inequalities away from racist policies and practices. Within a few days, however, Saletan issued a semi-apology, admitting that he had based his argument partly on the work of the Canadian psychologist J. Philippe Rushton, a notorious white supremacist who has long defended the idea of biological differences in intelligence between racial groups. Rushton is president of the Pioneer Fund, the main funder of scientific racism in North America, which was originally founded by Nazi sympathizers in the 1930s and is still classified by the Southern Poverty Law Center as a hate group. Slate.com was embarrassed enough to publish a critique of Saletan by another of its regular writers.

Defenders of the view that some racial groups are genetically more intelligent than others typically base their claims on the fact that some groups score better than others on IQ tests and on the assumption that IQ measures some inherent characteristic of the human mind. But anyone aware of the history of intelligence testing knows how dubious this assumption is.

The first intelligence tests were developed by the French psychologist Alfred Binet a little over a century ago, with the aim of identifying children who were having difficulty in school and who would benefit from remedial education programs. Binet attempted to identify intellectual tasks that an average child of a particular age in a given cultural environment could be expected to perform. Children whose “mental age” was more than two years lower than their chronological age were deemed in need of special education. Binet’s view was that, with the rare exceptions of individuals suffering from brain damage and other mental disorders, all children could perform well in school if given the appropriate resources and support. In other words he rejected the view that a person’s intellectual level is something fixed and unchanging.

When Binet’s tests crossed the English Channel and the Atlantic, however, and were taken up by leading British and American psychologists who embraced social Darwinism, their results were immediately given a hereditarian interpretation, according to which the characteristic they measured was given at birth and fundamentally unalterable. The Stanford psychologist Lewis Terman refined the items on Binet’s tests (giving birth to the standard Stanford-Binet test), adopted the term “Intelligence Quotient” (IQ) for the ratio of mental age to chronological age, and established a measurement scale with 100 representing average performance. He claimed that IQ was not just a useful diagnostic tool for schoolchildren but a measure of general intellectual ability in adults, and he relentlessly promoted a hereditarian view of intelligence, even though his own data revealed only a weak correlation between social status and IQ. According to Terman:

Practically all of the investigations which have been made of the influence of nature and nurture on mental performance agree in attributing far more to original endowment than to environment. Common observation would itself suggest that the social class to which the family belongs depends less on chance than on the parents’ native qualities of intellect and character…. The children of successful and cultured parents test higher than children from wretched and ignorant homes for the simple reason that their heredity is better.
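
To make the ratio definition above concrete (the figures here are a hypothetical illustration, not data from Terman’s studies), the quotient was simply mental age divided by chronological age, multiplied by 100:

IQ = (mental age ÷ chronological age) × 100

So a ten-year-old who performed like a typical twelve-year-old scored (12 ÷ 10) × 100 = 120, a ten-year-old who performed like a typical eight-year-old scored (8 ÷ 10) × 100 = 80, and a child whose mental and chronological ages matched scored exactly 100.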

Whereas Binet’s tests had required trained personnel to examine individual children, Terman invented the mass IQ test, which could be administered to large numbers of children at the same time. These tests accepted only one correct response to questions that could often be interpreted and answered in different ways, and they sometimes exhibited glaring social and cultural biases (such as requiring knowledge of tennis, bowling, and the design of playing cards).

In contrast to Binet, Terman advocated a system of vocational training that would consign low-IQ children to unskilled and semi-skilled work. Without it, he warned in 1919, individuals with IQs of seventy to eighty-five would “drift easily into the ranks of the anti-social or join the army of Bolshevik discontents.” Terman’s views also reflected his own racist assumptions. He wrote of children in the seventy to eighty IQ range:

No amount of school instruction will ever make them intelligent voters or capable citizens…. They represent the level of intelligence which is very, very common among Spanish-Indian and Mexican families of the Southwest and also among negroes. Their dullness seems to be racial, or at least inherent in the family stocks from which they came. The fact that one meets this type with such extraordinary frequency among Indians, Mexicans, and negroes suggests quite forcibly that the whole question of racial differences in mental traits will have to be taken up anew by experimental methods. The writer predicts that when this is done there will be discovered enormously significant racial differences in general intelligence, differences which cannot be wiped out by any scheme of mental culture.

But as the biologist Stephen Jay Gould points out in his classic exposé of scientific racism, The Mismeasure of Man, Terman’s arguments for his hereditarian conclusions were laughably weak. For example, after determining that the average IQ of twenty children in a California orphanage was low, Terman attributed this to the fact that most were “children of inferior social classes.” He dismissed the alternative explanation that living without parents in an institution might affect a child’s development with the following assertion: “The orphanage in question is a reasonably good one and affords an environment which is about as stimulating to normal mental development as average home life among the middle classes.” As Gould notes, for Terman and like-minded psychologists hereditarianism was really not so much a conclusion to be argued for as a matter of unquestioned common sense. Terman wrote, “Does not common observation teach us that, in the main, native qualities of intellect and character, rather than chance, determine the social class to which a family belongs?”

But “common sense” is all too often simply a reflection of social prejudice. When hereditarian views about intelligence were revived in the late 1960s and early 1970s by Arthur Jensen, Richard Herrnstein, and others, the philosopher of science Hilary Putnam argued that they are built on the assumption “that there are a few ‘superior’ people who have this one mysterious factor—‘intelligence’—and who are good at everything, and a lot of slobs who are not much good at anything.” If this assumption seems plausible to some people, Putnam continued, it is only because we live in a society that is highly stratified. But, in fact, “ordinary people can do anything that it is in their interest and do it well when (1) they are highly motivated and (2) they work collectively.” As Putnam pointed out:

That motivation plays a decisive role in acquiring almost any skill is a matter of everyone’s experience.... The importance of working collectively is also evidenced in many ways. The Black and Latin prisoners in Attica Prison are presumably part of [what Herrnstein calls] the low IQ “residue.” But they organized brilliantly. Every popular revolution in history makes the same point—that ordinary people in a revolution can perform incredible feats of organization, planning, strategy, etc.

Ever since IQ tests were invented, their results have been used to justify social and economic inequalities as a reflection of natural differences. But such claims are based on a series of myths.

Myth #1: IQ tests measure intelligence.

The simplest reason why this cannot be true is that, in ordinary usage, intelligence is an inherently vague concept, and it has never been given an agreed-upon, precise scientific definition. The psychologist E. G. Boring once proclaimed that intelligence is whatever IQ tests measure, but without some further justification this definition is purely arbitrary. In fact, the two ideas most commonly associated with intelligence by experts are the ability to adapt to one’s environment and the ability to learn, but traditional IQ tests are not designed to measure either of these capacities. According to a recent survey article in the journal of the American Psychological Association, “traditional tests focus much more on measuring past learning, which can be the result of differences in many factors, including motivation and available opportunities to learn.”

Does this mean that we should conclude that IQ tests measure nothing more than the ability to do well on IQ tests? There is probably more to it than that. The tests have become more sophisticated over time (the most obvious cultural biases have been removed, for example) and some do imperfectly measure capacities that can be reasonably associated with intelligence, such as information comprehension and certain kinds of abstract reasoning and problem solving. But such analytical skills at best represent only a part of what is normally understood by intelligence, which also includes practical and creative abilities that IQ tests ignore. Indeed, the Harvard psychologist Howard Gardner has argued that there are as many as eight distinct forms of intelligence. Thus IQ tests should be seen as no more than a way of assessing some aspects of one kind of intelligence. Since intelligence encompasses a variety of distinct capacities, it is highly unlikely that overall intelligence can be meaningfully ranked on a single linear scale. But even if it could be, a person’s IQ score would not be that measure.

Myth #2: IQ measures something innate, fixed, and unchangeable.

Whatever IQ tests measure, it is not something unchangeable, fixed forever by an individual’s genetic inheritance. The clearest evidence of this is the so-called Flynn effect (named after the intelligence researcher James Flynn): IQ scores have been rising steadily and significantly since the first tests were devised, even though the average score is periodically adjusted back to 100. For example, U.S. children with average IQ scores in the 1930s would only score around 80 on today’s scale. Since there has not been enough time for significant genetic change over this period, these results indicate that IQ can be dramatically raised by social and environmental factors. Other researchers have noted that the analytical abilities the tests measure are repeatedly instilled by Western-style education, indicating both that these abilities can be improved and that the tests may be culturally biased.
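
The arithmetic behind that comparison is straightforward if one assumes the commonly cited estimate of roughly three points of measured gain per decade (a rough figure from the Flynn-effect literature, not one given in this article): about 3 points per decade over roughly seven decades amounts to some 20 points, so performance that averaged 100 against 1930s norms corresponds to only about 80 against today’s repeatedly re-centered norms.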

Myth #3: Race is a biologically significant category.

Claims that racial differences in IQ scores have a genetic basis assume that race is a biologically useful category. But the racial groups into which we commonly divide people—Black, white, Asian, etc.—do not correspond to any significant biological divisions. The Marxist biologist Richard Lewontin demonstrated in the 1970s that there is much more genetic variation within such groups than there is between them.

From the biological point of view we can designate any local interbreeding population as a “race” if we like, but this concept has very little relation to the ordinary use of the term. Since neighboring populations will differ in some degree with respect to their gene frequencies, they will count as different races. Alternatively we could say that two groups count as the same race if the differences between their gene frequencies are not too great—but since the differences come in degrees, there is no biologically non-arbitrary place to draw the line. This is why race is a social and historical construction, not a biological reality.

Myth #4: Group differences in IQ scores are genetic.

Within the U.S., Blacks consistently score lower on IQ tests than whites. Similarly, Africans score lower than Europeans. Despite the fact that, as noted, race is not a biologically relevant category, there have been repeated claims that these differences are due to genetic differences between the relevant populations. Given the genetic heterogeneity of all these populations (because modern humans first evolved in Africa, there is in fact more genetic variation in Africa than in the rest of the world), these claims are initially highly implausible and there is no serious evidence in their favor. After a recent survey of the available research, the distinguished University of Michigan psychologist Richard Nisbett concluded, “The evidence most relevant to the question indicates that the genetic contribution to the Black-white IQ gap is nil.”

In fact, the Black-white IQ gap in the U.S. has narrowed significantly over the past thirty years, suggesting that if environments and educational opportunities were truly equalized, it would disappear completely. One study found that Black children adopted by white families that provided more educationally stimulating environments had IQs thirteen points higher than Black children adopted by Black families. Another study compared German children fathered by Black American GIs during the post-1945 occupation with children fathered by white GIs and found no significant difference between their IQs.

Myth #5: Differences in levels of economic development reflect differences in intelligence.

Finally, despite the absence of any other credible evidence, the fact that there are enormous global inequalities in wealth and technology leads some people (including, it seems, James Watson) to conclude that these differences must be due to innate differences in intelligence between populations. This myth is exploded by the biologist Jared Diamond in his book Guns, Germs, and Steel. Diamond argues that people living in societies with Stone Age technology, such as traditional New Guineans, need to be at least as intelligent as modern Europeans and North Americans in order to survive.

Civilization developed more rapidly on the Eurasian landmass not because its inhabitants were more intelligent, but because they had the good fortune to live in more favorable environments, which had more wild plants and animals that could be easily domesticated, and an East-West axis allowing the easier transmission of agricultural advances. The technological advantages that accrued from this, together with a greater immunity to infectious diseases acquired from centuries of living in close proximity to domesticated animals, eventually allowed Europeans to invade, conquer, and colonize large portions of the globe, sucking out resources that gave a further boost to their economic development, leading to the vast inequalities we see today.

Myths about race and intelligence will persist as long as we live in a racist society. They are one instance of the doctrine of biological determinism that has become an integral part of modern capitalism. “The problem for bourgeois society,” notes Lewontin,

is to reconcile the ideology of equality with the manifest inequality of status, wealth, and power, a problem that did not exist in the bad old days of Dei Gratia. The solution to that problem has been to put a new gloss on the idea of equality, one that distinguishes artificial inequalities which characterized the ancien régime from the natural inequalities which mark the meritocratic society…. Biological determinism…is part of the legitimating ideology of our society, the solution offered to our deepest social mystery, the analgesic for our most recurrent social pain.

The myths need to be refuted each time they emerge, but they will not finally disappear until we organize to change the society that gives rise to them.


Phil Gasper teaches philosophy at Notre Dame de Namur University in California and is editor of The Communist Manifesto: A Road Map to History’s Most Important Political Document (Haymarket Books, 2005). He can be reached at [email protected]
