High Intelligence: Nature or Nurture?
Where does high intelligence come from? Some researchers believe that intelligence is a trait inherited from a person’s parents. Scientists who research this topic typically use twin studies to determine the heritability of intelligence. The Minnesota Study of Twins Reared Apart is one of the most well-known twin studies. In this investigation, researchers found that identical twins raised together and identical twins raised apart exhibit a higher correlation between their IQ scores than siblings or fraternal twins raised together (Bouchard, Lykken, McGue, Segal, & Tellegen, 1990). The findings from this study suggest a genetic component to intelligence (Figure LI.16). At the same time, other psychologists believe that intelligence is shaped by a child’s developmental environment. If parents provide their children with intellectual stimulation from before birth onward, the children are likely to absorb the benefits of that stimulation, and those benefits will be reflected in their intelligence levels.
Range of reaction is the theory that each person responds to the environment in a unique way based on their genetic makeup. According to this idea, your genetic potential is a fixed quantity, but whether you reach your full intellectual potential depends on the environmental stimulation you experience, especially in childhood. Think about this scenario: A couple adopts a child who has average genetic intellectual potential, and they raise the child in an extremely stimulating environment. What will happen to the couple’s new child? It is likely that the stimulating environment will improve the child’s intellectual outcomes over the course of their life. But what happens if this experiment is reversed? What if a child with an extremely strong genetic background is placed in an environment that does not stimulate them? Interestingly, a longitudinal study of highly gifted individuals found that “the two extremes of optimal and pathological experience are both represented disproportionately in the backgrounds of creative individuals”; however, those who experienced supportive family environments were more likely to report being happy (Csikszentmihalyi & Csikszentmihalyi, 1993, p. 187).
Another challenge to determining the origins of high intelligence is the confounding nature of our human social structures. It is troubling to note that some ethnic groups perform better on IQ tests than others, and it is likely that the results do not have much to do with the quality of each ethnic group’s intellect. The same is true for socioeconomic status. Children who live in poverty experience more pervasive, daily stress than children who do not worry about the basic needs of safety, shelter, and food. These worries can negatively affect how the brain functions and develops, causing a dip in IQ scores. Mark Kishiyama and his colleagues determined that children living in poverty demonstrated reduced prefrontal brain functioning comparable to that of children with damage to the lateral prefrontal cortex (Kishiyama, Boyce, Jimenez, Perry, & Knight, 2009).
The debate around the foundations of and influences on intelligence exploded in 1969, when an educational psychologist named Arthur Jensen published the article “How Much Can We Boost IQ and Scholastic Achievement?” in the Harvard Educational Review. Jensen had administered IQ tests to diverse groups of students, and his results led him to the conclusion that IQ is determined by genetics. He also posited that intelligence was made up of two types of abilities: Level I and Level II. In his theory, Level I is responsible for rote memorization, whereas Level II is responsible for conceptual and analytical abilities. According to his findings, Level I remained consistent across the human race; Level II, however, exhibited differences among ethnic groups (Modgil & Routledge, 1987). Jensen’s most controversial conclusion was that Level II intelligence is more prevalent among Asians than Caucasians, and more prevalent among Caucasians than African Americans. Robert Williams was among those who called out racial bias in Jensen’s results (Williams, 1970).
Obviously, Jensen’s interpretation of his own data caused an intense response in a nation that continued to grapple with the effects of racism (Fox, 2012). However, Jensen’s ideas were not solitary or unique; rather, they represented one of many examples of psychologists asserting racial differences in IQ and cognitive ability. In fact, Rushton and Jensen (2005) reviewed three decades’ worth of research on the relationship between race and cognitive ability. Jensen’s belief in the inherited nature of intelligence and in the validity of the IQ test as the truest measure of intelligence lies at the core of his conclusions. If, however, you believe that intelligence is more than Levels I and II, or that IQ tests do not control for socioeconomic and cultural differences among people, then perhaps you can dismiss Jensen’s conclusions as a single window that looks out on the complicated and varied landscape of human intelligence. Once again, the limitations of intelligence testing were revealed.
Racial Differences in Intelligence
Although their bell curves overlap considerably, there are differences in where members of different racial and ethnic groups cluster along the IQ distribution. Lynn’s 2006 work on racial differences in intelligence organizes the data by nine global regions, surveying 620 published studies from around the world with a total of 813,778 tested individuals. Lynn’s meta-analysis lists the average IQ scores of East Asians (105), Europeans (99), Inuit (91), Southeast Asians and Amerindians (87 each), Pacific Islanders (85), Middle Easterners, including South Asians and North Africans (84), East and West Africans (67), Australian Aborigines (62), and Bushmen and Pygmies (54). Lynn and co-author Tatu Vanhanen, of the University of Helsinki, argue that differences in national income correlate with, and can be at least partially attributed to, differences in average national IQ (Lynn & Vanhanen, 2002).
The observed average differences in intelligence between groups have at times led to malicious and misguided attempts to correct for them through discriminatory treatment of people of different races, ethnicities, and nationalities (Lewontin, Rose, & Kamin, 1984). One of the most egregious was the spread of eugenics, the proposal that the human species could be improved by encouraging or permitting the reproduction of only those people with genetic characteristics judged desirable.
Eugenics became popular in Canada and the United States in the early 20th century and was supported by many prominent psychologists, including Sir Francis Galton. Dozens of universities offered courses in eugenics, and the topic was presented in most high school and university biology texts (Selden, 1999). Belief in the policies of eugenics led the Canadian legislatures in Alberta and British Columbia, as well as the U.S. Congress, to pass laws designed to restrict immigration from other countries supposedly marked by low intelligence, particularly those in eastern and southern Europe. Two of Canada’s provinces and more than one-half of the U.S. states passed laws requiring the sterilization of low-IQ individuals. In Canada, approximately 5,000 people, mostly Indigenous women and those labelled “mental defectives,” underwent forced sterilization between 1928 and 1972. Fortunately, the practice of sterilization declined from the 1940s onward, although sterilization laws remained on the books in some American states until the 1970s.
One explanation for race differences in IQ is that intelligence tests are biased against some groups and in favour of others. By bias, psychologists mean that a test predicts outcomes, such as grades or occupational success, better for one group than for another. If IQ were a better predictor of school grade point average for Whites than for Asians, for instance, then the test would be biased against Asians, even though average IQ scores for Asians might be higher. But IQ tests do not seem to be racially biased in this sense, because the observed correlations between IQ scores and both academic and occupational achievement are about equal across races (Brody, 1992).
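This predictive definition of bias can be made concrete with a small calculation. The sketch below uses invented numbers for two arbitrary, unnamed groups (the data and group labels are hypothetical, not drawn from any study): it computes the correlation between test score and outcome separately for each group, and a test would count as biased in the psychometric sense only if those correlations differed sharply.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical illustration only: test scores and grade point averages
# for two made-up groups of six students each.
group_a_score = [95, 100, 105, 110, 115, 120]
group_a_gpa   = [2.4, 2.7, 2.9, 3.2, 3.4, 3.7]   # score tracks GPA closely

group_b_score = [95, 100, 105, 110, 115, 120]
group_b_gpa   = [3.1, 2.5, 3.4, 2.8, 3.6, 3.0]   # score tracks GPA weakly

r_a = pearson(group_a_score, group_a_gpa)   # near 1: strong prediction
r_b = pearson(group_b_score, group_b_gpa)   # much lower: weak prediction

# A large gap between r_a and r_b is what "biased" means in the
# predictive-validity sense described above.
print(f"r(group A) = {r_a:.2f}")
print(f"r(group B) = {r_b:.2f}")
```

In Brody’s (1992) finding cited above, the real-world analogue of `r_a` and `r_b` came out about equal across racial groups, which is why the tests are not considered biased in this particular sense.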
Another way that tests might be biased is if questions are framed such that they are easier for people from one culture to understand than for people from other cultures. For example, even a very smart person will not do well on a test if they are not fluent in the language in which the test is administered or do not understand the meaning of the questions being asked. But modern intelligence tests are designed to be culturally neutral, and group differences are found even on tests that ask only about spatial intelligence. Although some researchers are still concerned about the possibility that intelligence tests are culturally biased, it is probably not the case that the tests are creating all of the observed group differences (Suzuki & Valencia, 1997).
Although intelligence tests may not be culturally biased, the situation in which one takes a test may be. One environmental factor that may affect how individuals perform and achieve is their expectations about their ability at a task. In some cases these beliefs may be positive, and they have the effect of making us feel more confident and thus better able to perform tasks. For instance, research has found that because Asian students are aware of the cultural stereotype that “Asians are good at math,” reminding them of this fact before they take a difficult math test can improve their performance on the test (Walton & Cohen, 2003).
On the other hand, sometimes these beliefs are negative, and they create negative self-fulfilling prophecies such that we perform more poorly just because of our knowledge about the stereotypes. In 1995 Claude Steele and Joshua Aronson tested the hypothesis that the differences in performance on IQ tests between Blacks and Whites might be due to the activation of negative stereotypes (Steele & Aronson, 1995). Because Black students are aware of the stereotype that Blacks are intellectually inferior to Whites, this stereotype might create a negative expectation, which might interfere with their performance on intellectual tests through fear of confirming that stereotype.
In support of this hypothesis, the experiments revealed that Black university students performed worse (in comparison to their prior test scores) on standardized test questions when this task was described to them as being diagnostic of their verbal ability (and thus when the stereotype was relevant), but that their performance was not influenced when the same questions were described as an exercise in problem solving. And in another study, the researchers found that when Black students were asked to indicate their race before they took a math test (again activating the stereotype), they performed more poorly than they had on prior exams, whereas White students were not affected by first indicating their race.
Researchers concluded that thinking about negative stereotypes that are relevant to a task that one is performing creates stereotype threat — performance decrements that are caused by the knowledge of cultural stereotypes. That is, they argued that the negative impact of race on standardized tests may be caused, at least in part, by the performance situation itself.
Research has found that stereotype threat effects can help explain a wide variety of performance decrements among those who are targeted by negative stereotypes. When stereotypes are activated, children with low socioeconomic status perform more poorly in math than do those with high socioeconomic status, and psychology students perform more poorly than do natural science students (Brown, Croizet, Bohner, Fournet, & Payne, 2003; Croizet & Claire, 1998). Even groups who typically enjoy advantaged social status can be made to experience stereotype threat. White men perform more poorly on a math test when they are told that their performance will be compared with that of Asian men (Aronson, Lustina, Good, Keough, & Steele, 1999), and Whites perform more poorly than Blacks on a sport-related task when it is described to them as measuring their natural athletic ability (Stone, 2002; Stone, Lynch, Sjomeling, & Darley, 1999).
Research has found that stereotype threat is caused by both cognitive and emotional factors (Schmader, Johns, & Forbes, 2008). On the cognitive side, individuals who are experiencing stereotype threat show an increased vigilance toward the environment as well as increased attempts to suppress stereotypic thoughts. Engaging in these behaviours takes cognitive capacity away from the task. On the affective side, stereotype threat occurs when there is a discrepancy between our positive concept of our own skills and abilities and the negative stereotypes that suggest poor performance. These discrepancies create stress and anxiety, and these emotions make it harder to perform well on the task.
Stereotype threat is not, however, absolute; we can get past it if we try. What is important is to reduce the self-doubts that are activated when we consider the negative stereotypes. Manipulations that affirm positive characteristics about the self or one’s social group are successful at reducing stereotype threat (Marx & Roman, 2002; McIntyre, Paulson, & Lord, 2003). In fact, just knowing that stereotype threat exists and may influence our performance can help alleviate its negative impact (Johns, Schmader, & Martens, 2005).