
Introduction

Intelligence is a real trait that varies among individuals, and it has been recognized throughout history: cultures everywhere have acknowledged that some people are smarter than others, probably because the difference mattered for survival. In recent decades, however, discussions about measuring intelligence, especially through tests, have drawn criticism and accusations of bias and fraud. Despite this skepticism, the book proceeds from the view that intelligence can be accurately and fairly assessed through standardized tests, and understanding that perspective is essential for grasping the rest of its argument.

Intelligence Ascendant

The modern study of intelligence dates to the late 1800s and owes much to Charles Darwin's ideas about evolution, which suggested that inherited mental ability had played an important part in human development. Darwin's cousin, Sir Francis Galton, took up this idea and argued that intelligence runs in families and that people differ in their intellectual abilities, strengths, and weaknesses. Galton tried to build an intelligence test around sensory acuity and reaction time, how well people could see, hear, and respond to simple stimuli, but his efforts did not succeed.

Later psychologists built on his work. Alfred Binet, for example, created intelligence tests centered on reasoning and pattern recognition, which matched everyday notions of intelligence far better. By the early 1900s, mental tests of this kind were in use in many countries.

In 1904, Charles Spearman made a significant breakthrough using the correlation coefficient, a statistic that shows how closely two measures are related. He found that people who did well on one mental test tended to do well on the others, which suggested a shared ability underlying all of them; Spearman called it "g," or general intelligence. In his view, intelligence is not a matter of accumulating facts but of grasping connections and applying knowledge from past experience.
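A rough sketch of Spearman's observation, using invented numbers and modern tools rather than anything from the book: when scores on several tests are all positively correlated, a single dominant factor, a crude stand-in for "g," accounts for most of their shared variation.

```python
import numpy as np

# Invented scores for five people on three mental tests (illustrative only,
# not data from the book). Each row is a person, each column a test.
scores = np.array([
    [12.0,  9.0, 14.0],
    [18.0, 15.0, 20.0],
    [10.0,  8.0, 11.0],
    [22.0, 19.0, 25.0],
    [15.0, 13.0, 17.0],
])

# Spearman's starting point: correlations between each pair of tests.
# With rowvar=False, the columns (tests) are treated as the variables.
corr = np.corrcoef(scores, rowvar=False)
print(corr)  # every off-diagonal entry is positive: the "positive manifold"

# A crude extraction of a general factor: the leading eigenvector of the
# correlation matrix, i.e. the single shared dimension that best explains
# why the tests rise and fall together.
eigenvalues, eigenvectors = np.linalg.eigh(corr)       # ascending order
g_loadings = eigenvectors[:, -1]                       # loadings on that factor
share_explained = eigenvalues[-1] / eigenvalues.sum()  # variance it accounts for
print(g_loadings, share_explained)
```

Real factor-analytic work is far more careful than this eigen-decomposition, but the core idea, one common factor behind many correlated tests, is the same.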

Testing continued to evolve with ideas such as mental age and the intelligence quotient (IQ), which helped standardize how intelligence is measured. The U.S. Army's use of IQ tests to help assign recruits to roles during World War I made the idea of IQ widely known.
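For background on how the early quotient worked (the classic ratio formulation of IQ, stated here as general context rather than a detail from the summary):

    IQ = (mental age / chronological age) × 100

so, for example, a ten-year-old who performs like a typical twelve-year-old scores 12/10 × 100 = 120. Modern tests instead assign scores by comparing a person with others of the same age, but the name "quotient" survives from this formula.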

However, the use of intelligence tests became controversial. Test results were misused to support cruel policies such as eugenics, which sought to control reproduction on the basis of perceived intelligence. This fed into sterilization laws targeting people judged to have low intelligence, often among immigrant groups.

Critics, including prominent journalists, raised serious concerns about intelligence testing, arguing that it was wrong to think a brief test could determine someone's future abilities or worth. Even so, schools and the military wanted objective ways to evaluate people, and the demand for intelligence testing stayed strong.

In the 1930s, intelligence testing improved further as psychologists developed widely accepted standardized instruments, such as the Wechsler scales, that were easier and cheaper to administer to large groups. World War II spurred further advances aimed at identifying specific skills for military roles.

By the mid-20th century, although it was well understood that intelligence varies widely, debates over its implications had cooled, and the emphasis shifted to using intelligence tests for practical purposes rather than arguing over their meaning. The testing industry grew, with millions of people around the world tested each year. Despite continuing concerns about misuse, the tests remained important tools in education, the military, and other areas, keeping alive questions about the ethics and effects of measuring human abilities.

Intelligence Besieged

In the 1960s, discussion of intelligence tests grew heated as views about equality changed. The civil rights movement and the War on Poverty made people far more aware of social inequalities, and this shift affected not only politics but also how psychologists viewed social problems. Before the 1960s, experts had debated how much of intelligence was due to genes and how much to environment; by the 1960s it had become controversial even to say that genetics affected intelligence at all, despite evidence that it did.

In earlier decades, psychological research had taken inherited influences on behavior seriously, but between the 1930s and 1960s the field's focus shifted to learning, playing down genetic differences between individuals. The behaviorists leading this shift believed that human potential could be shaped almost entirely by the environment. In their view, failures in intelligence or social behavior were caused by problems in society, such as poor education or lack of health care, and fixing those problems would eliminate the deficiencies.

This view clashed with the position that individual differences in intelligence cannot easily be changed by government intervention. In 1969, Arthur Jensen published an article arguing that educational programs meant to help students with low IQs were failing because those IQs were largely inherited. He also noted that average IQ varied between racial groups, which provoked a fierce backlash and led to him being branded a racist.

The situation worsened when the scientist William Shockley suggested that people with low IQs should be sterilized, further inflaming those who held that environment played the larger role in intelligence. In 1971, Richard Herrnstein published an article linking inherited IQ to social success; it provoked another outcry even as it reinforced the positions of Jensen and Shockley.

The legal landscape changed as well: in Griggs v. Duke Power Co. (1971), the U.S. Supreme Court ruled that employers could not use standardized tests unless the tests were directly related to the job. In the ruling's wake, many schools limited or stopped using standardized tests such as IQ tests. The debate over whether intelligence is inherited became personal, with critics branding those who accepted its heritability as frauds.

Leon Kamin argued that belief in inherited IQ grew out of right-wing and racist motives. A further scandal followed when Cyril Burt, a well-known researcher, was accused of fabricating data, casting more doubt on intelligence testing. In 1981, Stephen Jay Gould published "The Mismeasure of Man," which criticized the history of IQ testing and argued that intelligence cannot be measured accurately.

By the early 1980s, many people had concluded that the very idea of intelligence was flawed. IQ tests were viewed as culturally biased and unreliable, and critics claimed they did not truly predict success in life and harmed students, especially those from disadvantaged backgrounds, by labeling them and reinforcing negative stereotypes.

Intelligence Redux

Public discussion of intelligence and IQ tests often does not match what researchers actually know; there has long been a gap between what appears in the news and what scholars publish in academic journals. Arthur Jensen is one example: heavily criticized in public, he quietly continued important research within the professional community. His experience shows the risks scholars take when they address controversial topics.

In the 1970s, many researchers saw that discussing the benefits of IQ tests, or the possibility that intelligence is partly inherited, could seriously harm their careers and personal lives. As a result, work on cognitive abilities continued mostly within academic settings, where it made progress despite the surrounding debates. Over time researchers reached new understandings, while public discussion kept returning to simpler questions, such as whether intelligence is mostly shaped by the environment or whether the tests are biased.

By the early 1990s, professionals in the field had sorted themselves into three main camps: classicists, revisionists, and radicals. Classicists focus on the structure of intelligence and treat the concept of "g," or general intelligence, as its core. They argue that g is a stable, measurable human ability supported by a variety of research methods. Although the idea of a single intelligence factor has been challenged throughout its history, classicists maintain that well-designed IQ tests measure g effectively, are not biased against particular economic or racial groups, and predict important outcomes in life.

Revisionists concentrate instead on how people actually use their intelligence rather than on measuring what it is, arguing that the processes involved in thinking and problem-solving are what matter. Robert Sternberg's "triarchic theory," for example, identifies three aspects of intelligence: the internal mental processes at work, the way tasks become routine with experience, and the application of intelligence to the real world. Revisionists believe this approach offers a better account of how intelligence functions in daily life, and they seek to develop tests that reflect this broader view.

Radicals, on the other hand, often reject traditional ideas of intelligence altogether. Howard Gardner, known for his theory of multiple intelligences, argues against a single intelligence factor like g and proposes seven distinct intelligences, such as linguistic and logical-mathematical ability. He treats problem-solving as central to intelligence but offers no quantitative evidence for his claims, viewing the subject as more interpretive than strictly measurable.

In short, conversations about intelligence differ greatly between public opinion and academic research, underscoring how hard cognitive ability is to define and understand. Each camp contributes its own perspective on what intelligence really is and how it can be measured.

The Perspective of This Book

The book's central premise is that intelligence is best understood through the classical tradition, which offers a large body of scientific evidence that current policy discussions tend to ignore. This perspective emphasizes aggregate data: averages across groups reveal how human abilities relate to policies affecting society, whereas a single person's IQ score by itself offers very limited information. Knowing the average IQ of a class, for example, gives real insight into its likely educational outcomes.

Other views of intelligence exist, but the focus here is the classical approach. It recognizes that human talents go well beyond what is usually meant by intelligence; skills such as music and athletics, along with personal traits, matter too, but calling them intelligence invites confusion and makes it harder to appreciate what is distinctive about each quality.

The book also warns against assuming that a high IQ means a person has admirable social qualities. Many people assume that someone who is funny or charming must also have a high IQ; there may be a small correlation between IQ and such traits, but it is far too weak to judge anyone by.

Some people, known as "idiot savants," can perform very complex tasks despite low IQ scores, which makes it tricky to tie IQ scores directly to cognitive ability. Partly to sidestep the political baggage attached to the word "intelligence," the term "cognitive ability" is often used instead.

Six important conclusions about cognitive ability are laid out. First, there is a general factor of cognitive ability on which people differ. Second, standardized tests measure this factor, and IQ tests measure it most accurately. Third, IQ scores tend to remain stable over a person's life. Fourth, properly administered IQ tests are not biased against social or racial groups. Fifth, cognitive ability is partly inherited. Sixth, although experts largely agree on these points, they still differ in how they interpret them and what they take them to mean.

Media coverage often contradicts these conclusions, leaving the public confused. To help readers sort this out, specific topics are addressed with evidence throughout the text. The main takeaway is to be cautious in interpreting IQ scores and to appreciate the complexity of intelligence and of individual qualities.