Below is a review I recently wrote about a book called "What Intelligence Tests Miss." The author's argument is that while IQ is a valid concept, basing estimations of intelligence on it alone overlooks an equally valid concept of rationality. As we all know, "book smarts" and "street smarts" do not seem positively correlated (one can have a high amount of one and a low amount of the other).
As an educator, I find this frustrating because I often feel (a) that the idea of IQ is unjustly inflated and does not give the whole story; and (b) that alternatives like Gardner's "multiple intelligence theory" are more politically and socially appealing than they are scientifically valid.
Stanovich, I think, offers an interesting "middle view." I would love to see his ideas fleshed out a little more.
____________________________________________________________
We are all familiar with the phenomenon of people with high IQs doing things that seem stupid. This leads to the distinction between "book smarts" and "street smarts," but strangely enough, we call BOTH of these things intelligence. We recognize both the absent-minded professor and the low-IQ entrepreneur as "intelligent." How, though, can the term "intelligence" apply to two seemingly uncorrelated things (being book-smart and being street-smart)?
Psychologist Keith Stanovich has an interesting idea: maybe "intelligence tests" measure intelligence (as traditionally defined) but not a wholly different faculty of rationality. To Stanovich, the difference between intelligence and rationality is the difference between the "algorithmic mind" and the "reflective mind" - that is, between the ability to employ algorithms and the ability to think about and CRITICALLY employ algorithms. (I might say that intelligence is the ability to map or write a sentence, while rationality is the ability to formulate arguments and write a persuasive essay.)
The first half of Stanovich's book is dedicated to showing that while IQ tests are a valid measure of a faculty of general intelligence (he does not deny that IQ tests measure a very real thing), they simply do not measure all that we understand to be good thinking.
Stanovich, though, is also a critic of those like Gardner and Sternberg who want to add to the number of "intelligences" (musical intelligence, naturalistic intelligence, creative intelligence). Such additions, he says, inadvertently beatify the term "intelligence" into a be-all-end-all that it is not (by implying that any good mental work must be called an "intelligence" rather than a "talent," "skill," or "proclivity"). Instead, Stanovich makes the point that intelligence is simply one component of good thinking. The other, often overlooked, ingredient is rationality (and he alludes to several studies showing that the two faculties are only weakly correlated: one can have high amounts of one and low amounts of the other).
What I thought and hoped Stanovich would do next - and what he did not do - was offer a sense of how we can test for RQ (rationality quotient). While the first half makes the case very well that rationality should be valued and tested every bit as much as intelligence, he does not follow it up by showing how such a thing might be done.
Instead, Stanovich devotes the second half of the book largely to cataloguing and demonstrating "thinking errors" that distinguish rational from irrational thought. For example, humans are "cognitive misers" by nature, prone to making decisions based on first judgments and quick (rather than thorough) analysis (a likely evolutionary strategy, as ancestors who were quick and somewhat accurate probably did better than those who were slow and very accurate). Also, humans often put more emphasis on verification than on falsification, and they fail to consider alternative hypotheses, preferring to go with the most obvious answer.
All of this, while interesting, has been documented better and more thoroughly in other books by decision theorists and psychologists. All Stanovich needed to do was refer us to those, at most devoting a chapter or two to examples. There is more important work for Stanovich to do than rehash what we can just as easily read elsewhere. Instead, I think he should have begun outlining ideas on how to test for rationality. What would such tests look like? How would they affect our educational system (focused, as it is, on IQ)? What would test questions even look like, and how could they be adjusted by age or grade level? Are there pitfalls?
None of these questions is answered, and Stanovich's argument is the worse for it. Stanovich himself notes that one big reason for IQ's predominance in the psychometric world is that it is measurable (which is a big strike against many of Gardner's "multiple intelligences"). Ironically, Stanovich's failure to suggest ways to measure RQ will likely consign his idea to the same fate as Gardner's.
It is a shame, though. As an educator concerned both with the undeserved predominance of IQ and with the failure of concepts such as Gardner's "multiple intelligences" to offer a serious challenge, I quite like Stanovich's germinal idea. We all know that rationality is a key component of good thinking, and it is hard to believe it correlates strongly with IQ, so it would be interesting to find a way to measure RQ as a valid supplement to IQ. It is simply too bad this book did not explore the practical questions raised by its tantalizing suggestion.
What's nice about being able to test for an effect is that you don't have to guess about whether it's real anymore; you just have to refine and improve your tests. In any complex system, a good approach to measuring effects is to think of anything that could possibly be a measurable effect, do the math, and see whether it cancels out to zero. If you find a test that gives consistent results, then you've discovered something useful even if it wasn't what you set out to validate.
The Myers-Briggs Type Indicator seems like a good example of this kind of approach working well. I don't get the impression that it was created out of any strong predictive model; it was just a rough guess at a good way of psychologically classifying people, and it seems to have a lot of value, so we keep using it. In my extremely limited experience with it, most people fall pretty neatly into one category, and their category makes pretty good predictions about the things it's intended to predict (even though some people stretch it too far).
Myers-Briggs is good. For IQ, the WISC is a standard test, and the way they know that this IQ thing is "real" is that the parts of IQ correlate very strongly with each other, and often (though not always) correlate strongly with other factors, like educational ability.
Rationality would be a very interesting thing to test for, and I do believe the author's claim that it is a key and missed component of what we think of as "broad intelligence."
But whether it is real or not doesn't matter if we find no good, reliable way to test it.
Yeah, I'm not sure how well I communicated this, but I wasn't raising any question of how "real" the RQ idea is. I was agreeing about how valuable a good test is in general, and about how much harder it is to deduce your way to any kind of certainty in the absence of a test procedure.
I mentioned Myers-Briggs because I think it's a good example of the "just try it and see" approach bearing fruit (not because it has anything to do with intelligence). I think they did a good job of not overthinking it when they created the personality dimensions. They just suggested something, we tried it, and it's worked out well so far. We'll probably scrap it before long for something better, and we'll probably find that "something better" the same way.