Tuesday, March 31, 2009

On the Human Quest for Cosmic Purpose and Rationale

On another blog, a fellow atheist has written a piece on the human quest for external purpose. Many humans, that is, quest for a purpose in life that is external and not of their own making ("I am part of god's plan," etc.).

The author of the article correctly notes that as the universe does not itself have desires and is not cognizant, the world has no ultimate purpose (and even if a god created us for or with a purpose, that does not mean we are bound by that purpose or that it is a purpose that would satisfy us).

Early in the article, though, he asks a very non-rhetorical question that I think should and can be answered:

I do not know where this want for an external purpose comes from. Perhaps it is some left-over brain program from our childhood where we are inclined to prefer the values of our parents. Perhaps it is something that is taught from one generation to the next without stopping to consider the wisdom (or foolishness) of doing so.

Fyfe cannot identify with the search for an external, larger purpose to life, or with the emptiness many can feel when they do not find one. While I cannot relate personally - I have no need to find a 'bigger' purpose - I can certainly sympathize with those who do, and I think I can give some explanations here. And the search for a 'bigger' purpose has little to do with the motivations Fyfe proposes.

Why do we search for grand purpose in life rather than being satisfied with human-made purposes? First, I believe we do this because lives that contain hardship are assuaged by the belief that "there has to be more than just this." Slavery is a good example. The slave, forced to toil for many hours a day under brutal conditions, and with no control over his/her own life, may take comfort in the idea of being part of something larger and better than the day-to-day. Saying that the slave's own life is the purpose of his/her life will not do, because he/she feels that life to be intrinsically purposeless.

But what about those who are better off? Why do folk with relatively happy (or at least very tolerable) lives feel the need to search for a larger purpose? I have known many who've done this - who go into a particular profession, for instance, because they feel they are "serving the greater good," or the like. Even if they have created this purpose for themselves (as they often have), they are comforted by the sense that they are serving a purpose larger than themselves, and would be unsatisfied with the suggestion that they have created, rather than been given, a purpose for their lives.

In these cases, having a purpose imposed on one (or the feeling of it) makes the purpose feel more real and legitimate. It does not feel self-serving, but other-serving. The best analogy I can think of is the satisfaction of scoring well on a test of someone else's creation versus scoring well on a test that one wrote oneself. The latter brings no satisfaction because the task was too easy: one simply gave the answers to questions one created. The real satisfaction lies in beating the challenge of performing well under conditions not of one's own making.

In the same way, creating one's own purpose to live up to is less satisfying, because less challenging, than fulfilling a life purpose created by something outside oneself.

Another reason serving a "larger" purpose is seen as more fulfilling than serving one's own individually made purpose is that when serving others, one can feel like one is making a difference in the world, rather than just in oneself. This is reflected in the familiar film and book plot of the shallow business person (or other type of egoist) who lives solely for himself but unexpectedly finds joy in helping others (generally, at first, against his will). [Films like "Family Man" or "Two Weeks Notice" are more popular examples.]

I have never really felt the need to seek after an externally imposed, and larger, purpose. But I have felt the need to help others rather than pursue solely what is in my own personal interest. I did this when I became a teacher rather than going into a PhD program. I wanted to do the latter, but did the former because I wanted to do something at once hard, helpful, and effectual. All I can say is that this choice, while certainly the tougher path, felt more "solid" than the self-serving goal of a PhD because it involved helping others by making a sacrifice. While I was not searching for a higher purpose, I can attest to the psychological pull that serving social (and in some sense, larger) purposes can have over serving individually made (and smaller, by radius) purposes.

Sunday, March 29, 2009

Mackie and Kevin and Ben...Oh My! (A re-defense of moral subjectivism)

I have been pleasantly surprised by a recent, lengthy post on my friend Ben Hayek's blog with a main subject of...me. (Is there a better subject for an extended essay, I wonder?) Specifically, Ben takes issue with several posts I've made here defending moral subjectivism - the view that moral judgments are subjective preferences rather than objective facts.

Like many objectivists, Ben is quite angered by subjectivism because, if it is true, we could not make objectively valid moral judgments. The first part of his post suggests a moral dilemma: Ben and I confront a person torturing a cat. Ben, the objectivist, would be able to very objectively condemn the torturer, while I would not.

On Kevin's view, when a subject declares that "torturing kittens for pleasure is immoral," the subject is not identifying a fact about the world (i.e., not picking out a constituent of objective reality), but rather identifying only a state within the subject's consciousness. Kevin describes this state within the subject as no different to whenever a human being feels an emotion. Thus, if I encountered a sadist on the street about to torture kittens for the pleasure it gives him, if I rescued the kittens by force prior to their fate, I would not be doing so because it is a matter of objective fact that what the kittens were about to be subjected to is objectively wrong, but because I only feel that it would be wrong.

I suppose I am guilty as charged here. Ben is right that I agree with him that torturing kittens is wrong, but that I disagree with him about the objective import of that view. I would, in a sense, be powerless to objectively condemn the sadist.

I am quite at a loss, though, to see why objectively condemning a sadist would have any more import than subjectively condemning her. Does the sadist care whether my condemnation comes from me or from the Cosmic Objective Order? I doubt it. I doubt a sadist who tortures kittens feels very bound by a moral excoriation - no matter whether it is subjective or objective in nature (whatever the hell the difference looks like).

And this is one of my big objections to moral objectivism; as J.L. Mackie and others have noted, if morals exist as objective things, they are very strange things - things like no other things we know of. They have no physical embodiment or mode of detection (other than an introspective method identical to that used to ascertain subjective emotion!). And they are the only things (assuming they are things) that impose obligations on witnesses' behavior. Facts are real, but I can think of no fact that obligates us to change our behavior the way morals do. So, are they facts? I cannot see any good reason to think that they are.

But apparently, Ben does. He suggests that they are facts just as 2+2=4 is a fact. He suggests that just as this mathematical axiom has no basis in observable sense data (but remains a fact nonetheless), so with moral facts. 2+2=4, killing kittens is wrong... pretty similar, eh?

Actually, no. 2+2=4 does have a basis in physical fact and is observable (verifiable, if you will). As numbers are symbols, "2" means ** and "4" means ****. We can verify, then, that 2+2=4 is valid by adding ** to ** and seeing whether the result equals **** (the representation for "4").
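The tally-mark point can be put as a toy sketch (my own illustration, not part of the original post; the function name `tally` is invented): numerals are names for collections of marks, and "addition" is just placing collections side by side.

```python
# Toy illustration of the tally-mark argument: treat "2" and "4" as
# names for collections of marks, and "addition" as putting the
# collections side by side.
def tally(n):
    """Represent the number n as a string of tally marks."""
    return "*" * n

# Combine the marks for 2 with the marks for 2...
combined = tally(2) + tally(2)

# ...and check whether the result is the collection named by "4".
assert combined == tally(4)
print(combined)  # prints ****
```

The check succeeds not by consulting a Platonic realm but by observing that the two combined collections of marks are the same collection that "4" names - which is the empirical reading of arithmetic argued for here.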

But Ben has upped the ante. He suggests that if I hold that morals are not facts because they are not grounded in the world of observable and testable things, then I have a problem: that very statement (that morals are not facts because they are not grounded in observable and testable things) is itself not grounded in the observable, testable world. Oh dear!

Of course, Ben's objection misunderstands the burden of proof. Just as he who claims that faeries do not exist does not bear the burden of proof (the affirmer of faeries does), my statement is something of a null hypothesis, equivalent to a denial of faeries' existence. Just as the faery denier is waiting for proof of so-called objective faeries, I am waiting for proof of so-called objective morals.

And, yes, there is reason to doubt such existence. While Ben is right to say that the fact of moral disagreement is not enough to make the case for moral subjectivism, it does not itself bode well for moral objectivism. Moral disagreement is not just existent but pervasive and often intractable. When two people disagree over whether capitalism is wrong, for instance, they can argue facts (capitalism leads to x, socialism does not lead to x), but the argument often comes down to an impasse (I think x is wrong, and you don't), where neither can think of any fact-based way to convince the other.

This is not easily explainable under moral objectivism other than to say, as Ben does, that "people in general are stupid." This will not do, though, because he who makes that charge must be able to point to - POINT TO - what it is that the dummies are overlooking (and if that something is intuition, I am sure the dummies have intuitions too, in which case the accuser must argue, non-circularly, as to why his are superior to theirs).

Lastly, Ben wonders what my criterion for truth is. Unfortunately, this is an area of philosophy I have little patience for. What we mean by "true" appears to be a tautology: "truth" is "that which accurately describes how things are." How do we recognize truth? The only good answer I see is: by experiment. If I write out the directions to Sean's house, the best way to test them is to see if the directions actually get us there. If I say that theory x is true in science, the way to test that (unlike what ID creationists do) is to test the theory and try to falsify it. If I say, "I love my fiancée," the only - and imperfect - way to see if that is true is to observe me and see whether my actions are consistent or inconsistent with that statement.

But how in the world do we verify "killing cats is wrong"? We can verify "killing cats will cause them physical pain." We can verify "I have an aversion to seeing cats inflicted with physical pain." Those are all matters open to perfect or imperfect experiment. But what we cannot verify is the next step: "imposing physical pain is wrong." What does "wrong" refer to? We can't observe it. We can only intuit it (just as we intuit our own subjective emotions, or God talking to us).

Ben is clearly not satisfied with the idea that to be called "truth" a thing has to be observable. He hints at logic being a falsifier of this claim. I think not. Logic is only as good as the results it leads to, and the only reason we call the law of non-contradiction true is that we test it both empirically and mentally. After all, logic is a tool, and the only truth a tool has is that it does what it says it will do. When we use logic, it leads us to results that work as promised (just as theories are tested by seeing if they work the way they say they will).

And as for 2+2=4, it is the same. Its truth comes not from a heavenly, mystical Platonic realm that humans cannot see (but can intuit), but from the fact that we can observe ** plus ** equalling ****. So, I am afraid, I hold to the horrible idea that "truth" gets its name from its ability to work in practice.

In the meantime, I'm going to keep on believing concepts that I believe exist in the world, albeit in a transcendental sense of course, such as mathematical truth, logical truth, and moral truth, just like I believe in the thoroughly objective nature of truth in general, along with the belief that they are - all of them - constituents of the world.

Many men intuit God and argue very fervently that He "exist[s] in the world, albeit in a transcendental sense," and that, despite his objective nature, we can't sense him with the five senses. We can only intuit him. Ben has made the case that ethics are like this. But is he persuaded by such religious arguments? (Hint: Ben is not a Christian because, despite God having all the properties Ben argues that morals have, he does not believe there is evidence for God's existence.)


The Intrinsic Pleasure of Nice Things

Several weeks ago, while my fiancée went shopping for wedding accessories, I killed time in a Brookstone store. While browsing the items, I came across the clock pictured on the left. I did not buy it then because I do not need an alarm clock, and even though I found it a positively eye-catching clock, I did not want to buy it solely for that reason. Buying such things would invariably make me feel guilty.

So, I am writing this essay in an attempt to justify yesterday's purchase of said clock. I liked its look, and even though I didn't need it, I bought it. Is there, I keep asking, anything wrong with that?

First, I must explain that I am not a frivolous spender. As a teacher, I make a modest amount and, as such, don't waste money. Generally I don't buy something unless I need it. This clock is, you might say, an isolated incident. But enough delay; is there anything wrong with what I did?

Like many, I have unconsciously bought into the school of thought that sees buying "luxury" items as shameful - a sign of shallowness and (pejorative ahead) consumerism. Some theorize that luxury buying is our attempt to gain power and order in our otherwise powerless and orderless lives. Others suggest that luxury consumerism stems from our increasing and insatiable desire to chase after a certain vision of a perfect life (which, as the word "perfect" connotes, is unattainable).

Such frivolity equates, in intellectual circles, to a lack of appreciation of "real" value in favor of ephemeral and fleeting value. As mentioned in a previous post, a television show I watched recently featured a philosopher, whose name I don't remember, affirming Plato's point that to focus on sensual pleasure is surely to miss or ignore the "real" pleasures (like, of course, intellectual stimulants such as philosophy). Philosophers, it seems, are Platonists rather than Epicureans.

Similarly, a book written a few years ago by another snooty intellectual, called "The Middle Mind," excoriates those who do such ghastly things as shop for luxury items (or at Wal-Mart) as "middle minds" - those less intellectually equipped than enlightened minds (which presumably buy only ugly things).

I have absorbed these ideas well; they are part of the generation I grew up in. (How many movies and TV shows were made in the nineties that mocked materialism and carried the underlying message that consumerism is the antithesis of a life well lived?) So here I am, feeling guilty about buying a clock I don't need simply because I like the design.

I have mulled over my motivations for buying the clock several times, and must say that I disagree with all of the above theorists. I bought the clock because I like the look of it, and wouldn't mind looking at it every morning. I did not buy it because of a desire to control my uncontrolled life. I did not buy it because I quest after perfection in life. I did not buy it because I am ignorant of non-tangible and more lasting pleasures (yes, I love philosophy as much as I did pre-clock). I bought it...because I like the look of it, and am naturally drawn towards things I like.

And, honestly, I can't see what is wrong with this. Of course, I would be able to spot the flaw if I were one to buy in excess and rely EXCLUSIVELY on aesthetically pleasing procurements to make me feel good. Were I frivolous in spending, the flaw would be a pragmatic one of spending more than I could afford. Were I nothing but a stuff-worshipper, one could question my psychological well-being on good grounds (in the same way one would question a person obsessed with plastic surgery as their only source of self-esteem). But I am neither of these. Try as I might, I can't see why my desire for nice items is wrong if not indulged in excess.

Virginia Postrel, in fact, has written a very interesting book on the subject, called "The Substance of Style." And any reader thinking that I am making much ado about nothing would do well to read her book, as she demonstrates how prominent and present are current antipathies towards people's desire for nice things.

In this book, Postrel makes an interesting but very, very simple argument: that aesthetic value is a legitimate value, and a thing's form is a legitimate part of its function. Why this is so remains a mystery, but that it is so seems almost undeniable. Our desire to look at nice things can be seen everywhere, from the world of marketing to the very real and important value of "first impressions." (In an ultimate irony, even the anti-consumerist author of "The Middle Mind" has to rely on something other than an all-white book cover to sell his books. And I would even venture that many anti-consumerism folks base more of their purchases on physical appearance than they realize.)

Is such an eye for art endemic to our human nature - a part of our evolutionary past? Denis Dutton has written an interesting case for an affirmative answer in his book "The Art Instinct." Putting value on things' appearance is a huge part of our history, especially since the human species relies primarily on visual and auditory information to make judgments. Once we see that early humans with the strongest connection between these senses and the ability to make good, quick judgments had a survival advantage over those without it, it is but a short leap to the idea that, as an evolutionary byproduct, we acquired a love for things that look and sound good. (This, of course, is just as true if "good" and "bad" turn out to be subjectively defined terms. It is still undeniable that we make judgments about how a thing looks and sounds.)

Embarrassingly enough, all of this is to say that I have no IDEA what made me love this absolutely unnecessary clock enough to buy it (rather than save the money). I know that I like the way it looks, and I like the idea of being able to see this nice-looking clock every day. But that is the best I can do. To see this very natural reaction as mindless or shallow is as absurd as to see my desire to wear cologne rather than smell of body odor as equally shallow.

In my opinion, theorists who rail against the intrinsic pleasure of nice things are, in this sense, arguing against the way human beings are. They are also making a mistake very common amongst intellectuals - the judgment that since they like intellectualism so much, anyone who doesn't must be shallow, much in the same way some jazz musicians wrongly judge those who don't like jazz as shallow and unintellectual.

So, I guess the reason I bought the clock has to do with the very inexplicable (but somewhat less than sinister) drive to see nice things. My buying the clock did not, in all likelihood, have to do with an eschewal of "deeper" pleasures or a quest for controllability, but rather, a benign desire to see and keep seeing something that is attractive aesthetically. I hope the reader is not disappointed that I am ending this essay on a tautology (I am attracted to attractive things), but it is the best I can do. As much as some might hate to think it, I - and we all - are attracted to attractive things.

It's Like Dobzhansky Said: A review of Coyne's "Why Evolution Is True"

I recently finished reading Jerry Coyne's book "Why Evolution is True." It is a book that lays out, very patiently, the various lines of evidence for the theory of evolution, in case some people missed it the first 100 times.

As a science educator (I co-teach two biology classes), it is frustrating that this book seems to need writing again and again. Right now, my classes are in the middle of the evolution unit, and it is meeting with a bit more resistance this year than last. I have had to answer several student objections to the idea that humans can trace their lineage back to sea-dwelling creatures (when I pointed out the vestigial webbing between our fingers). I have also heard several disparaging comments from students about evolution (like the student who insisted that a theory is a story about something for which there is no real evidence; it made for an interesting class discussion). And I have been advised by a supervisor that I should not even touch the Scopes and Dover trials, which I still think could serve as a strong motivator.

I wanted to read this book primarily to look for interesting ways and illustrations I can use to explain how, and how we know, evolution occurs in nature. Below is my review of this highly recommended book:

Nothing in biology makes sense except in light of evolution. This is the main theme running through this book. What Coyne's book is not, is a directed rebuttal of ID creationism. What Coyne's book IS, is a very well- and clearly-written survey of the various evidences for evolution. Coyne's point is similar to Dobzhansky's: all of this varied evidence can only be made sense of via the model of evolution. The fossil record, genetic evidence, etc., would simply be nonsensical under a theory like ID (unless the designer designed everything to look like it had evolved).

As a science teacher, I wish this were a book I could give both to my kids and to other science educators. Coyne gives very engaging and crystalline summaries of concepts like speciation, genetic drift, the significance of vestigial structures, and the falsifiability of evolutionary theory. For my fellow science educators, this book provides some great ideas on how to explain these and similar concepts, and also provides fascinating examples of evolution in action (I had no idea, for instance, that several sea-dwelling creatures have vestigial and all-but-functionless eyes!)

Two things particularly struck me about this book. First, one of the most common ID objections is that we have never directly observed macroevolution/speciation in action. While Coyne does not directly correct this misimpression - any doubters, look it up - he handles his reply well (though it will doubtless still disappoint ID supporters). Coyne rightly notes that macroevolution generally takes place over many, many thousands of years, and that its glacial pace makes it appear as if it is not taking place. (Long and short: the best way to see the slow process of macroevolution at work is via the fossil record, and we have done that in spades.) The only reason I bring this up is that ID supporters consider this a very "live" and concerted objection, while Coyne's response, by contrast, is very subtle and indirect. This indirect response could be seen as a weakness by evolution's handful of critics.

Secondly, Coyne goes through all of the lines of evidence for evolution in some detail EXCEPT the genetic evidence. As many have noted, one could very well throw out all the fossil evidence and still have enough evidence, via genetics, of evolution. The chapters on the paleontological, embryological, zoological, and geographic evidences for evolution are very powerful. But I really missed, and hoped for, a good chapter on what many feel to be the strongest evidence for evolution: the genetic code. (Good books have been written entirely on this evidence, including Daniel J. Fairbanks's "Relics of Eden" and Sean Carroll's "Making of the Fittest." Both are highly recommended.)

Despite these points, this book is exactly what the lay public needs. As statistics confirm that the general populace lacks a good education in and understanding of how evolution works (and why evolution is a sound theory), Jerry Coyne has written a patient, clear, and interesting book marshalling the various evidences for evolution (and, conversely, against ID). As mentioned, Coyne's book is not devoted to direct confrontation with ID "theorists" like Behe and Dembski. (See Miller's "Only a Theory" and Pennock's "Tower of Babel," just to mention two of hundreds.) Rather, Coyne does a great job arguing the positive case that, as Dobzhansky said, nothing in biology makes sense except in light of evolution. If only everyone would read this.

Friday, March 27, 2009

Doing All We Can to No Avail

It is crunch time at _____ ________ High. The third quarter is almost over, and everyone (staff, that is) is feeling the squeeze. I had forgotten how much the end of the third quarter bothered me last year, but now it is all coming back.

What makes the third quarter's end so disheartening for me - a special educator - is that this is the time when students who have slacked for three quarters are in danger of officially failing for the year - receiving "E's" for three of four quarters. Of course, they have not officially failed yet, so the question I hear all day every day is: what can we do to make sure they pass?

This question would not bother me - does not bother me - when it is about a student who is trying hard but coming up short. I will help those students as much as possible. The problem is that the large majority of those we are trying to save from failing the third quarter have shown little to no effort. It is hard for me to muster the motivation to help students pass when I truly feel they do not deserve to. It is one of the most heartbreaking parts of my job.

A colleague of mine was the first to fall victim. He has been asked to "assist" a general educator in a class where many of the seniors are in danger of failing (a class they need to graduate). While looking at the students' "numbers," my colleague noticed that almost all of these students have classwork and homework grades in the 0-20% range and virtually nonexistent test/quiz grades. Long and short, these students are not failing because they try and come up short; if that were it, they would have at least decent classwork and homework grades. Rather, they are failing because they do not take the class, or school, seriously. How would you feel if you were charged with helping kids who do not help themselves and "finding a way" to turn their E into a D? Probably exactly like my colleague feels.

I have a Study Skills class, where students can work and get help on assignments for other classes. Recently, two students were placed into that class because they are in danger of failing certain classes they cannot afford to fail. Both were put in my class last week; I have yet to see them. They have been cutting class. Perhaps they do not want help? Perhaps they are not serious enough to think help warranted? Nonsense, say my supervisors! Find a way!

In all honesty, I suspect this is a main reason many teachers quit. Those who were once optimists have their hopes dashed on a daily basis, and those who were pessimists become hardened pessimists. I am well aware of the sayings, uttered by plenty of busybody mentors, that we need to "focus on the good ones" and "do only what we can." We "plant seeds" rather than grow trees. I cannot accept that. Does that mean I don't care enough, or that I care too much?

At some point, we as a society have to accept that others cannot be held responsible for one's own success or failure. Yes, teachers CAN inspire unmotivated students to learn, but making such "cans" a job requirement (we MUST inspire even the most uninspired) gets us away from teaching and makes us cheerleading babysitters who occasionally instruct. At some point, we need to accept that those who do not do enough to pass will fail, and that the worst thing we can do for those students is to send the message that we will catch them when they fall. Sometimes, falling is the only way to learn how important it is to walk.

Monday, March 23, 2009

Is "Meritocracy" a Dirty Word?

I have a bold confession - one that is most certain to be "politically incorrect." No...I am not a racist, sexist, or homophobe. No...I am not a nazi or a fascist. But it will be hard to convince some people that I am not any of these things because of what I AM: a meritocrat. I believe very firmly in the idea of meritocracy - that deserts should be doled out by merit before any other criterion - and because of this, some will think that I must be a racist, a sexist, and a fascist.

Why do I make this bizarre and, to some, grisly confession? Because I have recently been noticing the word "meritocracy" floating around, and when I see it, it is always used pejoratively. When looking through education books on amazon.com, for instance, I see books like "The Big Test: The Secret History of American Meritocracy," which professes to show that "the current crises in American education have deep roots" in the elitist and racist idea of meritocracy.

Another pejorative reference to meritocracy can be seen here, in an online summary of an article by political scientist James Flynn, who refers to meritocracy as a "materialist-elitist value," and whose article title juxtaposes meritocracy with "true justice."

Most surface level discussions of meritocracy do exactly what Flynn does: suggest that a meritocrat is an elitist. Of course, when the term "meritocracy" is defined, meritocrats, like myself, turn out to be quite different from elitists. Here are their definitions:

elitism: practice of or belief in rule by an elite.

meritocracy: leadership by able and talented persons.

When one confuses belief in meritocracy with belief in elitism - as often happens - one makes a categorical mistake. Belief in meritocracy could be seen as a kind of elitism, but there are many kinds of elitism that are not meritocracy. Meritocrats do not, for instance, believe in the value of elites by birth order, financial privilege, or any other criterion not directly related to ability at the relevant task(s). The connotation of elitism is that "elites" should be praised BECAUSE OF their "elite" status (by whatever criterion that is determined). For the meritocrat, the only relevant factor in one's praiseworthiness is one's ability - one's merit in performing the task being measured.

Why do I oppose forms of ranking not based on merit? Why not support the more egalitarian ideal of the day: that desert be meted out based on need rather than ability, or meted out equally to all regardless of ability?

There are several reasons I oppose this idea. First, I believe strongly that ranking by merit provides an incentive for people to achieve more than they might if they were guaranteed a certain level of "reward" regardless of merit. On the flipside, nations/schools/business structures that do not work on the merit system (promoting based on time served rather than achievement) tend to encourage employees to focus less on output and more on "putting in your time." (To see this in action, look at the relative lethargy of public school employees versus private school employees.)

The second reason I oppose non-merit-based programs, like affirmative action and social promotion in schools, is that their attempts at fairness are not fair at all, for two reasons. First, when reward is meted out not by merit but by other factors, we are inadvertently saying that the ability to do x is irrelevant to the reward one should receive for x (that one's admission to law school is not related to one's ability to demonstrate legal aptitude). Second, non-merit-based approaches inadvertently tend to put people "in over their heads," by putting the non-law-school-ready applicant into law school without demanding a certain level of attainment (leaving them unequipped). Those who are "helped" in the short term often end up harmed in the long term by being thrust into a situation requiring x level of mastery without being equipped with x level of mastery.

The last reason I oppose non-merit-based ranking is purely pragmatic: even if there are defects in merit-based systems (there are), I cannot think of a fairer alternative. If one has to mete out deserts somehow, it seems best that they are meted out in proportion to ability and effort. As the old dictum says, "you get out what you put in." The alternative - the "null hypothesis," if you'd like - is: "what you put in is irrelevant to what you get out." To me, that is unacceptable.

To illustrate, let's look at non-merit-based hiring practices like affirmative action. It is often argued that purely merit-based hiring ignores such factors as socio-economic and racial differences. It is probably true that those coming from middle- or upper-class backgrounds are more "hirable" to many than those from lower-class backgrounds, as they have been afforded more opportunity in education and life. Those in the lower class probably lacked the educational opportunities of those "above" them in socio-economic station. Thus, the argument goes, meritocracy isn't fair because it "stacks the deck" against those who are less well off.

I concede the critics' point that a pure meritocracy does not often exist. Decisions are often made based on factors unrelated to merit (personal biases, family connections, etc.). But this DOES NOT mean that I concede the conclusion that meritocracy is, therefore, unethical or wrong. Meritocracy - even if it exists only imperfectly - may still be more moral than its alternatives.

And this is what I hold to be the case. Notions like affirmative action allow some to access certain things by meeting lower standards than others, who must reach higher standards. Thus, black students can attend universities with a 3.0 GPA where other students might need a 3.25 or 3.5. And if we wanted to go further, we could simply apply a fully egalitarian impulse to college admittance, admitting students regardless of GPA or any other academic standard.

Are these more fair than meritocracy simply because some admissions boards may make some decisions based on family attendance at the school ("Your father went to Yale, therefore...") or race (which I highly doubt in this day and age)? Not unless you assume that merit is irrelevant, or a minor rather than major criterion for deciding how to mete out rewards. And if merit is irrelevant - or not as relevant as some other standard - then what standard should we use in its place? Race (as affirmative action supporters suggest)? Family lineage (as the elitists suggest)?

To me, it all comes down to the maxim quoted before: you get out what you put in. Instead of getting out what your skin color dictates, what your sex dictates, what you need (regardless of whether you put anything in), or what befits your family heritage, I propose a meritocracy. We get out what we put in. Nothing else makes much sense.

Sunday, March 22, 2009

Why Do Philosophers Have Such an Antipathy Towards Consumption?

On a recent episode of the television show, No Dogs or Philosophers Allowed, the topic under discussion was "the ethics of consumption." One guest, drawing much from Plato and Aristotle, brought up an interesting point: philosophers have largely "spoken with one voice" in their antipathy towards consumption of bodily pleasures.

While this statement's absolutism was challenged by another guest, the point is largely correct. While philosophers differ in degree of antipathy, they have largely "spoken with one voice" against consumption. Even the most "happiness focused" philosophers, like JS Mill and Epicurus, insisted that true happiness is something untied to consumption, and that consumption offers only a transient, crass, and lesser version of happiness.

My problem with this idea - that consumption is something very unrelated to true happiness - is that it does not jibe with the way real people live. While a philosopher may be able to convince another philosopher that true happiness cannot be enhanced or affected by consuming nice things, I don't think that the philosopher would be able to convince P. Diddy, any member of "The Real Housewives of Orange County," or the myriad of real people who aspire to make more money so that their standard of living might improve, that the pursuit of nice things is a mirage of real happiness.

The question is: why are philosophers so unanimous in their antipathy towards consumption and acquisition of physical pleasures and their suggestion that real happiness comes from intellectual and "spiritual" pleasures such as good conversation, an active mind, and the like?

My first thought, as I watched the program (where one guest very explicitly suggested this dictum, and another agreed), is that perhaps this is an example of philosophers projecting their predilections onto others. Philosophers, after all, are happiest when engaged in intellectual pursuits and are not as concerned with the world of concretes. As such, it would make sense for them to beatify the "life of the mind" as the road to real happiness, and the "life of bodily pleasures" as a distracting detour. (I've always thought that this was why John Stuart Mill was so insistent that, in ranking pleasures, bookish pleasures of the type he valued be placed above all others; his mistake was to assume that the pleasures he found most desirable were THE most desirable.)

Another thought is that philosophers may devalue material wealth because, subconsciously, they are justifying the fact that they are generally forced to get along with less. Plato may have devalued material pleasures and extolled intellectual pleasures because this was the way he was forced to live. It is quite plausible that, as philosophers don't tend to make as much as merchants (but FEEL much more important than merchants), their way of "justifying" this (to themselves and others) was to philosophically devalue materialism and extol intellectualism as the true ideal.

I am certainly not the first to suspect this. Philosopher Robert Nozick - one of the few to openly praise capitalism - wrote an essay offering answers to the question, "Why Do Intellectuals Oppose Capitalism?"

In the essay, he gave two main answers: (1) intellectuals often feel that they are slighted by capitalism, as they tend to make less, but feel they are worth more, than businesspeople; (2) intellectuals were the group generally told in school that they were "the best" and were often raised with a sense that they should be the prized group in society. (Therefore, any system that does not value them more than merchants is inherently unjust.)

Intellectuals now expect to be the most highly valued people in a society, those with the most prestige and power, those with the greatest rewards. Intellectuals feel entitled to this. But, by and large, a capitalist society does not honor its intellectuals.

The last possible reason I want to offer for why philosophers may devalue consumption more than most of the "real world" is that, as they deal in ideas and abstractions rather than concretes, there is a tendency to believe (implicitly if not explicitly) that ideas are more real, lasting, and pure than the "real" world of concretes. This can be seen in the scorn that has often been thrown at the philosophy of pragmatism, which always attempted to put ideas in the service of practice, rather than the other way around. As pragmatism cared more for what worked in practice than for ideational soundness (efficacy over consistency or philosophical beauty), it has often been seen, pejoratively, as the philosophy of businessmen and "the bottom line."

This may explain why people who are not intellectuals or philosophers do not tend to take as crass an attitude towards material wealth as philosophers and intellectuals do. To the former group, material pleasure is a very real thing, and ideas do not have quite the status placed on them by philosophers (who employ them constantly). Most regular people are not as happy mulling over Proust and Wittgenstein as they are watching a Blu-Ray on a big HD TV. (Philosophers, of course, cannot relate to this. It doesn't hold true for them.)

I am not one to say that material pleasure has any 1:1 relationship with happiness, and I have seen enough empirical studies suggesting that no such relationship exists to know better. But I simply think that this is an area where the rift between philosophy and the real world is very great indeed. It is also an area where philosophers easily fall into the mistake of assuming that what it is like to be them is what it is like to be everyone else. (Of course material consumption is inferior to intellectual consumption! Who would suggest otherwise?!)

Until philosophers can prove to non-philosophers (the decided majority) the intrinsic value of not pursuing nice things and good sensual experiences (and rather living the life of an intellectually satisfied ascetic) - which I don't think can be done with a straight face - then I will regard it as a shame to hear that philosophers "speak with one voice" on the undesirability of consumption. Unfortunately, that "one voice" will be one only heard by philosophers.

Wednesday, March 18, 2009

How Can Smart People Do Dumb Things? (Dysrationalia?)

Below is a review I recently wrote of a book called "What Intelligence Tests Miss." The author's argument is that while IQ is a valid concept, basing estimations of intelligence on it alone overlooks an equally valid concept of rationality. As we all know, "book smarts" and "street smarts" do not seem positively correlated (one can have a high amount of one with a low amount of the other).

As an educator, this is frustrating, because I often feel (a) that the idea of IQ is unjustly inflated and does not give the whole story; and (b) that alternatives like Gardner's "multiple intelligence theory" are more politically and socially appealing than they are scientifically valid.

Stanovich, I think, offers an interesting "middle view." I would love to see his ideas fleshed out a little more.


We are all familiar with the phenomenon of those with high IQs doing things that seem stupid. This leads to the distinction between "book smarts" and "street smarts," but strangely enough, we call BOTH of these things intelligence. We recognize both the absent-minded professor and the low-IQ entrepreneur as "intelligent." How, though, can the term "intelligence" apply to two seemingly non-correlated things (being book-smart and street-smart)?

Psychologist Keith Stanovich has an interesting idea: maybe "intelligence tests" measure intelligence (as traditionally defined) but miss a wholly different faculty of rationality. To Stanovich, the difference between intelligence and rationality is the difference between the "algorithmic mind" and the "reflective mind" - between the ability to employ algorithms and the ability to think about and CRITICALLY employ algorithms. (I might say that intelligence is the ability to diagram or write a sentence, while rationality is the ability to formulate arguments and write a persuasive essay.)

The first half of Stanovich's book is dedicated to showing that while IQ tests are a valid measure of a faculty of general intelligence (he does not deny that IQ tests measure a very real thing), they simply do not measure all that we understand to be good thinking.

Stanovich, though, is also a critic of those like Gardner and Sternberg who want to add to the number of "intelligences" (musical intelligence, naturalistic intelligence, creative intelligence). Such additions, he says, inadvertently inflate the term "intelligence" into a be-all-end-all that it is not (by implying that any good mental work must be called an "intelligence" rather than a "talent," "skill," or "proclivity"). Instead, Stanovich makes the point that intelligence is simply one component of good thinking. The other, often overlooked, ingredient is rationality (and he alludes to several studies showing that these two faculties are not very positively correlated; one can have high amounts of one and low amounts of the other).

What I thought and hoped Stanovich would do next - what he did not do - is offer a sense of how we can test for RQ (rationality quotient). While the first half makes the case very well that rationality should be valued and tested every bit as much as intelligence, he does not follow it up by showing how such a thing might be done.

Instead, Stanovich devotes the second half of this book largely to cataloguing and demonstrating "thinking errors" that distinguish rational from irrational thought. For example, humans are "cognitive misers" by nature, who often make decisions based on first judgments and quick (rather than thorough) analysis (a likely evolutionary strategy, as ancestors who were quick and somewhat accurate probably did better than those who were slow and very accurate). Also, humans often put more emphasis on verification than falsification, and fail to consider alternative hypotheses in problems, often preferring to go with the most obvious answer.

All of these, while interesting, have been better and more thoroughly documented in other books by decision theorists and psychologists. All Stanovich needed to do was refer us to these, at most devoting a chapter or two to examples. There is more important work for Stanovich to do than rehash what we can just as soon read elsewhere. Instead, I think he should have begun outlining ideas on how to test for rationality. What would such tests look like? How would such tests affect our educational system (focused, as it is, on IQ)? What would test questions even look like, and how could they be adjusted by age/grade level? Are there pitfalls?

None of these questions were answered, and Stanovich's argument is the worse for it. Stanovich himself notes that one big reason for IQ's predominance in the psychometric world is that it is measurable (which is a big strike against many of Gardner's "multiple intelligences"). Ironically, Stanovich's failure to suggest ways to measure RQ will likely have the same effect for his idea as it had for Gardner's.

It is a shame, though. As an educator concerned both with the undeserved predominance of IQ and with the failure of concepts such as Gardner's "multiple intelligences" to offer a serious challenge, I quite like Stanovich's germinal idea. As we all know that rationality is a key component of good thinking, and it is hard to think that it is positively correlated with IQ, it would be interesting to find a way to measure RQ as a valid supplement to IQ. It is simply too bad this book did not explore the practical questions involved with his tantalizing suggestion.

Sunday, March 15, 2009

Does "Socially Constructed" Equal "Bad"?

Like so many other good ideas that went too far, our insatiable urge to "deconstruct" things began in the 1960s. With the help of postmodernist philosophy, the Western world began its infatuation with pointing out social constructions and deconstructing them - allegedly to show the truth behind the fictions.

But we seldom take the time to ask ourselves a simple but necessary question: just because something is socially constructed, does that mean it is wrong or bad? All it means for something to be socially constructed - as the label suggests - is that the thing is artificially constructed by individuals rather than naturally existing. While baseballs, bats, and gloves exist, the rules of baseball are socially constructed. While dinner plates and teacups exist, the etiquette of the tea party is socially constructed. Does that mean that the former, "really" existing, things are any more beneficial than the latter? Does the fact that the rules of baseball and tea parties are socially constructed automatically mean that they are arbitrary? (Unfortunately, postmodernism taught us not only how to spot social constructions, but to automatically associate them with arbitrariness.)

Consider the following quote from an interview in the book Generation Me:

"My generation is much more independent. I pride myself on being a free and independent thinker. My wish is to break down the walls that humans have socially constructed." (Generation Me, Kindle edition, loc. 457)

That which is socially constructed is synonymous with a wall that needs to be "broken down," while that which is not socially constructed - in this case, that which is individually constructed - is worn like a badge connoting "free[dom] and independence."

Just as one can argue that blind acceptance of social constructions is a symptom of not thinking, so, I think, is the above reaction that "social constructions" should be avoided. The question that both sides should be asking is whether the social construction has merit - whether it exists for a good reason or whether it is, in fact, arbitrary.

Many socially constructed rules I can think of are not only completely non-arbitrary, but exist for a definite and good purpose. One example that always comes up is (what used to be) the etiquette of walking in crowds. Like the rules of traffic, people used to adhere to the rule that one walks on the right. This rule, while socially constructed, serves (actually, served) the definite purpose of reducing the bumping and careening that occurs when walking in public. (Nothing maddens me more than walking in a mall where no one follows this rule, so that one must watch ahead every second to avoid careening into other walkers.)

The use of money (and especially credit), the idea of tipping wait staff, and the courtesy of sending thank you notes to those who do something nice for you are all social constructions. But I don't see how anyone could argue that these things are arbitrary.

To me, the measure of whether to follow a social construction or not is to evaluate whether the rule is one that, if followed by everyone, does some good and, if ignored by everyone, would make things less good. (Certainly this is why the rule of walking on the right side is worthy of being followed, because when no one follows it, walking in crowds becomes more haphazard.)

Inevitably, someone reading this will wonder if its author is a conservative prig who bemoans the good old days when we followed rules for the sake of following rules. I assure this reader that I am nothing of the kind, but only one who believes that there is something to be said for rules when those rules can be shown (by the above method) to be well-advised and practical.

In fact, my fiancee and I are getting married in about six weeks and have decided to dispense with many of the arbitrary but oft-followed socially constructed rules. Here are a few rules that we have decided not to follow (much to the dismay of the more conservative ones in our families):

(a) we are going to hyphenate both of our names. (We will both be "Currie-Knight." We decided that doing it any other way does an injustice to the 50/50 nature of our fusing of identities.)

(b) my parents are paying for the wedding (her parents have opted to give us a down-payment on a future house)

(c) I will not be wearing a tuxedo, but a very nice suit

(d) We are not going on a honeymoon (as getting the time off of work is tedious, and we have already taken big vacations together. We'd rather use the money towards something more useful.)

All of the rules broken above (taking the man's last name, the bride's parents paying for the wedding, wearing a tuxedo, the honeymoon) are examples of social constructions that have no utility and are etiquette for the sake of etiquette. These are the types of social constructions that I think many have in mind when they express disdain for social constructions. Traffic laws, an economy built on credit, and the US Constitution are examples of social constructions that nobody seriously argues are arbitrary.

So, why do we have so much disdain for anything smacking of "social construction"? Like so many other good ideas, I think nonconformity and skepticism of tradition were simply pushed too far. It is certainly good to be independent, but being so to the point of abandoning the mores of the job interview can be positively deleterious. It is okay to question authority, but doing so to the point of developing a knee-jerk reaction to anything that smacks of instruction or the imparting of wisdom can bite one in the ass. It is okay to rebel against rules of etiquette when such rules can be smacked down as arbitrary, but we must be mindful that many socially constructed things were socially constructed for a reason: they serve a valid social purpose of creating order.

So next time you get bumped into while strolling the aisles of the mall, ask yourself: are all social constructions really arbitrary and wrong?

Thursday, March 12, 2009

The Impossibility of Objective Morality

Every once in a while, I force myself to take seriously the idea that morality, and moral disputes, are a matter of objective fact. I confess that I have never been able to find a good argument for this viewpoint, and while I know that many have been convinced by it, I cannot see how.

The most convincing argument for me against the possibility that morality is a matter of objective fact is that the only "thing" we have to measure the "correctness" of moral statements by is our moral sentiments. Further, there is simply no good reason to suspect, and every reason to reject, the idea that these are objective in nature. In other words, we have no objective moral blueprint to hold our moral judgments up to, to help us know whether they are correct. We only have our individual sentiments on the matter, which seem far from objective.

To put the matter concretely, when two or more people disagree on a scientific proposition or conjecture, experiments can be performed (and it is quite hard to deny that the world these experiments are performed in is objective). Competing theories will stand or fall based on their results in experiments against an objective world, results that are independent of anyone's wishes or desires.

Contrast that with disagreement over moral matters. When two people disagree on moral matters - one says x is right and the other that x is wrong - there is nothing resembling the objective world - a world of moral fact - that we can hold the competing judgments up to. In order to know whether x is REALLY right or wrong, we would have to have access to the world of moral facts, so that we could find out which proclamation that world endorses. So far as we can tell, there is no such world of moral facts (and when people say there is, they are generally getting there by intuition, begging the question of how objective those intuitions really are).

Some disagree with this; they suggest that there IS a factual resolution to moral disputes: when I say x is wrong and you say x is right, some say that we can experiment. Per Hume's is/ought dichotomy, it is a different thing to say "x causes suffering" than it is to say "x is wrong." That x causes suffering is an observable fact. That x is wrong expresses more than that x causes suffering; it goes further by inserting the value judgment that suffering is wrong. As good as that may sound, there is no objective way to prove the latter as there is the former. This is precisely because "x causes suffering" can be validated solely by appealing to the objective world of descriptive fact. That x is wrong is unprovable because it goes beyond the brute world of descriptive fact.

I confess that I cannot see how to get around this. The arguments I've heard seem to mistake the speaker's strong intuitional moral sense for an objective world of moral facts. And to me, there seems no good reason to suppose that these intuitions are anything more than psychological preferences (which we mistake for more solely because they are so strong that they SEEM to be obvious to everyone).

I have tried and tried to "get into the belief" and understand how one can believe in an objective morality, but there seem to be too many problems to make such a belief worthwhile. How can moral disagreements be resolved similarly to the objective resolutions in science? Where are the incontestable proofs of some moral propositions over others? How can "is" (recognition of fact) lead inexorably to "ought" (a prescriptive obligation)?

The Possibility of Overdoing It: The Case Against Too Much Inclusion

One of the arguments made against the rehabilitative effect of prisons is that prison teaches criminals to be better criminals by putting them into a situation where they can freely network and disseminate ideas. Long and short: it is often argued that prisons don't rehabilitate because they get criminals used to being around high concentrations of criminals.

The same can sometimes - certainly not always - be said of special education. When one puts a student with behavioral or social problems either in a self-contained class of nothing but special education students, or in an inclusion class with many special education students, that child will often become worse because he now has others to play off of.

In particular, my high school has a "social skills" class. At first, we all thought this was a great idea, but it has since become jokingly referred to as the "antisocial skills class." Two of the students on my case-load (who I think are decent kids around the right people) have become worse because they have begun hanging out with some of the worse kids in the class and have developed those kids' attitudes. One, whom I was helping to quit smoking, has now been overheard talking about smoking cigarettes and weed after school.

Another example is a student on my caseload who is very smart but has social problems interacting with peers. This student is quite mature when dealing with "good" students and quickly changes into an "in your face" student when dealing with "bad" kids. In Spanish, the student does fine (and even avoids conflict), but in "special ed classes" like Social Skills (or a Government class with many other students with learning and other disabilities), this student becomes much more difficult and obnoxious, even instigating verbal altercations.

The problem is that, as much as we special educators try to deny or excuse it away, "inclusion" classes - classes with higher numbers of special education students - are generally the classes with the highest proportion of behavior problems, underachievement, and distraction. It makes less and less sense to put learning disabled students in the classes where less learning occurs - "inclusion" classes. And there seems to be no wisdom at all in putting an ADHD student in the class with the most non-academic distraction.

So, the dilemma is that while we all want to see heterogeneous groupings of students, we all know that the higher the percentage of special ed students in a particular class, the less productive that class is likely to be. And while we want to "spread out" the special education students so that we can avoid high concentrations in relatively few classes, that would bring up a "manpower" problem, because special educators cannot float around to too many classes (which is why putting all the special education students in relatively few classes makes more "manpower" sense).

I suppose the decision of whether to put a learning disabled student in a "regular" or an "inclusion" class should be made on an individual basis. The question I have begun asking myself is whether the individualization and assistance that an "inclusion" class (usually with both a general and a special educator) could provide is outweighed by the student's need for the distraction-reduced, more focused environment that a "regular" class could provide.

As a result, I have placed more than a few students in "regular" classes who, for all intents and purposes, should have been in inclusion classes, simply because I feared that the student would be undercut by the inclusion environment. If the goal of special education at the high school level is to make students more independent, I think that, done cautiously, this will prove a good decision.

Tuesday, March 10, 2009

An Interesting Look Back

It has been quite a journey, but it has recently paid off. It is official: I am going to the University of Delaware next year to start my PhD study in Education. It is also official that I will be fully funded by the University.

All of this moved me to look back at my University of Delaware application, its "essay questions," and my responses. I am going to reprint here the most interesting of the questions and answers, which gets into what I have long realized is a strange background.

Are there special circumstances related to your academic record that you feel we should know about? Have you ever been convicted of a crime? If the answer is yes, explain the circumstances, give the dates of the offense(s) and discuss what you learned from the circumstance.

While I have never been convicted of a crime, I can say that, at one time, graduating high school was far from a certainty. I remember well the meeting in Liberty High School’s guidance office where I was informed that, without serious concerted effort, I would be repeating the 11th grade. At the time, I was certainly not academically motivated and even toyed with dropping out of school.
Unfortunately, this apathetic attitude towards academics followed me through my undergraduate education, and this is why I find myself needing to apologize for my lackluster undergraduate GPA, and for my “extended stay” in undergraduate study.
I did graduate high school, for two reasons: my parents' desire to see me do so, and my thoughts of attending the esteemed Berklee College of Music, where I could do what I loved: play the drums and write songs. I made it into Berklee on academic probation, and while I did well in most of my music classes, I merely tolerated the academic classes as a dry matter of course. I cared so little for academics that when the registrar's office informed me in my fourth year that I would finish three (academic) credits shy of my degree, I decided that I would move on to Nashville anyhow. Songwriters don't need degrees, and publishers don't ask for academic resumes.

It took a full year of living as a struggling songwriter in Nashville for me to realize not only that everyone around me knew more than me, but also that this fact really bothered me. I began reading more, in an attempt to gain on my own the education that I should have already gotten. Once I started, I just kept going, reading in philosophy, science, literature, etc. I decided to get the remaining three credits to complete my Bachelor's degree at a local community college so that I might try for a Master's. Much to my surprise, I gained admission to the very selective University of Richmond and - as the saying goes - never looked back. The once academic-loathing kid somehow became the student who could never read enough.

Explaining my academic history is uncomfortable for a few reasons. First, it forces me to be acutely aware of early shortcomings, if not outright failures. Second, it makes me wish that I had the type of standard academic history that does not take four paragraphs to explain or apologize for. Behind all that, though, I must admit that I am strangely proud of my non-traditional history: I was the prime mover of my education, rather than a university. This is how it has been ever since. While I have attained one Master's degree (and am set to attain a second), it is fair to say that I do much more independent reading than my courses require, and have an enjoyment of learning that a more traditional academic history may not have instilled in me. It is this enjoyment of learning that I want to bring to PhD study and a career in academia.

In many ways, I see myself in some of my students: unmotivated, but possibly in possession of latent skills or motivations that might manifest later. This is why it is so frustrating to teach in the public schools. I think my experience has taught me, more than anything, that one cannot motivate the unwilling. The reason I got through high school was that my parents forced me. No high school teacher ever got me to love learning. That only happened when I began learning on my own.

So, in many ways, I see my role as a public school educator as helping kids get through with enough skill, discipline, and wherewithal to ensure that no opportunities are unduly closed to them. I would like to motivate, but I harbor no delusions that most students would be receptive to it. I wouldn't have been.

As Mortimer Adler said, the best schooling prepares one for future education. K-12, he says, is not education, but schooling. Living is the "education" part.

Capitalism and Temperance

Who is rich? He that rejoices in his Portion.

He that drinks fast, pays slow.

What is prudence in the conduct of every private family can scarce be folly in that of a great kingdom.

These quotes may seem as if they were written by great virtue ethicists like Aristotle or Plato. In reality, these three quotes, which beautifully illustrate the need for modesty, temperance, and prudence, were written by two of the greatest champions of capitalism: the first two are from Benjamin Franklin, and the third is from Adam Smith.

In recent discussions, the words "greed" and "capitalism" often get confused, and the former is seen to be the inevitable outcome of the latter. I can understand this connection; it seems quite obvious that as capitalism depends on the profit motive, great progress can only be achieved by great profit motive, which means "greed."

Truthfully, greed often is a big part of capitalism. The most successful entrepreneurs and businesses are those ambitious enough to desire domination of their market. Wal-Mart, Southwest Airlines, Microsoft, and other "high flyers" can only get there by, in some sense, being greedy - by wanting to control their markets and make as much profit as possible.

But capitalism also rewards the temperate and prudent. As Adam Smith rightly notes in the third quote above, prudence can "scarce be folly." While risk-takers are generally greedy (which is why they are willing to take risks), capitalism more frequently rewards those who spend within their means, grow steadily over time rather than meteorically, and have realistic ambitions.

Companies which acquire huge debt, grow too fast, or try to do too much often end up losing out just as fast as they rose (if they rise at all). Companies that focus on one or a few good products, look before they leap, and do not acquire substantial debt are often the "slow but steady" companies in the market.

As with producers, so with consumers. Those who are thrifty, live within their means, and do not overextend their finances are the ones who generally go unpenalized. Those who take too many chances, buy excessively on credit, and don't limit their expenditures to their revenue tend not to last long. The natural consequence of capitalism for this behavior is that sooner or later, their "luck," and finances, will run out.

So, far from encouraging greed, capitalism seems more to encourage temperance and prudence. While some of the more exuberant are rewarded from time to time, any good money manager will tell you that you cannot go wrong with prudence. Rather than encouraging great spending, capitalism punishes big spenders the most and moderate to small spenders the least. And a great benefit of capitalism is that it is a system - the only system - which allows one to choose for oneself what one wants to be: a spender or a saver, a big shot with a stressful and high-paying career or a "small guy" with a less stressful pace for lower pay, complexity or simplicity.

As I have mentioned, my fear is that the new Obama bail-out plan will immunize those who need to learn this lesson (provided by the natural consequences of capitalism) from learning it. Overspending and overextending can only lead to trouble; moderation and prudence seldom get penalized. What Obama is doing ensures the opposite: those who have been prudent will be forced to pay for those who have been extravagant.

This is not the lesson of capitalism. And I like capitalism more than Obama.

Monday, March 9, 2009

Greed, Not Capitalism!

What was it that got us into this recession, anyhow? Is it the banks' fault? Is it the consumers' fault? Is it the government's fault?

Yes, yes, and yes!

What got us into this recession is greed - not capitalism. We cannot blame 'the market' for the recession, as markets do not act on their own. People act within the rules of the market. Thus, people - bankers, consumers, creditors, debtors - not the market, got us into the predicament we are in. People bought houses they could not afford; banks promised loans they could not support to people who should not have gotten loans; people bought too many things on credit and let their finances get out of control; banks let them do this.

Greed! Not capitalism! These two things can go together, but they are not by any means synonyms.

Capitalism functions on the principle of a profit motive, and that may lead some to think that capitalism bases itself on greed. But one can have profit as one's motive without overextending oneself, just as consumers may be motivated by money and material pleasures while keeping these desires in check. The problem is NOT that banks and consumers were motivated by material gain, but that they let this cloud their judgment; they ceased thinking about the long term (can I afford to make this loan or buy this Hummer?) and acted without thinking.


My hope is that the era of frivolous spending is over. (That is a quote from our frivolously spending president, for which I no longer have the link.) What many - including Obama - don't realize is that recessions are the economy's natural way of making us take stock. When spending gets out of control and people are spending more than the economy can monetarily support, recessions slam on the brakes by penalizing irresponsibility; loans must be defaulted on, business gets slower, jobs get cut, and prices and spending go back to a nice even keel.

What worries me, though, is that our very spend-hungry president is working to ensure that the good times artificially keep rolling! He is working to make sure that loans get forgiven or bailed out, jobs get artificially reproduced, and the irresponsible are saved from consequences of irresponsibility.

This will not fix the problem (as any economist worth anything will tell you); it will only postpone the inevitable. If a recession would naturally occur because we are spending out of control, the best way to curb it is to make it so that spending cannot remain out of control. Our president and Congress, though, are redistributing wealth precisely so that it CAN remain out of control. Instead of letting those who have no business buying a four-bedroom house default on their mortgages, the president will just make everyone pay for the four-bedroom house. Instead of making sure that those who bought too much on credit must go into bankruptcy, he will make sure that we all pay for their things. Instead of ensuring that mismanaged banks are eliminated by being forced to close under economic pressure, we are all going to pay for them to mismanage their businesses.

My fear is that we will not have learned our hard-won lesson about greed's ability to tank an economy. My fear is that our leaders will protect the guilty (banks, greedy consumers) by punishing the innocent (those who don't need bailouts will pay for those who do). My fear is that, in the process, all of this recession business will be for naught, as no one will have learned anything at all.

Sunday, March 8, 2009

Interesting, if only in theory (Review of Adler's "Paideia Proposal")

Here is a book review I wrote of philosopher Mortimer Adler's "Paideia Proposal" - a conservative work arguing for education reform. Adler is somewhat of a critic of "progressive education" and advocates a more "liberal arts" type of K-12 education. For those unfamiliar, Adler was also a vocal advocate of educating with the "great books" of the Western canon.

I agree with some of Adler's ideas about restoring rigor and discipline to education, but I disagree with his very monolithic "one size fits all" approach, as well as his view that K-12 education should not utilize electives or vocational training. Even though I dislike many excesses of "progressive education," I see much value in a utilitarian approach to education that sees education for what W. James might call its "cash value" to students, rather than as an end in itself (which it can be to many, but will not likely be to all). In other words, Adler treats all students as if they are "bookish," managing to suggest that those who are not simply need to be molded to be so.


Mortimer Adler's "Paideia Proposal" ("paideia" means "education" in Greek) is a book which intends to offer a stern antidote to many "progressive" ideas in education. One might call Adler an educational conservative - an "essentialist" who believes that education is of value in itself (and should not be justified by its utilitarian value). Adler also believes in the value of a liberal arts education for all, the role of order and discipline in education, and the value of cultivating the intellect as the primary goal of K-12 education.

Adler's Paideia proposal "breaks" education into three types which students should receive in equal measure:

(a) knowledge acquisition: this is where direct teacher/student instruction goes on, and where the student learns to store and recall facts.

(b) development of intellectual skills: this is where the student "learns by doing," and practices the skill under the teacher's facilitation.

(c) increase in understanding and insight: this is where students learn to evaluate, analyze, synthesize, and create ideas from ideas. Students engage in teacher-led discussion and reflections while learning "higher order thinking" skills.

I agree with these goals, but disagree strongly with Adler's approach. A key criticism I have of Adler's writing is that, like many philosophers of education, he speaks of students as they exist in theory rather than in practice, and tends to see them as a big monolithic group (even while he says he doesn't).

Put differently and bluntly: if I had a child, I might be tempted to send him or her to a Paideia school, but I would be hesitant to suggest that every child should be forced into this model.

What makes the Paideia project unworkable in practice is Adler's insistence that "one size" of education "fits all." Adler does not believe in tracking of any kind, dismissing it as very undemocratic (by which he really means unegalitarian). He writes as if things like differences in intelligence (by the measure of IQ) do not exist. He repeats frequently the idea that "all children are educable," but turns it cleverly into "all children are capable of learning and absorbing the same stuff as all others." (He does bring this up as a possible criticism but dismisses the problem with high-sounding rhetoric, intimating that naysayers simply don't believe in equality.)

As a special educator, I think this idea of a "one size fits all" education is a pleasant-sounding disaster. As one of my colleagues put it, "It is not a God-given right to comprehend Algebra II," by which he means that some simply learn more slowly, and are more limited, than others. (I think Adler would realize his mistake if he put a child with Down syndrome, mental retardation, or autism into his Paideia school.) Adler's point that we should challenge all students is well taken, but he doesn't seem to take seriously the FACT that students differ not only in "learning style" but in innate ability. To subject each child - regardless of ability - to the same curriculum is as unfair as hasty and strict tracking.

The other disaster in Adler's proposal is the idea that all K-12 education should be non-specialized and non-vocational. Under Adler's proposal, electives are essentially abolished and, as he says, we should "eliminate all the non-essentials from the school day." If it doesn't have to do with cultivating the intellect, we don't want it.

This would not only make school a positively dreary place for kids to be (eliminating any classes that might appeal to those who are not budding philosophers), but it would also leave the non-college-bound out in the cold. Adler suggests several times that all vocational training should take place post-high-school, meaning that school would no longer prepare students for a vocation at all, and those who can't afford to put off work after high school to receive additional training would be ill-prepared to start a career.

Like many schemes philosophers devise for reforming education, the Paideia Proposal would make for some very interesting private schools. Like the Montessori method, this system might work for some, or even half, but certainly not for all. Many students - those who might go into blue-collar vocations - would likely do poorly in Paideia schools. Adler might suggest that I am being pessimistic and "undemocratic," but I would charge him with utopianism and...being a theoretician rather than a statistician.

As long as differences in ability exist (and the fact is unfortunate), the Paideia Proposal, by expecting students of different abilities to access the same curricula, runs the risk of being as unfair as those he charges with excessive differentiation.

Friday, March 6, 2009

Teaching Them to Be Students

Music teachers are not often philosophers. Yet, at a staff meeting yesterday, where the faculty talked about a new curriculum for "freshman seminar" - a high school course required of freshmen to teach them basic academic skills - a music teacher responded to a question posed to us: "What do you think all freshmen should be taught?" Her reply got to the core of what was on many of our minds: "We need to teach them how to be students before we can TEACH them anything." She commented that before she can teach students how to play the oboe, they must be taught how to learn, take and use guidance, attempt success, and not give up if they stumble. These are things that, sadly, many students do not come to high school knowing.

This is an often neglected and integral piece of the educative process: in order that students can learn, they must learn to be students first. In an age where we tell students many grand stories extolling the virtues of challenging authority - from the American founders to Galileo - we neglect to teach them the value of authority and the wisdom to know how and when to accept it.

We live, quite justly, in a classless system, where anyone from anywhere has the civic freedom to move up or down in the world based on their own efforts and a pinch of luck. Because of this, I think, we are inherently distrustful of words like 'authority' and 'wisdom,' and of the ideas that come with them. To accept instruction from those who know better than we do hints at the idea that some are "better" than others and that the learner must bow to the teacher.

But just as skepticism in matters of science, while sometimes good, can be taken to an extreme, so can the idea that authority should be rebelled against. In order to learn, students must at some point accept the fact that they do not know all they need to know and must become willing (even grudgingly) to receive information from those in a position of authority.

This is what I think the music teacher meant by saying that students must learn to be students before they learn anything. We must teach them the wisdom to know when to challenge authority and when to accept that authority may have something to teach them. They must learn when to have an ego and when to pack it away for the sake of their betterment. They must learn that while "doing their own thing" can be good, they must also do things that are not of their own design or choosing - and that BOTH things can benefit them. (One without the other becomes detrimental to well-being.)

These are some of the things students are not coming to us with. Sometimes, it is as if they come to us convinced that they inhabit a universe of one. We, as teachers, must take as our first mission to widen their universe.

Thursday, March 5, 2009

Liking Children versus Liking to Teach Children

There is a big difference between liking to teach children and liking children. Many, I think, get into teaching because they possess a liking for children rather than a liking for teaching children.

I started thinking about this when reflecting on what it is I dislike so much about teaching where I do. After all, I really do like to teach children. I got into teaching as a profession after taking a long-term substitute job at a high school, where I worked in the 'resource room' (where kids go when they need extra assistance with work, or accommodations for their disability). I vividly recall "connecting" with several students - particularly one I suspect was an undiagnosed autistic girl. I spent many hours teaching her physics, and got quite a few compliments on being able to get through to her when most teachers couldn't. I knew then that I wanted to try special education.

So, that I like teaching children is not in dispute. But I have been coming to the realization that, as much as it hurts me to admit it, I don't have the innate love for kids that many teachers do. If my job involved no possibility of teaching, for example, I would find being around kids for 6.5 hours per day too daunting to bear. It is not the kids that get me through, but the teaching of them.

And what makes matters difficult is that, where I teach, teaching is very frustrating and comprises only a small percentage of what "teachers" do. Teachers first have to motivate, then control behavior, and then get a little teaching done. Kids actively resist us at every turn and are quite fond of defying our attempts to instill things into their brains.

This is why I think teaching where I do is so hard for me, and perhaps, not (as) hard for many others. Many of the teachers really love the kids, and don't mind as much that the kids aren't learning nearly what they should, just as long as they get to interact with them. But for me, who likes teaching kids more than being with kids, it is a source of endless frustration and dejection that the "teaching kids" part is such a small role, while the "handling kids" part takes most of the time.

If you haven't been able to tell by now from my blog entries, I am not a coddler. I do not feel bad when I don't appease kids. I don't have much trouble with trading pain for gain. I suspect, though, that a lot of teachers who like kids more than teaching kids are the opposite: their desire is to encourage rather than instruct, and to enable rather than equip. This has been my particular experience with those who go into elementary education; they are teachers more because they like kids than because of any drive to teach. Students become "little guys," which is a sign, to me, of an overly motherly approach that sacrifices rigor and high expectations for happiness and high self-esteem.

I cannot be that way. And so I am going on to study for my PhD, after which I might teach college students - where classroom management is not the ultimate concern, where teachers are expected not to baby but to teach, and where students are expected to exercise some independence.

I think I am making the right choice.

Tuesday, March 3, 2009

Which Counts More: Motivation or Intent?

There is a longstanding strain in law and public morality which tells us that bad acts may be excused if it can be shown that no malicious or wicked intent accompanied them. In law, a killer may be judged 'not guilty' because he suffered from a mental defect. In public morality, we often view a drug addict's crime in a different light when we find out that her actions were "caused by" drug addiction rather than malice.

Well, here is the latest example of (what I see as) this egregious tendency. A man in Canada has recently been arrested for "beheading and cannibalizing a passenger on a Greyhound bus." According to the article:

Li's lawyers are not disputing that he killed McLean, but they will argue Li was mentally ill and not criminally responsible. A psychiatrist told the court Li is schizophrenic and believed God told him to do it.

I understand the dilemma of putting a man away in prison for an act which may have been "caused by" a mental defect. But prison isn't just about punishment. One of the reasons I have never warmed to the "insanity defense" is that it misses the point that one key reason to put someone in prison is to keep others safe from that person. It makes absolutely no sense to argue that a person whose psychiatric disorder (an unpredictable one at that) caused him to behead and cannibalize someone is the type of person who should avoid imprisonment. That person is, to me, the very definition of a person who SHOULD be in prison - if only to guard against future beheadings that God might tell him to commit.

Why do we accept the idea that a man afflicted with a mental disorder should be allowed to argue that he is not guilty of a crime he in fact committed? Because current thought puts more weight on intent and motivation than on action. We are less likely today than we were thirty years ago to feel contempt for the man who gambles his family's savings away, because he is a "victim of a disease." We are less likely to morally judge the action than we are the intent.

I think this is wrong-headed for several reasons. First, judging intent is a very subjective business, while judging actions is not. It is a fact that this man beheaded and cannibalized a person. No one, not even the lawyers, disputes that. Is it a fact that God told him to do it? Only if we take the killer's word. If a man gambles his family's savings away, this is factually verifiable. What is not is the idea that he was powerless to choose not to.

Secondly, in a legal and political sense, judging intent rather than action sets a dangerous precedent. In order to protect society against certain acts (cannibalism, murder), it is better to make a rule against murder without exception than to make a rule against murder which allows for murder within certain psychological parameters. If we want to make sure that no one else feels free to behead or cannibalize, the precedent should be that ANY instance of these things is wrong, not that it is wrong only if there is bad intent behind it.

And I reiterate my concern that, as part of imprisonment is for public safety rather than punishment, it simply does no one any good for a diagnosed schizophrenic who has beheaded in the name of the god issuing him orders to walk the street solely because he has a disorder. If he walks, maybe he can kill the judge and blame it on God.

Monday, March 2, 2009

The Problem of Student Motivation

Recently, I came across an interesting article: "Who is Responsible for Student Learning?" by Baylor University education professor J. Wesley Null. The main thesis of the article is that in an age where "accountability" is a buzzword, we must remember that, as wrong as it may sound to some, teachers can only do so much in getting students to learn. We often forget that the other half of the responsibility MUST lie with the student.

Null suggests that the view which sees teachers as the primary responsibility-holders for student learning confuses the business of education with most other businesses. There, if the product falls short of expectations, the workers are the likely culprit. As Null rightly points out, this ignores one key difference between "factories" and education: students, unlike cars and insurance policies, are agents that are often actively resistant to being "worked on." If a car turns out not to work well, something in the workmanship is likely to blame. If a student graduates from history class without being able to recall historical facts, it may be due to poor teaching by the teacher, or poor learning by the student.

Null provides an analogy to bolster the point:

[I]f a husband and wife enroll in, say, a marriage course at their local church, should the pastor who teaches these classes be blamed if the couple's marriage never improves? Or do the husband and wife have a joint responsibility to improve their own marriage?

Obviously, we would suggest that the pastor can persuade, guide, cajole, instruct, and remind. What we would not say, though, is that any of this can make the marriage work. In the end, teaching, guiding, and reminding are only as good as the will of the person receiving the instruction, guidance, and reminders.

That this is analogous to the limits of the teacher's role is obvious. What we do with this recognition is not. When teachers say things like 'we can only do so much,' or try to shift at least some of the "blame" onto parents, students, and the anti-intellectual culture that many students come to us imbued with, we are seen as offering a subterfuge. In turn, teachers may be charged with laziness and unwillingness to be accountable.

This view is sometimes justified. There certainly are such things as bad teachers, and often, those teachers ARE shielded from accountability by the ill-thought-out mechanism of tenure. And, as teaching is a "public service," there has to be some way to hold teachers accountable for results. Otherwise, poor-performing teachers can pawn all responsibility off onto their students.

There are several other reasons for our collective discomfort with blaming at least some of education's failure on students:

(1) Doing so forces parents and society at large to take a critical look at their practices and whether they may be (inadvertently) sending teachers students who have not been taught such basic prerequisites to learning as respect for authority, impulse control, and some reason (internal or external) to value the enterprise of education. It is easier to blame teachers for students' low performance.

(2) As trivial as it seems, all of us have become familiar with "feel good" teacher movies like "The Ron Clark Story," "Dangerous Minds," and "Freedom Writers," where a teacher is able to overcome all educational obstacles with students by pure diligence and tenacity. Take this, couple it with the egalitarian idea that every child has equal potential, and we grow intolerant of the view that student failure is not purely a symptom of bad teaching.

(3) As most of us have had at least 12 years of direct experience with teachers, teaching is the one job that most people have seen up close, and as such, one that most people feel is not so difficult. I think this inadvertently plays into the notion that student failure is due to bad teaching because, unlike most professions, it is easy to "play at armchair teaching." Laypersons might not be quick to offer opinions on how accountants can do their jobs better, but they are generally unafraid to criticize the "common sense" discipline of teaching.

It is difficult to know how to strike a good balance between holding teachers accountable for results and accepting that teaching, unlike other professions, requires willingness from producer AND PRODUCT in order to be successful. It may be as simple as involving teachers who produce scores of underachieving students in more stringent observations and professional development. It may also involve offering teachers financial incentives, as some districts have done, for good results (thus not penalizing underperformance, but simply rewarding outstanding performance).

But what many don't realize is that teaching is hard as it is. It is even harder when we teachers are held solely or even primarily accountable for students' low performance on tests (when they admit, as they often have to me, that they do not study), for low homework grades (homework is the responsibility of the home), and even for student truancy.

As Null correctly notes, and as hard as it is for many parents and policy-makers to hear, "Individual agency matters. Put another way, one person cannot be held responsible for another person's behavior."

On Schemes and Schools: How We Got "Here" from "There"

Below is a recent review I have written of Diane Ravitch's book, "Left Back: A Century of Failed School Reform." In the book, Ravitch attempts to give an in-depth history of the progressive turn in education and its excesses. In the end, she argues that progressivism bears the responsibility for the American education system losing its way. While I disagree with Ravitch on a number of points (she is very anti-utilitarian when it comes to education, and I cannot see education justified any other way), her historical arguments are very persuasive. Highly recommended!


Diane Ravitch's "Left Back" is both a history and a polemic. As the subtitle suggests, Ravitch does not only cover the history of educational ideas over the past century, but the history of "failed" educational ideas. As other reviewers suggest, Ravitch's book is a history of, and argument against, progressivism in education.

Most of this book centers around two recurring dualisms of 20th-century educational theory: essentialism v. utilitarianism, and learning as transmission between teacher and student v. learning as a natural, student-led process.

The debate between essentialists (like Bagley) and instrumentalists (like Dewey and Thorndike) was over whether educational learning was valuable in itself or whether its value derives from its utility. In "Left Back," Ravitch demonstrates that the concept of justifying education in utilitarian terms (how useful it is to students' lives) may have been an interesting idea at one point, but, like many ideas, it was pushed too far. Not many people - even the essentialists - would argue that education should have no utility to students' lives, but the overselling of this idea by progressives resulted in everything from hastily done tracking (tailoring instruction to students' predicted 'station' in later life) to the stripping away of academic rigor (why take biology when one can take a class on how to grow plants?).

The debate between those who argued for teacher-led education and those who argued for student-led education was an outgrowth of the previous debate. The 'student-led' advocates (William Kilpatrick, Carl Rogers) rediscovered and revamped the Rousseauian idea that the best education is a non-coercive process of letting the student explore what she likes and fostering her creativity. By contrast, the 'teacher-led' advocates (Isaac Kandel, Michael Demiashkevich) believed that learning was often an artificial process that necessitated the teacher being a teacher, and that part of a good education was learning things beyond what one would learn on one's own.

In each debate, the progressives (utilitarians, student-led believers) won the day, often in spite of public outcry against them. In fact, one ironic theme in Ravitch's book is that while the progressives constantly invoked the word "democratic" to support their various cure-alls, the movement was, at every turn, undemocratic. Progressives always saw themselves as superior to the clamor of "reactionary" parents (who audaciously wanted their kids to learn subject-matter), were constant enthusiasts of tracking students at an early age by their predicted 'stations' in life, and constantly spoke of "creating a new social order," rather than educating independently-thinking students.

The undoubted hero of the book is William Bagley, an education philosopher who may have been John Dewey's most serious rival and who is, unjustly, all but unheard of today. For his part, Dewey is portrayed as an out-of-touch intellectual whose "innocence was [often] comical" (p. 207). Many will object to this characterization of an educational icon, but Ravitch is certainly not the first to suggest that Dewey was entirely too aloof to articulate a philosophy with any real clarity.

Some negative reviewers comment that Ravitch's characterization of the various progressive movements is an unfair and mistaken straw man. While I have only read a handful of the plentiful original sources she cites, it is difficult to see how an author who quotes so frequently from primary sources (many unambiguous in meaning) can be said to have gotten them wrong. My thoughts are that this book is a fair portrayal of progressivism, and that the reviewers may be mad because Ravitch is not afraid to mix history and polemic.

All in all, this is a stunning work for anyone who wonders how we got here - social promotion, self-esteem movement, flexible standards - from "there." Ravitch may have mixed history with polemic, but the book is well-researched history and necessary polemic. Ravitch's conclusion:

"If there is a lesson to be learned from the river of ink that was spilled in the education disputes of the twentieth century, it is that anything in education that is labeled a "movement" should be avoided like the plague. What American education most needs is not more nostrums and enthusiasms but more attention to fundamental, time-tested truths." (p. 453)