Scull on psychiatry’s problems

Interesting book review by Andrew Scull in the Los Angeles Review of Books. The book is called All We Have to Fear: Psychiatry’s Transformation of Natural Anxieties into Mental Disorders by A. V. Horwitz and J. C. Wakefield. The authors (and the reviewer) are critical of contemporary psychiatry for much the same reason that some of us philosophical types are skeptical of it—empirical evidence is sketchy and many of the symptoms of psychopathology are amorphous enough to mean anything. This paragraph is a good synopsis of the review:

This reliance on symptoms, and on the simplistic approach of counting symptoms to make a diagnosis, creates a bogus confidence in psychiatric science. Such categories have an element of the arbitrary about them. When Robert Spitzer and his associates created DSM III, they liked to call themselves DOPs (data-oriented persons). In fact, DSM’s categories were assembled through political horse-trading and internal votes and compromise. The document they produced paid little heed to the question of validity, or to whether the new system of categorizing mental disorders corresponded to real diseases out there. And subsequent revisions have hewed to the same approach. With the single exception of Post Traumatic Stress Disorder (PTSD), which, as its name implies, is a diagnosis having its origins in trauma of an extreme sort, the various categories in the DSM, including the anxiety disorders that preoccupy Horwitz and Wakefield, are purely symptom-based. (The construction of the PTSD diagnosis, incidentally, as the authors show, was every bit as political as the creation of the other DSM categories.) Because so much depends on the wording that describes the symptoms to be looked for and on how many symptoms one needs to display to warrant a particular diagnosis (why do six symptoms make a schizophrenic, not five, or seven?), small shifts in terminology can have huge real-world effects. The problem is magnified in studies of the epidemiology of psychiatric disorders. As Horwitz and Wakefield point out, to make studies of this sort cheaper and allow those producing them to employ laypeople to administer the necessary instruments, the diagnostic process is simplified even further in these settings. They write that psychiatric epidemiologists make “no attempt to establish the context in which worries arise, endure, and disappear so as to separate contextually appropriate anxiety from disordered anxiety conditions [and thus they] can uncover as much seeming psycho-pathology as they desire.”

Scull noted earlier that the DSM was created specifically to standardize psychiatric diagnoses after some embarrassing events in the mid-twentieth century. But as with all standards for complicated and messy phenomena, simplification was necessary. And since simplification leads to abstraction, the symptomologies for various disorders became vague. Vaguely defined symptoms can be matched to any complaint. Are you feeling down? Or very down?

Anyway, a good read.

Stevenson on Realism in Literature

Following is an excerpt from R. L. Stevenson’s “A Note on Realism,” from Essays in the Art of Writing. The old proverb struck me as I was reading it: the more things change, the more they stay the same. Stevenson rejects as a false dichotomy the distinction between realism and the Aristotelian view of literature as a vehicle of perennial truth. The same monster exists today. Gritty realism about anti-heroes and the pathos of the poverty-stricken are the stuff of real literature, while the problems of the human condition, when dealt with at all, are treated like social problems in need of state intervention. Anyway, enjoy:

*********

In literature…the great change of the past century has been effected by the admission of detail.  It was inaugurated by the romantic Scott; and at length, by the semi-romantic Balzac and his more or less wholly unromantic followers, bound like a duty on the novelist. For some time it signified and expressed a more ample contemplation of the conditions of man’s life; but it has recently (at least in France) fallen into a merely technical and decorative stage, which it is, perhaps, still too harsh to call survival….After Scott we beheld the starveling story—once, in the hands of Voltaire, as abstract as a parable—begin to be pampered upon facts. The introduction of these details developed a particular ability of hand; and that ability, childishly indulged, has led to the works that now amaze us on a railway journey.  A man of the unquestionable force of M. Zola spends himself on technical successes. To afford a popular flavour and attract the mob, he adds a steady current of what I may be allowed to call the rancid. That is exciting to the moralist; but what more particularly interests the artist is this tendency of the extreme of detail, when followed as a principle, to degenerate into mere feux-de-joie of literary tricking. The other day even M. Daudet was to be heard babbling of audible colours and visible sounds.

This odd suicide of one branch of the realists may serve to remind us of the fact which underlies a very dusty conflict of the critics. All representative art, which can be said to live, is both realistic and ideal; and the realism about which we quarrel is a matter purely of externals. It is no especial cultus of nature and veracity, but a mere whim of veering fashion, that has made us turn our back upon the larger, more various, and more romantic art of yore.  A photographic exactitude in dialogue is now the exclusive fashion; but even in the ablest hands it tells us no more—I think it even tells us less—than Molière, wielding his artificial medium, has told to us and to all time of Alceste or Orgon, Dorine or Chrysale. The historical novel is forgotten. Yet truth to the conditions of man’s nature and the conditions of man’s life, the truth of literary art, is free of the ages. It may be told us in a carpet comedy, in a novel of adventure, or a fairy tale. The scene may be pitched in London, on the sea-coast of Bohemia, or away on the mountains of Beulah. And by an odd and luminous accident, if there is any page of literature calculated to awake the envy of M. Zola, it must be that Troilus and Cressida which Shakespeare, in a spasm of unmanly anger with the world, grafted on the heroic story of the siege of Troy.

This question of realism, let it then be clearly understood, regards not in the least degree the fundamental truth, but only the technical method, of a work of art. Be as ideal or as abstract as you please, you will be none the less veracious; but if you be weak, you run the risk of being tedious and inexpressive; and if you be very strong and honest, you may chance upon a masterpiece.

 

Plagiarism in Europe

Here’s an article on plagiarism from BBC online. More politicians are being taken down for it. In my opinion—and my experience—it’s a lot more rampant than these figures suggest. Hopefully, the stink raised over this will result in more people being caught and exposed. Universities themselves don’t seem too keen on doing much about it. Your average professor keeps on believing the system will sort things out—Pangloss was never so naive.

The Weightlessness of Contemporary Literary Short Fiction

I’m going to get hate mail over this, but I don’t care. It has to be said aloud because a lot of people are thinking it, others are feeling it, but no one is saying it: the vast majority of contemporary literary fiction is beautifully crafted prose wholly devoid of substance. It is masterful meaninglessness. I’m speaking especially of short fiction here, the kind found in literary magazines. When I do bother to read one of the well-established brands—I don’t think I need to name them—I come away forgetting everything I’ve read because none of the stories said anything to me. The stories are like exercises in style, like perfectly choreographed ballets about nothing.

I don’t get this feeling when I read genre writers like Ray Bradbury or Philip K. Dick. Sure, some of Bradbury’s stuff is childish and Dick’s paranoid universe can get a little old, but many of their stories have depth to them. Take Bradbury’s The Burning Man.  The characters’ encounter with “genetic evil” on a hot summer day speaks to a deep-seated fear in the human soul: that we can unintentionally stumble across evil that can only be held at bay so long by cunning before it consumes us.

Or take The Veldt, one of Bradbury’s best-known stories. It’s often portrayed as a cautionary tale about the effects of immersive technology—a warning about video games before the first one had even been created. But it has a deeper resonance. The children kill their parents because they’ve been cut off from the real world by their parents’ own well-intentioned desire to insulate them from it.  The world of human relations—of love, obligations, goals and striving for excellence—has been replaced by one where every need is met by technology. In attempting to create happy children, in other words, the parents have created sociopaths who know, feel and desire nothing beyond the sensual inputs provided by the artificial world created to entertain them into happiness. It’s something every parent agonizes over.

Contemporary literary fiction, by contrast, is devoid of real pathos. The most delicate and refined of sentiments are as deep as it gets. In fact, this aesthetic is intentional. Someone loses a not-so-loved one or witnesses an accident and then ruminates about it as a complete outsider—endless meditation on trivialities by bystanders to mundane events. There’s also something unsavory lurking in this too-common how-it-affects-me standpoint. The person actually suffering is cast into the background so the bystander narrator can dwell on how the event affected him emotionally. That is the especially unsettling dimension of it all.

If you’re still not convinced of the problem with the aesthetic, let me put it in more concrete terms. Suppose a friend comes to you and wants to extemporize for an hour or two on his feelings over the death of a neighbour he had never even spoken to. Would you not invent some excuse to get out of it? “Sorry, I have to wash my car. Can this wait until next year some time?” And if you wouldn’t listen to what is, frankly, shameful self-indulgence, why in the name of Zeus would you read it?

Isn’t it ultimately decadent navel-gazing? Isn’t that where it originates—in decadence? After all, the stories I’ve read have their closest kinship in form and feeling with the literature of late imperial Rome, when every literary work was a glossy, meaningless ditty churned out to titillate with the cleverness of its style and the lightness of its sentiments. How else does one define what is at once so beautifully wrought and yet so utterly superficial?

Now, I’ll allow the possibility that I missed something—that there are great examples of literary short fiction that I’ve completely overlooked. And I do desperately invite the opposite view. But be forewarned: don’t bother informing me about wonders without justification. Don’t just give me titles and authors and instruct me to read them. I need you to make the case; simply asserting it may demonstrate your elevated tastes to this philistine, but it won’t move me.

The floor is open. Please prove me wrong.

Finding your own voice?

I realize that “finding your own voice” is orthodoxy these days, but I’ve never quite understood why anyone seeks it, why anyone supposes it exists (or how they’ll know when they find it) and, most importantly, what value it’s supposed to have. It seems to me that the most valuable thing a writer can have is the ability to do any voice he wants.

When Science sleeps with Politics

In a recent blog post Praj upbraided Chris Mooney and Alex Berezow for attempting to rank the pro- or anti-science bona fides of political parties. The tempest in a teapot started with a remark by Berezow in a column in USA Today, where he claimed that there was one anti-science Democrat for every anti-science Republican. Mooney was outraged, claiming that polling data suggest Tea Party supporters overwhelmingly reject climate change and evolution, which implied for him that Republicans were the anti-science party by a wide margin. Not surprisingly, the comments section of Mooney’s blog lit up with examples of anti-science sentiment on the left.

Praj pointed out the folly of it all. Ranking parties on the basis of their respect for science is a mug’s game. It’s really political invective and moral stridency dressed up as impartial analysis. I’d go one step further: attempts to assess (what I’ll call) the “scientificality” of parties and partisans on the basis of polling data are absolutely meaningless, since polling data reports nothing but the respondents’ feelings about science.  Let me explain.

1. Why pro-science polling data is meaningless as a measure of the “scientificality” of a party

Step back and look at polling data as a measure of scientific literacy in the general population—i.e., how much people know about science.  Suppose a poll result says sixty percent of people believe in evolution and climate change while forty percent do not. Can you now infer that sixty percent of people are knowledgeable about climate change and evolution, while forty percent are not? No, because what anyone does or doesn’t “believe in” is just a measure of how they feel about a scientific statement—like “Do you believe the climate is changing?”—which is irrelevant to the question of how much they know about science.

But wait, you say, doesn’t the fact that sixty percent of people believe in evolution show that sixty percent know something about it? No again, because the poll says nothing about the quality of that knowledge. The believers might have wholly erroneous ideas about climate change and evolution. In fact, my experience suggests that the vast majority of laymen believers in evolution—including most of the haughty pundits who deride those who don’t believe—haven’t much of a grasp of evolution themselves. Most pro-evolution laymen think, for example, that evolution is progressive (i.e., that living forms get ‘better’ over time), which is completely false.

So what does the pro-climate change, pro-evolution sentiment reveal? Nothing beyond the respective successes and failures of the opponents and proponents of climate change and evolution. In my experience, belief in evolution is often worn like a badge of moral and intellectual superiority, not a conclusion reached after careful study; and disbelief like a badge of moral and intellectual independence and anti-establishment sentiment.

2. Why anti-science polling data is meaningless as a measure of the scientificality of a party

Surely, you say, anti-science sentiment does reflect ignorance of science among those who reject this or that scientific theory? Not really. Since the vast majority of non-scientists have a tenuous grasp of any given science, neither party can claim that the other has a monopoly of any consequence on ignorance. In other words, belief or disbelief in this or that scientific theory doesn’t mean much when neither side knows what they’re talking about in the first place.

Now you might still object that expressed disagreement with scientific theory—whether knowledgeable or not—can still amount to anti-scientificality when the theory is beyond dispute. Not believing in evolution, after all, is anti-science because there is no scientifically defensible anti-evolution position.

In response, let me say that it’s the mark of naivety or partisan tunnel vision to conflate what people say with why they say it.  Suppose I asked a Tea Partier the following question, modeled on Darwin’s theoretical formulation in The Origin of Species: “Do you agree that the differentiation of species was caused by descent with modification through natural selection?” His likely response: “Huh?” If I asked the same question in a different way—e.g., “Do you believe in evolution?”—he might well say, “Hell, no!”

The reason for the different answers should be obvious to thoughtful persons. He doesn’t reject evolution out of ignorance or knowledge. To this fellow, evolution means “Human beings came from monkeys,” and, more importantly, that “Human beings are no better than monkeys.” He equates an undesirable moral implication with belief in the scientific theory, so he rejects the latter on the basis of the former—that’s what “believing in evolution” means to him.

Now, before you go calling this fellow an ignoramus, let me urge you to consider further that his inference isn’t exactly irrational. Richard Dawkins and his fellow travelers (not to mention more than a few evolutionary psychologists) have tried time and again to hitch the moral and political point about human beings to the scientific one. And if they can say, “We’re no better than apes, evolution proves it,” why is it so unreasonable for someone in no position to understand the science to reject the science along with the moral and political activism that’s been attached to it?

3. And the moral of the story is…    

I think there’s a profound moral to this story that every thinking person should consider.  Science has always been politicized and always will be. Human nature has decreed that some will always try to tie their political crusade and their moral beliefs to the back of scientific research.  In some cases, it will be justified, but in the majority of cases it won’t; and no matter how many times you cite Hume’s is/ought dichotomy to those who should know better—Sam Harris, for example—they’ll refuse to accept it when they think the tide’s flowing in their direction.

Nonetheless, it’s the duty of thinking people to resist it because it can only end in the corruption of science and politics both. What happens, after all, when some scientific theory becomes the justification for political beliefs? Do you think new evidence that contradicts the earlier findings—and by implication the political beliefs that depend on them—will ever see the light of day? Not likely. The politicos will ensure that scientific orthodoxy remains untouched; in consequence, real science will cease.

Now, maybe you think it can’t happen—I suggest you think again. The movement is afoot to have Intelligent Design taught in schools.  Why? Is it because irrational religious people don’t want their kids taught that human beings came from apes? No. It’s happening because Dawkins and Dennett (and others) propagated their atheist creed on the back of Darwinian evolution. Wiser men would have known that when science faces off against religion, science always loses. But these two naïve fools thought that parents would just sit idly by while their kids were force-fed atheism by proxy—you could almost hear them sniggering: “And there’s nothing you can do about it!”  But there was. Schools answer to parents, and religious parents outnumber irreligious parents by a huge margin; so ID will be taught in schools and there’s nothing Dawkins and Dennett can do about it—except maybe shut up.

Intellectual Conceits: Derrida on 9/11

Let me state at the outset that I’m not a member of either the Hate Derrida or the Love Derrida club.  I’m in the Indifferent Club. While our membership is quietly growing, our manifesto is less well known than our rivals’. We don’t claim to be experts in Derrida, of course; but from what we have read, we don’t think there’s anything especially novel about him. To us, he seems like an old New Leftist dressed in a radical skeptic’s leisure suit. To the extent that he has anything to say, then, it can appeal only to old guard leftists who’ve lost faith in the old ways without losing the old faith. It’s all the same old wine in a new bottle.

At any rate, I don’t want to get into a full-blown interpretation of Derrida and deconstruction here. I have a more modest aim. I want to explain in this post why we in the Indifferent Club always come to the conclusion that Derrida isn’t worth listening to. I’ll show this by looking at one of his answers to the questions on 9/11 solicited by Borradori and partially published online.  The full text of that question and Derrida’s answer are reproduced below along with my analysis.

It’s important to note that I’m not interested in condemning his political opinions for failing to conform to my own—the content of his politics is beside the point in what follows.  I’m only concerned to show that for all his portentous verbiage, his reflections on 9/11 are no deeper, no more cogent and no less ill-informed than what you’d find on the opinion pages of your local newspaper at the time.

Borradori: Whether or not September 11 is an event of major importance, what role do you see for philosophy? Can philosophy help us to understand what has happened?

Derrida (bracketed letters added for clarity of reference):

[a] Such an ‘event’ surely calls for a philosophical response. [b] Better, a response that calls into question, at their most fundamental level, the most deep-seated conceptual presuppositions in philosophical discourse. [c] The concepts with which this ‘event’ has most often been described, named, categorized, [d] are the products of a ‘dogmatic slumber’ from which [e] only a new philosophical reflection can awaken us, a reflection on philosophy, most notably on political philosophy and its heritage. The prevailing discourse, [f] that of the media and of the official rhetoric, [g] relies too readily on received concepts like ‘war’ or ‘terrorism’ (national or international).

I don’t know what a “philosophical response” to 9/11 means when it involves [b], questioning the presuppositions of philosophical discourse. I would have thought that [b] had no direct relationship with [a]. Presumably, however, [b] is a little fluff, a little warm-up rhetoric, and we’re supposed to skip from [b] to [c]; thus, Derrida is after the conceptual framework that underlies our understanding of 9/11 as an “event,” and it is at that framework that the philosophical response in [a] is aimed.

Adding [d] into the mix, we can say that our understanding is the dogmatic slumber from which the response is meant to wake us.  Unfortunately for us, [g] says that Derrida will only concern himself with two of the concepts in our faulty framework—i.e., with “terrorism” and “war”—out of the whole schema otherwise known as the “official rhetoric” about the “event” named 9/11.

The introductory paragraph’s bottom line is this: Derrida is going to give us a philosophical response to 9/11 that involves exposing the faults in our concepts of “war” and “terrorism” as embedded in our understanding of 9/11 as an event.  I admit that it sounds promising: as if we’re about to learn something we didn’t already know, or hadn’t realized. But I think you’ll see that he says nothing we haven’t already heard ad nauseam and that his own understanding relies on a misunderstanding—or a willful misconstrual—of the context of everyday discourse.

Derrida writes:

A critical reading of Schmitt, for example, would thus prove very useful. On the one hand, so as to follow Schmitt as far as possible in distinguishing classical war (a direct and declared confrontation between two enemy states, according to the long tradition of European law) from “civil war” and “partisan war” (in its modern forms, even though it appears, Schmitt acknowledges, as early as the beginning of the nineteenth century).

The reference to Carl Schmitt is gratuitous name-dropping. The distinction between international, civil and partisan war is not especially uncommon. And nothing added after this will justify the reference.  Anyway, let’s carry on:

 But, on the other hand, we would also have to recognize, against Schmitt, that the violence that has now been unleashed is not the result of ‘war’…

Notice the passive construction of the main point: “the violence that has now been unleashed is not the result of war.” Why would such a fearless thinker beat around the bush?  The “violence” is the invasion of Afghanistan and Iraq. The second part, “not the result of war,” must mean (recalling Derrida’s own threefold taxonomy of war) that the invasion was not provoked by an international, civil or partisan war.

This last remark makes no sense, on the face of it. Crashing airplanes into buildings must count as an act of partisan war, and the invasions as retaliatory strikes. One could question the value of retaliatory strikes or the targets of them, of course; maybe the US should have turned the other cheek, or maybe Afghanistan or Iraq or both weren’t the best choices for retaliation. But to claim, as Derrida does, that the retaliation itself was not the “result of war” is nonsensical.  Unless he means something else: that the retaliatory strikes had a different motive—that 9/11 was a pretext for invading these two countries for other reasons.

If that’s the case, there’s nothing especially profound here.  (Unless, of course, you’re one of those partisans who thinks everything likeminded people say is profound.) After all, lobbing accusations of Machiavellian conspiracies at the Bush administration was (and still is) an everyday occurrence. Shouting that “It’s all about oil!” in other words, is no more or less convincing when it’s dressed in circumlocution that avoids actually saying it.

 …(the expression ‘war on terrorism’ thus being one of the most confused, and we must analyze this confusion and the interests such an abuse of rhetoric actually serve).

Say what? There’s an “abuse of rhetoric” underway? Not an abuse of speech-making! How low will these people sink!

Seriously, ask yourself if the expression “war on terrorism” is any more opaque than any other slogan—like, say, “Yes we can!”  In fact, isn’t the meaning of the former even more obvious than the latter? Consider the following definition: the Bush administration and its supporters in the media and academia concluded that there was an anti-Western and especially anti-American subculture in Islamic communities and countries that was planning the violent overthrow of the Western hegemony through terrorism. The war on terror is thus a police, intelligence, military and diplomatic action to thwart or weaken or destroy this enemy faction.

Regardless of whether you’re for or against it, you can’t really claim to be perplexed over what the war on terror means.  Even if you think it’s a cover story for more nefarious motives, the “war on terror” is hardly a confused or confusing idea.

 Bush speaks of ‘war,’ but he is in fact incapable of identifying the enemy against whom he declares that he has declared war. It is said over and over that neither the civilian population of Afghanistan nor its armies are the enemies of the United States.

Excitable teenagers get worked up when they discover apparent paradoxes like declaring war on an enemy you can’t precisely name while peremptorily exempting certain people as innocents, as if you did indeed know who the latter were. But grownups know that we do such things all the time. The police set up a manhunt for an unknown killer on the basis of a mutilated body, astronomers search for observational evidence of planets predicted to exist, and dignitaries lay wreaths on the tombs of unknown soldiers.  It’s a crazy world when you’re young; but once you reach adulthood, experience teaches that agents can be inferred from actions.

Now, if that last remark seemed a little uncharitable to you, don’t forget that this isn’t some man on the street or some thinker caught off guard answering a reporter’s question; it’s a prepared statement from one of the intellectual luminaries of the twentieth century.  For him to make such an unremarkable—indeed, fatuous—criticism is embarrassing.

 Assuming that ‘bin Laden’ is here the sovereign decision-maker…

It’s fair to give a qualified answer. Can anyone say for certain, after all, that bin Laden is not a figurehead for some other agency? No. But Derrida’s cautiousness when it comes to attributing agency to bin Laden belies his partiality when he next offers unqualified assertions that (presumably) fit better with his political orientation:

…everyone knows that he [bin Laden] is not Afghan, that he has been disavowed by his own country (by every “country” and state, in fact, almost without exception), that his training owes much to the United States and that, of course, he is not alone.

Everyone knows bin Laden’s not alone, says Derrida. Yet only a moment ago he held up Bush’s inability to identify the enemy in the war on terror as evidence, not only of how confused his administration was, but of his secret motivations. Indeed, the war on terror was an “abuse of rhetoric” because it conjured enemies out of thin air. Now, apparently, the existence of anti-American terrorists (as well as information regarding their whereabouts) is common knowledge. Go figure…

I’ve come to expect this double-standard from Derrida (and his fellow travelers) when it comes to 9/11.  And I think I understand why.  On the one hand, their worldview demands that evil capitalists invent pretexts for wars of profit; on the other, they need some sign that the anti-capitalist, Third World underclass exists, that it has broad-based support, and that it’s striking back against its oppressor. That puts Derrida et al. in a tricky position regarding 9/11: they have to deny the culpability of the oppressed in the attack in order to preserve the capitalists’ profit motivation for the war; yet they also have to assert the culpability of the oppressed in revolutionary acts in order to make the case for the oppressiveness of the capitalists. Lawyers call it having an excuse and an alibi: he couldn’t have done it, but he sure was justified in doing it.

Derrida again:

The states that help him indirectly do not do so as states. No state as such supports him publicly.

 Is the official diplomatic line really relevant?  Does it matter to our assessment of China as a communist dictatorship, say, that it calls itself a republic? Maybe you think it does and that Derrida is making an important point here. But how come the official line didn’t even get a hearing when it came to the US and the war on terror?  Recall that Derrida dismissed the war on terror as a pretext for invasion—as an “abuse of rhetoric”—when the Bush administration offered it as its reason for invading Iraq and Afghanistan. So if the official line does count in his analysis, Derrida must offer us some reason for considering it genuine when it comes from non-Western countries, but disingenuous when it comes from Western ones.  In simpler terms, Derrida must explain why base motives can be attributed without argument to Western countries, while good motivations must be assumed of non-Western ones.  I offer this challenge not because I think it can be answered, but because I think it shows there’s nothing more to it than prejudice.

 As for states that ‘harbor’ terrorist networks, it is difficult to identify them as such. The United States and Europe, London and Berlin, are also sanctuaries, places of training or formation and information for all the ‘terrorists’ of the world. No geography, no ‘territorial’ determination, is thus pertinent any longer for locating the seat of these new technologies of transmission or aggression.

The double-standard again: countries that harbor terrorists are hard to identify as harborers of terrorists, unless they’re Western ones, in which case they’re easily identified as such. Whether al-Qaeda training camps existed in Afghanistan under the aegis of the Taliban is a question fraught with difficulty, according to Derrida, in spite of that regime’s implicit admissions. But the existence of “sanctuaries, places of training or formation” in London and Berlin is a fact beyond dispute. Soyez réaliste, Monsieur Derrida! You can’t have it both ways: either you accept without reservation the intelligence reported in the papers as fact (so far as anyone knows), or you take it all with a grain of salt. For a philosopher to cherry-pick what he does and doesn’t believe from the same sources without explanation bespeaks ideological bias, not dispassionate analysis.

But let’s leave that aside and observe how the ground has shifted from deconstructing the rhetorical justification for the invasions to practical strategic policy analysis. Derrida originally said that philosophers should probe the role played by words like “terrorism” and “war” in the media rhetoric surrounding the event. But he’s not talking about words anymore.  Now he’s offering a practical critique of the execution of the war itself. He says terrorism is an international phenomenon—that geography is no longer pertinent—because the terrorists are everywhere and can strike anywhere. This implies the invasions were misguided—poor strategic decisions in the fight against terrorism.

Maybe you think he’s making a profound strategic point. But can there be any value in a geopolitical strategic assessment by Jacques Derrida, literary theorist? Is he in any position to judge the relative merits of allocating resources to policing action in Western countries rather than invasions of foreign ones as a means of defeating terrorism? I fail to see how his opinion in this regard has any more weight than my neighbor’s.

You may object, too, that I’m reading this shift in focus into the text. You’d argue instead that Derrida cites Western terrorist sanctuaries to support his contention that terrorism was a pretext for foreign invasions. Fine. But now you invite the fallacy of irrelevance: what is the import of the fact that terrorist sanctuaries exist in London and Berlin to the claim that terrorism was used as a pretext for the invasions of Iraq and Afghanistan?  Do you want him to be claiming the following: that the fact that the US did not invade Germany and England is evidence that the war on terror is a pretext for a war for profit? If you do, I’ll only say in response that you’re either simpleminded or blinded by hatred.

To say it all too quickly and in passing, to amplify and clarify just a bit what I said earlier about an absolute threat whose origin is anonymous and not related to any state, such “terrorist” attacks already no longer need planes, bombs, or kamikazes: it is enough to infiltrate a strategically important computer system and introduce a virus or some other disruptive element to paralyze the economic, military, and political resources of an entire country or continent. And this can be attempted from just about anywhere on earth, at very little expense and with minimal means. The relationship between earth, terra, territory, and terror has changed, and it is necessary to know that this is because of knowledge, that is, because of technoscience.

In other words, technology—or “technoscience”—opens up whole new avenues of attack for terrorists.  I don’t know about you, but I’ve heard just about every media pundit say exactly the same thing in fewer, less convoluted words. Come to that, common sense led me to that rather obvious conclusion without anyone’s help. I will grant, however, that Derrida says it in a portentous way, as if quoting from the Book of Deconstructionist Revelation. But that only fools the reader into thinking he’s saying something more profound than he actually is.

 It is technoscience that blurs the distinction between war and terrorism.

I didn’t know the distinction between war and terrorism was an especially clear one before the rise of anything remotely describable as technoscience.  Attackers at all times call it a just war, defenders an act of terrorism (or similar terms). For that matter, terrorism itself depends on your point of view: the French Resistance was a terrorist organization to the occupying Germans, freedom fighters to their sympathetic countrymen. All the same, let’s see why Derrida thinks things are different post-technoscience:

 In this regard, when compared to the possibilities for destruction and chaotic disorder that are in reserve, for the future, in the computerized networks of the world, ’September 11’ is still part of the archaic theater of violence aimed at striking the imagination. One will be able to do even worse tomorrow, invisibly, in silence, more quickly and without any bloodshed, by attacking the computer and informational networks on which the entire life (social, economic, military, and so on) of a ‘great nation,’ of the greatest power on earth, depends. One day it might be said: ‘September 11’—those were the (‘good’) old days of the last war. Things were still of the order of the gigantic: visible and enormous! What size, what height! There has been worse since. Nanotechnologies of all sorts are so much more powerful and invisible, uncontrollable, capable of creeping in everywhere. They are the micrological rivals of microbes and bacteria. Yet our unconscious is already aware of this; it already knows it, and that’s what’s scary.

We’re tacking again. Derrida now veers off into speculation about the potential for cyber-attacks and biotechnology in future terrorism, along with a remark on the respective aesthetics of the old and new forms of terrorism. He’s right that things like cyber-warfare and biotechnology will pose different risks for us in the future. But the risks themselves—i.e., the kinds of tactics and the severity of the existential threat—are not new.  Industrial sabotage and chemical and biological weapons have been used since ancient times. Greeks would throw diseased corpses into besieged cities, raze crops and pay spies and saboteurs to wreak havoc inside enemy lines and corrupt foreign politicians.

It may seem like we’re talking about wholly different magnitudes of threat when we compare razed crops and diseased corpses to cyber-warfare and biotech weapons. But we’re not when we take into account counter-measures: the Greeks didn’t have antibiotics to protect themselves from disease, and they lacked the capacity to store large quantities of food for long periods of time or have it shipped from distant lands. Sure, they didn’t depend on electrical grids. But the networked design of the grid makes it extremely difficult to keep large areas blacked out for the long periods of time necessary to inflict much beyond minor panic and inconvenience.

My point, at any rate, is that Derrida is not the person to consult on terrorist threats to our way of life. He doesn’t even provide what is within his ken, namely, the historical perspective I alluded to just now. Some will still no doubt find it interesting that the sage is worried about nanotechnology attacks. But I’m not sure that I should be because, for all I know, his fear might have been inspired by an episode of Star Trek.

If this violence is not a ‘war’ between states, it is not a ‘civil war’ either, or a ‘partisan war,’ in Schmitt’s sense, insofar as it does not involve, like most such wars, a national insurrection or liberation movement aimed at taking power on the ground of a nation-state (even if one of the aims, whether secondary or primary, of the ‘bin Laden’ network is to destabilize Saudi Arabia, an ambiguous ally of the United States, and put a new state power in place). Even if one were to insist on speaking here of ‘terrorism,’ this appellation now covers a new concept and new distinctions.

The argument of this last bit is largely undermined by the fact that Derrida’s own example puts terrorism into the partisan war category. Bin Laden’s attempt to destabilize Saudi Arabia raises the possibility that his efforts were really of a piece with conventional partisan war—i.e., an attempt by an internal group to destabilize its own regime. The fact that bin Laden chose to target Western nations might just be a result of the softness of the target. Maybe it was easier to strike against the real target’s allies as a means of fomenting a rebellion within the real target, Saudi Arabia.

But then we don’t even have to get that sophisticated, do we? On Derrida’s own trichotomy of war, this is a partisan war writ large. It’s not an attempt to take down one nation state from within, but to overturn a whole world order from within. It is, as many others have pointed out, a global jihad. Perhaps if Derrida had been less enamored of scoring rhetorical points against the Bush administration, he would have come to the obvious conclusion that everyone else has.

So what did we learn from any of this? First, Derrida admonished us not to accept the idea of 9/11 as an “event.” Then he proceeded to ignore his own advice and talk about it as if it were one. Next he told us he’d analyze our conceptual understanding of “war” and “terror.” On war, he first claimed that 9/11 and the subsequent invasions of Afghanistan and Iraq didn’t fit into Schmitt’s three categories of war. A best guess as to why he denied the fit is that he wanted to insinuate that the invasions had other motives—an inference reinforced by the fact that he later reintroduced Schmitt as if the usefulness of his schema had never been in doubt.

Much the same double-standard came to bear in his analysis of terrorism. The Bush administration engaged in an abuse of rhetoric in talking about a war on terrorism, since it couldn’t identify the terrorists and the countries it said were harboring them claimed they weren’t. But Derrida could say (presumably without abusing rhetoric) that the terrorists’ existence was not in doubt and that England and Germany were harboring them.  The incongruity seemed to have only one explanation: Derrida wanted to have his cake and eat it too. He wanted to claim the invasions were motivated by greed by casting doubt on the existence of a terrorist threat against which the US was retaliating, while simultaneously attributing the existence of terrorism to US foreign policy.

Finally, Derrida speculated on the future of terrorism. In pointing out the possible role of technology and biological weapons, he told us nothing we didn’t know and left out the historical context that might have given him something substantive to say. Worse, he didn’t even see the most obvious inference from Schmitt’s analysis: that contemporary terrorism is a partisan war against the world order.

So I ask you, why should I waste my time on more Derrida?

Writing and Realism

Realism is always an afterthought in writing advice: two thousand words on plot development, four thousand more on character development, and then a quick footnote on the importance of “doing your research.”  I guess most writers and writing instructors take realism for granted, as if it came naturally with close attention to characterization and plot.

But the single most common defect in fiction writing nowadays—especially with the rise of self-publishing—is the absence of realism.  In fact, most of the novels that fail to maintain the suspension of disbelief, in my experience, don’t fail because of typos and poor style; they fail because factual errors and inauthentic speech accumulate to a saturation point where you just can’t be bothered to go on.  Let me provide a downright incredible example of what I’m getting at.

Some time ago I was handed a short story to critique as part of an informal writing group. The author was not only a graduate student in the humanities, he’d also attended a summer writers’ workshop at a prestigious European university.  So his story had not only been vetted by that workshop and its blue-chip instructors, it was also the culmination of a whole summer’s effort in the program.

The story was a piece of historical fiction, which saw the eighteenth-century heroine turn radical feminist after a tryst gone wrong. Not my cup of tea, to be sure, but from a purely stylistic point of view, it was well-written (though it should’ve been after passing through so many hands).

Nonetheless, the story was replete—I mean from the first sentence to the last—with the kind of factual historical errors that would embarrass a first-year history student.  I’m not even an expert on this era, but it was immediately apparent to me that the details about sentiments, manners, clothing, pedigree and place were all so horribly anachronistic that even a powerful plot (which it didn’t have either by the way) couldn’t have saved it. The most cringe-worthy error saw the heroine inspired by the works of Mary Wollstonecraft a full half-century before the feminist activist had even been born—as the kids say, “epic fail.”

At any rate, the experience suggested two rules of thumb to me that have since been proven time and again: (1) one of the essential differences between professionals and amateurs is the level of realism; and (2) having your work vetted by a successful writer or a credentialed expert is no guarantee that you’ve satisfied the basic conditions for a good story.  Since the second point deserves a separate treatment, I won’t say more about it in this post beyond offering a cryptic preview of a future post: a fanatically pedantic editor with a broad knowledge base who understands grammar and the mechanics of narrative is far more useful to a writer than other writers or literary critics.

As for the first point—the importance of realism in writing—let’s begin with a postulate:

 Not every writer can be a Hemingway when it comes to prose style; but every writer can get his facts right.

Now, if you’re one of those practical people who prefers to focus on what he can do, instead of dreaming of what he can’t, let me offer a few general principles for getting the facts right.  In other words, let me suggest some categories for “doing your research,” which you should add to your checklist for improving character, plot and dialogue.

Wait a minute! You say this sets the bar on style too low. If that’s what you’re thinking, I have a two-word response: Herman Melville.  Melville was an amazing stylist, and part of the reason was his seamless integration of nautical terminology and seafarers’ brogue into his prose.  That one example should suffice to prove the connection between style and factual knowledge, which is really an extension of the principle of realism involved in avoiding cardboard characters, implausible plots and inauthentic dialogue.  The bottom line, then, is that attention to realism—to getting the facts right—is ultimately a way of improving your prose style.

The following are the four most neglected categories in recent fiction; they should serve as touchstones for evaluating your own writing.

1. Nomenclature

Every profession, trade, academic discipline, technical field, hobby and institution has its own nomenclature—its own nouns and verbs.  People who write medical dramas, police procedurals and legal thrillers tend to recognize this, which is why they study medical textbooks and police procedure handbooks, and why most successful writers of legal thrillers are also lawyers.  Most writers outside these genres recognize this too, and they wouldn’t think of trying to compete in those areas without rigorous research.  But for some reason the same sagacity doesn’t always carry over to other fields.

Now, it’s obviously impossible to give all the terminology of every field, but I think a few representative examples make the point. I’ll follow up with a few simple ways to overcome your knowledge deficit.

a. Hobbies and areas of expertise: art, history, fishing, woodworking, botany, etc.

Giving one’s protagonist a hobby or an area of expertise outside his immediate one is a nice way to fill out his character and provide all sorts of scene-setting fodder. That’s probably why it’s become a staple—dare I say a cliché?—in fiction and television drama over the last few years. Too bad the writer didn’t know the difference between Monet and Manet, Thales and Thucydides, spinnerbait and swimbait, flat-sawn and rift-sawn or the calyx and the corolla.

As someone knowledgeable about woodworking (and other more unusual things), I find it intolerably amateurish when an author draws on this field for effect without even bothering to find out if what he says is accurate. Wood is not “carved” on a lathe, for example; it’s turned. One finishes a piece of furniture when one applies a protective coating of varnish, oil or shellac.  And don’t assume that some fancy finish you’ve heard of, say, “French polish,” comes in one can (e.g., “Chuck opened a can of French polish…”), because, like most high-end finishes, this one involves several products and techniques. Moreover, a hammer and nails are almost never used in fine woodworking, much less a wrench.  And if you say so-and-so ran his hand over the “fine-grained [wood x],” you had better make sure the wood being stroked is actually fine-grained (maple is, for example; oak is not).

It’s also important to know that different species of wood were more commonly used in different eras and that they’re cut in different ways.  Chippendale’s furniture, for example, was usually made of Cuban mahogany, not oak or maple.  Shaker furniture makers preferred the simpler, unpretentious grains of maple and black cherry (shunning mahogany), while Mission and especially Arts & Crafts furniture are most commonly identified with quarter-sawn white oak.  Each part of a piece of furniture also has a name (e.g., cornice, pediment, etc.), and different eras used different techniques for construction and finishing.

As I mentioned at the outset, these particular facts are only meant to suggest the breadth and depth of the field—they’re hardly enough to write a scene. But if you make a mental note that your last trip to Ikea and the contents of dad’s rusty toolbox don’t add up to a knowledge-base about furniture and woodworking, then you’ve taken the first step toward avoiding embarrassment and adding richness and depth to your writing.

The upside to all this is how little research you actually have to do. You don’t need to undertake an apprenticeship to learn about woodworking—or gardening, or cooking or anything else.  All you need for brief references or a minor scene is an article from a hobbyist’s magazine. The good ones always use the correct nomenclature.  The same goes for more academic subjects, like art history—pick up a book and read it.

b. Construction

Every part of a house or a building has a proper name, and every process has a proper verb.  Carpenters frame houses when they erect the wooden structure out of which most houses are built. Walls are constructed from vertical studs nailed between horizontal plates (a top plate above, a bottom or sole plate below), with headers spanning door and window openings. The framing material is usually spruce, unless steel studding is used or some other more readily available local wood. Floor beams are called joists, and the roof is held up by rafters (ergo, “beams” belong in barns and timber-frame construction, unless it’s a steel I-beam, which carries the floor joists).

The outside is covered with plywood sheathing and then faced or clad with siding.  The inside is faced with drywall or sheetrock (though there are other regional names). Plaster and lath hasn’t been used for seventy years in the developed world.  And all of the wood parts are cut with miter saws, circular saws and even chainsaws, and they’re attached with pneumatic nail guns, or simply nailers. Hammering a house together with spikes (not nails) is a rare thing nowadays.

Foundations hold up houses and are constructed by pouring concrete into wooden forms that are removed when the concrete cures; or the foundation is made from concrete blocks that are laid with mortar (not with cement). “Cement” is a generic word for types of adhesive, or for the process of attaching materials together using those adhesives. In other words, foundations aren’t made of “cement,” and masons don’t use “cement”; they use mortar on blocks and bricks, each of which is made from different materials and used for different purposes.

Further, plumbers plumb houses (usually with copper and ABS piping), electricians wire houses, and the “electrical piping” in commercial and industrial facilities is not called “piping” but conduit, which is run when it’s installed.  And, like picture frames, doors are always hung.  Incidentally, house walls, doors and modern bathtubs are not bulletproof.  A brick façade, for example, will provide some protection, but the bricks will crumble like mud pies under repeated gunfire. Unlike the cast iron varieties of yesteryear, newer bathtubs are thin steel or fiberglass.

True, some artistic license must be allowed.  Shooting people with nail-guns, for example, was all the rage a few years ago when these tools became commonplace on jobsites. In fact, however, it’s very nearly impossible to do it without pressing the tip against someone, because these tools have a safety feature that prevents…you guessed it… accidentally shooting someone! All the same, there are always and without exception proper names for the materials, tools and procedures used by industry.  Realism requires knowledge of these domains when the story requires their introduction.

As before, you can correct this deficit with a few handyman magazines, which are usually written by people who know the business.

c. Architecture

Few things grate more than the careless use of architectural terms like “Doric,” as in “He admired the Doric columns of the…” when the building or monument that completes the sentence belongs to a wholly different style and era.  Doric refers to a very specific kind of column in the traditional Orders of Architecture, which have come down to us through the Roman engineer and architect Vitruvius. Types of columns are matched with types of styles.  Similarly, a Georgian mansion adorned with Art Deco furniture is an oddity that should be explained, because the terms refer to two wholly different styles, eras and tastes.  When these terms are confused by the author, they point to his amateurism and thus alienate the reader who knows better.  In the worst case, they break the suspension of disbelief and render the book unreadable.

2. Professions and professional idioms

On the face of it, this one’s a given because of the close connection between dialogue and the way people in various professions talk.  Yet I frequently read outlandish expressions and sentiments attributed to various professionals.  A college professor, for example, advises someone to “consult his works.” No one but a pompous ass would refer to his “works,” because the term is synonymous with classics or masterpieces.  He will say my research or my work (singular) in a particular field, but never “my works” (plural) when referring to his publications.  It’s a subtle distinction, yes, but it makes all the difference to someone in the know. (Of course, if the professor character is a pompous ass, then in all likelihood he would advise someone “to consult his many works on the subject.”)

The most common transgressions happen, however, when the author depicts soldiers and the military—just ask a soldier. Know the difference between grunts and jarheads, the kinds of things soldiers actually say to one another, and how they act.  Here I can only give two general rules for realism: (1) read soldiers’ memoirs and good field reporters, go to mess halls, and talk with actual soldiers; (2) don’t rely on Hollywood movies.

If you thought the recent crop of films about the military was realistic, it’s probably because they reinforced the false perceptions of soldiers and the military that you’ve already picked up from the same source; and if you share this anti-military view out of ideological conviction, then good for you. But if you’re the kind of writer who’s interested in things as they are—in realism rather than perpetuating axe-grinding stereotypes—then you should consider the fact that Hollywood’s view of the military has been hopelessly discombobulated by Vietnam-era anti-war films, which were, at bottom, wholly unrealistic.  If you think otherwise, consult the sources in rule (1) for an earful.

Much the same applies to any other profession: not only are there specialized terms used by professionals, but they learn to speak in a professional dialect, which includes speech patterns, manners and other idiosyncrasies unique to the field.  I’ll admit that this sort of language and understanding is hard to pick up unless you’ve been there.  But try sneaking into professional conferences and watching documentaries to see how they actually communicate with one another.

3. Detail as double-edged sword—or, fishing out of season

Every rhetorician and trial lawyer knows that precise detail makes speeches and testimony more persuasive.  “He was wearing a navy blue blazer marked with a St. John’s High School crest” is more persuasive than “It looked to me like a dark jacket, maybe black.”  Writers know too that concrete detail is preferable to abstraction—e.g., “Bill was fishing for largemouth bass in Chesapeake Bay” is better than “Bill was fishing in the bay”—at least when the detail is conducive to plot and character development.

Unlike rhetoricians and lawyers, however, writers have to check foreground details against the details of the background narrative. So if you write that Bill went fishing in Chesapeake Bay (foreground), you had better make sure you didn’t already establish, in the background narrative, that it’s mid-July, when the fishing season is closed. Obviously, a character can fish out of season, but it’s remarkable if he does, and so it must be noted.  It should go without saying that depicting your “straight-arrow cop” casually poaching fish is a defect in realistic writing (parody notwithstanding).

There are a thousand ways to fall into this sort of trap. Not all of them will be obvious, and many will probably be missed by the reader. But you can’t rely on that, because of the symmetry between your characters and your audience.  It’s not uncommon for readers to follow a series that depicts characters they identify with or aspire to be.  So creating a crime-solving cop who’s also a fishing enthusiast, for example, will likely attract readers who are also fishing enthusiasts.  But if you keep making these sorts of gross errors in front of an expert audience, you’ll lose readership: they’ll feel cheated, as if you’re pandering to them, which is even worse than not having the fishing angle at all—and that’s the sharpest part of the double edge.

4. Pseudo-knowledge

I downloaded a sample of a post-apocalyptic novel recently. It was not badly written, and the story was somewhat compelling.  But the writer insisted on inserting his pseudo-knowledge of global finance into the narrative. No doubt he wanted to heighten the dramatic tension by grounding the situation in historical precedents of financial collapse.  It’s a good strategy; and if his information dumps had been accurate, he might well have strengthened the realism of the novel.

Unfortunately, what wasn’t pseudo-history was pseudo-finance. Hence, his plan backfired in the worst possible way: placed so prominently in the narrative, the pseudo-historical material proved catastrophic to the suspension of disbelief instead of strengthening it. Indeed, he would have been better off not mentioning historical precedents at all.  I might well have kept reading.

There’s a simple and straightforward lesson here that needn’t be belabored:

 Pseudo-knowledge cannot reinforce realism; so get your history/economics/religion/science/engineering (etc.) right.

Now, there’s one big exception to this rule.  Unfortunately for rule-rejectionists, it’s one of those exceptions that prove the rule. I’m talking about Dan Brown’s The Da Vinci Code here, of course, a perfect example of a whole narrative concocted out of pseudo-history.  But Brown’s book reinforces my case for the simple reason that only a minuscule number of people are sufficiently conversant with the historical material on which the book was based to be completely turned off by it. Like many others in that smallish circle, I couldn’t bear reading it. But the vast majority of people had no problem with it, because the gross errors and exaggerations didn’t kick them in the face.  So, no, Brown’s book isn’t an exception; it’s just that the vast majority of his readers couldn’t tell. The rule stands.

Spoilers Make Books Better?

The National Post carried a report last week on research into the effects of spoilers on audience enjoyment.  It turns out people enjoy books more when they know how they end.  It seems to me that this might have a lot to do with the type of story, despite the claims made in the Post story.

The Anatomy of Contemporary Atheism

When I tell people I’ve never met an atheist, they tend to pause awkwardly for the punch-line of a bad joke, or perhaps they’re waiting for a confession of a cloistered life. But I’m being serious when I say it (as serious as I can be, anyway), and I’m not such a recluse that I haven’t met people who’ve claimed to be atheists.  What I have not met, I maintain, is the genuine article.  I say this because a little probing of my subject always reveals a different animal hiding beneath the atheist’s cloak—a sheep in wolf’s clothing, so to speak.

I usually find myself conversing with a sentimental progressive who sees disbelief in God as a necessary condition for his faith in Progress and its earthly realization—or, to borrow an older term, its immanentization.  The equality of human beings, human rights, even animal rights all remain in place within my interlocutor’s godless cosmos. Justice still abides without God, it seems—indeed, it’s all the healthier for His demise!

Such nominal atheists have never read Carl Becker’s delightful satire, The Heavenly City of the Eighteenth-Century Philosophers. If they had, the smart ones would recognize that their atheism substitutes Progress for God just as Voltaire’s substituted Reason—everything else remaining conveniently unchanged.  Apparently, theology is the one domain where you can kick off the head of the organism and expect the body to live on.

Reading James Wood’s review of The Joy of Secularism in the New Yorker reminded me of all this and further reinforced my view that the so-called “New Atheists,” whatever else might be said of them, resist the real implications of their creed for themselves and for the world in which they live. From what I gather, few of the contributors to the book Wood reviewed have faced those implications either (the preternaturally cautious philosopher Charles Taylor apparently being the one partial exception).

Only Nietzsche and Plato, it seems to me, really appreciated what it means for a person to reject a transcendent order. Both philosophers realized that ennui and despair are the only psychological responses, and cultural dissolution the only social result. Indeed, even language, and the shared experience and understanding embodied by it, will crumble into individualized bits of dust in the wake of the departing gods. To be sure, people carry on once disenchanted, and order returns as it always must, but minds can be reconstituted in strange forms and order can become a collective abomination—history and anthropology show what awaits the momentarily godless.

At any rate, my present concern is not to defend the psychological and cultural implications of God’s demise, since I take Plato’s and Nietzsche’s understanding of the world without God to be the truest one.  Instead, I want to try to understand why the New Atheists have resisted what should be obvious.  What follows, then, are some hypotheses that may explain how the New Atheists have been able to remain atheists.

The psychology of atheism

1. Material comfort. The simplest explanation is also the oldest one: contentment breeds complacency.  Like Hitchens, Dawkins, Harris and Dennett, James Wood enjoys his own and his society’s relative health, wealth and freedom (granted, Hitchens isn’t doing so well on the health front these days, but until recently he enjoyed a healthy life with all the worldly fortune one could hope for). Under such conditions, talk of God, the afterlife and rationality in the cosmos seems as superfluous as the things themselves. Who needs to wrestle with these otherworldly concerns when the sun shines so brightly?

2. Nemesis. As the ancients realized, Nemesis is a wonderful thing. Fighting an enemy gives life meaning. The concentration required to achieve a goal, together with the thrill and immediacy of battle, keeps away the bigger and harder questions about the point of it all that invite reflection and despair.  Indeed, a battle is a journey to a destination, and, as every writer since Homer knows, that is the most powerful and perennial narrative structure.  In simpler terms, atheist militancy is fueled by the same anima that fuels sports and video games—the thrill of ersatz battle.

3. Atheism as de facto religion.  This seems like the most widely accepted explanation among critics of New Atheism, since virtually every critic of Dawkins, Harris and Hitchens points out that these fellows all sound like old-fashioned Bible-thumpers—and they do, no doubt.

But as I mentioned before, my experience of reading and talking to New Atheists is that beneath the surface one finds pantheism (e.g., nature as the source and cause of a self-perpetuating wonder), animism (e.g., “we all join in the cycle of life”), and, most often, old-fashioned Enlightenment Progressivism.  It’s not atheism that’s the religion, then, but the belief in Progress, which provides purpose and a bulwark against despair, filling the gap left by God and the Cosmos in the atheist’s psyche. Atheism is just the tip of the mental iceberg, jutting out of the mass of beliefs that make up an individual psyche, because (presumably) it’s the fashionable face of a more complex set of beliefs.

4. Serotonin. My preferred hypothesis (for the time being, at least) is genetic. Much like the rest of the animal kingdom, most people are wired to be happy with their circumstances, at least once the bottom of Maslow’s hierarchy of needs has been met (as per 1). Most of us don’t feel the need for deeper explanations—we don’t suffer from existential angst—because we’ve been born with the optimal balance of the “happy-brain chemical,” which keeps us contented once our basic material needs have been met. To borrow a more recent framework, James Wood and other atheists are “dandelions” among religiously minded “orchids,” not so much clear-minded and self-aware rationalists (unless, of course, one takes the latter as a euphemism for the former).

At any rate, your average atheist likely embodies some combination of several or all of these things. It’s not easy to say for certain without cracking open skulls and reading the brains within.  All I can say for certain is that it is not Reason, on her own divine account, that sustains them, but a combination of transcendence-mimicking false belief and brain chemistry.

Resisting the cultural implications

The psychological hypotheses can explain how human beings can believe in atheism without suffering its psychic consequences. Moreover, number two above may explain the stridency of the New Atheists. But the psychological considerations don’t explain the resistance to the dire implications for us all, especially should Christianity lose its influence on the Western world entirely.  So what beliefs prevent the atheist from seeing these consequences?  I tend to think there is really only one major answer to this question: the belief in human progress.

A favorite pejorative among conservatives for left-liberals who have adopted environmentalism as a way of furthering their political agenda is “watermelon environmentalists” (because watermelons are green on the outside, red on the inside).  As I mentioned, I’ve noticed time and again a similar phenomenon among atheists: their atheism is either a conscious or an unconscious front for what would otherwise be characterized as progressivism.

Which is not to say that these progressives have deliberately donned the atheist disguise as a way of forwarding their agenda (though it’s entirely possible); rather, most seem to suffer under the illusion that Progress is possible in an indifferent universe in which living things are subject to evolutionary forces.  Diseases, man-made disasters, asteroids and a thousand other variables can destroy human life in its current state or extinguish it altogether. Whatever the case, it’s worth noting that two kinds of ignorance underlie the belief in divine Progress, making it seem rationally possible.

1. Philosophical ignorance.

There’s no better example than the pseudo-debate that goes by the title “religion versus science,” which has spawned a number of unfortunate tropes or commonplaces that are either pernicious half-truths or outright falsehoods.  One of the former is the idea that there is something called “religious belief,” which is supposed to be qualitatively different from “secular belief.” I’ll get into this false dichotomy another day. Suffice it to say for now that religious beliefs are supposedly sectarian and absolutist, while secular beliefs are neutral and inclusive. Obviously, the former lead to oppression and war, while the latter lead to universal happiness for all. This is nonsense, of course, because we all have “religious beliefs” insofar as we have fundamental values—truths we hold to be absolute—like the belief that others should be treated fairly and equally regardless of their particulars.  And we’re all willing to use force to defend and extend these fundamental beliefs—or at least to resist the imposition of the beliefs of others.  In a word, then, we are all fundamentalists of one kind or another.

For reasons I can’t really explain (beyond blaming the education system), we seem to have collectively forgotten the origin of the term “religion.” It comes from the Latin “religio,” which refers to the ties that bind us as individuals, including our obligations to family, the state, the gods and our friends.  The term “religion” was coined for the academic study of these phenomena, though its focus was human faiths, rituals, cults, institutions, the afterlife, souls, cosmologies, and so on.  In other words, “religion” names an academic discipline that artificially and roughly broke off part of the human condition for study; it is not a logical or scientific term for certain kinds of beliefs.  In fact, what does and does not fall under the purview of religion is an ongoing controversy in religious studies.

In any case, the salient aspect of the false dichotomy between “religious and secular belief” is the possibility that religion can be transcended: that we are all of us moving toward a secular End Times when the evil forces of religion that have caused all human misery will wither and die, leaving only the happy, loving rationalists behind.  It’s this false belief above all else that provides the intellectual scaffolding for the atheist version of Progress.  Kick it away (along with a few others I’ll discuss another time), and the rest falls apart.

2. Historical and anthropological ignorance.

It’s not often that one can make generalizations about human nature.  But “religion” is one of those cases: no culture, anywhere, has ever built its collective spiritual house on the shifting sands of the immanent world.  The reason would be plain as day to anyone who lived before modern times, and to many who live outside the West: the immanent world is capricious and fickle. Droughts and cold spells kill crops; locusts come from the sky without warning and devour one’s harvest; pestilence strikes oneself, one’s family and one’s livestock.  The sea is intemperate, rising up to drown those who’d make a living off of it; and mountains can burst and bury a whole civilization under molten rock.  Even loved ones, friends and brothers can turn against you in this world.

Now, it’s natural that we yearn to escape the tumult of our native land for tranquil shores (we wish for that in this life too, after all). But there’s a deeper reason than emotional solace behind the invention of the transcendent.  All anthropological evidence suggests that we are hardwired to order our thoughts, our lives and our societies in accordance with a permanent, unchanging order of things.  This is not the old atheist saw that says “things look designed to us, so we falsely infer that they are,” as if our mental architecture were a series of careful inferences that might be judged true or false on the basis of a decisive experiment.  It’s that we must live life in accordance with an understanding of its meaning and purpose, and the only constant source of purpose is a Cosmos—a rational order.  A world that evolved and remains forever in flux cannot offer us that; only a transcendent order can.

It’s this last point, I think, that explains why atheists—in spite of themselves—fall into what can only be described as ersatz religions like Progress or animism.  Like the rest of us human beings, they really can’t help but believe in the gods.