The circumstances in which we laugh are many and varied, but, deep down, we laugh for one (or sometimes several) of just seven reasons.

We laugh:

1. To feel better about ourselves. When looking for romance on dating sites and apps, we often ask for, or promise to offer, a good sense of humour (GSOH). Today, we tend to think of laughter as a good thing, but, historically, this has not always been the case. In particular, the Church looked upon laughter as a corrupting and subversive force, and for centuries, the monasteries forbade it. This notion that laughter can be less than virtuous finds an echo in the superiority theory of laughter, according to which laughter is a way of lifting ourselves up by putting others down. The superiority theory is most closely linked with the philosopher Thomas Hobbes, who conceived of laughter as “a sudden glory arising from sudden conception of some eminency in ourselves, by comparison with the infirmity of others, or with our own formerly.” Think of medieval mobs jeering at people in stocks, or, in our time, Candid Camera.

2. To relieve stress and anxiety. Clearly, the superiority theory is unable to account for all cases of laughter, such as laughter arising from relief, surprise, or joy. According to the relief theory of laughter, most often associated with Sigmund Freud, laughter represents a release of pent-up nervous energy. Like dreams, jokes are able to bypass our inner censor, enabling a repressed emotion such as xenophobia (or, at least, the nervous energy associated with the repression) to surface—explaining why, at times, we can be embarrassed by our own laughter. By the same token, a comedian might raise a laugh by conjuring some costly emotion, such as admiration or indignation, and then suddenly killing it. Although more flexible than the superiority theory, the relief theory is unable to account for all cases of laughter, and those who laugh hardest at offensive jokes are not generally the most repressed of people.

3. To keep it real. Much more popular today is the incongruity theory of laughter, associated with the likes of Immanuel Kant and Søren Kierkegaard, according to which the comedian raises a laugh, not by conjuring an emotion and then killing it, but by creating an expectation and then contradicting it. Building upon Aristotle, Kierkegaard highlighted that the violation of an expectation is the core not only of comedy but also of tragedy—the difference being that, in tragedy, the violation leads to significant pain or harm. Possibly, it is not the incongruity itself that we enjoy, but the light that it sheds, in particular, on the difference between what lies inside and outside our heads. The incongruity theory is arguably more basic than the relief and superiority theories. When someone laughs, our inclination is to search for an incongruity; and though we may laugh for superiority or relief, even then, it helps if we can pin our laughter on some real or imagined incongruity.

4. As a social service. According to the philosopher Henri Bergson, we tend to fall into patterns and habits, to rigidify, to lose ourselves to ourselves—and laughter is how we point this out to one another, how we up our game as a social collective. For example, we may laugh at one who falls into a hole through absentmindedness, or at one who constantly repeats the same gesture or phrase. Conversely, we may also laugh at, or from, an unusual or unexpected lack of rigidity, as, for instance, when we break a habit or have an original idea. Ultimately, says Bergson, we are laughable to the extent that we are a machine or an object, to the extent that we lack self-awareness, that we are invisible to ourselves while being visible to everyone else. Thus, the laughter of others usually draws attention to our unconscious processes, to our modes or patterns of self-deception, and to the gap, or gulf, between our fiction and the reality. This gap is narrowest in poets and artists, who have to transcend themselves if they are to be worthy of the name.

5. To put others at ease. Another way of understanding laughter is to look at it like a biologist or anthropologist might. Human infants are able to laugh long before they can speak. Laughter involves parts of the brain that are, in evolutionary terms, much older than the language centres, and that we share with other animals. Primates, in particular, produce laughing sounds when playfighting, play-chasing, or tickling one another. As with human children, it seems that their laughter functions as a signal that the danger is not for real—which may be why rictus characters such as Batman’s Joker, who send a misleading signal, are so unsettling.

6. For diplomacy. Most laughter, even today, is not directed at jokes, but at creating and maintaining social bonds. Humour is a social lubricant, a signal of contentedness, acceptance, and belonging. More than that, it is a way of communicating, of making a point emphatically, or conveying a sensitive message without incurring the usual social costs. At the same time, humour can also be a weapon, a sublimated form of aggression, serving, like the stag’s antlers, to pull rank or attract a mate. The subtlety and ambiguity involved are in themselves a source of almost endless stimulation.

7. To transcend ourselves. Laughter may have begun as a signal of play, but it has, as we have seen, evolved a number of other functions. Zen masters teach that it is much easier to laugh at ourselves once we have transcended our ego. At the highest level, laughter is the sound of the shattering of the ego. It is a means of gaining (and revealing) perspective, of rising beyond ourselves and our lives, of achieving a kind of immortality, a kind of divinity. Upon awakening on her deathbed to see her entire family around her, Nancy Astor quipped, “Am I dying, or is this my birthday?”

Today, laughter is able to give us a little of what religion once did.

The five enemies of rational thought.

Following his defeat at the Battle of Actium in 31 BCE, Marc Antony heard a rumour that Cleopatra had committed suicide and, in consequence, stabbed himself in the abdomen—only to discover that Cleopatra herself had been responsible for spreading the rumour. He later died in her arms.

“Fake news” is nothing new, but in our Internet age it has spread like a contagious disease, swinging elections, fomenting social unrest, undermining institutions, and diverting political capital away from health, education, the environment, and all-round good government.

So how best to guard against it?

As a medical specialist, I’ve spent over 20 years in formal education. With the possible exception of my two-year master’s in philosophy, the emphasis of my education has always been firmly and squarely on fact accumulation.

Today, I have little use for most of these facts, and though I am only middle-aged, many are already out of date, or highly questionable.

But what I do rely on—every day, all the time—is my faculty for critical thinking. As BF Skinner once put it, “Education is what survives when what has been learnt has been forgotten.”

But can critical thinking even be taught?

In Plato’s Meno, Socrates says that people with wisdom and virtue are very poor at imparting those qualities: Themistocles, the Athenian politician and general, was able to teach his son Cleophantus skills such as standing upright on horseback and shooting javelins, but no one ever credited Cleophantus with anything like his father’s wisdom; and the same could also be said of Lysimachus and his son Aristides, and Thucydides and his sons Melesias and Stephanus.

In Plato’s Protagoras, Socrates says that Pericles, who led Athens at the peak of its golden age, gave his sons excellent instruction in everything that could be learnt from teachers, but when it came to wisdom, he simply left them to “wander at their own free will in a sort of hope that they would light upon virtue of their own accord”.

It may be that wisdom and virtue cannot be taught, but thinking skills certainly can—or, at least, the beginning of them.

So rather than leaving thinking skills to chance, why not make more time for them in our schools and universities, and be more rigorous and systematic about them?

I’ll make a start by introducing you to what I have called “the five enemies of rational thought”:

1. Formal fallacy. A fallacy is some kind of defect in an argument. A formal fallacy is a deductive argument with an invalid form, for example:

Some A are B. 
Some B are C. 
Therefore, some A are C.

If you cannot yet see that this argument is invalid, substitute A, B, and C with “insects”, “herbivores”, and “mammals”.

Insects, clearly, are not mammals.

A formal fallacy is built into the structure of an argument and is invalid irrespective of the content of the argument.
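As a quick sanity check (not part of the original argument), the invalidity of this form can be demonstrated mechanically with a counterexample, here sketched in Python using the same insects/herbivores/mammals substitution; the example sets are, of course, made up for illustration:

```python
# Counterexample to the form "Some A are B; some B are C; therefore some A are C":
# both premises come out true while the conclusion comes out false.
insects = {"bee", "beetle"}
herbivores = {"beetle", "cow"}
mammals = {"cow", "dog"}

some_a_are_b = bool(insects & herbivores)   # True: the beetle is an insect and a herbivore
some_b_are_c = bool(herbivores & mammals)   # True: the cow is a herbivore and a mammal
some_a_are_c = bool(insects & mammals)      # False: no insect is a mammal

print(some_a_are_b, some_b_are_c, some_a_are_c)  # True True False
```

Since true premises can lead to a false conclusion, the form itself is invalid, whatever A, B, and C happen to be.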

2. Informal fallacy. An informal fallacy, in contrast, is one that can only be identified through an analysis of the content of the argument.

Informal fallacies often turn on the misuse of language, for example, using a key term or phrase in an ambiguous way, with one meaning in one part of the argument and another meaning in another part—the so-called fallacy of equivocation.

Informal fallacies can also distract from the weakness of an argument, or appeal to the emotions instead of reason.

Here are a few more examples of informal fallacies.

  • Damning the alternatives. Arguing in favour of something by damning its alternatives. (Tim’s useless and Bob’s a drunk. So, I’ll marry Jimmy. Jimmy’s the right man for me.)
  • Gambler’s fallacy. Assuming that the outcome of one or more independent events can impact the outcome of a subsequent independent event. (June is pregnant with her fourth child. Her first three children are all boys, so this time it’s bound to be a girl.)
  • Appeal to popularity. Concluding the truth of a proposition on the basis that most or many people believe it to be true. (Of course he’s guilty: even his mother has turned her back on him.)
  • Argument from ignorance. Upholding the truth of a proposition based on a lack of evidence against it, or the falsity of a proposition based on a lack of evidence for it. (Scientists haven’t found any evidence of current or past life on Mars. So, we can be certain that there has never been any life on Mars.)
  • Argument to moderation. Arguing that the moderate view or middle position must be the right or best one. (Half the country favours leaving the European Union, the other half favours remaining. Let’s compromise by leaving the European Union but remaining in the Customs Union.)
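The independence at the heart of the gambler’s fallacy is easy to verify empirically. Here is a minimal simulation (my own illustration, with an arbitrary sample size): among simulated four-child families whose first three children are boys, the fourth child is a girl only about half the time, not “bound” to be one.

```python
import random

# Simulate many four-child families; each birth is an independent 50/50 event.
# Among families whose first three children are boys, how often is the fourth a girl?
random.seed(0)
girls_after_three_boys = 0
families = 0
for _ in range(100_000):
    children = [random.choice("BG") for _ in range(4)]
    if children[:3] == ["B", "B", "B"]:
        families += 1
        if children[3] == "G":
            girls_after_three_boys += 1

# Prints a value close to 0.5: the first three births tell us nothing about the fourth.
print(round(girls_after_three_boys / families, 2))
```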

You can find many more examples in Hypersanity: Thinking Beyond Thinking.

3. Cognitive bias. Cognitive bias is sloppy, if not necessarily faulty, reasoning: a mental shortcut or heuristic intended to spare us time, effort, or discomfort—often while reinforcing our self-image or worldview—but at the cost of accuracy or reliability.

For example, in explaining the behaviour of other people, our tendency is to overestimate the role of character traits over situational factors—a bias, called correspondence bias, that goes into reverse when it comes to explaining our own behaviour. Thus, if Charlotte fails to mow the lawn, I accuse her of forgetfulness, laziness, or spite; but if I fail to mow the lawn, I absolve myself on the grounds of busyness, tiredness, or inclement weather.

Another important cognitive bias is my-side, or confirmation, bias, which is the propensity to search for or recall only those stories, facts, and arguments that are in keeping with our pre-existing beliefs while filtering out those that conflict with them—which, especially on social media, can lead us to inhabit a so-called echo chamber.

4. Cognitive distortion. Cognitive distortion is a concept from cognitive-behavioural therapy (CBT), developed by psychiatrist Aaron Beck in the 1960s and used in the treatment of depression and other mental disorders.

Cognitive distortion involves interpreting events and situations so that they conform to and reinforce our outlook or frame of mind, typically on the basis of very scant or partial evidence, or even no evidence at all.

Common cognitive distortions in depression include selective abstraction and catastrophic thinking.

Selective abstraction is to focus on a single and often insignificant negative event or condition to the exclusion of other, more positive ones, for example, “My partner hates me. He gave me an annoyed look three days ago.”

Catastrophic thinking is to exaggerate and dramatize the likely consequences of an event or situation, for example, “The pain in my knee is getting worse. When I’m reduced to a wheelchair, I won’t be able to go to work and pay the bills. So, I’ll end up losing my house and dying in the street.”

A cognitive distortion can open up a vicious circle, with the cognitive distortion feeding the depression, and the depression the cognitive distortion.

Cognitive distortion as broadly understood is not limited to depression and other mental disorders, but is also a feature of, among others, poor self-esteem, jealousy, and marital conflict.

5. Self-deception. Of the five enemies of rational thought, the most important by far is self-deception, because it tends to underlie all the others.

If we do not think clearly, if we cannot see the wood for the trees, this is not usually because we lack intelligence or education or experience, but because we feel exposed and vulnerable—and rather than come to terms with a painful truth, prefer, almost reflexively, to deceive and defend ourselves.

As I argue in Hide and Seek: The Psychology of Self-Deception, all self-deception can be understood in terms of ego defence. In psychoanalytic theory, an ego defence is one of several unconscious processes that we deploy to defuse the fear and anxiety that arise when who or what we truly are (our unconscious “id”) comes into conflict with who we think we are or who we think we should be (our conscious “superego”).

To put some flesh onto this, let’s take a look at two important ego defences: projection and idealization.

Projection is the attribution of one’s unacceptable thoughts and feelings to other people. This necessarily involves repression (another ego defence) as a first step, since unacceptable thoughts and feelings need to be repudiated before they can be detached. Classic examples of projection include the envious person who believes that everyone envies her, the covetous person who lives in constant fear of being dispossessed, and the person with fantasies of infidelity who suspects that they are being cheated on by their partner.

Idealization involves overestimating the positive attributes of a person, object, or idea while underestimating its negative attributes. At a deeper level, it involves the projection of our needs and desires onto that person, object, or idea. A paradigm of idealization is infatuation, or romantic love, when love is confused with the need to love, and the idealized person’s negative attributes are glossed over or even construed as positive. Although this can make for a rude awakening, there are few better ways of relieving our existential anxiety than by manufacturing something that is ‘perfect’ for us, be it a piece of equipment, a place, country, person, or god.

In all cases, the raw material of thought is facts. If the facts are missing, or worse, misleading, then thought cannot even get started.

The word ‘magic’ derives, via the Latin, the Greek, and the Old Persian, from the Proto-Indo-European magh, ‘to help, to be able, to be powerful’, from which also derive the words ‘almighty’, ‘maharaja’, ‘main’, ‘may’, and… ‘machine’. We come full circle with Clarke’s Third Law, which states: ‘Any sufficiently advanced technology is indistinguishable from magic.’

Magic, like religion, is deeply embedded into the human psyche. Though it has, effectively, been banished from the land, still it surfaces in thought and language, in phrases such as ‘I must be cursed’ and ‘He’s under your spell’; in children’s stories and other fiction; and in psychological processes such as undoing, which involves thinking a thought or carrying out an act in an attempt to negate a previous, uncomfortable thought or act.

Examples of undoing include the absent father who periodically returns to spoil and smother his children, and the angry wife who throws a plate at her husband and then tries to ‘make it up’ by smothering him in kisses. The absent father and angry wife are not merely trying to make amends for their behaviour, but also, as if by magic, to ‘erase it from the record’.

Another example of undoing is the man who damages a friend’s prospects and then, a few days later, turns up at his door bearing a small gift. Rituals such as confession and penitence are, at least on some level, socially condoned and codified forms of undoing.

‘Magic’ is difficult to define, and its definition remains a matter of debate and controversy. One way of understanding it is by comparing and contrasting it to religion on the one hand and to science on the other.

Historically, the priest, the physician, the magician, and the scholar might have been one and the same person: the shaman, the sorcerer.

In the West, pre-Socratics such as Pythagoras and Empedocles moonlighted as mystics and miracle workers—or perhaps, since the term ‘philosophy’ is held to have been invented by Pythagoras, moonlighted as philosophers. Pythagoras claimed to have lived four lives and to remember them all in great detail, and once recognized the cry of his dead friend in the yelping of a puppy. After his death, the Pythagoreans deified him, and credited him with a golden thigh and the gift of bilocation.

In Plato’s Phaedrus, Socrates argues that there are, in fact, two kinds of madness: one resulting from human illness, but the other arising from a divinely inspired release from normally accepted behaviour. This divine form of madness, says Socrates, has four parts: love, poetry, inspiration, and mysticism, which is the particular gift of Dionysus.

While Socrates, in some sense the father of logic, seldom claimed any real knowledge, he did claim to have a daimonion or ‘divine something’, an inner voice or intuition that prevented him from making grave mistakes such as getting involved in politics, or fleeing Athens: ‘This is the voice which I seem to hear murmuring in my ears, like the sound of the flute in the ears of the mystic…’

Far from being a thing of the distant past, this trope of the philosopher-sorcerer outlived the sack of Athens and the fall of Rome, and perdured well into the Enlightenment. The economist John Maynard Keynes, upon buying a trove of Isaac Newton’s papers, observed that Newton and the physicists of his time were ‘not the first of the scientists, but the last of the sorcerers’. Other notable later occultists include Giordano Bruno, Nostradamus, Paracelsus, Giovanni Pico della Mirandola, and, yes, Arthur Conan Doyle, the father of Sherlock Holmes.

Yet since antiquity, the West has had an uncomfortable relationship with magic, usually regarding it as something foreign and ‘Eastern’. In Plato’s Meno, Meno compares Socrates to the flat torpedo fish, which torpifies or numbs all those who come near it: ‘And I think that you are very wise in not [leaving Athens], for if you did in other places as you do in Athens, you would be cast into prison as a magician.’

For the Greeks, as for the Romans, magic represented an improper and potentially subversive expression of religion. After centuries of legislation against it, in 357 CE, the Christian Roman emperor Constantius II finally banned it outright:

No one shall consult a haruspex, a diviner, or a soothsayer, and wicked confessions made to augurs and prophets must cease. Chaldeans, magicians, and others who are commonly called malefactors on account of the enormity of their crimes shall no longer practice their infamous arts.

The Bible, too, inveighs against magic, in more than a hundred places, for example, picked almost at random:

  • Thou shalt not suffer a witch to live. —Exodus 22:18 (KJV)
  • Regard not them that have familiar spirits, neither seek after wizards, to be defiled by them: I am the Lord your God. —Leviticus 19:31 (KJV)
  • But the fearful, and unbelieving, and the abominable, and murderers, and whoremongers, and sorcerers, and idolaters, and all liars, shall have their part in the lake which burneth with fire and brimstone: which is the second death. —Revelation 21:8 (KJV)

Early Christians, perhaps unconsciously, associated magic with mythopoeic thought, in which all of nature is full of gods and spirits, and therefore with paganism and, by extension, with demons. During the Reformation, Protestants accused the Church of Rome, with its superstitions, relics, and exorcisms, of being more magic than religion—a charge that transferred all the more to non-Christian peoples, and that, notoriously, served as a justification for large-scale persecution, colonization, and Christianization.

Today, magic, like mythopoeic thought, is seen as ‘primitive’, and has largely been relegated to fiction and illusionism. But as a result, people have come to associate magic with delight and wonder; and with the retreat of Christianity, at least from Europe, a growing number are returning to some form of paganism as a path to personal and spiritual development.

So, what exactly is the difference between magic and religion? It is often held that magic is older than religion, or that religion was born out of magic, but it may be that they co-existed, and were not distinguished.

Both magic and religion pertain to the sacred sphere, to things removed from everyday life. But, compared to religion, magic does not split so sharply between the natural and the supernatural, the earthly and the divine, the fallen and the blessed. And whereas magic involves harnessing the world to the will, religion involves subjugating the will to the world. In the words of the anthropologist Claude Lévi-Strauss (d. 2009), ‘religion consists in a humanization of natural laws, and magic in a naturalization of human actions.’

Hence, magic tends to be about specific problems, and to involve private rites and rituals. Religion, in contrast, tends towards the bigger picture, and to involve communal worship and belonging. ‘Magic,’ said the sociologist Emile Durkheim (d. 1917), ‘does not result in binding together those who adhere to it, nor in uniting them into a group leading a common life. There is no Church of magic.’

So, one hypothesis is that, as man gained increasing control over nature, magic, as it came to be called, lost ground to religion, which, being communal and centralized, evolved a hierarchy that sought to suppress those practices that threatened its dogma and dominance.

But now religion is, in its turn, on the decline—in favour of science. What is science? Within academia, there are, in fact, no clear or reliable criteria for distinguishing a science from a non-science. What might be said is that all sciences share certain assumptions which underpin the scientific method—in particular, that there is an objective reality governed by uniform laws, and that this reality can be discovered by systematic observation.

But, as I argue in my book, Hypersanity: Thinking Beyond Thinking, every scientific paradigm that has come and gone is now deemed to have been false, inaccurate, or incomplete, and it would be ignorant or arrogant to assume that our current ones might amount to the truth, the whole truth, and nothing but the truth.

The philosopher Paul Feyerabend (d. 1994) went so far as to claim that there is no such thing as ‘a’ or ‘the’ scientific method: behind the facade, ‘anything goes’, and, as a form of knowledge, science is no more privileged than magic or religion.

More than that, science has come to occupy the same place in the human psyche as religion once did. Although science began as a liberating movement, it grew dogmatic and repressive, more of an ideology than a rational method that leads to ineluctable progress.

To quote Feyerabend:

Knowledge is not a series of self-consistent theories that converges toward an ideal view; it is rather an ever increasing ocean of mutually incompatible (and perhaps even incommensurable) alternatives, each single theory, each fairy tale, each myth that is part of the collection forcing the others into greater articulation and all of them contributing, via this process of competition, to the development of our consciousness.

A common trope in fantasy fiction is the ‘thinning’ of magic: magic is fading, or has been banished, from the land, which is caught in a perpetual winter or otherwise in deathly or depressive decline, and the hero is called upon to rescue and restore the life-giving forces of old.

It is easy to draw the parallel with our own world, in which magic has been progressively driven out, first by religion, which over the centuries, became increasingly repressive of magic, and latterly by science with its zero tolerance.

When we read fantasy fiction, it is for the side of the old magic, always, that we root, for a time when the world, when life, had meaning in itself.

In my next article, I will look at the psychology and philosophy of magic.