On October 30, 1938, Orson Welles broadcast an episode of the radio drama Mercury Theatre on the Air. This episode, entitled The War of the Worlds and based on a novel by HG Wells, suggested to listeners that a Martian invasion was taking place. In the charged atmosphere of the days leading up to World War II, many people missed or ignored the opening credits and mistook the radio drama for a news broadcast. Panic ensued and people began to flee, with some even reporting flashes of light and a smell of poison gas. This panic, a form of mass hysteria, is one of the many forms that anxiety can take.
Mass hysteria can befall us at almost any time. In 1989, 150 children took part in a summer programme at a youth centre in Florida. Each day at noon, the children gathered in the dining hall to be served pre-packed lunches. One day, a girl complained that her sandwich did not taste right. She felt nauseated, went to the toilet, and returned saying that she had vomited. Almost immediately, other children began experiencing symptoms such as nausea, abdominal cramps, and tingling in the hands and feet. With that, the supervisor announced that the food might be poisoned and that the children should stop eating. Within 40 minutes, 63 children were sick and more than 25 had vomited.
The children were promptly dispatched to one of three hospitals, but every test performed on them was negative. Meal samples were analyzed but no bacteria or poisons could be found. Food processing and storage standards had been scrupulously maintained and no illness had been reported from any of the other 68 sites at which the pre-packed lunches had been served.
However, there had been in the group an atmosphere of tension, created by the release two days earlier of a newspaper article reporting on management and financial problems at the youth centre. The children had no doubt picked up on the staff’s anxiety, and this had made them particularly suggestible to the first girl’s complaints. Once the figure of authority had announced that the food might be poisoned, the situation simply spiralled out of control.
Mass hysteria is relatively uncommon, but it does provide an alarming insight into the human mind and the ease with which it might be influenced and even manipulated. It also points to our propensity to somatize, that is, to convert anxiety and distress into more concrete physical symptoms. Somatization, which can be thought of as an ego defence, is an unconscious process, and people who somatize are, almost by definition, unaware of the psychological origins of their physical symptoms.
As I discuss in The Meaning of Madness, psychological stressors can lead to physical symptoms not only by somatization, which is a psychic process, but also by physical processes involving the nervous, endocrine, and immune systems. For example, one study found that the first 24 hours of bereavement are associated with a staggering 21-fold increased risk of heart attack. Since Robert Ader’s early experiments in the 1970s, the field of psychoneuroimmunology has blossomed, uncovering a large body of evidence that has gradually led to the mainstream recognition of the adverse effects of psychological stressors on health, recovery, and ageing, and, inversely, of the protective effects of positive emotions such as happiness, belonging, and a sense of purpose or meaning.
Here, again, modern science has barely caught up with the wisdom of the Ancients, who were well aware of the close relationship between psychological and physical well-being. In Plato’s Charmides, Socrates tells the young Charmides, who has been suffering from headaches, about a charm for headaches that he learnt from one of the mystical physicians to the King of Thrace. However, this great physician cautioned that it is best to cure the soul before curing the body, since health and happiness ultimately depend on the state of the soul:
He said all things, both good and bad, in the body and in the whole man, originated in the soul and spread from there… One ought, then, to treat the soul first and foremost, if the head and the rest of the body were to be well. He said the soul was treated with certain charms, my dear Charmides, and that these charms were beautiful words. As a result of such words self-control came into being in souls. When it came into being and was present in them, it was then easy to secure health both for the head and for the rest of the body.
Mental health is not just mental health. It is also physical health.
The circumstances in which we laugh are many and varied, but, deep down, we laugh for one (or sometimes several) of just seven reasons.
1. To feel better about ourselves. When looking for romance on dating sites and apps, we often ask for, or promise to offer, a good sense of humour (GSOH). Today, we tend to think of laughter as a good thing, but, historically, this has not always been the case. In particular, the Church looked upon laughter as a corrupting and subversive force, and for centuries, the monasteries forbade it. This notion that laughter can be less than virtuous finds an echo in the superiority theory of laughter, according to which laughter is a way of putting ourselves up by putting others down. The superiority theory is most closely linked with the philosopher Thomas Hobbes, who conceived of laughter as “a sudden glory arising from sudden conception of some eminency in ourselves, by comparison with the infirmity of others, or with our own formerly.” Think of medieval mobs jeering at people in stocks, or, in our time, Candid Camera.
2. To relieve stress and anxiety. Clearly, the superiority theory is unable to account for all cases of laughter, such as laughter arising from relief, surprise, or joy. According to the relief theory of laughter, most often associated with Sigmund Freud, laughter represents a release of pent-up nervous energy. Like dreams, jokes are able to bypass our inner censor, enabling a repressed emotion such as xenophobia (or, at least, the nervous energy associated with the repression) to surface—explaining why, at times, we can be embarrassed by our own laughter. By the same token, a comedian might raise a laugh by conjuring some costly emotion, such as admiration or indignation, and then suddenly killing it. Although more flexible than the superiority theory, the relief theory is unable to account for all cases of laughter, and those who laugh hardest at offensive jokes are not generally the most repressed of people.
3. To keep it real. Much more popular today is the incongruity theory of laughter, associated with the likes of Immanuel Kant and Søren Kierkegaard, according to which the comedian raises a laugh, not by conjuring an emotion and then killing it, but by creating an expectation and then contradicting it. Building upon Aristotle, Kierkegaard highlighted that the violation of an expectation is the core not only of comedy but also of tragedy—the difference being that, in tragedy, the violation leads to significant pain or harm. Possibly, it is not the incongruity itself that we enjoy, but the light that it sheds, in particular, on the difference between what lies inside and outside our heads. The incongruity theory is arguably more basic than the relief and superiority theories. When someone laughs, our inclination is to search for an incongruity; and though we may laugh for superiority or relief, even then, it helps if we can pin our laughter on some real or imagined incongruity.
4. As a social service. According to the philosopher Henri Bergson, we tend to fall into patterns and habits, to rigidify, to lose ourselves to ourselves—and laughter is how we point this out to one another, how we up our game as a social collective. For example, we may laugh at one who falls into a hole through absentmindedness, or at one who constantly repeats the same gesture or phrase. Conversely, we may also laugh at, or from, an unusual or unexpected lack of rigidity, as, for instance, when we break a habit or have an original idea. Ultimately, says Bergson, we are laughable to the extent that we are a machine or an object, to the extent that we lack self-awareness, that we are invisible to ourselves while being visible to everyone else. Thus, the laughter of others usually draws attention to our unconscious processes, to our modes or patterns of self-deception, and to the gap, or gulf, between our fiction and the reality. This gap is narrowest in poets and artists, who have to transcend themselves if they are to be worthy of the name.
5. To put others at ease. Another way of understanding laughter is to look at it like a biologist or anthropologist might. Human infants are able to laugh long before they can speak. Laughter involves parts of the brain that are, in evolutionary terms, much older than the language centres, and that we share with other animals. Primates, in particular, produce laughing sounds when playfighting, play-chasing, or tickling one another. As with human children, it seems that their laughter functions as a signal that the danger is not for real—which may be why rictus characters such as Batman’s Joker, who send a misleading signal, are so unsettling.
6. For diplomacy. Most laughter, even today, is not directed at jokes, but at creating and maintaining social bonds. Humour is a social lubricant, a signal of contentedness, acceptance, and belonging. More than that, it is a way of communicating, of making a point emphatically, or conveying a sensitive message without incurring the usual social costs. At the same time, humour can also be a weapon, a sublimated form of aggression, serving, like the stag’s antlers, to pull rank or attract a mate. The subtlety and ambiguity involved is in itself a source of almost endless stimulation.
7. To transcend ourselves. Laughter may have begun as a signal of play, but it has, as we have seen, evolved a number of other functions. Zen masters teach that it is much easier to laugh at ourselves once we have transcended our ego. At the highest level, laughter is the sound of the shattering of the ego. It is a means of gaining (and revealing) perspective, of rising beyond ourselves and our lives, of achieving a kind of immortality, a kind of divinity. Upon awakening on her deathbed to see her entire family around her, Nancy Astor quipped, “Am I dying, or is this my birthday?”
Today, laughter is able to give us a little of what religion once did.
Following his defeat at the Battle of Actium in 31 BCE, Mark Antony heard a rumour that Cleopatra had committed suicide and, in consequence, stabbed himself in the abdomen—only to discover that Cleopatra herself had been responsible for spreading the rumour. He later died in her arms.
“Fake news” is nothing new, but in our Internet age it has spread like a contagious disease, swinging elections, fomenting social unrest, undermining institutions, and diverting political capital away from health, education, the environment, and all-round good government.
So how best to guard against it?
As a medical specialist, I’ve spent over 20 years in formal education. With the possible exception of my two-year master’s in philosophy, the emphasis of my education has always been firmly and squarely on fact accumulation.
Today, I have little use for most of these facts, and though I am only middle-aged, many are already out of date, or highly questionable.
But what I do rely on—every day, all the time—is my faculty for critical thinking. As BF Skinner once put it, “Education is what survives when what has been learnt has been forgotten.”
But can critical thinking even be taught?
In Plato’s Meno, Socrates says that people with wisdom and virtue are very poor at imparting those qualities: Themistocles, the Athenian politician and general, was able to teach his son Cleophantus skills such as standing upright on horseback and shooting javelins, but no one ever credited Cleophantus with anything like his father’s wisdom; and the same could also be said of Aristides and his son Lysimachus, and Thucydides and his sons Melesias and Stephanus.
In Plato’s Protagoras, Socrates says that Pericles, who led Athens at the peak of its golden age, gave his sons excellent instruction in everything that could be learnt from teachers, but when it came to wisdom, he simply left them to “wander at their own free will in a sort of hope that they would light upon virtue of their own accord”.
It may be that wisdom and virtue cannot be taught, but thinking skills certainly can—or, at least, the beginning of them.
So rather than leaving thinking skills to chance, why not make more time for them in our schools and universities, and be more rigorous and systematic about them?
I’ll make a start by introducing you to what I have called “the five enemies of rational thought”:
1. Formal fallacy. A fallacy is some kind of defect in an argument. A formal fallacy is a deductive argument with an invalid form, for example:
Some A are B.
Some B are C.
Therefore, some A are C.
If you cannot yet see that this argument is invalid, substitute A, B, and C with “insects”, “herbivores”, and “mammals”.
Insects, clearly, are not mammals.
A formal fallacy is built into the structure of an argument and is invalid irrespective of the content of the argument.
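Because a formal fallacy is invalid regardless of content, it can be exposed mechanically by finding one substitution in which both premises come out true and the conclusion false. The following Python sketch does exactly that with the insects/herbivores/mammals substitution (the particular species named are my own illustrative choices, not from the text):

```python
# Model each category as a set and test the three "Some X are Y" claims.
# "Some X are Y" is true whenever the two sets share at least one member.
insects = {"aphid", "locust", "ladybird"}
herbivores = {"aphid", "locust", "rabbit", "deer"}
mammals = {"rabbit", "deer", "dolphin"}

some_a_are_b = bool(insects & herbivores)    # Some insects are herbivores
some_b_are_c = bool(herbivores & mammals)    # Some herbivores are mammals
some_a_are_c = bool(insects & mammals)       # Some insects are mammals?

print(some_a_are_b, some_b_are_c, some_a_are_c)  # True True False
```

True premises, false conclusion: a single counterexample of this kind is enough to show that the argument form itself is invalid.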
2. Informal fallacy. An informal fallacy, in contrast, is one that can only be identified through an analysis of the content of the argument.
Informal fallacies often turn on the misuse of language, for example, using a key term or phrase ambiguously, with one meaning in one part of the argument and another meaning in another part. This is known as the fallacy of equivocation.
Informal fallacies can also distract from the weakness of an argument, or appeal to the emotions instead of reason.
Here are a few more examples of informal fallacies.
Damning the alternatives. Arguing in favour of something by damning its alternatives. (Tim’s useless and Bob’s a drunk. So, I’ll marry Jimmy. Jimmy’s the right man for me.)
Gambler’s fallacy. Assuming that the outcome of one or more independent events can impact the outcome of a subsequent independent event. (June is pregnant with her fourth child. Her first three children are all boys, so this time it’s bound to be a girl.)
Appeal to popularity. Concluding the truth of a proposition on the basis that most or many people believe it to be true. (Of course he’s guilty: even his mother has turned her back on him.)
Argument from ignorance. Upholding the truth of a proposition based on a lack of evidence against it, or the falsity of a proposition based on a lack of evidence for it. (Scientists haven’t found any evidence of current or past life on Mars. So, we can be certain that there has never been any life on Mars.)
Argument to moderation. Arguing that the moderate view or middle position must be the right or best one. (Half the country favours leaving the European Union, the other half favours remaining. Let’s compromise by leaving the European Union but remaining in the Customs Union.)
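The gambler’s fallacy lends itself to a quick empirical check. The Python sketch below (a simplifying model of my own that treats each birth as an independent, fair coin flip) simulates many four-child families and estimates the chance of a girl given three boys first:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

TRIALS = 100_000
after_three_boys = 0  # families whose first three children are boys
girls_after = 0       # of those, families whose fourth child is a girl

for _ in range(TRIALS):
    children = [random.choice("BG") for _ in range(4)]
    if children[:3] == ["B", "B", "B"]:
        after_three_boys += 1
        if children[3] == "G":
            girls_after += 1

# The conditional probability stays close to one half: the first three
# births have no influence on the fourth.
print(round(girls_after / after_three_boys, 2))  # ≈ 0.5
```

Because the events are independent, the estimate hovers around 0.5 however many boys precede the fourth birth; June’s fourth child is no more “bound to be a girl” than her first was.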
3. Cognitive bias. Cognitive bias is sloppy, if not necessarily faulty, reasoning: a mental shortcut or heuristic intended to spare us time, effort, or discomfort—often while reinforcing our self-image or worldview—but at the cost of accuracy or reliability.
For example, in explaining the behaviour of other people, our tendency is to overestimate the role of character traits over situational factors—a bias, called correspondence bias (also known as the fundamental attribution error), that goes into reverse when it comes to explaining our own behaviour. Thus, if Charlotte fails to mow the lawn, I charge her with forgetfulness, laziness, or spite; but if I fail to mow the lawn, I absolve myself on the grounds of busyness, tiredness, or inclement weather.
Another important cognitive bias is my-side, or confirmation, bias, which is the propensity to search for or recall only those stories, facts, and arguments that are in keeping with our pre-existing beliefs while filtering out those that conflict with them—which, especially on social media, can lead us to inhabit a so-called echo chamber.
4. Cognitive distortion. Cognitive distortion is a concept from cognitive-behavioural therapy (CBT), developed by psychiatrist Aaron Beck in the 1960s and used in the treatment of depression and other mental disorders.
Cognitive distortion involves interpreting events and situations so that they conform to and reinforce our outlook or frame of mind, typically on the basis of very scant or partial evidence, or even no evidence at all.
Common cognitive distortions in depression include selective abstraction and catastrophic thinking.
Selective abstraction is to focus on a single and often insignificant negative event or condition to the exclusion of other, more positive ones, for example, “My partner hates me. He gave me an annoyed look three days ago.”
Catastrophic thinking is to exaggerate and dramatize the likely consequences of an event or situation, for example, “The pain in my knee is getting worse. When I’m reduced to a wheelchair, I won’t be able to go to work and pay the bills. So, I’ll end up losing my house and dying in the street.”
A cognitive distortion can open up a vicious circle, with the cognitive distortion feeding the depression, and the depression the cognitive distortion.
Cognitive distortion as broadly understood is not limited to depression and other mental disorders, but is also a feature of, among others, poor self-esteem, jealousy, and marital conflict.
5. Self-deception. Of the five enemies of rational thought, the most important by far is self-deception, because it tends to underlie all the others.
If we do not think clearly, if we cannot see the wood for the trees, this is not usually because we lack intelligence or education or experience, but because we feel exposed and vulnerable—and rather than come to terms with a painful truth, prefer, almost reflexively, to deceive and defend ourselves.
As I argue in Hide and Seek: The Psychology of Self-Deception, all self-deception can be understood in terms of ego defence. In psychoanalytic theory, an ego defence is one of several unconscious processes that we deploy to defuse the fear and anxiety that arise when who or what we truly are (our unconscious “id”) comes into conflict with who we think we are or who we think we should be (our conscious “superego”).
To put some flesh onto this, let’s take a look at two important ego defences: projection and idealization.
Projection is the attribution of one’s unacceptable thoughts and feelings to other people. This necessarily involves repression (another ego defence) as a first step, since unacceptable thoughts and feelings need to be disowned before they can be attributed to others. Classic examples of projection include the envious person who believes that everyone envies her, the covetous person who lives in constant fear of being dispossessed, and the person with fantasies of infidelity who suspects that they are being cheated on by their partner.
Idealization involves overestimating the positive attributes of a person, object, or idea while underestimating its negative attributes. At a deeper level, it involves the projection of our needs and desires onto that person, object, or idea. A paradigm of idealization is infatuation, or romantic love, when love is confused with the need to love, and the idealized person’s negative attributes are glossed over or even construed as positive. Although this can make for a rude awakening, there are few better ways of relieving our existential anxiety than by manufacturing something that is ‘perfect’ for us, be it a piece of equipment, a place, country, person, or god.
In all cases, the raw material of thought is facts. If the facts are missing, or worse, misleading, then thought cannot even get started.
In my last article, on the history of magic, I compared magic to religion and science, but without attempting a precise definition of magic. On the assumption that certain entities can exert a hidden influence on one another, magic is a method of acting in the world through sheer power of will. The notion that the universe is pregnant with subtle connexions is supported by, of all things, the study of mathematics, and it can sometimes seem that maths is at only one remove from magic.
Magic is often considered a gift, such that some people have it to a high degree and others, the muggles, barely at all, perhaps because their will is weak, unsettled, or untrained, or because magic does not run in their family—for like madness, magic is often hereditary. Whatever the case, people without magic are usually portrayed as lacking in cognitive faculties such as insight, intuition, and imagination, and would not see possibility even if it slapped them in the face.
Magic is sometimes divided into white and black, and high and low. Black magic is selfish and does not consider other people, whereas white magic is altruistic or selfless, and seeks in general to maintain or restore the equilibrium of the universe. The magician’s psychological makeup determines what kind of magic, white or black, he or she is able or likely to wield.
Speaking of equilibrium, deflecting objects and especially people from their natural or pre-ordained course is likely to have significant repercussions, which is why, aside from the mental effort and exhaustion, the use of magic is often said to come at a price, either to the magician, his or her client (for want of a better term), or a third party. The equilibrium must, ultimately, be maintained.
The magician is, in effect, a mediator of energies. Low magic involves drawing up energies from the earth, from plants and minerals and so on, and is more the province of common folk. High magic involves drawing down raw, unprocessed energies from the sun and sky, which requires complex ritual and is more the province of an educated or trained elite.
The magician cultivates his or her will through concentration—acquiring charisma in the process—and focuses it through ritual such as ceremony, chant, or spell. Ritual also helps to create the right atmosphere and attitude for magic to take hold. Words in particular can exert a power all of their own. In the language of Ancient Egypt, the sound of a word had a magical power which complemented its meaning, a view of language which we still retain when we talk of ‘spelling’ a word, or visit a psychotherapist. And while words can change the world, getting them wrong, that is, misspelling, can have disastrous consequences.
So far, I’ve been talking as if magic actually works. But does it work, and, if so, how? Unless one broadens the definition of magic to include cognitive faculties such as insight, intuition, and imagination, or simply peak performance, magic does not work, or, at least, not in an immediate, instrumental sense. But magic might work indirectly, by focussing the mind and energies on a particular problem, or through a mechanism akin to the placebo effect or psychoneuroimmunology.
The term ‘placebo effect’ derives from the Latin placebo [‘I shall please’], the first-person future of placere [‘to please’], and refers to the tendency for a remedy to ‘work’ simply because it is expected to do so. In essence, people who associate taking a remedy with improvement may come to expect improvement if they take a remedy, even if the ‘remedy’ in question is no more than an inert substance, or a substance that has no therapeutic effect but only adverse effects that can be interpreted as indicative of a therapeutic effect. It may be that the expectation alone suffices to mimic the effect of the remedy, and brain imaging studies indicate that, in some cases, remedies and their placebos activate the very same mechanisms in the nervous system.
In the UK, the antidepressant fluoxetine is so commonly prescribed that trace quantities have been detected in the water supply. But, as I lay bare in my book, The Meaning of Madness, there is mounting evidence that the most commonly prescribed antidepressants are little more effective than dummy pills, which, unlike antidepressants, come without adverse effects or cost. So, it might be said that, insofar as antidepressants work, they do so by magic—and, no doubt, would be more effective if accompanied by some kind of incantation.
Remedies that are perceived to be more potent have a stronger placebo effect. Perceptions of potency are influenced by factors such as the remedy’s size, shape, colour, route of administration, and general availability. A brightly coloured injection administered by a silver-haired professor of medicine can be expected to have a much stronger placebo effect, and therefore a much stronger overall effect, than the unremarkable over-the-counter tablet recommended by the teenager next door. This highlights the importance of the psychological, social, and cultural context in which a treatment or intervention is administered, and, more particularly, the significance of the therapeutic act or ritual. If the practitioner, the patient, and their society believe in the magic, then the magic is real by the very force of that shared belief.
No wonder the magician, the priest, and the healer used to be one and the same person—and, in many societies, still is. Like religion, magic may represent a response to anxiety, distress, and a feeling of inadequacy or impotence, especially in the face of natural disaster. And like religion, it may represent a spiritual path, akin, perhaps, to a martial art, which also involves concentrating the mind, channelling instinctual drives, and leveraging forces.
But beyond all that, magic, whether it works or not, is an external projection of the human psyche, an external projection of our internal or psychological truth, which is why it features so prominently in fiction. Fairy tales often begin with a formulation such as, ‘Once upon a time in dreamland’, and magic is that dreamland. Like dreams, magic makes use of condensed symbols, and, like dreams, it is a kind of wish fulfilment.
In the same vein, magic might be compared with mental states such as psychosis and neurosis, which, like dreams, can also involve condensed symbols and wish fulfilment. Sigmund Freud linked magical rituals and spells with neurotic and obsessional thought processes, and there are arguably some parallels with compulsive acts, which are performed in response to obsessional thoughts or according to rules that must be rigidly applied.
Magic is, arguably, on a spectrum with madness, and magical thinking is especially prominent in schizotypy, or schizotypal personality disorder, which predisposes to schizophrenia, and also in shamanism. As discussed in my related article on the history of magic, Plato distinguished between madness resulting from human illness and madness arising from a divinely inspired release from normally accepted behaviour. In Plato’s Phaedrus, Socrates says that this divinely inspired madness has four forms: mysticism, inspiration, poetry, and love. Love, according to Socrates, is not a god, as most people think, but a great spirit [daimon] that intermediates between gods and men.
Similarly, in The Sorcerer and his Magic (1963), the anthropologist Claude Lévi-Strauss argues that magic is a mediator between normal thought processes (common sense, reason, science…), which suffer from a marked deficit of meaning, and pathological thought processes, which abound with meaning:
From any non-scientific perspective (and here we can exclude no society), pathological and normal thought processes are complementary rather than opposed. In a universe which it strives to understand but whose dynamics it cannot fully control, normal thought continually seeks the meaning of things which refuse to reveal their significance. So-called pathological thought, on the other hand, overflows with emotional interpretations and overtones, in order to supplement an otherwise deficient reality… We might borrow from linguistics and say that so-called normal thought always suffers from a deficit of meaning, whereas so-called pathological thought (in at least some of its manifestations) disposes of a plethora of meaning. Through collective participation in shamanistic curing, a balance is established between these two complementary situations.
Some of my regular readers may have wondered why I turned my pen to so apparently frivolous a subject as magic. But we now know that magic means much more than it may at first seem. Aside from its links with madness and with healing, it is a mirror of the mind, and even, like love or beauty, and science and religion, a mode of belonging to the world.
The word ‘magic’ derives, via the Latin magicus and the Greek magikos, from the Old Persian maguš and, ultimately, the Proto-Indo-European magh, ‘to help, to be able, to be powerful’, from which also derive the words ‘almighty’, ‘maharaja’, ‘main’, ‘may’, and… ‘machine’. We come full circle with Clarke’s Third Law, which states: ‘Any sufficiently advanced technology is indistinguishable from magic.’
Magic, like religion, is deeply embedded into the human psyche. Though it has, effectively, been banished from the land, still it surfaces in thought and language, in phrases such as ‘I must be cursed’ and ‘He’s under your spell’; in children’s stories and other fiction; and in psychological processes such as undoing, which involves thinking a thought or carrying out an act in an attempt to negate a previous, uncomfortable thought or act.
Examples of undoing include the absent father who periodically returns to spoil and smother his children, and the angry wife who throws a plate at her husband and then tries to ‘make it up’ by smothering him in kisses. The absent father and angry wife are not merely trying to make amends for their behaviour, but also, as if by magic, to ‘erase it from the record’.
Another example of undoing is the man who damages a friend’s prospects and then, a few days later, turns up at his door bearing a small gift. Rituals such as confession and penitence are, at least on some level, socially condoned and codified forms of undoing.
‘Magic’ is difficult to define, and its definition remains a matter of debate and controversy. One way of understanding it is by comparing and contrasting it to religion on the one hand and to science on the other.
Historically, the priest, the physician, the magician, and the scholar might have been one and the same person: the shaman, the sorcerer.
In the West, pre-Socratics such as Pythagoras and Empedocles moonlighted as mystics and miracle workers—or perhaps, since the term ‘philosophy’ is held to have been invented by Pythagoras, moonlighted as philosophers. Pythagoras claimed to have lived four lives and to remember them all in great detail, and once recognized the cry of his dead friend in the yelping of a puppy. After his death, the Pythagoreans deified him, crediting him with a golden thigh and the gift of bilocation.
In Plato’s Phaedrus, Socrates argues that there are, in fact, two kinds of madness: one resulting from human illness, but the other arising from a divinely inspired release from normally accepted behaviour. This divine form of madness, says Socrates, has four parts: love, poetry, inspiration, and mysticism, which is the particular gift of Dionysus.
While Socrates, in some sense the father of logic, seldom claimed any real knowledge, he did claim to have a daimonion or ‘divine something’, an inner voice or intuition that prevented him from making grave mistakes such as getting involved in politics, or fleeing Athens: ‘This is the voice which I seem to hear murmuring in my ears, like the sound of the flute in the ears of the mystic…’
Far from being a thing of the distant past, this trope of the philosopher-sorcerer outlived the sack of Athens and the fall of Rome, and perdured well into the Enlightenment. The economist John Maynard Keynes, upon buying a trove of Isaac Newton’s papers, observed that Newton was ‘not the first of the age of reason’ but ‘the last of the magicians’. Other notable later occultists include: Giordano Bruno, Nostradamus, Paracelsus, Giovanni Pico della Mirandola, and Arthur Conan Doyle, yes, the father of Sherlock Holmes.
Yet since antiquity, the West has had an uncomfortable relationship with magic, usually regarding it as something foreign and ‘Eastern’. In Plato’s Meno, Meno compares Socrates to the flat torpedo fish, which torpifies or numbs all those who come near it: ‘And I think that you are very wise in not [leaving Athens], for if you did in other places as you do in Athens, you would be cast into prison as a magician.’
For the Greeks as for the Romans, magic represented an improper and potentially subversive expression of religion. After centuries of legislation against it, in 357 CE, the Christian Roman emperor Constantius II finally banned it outright:
No one shall consult a haruspex, a diviner, or a soothsayer, and wicked confessions made to augurs and prophets must cease. Chaldeans, magicians, and others who are commonly called malefactors on account of the enormity of their crimes shall no longer practice their infamous arts.
The Bible, too, inveighs against magic in more than a hundred places. For example, picked almost at random:
Thou shalt not suffer a witch to live. —Exodus 22:18 (KJV)
Regard not them that have familiar spirits, neither seek after wizards, to be defiled by them: I am the Lord your God. —Leviticus 19:31 (KJV)
But the fearful, and unbelieving, and the abominable, and murderers, and whoremongers, and sorcerers, and idolaters, and all liars, shall have their part in the lake which burneth with fire and brimstone: which is the second death. —Revelation 21:8 (KJV)
Early Christians, perhaps unconsciously, associated magic with mythopoeic thought, in which all of nature is full of gods and spirits, and therefore with paganism and, by extension, with demons. During the Reformation, Protestants accused the Church of Rome, with its superstitions, relics, and exorcisms, of being more magic than religion—a charge that transferred all the more to non-Christian peoples, and that, notoriously, served as a justification for large-scale persecution, colonization, and Christianization.
Today, magic, like mythopoeic thought, is seen as ‘primitive’, and has largely been relegated to fiction and illusionism. But as a result, people have come to associate magic with delight and wonder; and with the retreat of Christianity, at least from Europe, a growing number are returning to some form of paganism as a path to personal and spiritual development.
So, what exactly is the difference between magic and religion? It is often held that magic is older than religion, or that religion was born out of magic, but it may be that the two co-existed without being clearly distinguished.
Both magic and religion pertain to the sacred sphere, to things removed from everyday life. But, compared to religion, magic does not split so sharply between the natural and the supernatural, the earthly and the divine, the fallen and the blessed. And whereas magic involves harnessing the world to the will, religion involves subjugating the will to the world. In the words of the anthropologist Claude Lévi-Strauss (d. 2009), ‘religion consists in a humanization of natural laws, and magic in a naturalization of human actions.’
Hence, magic tends to be about specific problems, and to involve private rites and rituals. Religion, in contrast, tends towards the bigger picture, and to involve communal worship and belonging. ‘Magic’, said the sociologist Émile Durkheim (d. 1917), ‘does not result in binding together those who adhere to it, nor in uniting them into a group leading a common life. There is no Church of magic.’
So, one hypothesis is that, as man gained increasing control over nature, magic, as it came to be called, lost ground to religion, which, being communal and centralized, evolved a hierarchy that sought to suppress those practices that threatened its dogma and dominance.
But now religion is, in its turn, on the decline—in favour of science. What is science? Within academia, there are, in fact, no clear or reliable criteria for distinguishing a science from a non-science. What might be said is that all sciences share certain assumptions which underpin the scientific method—in particular, that there is an objective reality governed by uniform laws, and that this reality can be discovered by systematic observation.
But, as I argue in my book, Hypersanity: Thinking Beyond Thinking, every scientific paradigm that has come and gone is now deemed to have been false, inaccurate, or incomplete, and it would be ignorant or arrogant to assume that our current ones might amount to the truth, the whole truth, and nothing but the truth.
The philosopher Paul Feyerabend (d. 1994) went so far as to claim that there is no such thing as ‘a’ or ‘the’ scientific method: behind the facade, ‘anything goes’, and, as a form of knowledge, science is no more privileged than magic or religion.
More than that, science has come to occupy the same place in the human psyche as religion once did. Although science began as a liberating movement, it grew dogmatic and repressive, more of an ideology than a rational method that leads to ineluctable progress.
To quote Feyerabend:
Knowledge is not a series of self-consistent theories that converges toward an ideal view; it is rather an ever increasing ocean of mutually incompatible (and perhaps even incommensurable) alternatives, each single theory, each fairy tale, each myth that is part of the collection forcing the others into greater articulation and all of them contributing, via this process of competition, to the development of our consciousness.
A common trope in fantasy fiction is the ‘thinning’ of magic: magic is fading, or has been banished, from the land, which is caught in a perpetual winter or otherwise in deathly or depressive decline, and the hero is called upon to rescue and restore the life-giving forces of old.
It is easy to draw the parallel with our own world, in which magic has been progressively driven out: first by religion, which, over the centuries, became increasingly repressive of magic, and latterly by science, with its zero tolerance.
When we read fantasy fiction, it is for the side of the old magic, always, that we root, for a time when the world, when life, had meaning in itself.
In my next article, I will look at the psychology and philosophy of magic.