
Confidence derives from the Latin fidere, “to trust.” To be confident is to trust and have faith in the world. To be self-confident is to trust and have faith in oneself, and, in particular, in one’s ability to engage successfully or at least adequately with the world. A self-confident person is able to act on opportunities, take on new challenges, rise to difficult situations, engage with constructive criticism, and shoulder responsibility if and when things go wrong.

Self-confidence and self-esteem often go hand in hand, but they aren’t one and the same thing. In particular, it is possible to be highly self-confident and yet to have profoundly low self-esteem, as is the case, for example, with many performers and celebrities, who are able to play to studios and galleries but then struggle behind the scenes. Esteem derives from the Latin aestimare [to appraise, value, rate, weigh, estimate], and self-esteem is our cognitive and, above all, emotional appraisal of our own worth. More than that, it is the matrix through which we think, feel, and act, and reflects and determines our relation to our self, to others, and to the world.

People with healthy self-esteem do not need to prop themselves up with externals such as income, status, or notoriety, or lean on crutches such as alcohol, drugs, or sex (when these things are a crutch). On the contrary, they treat themselves with respect and look after their health, community, and environment. They are able to invest themselves completely in projects and people because they have no fear of failure or rejection. Of course, like everybody, they suffer hurt and disappointment, but their setbacks neither damage nor diminish them. Owing to their resilience, they are open to people and possibilities, tolerant of risk, quick to joy and delight, and accepting and forgiving of others and themselves.

So what’s the secret to self-esteem? As I argue in Heaven and Hell, a book on the psychology of the emotions, many people find it easier to build their self-confidence than their self-esteem, and, conflating one with the other, end up with a long list of talents and achievements. Rather than facing up to the real issues, they hide, often their whole life long, behind their certificates and prizes. But as anyone who has been to university knows, a long list of talents and achievements is no substitute for healthy self-esteem. While these people work on their list in the hope that it might one day be long enough, they try to fill the emptiness inside them with externals such as status, income, possessions, and so on. Undermine their standing, criticize their home or car, and observe in their reaction that it is them that you undermine and criticize.

Similarly, it is no use trying to pump up the self-esteem of children (and, increasingly, adults) with empty, undeserved praise. The children are unlikely to be fooled, but may instead be held back from the sort of endeavour by which real self-esteem can grow. And what sort of endeavour is that? Whenever we live up to our dreams and promises, we can feel ourselves growing. Whenever we fail but know that we have given it our best, we can feel ourselves growing. Whenever we stand up for our values and face the consequences, we can feel ourselves growing. Growth depends on living up to our own ideals, not our parents' ambitions for us, or the targets of the company we work for, or anything else that is not truly our own but, instead, a betrayal of ourselves.

Orson Welles tells reporters that no one connected with the broadcast had any idea that it would cause panic. (October 31, 1938). Source: Acme News Photos/Wikicommons (public domain)

On October 30, 1938, Orson Welles broadcast an episode of the radio drama Mercury Theatre on the Air. This episode, entitled The War of the Worlds and based on a novel by HG Wells, suggested to listeners that a Martian invasion was taking place. In the charged atmosphere of the days leading up to World War II, many people missed or ignored the opening credits and mistook the radio drama for a news broadcast. Panic ensued and people began to flee, with some even reporting flashes of light and a smell of poison gas. This panic, a form of mass hysteria, is one of the many forms that anxiety can take.

Mass hysteria can befall us at almost any time. In 1989, 150 children took part in a summer programme at a youth centre in Florida. Each day at noon, the children gathered in the dining hall to be served pre-packed lunches. One day, a girl complained that her sandwich did not taste right. She felt nauseated, went to the toilet, and returned saying that she had vomited. Almost immediately, other children began experiencing symptoms such as nausea, abdominal cramps, and tingling in the hands and feet. With that, the supervisor announced that the food might be poisoned and that the children should stop eating. Within 40 minutes, 63 children were sick and more than 25 had vomited.

The children were promptly dispatched to one of three hospitals, but every test performed on them was negative. Meal samples were analyzed but no bacteria or poisons could be found. Food processing and storage standards had been scrupulously maintained and no illness had been reported from any of the other 68 sites at which the pre-packed lunches had been served.

However, there had been in the group an atmosphere of tension, created by the release two days earlier of a newspaper article reporting on management and financial problems at the youth centre. The children had no doubt picked up on the staff’s anxiety, and this had made them particularly suggestible to the first girl’s complaints. Once the figure of authority had announced that the food might be poisoned, the situation simply spiralled out of control.

Mass hysteria is relatively uncommon, but it does provide an alarming insight into the human mind and the ease with which it might be influenced and even manipulated. It also points to our propensity to somatize, that is, to convert anxiety and distress into more concrete physical symptoms. Somatization, which can be thought of as an ego defence, is an unconscious process, and people who somatize are, almost by definition, unaware of the psychological origins of their physical symptoms.

As I discuss in The Meaning of Madness, psychological stressors can lead to physical symptoms not only by somatization, which is a psychic process, but also by physical processes involving the nervous, endocrine, and immune systems. For example, one study found that the first 24 hours of bereavement are associated with a staggering 21-fold increased risk of heart attack. Since Robert Ader’s early experiments in the 1970s, the field of psychoneuroimmunology has blossomed, uncovering a large body of evidence that has gradually led to the mainstream recognition of the adverse effects of psychological stressors on health, recovery, and ageing, and, inversely, of the protective effects of positive emotions such as happiness, belonging, and a sense of purpose or meaning.

Here, again, modern science has barely caught up with the wisdom of the Ancients, who were well aware of the close relationship between psychological and physical well-being. In Plato’s Charmides, Socrates tells the young Charmides, who has been suffering from headaches, about a charm for headaches that he learnt from one of the mystical physicians to the King of Thrace. However, this great physician cautioned that it is best to cure the soul before curing the body, since health and happiness ultimately depend on the state of the soul: 

He said all things, both good and bad, in the body and in the whole man, originated in the soul and spread from there… One ought, then, to treat the soul first and foremost, if the head and the rest of the body were to be well. He said the soul was treated with certain charms, my dear Charmides, and that these charms were beautiful words. As a result of such words self-control came into being in souls. When it came into being and was present in them, it was then easy to secure health both for the head and for the rest of the body.

Mental health is not just mental health. It is also physical health.

The circumstances in which we laugh are many and varied, but, deep down, we laugh for one (or sometimes several) of just seven reasons.

We laugh:

1. To feel better about ourselves. When looking for romance on dating sites and apps, we often ask for, or promise to offer, a good sense of humour (GSOH). Today, we tend to think of laughter as a good thing, but, historically, this has not always been the case. In particular, the Church looked upon laughter as a corrupting and subversive force, and for centuries, the monasteries forbade it. This notion that laughter can be less than virtuous finds an echo in the superiority theory of laughter, according to which laughter is a way of putting ourselves up by putting others down. The superiority theory is most closely linked with the philosopher Thomas Hobbes, who conceived of laughter as “a sudden glory arising from sudden conception of some eminency in ourselves, by comparison with the infirmity of others, or with our own formerly.” Think of medieval mobs jeering at people in stocks, or, in our time, Candid Camera.

2. To relieve stress and anxiety. Clearly, the superiority theory is unable to account for all cases of laughter, such as laughter arising from relief, surprise, or joy. According to the relief theory of laughter, most often associated with Sigmund Freud, laughter represents a release of pent-up nervous energy. Like dreams, jokes are able to bypass our inner censor, enabling a repressed emotion such as xenophobia (or, at least, the nervous energy associated with the repression) to surface—explaining why, at times, we can be embarrassed by our own laughter. By the same token, a comedian might raise a laugh by conjuring some costly emotion, such as admiration or indignation, and then suddenly killing it. Although more flexible than the superiority theory, the relief theory is unable to account for all cases of laughter, and those who laugh hardest at offensive jokes are not generally the most repressed of people.

3. To keep it real. Much more popular today is the incongruity theory of laughter, associated with the likes of Immanuel Kant and Søren Kierkegaard, according to which the comedian raises a laugh, not by conjuring an emotion and then killing it, but by creating an expectation and then contradicting it. Building upon Aristotle, Kierkegaard highlighted that the violation of an expectation is the core not only of comedy but also of tragedy—the difference being that, in tragedy, the violation leads to significant pain or harm. Possibly, it is not the incongruity itself that we enjoy, but the light that it sheds, in particular, on the difference between what lies inside and outside our heads. The incongruity theory is arguably more basic than the relief and superiority theories. When someone laughs, our inclination is to search for an incongruity; and though we may laugh for superiority or relief, even then, it helps if we can pin our laughter on some real or imagined incongruity.

4. As a social service. According to the philosopher Henri Bergson, we tend to fall into patterns and habits, to rigidify, to lose ourselves to ourselves—and laughter is how we point this out to one another, how we up our game as a social collective. For example, we may laugh at one who falls into a hole through absentmindedness, or at one who constantly repeats the same gesture or phrase. Conversely, we may also laugh at, or from, an unusual or unexpected lack of rigidity, as, for instance, when we break a habit or have an original idea. Ultimately, says Bergson, we are laughable to the extent that we are a machine or an object, to the extent that we lack self-awareness, that we are invisible to ourselves while being visible to everyone else. Thus, the laughter of others usually draws attention to our unconscious processes, to our modes or patterns of self-deception, and to the gap, or gulf, between our fiction and the reality. This gap is narrowest in poets and artists, who have to transcend themselves if they are to be worthy of the name.

5. To put others at ease. Another way of understanding laughter is to look at it like a biologist or anthropologist might. Human infants are able to laugh long before they can speak. Laughter involves parts of the brain that are, in evolutionary terms, much older than the language centres, and that we share with other animals. Primates, in particular, produce laughing sounds when playfighting, play-chasing, or tickling one another. As with human children, it seems that their laughter functions as a signal that the danger is not for real—which may be why rictus characters such as Batman’s Joker, who send a misleading signal, are so unsettling.

6. For diplomacy. Most laughter, even today, is not directed at jokes, but at creating and maintaining social bonds. Humour is a social lubricant, a signal of contentedness, acceptance, and belonging. More than that, it is a way of communicating, of making a point emphatically, or conveying a sensitive message without incurring the usual social costs. At the same time, humour can also be a weapon, a sublimed form of aggression, serving, like the stag’s antlers, to pull rank or attract a mate. The subtlety and ambiguity involved is in itself a source of almost endless stimulation.

7. To transcend ourselves. Laughter may have begun as a signal of play, but it has, as we have seen, evolved a number of other functions. Zen masters teach that it is much easier to laugh at ourselves once we have transcended our ego. At the highest level, laughter is the sound of the shattering of the ego. It is a means of gaining (and revealing) perspective, of rising beyond ourselves and our lives, of achieving a kind of immortality, a kind of divinity. Upon awakening on her deathbed to see her entire family around her, Nancy Astor quipped, “Am I dying, or is this my birthday?”

Today, laughter is able to give us a little of what religion once did.

The five enemies of rational thought.

Following his defeat at the Battle of Actium in 31 BCE, Marc Antony heard a rumour that Cleopatra had committed suicide and, in consequence, stabbed himself in the abdomen—only to discover that Cleopatra herself had been responsible for spreading the rumour. He later died in her arms.

“Fake news” is nothing new, but in our Internet age it has spread like a contagious disease, swinging elections, fomenting social unrest, undermining institutions, and diverting political capital away from health, education, the environment, and all-round good government.

So how best to guard against it?

As a medical specialist, I’ve spent over 20 years in formal education. With the possible exception of my two-year master’s in philosophy, the emphasis of my education has always been firmly and squarely on fact accumulation.

Today, I have little use for most of these facts, and though I am only middle-aged, many are already out of date, or highly questionable.

But what I do rely on—every day, all the time—is my faculty for critical thinking. As BF Skinner once put it, “Education is what survives when what has been learnt has been forgotten.”

But can critical thinking even be taught?

In Plato’s Meno, Socrates says that people with wisdom and virtue are very poor at imparting those qualities: Themistocles, the Athenian politician and general, was able to teach his son Cleophantus skills such as standing upright on horseback and shooting javelins, but no one ever credited Cleophantus with anything like his father’s wisdom; and the same could also be said of Aristides and his son Lysimachus, and Thucydides and his sons Melesias and Stephanus.

In Plato’s Protagoras, Socrates says that Pericles, who led Athens at the peak of its golden age, gave his sons excellent instruction in everything that could be learnt from teachers, but when it came to wisdom, he simply left them to “wander at their own free will in a sort of hope that they would light upon virtue of their own accord”.

It may be that wisdom and virtue cannot be taught, but thinking skills certainly can—or, at least, the beginning of them.

So rather than leaving thinking skills to chance, why not make more time for them in our schools and universities, and be more rigorous and systematic about them?

I’ll make a start by introducing you to what I have called “the five enemies of rational thought”:

1. Formal fallacy. A fallacy is some kind of defect in an argument. A formal fallacy is a deductive argument with an invalid form, that is, one whose conclusion can be false even when all of its premises are true, for example:

Some A are B. 
Some B are C. 
Therefore, some A are C.

If you cannot yet see that this argument is invalid, substitute A, B, and C with “insects”, “herbivores”, and “mammals”.

Insects, clearly, are not mammals.

A formal fallacy is built into the structure of an argument and is invalid irrespective of the content of the argument.
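Because validity is a matter of form alone, it can be checked mechanically. The following sketch, which is my own illustration rather than anything from the argument above, brute-forces all assignments of A, B, and C to subsets of a small universe and searches for a counterexample in which both premises hold but the conclusion fails:

```python
from itertools import combinations

def powerset(universe):
    """All subsets of the universe, as frozensets."""
    return [frozenset(c)
            for r in range(len(universe) + 1)
            for c in combinations(universe, r)]

def some(x, y):
    """'Some X are Y': X and Y share at least one member."""
    return bool(x & y)

sets = powerset({0, 1, 2})

# A counterexample is an assignment in which the premises
# ('Some A are B', 'Some B are C') are true but the
# conclusion ('Some A are C') is false.
counterexamples = [(a, b, c)
                   for a in sets for b in sets for c in sets
                   if some(a, b) and some(b, c) and not some(a, c)]

print("invalid" if counterexamples else "valid")  # → invalid
```

One counterexample it finds is A = {0}, B = {0, 1}, C = {1}, which is exactly the shape of the insects/herbivores/mammals substitution: the two premises hold, yet A and C share no member.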

2. Informal fallacy. An informal fallacy, in contrast, is one that can only be identified through an analysis of the content of the argument.

Informal fallacies often turn on the misuse of language, for example, using a key term or phrase in an ambiguous way, with one meaning in one part of the argument and another meaning in another part. This is known as the fallacy of equivocation.

Informal fallacies can also distract from the weakness of an argument, or appeal to the emotions instead of reason.

Here are a few more examples of informal fallacies.

  • Damning the alternatives. Arguing in favour of something by damning its alternatives. (Tim’s useless and Bob’s a drunk. So, I’ll marry Jimmy. Jimmy’s the right man for me.)
  • Gambler’s fallacy. Assuming that the outcome of one or more independent events can impact the outcome of a subsequent independent event. (June is pregnant with her fourth child. Her first three children are all boys, so this time it’s bound to be a girl.)
  • Appeal to popularity. Concluding the truth of a proposition on the basis that most or many people believe it to be true. (Of course he’s guilty: even his mother has turned her back on him.)
  • Argument from ignorance. Upholding the truth of a proposition based on a lack of evidence against it, or the falsity of a proposition based on a lack of evidence for it. (Scientists haven’t found any evidence of current or past life on Mars. So, we can be certain that there has never been any life on Mars.)
  • Argument to moderation. Arguing that the moderate view or middle position must be the right or best one. (Half the country favours leaving the European Union, the other half favours remaining. Let’s compromise by leaving the European Union but remaining in the Customs Union.)

You can find many more examples in Hypersanity: Thinking Beyond Thinking.

3. Cognitive bias. Cognitive bias is sloppy, if not necessarily faulty, reasoning: a mental shortcut or heuristic intended to spare us time, effort, or discomfort—often while reinforcing our self-image or worldview—but at the cost of accuracy or reliability.

For example, in explaining the behaviour of other people, our tendency is to overestimate the role of character traits over situational factors—a bias, called correspondence bias, that goes into reverse when it comes to explaining our own behaviour. Thus, if Charlotte fails to mow the lawn, I indict her with forgetfulness, laziness, or spite; but if I fail to mow the lawn, I absolve myself on the grounds of busyness, tiredness, or inclement weather.

Another important cognitive bias is my-side, or confirmation, bias, which is the propensity to search for or recall only those stories, facts, and arguments that are in keeping with our pre-existing beliefs while filtering out those that conflict with them—which, especially on social media, can lead us to inhabit a so-called echo chamber.

4. Cognitive distortion. Cognitive distortion is a concept from cognitive-behavioural therapy (CBT), developed by psychiatrist Aaron Beck in the 1960s and used in the treatment of depression and other mental disorders.

Cognitive distortion involves interpreting events and situations so that they conform to and reinforce our outlook or frame of mind, typically on the basis of very scant or partial evidence, or even no evidence at all.

Common cognitive distortions in depression include selective abstraction and catastrophic thinking.

Selective abstraction is to focus on a single and often insignificant negative event or condition to the exclusion of other, more positive ones, for example, “My partner hates me. He gave me an annoyed look three days ago.”

Catastrophic thinking is to exaggerate and dramatize the likely consequences of an event or situation, for example, “The pain in my knee is getting worse. When I’m reduced to a wheelchair, I won’t be able to go to work and pay the bills. So, I’ll end up losing my house and dying in the street.”

A cognitive distortion can open up a vicious circle, with the cognitive distortion feeding the depression, and the depression the cognitive distortion.

Cognitive distortion as broadly understood is not limited to depression and other mental disorders, but is also a feature of, among others, poor self-esteem, jealousy, and marital conflict.

5. Self-deception. Of the five enemies of rational thought, the most important by far is self-deception, because it tends to underlie all the others.

If we do not think clearly, if we cannot see the wood for the trees, this is not usually because we lack intelligence or education or experience, but because we feel exposed and vulnerable—and rather than come to terms with a painful truth, prefer, almost reflexively, to deceive and defend ourselves.

As I argue in Hide and Seek: The Psychology of Self-Deception, all self-deception can be understood in terms of ego defence. In psychoanalytic theory, an ego defence is one of several unconscious processes that we deploy to defuse the fear and anxiety that arise when who or what we truly are (our unconscious “id”) comes into conflict with who we think we are or who we think we should be (our conscious “superego”).

To put some flesh onto this, let’s take a look at two important ego defences: projection and idealization.

Projection is the attribution of one’s unacceptable thoughts and feelings to other people. This necessarily involves repression (another ego defence) as a first step, since unacceptable thoughts and feelings need to be repudiated before they can be reattributed. Classic examples of projection include the envious person who believes that everyone envies her, the covetous person who lives in constant fear of being dispossessed, and the person with fantasies of infidelity who suspects that they are being cheated upon by their partner.

Idealization involves overestimating the positive attributes of a person, object, or idea while underestimating its negative attributes. At a deeper level, it involves the projection of our needs and desires onto that person, object, or idea. A paradigm of idealization is infatuation, or romantic love, when love is confused with the need to love, and the idealized person’s negative attributes are glossed over or even construed as positive. Although this can make for a rude awakening, there are few better ways of relieving our existential anxiety than by manufacturing something that is ‘perfect’ for us, be it a piece of equipment, a place, country, person, or god.

In all cases, the raw material of thought is facts. If the facts are missing, or worse, misleading, then thought cannot even get started.

The psychology and philosophy of magic.

In my last article, on the history of magic, I compared magic to religion and science, but without attempting a precise definition of magic. On the assumption that certain entities can exert a hidden influence on one another, magic is a method of acting in the world through sheer power of will. The notion that the universe is pregnant with subtle connexions is supported by, of all things, the study of mathematics, and it can sometimes seem that maths is at only one remove from magic.

Magic is often considered a gift, such that some people have it to a high degree and others, the muggles, barely at all, perhaps because their will is weak, unsettled, or untrained, or because magic does not run in their family—for like madness, magic is often hereditary. Whatever the case, people without magic are usually portrayed as lacking in cognitive faculties such as insight, intuition, and imagination, and would not see possibility even if it slapped them in the face.

Magic is sometimes divided into white and black, and high and low. Black magic is selfish and does not consider other people, whereas white magic is altruistic or selfless, and seeks in general to maintain or restore the equilibrium of the universe. The magician’s psychological makeup determines what kind of magic, white or black, he or she is able or likely to wield.

Speaking of equilibrium, deflecting objects and especially people from their natural or pre-ordained course is likely to have significant repercussions, which is why, aside from the mental effort and exhaustion, the use of magic is often said to come at a price, either to the magician, his or her client (for want of a better term), or a third party. The equilibrium must, ultimately, be maintained.

The magician is, in effect, a mediator of energies. Low magic involves drawing up energies from the earth, from plants and minerals and so on, and is more the province of common folk. High magic involves drawing down raw, unprocessed energies from the sun and sky, which requires complex ritual and is more the province of an educated or trained elite.

The magician cultivates his or her will through concentration—acquiring charisma in the process—and focuses it through ritual such as ceremony, chant, or spell. Ritual also helps to create the right atmosphere and attitude for magic to take hold. Words in particular can exert a power all of their own. In the language of Ancient Egypt, the sound of a word had a magical power which complemented its meaning, a view of language which we still retain when we talk of ‘spelling’ a word, or visit a psychotherapist. And while words can change the world, getting them wrong, that is, misspelling, can have disastrous consequences.

So far, I’ve been talking as if magic actually works. But does it work, and, if so, how? Unless one broadens the definition of magic to include cognitive faculties such as insight, intuition, and imagination, or simply peak performance, magic does not work, or, at least, not in an immediate, instrumental sense. But magic might work indirectly, by focussing the mind and energies on a particular problem, or through a mechanism akin to the placebo effect or psychoneuroimmunology.

The term ‘placebo effect’ derives from the Latin placebo [‘I shall please’], from placere [‘to please’], and refers to the tendency for a remedy to ‘work’ simply because it is expected to do so. In essence, people who associate taking a remedy with improvement may come to expect improvement if they take a remedy, even if the ‘remedy’ in question is no more than an inert substance, or a substance that has no therapeutic effect but only adverse effects that can be interpreted as indicative of a therapeutic effect. It may be that the expectation alone suffices to mimic the effect of the remedy, and brain imaging studies indicate that, in some cases, remedies and their placebos activate the very same mechanisms in the nervous system.

In the UK, the antidepressant fluoxetine is so commonly prescribed that trace quantities have been detected in the water supply. But, as I lay bare in my book, The Meaning of Madness, there is mounting evidence that the most commonly prescribed antidepressants are little more effective than dummy pills, which, unlike antidepressants, are free from adverse effects and cost. So, it might be said that, insofar as antidepressants work, they do so by magic—and, no doubt, would be more effective if accompanied by some kind of incantation.

Remedies that are perceived to be more potent have a stronger placebo effect. Perceptions of potency are influenced by factors such as the remedy’s size, shape, colour, route of administration, and general availability. A brightly coloured injection administered by a silver-haired professor of medicine can be expected to have a much stronger placebo effect, and therefore a much stronger overall effect, than the unremarkable over-the-counter tablet recommended by the teenager next door. This highlights the importance of the psychological, social, and cultural context in which a treatment or intervention is administered, and, more particularly, the significance of the therapeutic act or ritual. If the practitioner, the patient, and their society believe in the magic, then the magic is real by the very force of that shared belief.

No wonder the magician, the priest, and the healer used to be one and the same person—and, in many societies, still is. Like religion, magic may represent a response to anxiety, distress, and a feeling of inadequacy or impotence, especially in the face of natural disaster. And like religion, it may represent a spiritual path, akin, perhaps, to a martial art, which also involves concentrating the mind, channelling instinctual drives, and leveraging forces.

But beyond all that, magic, whether it works or not, is an external projection of the human psyche, an external projection of our internal or psychological truth, which is why it features so prominently in fiction. Fairy tales often begin with a formulation such as, ‘Once upon a time in dreamland’, and magic is that dreamland. Like dreams, magic makes use of condensed symbols, and, like dreams, it is a kind of wish fulfilment.

In the same vein, magic might be compared with mental states such as psychosis and neurosis, which, like dreams, can also involve condensed symbols and wish fulfilment. Sigmund Freud linked magical rituals and spells with neurotic and obsessional thought processes, and there are arguably some parallels with compulsive acts, which are performed in response to obsessional thoughts or according to rules that must be rigidly applied.

Magic is, arguably, on a spectrum with madness, and magical thinking is especially prominent in schizotypy, or schizotypal personality disorder, which predisposes to schizophrenia, and also in shamanism. As discussed in my related article on the history of magic, Plato distinguished between madness resulting from human illness and madness arising from a divinely inspired release from normally accepted behaviour. In Plato’s Phaedrus, Socrates says that this divinely inspired madness has four forms: mysticism, inspiration, poetry, and love. Love, according to Socrates, is not a god, as most people think, but a great spirit [daimon] that intermediates between gods and men.

Similarly, in The Sorcerer and his Magic (1963), the anthropologist Claude Lévi-Strauss argues that magic is a mediator between normal thought processes (common sense, reason, science…), which suffer from a marked deficit of meaning, and pathological thought processes, which abound with meaning:

From any non-scientific perspective (and here we can exclude no society), pathological and normal thought processes are complementary rather than opposed. In a universe which it strives to understand but whose dynamics it cannot fully control, normal thought continually seeks the meaning of things which refuse to reveal their significance. So-called pathological thought, on the other hand, overflows with emotional interpretations and overtones, in order to supplement an otherwise deficient reality… We might borrow from linguistics and say that so-called normal thought always suffers from a deficit of meaning, whereas so-called pathological thought (in at least some of its manifestations) disposes of a plethora of meaning. Through collective participation in shamanistic curing, a balance is established between these two complementary situations.

Some of my regular readers may have wondered why I turned my pen to so apparently frivolous a subject as magic. But we now know that magic means much more than it may at first seem. Aside from its links with madness and with healing, it is a mirror of the mind, and even, like love or beauty, and science and religion, a mode of belonging to the world.