In Plato’s Cratylus, on the philosophy of language, Socrates says that the Greek word for truth, aletheia, is a compression of the phrase “a wandering that is divine”.

Since Plato, many thinkers have spoken of truth and God in the same breath, and truth has also been linked with concepts such as justice, power, and freedom. According to John the Apostle, Jesus said to the Jews: “And ye shall know the truth, and the truth shall make you free.”

Today, the belief in God may be dying, but what about truth? Rudy Giuliani, Donald Trump’s personal lawyer, claimed that “truth isn’t truth”, while Kellyanne Conway, Trump’s counsellor, presented the public with what she called “alternative facts”.

Over in the U.K. in the run-up to the Brexit referendum, Michael Gove, then Justice Secretary and Lord Chancellor, opined that people “have had enough of experts”. Accused by the father of a sick child of visiting a hospital for a press opportunity, Prime Minister Boris Johnson replied, “There’s no press here”—while being filmed by a BBC camera crew.

The anatomy of a lie

What constitutes a lie? A lie is not simply an untruth. For centuries, people taught their children that the earth was at the centre of the universe. This was not a lie, insofar as they believed it to be true.

For something to be a lie, the person putting it out has to believe that it is false, even if, by chance, it happens to be true. If I tell you, “I’m not actually my father’s biological son”, believing that I am, and it so happens that I am not, I am still telling a lie.

Of course, it could be that I am being sarcastic, or joking—and, if I have made this sufficiently clear, I could not be counted as lying. For my statement to be a lie, it is not enough that I believe it to be false. I must also intend you to believe that it is true, that is, I must also intend to deceive you. If my intention in deceiving you is a good one, I am telling a white lie; if it is a bad one, I am telling a black lie; and if it is a bit of both, a blue lie.

When Olympias told her son Alexander the Great that his father was not Philip of Macedon but Zeus himself, she would only have been lying if (1) she believed this to be false, and (2) she intended to deceive Alexander. Olympias—who, according to Plutarch, slept with snakes in her bed—probably did believe it to be true, which highlights an important problem with lying, namely, that people can believe the most fantastical things.

A special case is when someone tells the naked truth, intending others to interpret it as a lie or joke. In Game of Thrones, after killing the Freys, Arya Stark runs into some Lannister soldiers, who share with her their meal of roast rabbit and blackberry wine. When one of the soldiers (not the one played by Ed Sheeran) asks, “So why is a nice girl on her own going to King’s Landing?” Arya replies, point blank, “I’m going to kill the queen.” After an awkward silence, everyone including Arya bursts out laughing.

If I am late to a dinner party, I can tell a small lie about some heavy traffic, or I can tell a bolder lie about being pushed into a muddy ditch by a chihuahua and having to go home to get changed. The more unusual and imaginative (and embarrassing) the lie, the more likely it is to be believed.

Or I could try instead to hide the lie. For example, I might lie by omission or “mental reservation”: “Sorry, I had a flat tyre” (last month). Or I might lie by equivocation (playing on words), as Bill Clinton famously did when he stated, “I did not have sexual relations with that woman, Monica Lewinsky.”

A special kind of lie is the bluff, which involves pretending to have an asset or intention or position that one does not actually have. An infamous example of a bluff is former prime minister Theresa May’s Brexit mantra that “no deal is better than a bad deal”.

Lies versus bullsh*t

Is there any difference between telling lies and talking bullsh*t? According to the philosopher Harry Frankfurt, lies differ from bullsh*t in that liars must track the truth in order to conceal it, whereas bullsh*tters have no regard or sensitivity for the truth or even for what their intended audience believes, so long as this audience is convinced or carried by their rhetoric.

Bullsh*tters will say whatever it takes, from moment to moment, to limp on to the next moment.

For Frankfurt:

“Someone who lies and someone who tells the truth are playing on opposite sides, so to speak, in the same game. Each responds to the facts as he understands them, although the response of the one is guided by the authority of the truth, while the response of the other defies that authority and refuses to meet its demands. The bullsh*tter ignores these demands altogether. He does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullsh*t is a greater enemy of the truth than lies are.”

Pathological lying

Pathological lying—also sometimes called compulsive lying or mythomania—is a controversial construct, and not tightly defined. It refers to habitual lying, typically for no discernible external gain. It is often, although not always, a feature of the four Cluster B personality disorders, namely, borderline personality disorder, histrionic personality disorder, narcissistic personality disorder, and antisocial personality disorder, and it appears in Factor One of the Psychopathy Checklist.

Like pathological lying, most lying is actually carried out for internal, or emotional, gain: to attract attention or sympathy, or to alleviate feelings of abandonment, rejection, or worthlessness.

We often lie to ourselves and to others from a position of vulnerability: we lie not out of strength or smartness, but out of need and necessity.

The philosophy of lying

St Augustine’s treatise on lying begins with, “There is a great question about lying…” [Magna quæstio est de mendacio…].

It may be permissible to lie when the positive consequences clearly outweigh any negative consequences. Thus, it may be reasonable to lie in a life and death situation, for instance, to save someone from being discovered by a murderer. And it may be reasonable to lie if the person being lied to has forfeited their right to the truth, for example, by threatening violence.

But such situations are few and far between. Much more common are the small white lies that lubricate our social interactions, such as greeting acquaintances with “good to see you” and starting a letter to a stranger or antagonist with “Dear”.

Outside vis major and social convention, it is usually a bad idea to lie. In the fifth century BCE, Herodotus wrote that from their fifth year to their twentieth, the Persians of the Pontus were instructed in three things: “to ride a horse, to draw a bow, and to speak the Truth.”

“The most disgraceful thing in the world [the Persians] think is to tell a lie; the next worst, to owe a debt: because, among other reasons, the debtor is obliged to tell lies.”

The first person to suffer from a lie is none other than the liar. Lying feels bad and damages pride and self-esteem. It is a slippery slope that leads to further and greater lies and other ethical violations. Having told a lie, the liar may need a great deal of thought and exertion and sacrifice to avoid being found out. If found out (or even merely suspected), the liar loses authority and credibility, undermines their reputation and relationships, and may suffer further sanctions, including being lied to in return. Last but not least, by keeping critical issues under the radar, lying prevents them from being addressed and dealt with.

And then there is the harm to others. To lie is to treat people as means-to-an-end rather than as ends-in-themselves, which is why being lied to is experienced as disrespectful and demeaning. It also leads people to act on false information, which can have untold and unforeseen consequences.

When faced with a choice between a life of limitless pleasure as a detached brain in a vat, and a genuine, human life along with all its struggle and suffering, most people opt for the latter. This suggests that we value truth for its own sake, as well as for its utility. To deny us a part of reality is therefore to impoverish our life.

There is a line of reasoning that, since the natural end of speech is to communicate the thoughts of the speaker, lying is a perversion of language. Curiously, language runs into serious metaphysical difficulties as soon as a lie is introduced. Consider the sentence: “This statement is false.” If the statement is false, it is true; but if it is true, it is not false.
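The difficulty can be put in a line of notation (a standard formalization, not in the original text): if $L$ is the sentence “This statement is false”, then $L$ asserts its own untruth, so that

\[
L \leftrightarrow \lnot L,
\]

a biconditional that no assignment of true or false to $L$ can satisfy, which is why the liar sentence admits no coherent truth value.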

The strongest argument against lying is perhaps that it could not be made into a universal principle. If everyone started lying here and there and everywhere, everything would quickly come apart. For just this reason, Plato bans even poetry from his ideal state, reasoning that poetry is “thrice removed from the truth”.

It’s worrying that we as a society are increasingly tolerant of lies. When people take to lying, they have to tell more and more lies to shore up their earlier lies. This tangled web we weave undermines trust, to the point that we no longer believe anything, least of all the truth.

In the fifth century BCE, the Persian King of Kings Darius the Great had the following advice engraved for his successor Xerxes:

“Thou who shalt be king hereafter, protect thyself vigorously from the Lie; the man who shall be a lie-follower, him do thou punish well, if thus thou shalt think. May my country be secure!”

If Darius knew it then, why do we not know it now?

‘Hypersanity’ is not a common or accepted term. But neither did I make it up. I first came across the concept while training in psychiatry, in The Politics of Experience and the Bird of Paradise (1967) by R D Laing. In this book, the Scottish psychiatrist presented ‘madness’ as a voyage of discovery that could open out onto a free state of higher consciousness, or hypersanity. For Laing, the descent into madness could lead to a reckoning, to an awakening, to ‘break-through’ rather than ‘breakdown’.

A few months later, I read C G Jung’s autobiography, Memories, Dreams, Reflections (1962), which provided a vivid case in point. In 1913, on the eve of the Great War, Jung broke off his close friendship with Sigmund Freud, and spent the next few years in a troubled state of mind that led him to a ‘confrontation with the unconscious’.

As Europe tore itself apart, Jung gained first-hand experience of psychotic material in which he found ‘the matrix of a mythopoeic imagination which has vanished from our rational age’. Like Gilgamesh, Odysseus, Heracles, Orpheus and Aeneas before him, Jung travelled deep down into an underworld where he conversed with Salome, an attractive young woman, and with Philemon, an old man with a white beard, the wings of a kingfisher and the horns of a bull. Although Salome and Philemon were products of Jung’s unconscious, they had lives of their own and said things that he had not previously thought. In Philemon, Jung had at long last found the father-figure that both Freud and his own father had failed to be. More than that, Philemon was a guru, and prefigured what Jung himself was later to become: the wise old man of Zürich. As the war burnt out, Jung re-emerged into sanity, and considered that he had found in his madness ‘the prima materia for a lifetime’s work’.

The Laingian concept of hypersanity, though modern, has ancient roots. Once, upon being asked to name the most beautiful of all things, Diogenes the Cynic (412-323 BCE) replied parrhesia, which in Ancient Greek means something like ‘uninhibited thought’, ‘free speech’, or ‘full expression’. Diogenes used to stroll around Athens in broad daylight brandishing a lit lamp. Whenever curious people stopped to ask what he was doing, he would reply: ‘I am just looking for a human being’ – thereby insinuating that the people of Athens were not living up to, or even much aware of, their full human potential.

After being exiled from his native Sinope for having defaced its coinage, Diogenes emigrated to Athens, took up the life of a beggar, and made it his mission to deface – metaphorically this time – the coinage of custom and convention that was, he maintained, the false currency of morality. He disdained the need for conventional shelter or any other such ‘dainties’, and elected to live in a tub and survive on a diet of onions. Diogenes proved to the later satisfaction of the Stoics that happiness has nothing whatsoever to do with a person’s material circumstances, and held that human beings had much to learn from studying the simplicity and artlessness of dogs, which, unlike human beings, had not complicated every simple gift of the gods.

The term ‘cynic’ derives from the Greek kynikos, which is the adjective of kyon or ‘dog’. Once, upon being challenged for masturbating in the marketplace, Diogenes remarked that he wished it were as easy to relieve hunger by rubbing an empty stomach. When asked, on another occasion, where he came from, he replied: ‘I am a citizen of the world’ (cosmopolites), a radical claim at the time, and the first recorded use of the term ‘cosmopolitan’. As he approached death, Diogenes asked for his mortal remains to be thrown outside the city walls for wild animals to feast upon. After his death in the city of Corinth, the Corinthians erected to his glory a pillar surmounted by a dog of Parian marble.

Jung and Diogenes came across as insane by the standards of their day. But both men had a depth and acuteness of vision that their contemporaries lacked, and that enabled them to see through the facades of ‘sanity’ around them. Both psychosis and hypersanity place us outside society, making us seem ‘mad’ to the mainstream. Both states attract a heady mixture of fear and fascination. But whereas mental disorder is distressing and disabling, hypersanity is liberating and empowering.

After reading The Politics of Experience, the concept of hypersanity stuck in my mind, not least as something that I might aspire to for myself. But if there is such a thing as hypersanity, the implication is that mere sanity is not all it’s cracked up to be, a state of dormancy and dullness with less vital potential even than madness. This I think is most apparent in people’s frequently suboptimal – if not frankly inappropriate – responses, both verbal and behavioural, to the world around them. As Laing puts it:

The condition of alienation, of being asleep, of being unconscious, of being out of one’s mind, is the condition of the normal man.

Society highly values its normal man. It educates children to lose themselves and to become absurd, and thus to be normal.

Normal men have killed perhaps 100,000,000 of their fellow normal men in the last 50 years.

Many ‘normal’ people suffer from not being hypersane: they have a restricted worldview, confused priorities, and are wracked by stress, anxiety and self-deception. As a result, they sometimes do dangerous things, and become fanatics or fascists or otherwise destructive (or not constructive) people. In contrast, hypersane people are calm, contained and constructive. It is not just that the ‘sane’ are irrational but that they lack scope and range, as though they’ve grown into the prisoners of their arbitrary lives, locked up in their own dark and narrow subjectivity. Unable to take leave of their selves, they hardly look around them, barely see beauty and possibility, rarely contemplate the bigger picture – and all, ultimately, for fear of losing their selves, of breaking down, of going mad, using one form of extreme subjectivity to defend against another, as life – mysterious, magical life – slips through their fingers.

We could all go mad; in a way, we already are, minus the promise. But what if there were another route to hypersanity, one that, compared with madness, was less fearsome, less dangerous, and less damaging? What if, as well as a backdoor way, there were also a royal road strewn with sweet-scented petals? After all, Diogenes did not exactly go mad. Neither did other hypersane people such as Socrates and Confucius, although the Buddha did suffer, in the beginning, with what might today be classed as depression.

Besides Jung, are there any modern examples of hypersanity? Those who escaped from Plato’s cave of shadows were reluctant to crawl back down and involve themselves in the affairs of men, and most hypersane people, rather than courting the limelight, might prefer to hide out in their back gardens. But a few do rise to prominence for the difference that they felt compelled to make, people such as Nelson Mandela and Temple Grandin. And the hypersane are still among us: from the Dalai Lama to Jane Goodall, there are many candidates. While they might seem to be living in a world of their own, this is only because they have delved more deeply into the way things are than those ‘sane’ people around them.

Insight, and how to improve cognitive flexibility

‘Insight’ is sometimes used to mean something like ‘self-awareness’, including awareness of our thought processes, beliefs, desires, emotions, and so on, and how they might relate to truth or usefulness. Of course, self-awareness comes by degrees. Owing to chemical receptors in their tendrils, vining plants know not to coil around themselves, and in that much can be said to have awareness of self and not-self. Children begin to develop reflective self-awareness at around 18 months of age, enabling them to recognize themselves in pictures and mirrors.

But ‘insight’ is also used to mean something like ‘penetrating discernment’, especially in cases when a solution to a previously intractable problem suddenly presents itself—and it is on this particular meaning of the word that I now want to focus.

Such ‘aha moments’, epitomized by Archimedes’ cry of Eureka! Eureka! (Gr., ‘I found it! I found it!’), involve seeing something familiar in a new light or context, particularly a brighter or broader one, leading to a novel perspective and positive emotions such as joy, enthusiasm, and confidence. It is said that, after stepping into his bath, Archimedes noticed the water level rising, and suddenly understood that the volume of water displaced corresponded to the volume of the part of his body that had been submerged. Lesser examples of aha moments include suddenly understanding a joke, or suddenly perceiving the other aspect of a reversible image such as the duck/rabbit optical illusion. Aha moments result primarily from unconscious and automatic processes, and we tend, when working on insight problems, to look away from sources of visual stimulus.

Aha moments ought to be distinguished from uh-oh moments, in which we suddenly become aware of an unforeseen problem, and from d’oh moments, popularized by Homer Simpson, when an unforeseen problem hits us and/or we have a flash of insight into our lack of insight.

‘Thinking out of the box’ is a significant cognitive achievement. Once we have understood something in one way, it is very difficult to see it in any other way, even in the face of strong contradictory evidence. In When Prophecy Fails (1956), Leon Festinger discussed his experience of infiltrating a UFO doomsday cult whose leader had prophesied the end of the world. When the end of the world predictably failed to materialize, most of the cult members dealt with the dissonance that arose from the cognitions ‘the leader said the world would end’ and ‘the world did not end’ not by abandoning the cult or its leader, as you might expect, but by introducing the rationalization that the world had been saved by the strength of their faith!

Very often, to see something in a different light also means to see ourselves and the whole world in that new light, which can threaten and undermine our sense of self. It is more a matter of the emotions than of reason, which explains why even leading scientists can struggle with perceptual shifts. According to the physicist Max Planck, “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” Or to put it more pithily, science advances one funeral at a time.

Even worse, strong contradictory evidence, or attempts to convince us otherwise, can, in fact, be counterproductive and entrench our existing beliefs—which is why as a psychiatrist I rarely challenge my patients or indeed anyone directly. You don’t have to take my word for it: in one recent study, supplying ‘corrective information’ to people with serious concerns about the adverse effects of the flu jab actually made them less willing to receive it.

So, short of dissolving our egos like a zen master, what can we do to improve our cognitive flexibility? Of course, it helps to have the tools of thought, including language fluency and multiple frames of reference as given by knowledge and experience. But much more important is to develop that first sense of ‘insight’, namely, insight as self-awareness.

On a more day-to-day basis, we need to create the time and conditions for allowing new connections to form. My own associative thinking is much more active when I’m both well-rested and at rest, for example, standing under the shower or ambling in the park. As Chairman and CEO of General Electric, Jack Welch spent an hour each day in what he called ‘looking out of the window time’. August Kekulé claimed to have discovered the ring structure of the benzene molecule while daydreaming about a snake biting its own tail.

Time is a very strange thing, and not at all linear: sometimes, the best way of using it is to waste it.

Nyhan B & Reifler J (2015): Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information. Vaccine 33(3):459-464.

An overview of the philosophy of science

What is science? To call a thing ‘scientific’ or ‘scientifically proven’ is to lend that thing instant credibility. It is sometimes said that 90% of scientists who ever lived are alive today—despite a relative lack of scientific progress, and even regress as the planet comes under increasing strain. Especially in Northern Europe, more people believe in science than in religion, and attacking science can raise the same old, atavistic defences. In a bid to emulate or at least evoke the apparent success of physics, many areas of study have claimed the mantle of science: ‘economic science’, ‘political science’, ‘social science’, and so on. Whether or not these disciplines are true, bona fide sciences is a matter for debate, since there are no clear or reliable criteria for distinguishing a science from a non-science.

What might be said is that all sciences, unlike, say, magic or myth, share certain assumptions which underpin the scientific method, in particular, that there is an objective reality governed by uniform laws and that this reality can be discovered by systematic observation. A scientific experiment is basically a repeatable procedure designed to help support or refute a particular hypothesis about the nature of reality. Typically, it seeks to isolate the element under investigation by eliminating or ‘controlling for’ other variables that may be confused or ‘confounded’ with the element under investigation. Important assumptions or expectations include:

  • that all potential confounding factors can be identified and controlled for;
  • that any measurements are appropriate and sensitive to the element under investigation;
  • that the results are analysed and interpreted rationally and impartially.

Still, many things can go wrong with the experiment. With, for example, drug trials, experiments that have not been adequately randomized (when subjects are randomly allocated to test and control groups) or adequately blinded (when information about the drug being administered/received is withheld from the investigator/subject) significantly exaggerate the benefits of treatment. Investigators may consciously or subconsciously withhold or ignore data that does not meet their desires or expectations (‘cherry picking’) or stray beyond their original hypothesis to look for chance or uncontrolled correlations (‘data dredging’). A promising result, which might have been obtained by chance, is much more likely to be published than an unfavourable one (‘publication bias’), creating the false impression that most studies have been positive and therefore that the drug is much more effective than it actually is. One damning systematic review found that, compared to independently funded drug trials, drug trials funded by pharmaceutical companies are less likely to be published, while those that are published are four times more likely to feature positive results for the products of their sponsors!
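To see how publication bias alone can distort the record, consider the following minimal simulation, a sketch under purely illustrative assumptions (a drug with a true effect of 0.1 standard deviations, trials of 50 subjects per arm, and a literature that publishes only statistically significant positive results; none of these figures come from any study cited here):

# Sketch: how publication bias inflates apparent effect size.
# All numbers are illustrative assumptions, not from the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n, trials = 0.1, 50, 2000

all_effects, published = [], []
for _ in range(trials):
    drug = rng.normal(true_effect, 1.0, n)   # treatment arm
    placebo = rng.normal(0.0, 1.0, n)        # control arm
    observed = drug.mean() - placebo.mean()
    _, p = stats.ttest_ind(drug, placebo)
    all_effects.append(observed)
    if p < 0.05 and observed > 0:            # only 'positive' findings see print
        published.append(observed)

print(f"true effect:                 {true_effect:.2f}")
print(f"mean effect, all trials:     {np.mean(all_effects):.2f}")
print(f"mean effect, published only: {np.mean(published):.2f}")

Because only the trials that happen to overshoot clear the significance bar, the published average overstates the true effect several-fold, even though every individual trial was conducted honestly.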

So much for the easy, superficial problems. But there are deeper, more intractable philosophical problems as well. For most of recorded history, ‘knowledge’ was based on authority, especially that of the Bible and whitebeards such as Aristotle, Ptolemy, and Galen. But today, or so we like to think, knowledge is much more secure because grounded in observation. Leaving aside that much of what counts as scientific knowledge cannot be directly observed, and that our species-specific senses are partial and limited, there is, in the phrase of Norwood Russell Hanson, ‘more to seeing than meets the eyeball’:

Seeing is an experience. A retinal reaction is only a physical state… People, not their eyes, see. Cameras and eyeballs are blind.

Observation involves both perception and cognition, with sensory information filtered, interpreted, and even distorted by factors such as beliefs, experience, expectations, desires, and emotions. The finished product of observation is then encoded into a statement of fact consisting of linguistic symbols and concepts, each one with its own particular history, connotations, and limitations. All this means that it is impossible to test a hypothesis in isolation of all the background theories, frameworks, and assumptions from which it issues.

This is important, because science principally proceeds by induction, that is, by the observation of large and representative samples. But even if observation could be objective, observations alone, no matter how accurate and exhaustive, cannot in themselves establish the validity of a hypothesis. How do we know that ‘flamingos are pink’? Well, we don’t know for sure. We merely suppose that they are because, so far, every flamingo that we have seen or heard about has been pink. But the existence of a non-pink flamingo is not beyond the bounds of possibility. A turkey that is fed every morning might infer by induction that it will be fed every morning, until on Christmas Eve the goodly farmer picks it up and wrings its neck. Induction only ever yields probabilistic truths, and yet is the basis of everything that we know, or think that we know, about the world we live in. Our only justification for induction is that it has worked before, which is, of course, an inductive proof, tantamount to saying that induction works because induction works! For just this reason, induction has been called ‘the glory of science and the scandal of philosophy’.

It may be that science proceeds, not by induction, but by abduction or finding the most likely explanation for the observations—as, for example, when a physician is faced with a constellation of symptoms and formulates a ‘working diagnosis’ that more or less fits the clinical picture. But ultimately abduction is no more than a type of induction. Both abduction and induction are types of ‘backward reasoning’, formally equivalent to the logical fallacy of ‘affirming the consequent’:

  • If A then B. B. Therefore A.
  • “If I have the flu, then I have a fever. I have a fever. Therefore, I have the flu.”

But, of course, I could have meningitis or malaria or any number of other conditions. How to decide between them? At medical school, we were taught that ‘common things are common’. This is a formulation of Ockham’s razor, which involves choosing the simplest available explanation. Ockham’s razor, also called the law of parsimony, is often invoked as a principle of inductive reasoning, but, of course, the simplest available explanation is not necessarily the best or correct one. What’s more, we may be unable to decide which is the simplest explanation, or even what ‘simple’ might mean in context. Some people think that God is the simplest explanation for creation, while others think Him rather far-fetched. Still, there is some wisdom in Ockham’s razor: while the simplest explanation may not be the correct one, neither should we labour, or keep on ‘fixing’, a preferred hypothesis to save it from a simpler and better explanation. I should mention in passing that the psychological equivalent of Ockham’s razor is Hanlon’s razor: never attribute to malice that which can be adequately explained by neglect, incompetence, or stupidity.

Simpler hypotheses are also preferable in that they are easier to disprove, or falsify. To rescue it from the Problem of Induction, Karl Popper argued that science proceeds not inductively but deductively, by formulating a hypothesis and then seeking to falsify it.

  • ‘All flamingos are pink.’ Oh, but look, here’s a flamingo that’s not pink. Therefore, it is not the case that all flamingos are pink.
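Popper’s asymmetry can be given a standard formalization (a sketch in propositional notation, not Popper’s own): where $H$ is the hypothesis and $O$ an observation it predicts,

\[
\text{modus tollens (valid):}\ \frac{H \to O \qquad \lnot O}{\lnot H}
\qquad\qquad
\text{affirming the consequent (invalid):}\ \frac{H \to O \qquad O}{H}
\]

No number of confirming observations deductively proves $H$, but a single contrary observation deductively refutes it, which is why falsification, unlike verification, rests on secure deductive ground.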

On this account, theories such as those of Freud and Marx are not scientific in so far as they cannot be falsified. But if Popper is correct that science proceeds by falsification, science could never tell us what is, but only ever what is not. Even if we did arrive at some truth, we could never know for sure that we had arrived. Another issue with falsification is that, when the hypothesis conflicts with the data, it could be the data rather than the hypothesis that is at fault—in which case it would be a mistake to reject the hypothesis. Scientists need to be dogmatic enough to persevere with a preferred hypothesis in the face of apparent falsifications, but not so dogmatic as to cling on to their preferred hypothesis in the face of robust and repeated falsifications. It’s a delicate balance to strike.

For Thomas Kuhn, scientific hypotheses are shaped and restricted by the worldview, or paradigm, within which the scientist operates. Most scientists are blind to the paradigm and unable to see across or beyond it. If data emerges that conflicts with the paradigm, it is usually ignored or explained away. But nothing lasts forever, and eventually the paradigm weakens and is overturned. Examples of such ‘paradigm shifts’ include the transition from Aristotelian mechanics to classical mechanics, the transition from miasma theory to the germ theory of disease, and the transition from ‘clinical judgement’ to evidence-based medicine. Of course, a paradigm does not die overnight. Reason is, for the most part, a tool that we use to justify what we are already inclined to believe, and a human life cannot easily accommodate more than one paradigm. In the words of Max Planck, ‘A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.’ Or to put it more pithily, science advances one funeral at a time.

In The Structure of Scientific Revolutions, Kuhn argued that rival paradigms offer competing and irreconcilable accounts of reality, suggesting that there are no independent standards by which they might be judged against one another. Imre Lakatos sought to reconcile Popper and Kuhn, and spoke of programmes rather than paradigms. A programme is based on a hard core of theoretical assumptions accompanied by more modest auxiliary hypotheses formulated to protect the hard core against any conflicting data. While the hard core cannot be abandoned without jeopardising the programme, auxiliary hypotheses can be adapted to protect the hard core against evolving threats, rendering the hard core unfalsifiable. A progressive programme is one in which changes to auxiliary hypotheses lead to greater predictive power, whereas a degenerative programme is one in which these ad hoc elaborations become sterile and cumbersome. A degenerative programme, says Lakatos, is one which is ripe for replacement. Though very successful in its time, classical mechanics, with Newton’s three laws of motion at its core, was gradually superseded by the special theory of relativity.

For Paul Feyerabend, Lakatos’s theory makes a mockery of any pretence at scientific rationality. Feyerabend went so far as to call Lakatos a ‘fellow anarchist’, albeit one in disguise. For Feyerabend, there is no such thing as a or the scientific method: anything goes, and as a form of knowledge science is no more privileged than magic, myth, or religion. More than that, science has come to occupy the same place in the human psyche as religion once did. Although science began as a liberating movement, it grew dogmatic and repressive, more of an ideology than a rational method that leads to ineluctable progress. In the words of Feyerabend:

Knowledge is not a series of self-consistent theories that converges toward an ideal view; it is rather an ever increasing ocean of mutually incompatible (and perhaps even incommensurable) alternatives, each single theory, each fairy tale, each myth that is part of the collection forcing the others into greater articulation and all of them contributing, via this process of competition, to the development of our consciousness.

‘My life’, wrote Feyerabend, ‘has been the result of accidents, not of goals and principles. My intellectual work forms only an insignificant part of it. Love and personal understanding are much more important. Leading intellectuals with their zeal for objectivity kill these personal elements. They are criminals, not the leaders of mankind.’

Every paradigm that has come and gone is now deemed to have been false, inaccurate, or incomplete, and it would be ignorant or arrogant to assume that our current ones might amount to the truth, the whole truth, and nothing but the truth. If our aim in doing science is to make predictions, enable effective technology, and in general promote successful outcomes, then this may not matter all that much, and we continue to use outdated or discredited theories such as Newton’s laws of motion so long as we find them useful. But it would help if we could be more realistic about science, and, at the same time, more rigorous, critical, and imaginative in conducting it.

Originally published on Psychology Today

For Aristotle, our unique capacity to reason is what defines us as human beings. Therefore, our happiness, or our flourishing, consists in leading a life that enables us to use and develop our reason, and that is in accordance with reason.

Article 1 of the Universal Declaration of Human Rights (1948) states that all human beings are ‘endowed with reason’, and it has long been held that reason is something that God gave us, that we share with God, and that is the divine, immortal element in us.

At the dawn of the Age of Reason, Descartes doubted everything except his ability to reason. ‘Because reason’, he wrote, ‘is the only thing that makes us men, and distinguishes us from the beasts, I would prefer to believe that it exists, in its entirety, in each of us…’

But what is reason? Reason is more than mere associative thinking, more than the mere ability to move from one idea (such as storm clouds) to another (such as imminent rain). Associative thinking can result from processes other than reason, such as instinct, learning, or intuition. Reason, in contrast, involves providing reasons—ideally good reasons—for an association. It involves using a system of representation such as thought or language to derive or arrive at an association.

Reason is often conflated with logic, also known as formal logic or deductive reasoning. At the very least, logic is seen as the purest form of reason. Yes, logic is basically an attempt to codify the most reliable or fail-safe forms of reasoning. But logic, or at any rate modern logic, is concerned merely with the validity of arguments, with the right relationship between premises and conclusion. It is not concerned with the actual truth or falsity of the premises or the applicability of the conclusion. Reason, in contrast, is a much broader psychological activity which also involves assessing evidence, creating and testing hypotheses, weighing competing arguments, evaluating means and ends, developing and applying heuristics (mental shortcuts), and so on. All this requires the use of judgement, which is why reason, unlike logic, cannot be delegated to a computer, and also why it so often fails to persuade. Logic is but a tool of reason, and, in fact, it can be reasonable to accept something that is or appears to be illogical.

It is often thought, not least in educational establishments, that ‘logic’ is able to provide immediate certainty and the authority or credibility that goes with it. But logic is a lot more limited than many people imagine. Logic essentially consists in a set of operations for deriving a truth from other truths. In a sense, it merely makes explicit that which was previously implicit. It brings nothing new to the table. The conclusion merely flows from the premises as their inevitable consequence, for example:

  1. All birds have feathers. (Premise 1)
  2. Woodpeckers are birds. (Premise 2)
  3. Therefore, woodpeckers have feathers. (Conclusion)

Another issue with logic is that it relies on premises that are founded, not on logic itself, but on inductive reasoning. How do we know that ‘all birds have feathers’? Well, we don’t know for sure. We merely suppose that they do because, so far, every bird that we have seen or heard about has had feathers. But the existence of birds without feathers, if only in the fossil record, is not beyond the bounds of possibility. Many avian species are hatched naked, and a featherless bird called Rhea recently took the Internet by storm.

Inductive reasoning only ever yields probabilistic ‘truths’, and yet it is the basis of everything that we know or think that we know about the world we live in. Our only justification for induction is that it has worked in the past, which is, of course, an inductive proof, tantamount to saying that induction works because induction works! To rescue it from this Problem of Induction, Karl Popper argued that science proceeds not inductively but deductively, by making bold claims and then seeking to falsify those claims. But if Popper is right, science could never tell us what is, but only ever what is not. Even if we did arrive at some truth, we could never know for sure that we had arrived. And while our current paradigms may represent some improvement on the ones that went before, it would be either ignorant or arrogant to presume that they amounted to the truth, the whole truth, and nothing but the truth.

Putting these inductive/deductive worries aside, reason is limited in reach, if not in theory then at least in practice. The movement of a simple pendulum is regular and easy to predict, but the movement of a double pendulum (a pendulum with another pendulum attached to its end) is, as can be seen on YouTube, extremely chaotic. Similarly, the interaction between two physical bodies such as the sun and the earth can be reduced to a simple formula, but the interaction between three physical bodies is much more complex—which is why the length of the lunar month is not a constant. But even this so-called Three-Body Problem is as nothing compared to the entanglement of human affairs. God, it is sometimes said, gave all the easy problems to the physicists.
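To make the chaos claim concrete, here is a minimal numerical sketch (my own illustration, not from the text, assuming unit masses and rod lengths and a fixed-step integrator): two double pendulums released from angles that differ by a billionth of a radian rapidly part company.

# Sketch: sensitive dependence on initial conditions in a double pendulum.
# Assumptions: unit masses and lengths, g = 9.81, the standard equations
# of motion, and fixed-step fourth-order Runge-Kutta integration.
import numpy as np

G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0

def deriv(s):
    # s = [theta1, omega1, theta2, omega2]
    t1, w1, t2, w2 = s
    d = t1 - t2
    den = 2*M1 + M2 - M2*np.cos(2*d)
    a1 = (-G*(2*M1 + M2)*np.sin(t1) - M2*G*np.sin(t1 - 2*t2)
          - 2*np.sin(d)*M2*(w2**2*L2 + w1**2*L1*np.cos(d))) / (L1*den)
    a2 = (2*np.sin(d)*(w1**2*L1*(M1 + M2) + G*(M1 + M2)*np.cos(t1)
          + w2**2*L2*M2*np.cos(d))) / (L2*den)
    return np.array([w1, a1, w2, a2])

def rk4_step(s, h):
    k1 = deriv(s)
    k2 = deriv(s + h/2*k1)
    k3 = deriv(s + h/2*k2)
    k4 = deriv(s + h*k3)
    return s + h/6*(k1 + 2*k2 + 2*k3 + k4)

a = np.array([2.0, 0.0, 2.0, 0.0])       # pendulum A: both arms at ~115 degrees
b = a + np.array([1e-9, 0.0, 0.0, 0.0])  # pendulum B: one angle nudged by 1e-9 rad
h = 0.001
for step in range(1, 20001):
    a, b = rk4_step(a, h), rk4_step(b, h)
    if step % 5000 == 0:
        print(f"t = {step*h:4.1f}s  angle gap = {abs(a[0] - b[0]):.2e} rad")

The gap between the two trajectories grows roughly exponentially until it is of the order of the motion itself: the system is fully deterministic, yet any measurement error, however small, soon swamps the prediction.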

The intricacies of human affairs often lead to a paralysis of reason, and we are left undecided, sometimes for years or even into the grave. To cut through all this complexity, we rely heavily on forces such as emotions and desires—which is why Aristotle’s Rhetoric on the art of arguing includes a detailed dissection of what used to be called the passions. Our emotions and desires define the aims or goals of our reasoning. They determine the parameters of any particular deliberation and carry to conscious attention only a small selection of all the available facts and alternatives. Brain-injured people with a diminished capacity for emotion find it especially hard to make decisions, as do people with apathy, which is a symptom of severe depression and other mental disorders. Relying so heavily on the emotions comes at a cost, which is, of course, that emotions aren’t rational and can distort reasoning. Fear alone can open the gate to all manner of self-deception. On the other hand, that emotions aren’t rational need not make them irrational. Some emotions are appropriate or justified, while others are not. This is why, as well as coming to grips with science, it is so important to educate our emotions.

Another shortcoming of reason is that it sometimes leads to unreasonable conclusions, or even contradicts itself. In On Generation and Corruption, Aristotle says that, while the opinions of certain thinkers appear to follow logically in dialectical discussion, ‘to believe them seems next door to madness when one considers the facts’. In Plato’s Lesser Hippias, Socrates manages to argue that people who commit injustice voluntarily are better than those who do it involuntarily, but then confesses that he sometimes thinks the opposite, and sometimes goes back and forth:

My present state of mind is due to our previous argument, which inclines me to believe that in general those who do wrong involuntarily are worse than those who do wrong voluntarily, and therefore I hope that you will be good to me, and not refuse to heal me; for you will do me a much greater benefit if you cure my soul of ignorance, than you would if you were to cure my body of disease.

The sophists of Classical Greece taught rhetoric to wealthy young men with ambitions of holding public office. Prominent sophists included Protagoras, Gorgias, Prodicus, Hippias, Thrasymachus, Callicles, and Euthydemus, all of whom feature as characters in Plato’s dialogues. Protagoras charged extortionate fees for his services. He once took on a pupil, Euathlus, on the understanding that he would be paid once Euathlus had won his first court case. However, Euathlus never won a case, and eventually Protagoras sued him for non-payment. Protagoras argued that if he won the case he would be paid, and if Euathlus won the case, he still would be paid, because Euathlus would have won a case. Euathlus retorted that if he won the case he would not have to pay, and if Protagoras won the case, he still would not have to pay, because he still would not have won a case!

Whereas philosophers such as Plato use reason to arrive at the truth, sophists such as Protagoras abuse reason to move mobs and enrich themselves. But we are, after all, social animals, and reason evolved more as a means of solving practical problems and influencing people than as a ladder to abstract truths. What’s more, reason is not a solitary but a collective enterprise: premises are at least partially reliant on the achievements of others, and we ourselves make much better progress when prompted and challenged by our peers. The principal theme of Plato’s Protagoras is the teachability of virtue. At the end of the dialogue, Socrates remarks that he began by arguing that virtue cannot be taught, but ended by arguing that virtue is no other than knowledge, and therefore that it can be taught. In contrast, Protagoras began by arguing that virtue can be taught, but ended by arguing that some forms of virtue are not knowledge, and therefore that they cannot be taught! Had they not debated, both men would have stuck with their original, crude opinions and been no better off.

Why does reason say ridiculous things and contradict itself? Perhaps the biggest problem is with language. Words and sentences can be vague or ambiguous. If you remove a single grain from a heap of sand, it is still a heap of sand. But what happens if you keep on repeating the process? Is a single remaining grain still a heap? If not, at what point did the heap go from being a heap to a non-heap? When the wine critic Jancis Robinson asked on Twitter what qualifies someone to call themselves a sommelier, she received at least a dozen different responses.

Another big problem is with the way we are. Our senses are crude and limited. More subtly, our minds come with built-in notions that may have served our species well but do not accurately or even approximately reflect reality. Zeno’s paradoxes, for example, flush out the limits of our understanding of something as rudimentary as movement. Some of Zeno’s paradoxes side with quantum theory in suggesting that space and time are discrete, while others side with the theory of relativity in suggesting that they are continuous. As far as I know (I am not a physicist), quantum theory and the theory of relativity remain unreconciled. Other concepts, such as infinity or what lies outside the universe, are simply beyond our ability to conceive.

A final sticking point is with self-referential statements, such as “This statement is false.” If the statement is false, it is true; but if it is true, it is not false.

In concluding, I want to make it very clear that I hold reason in the highest regard. It is, after all, the foundation of our peace and freedom, which are under constant threat from the forces of unreason. In highlighting its limits, I seek not to disparage or undermine it but to understand and use it better, even to exalt it.

‘The last function of reason’, said Blaise Pascal, ‘is to recognize that there is an infinity of things which are beyond it. It is but feeble if it does not see so far as to know this.’