An overview of the philosophy of science

What is science? To call a thing ‘scientific’ or ‘scientifically proven’ is to lend that thing instant credibility. It is sometimes said that 90% of scientists who ever lived are alive today—despite a relative lack of scientific progress, and even regress as the planet comes under increasing strain. Especially in Northern Europe, more people believe in science than in religion, and attacking science can raise the same old, atavistic defences. In a bid to emulate or at least evoke the apparent success of physics, many areas of study have claimed the mantle of science: ‘economic science’, ‘political science’, ‘social science’, and so on. Whether or not these disciplines are true, bona fide sciences is a matter for debate, since there are no clear or reliable criteria for distinguishing a science from a non-science.
What might be said is that all sciences, unlike, say, magic or myth, share certain assumptions which underpin the scientific method, in particular, that there is an objective reality governed by uniform laws and that this reality can be discovered by systematic observation. A scientific experiment is basically a repeatable procedure designed to help support or refute a particular hypothesis about the nature of reality. Typically, it seeks to isolate the element under investigation by eliminating or ‘controlling for’ other variables that may be confused or ‘confounded’ with the element under investigation. Important assumptions or expectations include that all potential confounding factors can be identified and controlled for; that any measurements are appropriate and sensitive to the element under investigation; and that the results are analysed and interpreted rationally and impartially.
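As a concrete illustration of why ‘controlling for’ confounders matters, here is a minimal sketch in Python (all numbers and behaviours are invented for illustration, not drawn from any real study). In this toy model the treatment does nothing at all, yet, when healthier people are more likely to take it, a naive comparison makes it look effective; random allocation breaks that link.

```python
import random

random.seed(1)

# Toy model (invented numbers): recovery depends only on baseline health,
# not on the treatment.
def recovery(health: float) -> float:
    return health  # the treatment itself does nothing in this model

def observational_study(n: int = 10_000) -> float:
    treated, untreated = [], []
    for _ in range(n):
        health = random.random()
        takes_drug = random.random() < health        # healthier people opt in
        (treated if takes_drug else untreated).append(recovery(health))
    return sum(treated) / len(treated) - sum(untreated) / len(untreated)

def randomized_trial(n: int = 10_000) -> float:
    treated, untreated = [], []
    for _ in range(n):
        health = random.random()
        takes_drug = random.random() < 0.5           # allocation by coin flip
        (treated if takes_drug else untreated).append(recovery(health))
    return sum(treated) / len(treated) - sum(untreated) / len(untreated)

print(f"Apparent benefit, self-selected groups: {observational_study():+.2f}")
print(f"Apparent benefit, randomized groups:    {randomized_trial():+.2f}")
```

The self-selected comparison reports a sizeable ‘benefit’ that is entirely due to the confounder, while the randomized comparison reports roughly zero, which is the truth in this toy world.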
Still, many things can go wrong with an experiment. In drug trials, for example, experiments that have not been adequately randomized (the random allocation of subjects to test and control groups) or adequately blinded (the withholding of information about the drug being administered/received from the investigator/subject) significantly exaggerate the benefits of treatment. Investigators may consciously or subconsciously withhold or ignore data that does not meet their desires or expectations (‘cherry picking’), or stray beyond their original hypothesis to look for chance or uncontrolled correlations (‘data dredging’). A promising result, which might have been obtained by chance, is much more likely to be published than an unfavourable one (‘publication bias’), creating the false impression that most studies have been positive and therefore that the drug is much more effective than it actually is. One damning systematic review found that, compared to independently funded drug trials, drug trials funded by pharmaceutical companies are less likely to be published, while those that are published are four times more likely to feature positive results for the products of their sponsors!
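A second minimal sketch (again with invented numbers) shows how publication bias alone can inflate an apparent effect: a drug with a small true benefit is tested in many noisy trials, and only the ‘promising’ results get written up.

```python
import random

random.seed(0)

TRUE_EFFECT = 0.1   # assumed true average benefit (illustrative)
NOISE = 1.0         # trial-to-trial variability (illustrative)
N_TRIALS = 1000

def run_trial() -> float:
    # Each trial's measured effect = true effect + random error.
    return random.gauss(TRUE_EFFECT, NOISE)

results = [run_trial() for _ in range(N_TRIALS)]
published = [r for r in results if r > 0.5]   # only 'promising' results get published

print(f"Mean effect across all trials:   {sum(results) / len(results):.2f}")
print(f"Mean effect in published trials: {sum(published) / len(published):.2f}")
```

Nothing fraudulent happens in any single trial here; the distortion comes entirely from which results make it into print.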
So much for the easy, superficial problems. But there are deeper, more intractable philosophical problems as well. For most of recorded history, ‘knowledge’ was based on authority, especially that of the Bible and whitebeards such as Aristotle, Ptolemy, and Galen. But today, or so we like to think, knowledge is much more secure because grounded in observation. Leaving aside that much of what counts as scientific knowledge cannot be directly observed, and that our species-specific senses are partial and limited, there is, in the phrase of Norwood Russell Hanson, ‘more to seeing than meets the eyeball’:
Seeing is an experience. A retinal reaction is only a physical state… People, not their eyes, see. Cameras and eyeballs are blind.
Observation involves both perception and cognition, with sensory information filtered, interpreted, and even distorted by factors such as beliefs, experience, expectations, desires, and emotions. The finished product of observation is then encoded into a statement of fact consisting of linguistic symbols and concepts, each one with its own particular history, connotations, and limitations. All this means that it is impossible to test a hypothesis in isolation from all the background theories, frameworks, and assumptions from which it issues.
This is important, because science principally proceeds by induction, that is, by the observation of large and representative samples. But even if observation could be objective, observations alone, no matter how accurate and exhaustive, cannot in themselves establish the validity of a hypothesis. How do we know that ‘flamingos are pink’? Well, we don’t know for sure. We merely suppose that they are because, so far, every flamingo that we have seen or heard about has been pink. But the existence of a non-pink flamingo is not beyond the bounds of possibility. A turkey that is fed every morning might infer by induction that it will be fed every morning, until on Christmas Eve the goodly farmer picks it up and wrings its neck. Induction only ever yields probabilistic truths, and yet is the basis of everything that we know, or think that we know, about the world we live in. Our only justification for induction is that it has worked before, which is, of course, an inductive proof, tantamount to saying that induction works because induction works! For just this reason, induction has been called ‘the glory of science and the scandal of philosophy’.
It may be that science proceeds, not by induction, but by abduction or finding the most likely explanation for the observations—as, for example, when a physician is faced with a constellation of symptoms and formulates a ‘working diagnosis’ that more or less fits the clinical picture. But ultimately abduction is no more than a type of induction. Both abduction and induction are types of ‘backward reasoning’, formally equivalent to the logical fallacy of ‘affirming the consequent’:
- If A then B. B. Therefore A.
- “If I have the flu, then I have a fever. I have a fever. Therefore, I have the flu.”
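To make the backward step concrete, here is a toy sketch in Python (the condition names are purely illustrative, not medical fact): the same consequent, a fever, follows from several different antecedents, so affirming the consequent amounts to promoting one candidate explanation over equally consistent rivals.

```python
# Several antecedents all imply the same consequent, so reasoning backwards
# from the consequent cannot single out one of them with certainty.
CONDITIONS_THAT_CAUSE_FEVER = {"flu", "meningitis", "malaria"}

def affirming_the_consequent(symptom: str) -> str:
    # 'If flu then fever. Fever. Therefore flu.' -- picks one antecedent
    # and treats it as established.
    return "flu"

def consistent_explanations(symptom: str) -> set[str]:
    # What the observation actually licenses: a set of rival hypotheses,
    # between which further evidence (or a razor) must decide.
    return CONDITIONS_THAT_CAUSE_FEVER if symptom == "fever" else set()

print(affirming_the_consequent("fever"))  # 'flu' -- unwarranted certainty
print(consistent_explanations("fever"))   # {'flu', 'meningitis', 'malaria'} (order may vary)
```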
But, of course, I could have meningitis or malaria or any number of other conditions. How to decide between them? At medical school, we were taught that ‘common things are common’. This is a formulation of Ockham’s razor, which involves choosing the simplest available explanation. Ockham’s razor, also called the law of parsimony, is often invoked as a principle of inductive reasoning, but, of course, the simplest available explanation is not necessarily the best or correct one. What’s more, we may be unable to decide which is the simplest explanation, or even what ‘simple’ might mean in context. Some people think that God is the simplest explanation for creation, while others think Him rather far-fetched. Still, there is some wisdom in Ockham’s razor: while the simplest explanation may not be the correct one, neither should we labour, or keep on ‘fixing’, a preferred hypothesis to save it from a simpler and better explanation. I should mention in passing that the psychological equivalent of Ockham’s razor is Hanlon’s razor: never attribute to malice that which can be adequately explained by neglect, incompetence, or stupidity.
Simpler hypotheses are also preferable in that they are easier to disprove, or falsify. To rescue science from the Problem of Induction, Karl Popper argued that it proceeds not inductively but deductively, by formulating a hypothesis and then seeking to falsify it.
- ‘All flamingos are pink.’ Oh, but look, here’s a flamingo that’s not pink. Therefore, it is not the case that all flamingos are pink.
On this account, theories such as those of Freud and Marx are not scientific in so far as they cannot be falsified. But if Popper is correct that science proceeds by falsification, science could never tell us what is, but only ever what is not. Even if we did arrive at some truth, we could never know for sure that we had arrived. Another issue with falsification is that, when the hypothesis conflicts with the data, it could be the data rather than the hypothesis that is at fault—in which case it would be a mistake to reject the hypothesis. Scientists need to be dogmatic enough to persevere with a preferred hypothesis in the face of apparent falsifications, but not so dogmatic as to cling on to their preferred hypothesis in the face of robust and repeated falsifications. It’s a delicate balance to strike.
For Thomas Kuhn, scientific hypotheses are shaped and restricted by the worldview, or paradigm, within which the scientist operates. Most scientists are blind to the paradigm and unable to see across or beyond it. If data emerges that conflicts with the paradigm, it is usually ignored or explained away. But nothing lasts forever, and eventually the paradigm weakens and is overturned. Examples of such ‘paradigm shifts’ include the transition from Aristotelian mechanics to classical mechanics, the transition from miasma theory to the germ theory of disease, and the transition from ‘clinical judgement’ to evidence-based medicine. Of course, a paradigm does not die overnight. Reason is, for the most part, a tool that we use to justify what we are already inclined to believe, and a human life cannot easily accommodate more than one paradigm. In the words of Max Planck, ‘A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.’ Or to put it more pithily, science advances one funeral at a time.
In The Structure of Scientific Revolutions, Kuhn argued that rival paradigms offer competing and irreconcilable accounts of reality, suggesting that there are no independent standards by which they might be judged against one another. Imre Lakatos sought to reconcile Popper and Kuhn, and spoke of programmes rather than paradigms. A programme is based on a hard core of theoretical assumptions accompanied by more modest auxiliary hypotheses formulated to protect the hard core against any conflicting data. While the hard core cannot be abandoned without jeopardising the programme, auxiliary hypotheses can be adapted to protect the hard core against evolving threats, rendering the hard core unfalsifiable. A progressive programme is one in which changes to auxiliary hypotheses lead to greater predictive power, whereas a degenerative programme is one in which these ad hoc elaborations become sterile and cumbersome. A degenerative programme, says Lakatos, is one which is ripe for replacement. Though very successful in its time, classical mechanics, with Newton’s three laws of motion at its core, was gradually superseded by the special theory of relativity.
For Paul Feyerabend, Lakatos’s theory makes a mockery of any pretence at scientific rationality. Feyerabend went so far as to call Lakatos a ‘fellow anarchist’, albeit one in disguise. For Feyerabend, there is no such thing as a, or the, scientific method: anything goes, and as a form of knowledge science is no more privileged than magic, myth, or religion. More than that, science has come to occupy the same place in the human psyche as religion once did. Although science began as a liberating movement, it grew dogmatic and repressive, more of an ideology than a rational method that leads to ineluctable progress. In the words of Feyerabend:
Knowledge is not a series of self-consistent theories that converges toward an ideal view; it is rather an ever increasing ocean of mutually incompatible (and perhaps even incommensurable) alternatives, each single theory, each fairy tale, each myth that is part of the collection forcing the others into greater articulation and all of them contributing, via this process of competition, to the development of our consciousness.
‘My life’, wrote Feyerabend, ‘has been the result of accidents, not of goals and principles. My intellectual work forms only an insignificant part of it. Love and personal understanding are much more important. Leading intellectuals with their zeal for objectivity kill these personal elements. They are criminals, not the leaders of mankind.’
Every paradigm that has come and gone is now deemed to have been false, inaccurate, or incomplete, and it would be ignorant or arrogant to assume that our current ones might amount to the truth, the whole truth, and nothing but the truth. If our aim in doing science is to make predictions, enable effective technology, and in general promote successful outcomes, then this may not matter all that much, and we continue to use outdated or discredited theories such as Newton’s laws of motion so long as we find them useful. But it would help if we could be more realistic about science, and, at the same time, more rigorous, critical, and imaginative in conducting it.
Originally published on Psychology Today