After retiring from a career on the fringes of 1930s British academia, Lewis Fry Richardson was ready to indulge his passion for the mathematics of war. He wanted to test a theory that the incidence of conflict between countries is systematically related to the length of their common borders.
He began measuring the borders of Europe and was puzzled to find that his results differed between different maps. He looked up some authoritative books and found discrepancies there too. The border of Spain and Portugal was given as 987 km in one and 1,214 km in another.
It was perplexing. But Richardson persisted with his research and soon realised something at once obvious and surprising.
Any measurement requires a ruler or measuring stick of a particular size. A cartographer using a 100 km ruler will step off intervals of 100 km along a stretch of land, cutting straight across many natural features. Another cartographer using a 10 km ruler will pick up many of these meanderings and arrive at a longer measurement for the same stretch of land.

Our intuition is that there would come a point where smaller and smaller rulers converge on the ‘correct’ answer. But Richardson showed that this is not the case. The measured length of many natural features in the environment grows without bound as the scale of measurement gets smaller.
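To see the effect for yourself, here is a minimal Python sketch – my illustration, not Richardson’s own method – that builds a Koch curve (a classic mathematical stand-in for a crinkly coastline) and measures it by walking a fixed-length ruler along it. The names koch and walk_ruler are invented for this example.

```python
import math

def koch(p, q, depth):
    """Recursively generate the points of a Koch curve from p to q."""
    if depth == 0:
        return [p]
    (x0, y0), (x1, y1) = p, q
    dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
    a = (x0 + dx, y0 + dy)          # one third of the way along
    b = (x0 + 2 * dx, y0 + 2 * dy)  # two thirds of the way along
    # Apex of the equilateral 'bump' raised between a and b
    theta = math.atan2(dy, dx) + math.pi / 3
    side = math.hypot(dx, dy)
    c = (a[0] + side * math.cos(theta), a[1] + side * math.sin(theta))
    points = []
    for s, e in [(p, a), (a, c), (c, b), (b, q)]:
        points += koch(s, e, depth - 1)
    return points

def walk_ruler(points, ruler):
    """Approximate a cartographer's dividers: step along the curve,
    pivoting each time a sampled point is at least one ruler-length away."""
    steps, pivot = 0, points[0]
    for pt in points[1:]:
        if math.dist(pivot, pt) >= ruler:
            pivot, steps = pt, steps + 1
    return steps * ruler

curve = koch((0.0, 0.0), (1.0, 0.0), depth=7) + [(1.0, 0.0)]
for ruler in (0.3, 0.1, 0.03, 0.01, 0.003):
    print(f"ruler {ruler:6.3f} -> measured length {walk_ruler(curve, ruler):.2f}")
```

Run it and the measured length keeps climbing as the ruler shrinks, rather than settling down – Richardson’s observation in miniature: for such curves, length scales as a power of the ruler size instead of converging.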
How had no-one noticed this before?
For almost two-and-a-half-thousand years, mathematicians and physicists had implicitly relied on Euclidean geometry – the study of basic shapes taught in school – in which the distance between point A and point B just seemed obvious. The significance of the size of the measuring stick never occurred to anyone.
If others had found similar discrepancies before, they had ignored them, probably assuming some kind of error.
One of the most surprising things about this story is that a theory with such an obvious counterexample survived for so long unquestioned.
In considering a similar case, the psychologist Daniel Kahneman wrote:
I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing. You give the theory the benefit of the doubt, trusting the community of experts who have accepted it.
Too much respect for expertise can be counterproductive. And an educated outsider – as Richardson was – asking seemingly dumb questions can contribute to a leap forward in understanding.
Kahneman himself was instrumental in overcoming a case of theory-induced blindness in economics in 1979. With research partner Amos Tversky, he uncovered a flaw in expected utility theory, an accepted economic theory of 250 years’ standing, and went on to develop the prospect theory of decision making, for which he was later awarded the Nobel Prize in economics.
Describing how he identified the flaw in expected utility theory, Kahneman wrote that because he was a psychologist and not an economist, he did not know enough to be blinded by respect for it, and was instead puzzled. It was “a lucky combination of skill and ignorance… Knowledge of perception and ignorance about decision theory both contributed to a large step forward in our research.”
In the final chapter of his final book, The River of Consciousness, published posthumously in 2017, the renowned neurologist and writer Oliver Sacks described several cases of theory-induced blindness that slowed – and in some cases froze – the progress of science.
In their own time the world was blind – whether through ignorance or resistance – to Archimedes’ anticipation of calculus, Gregor Mendel’s laws of genetic inheritance, Louis Verrey’s discoveries about colour vision, Alfred Wegener’s theory of continental drift, and Oswald Avery’s discovery that DNA carries genetic information.
The default setting in human psychology is to believe. Disbelieving is hard.
It is effortless to assume that our current theories and world views are true, and effortful to keep an open mind. We also often have a vested interest in being right. So, we overlook the inconvenient counterexamples that threaten our beliefs.
Yet, developing a new theory does not necessarily imply throwing out the old one, as Einstein beautifully wrote:
To use a comparison, we could say that creating a new theory is not like destroying an old barn and erecting a skyscraper in its place. It is rather like climbing a mountain, gaining new and wider views, discovering unexpected connections between our starting point and its rich environment. But the point from which we started out still exists and can be seen, although it appears smaller and forms a tiny part of our broad view gained by the mastery of the obstacles on our adventurous way up.
Overcoming theory-induced blindness is necessary for breakthroughs and innovation in science, business, politics, ethics and individual and cultural development.
Open-mindedness is key, as is diversity of thought. At least that’s the theory.
* This is not a sponsored post, but if you buy a book using a link from this article I will receive a small commission.