A visitor from America this week obligingly bore out the point I made in an earlier blog about the mutual incomprehensibility of opponents in a bipartisan political system. Asked what she made of the Trump victory, she answered only, “The people who voted him in have the IQ of a pea”. I found this a mind-boggling idea: 61 million enfranchised Americans with the collective intelligence of a Birds Eye factory. No wonder that journalist Matthew Parris, a pro-Clinton Bremainer who suddenly finds himself on the wrong side of history, has started questioning whether this democracy nonsense hasn’t been taken too far.
Of course, those 61 million Trumpers will tell you that it’s the Democrats who are either too stupid to see where their philosophy is taking the world or else so selfish that they don’t honestly care. It’s a perfect replay of what’s been happening in the post-referendum UK, where Bremainers are still fuming about the idiocy of Brexiters, who naturally return their contempt with knobs on. This is an age-old phenomenon: one so familiar that, for example, Jonathan Swift satirised it in his 1726 yarn concerning the endless bloody conflicts between Lilliputians who couldn’t agree which end of an egg to break.
Whenever you get a universal phenomenon like this, it’s instructive to look for its roots in psychology. The relevant topic, known as Confirmation Bias, is addressed by the 77th Fable, ‘Two Minds Made Up’. This concerns a pair of cormorants who, for entirely arcane reasons, start by having different opinions on whether or not the sea is a good thing. Instead of adjusting their opinions logically in the light of incoming evidence, they each interpret every subsequent occurrence as further proof of their start position, and are simply antagonised by the other’s counter-arguments. So entrenched does each of them become that, eventually, they can no longer abide even to stay together.
It is disconcerting, and certainly worrying, to know of this systematic bias in human calculation, which amounts to what in a machine would be considered a serious design flaw. It is the hallmark of a brain that evolved piecemeal over hundreds of millions of years with only a few chemicals and the strictures of evolution to steer it. Ultimately the problem boils down to the fact that the basic building-block of the brain, the neuron, is essentially a binary switch, and it should never be a surprise when the macro behaviour of any object reflects its micro construction.
I hasten to add that neuronal activity is rather more sophisticated than yes/no decisions alone: the brain does produce shades of grey by additionally using the devices of repetition and intensity. Think of it this way. Suppose you’re inviting guests to a wedding, but can only send invitations by post. If there’s someone you don’t want to come, you send no invitation. If there’s someone you really want to come, you can send them one extra-large gilt-edged invitation; or else you send a second invitation, and then a third, and keep on sending until you get a reply. It’s a crude system, but then nature never had the ability to design a neurological telephone by itself. And, to be fair, it’s worked well enough to keep mammalian life going for quite a while.
So what’s wrong with it? The trouble has to do with the very machinery of neuronal operation. To put it simplistically: a neuron requires a certain amount of excitation coming into it from other neurons before it will fire. Crucially, it so happens that the threshold level falls every time it does fire, so that further firings become increasingly probable. Conversely, a ‘don’t fire’ condition becomes increasingly unlikely. Since such firings are fundamental to human decision-taking, this fact of life has major ramifications for the way we conduct ourselves. In short, we decide much less on the basis of objective judgement than we’d like to flatter ourselves.
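The falling-threshold mechanism described above can be caricatured in a few lines of code. This is an illustrative toy, not a biophysical neuron model: the particular threshold value, the size of the drop, and the input levels are all invented for demonstration, and real neuronal dynamics are far richer.

```python
def decisions(inputs, threshold=5.0, drop=0.5, floor=1.0):
    """Return a fire/no-fire decision for each incoming excitation level.

    Each time the toy neuron fires, its threshold falls by `drop`
    (never below `floor`), so subsequent firings need less excitation,
    mirroring the bias described in the text.
    """
    results = []
    for level in inputs:
        fired = level >= threshold
        results.append(fired)
        if fired:
            threshold = max(floor, threshold - drop)
    return results

# The same middling stimulus (4.6) is rejected the first time but
# accepted later, once an intervening firing has lowered the bar.
print(decisions([4.6, 5.2, 4.6]))  # [False, True, True]
```

Note that nothing in the input changed between the first and third stimulus; only the neuron's own history did, which is the essence of the design flaw the text describes.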
There may be a good evolutionary reason why this is the case: if a particular course of action didn’t kill us last time, it’s more likely than not to work out again, so why not simply repeat it? It does, however, translate into real-world behaviour that, from a distance, can only be described as irrational. If a caveman is wondering whether to risk going out, he’ll be looking for a certain number of neurological assurances about the weather, the presence of predators, et cetera, before he gets a neuronal ‘yes’; but, next time, he’ll need less by way of reassurance to reach the same decision, even though the actual threats will not have diminished. He is, in effect, being more reckless with his life, without even realising it. Certainly there is a tribe-level advantage in individual risk-taking, but that wasn’t the question.
The phenomenon is particularly striking when taken to the level of beliefs. The term ‘Confirmation Bias’ says it all. When we see anything in the news, we can barely stop ourselves interpreting it as yet more evidence of the rightness of our own self-serving opinions; yet another individual with different beliefs will equally vigorously read it in quite another way. Because the process is self-reinforcing, polarisation is inevitable. It’s no surprise that, when a person has spent his or her whole life within the intellectual confines of White Supremacism, say, or Political Correctness, they will almost inevitably be convinced that anyone who fails to share their own entirely logical, impartial and sensible conclusions must be either a dolt or a villain. And that’s why anyone you hear pontificating about the mote in the other lot’s eye needs to be invited to look at the beam in their own.