As a long-time subscriber to more than one magazine, I’m perennially fascinated by the phenomenon of journalism. The conclusion I’ve formed, rightly or wrongly, is that journalists in general and columnists in particular need no qualification other than an ability to write entertainingly. Solely by dint of being repeatedly in the public eye, they are able to acquire a credibility that’s ungrounded in any expertise. What’s responsible is the ‘recognition heuristic’ featured in my Fable 85, ‘The Judicious Sea-Turtle’. In short, if confronted with two alternatives, we automatically tend to assign greater value to the better known of the two. This can be a useful rule of thumb if you’re trying to judge, say, which of two cities is bigger. When it comes to perceived expertise, however, it can create a dangerous sense of trustworthiness. The average citizen is more likely to believe the word of a familiar pundit than that of a true expert who happens to be anonymous. The sort of matters that journalists speculate about can seldom be tested empirically, and in any case readers have forgotten the prognostications by the time events catch up; so the pundits almost invariably get away with it. When a whole institution works this way, it’s no wonder that the public ends up believing all kinds of scientifically unverifiable hogwash.
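To make the rule concrete, here is a toy sketch in Python of how such a heuristic operates. The city names and the set of ‘recognised’ names are invented for illustration; nothing here pretends to model real readers.

```python
# A toy rendering of the recognition heuristic: given two alternatives,
# prefer the one we recognise. All names are invented for illustration.

def recognition_choice(a, b, recognised):
    """Return the recognised alternative, or None if the heuristic is silent."""
    if (a in recognised) == (b in recognised):
        return None  # both or neither recognised: no verdict either way
    return a if a in recognised else b

recognised = {"Munich", "Berlin", "Hamburg"}
print(recognition_choice("Munich", "Gütersloh", recognised))  # -> Munich
```

The telling feature is the first branch: the heuristic only speaks when exactly one option is recognised, which is precisely why mere familiarity, however acquired, tips the scales.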
I did, however, have the pleasure last week of being able to put one journalistic ‘expert’ to the test. Before the 2018 County Championship began, the Guardian’s cricket pundit had the chutzpah to predict the final positions. He supported his predictions with well-argued analysis that sounded highly plausible. He must have known he was creating a hostage to fortune, but obviously had confidence in his powers as a seer; or maybe he was banking on us all having forgotten by now. The trouble for him is that, by using a mathematical tool called Pearson’s correlation coefficient, we can easily put an accurate value on the extent to which predictions are borne out by events. A score of 1 represents a perfect correlation between prediction and reality, 0 no correlation at all, and −1 a perfect inversion. For Division 2, the expert scored 0.33, which is what you’d generously call a weak correlation. For Division 1, it was 0.19, which is hardly better than getting a monkey to draw numbers out of a hat. In other words, the journalistic predictions were almost valueless, and in any serious context (e.g. gambling on sports outcomes) positively misleading. One has to hope that the Guardian’s analysis in other, less testable spheres is less dependent on the innate credulousness of its readers.
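For anyone who would like to check such a score for themselves, here is a minimal sketch in Python. The function simply implements the standard Pearson formula; the finishing positions below are invented stand-ins for illustration, not the real 2018 tables.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson's correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented example: predicted vs. actual finishing positions for an
# eight-team division (NOT the real 2018 Championship tables).
predicted = [1, 2, 3, 4, 5, 6, 7, 8]
actual = [3, 1, 6, 2, 8, 4, 7, 5]

print(round(pearson(predicted, actual), 2))  # prints 0.52 for this toy table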