On cosmology's scicomm disaster

Jamie Farnes, a theoretical physicist at Oxford University, recently had a paper published claiming that the effects of dark matter and dark energy could be explained by replacing them with a fluid-like substance that was created spontaneously, had negative mass and disobeyed the general theory of relativity. As fantastic as these claims are, Farnes's paper compounded the problem by failing to explain the basis on which he was postulating the existence of this previously unknown substance.

But that wasn't the worst of it. Oxford University published a press release suggesting that Farnes's paper had "solved" the problems of dark matter/energy and stood to revolutionise cosmology. It was reprinted by PhysOrg; Farnes himself wrote about his work for The Conversation. Overall, Farnes, Oxford and the science journalists who popularised the paper failed to situate it in the right scientific context: that, more than anything else, it was a flight of fancy whose coattails his university wanted to ride.

The result was disaster. The paper received a lot of attention in the popular science press and among non-professional astronomers, so much so that the incident had to be dehyped by Ethan Siegel, Sabine Hossenfelder and Wired UK. You'd be hard-pressed to find a better countermeasures team.

The paper’s coverage in the international press. Source: Google News

Of course, the science alone wasn't the problem: the reason Siegel, Hossenfelder and others had to step in was that the science journalists failed to perform their duties. Those who wrote about the paper didn't check with independent experts on whether Farnes's work was legit, choosing instead to quote directly from the press release. It's been acknowledged in the past – though not sufficiently – that the university press officers who draft these releases need to buck up; more importantly, universities need better policies about what roles their press releases are supposed to perform.

However, this isn't to excuse the science journalists but to highlight two things. First: they weren't the sole points of failure. Second: instead of looking at this episode as a network of isolated nodes, each representing a different point of failure, it would be useful to examine how failures at some nodes could have increased the odds of failure at others.

Of course, if the bad science journalists had been replaced by good ones, this problem wouldn't have happened. But 'good' and 'bad' are neither black-and-white nor permanent characterisations. Some journalists – often those pressed for time, who aren't properly trained or who simply have bad mandates from their superiors in the newsroom – will look for proxies for goodness instead of performing the goodness checks themselves. And when these proxy checks fail, the whole enterprise comes down like a house of cards.

The university's name is one such proxy – and in this case, 'Oxford University' is a pretty good one. Another is that the paper was published in a peer-reviewed journal.

In this post, I want to highlight two others that’ve been overlooked by Siegel, Hossenfelder, etc.

The first is PhysOrg, which has been a problem for a long time, though it's not entirely to blame. What many people don't seem to know is that PhysOrg reprints press releases; it undertakes very little science writing, let alone science journalism, of its own. I've had many of my writers – scientists and non-scientists alike – submit articles with PhysOrg cited here and there. They assume they're quoting a publication that knows what it's doing, but what they're actually doing is straight-up quoting press releases.

The little of this that is PhysOrg's fault is that it doesn't state anywhere on its website that most of what it puts out is unoriginal, unchecked, hyped content that may or may not have a scientist's approval and certainly doesn't have a journalist's. So, buyer beware.

Science X, which publishes PhysOrg, has a system through which universities can submit their press releases to be published on the site. Source: PhysOrg

The second is The Conversation. Unlike PhysOrg, these guys actually add value to the stories they publish. I'm a big fan of theirs, too, because they amplify scientists' voices – an invaluable service in countries like India, where scientists are seldom heard.

The way they add value is that they don't just let the scientists write whatever they're thinking; instead, they have an editorial staff composed of people with PhDs in the relevant fields and experience in science communication. The staff helps the scientist-contributors shape their articles, and fact-checks and edits them. There have been one or two examples of bad articles slipping through their gates, but for the most part The Conversation has been reliable.

However, they certainly screwed up in this case, and in two ways. First, they screwed up from the perspective of those, like me, who know how The Conversation works, by straightforwardly letting us down. Something in the editorial process got shorted. (The regular reader will spot another giveaway: The Conversation usually doesn't use headlines that admit the first-person PoV.)

Further, Wired's dehype article fails to mention something The Conversation itself endeavours to clarify with every article: that Oxford University is one of the institutions that fund the publication. I know from experience that such conflicts of interest haven't interfered with its editorial judgment in the past, but it's now something we'll need to pay more attention to.

Second, The Conversation failed those who didn't know how it works by giving them the impression that it was a journalism outlet that saw sense in Farnes's paper. For example, consider this passage from Wired's dehype article:

Farnes also wrote an article for The Conversation – a news outlet publishing stories written by scientists. And here Farnes yet again oversells his theory by a wide margin. “Yeah if @Astro_Jamie had anything to do with the absurd text of that press release, that’s totally on him…,” admits Kinney.

“The evidence is very much that he did,” argues Richard Easther, an astrophysicist at Auckland University. What he means by the evidence is that he was surprised when he realised that the piece in The Conversation had been written by the scientist himself, “and not a journo”.

Easther's surprise here is unwarranted but it exists because he's not aware of what The Conversation actually does. And like him, I imagine many journalists and other scientists don't know what The Conversation's editorial model is.

Given all of this, let's take another look at the proxy-for-reliability checklist. Some of the items on it that we discussed earlier – including the name of the university – still carry points, and with good reason, although none of them by themselves should determine how the popular science article is written. That should still follow the principles of good science journalism. However, "article in PhysOrg" has never carried any points, and "article in The Conversation" used to carry some points, which now fall to zero.

Beyond the checklist itself, if these two publications want to improve how they're perceived, they should do more to clarify their editorial architectures and why they are what they are. It's worse to give a false impression of what you do than to provide zero points on the checklist; on this count, PhysOrg is guiltier than The Conversation. At the same time, if the impression you were designed to provide is not the impression readers are walking away with, the design can be improved.

If it isn't, they'll simply assume more and more responsibility for the mistakes of poorly trained science journalists. (They won't assume responsibility for the mistakes of 'evil' science journalists, though I doubt that group of people exists.)