Science shouldn’t animate the need for social welfare
This is an interesting discovery.
First, it’s also a bad discovery (note: there’s a difference between right/wrong and good/bad). It is useful to found specific interventions on scientific findings – such as that providing pregnant women with iron supplements in a certain window of the pregnancy could reduce the risk of anaemia by X%. However, that the state should provide iron supplements to pregnant women belonging to certain socio-economic groups across the country shouldn’t be founded on scientific findings. Such welfarist schemes should be based on the intrinsic virtues of social welfare itself. In the case of the new study: the US government should continue with cash payments for poor mothers irrespective of their babies’ learning outcomes. The programme can’t be stopped just because some of those babies turn out to be slow learners.
Second, I think the deeper problem in this example lies with the context in which the study’s findings could be useful. Scientists and economists have the liberty to study what they will, as well as report what they find (see the fourth point). But consider a scenario in which lawmakers are presented with two policies, both rooted in the same ideologies and both presenting equally workable solutions to a persistent societal issue. Only one, however, has the results of a scientific study to back up its ability to achieve its outcomes (let’s call this ‘Policy A’). Which one will the lawmakers pick to fund?
Note here that this isn’t a straightforward negotiation between the lawmakers’ collective sensibilities and the quality of the study. The decision will also be influenced by the framework of accountability and justification within which the lawmakers operate. For example, those in small, progressive nations like Finland or New Zealand, where the general scientific literacy is high enough to recognise the ills of scientism, may have the liberty to set the study aside and then decide – but those in India, a large and nationalist nation with generally low scientific literacy, are likelier than not to construe the very availability of scientific backing, of any quality, to mean Policy A is better.
This is how studies like the one above could become a problem: by establishing a pseudo-privilege for policies that have ‘scientific findings’ to back up their promises. It also rationalises the Republican Party’s view that by handing out “unconditional aid”, the state will discourage the recipients from working. The Republicans’ contention is speculative in principle, in policy and, just to be comprehensive, in science – yet scientific studies that find the opposite still play nicely into their hands, by conceding that the policy’s merit is an empirical question, even in as straightforward a case as that of poor mothers. As the New York Times article itself notes:
Another researcher, Charles A. Nelson III of Harvard, reacted more cautiously, noting the full effect of the payments — $333 a month — would not be clear until the children took cognitive tests. While the brain patterns documented in the study are often associated with higher cognitive skills, he said, that is not always the case.
“It’s potentially a groundbreaking study,” said Dr. Nelson, who served as a consultant to the study. “If I was a policymaker, I’d pay attention to this, but it would be premature of me to pass a bill that gives every family $300 a month.”
A temporary federal program of near-universal children’s subsidies — up to $300 a month per child through an expanded child tax credit — expired this month after Mr. Biden failed to unite Democrats behind a large social policy bill that would have extended it. Most Republicans oppose the monthly grants, citing the cost and warning that unconditional aid, which they describe as welfare, discourages parents from working.
Sharing some of those concerns, Senator Joe Manchin III, Democrat of West Virginia, effectively blocked the Biden plan, though he has suggested that he might support payments limited to families of modest means and those with jobs. The payments in the research project, called Baby’s First Years, were provided regardless of whether the parents worked.
Third, and in continuation, it’s ridiculous to make the approval of policies whose principles are clear and sound contingent on the quality of data originating from scientific studies – which in turn depends on the quality of the theoretical and experimental instruments scientists have at their disposal (“We hypothesized that infants in the high-cash gift group would have greater EEG power in the mid- to high-frequency bands and reduced power in a low-frequency band compared with infants in the low-cash gift group.”) and, let’s not forget, on scientists coming along in time to ask the right questions.
Fourth, do scientists and economists really have the liberty to study and report what they will? There are two ways to slice this. One: clarify the limited context in which this question is worth considering – which is almost never, and only when a study uncovers a scientific basis for something that isn’t well served by such a basis. This principle is recursive: it should preclude the need for a scientific study of whether support for certain policies has been set back by the presence or absence of scientific studies. Two: ask where the demand for these studies originates. Clearly someone somewhere thought, “Do we know the policy’s effects in the population?” Science can provide quick answers in some cases but not in others, and in the latter, it should be prevented from creating the impression that the absence of evidence is evidence of absence.
Who bears that responsibility? I believe it has fallen on the shoulders of politicians, social scientists, science communicators and exponents of the humanities alone for too long; scientists also need to exercise the corresponding restraint, and refrain from conducting studies without specifying the precise context (and not just the scientific context) in which their findings are valid, if at all. In the current case, the NYT called the study’s findings “modest”, noting that the “researchers likened them in statistical magnitude to moving to the 75th position in a line of 100 from the 81st”. Modest results are also results, sure, but as we have learnt from COVID-19 research: don’t conduct poor studies – and, by extension, don’t conduct a study of a social-science concept in a scientific way and expect it to be useful.