Making Sense of Junk Science Reporting

If you’ve read a few of Heather Mallick’s columns in the Toronto Star, you’ll know that her writing often exemplifies the triteness and superficiality (masked by an air of folksy wisdom) that one finds in many newspaper editorials (perhaps the same can be said about my writing).

The superficiality is certainly understandable – most readers of editorials get bored after about a thousand words (if not fewer) – brevity and conciseness are the name of the game in newspapers. Indeed, prolonged analyses of current issues and ideas have little place in most of the mass media – aside from the average reader’s attention span, there seem to be various other intersecting reasons for this (television and radio are structured by time restrictions; magazines and newspapers are limited by the constraints of space; propaganda is more effective as sound bites; there is arguably a lack of a general culture of critical reflection that would be amenable to sombre and thorough analyses, etc.).

In any case, this is a bit of a digression – the main point of this post is not to account for the various structural factors influencing the quality of Mallick’s editorials, but to take issue with a specific column that she recently penned. (Although, it seems that if Mallick were more aware of these broader influences, her writing would be more insightful.)

After many previous disappointments, I usually don’t read Mallick’s columns, but recently she set her sights on a topic that I am particularly interested in, and about which I consider myself to have some insight – the phenomenon of “junk science.” Ostensibly, she also considers the way science is presented in the media (one of the topics of my doctoral research), and the ways scientists work in, navigate, and manipulate the apparatuses of academia and scholarly publishing.

Her chief conclusions? That “junk science” exists because scientists want grants and will publish any nonsense study as long as it might get picked up by media outlets, thus bolstering said scientists’ credibility and academic capital:

There are millions of researchers — often in the fuzzy, hard-to-measure social sciences — looking for grants, and something has to fill the maw. The victim is you, a busy person watching a headline flashing at you as you wait on the subway platform.

Her evidence? A recent study that found a 1 in 38 rate of autism in a group of Korean schoolchildren. This has surprised experts, especially in North America (the U.S. specifically), where childhood autism rates are estimated to be 1 in 150. The study suggests not that autism rates are increasing, but rather that previous attempts at estimating autism rates have lacked comprehensiveness, so many children with autism have gone undiagnosed. The implication is that autism rates in North America would be found to be much higher if different screening and detection methods were adopted. Thanks to Jenny McCarthy, autism is a medical hot topic these days, and thus this study was picked up by mainstream media.
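(As a quick sanity check – using only the figures cited here and the study’s 2.64% estimate discussed below – the headline ratios and percentages are just two ways of writing the same proportions:

\[
\tfrac{1}{38} \approx 0.0263 \approx 2.6\% \quad \text{(the study’s reported } 2.64\% \approx \tfrac{1}{37.9}\text{)}, \qquad \tfrac{1}{150} \approx 0.0067 \approx 0.67\%.
\]

Nothing in the arithmetic itself is controversial; the dispute is over what the numbers mean.)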

Mallick argues that this study is a clear example of “junk science” that is uncritically accepted by the public. To make this argument she offers a shallow and confused critique of the statistical methods involved (you can read Mallick’s editorial yourself to judge whether she knows what she’s talking about), and, more substantially, since Mallick is not a scientific expert, she also offers the testimony of a pediatrics professor (as quoted in the Boston Globe) who claimed that there were issues with sampling in this study.

But does a critique by a member of the scientific community make this study a piece of “junk science”? Certainly, the critic in question does not make this claim – he is merely pointing out room for improvement in the current study. It is Mallick’s conclusion that this particular study qualifies as “junk science.” It seems that Mallick thinks good science needs to be impervious to critique from scientists – perhaps she expects something like an absolute consensus? But this, of course, is a completely unrealistic, and I would argue undesirable, criterion for the acceptable publication of scientific research. Disagreement is a crucial part of any field of inquiry. Exposing errors and limitations is necessary to help refine scientific understanding. This, of course, cannot be accomplished by one or two scientists, but needs to be incorporated into the practices of a community. This is why scientific journals are so important. Mallick seems to think that scientific journals should be sources of absolute authority – places where one records irrefutable scientific findings. But this is not their function – they are the means of communicating (necessarily and inevitably) incomplete, non-exhaustive research and findings. They are better viewed as a part of the process of scientific research – not its definitive final product (a final product which, arguably, does not exist, depending on your definition of definitiveness).

Mallick lacks any genuine insights on matters of scientific standards. She takes for granted that what counts as good and “junk” science is clearly established. She seems to imply that good science is free from critique. On this standard, all science is junk science. However, if we accept that scientists necessarily make mistakes, overlook certain factors, are limited in their methods, etc., then the standards of good and bad science (what do we even mean here – standards of good scientific practice, of research, or of products – journal articles, theories, methods, etc.?) are not as clear-cut as Mallick implies. Peer review, though very limited, tries to weed out the most egregious blunders (according to the negotiated standards of the scientific community) before articles get published. But, again, research is always fragmentary and requires input from the larger community. This is why incomplete, non-definitive research (again, we might also wonder about Mallick’s standards of definitiveness) needs to be published.

A major problem with how science operates in the media and the larger public, of course, is when scientific research gets touted as being definitive, authoritative, and complete (an almost always unwarranted assertion) – when its conclusions are over-extended. Mallick seems vaguely aware that the media is partly to blame for such exaggerations – though she sets her sights chiefly on the sins of scientists. This is where Mallick’s arguments are most ironic. First, while there is no doubt that some scientists occasionally (maybe often?) overstate the importance of their research, I suspect that it is the media that is chiefly responsible for contributing to the view that scientists regularly make definitive statements and that science is capable of giving us unquestionable, authoritative knowledge. In my experience, scientists are much more modest in their conclusions, hesitant to make absolute claims about their research, especially to the media, and careful to recognise the limitations of their studies. Take Mallick’s purported case-in-point as a case-in-point. Mallick states that the autism study claims that 1 in 38 Korean schoolchildren have autism. But the study does no such thing. It was media sources that made these claims. If one actually reads the original article (which, on a side note, I think readers of science journalism should always do – unfortunately, journalists often fail to provide links or even proper references for the studies they report on), one finds that while the researchers did find a rate of 2.64% for autism spectrum disorders, the majority of the discussion in the article revolves around identifying limitations in the sampling methods and possible reasons why this number might be exaggerated. Nowhere do the authors even hint that they think their research is conclusive. Instead, they end with this careful and modest conclusion:

While replication of our findings in other populations is essential, we conclude that the application of validated, reliable, and commonly accepted screening procedures and diagnostic criteria in a total population may yield an ASD prevalence exceeding previous estimates. Hence, we report an ASD prevalence estimate in the range of 2%–3% with due caution about the risks of over- and underestimation.

Mallick is enamoured with the website http://www.badscience.net/, whose author Ben Goldacre typically “take[s] a headline, go[es] to the study, unwrap[s] the technique and the numbers, and explain[s] why the conclusion [as presented in the news article] is nonsense.” Of course, if Mallick had followed this procedure, she would have found that it is not the conclusions presented by the scientists that are nonsense, but rather it is the way the media presents said conclusions that is problematic. Mallick does point out that part of the problem is that newspaper articles rarely provide direct links to the scientific sources being discussed (and that the public even more rarely reads them). Ironically, Mallick doesn’t provide a link to the article either, and it is abundantly clear that she never read it. I want to stress the irony of this, since Mallick is herself guilty of misrepresenting the findings of a scientific article that she is attempting to decry as misrepresented.

If Mallick were a more critical observer, she would recognise her own complicity in the misrepresentation of scientific findings by the media. She would also recognise that her ideological view of science is a chief factor in this complicity. She chastises scientists (as I have just discussed, unfairly) for overstating their findings as definitive and authoritative while still expecting science to be definitive and authoritative. If she, like most scientists, had a more modest and careful understanding of the research scientists produce, then her conclusions would similarly be more modest and careful.

Finally, I really need to accentuate the pot-calling-the-kettle-blackness of Mallick accusing researchers of publishing nonsense in order to keep the conversation going and the funding coming in. Isn’t that the basis of Mallick’s entire career?


One thought on “Making Sense of Junk Science Reporting”

  1. It takes a pretty strange analysis to look at the scientist > peer-reviewed journal > superficial journalism chain and conclude that the fault consistently lies with the scientist.
