Like many other young scientists, I have come to admire Twitter for its ability to facilitate scientific discussion over the internet. Unlike Facebook, where you generally have to know someone personally to see their posts, on Twitter students, faculty, and institutions can interact relatively smoothly. Since I started using Twitter, I’ve found it much easier to keep tabs on the goings-on in my field: I see links to new papers as soon as they are published, I get CalFire updates on big wildfires near me, I learn about fire management on public lands, and more. While I tend to write fresh tweets only about articles I have actually read or my own experiences, I retweet liberally. But I’m trying to resist even retweeting an article that I haven’t at least skimmed, especially if the link is a second-hand story. Even if it sounds completely reasonable.
Unlike academic publishing, social media is a fast-paced environment. A paper that takes years to put together and includes a lengthy introduction and discussion that places the work in the context of a broader debate is reduced to a sentence on a local news site or to 140 characters. Twitter can be a strange place where academic debate crosses paths with the wider public. Scientists, government agencies, and nonprofit organizations alike share direct links to scientific papers, science news articles, and opinion pieces.
We easily retweet things that “sound right” or align with our expectations. Posts that seem surprising or wrong are less likely to get by without investigation. On occasion, I’ve followed tweets about newly published papers back to their source and found that the news article in question doesn’t even support the tweet that linked it, or, digging deeper, that the scientific papers don’t support the claims of the news article that cites them.
Here is one from the spring, from the CA Chaparral Institute:
The science keeps coming in. Clearing habitat to try to “fireproof” the natural world is not working. http://t.co/7sC5XFNGEH
— CA Chaparral Inst (@chaparralian) May 26, 2015
The linked article, published in the Santa Fe New Mexican, is titled “Studies question wisdom of thinning forests to stop fires.” What does the article actually say? Does it support the claim that the “science keeps coming in”? It cites exactly one paper. Does that one paper say that thinning forests is “not working”? Not really. It argues that fire intensities in Western forests were more variable than previously believed. Does the New Mexican article say that thinning forests is “not working”? Nope. It only talks about the “studies” of its headline for a few paragraphs, while the vast majority of the article discusses the history of fire management in the US and new projects for fire and water management in the Southwest, citing many fire scientists and managers who support thin-and-burn approaches.
Other articles are simply misleading. This one, which I clicked on via Twitter last year, discusses three scientific papers that conclude that the Western US is burning less than it did historically. The article frames these conclusions as contradictory to messaging from the White House Science Advisor, John Holdren, who had said a few days earlier that area burned, fire intensity, and fire season length had increased in recent decades due to climate change. Yet most people with even a cursory understanding of fire history in the U.S. should not find this surprising or contradictory–the primary reason that the West burns less than it did historically is fire suppression, a tool that is on the whole still widely used and extremely effective at limiting wildfire, even as climate change makes fighting fires more difficult. The fact that wildfires have gotten bigger and more intense over the past few decades–despite continued suppression–is alarming even if we haven’t exceeded the range of historical variation. Though all of these puzzle pieces are present in the article–a mention that the papers compare fire today to fire more than a century ago while Holdren is talking about recent trends, a final sentence mentioning suppression–the overall message and clickbait headline suggest that the White House is lying to the public.
I found these two examples because I didn’t agree with the premise and it seemed fishy. But there are likely countless examples of articles with missing pieces that I let slide by, or even retweeted without investigation, because the summary “seemed right” to me.
Any new scientific paper, especially one that is published in a high-level journal or that is likely to pique public interest, is now launched into a cycle of endless sharing and retweeting. This can be a great thing–most of us would be thrilled for our work to make it to the New York Times, NPR, or even a news aggregator like Buzzfeed or IFLS. But there’s a risk, too. Each link in the chain is another opportunity to misinterpret, spin, and exaggerate; it’s easy to spread misinformation and leave out key details in the name of a catchy headline and a memorable message. What started as a single piece in a larger scientific debate may turn into a revolutionary study that disproves “myths” when it hits the media.
As Christie Aschwanden wrote elegantly on the FiveThirtyEight blog this week, science is messy. She writes:
The important lesson here is that a single analysis is not sufficient to find a definitive answer. Every result is a temporary truth, one that’s subject to change when someone else comes along to build, test and analyze anew.
But the tweets and articles linked above present single papers as game changers. Apparent inconsistencies are interpreted to mean that science or policymakers are lying, or that long-held beliefs are nothing more than myths.
As science graduate students, we learn early to perfect the “elevator pitch”; science writers must learn to get the main point across and produce a catchy (tweetable) headline. But is it better to make your message so simple that it’s understandable, clickable, and easily digestible, even if that means suggesting that your single paper has provided the answer to a long-debated scientific question? Or is it better to maintain nuance, emphasize uncertainty, and present science as the dynamic, ongoing conversation that it is, yet have nobody read it? This is a fundamental challenge of science writing in a world that doesn’t much like grappling with uncertainty.
But we can, at the very least, make sure that the articles we share tell the story that we think they are telling. We can try to be skeptics and read closely even those articles that support our own views. And we can try to tweet after reading, not before.