Stop Freaking Out About That Study Linking Diet Soda to Alzheimer’s
Some of the reports about this “deadly diet soda” study have been more nuanced than others. But there’s a common theme among a lot of them—they don’t outline some of its most crucial and relevant caveats until way past the headline. And if they did, the titles would be pretty boring. Like, “Study finds minor observational link (but no direct cause-and-effect) between artificially sweetened beverages and dementia in certain people, but it has a small sample size that doesn’t include minorities or account for a whole bunch of other critical factors.”
That’s not exactly as sexy as claiming that a Diet Coke a day will bring Alzheimer’s in its wake, or triple the chances of a stroke. But science, fortunately (or unfortunately if you’re trying to grab clicks at the expense of good information), isn’t meant to be sexy. It’s meant to test hypotheses and express facts. And when the results of scientific experiments are presented without context, they lead to misleading, panicky headlines like the ones that dominated the Internet on Friday.
Physician Aaron Carroll, who writes for one of the most clear-eyed, if wonky, health care websites out there—the Incidental Economist—and has a delightfully no-BS, data-driven column on the New York Times’ Upshot site, highlights several reasons why you should take this new diet soda study with a grain of salt.
Did the participants differ by race or ethnicity? I have no idea. I do know, however, that the authors write about the “absence of ethnic minorities, which limits the generalizability of our findings to populations of non-European descent.” Was that in the coverage you read?
Did they differ by socioeconomic status? No idea. Did they abuse drugs? Work or retire? Live alone or with someone? Have a family history of disease? No idea.
Did they acknowledge that different artificial sweeteners are different molecules with likely different effects or implications? No.
Were there multiple comparisons, meaning some results might be due to chance? Yep. Did they rely on self-report, which might mean recall bias comes into play? Yep.
Was this an observational study? Of course.
Was all of that in the coverage you read?
Carroll’s explanation is a lot more in-depth than that, digging into nerdy-but-important factors like the actual models the study’s authors used, the limitations they openly admitted to, and information we simply don’t know about their analysis.