Ever had a minor medical issue with no obvious cause? Something like low energy levels, or a string of low-grade illnesses (like colds), or a drippy nose? If you have, and you know people “into” nutrition, then chances are someone has suggested you “try and see” how a dietary intervention works.
The try-and-see approach is extremely common in the world of nutrition; it might even be considered a pillar of the nutrition intervention process. With the try-and-see approach, one removes a food from their diet (or less commonly adds a food) and then waits and sees how their symptoms react—in other words, they try something and see what happens.
On the surface, this seems like a great approach, even a scientific one. After all, it bears many similarities to a formal experiment, where one forms a hypothesis, tests it, and then collects results. But it’s not a scientific approach, because unlike a true scientific experiment—which has control groups and measures to reduce bias—the try-and-see approach has no guards against bias or confounding.
If that doesn’t seem like an immediate problem, this article should help clarify why it’s almost impossible to determine anything meaningful from (most) types of try-and-see experiments (also sometimes called “n=1” experiments, since the sample size is one).
The Placebo and Nocebo Effects
The placebo effect is well known, but less well understood. Many people are under the impression that it’s a form of mind over matter, where we can produce real physiological results simply through belief. The reality is more complicated. The simplest way to understand the placebo effect is as a lens filter: it cannot change the reality of a situation or any objective measurements, but it can make you notice or ignore certain subjective measures to a greater extent. So if you’re in pain, a highly subjective feeling with no clear objective measures, you might “feel” better because you refocus your brain on other things—but if you have high cholesterol, you cannot will yourself back to normal levels.
When we test drugs, supplements, and therapies for efficacy, we usually include a placebo group and blind both the participants and the researchers to who is in which group. This is known as a double-blind placebo-controlled trial, and it’s the gold standard for scientific experimentation. We usually expect to see at least some placebo effect—or at least some noise in the signal—so we do everything we can to remove the participants’ expectations from the equation. This is the first problem with the try-and-see approach, then: there’s no blinding!
Without blinding, the try-and-see approach is extremely likely to produce placebo effects (or their reciprocal, the nocebo effect, which is when you expect something bad to happen). Placebo effects aren’t real changes, however, and so even if you feel better initially you haven’t actually changed anything long-term. It’s like disconnecting the fuse for the “check engine” light in your car—it makes the problem look like it’s gone, but the real problem is still there.
Most people also don’t realize just how much the placebo effect can screw with the results. They figure it might slightly increase the odds that the results from their try-and-see intervention are false, but that overall it isn’t a large or strong enough effect to negate those results. How wrong they are! The placebo effect can be strong enough to make it effectively impossible to know for certain whether your intervention actually worked or just seemed like it did.
Consider this study, which was performed expressly to demonstrate the power of the placebo effect. In the study, 27 patients who thought they were lactose intolerant (but who, according to test results they had not been shown, were not) and 54 patients with documented lactose intolerance were given a single gram of glucose, which should not cause any symptoms. Of those 27 patients, 12 (44%) experienced stomach issues after the dose, as did 14 of the 54 controls (26%).
In other words, almost half of a group of people who weren’t lactose intolerant (but thought they were) experienced subjective stomach issues when given a dose that contained no lactose. This suggests that anytime you approach an intervention with a result in mind (which is always, because you wouldn’t do something if you thought there was a 0% chance it could work), you’re highly likely—possibly as much as 50% likely—to get a false result. You might as well flip a coin for all the good that does.
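To put rough numbers on that coin flip, here is a minimal simulation of unblinded n=1 trials. Every rate in it is an assumption chosen for illustration (loosely in the spirit of the study above), not a measured value:

```python
import random

# Minimal sketch of unblinded n=1 "try-and-see" trials.
# All rates below are illustrative assumptions, not measured values.
PLACEBO_RATE = 0.40      # chance of reporting improvement from expectation alone
SPONTANEOUS_RATE = 0.30  # chance the symptom simply resolves on its own
REAL_EFFECT_RATE = 0.25  # extra chance of improvement from a genuine effect

def try_and_see(has_real_effect: bool) -> bool:
    """One unblinded n=1 trial: did the person report improvement?"""
    if random.random() < PLACEBO_RATE:
        return True  # expectation alone produced a "success"
    if random.random() < SPONTANEOUS_RATE:
        return True  # the problem went away on its own
    return has_real_effect and random.random() < REAL_EFFECT_RATE

def report_rate(has_real_effect: bool, n: int = 100_000) -> float:
    """Fraction of n simulated trials that report improvement."""
    return sum(try_and_see(has_real_effect) for _ in range(n)) / n

print(f"reported improvement, real effect: {report_rate(True):.0%}")
print(f"reported improvement, no effect:   {report_rate(False):.0%}")
# Typical output: ~68% vs. ~58%. A single "it worked for me" barely
# distinguishes a genuinely useful intervention from a useless one.
```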
Of course, placebo effects or not, nutritionists do still actively prescribe try-and-see interventions to patients—are they all wrong? No, not all. Many do so perfectly legitimately, because prior science is on their side. Many nutritional interventions have been studied in a double-blind, placebo-controlled manner, and when they have been, we can see which interventions truly work and which are just placebo-driven. When we know an intervention has a real effect for a given condition, we can prescribe that intervention with peace of mind, because it’s no longer reliant solely on placebo effects.
The problem is that many of the try-and-see interventions prescribed by pseudonutritionists are not backed by science, but rather based on unsupported pseudoscientific ideas. In many cases, we actually have data suggesting they won’t work—for example, removing milk to reduce mucus, an idea numerous studies have found to be false.
When an unbacked intervention is prescribed, the patient has no context for whether it’ll work or not, and yet will walk away thinking they have a problem with a given food thanks to the placebo and nocebo effects. Now, not only have they not solved the problem, but they’ve removed a harmless food from their diet and increased dietary restriction, none of which is fair to the patient.
The placebo effect is only the first and most obvious reason why the try-and-see approach leads so often to erroneous results, however. Sometimes after trying-and-seeing, a problem has legitimately gone away—which leads us to the second major problem with this approach.
Post Hoc Ergo Propter Hoc and Confirmation Bias
Ever had a friend recommend Emergen-C, echinacea, or zinc when you have a cold? They swear it works for them, and sure enough, after you’ve taken it for a few days your cold is getting better and going away—clearly evidence that it works, right? Not so fast! That cold was on its way out regardless of any home remedies you attempted; it’s just the timeline of events (feel sick → take remedy → feel better) that makes it look like your particular solution worked. Enter the “post hoc ergo propter hoc” fallacy, more simply known as the post hoc fallacy.
Post hoc ergo propter hoc is Latin for “after this, therefore because of this,” and it names an extremely common logical fallacy. Anyone who believes their wardrobe affects their favorite team’s performance is guilty of it, as are superstitious people in general and most try-and-seers—they assume that because they do something (wear a certain shirt, break a mirror, remove gluten from their diet), something else happens (their team wins, they have a bad day, they feel better). It should be obvious in the first two cases that the two events are not related, but even in the last case the conclusion is drawn from nothing more than a sequence of events.
The post hoc fallacy goes hand-in-hand with another major problem with the way the human brain processes information: confirmation bias. Confirmation bias leads us to focus on and remember more clearly the evidence that supports our hypothesis, and to downplay and forget the evidence that contradicts it. For example, when we believe that our partner always leaves the toilet seat up, we never remember the times he puts it back down—and when such a time is pointed out, it’s considered an exception, not a commonplace occurrence.
In the case of the try-and-see approach, the combination of the post hoc fallacy and confirmation bias leads us to…
- Be more aware of any potential “good” feelings (“Hey! My stomach feels good right now!”)
- Downplay or forget any potential “bad” feelings (“I had a really small stomachache earlier, but it was only because I was in a rush and ate my lunch too fast. I don’t remember having any stomachaches yesterday or the day before, though.”)
- Assume that our intervention is the cause of any “good” feelings (“I cut gluten out last week so that’s why I’m doing so much better.”)
When you also account for the placebo effect priming our brains to expect a positive outcome, it would be surprising if you didn’t feel at least a little better during an intervention of any type.
We’re All Guilty
This is the tricky part—we’re all guilty! We might not all use try-and-see approaches to nutrition (and here there are clearly many tiers, from those who end up with an exhaustive list of food “sensitivities” to those who won’t change their diet even for the best reasons in the world), but we all have areas where we allow ourselves to be blind. You must actively open your eyes and think critically about your decisions—you must always be vigilant—in order to avoid being trapped by logical fallacies and cognitive errors.
That’s part of my mission here: not only to explore and educate about specific nutritional issues, but also to talk about nutrition from a critical and scientific angle—to introduce you to nutritional skepticism and skepticism in general—so you can make more rational decisions even about the topics I haven’t discussed. Being skeptical until convinced otherwise is your best path towards effective nutrition (not to mention training and climbing, but that’s a whole ‘nother topic!).
What to Do Instead of Try-and-See
First and most importantly (especially if it’s a serious problem), see a professional. Unfortunately, there’s no easy metric for finding a good, science-based dietitian or nutritionist—in my experience, both are equally prone to believing unproven and disproved nutritional hypotheses. Your best bet is to compile a list of “tells” (like homeopathy, fad diets, and one-size-fits-all interventions) and avoid nutrition professionals who advertise them.
Second, if your problem is less serious and more chronic, focus on the major factors (overall diet, exercise, sleep quality and quantity, stress) rather than honing in on a minor factor like removing (or adding) a single food. Individual foods are much more likely to cause obvious, serious, acute problems (like an allergic reaction) than mild, chronic problems like low energy or an upset stomach. The major factors in your life, on the other hand, all have a tremendous influence on these sorts of problems!
Finally, rely on trustworthy sources, not anecdotes and the advice of friends (or strangers). If you have a question about a particular food or intervention, you are always welcome to send it my way to hear my honest answer.
Don’t Try-and-See
As nice as it would be to add or subtract foods from our diet and get precise, quantifiable results about our health, nutrition doesn’t work that way. Science doesn’t work that way. If you want real, reliable answers, you need blinding and a controlled experiment—something that is beyond the reach of the average person.
I can say with certainty, however, that most of the problems people try to solve with the try-and-see approach aren’t caused by any individual food—outside of allergies, very few conditions are triggered by single foods, and many more have wide-ranging, often disparate causes. With this in mind, the try-and-see approach makes even less sense.
Most importantly, however, I hope you understand why the try-and-see approach is flawed, because in my opinion it marks a dividing line between those who practice and discuss nutrition on a foundation of science and those who build on pseudoscience. You can approach nutrition rationally, where objective experimentation and understanding are the primary goals and personal experience is downplayed as a source of evidence, or you can use anecdotes and subjective feelings as your baseline for nutritional evidence and ignore the frequent and obvious clashes and contradictions in the evidence you receive.
In other words, you can acknowledge and accept the flaws we all have and work to overcome them, or you can pretend like they don’t exist and believe whatever appears most readily to be true. The choice is yours!
A Reader’s Comment

While the placebo effect is indeed a confounding factor in personal experiments where it is impossible to blind oneself, I don’t think it’s productive (or entirely rational) to rule out all anecdotal and personal experience simply because your “experiment” has not been the subject of a large double-blind study. To me this is akin to saying that any personal experience is worthless (and non-informative) unless it is backed up by a rigorous study… which is obviously silly, given that most of our common experiences are not studied by science.
Yes, humans are biased (see the vitamin C example above), and clearly there is a lot of nonsense pseudoscientific nutritionist advice out there, but you do gain information from “try and see.” It’s not as reliable as a study, but the probability that the effect is real increases if it (a) persists and (b) persists after repeated attempts; each positive observation adds information. In other words, we should take a highly skeptical but Bayesian approach.
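To make that “skeptical but Bayesian” framing concrete, here is a minimal sketch of repeated Bayesian updating; every probability in it is an illustrative assumption, not a measured value:

```python
# Minimal sketch of the "skeptical but Bayesian" approach described above.
# Every number below is an illustrative assumption.
prior = 0.10               # skeptical prior: P(intervention really works)
p_better_if_real = 0.80    # P(feel better | real effect), placebo included
p_better_if_fake = 0.50    # P(feel better | no effect): placebo + natural recovery

posterior = prior
for trial in range(1, 6):  # five repeated attempts, all reported "positive"
    # Bayes' rule: P(real | felt better)
    numerator = p_better_if_real * posterior
    posterior = numerator / (numerator + p_better_if_fake * (1 - posterior))
    print(f"after positive trial {trial}: P(real effect) = {posterior:.2f}")

# With a 50% false-positive rate, even five straight "successes" only move
# a skeptic from 10% to about 54% -- persistence is evidence, but weak evidence.
```

The likelihood ratio (0.80/0.50 = 1.6 under these assumptions) controls how quickly confidence grows; the noisier the placebo response, the less each repeated “success” should move you.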
My Reply

I certainly didn’t mean to indict all personal experience, just a particular method very common within the nutrition world. With most personal experience, we never set out to prove one thing or another, but rather come to conclusions (regardless of validity) based on what happens. With a try-and-see approach, however, we attempt to filter our experience into something objective—an impossibility.
Perhaps the bigger issue is that so many of the molecular “victims” of the try-and-see approach have been repeatedly demonstrated to be innocuous. Tell a group of people that MSG causes migraines (a common but mistaken belief), and they’ll start reporting more headaches after eating Chinese food. Tell them you’ve eliminated MSG from their food, and they’ll report fewer headaches. Never mind that no clear link between MSG and headaches has been found in decades of research on the substance—and never mind that anytime you eat meat you release significantly more glutamate into your body and brain—you’ll still elicit anecdotal reports suggesting MSG causes headaches. Personal experience, when funneled through expectation, can very much be wrong.
I would never suggest one stop trusting one’s own experience and retreat into a Cartesian mind/body disconnect—as you mention, disregarding all personal experience would be silly. But it’s still important to recognize that personal experience can lead to wrong conclusions, and that science trumps personal experience.