The internet is absolutely flooded with health advice that is unproven, misleading or just plain wrong. Some bogus claims are pretty obvious – if you don’t believe that germs cause infectious diseases, you’re probably not reading this article anyway. Other false claims can be harder to spot, and may cite real science in an attempt to make them appear more credible. To help you navigate this minefield, here are 10 ‘red flags’ that indicate you should be cautious about trusting health advice. Remember, you should always ask your doctor if you have doubts about what is best for your health.
Checking for a trustworthy source applies to pretty much any type of misinformation, but it’s especially important when it comes to health. What is a trustworthy source? For scientific papers, look for studies that have been published in a peer-reviewed scientific journal. Government health organisations and licensed medical practitioners are generally trustworthy sources, but remember that not all doctors are medical doctors, and that even medical professionals aren’t immune to biases and conflicts of interest. Use common sense and reasonable scepticism when assessing whether an expert or organisation is trustworthy. Do they have a financial interest? Are they being criticised by the rest of the medical community?
It may sound obvious when put this way, but you should be very cautious about claims that a treatment can cure a disease that is generally considered incurable. While any such disease may one day become curable, a treatment that reverses autism probably isn’t hiding in a private clinic in Mexico, and it’s definitely not drinking bleach or urine. Unfortunately, many people are more than happy to profit from other people’s desperation, giving them false hope and often putting their health at risk in the process. The only time someone should accept an unproven treatment is as part of a clinical trial. While taking part in a clinical trial does not guarantee that a treatment will work, or even that it is safe, it does ensure that a minimum standard of safety has been met and that you will be fully informed of the risks involved.
‘Detox’ (detoxification) is a buzzword attached to many practices, but it has a precise meaning – the active removal of toxic substances from the body. Despite how often the term is thrown around, few interventions genuinely qualify as detoxification, and even fewer have actually been shown to produce a detectable health benefit. Some are outright dangerous. The most reliable, safe and effective way to reduce toxic compounds in the body is simply to avoid putting them there in the first place.
When you see claims about detox, ask questions. What specific toxins are being removed? How exactly does the intervention remove these toxins? Are there scientific studies to back this up? If the answer to any of these questions is absent, vague, or doesn’t make sense to you, be sceptical about what is being claimed.
Many scientific ideas were in direct opposition to consensus when they first appeared, only to become the new consensus over time. We tend to remember examples of this happening because they make good stories, but it’s important to be aware that they are the exception and not the rule. If something flies in the face of scientific consensus, it is wrong more often than not, and you should treat such claims with caution. The fact that ‘physicists didn’t believe Einstein at first’ doesn’t make a disgraced doctor’s autism theory any more likely to be true.
Dodgy health claims often cite genuine scientific research that appears to back them up, but they miss out key logical steps. It’s easy to be blinded by science in these situations if you don’t know what to look for, so ask yourself: does the cited research actually test the claim being made, and does each logical step between the evidence and the conclusion hold up?
While we humans like to think of ourselves as logical beings, studies suggest that we are much more likely to base our decisions on emotion and justify our choices with logic afterwards. That’s why it’s important to be particularly cautious about health claims that try to appeal to our emotional side, or that are trying to tell us a story rather than present us with the facts so that we can reach our own conclusions.
For example, ‘A young couple has a beautiful, perfect baby but they don’t want the baby to be vaccinated. Doctors persuade them to get their baby vaccinated, and then a few days later the baby suddenly dies.’ The story is aimed at instilling sadness, fear and possibly anger, but it lacks any useful information, such as a comparison of infant mortality between vaccinated and unvaccinated infants. Other strategies may involve instilling a ‘fear of missing out’ or capitalising on your cynicism (doctors/pharmaceutical companies are hiding this information from you because it would put them out of business!).
You have almost certainly heard that correlation doesn’t imply causation. When a claim implies causation but its sources only show correlation, the author is either being careless with their language or being deliberately misleading. How do you know whether the evidence supports causation? When it comes to human health, randomised, placebo-controlled clinical trials are necessary to support causation, while observational studies can only demonstrate correlation. Even clinical trials can only give a probability that causation exists, not a guarantee (though that probability can become very high if the trial is large enough).
This is not to say that you should never act on correlation without causation, but it’s important to be aware of the distinction between the two. Remember that confounding variables and reverse causation might be at play. For example, suppose a study finds that people with a high BMI are more likely to be depressed. Is being overweight making people depressed, or is depression affecting their dietary habits? What if other factors (like lack of exercise, smoking, or old age) are contributing to both? Even if a study says it controlled for such factors, it’s usually impossible to control for all of them perfectly.
Nothing is actually ‘too good to be true’ – there’s nothing to say a wonder drug or device that reverses obesity overnight isn’t going to be discovered tomorrow. However, such discoveries are extremely rare, and when something sounds too good to be true, it usually is. Some claims advertise benefits like rapid weight loss or improved eyesight through mundane, inexpensive or effortless practices. Ask yourself ‘If it’s so easy, why is it not common knowledge? Why don’t doctors seem to know about it?’
Learn to recognise how data being presented to you could be biased, or how you yourself may be biased in your interpretation of it. It’s not hard to set up a study or survey in a way that skews the results in the direction the authors want – for example, through small or unrepresentative samples, leading questions, or cherry-picked data.
‘Miracle cure’, ‘wonder drug’ and ‘revolutionary treatment’ are phrases whose only purpose is to sell something or attract attention. Even reputable sources use them sometimes. Unfortunately, while a few discoveries could perhaps be described as miraculous in terms of their impact on human health (penicillin, insulin and vaccination come to mind), these don’t come around often. When a claim uses sensational language, it might be for the relatively harmless purpose of attracting clicks, but it might also be self-promotion. Does the source have any conflicts of interest that might incentivise them to drum up hype?
Title image by rawpixel.com on freepik