Many researchers are rushing to validate the efficacy of psilocybin microdosing for a variety of ailments, but are these studies reliable?
It was the psilocybin microdosing study heard round the world. A report from Quantified Citizen published in Scientific Reports led with the victorious title “Psilocybin microdosers demonstrate greater observed improvements in mood and mental health at one month relative to non-microdosing controls.” At first pass this seems like a very promising study, and like clockwork this title was picked up by dozens of publications worldwide, including The Independent, which enthusiastically claimed this study “adds to growing evidence of the therapeutic potential of microdosing.”
But does this study actually provide evidence of therapeutic potential for psilocybin microdosing? No it doesn’t. It doesn’t even come close.
There are many problems with the microdosing study from Quantified Citizen, which I will discuss in some detail. Yet despite the very obvious problems with this study, the glowing headline was picked up and repeated ad nauseam, as if the claims represented some breakthrough in the field. It is worrying that this sloppy research is becoming more common in the domain of psychedelic research. But even more worrying is the media’s rush to repeat any news that appears to support the ongoing narrative of the psychedelic renaissance without any fact-checking. This trend is creeping uncomfortably close to pro-psychedelic propaganda.
So what is wrong with this study?
First of all, the microdosing study very carefully avoids making any claims about treating mood disorders such as depression or anxiety, which would be essential for demonstrating therapeutic efficacy. In a typical drug efficacy study, a target indication would be identified, such as major depressive disorder (MDD) or anxiety related to PTSD. The Quantified Citizen study did not select a target indication; it merely looked at general changes in “mood and mental health,” which is extremely vague terminology to track. Because no target indication was selected, it is impossible to state whether this study demonstrates any therapeutic efficacy. The results are essentially meaningless.
In a typical efficacy study, subjects diagnosed with the target condition, such as depression or anxiety, would be recruited for a longitudinal drug therapy program and matched against a similar group taking an inactive placebo. Neither group would know whether it was taking the drug or the placebo, meaning subjects would be “blinded” to the therapy. To date there have been a handful of blinded, placebo-controlled studies investigating the efficacy of psilocybin microdosing, and they have all been inconclusive, finding microdosing no more effective than placebo. To avoid this obvious problem of inconclusive results, the Quantified Citizen study made no attempt to recruit subjects with pre-existing conditions, and no attempt to provide a blinded placebo control group. Problem solved.
In fact, the Quantified Citizen study takes a clear shot at the common research practice of blinding, stating: “More broadly, parsing direct effects of psychedelics from indirect factors such as set, setting, individual differences, and expectancies presents epistemological and practical challenges, and the study of psychedelics may be best served by going beyond a potentially Procrustean emphasis on blinding and other approaches to maximizing control.” This defensive word salad clearly shows that the researchers have an ideological disdain for strict controls that might undermine the results they wish to achieve. To be clear, this is not a scientific experiment, this is a philosophical demonstration, and any claims of efficacy are undermined not only by the lack of blinding, but also by the statement that placebo controlled blinding is an ineffective means to evaluate a drug. This shady dodge should be enough to discount the entire study. But hold on, it gets worse.
In most drug efficacy studies it is common to strictly control the dose to make sure that every subject receives the same dose of the drug within the same dosing schedule. Controlling the dose is the only way to verify the efficacy of a particular drug treatment. Dose control should be even more important in a psilocybin microdosing study where the difference between a sub-threshold dose and a threshold dose is impossible to gauge with the naked eye. As you might be able to guess, the Quantified Citizen study made no attempt to control for dose.
In fact, the study recruited a large cohort of self-selected microdosers, people who already claimed to microdose in varying doses and schedules, with no attempt to verify how much of the drug each subject was taking. Self-reported microdosers could be taking dried mushrooms from any psychoactive mushroom species, in any dose, with any other supplemental herbs and minerals such as lion’s mane mushrooms and niacin, as advocated in the Stamets Stack. Were the subjects of this study taking true sub-threshold microdoses of psilocybin a few times a week? Were they taking threshold doses every day? Were they taking any other supplements or medications while they were microdosing? Were they taking macrodoses and tripping their face off every weekend? Since the subjects in this study were self-selected and self-reporting through an anonymous phone app, it is impossible to say what they were taking. The subjects of this study could be bots for all we know.
And this is the problem with anonymous survey research: You have no idea what the subjects are actually doing. The subjects of this psilocybin microdosing study could be lying about their doses, or fabricating responses to questionnaires to make their results look better, or could be plants inserted into the survey data to boost results. The expectation that self-selected microdosers would accurately report results that make their practice look ineffective is loaded with bias that is impossible to ignore. And the expectation that rational scientists should take this methodology seriously is disingenuous at best and outright fraud at worst.
But this study is not just bad science; it is a self-serving document designed to benefit the people who produced it. In what has been called an epidemic of “shill science,” this study has one main purpose: to promote the Stamets microdosing stack, a combination of psilocybin mushrooms, lion’s mane mushrooms, and niacin. Coincidentally, the Stamets stack is patented by Paul Stamets, meaning that this study is one big advertisement for an herbal supplement formulation patented by one of the authors. This is a gross misuse of sloppy research to promote an untested herbal supplement, the kind of “science” that is best relegated to the trash bin.
If there is any doubt that this microdosing study is a shady work of self-serving interests, check out the Competing interests section of this study under the heading “Ethics declarations.” There you will find that every author on this study has conflicts of interest related to commercial enterprises they own or represent, often more than one. In the case of Stamets: “Paul Stamets is an investor in Quantified Citizen, MycoMedica Life Sciences, PBC and owns Fungi Perfecti, LLC which sells Lion Mane supplements. He is an applicant on pending patents combining psilocybin mushrooms, Lions Mane mushrooms and niacin.”
It is understandable that people invested in the psychedelic revolution are hungry for positive research results to report, but this study does not provide them. If anything, this study represents everything that is wrong with the psychedelic renaissance: self-serving ideologues who flout best research practices to deliver predetermined results aimed at boosting their bottom line. These baseless studies need to be called out for what they are: propaganda. And the last thing the psychedelic field needs is more propaganda. Enough is enough.
Dear James – our scientific team is happy to explain to you the difference between a prospective observational study and a clinical trial. They are different and the article unfortunately makes it clear that you do not know the difference. Just a few of the mistakes in your commentary:
– there is no blinding in a prospective observational study
– the study OBSERVES & reports on what people are doing
– it’s not a drug efficacy trial
– there is no dose control in an observational study
– there is always bias in studies – we cover that
– the study states clearly what it is and is not
– it was published in Nature’s Scientific Reports, where researchers & scientists critically reviewed it and the data. The vast majority of research submitted to them for publication is rejected.
– it is mandatory to publish conflict of interest in all research so that you can clearly see what biases there may be
– this is an internationally recognized group of researchers working together to advance the science so that it can be taken to clinical trials
I am happy to help your team find a medical student or science student to help you not make these mistakes in your commentaries. Perhaps hiring a science writer would help?
Warmly, Dr. Pamela Kryskow.
Hi Pamela, thank you for taking the time to comment on this article. A couple of notes:
– This article is an editorial, not a scientific review
– The fact that your study is observational in nature does not prevent media outlets from reporting your findings as evidence of therapeutic potential
– I invite you to write correction letters to the media outlets that mischaracterize the nature of your findings. Here are just a few:
Forbes: “These findings join the ranks of many peer-reviewed, legitimate academic studies that look at psilocybin as a hopeful treatment for depression.”
The Independent: “a new study that adds to growing evidence of the therapeutic potential of microdosing.”
Neuroscience News: “The latest study to examine how tiny amounts of psychedelics can impact mental health provides further evidence of the therapeutic potential of microdosing.”
Toronto Sun: “A new study has added to the growing body of evidence that psychedelics will be useful in the treatment of mental health disorders.”
Thanks again for your feedback,
I read the Nature article and appreciate your review. Pamela Kryskow’s response was passive aggressive and defensive. Disappointing, but a reminder that science is only as good as the humans who do it.
Thank you for your article and for calling out the many holes in this “observational study,” as Pamela defensively clarifies it. You didn’t take the bait, but pointed out its ripple effect in the media. I’m sure she’s well aware of the impact of a headline like this and how it may benefit her and the products she’s being paid to promote through research.
We all want to hear good news about a new solution to our problems, but a sweeping claim like this in a headline can leave an imprint on delicate minds that believe everything they read. Integrity in reporting findings and facts, and publishing them without attempting to influence the way people think and the purchases they make, is harder to do but required in our current day and age. And yet there is zero incentive to have any integrity. The climate of rapid media contagion, coupled with short attention spans and no clarity on a study’s limitations, is an equation for propaganda. The conflict-of-interest path is clear here, as it is with so many products and services, and we need more transparent and critical reviews of all consumer products and services. For every “product claim” there needs to be a review like this if we value truth above all else. Most of us don’t have time to find the truth for ourselves, and that is why I appreciate this author’s article.