When researchers from the University of Utah, Harvard, and Florida State University announced findings on Reiki for chronic knee osteoarthritis, the headlines promised hope: a non-invasive, drug-free therapy that "significantly reduced symptoms." But **the actual study results told a very different story—one that reveals a persistent problem in how alternative medicine research gets conducted and reported**.

**What Did the Study Actually Find?**

The research involved 132 participants divided into four groups: real Reiki, fake Reiki (called "feiki"), mindfulness training, and a waitlist control group. At the two-month follow-up, both Reiki and mindfulness showed improvements compared to the waitlist control.

But here's the critical finding: Reiki performed almost identically to fake Reiki. The difference between the two "approached statistical significance," which is researcher-speak for "it didn't actually reach statistical significance"—meaning the results could easily be due to chance.

This is a negative study being presented as positive. When a treatment can't beat a placebo version of itself, that's the definition of failure to demonstrate a specific treatment effect. Yet the researchers attempted to rescue their findings by analyzing "trajectories of change" between the one-month and two-month marks, a highly unusual approach that raises questions about whether this analysis was planned before the study began.

**Why Does Reiki Keep Failing These Tests?**

The fundamental problem isn't just with this one study—it's with the entire premise. According to Reiki practitioners, the therapy works by channeling "life force energy" (called "ki") through a practitioner's hands to clear energy blockages and support the body's natural healing. The problem: life force energy, as described in Reiki philosophy, doesn't exist in any scientifically measurable way. This creates an impossible situation.
You cannot scientifically prove that a method works by affecting an energy that doesn't exist. When extraordinary claims require extraordinary evidence, and the mechanism behind the claim has no scientific basis, studies face an uphill battle from the start. The burden of proof becomes much higher—not because of skepticism, but because basic physics and biology don't support the underlying theory.

**What Were the Study's Major Weaknesses?**

Beyond the negative primary results, the research had several methodological problems that undermined its credibility:

- Subjective Outcomes Only: The study measured only self-reported symptoms, with no objective measurements. This means placebo effects and expectations can heavily influence results.
- Unblinded Providers: The practitioners giving Reiki and fake Reiki knew which treatment they were administering. While they followed a script to minimize differences, the fake Reiki providers were instructed to count backwards from 1,000 by sevens in their heads to avoid accidentally giving "real" Reiki—a stark difference in the therapeutic interaction that likely wasn't masked.
- High Initial Dropout: Of 606 eligible participants, 335 refused to participate before the study even began—a refusal rate of about 55%. This suggests people who agreed were predisposed to believe in Reiki or mindfulness, potentially biasing results.
- Assessment Completion Issues: About 10% of participants failed to complete post-treatment assessments, with higher dropout in the fake Reiki group, creating potential asymmetry in the data.

**How to Evaluate Alternative Medicine Claims Critically**

When you encounter headlines about alternative therapies proving effective, here's how to dig deeper:

- Compare to Placebo: Ask whether the treatment beat a placebo or sham version. If not, the "active ingredient" hasn't been demonstrated. Real Reiki performing like fake Reiki is a red flag, not a success.
- Check the Primary Outcome: Look for the main comparison the researchers planned to make before the study started. If the primary result is negative but secondary analyses are positive, that's often a sign of "p-hacking"—running multiple analyses until something looks significant by chance.
- Examine Blinding Quality: In studies of subjective outcomes like pain or symptom severity, both participants and providers should be blinded to treatment assignment. If providers know who's getting the real treatment, they may unconsciously provide better care to that group.
- Assess Biological Plausibility: Does the proposed mechanism align with known biology? If a therapy claims to work through an energy that doesn't exist according to physics, that's a major credibility issue that no amount of positive results can overcome.

**What Does This Mean for Alternative Medicine Research?**

The Reiki study exemplifies a broader pattern in alternative medicine research. When studies are designed carefully with proper blinding and objective outcomes, most alternative therapies fail to show effects beyond placebo. When they do show positive results, it's often because of methodological weaknesses—unblinded providers, subjective outcomes, or selective reporting of analyses.

This doesn't mean all alternative practices are worthless. Mindfulness, for example, showed genuine benefits in this same study compared to both placebo and waitlist control. But mindfulness has a plausible mechanism (changing thought patterns and attention) and doesn't require belief in non-existent energies. The difference matters.

The real issue is how results get communicated. When researchers find that a treatment doesn't beat placebo, calling that a success misleads patients and the public. It also wastes research resources that could be directed toward therapies with actual evidence of effectiveness.
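The "p-hacking" risk mentioned above is easy to quantify. If each unplanned analysis carries a 5% false-positive rate, the chance that at least one of several analyses looks "significant" by luck alone grows quickly. This is generic arithmetic, not the study's actual analysis count:

```python
# Probability that at least one of k independent tests at threshold
# alpha comes up "significant" purely by chance: 1 - (1 - alpha)^k.
alpha = 0.05
for k in (1, 3, 5, 10, 20):
    p_spurious = 1 - (1 - alpha) ** k
    print(f"{k:>2} analyses -> {p_spurious:.0%} chance of a false positive")
```

With ten unplanned analyses the chance of a spurious "positive" is already about 40%, which is why pre-registered primary outcomes carry far more weight than after-the-fact secondary analyses.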
Until alternative medicine research embraces the same rigorous standards as conventional medicine—and honestly reports negative results—headlines promising breakthroughs will continue to outpace the actual science.
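To see why a result that merely "approaches significance" is weak evidence, a short simulation helps. The numbers below (group size, variability, observed gap) are illustrative assumptions, not the study's actual data: even when real and fake Reiki are identical by construction, sizable group differences still appear by chance.

```python
import random

random.seed(42)

# Illustrative simulation: if both groups draw pain-score improvements
# from the SAME distribution (i.e., no real treatment effect), how often
# does the group difference look as large as some observed gap?
n_per_group = 33       # hypothetical: 132 participants split across 4 arms
observed_gap = 0.5     # hypothetical gap on a 0-10 pain scale

def simulate_null_gap():
    # Both groups improve by the same underlying amount (pure placebo).
    a = [random.gauss(1.0, 1.5) for _ in range(n_per_group)]
    b = [random.gauss(1.0, 1.5) for _ in range(n_per_group)]
    return abs(sum(a) / len(a) - sum(b) / len(b))

trials = 10_000
hits = sum(simulate_null_gap() >= observed_gap for _ in range(trials))
p_chance = hits / trials
print(f"Chance of a gap >= {observed_gap} with no real effect: {p_chance:.1%}")
```

With these illustrative numbers, the printed rate lands well above the conventional 5% threshold: a gap this size is entirely ordinary under pure chance, which is exactly why "approached significance" cannot be read as success.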