In the late 1960s, a 3M scientist named Spencer Silver endeavored to create a super-strong adhesive for the aerospace industry. He failed. Instead, he created a reusable, low-tack adhesive. If Silver and his colleagues had let that failed experiment lie, this wouldn’t be a very interesting opening story. Luckily for me—and for anyone who has ever worked in an office—Silver’s “failed” adhesive was recognized for its potential, and the Post-it note was born.
We can take a note from 3M’s playbook here, because most companies' experimentation cultures prize “positive” results and dismiss experiments that don’t confirm hypotheses. In this discussion, we’ll explore how null results can still provide crucial information for decision-making. By the end, I hope you’ll agree with me that null results can still be noteworthy.
It’s an open secret in academia that there is a publication bias: researchers are more likely to submit—and editors are more likely to publish—papers with positive results, leaving null or inconclusive findings in the proverbial “file drawer,” never contributing to our collective knowledge about a subject. This problem is so prevalent that it has earned its own name: the “file-drawer problem.”
Businesses face the same challenge—sometimes with even fewer guardrails and stronger incentives to promote winning experiments. While academia has tried to address this trend, albeit only partially, by establishing journals focused solely on replication and requiring researchers to conduct meta-analyses, most companies have no formal mechanism for preserving "unsuccessful" experimental knowledge.
There is high-stakes pressure to find winners. Data scientists and analysts often work in environments where team leaders must justify resource spending with positive ROI from experimentation. Company goals are frequently focused on wins, not learning outcomes.
This, unfortunately, creates a perfect storm for experimental bias. Four common temptations include:
However, this “winner-take-all” culture creates organizational blind spots that can pose significant costs to the business. A few examples include:
Perhaps most damaging, however, is that this type of culture stifles innovation. In environments where only positive results are valued, there’s a tendency to gravitate toward safe, incremental experiments with a high probability of success, rather than bold explorations that might yield transformative insights—even if they initially show null results.
In this article, we will use the terms “failed” or “non-winning” results as a catch-all for experiments that do not produce the anticipated effect in a statistically significant way. Some organizations may refer to these as “flat,” “null,” “in the noise,” or “non-significant.”
Not all failed experimental results hold the same weight. The difference between a wasteful experiment and a valuable learning opportunity often lies not in the outcome, but in the experimental design and execution. Several factors distinguish informative “non-winners” from experiments that fail to generate useful insights.
A clear, testable hypothesis should be the bedrock of every experiment—full stop. Even outside a strict scientific framework, a hypothesis like “Changing the button color will increase conversions” is far more valuable than one that simply aims to “improve the website.” This specificity allows for better learning opportunities, even when the answer is “no.”
Experiments that provide meaningful insights typically:
Methodology plays a critical role in the credibility of any experiment—especially one with a non-winning result. Consider two A/B tests with statistically non-significant outcomes: Experiment 1 ran for two days with a small sample size; Experiment 2 ran for two weeks with sufficient power. Experiment 2 is the “failure” worth learning from because its setup is sound.
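To make “sufficient power” concrete, here is a minimal sketch of an up-front sample-size calculation for a two-proportion A/B test, using only the Python standard library. The baseline rate and minimum detectable effect are hypothetical figures, not a recommendation.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_baseline, p_treatment, alpha=0.05, power=0.80):
    """Approximate per-variant sample size for a two-sided
    two-proportion z-test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power=0.80
    variance = p_baseline * (1 - p_baseline) + p_treatment * (1 - p_treatment)
    effect = abs(p_treatment - p_baseline)
    return ceil(((z_alpha + z_beta) ** 2) * variance / effect ** 2)

# Hypothetical scenario: 10% baseline conversion, aiming to detect
# a 1-percentage-point lift.
n = sample_size_per_variant(0.10, 0.11)
print(f"Need ~{n:,} users per variant")  # roughly 14,700 per variant
```

Running this check before launch is what separates the two experiments above: a flat result from a test sized this way is informative, while the same flat result from a two-day, underpowered test tells you almost nothing.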
Experiments that yield helpful insights typically include:
Data quality fundamentally determines the value of any experimental result. It’s a “garbage in, garbage out” situation. A properly conducted experiment showing no effect is much more valuable than a flawed experiment showing a dramatic effect. The former provides potential guidance; the latter can lead businesses astray.
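One concrete data-quality check worth running on every experiment is a sample ratio mismatch (SRM) test: if an intended 50/50 split delivers noticeably unbalanced traffic, the assignment mechanism is suspect and the result—winning or flat—should not be trusted. A minimal sketch using a one-degree-of-freedom chi-square test (standard library only; the traffic counts are made up):

```python
from math import erfc, sqrt

def srm_p_value(n_a, n_b):
    """Chi-square goodness-of-fit p-value against an expected 50/50 split.
    With 1 degree of freedom, the survival function is erfc(sqrt(x / 2))."""
    expected = (n_a + n_b) / 2
    chi2 = (n_a - expected) ** 2 / expected + (n_b - expected) ** 2 / expected
    return erfc(sqrt(chi2 / 2))

# Hypothetical traffic counts for two variants.
print(srm_p_value(5000, 5100))  # ~0.32 -> plausible 50/50 split
print(srm_p_value(5000, 5500))  # tiny p-value -> investigate an assignment bug
```

A very small p-value here means the imbalance is unlikely to be chance, so the experiment’s conclusions should be quarantined until the assignment pipeline is debugged.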
You should have increased confidence in an experiment that has the following:
Context transforms raw results into organizational knowledge. In a few quarters, when someone proposes a similar idea, a well-documented null result in a central repository prevents the organization from revisiting a dead end.
Helpful documentation should include:
The communication of results determines whether learning spreads. A transparent report that clearly states, “We found no evidence to support the hypothesis that feature X improves metric Y (credible interval: −0.5 to +0.5),” can provide helpful guidance to teams.
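The interval in a statement like that is what makes a null result actionable: it bounds how large any real effect could plausibly be. Below is a sketch that computes a frequentist 95% confidence interval for the lift between two conversion rates (the Bayesian credible interval in the quote would come from a posterior instead; all counts here are hypothetical):

```python
from statistics import NormalDist

def diff_conf_interval(conv_a, n_a, conv_b, n_b, confidence=0.95):
    """Wald confidence interval for (rate_b - rate_a) between two
    conversion rates, via the normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    se = (p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b) ** 0.5
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Hypothetical A/B result: 1,000/10,000 vs. 1,020/10,000 conversions.
lo, hi = diff_conf_interval(1000, 10_000, 1020, 10_000)
print(f"Lift: 95% CI [{lo:+.3%}, {hi:+.3%}]")
# The interval spans zero -> no evidence of an effect, yet it also
# rules out any lift much larger than ~1 percentage point.
```

Reporting the interval, not just “not significant,” lets the next team decide whether an effect that small would even be worth chasing.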
To increase the likelihood of accepted learnings, engage in the following:
Don’t forget: experimentation is an iterative process, and a “failed” experiment is just one phase in that cycle. Its learnings can inform a redesigned follow-up test.
The initial letdown of a "failed" experiment often blinds organizations to the important long-term value these outcomes can provide. By systematically documenting and sharing non-winning experiments, companies can unlock additional business benefits.
Highlighted below are some approaches that require intentional effort but will allow your business to maximize ROI on experimental investment by documenting learnings from all experiments.
Not all experiments that fail to confirm your hypothesis yield useful learnings. Understanding the different categories of "non-winner" experiments can help teams better classify, document, and garner insights from their work. Below are a handful of null experimental outcomes that deserve attention.
These experiments directly contradict conventional wisdom or challenge deeply held organizational beliefs. Their value lies in their ability to overturn "common knowledge" that may be leading the organization astray.
They can prevent organizations from making decisions based on external industry myths or outdated internal heuristics, creating space for more promising approaches as the team moves away from false assumptions.
Some experiments yield non-winning results overall but show interesting patterns when segmented or reveal interactions between variables that weren't the primary focus of the analysis. These experiments often provide a more nuanced understanding of factors at play in your customer ecosystem than a straightforward "win" would. They highlight complexities that simple A/B testing might miss and point toward more sophisticated approaches for future experimentation.
Sometimes experiments “fail” not because the hypothesis was wrong, but due to infrastructure limitations, data quality issues, or technical constraints that weren’t obvious beforehand. These failures provide crucial information for technical roadmaps and infrastructure planning to avoid major problems down the road.
Even the most valuable experimental results provide no benefit if they’re ignored or misunderstood—especially failed test results. Communicating experimental outcomes that didn’t “win” requires skill to ensure the insights are properly understood and applied. Below are a few strategies for presenting non-winning test results to maximize learning and decision-making impact.
Embracing "non-winning but noteworthy" results can transform how you approach experimentation. By not devaluing or ignoring results that don’t confirm their hypotheses, organizations build more robust knowledge, avoid repeating mistakes, and create a culture where genuine innovation can flourish.
At Concord, we believe that every experiment generates value when approached with rigor and curiosity. Our team of experienced data scientists and experimentation SMEs has helped organizations across various industries transform their experimental approaches, building comprehensive learning systems that capture insights from every experiment—whether it confirms hypotheses or challenges assumptions.
Are you ready to extract more value from your experimentation program? Contact Concord today.
Not sure on your next step? We'd love to hear about your business challenges. No pitch. No strings attached.