A/B Testing & Experimentation

Adapting Causal Inference to Real-World Constraints: Part 2

By Tracy Burns-Yocum

What if you can't randomize? Learn how to uncover causal impact using difference-in-differences when true A/B testing isn't possible.

In a perfect world, you'd have full control over your experiment—random selection, random assignment, clean data, and ideal timing. But in the real world, business constraints, regional rollouts, and logistical challenges often make true A/B testing difficult. Randomized, controlled A/B tests are the gold standard for causal inference because they rely on both random selection and random assignment. However, if you can only implement one, random assignment is more critical.

In Part 1 of this series, we explored how to move forward when a control group isn't available. In Part 2, we'll tackle another common challenge: what to do when randomization is not an option. While random assignment is central to strong causal inference, it’s not always feasible. Fortunately, there are several alternative approaches, and choosing the right one depends on a range of factors specific to your use case.

Let’s take a closer look.

Key Terms

Let’s start by defining some common terms that will be used frequently in the following sections.

Randomization: There are two main types of randomization in experiments. The distinction between them is important, as they are often conflated.

  • Random Sampling/Selection: This is when customers are selected randomly from the entire customer population. While not necessary for an experiment, random selection helps make the results more generalizable.
  • Random Assignment: This involves randomly assigning customers to either the treatment or control group. Random assignment is essential for an experiment. A brief sketch contrasting the two appears below.
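
To make this distinction concrete, here is a minimal sketch in Python using a hypothetical list of customer IDs; the population size, sample size, and 50/50 split are illustrative assumptions rather than recommendations.

    import random

    random.seed(42)  # for a reproducible illustration

    # Hypothetical customer population
    customer_population = [f"cust_{i}" for i in range(10_000)]

    # Random sampling/selection: choose who participates in the experiment at all.
    participants = random.sample(customer_population, k=1_000)

    # Random assignment: split those participants into treatment and control at random.
    random.shuffle(participants)
    treatment_group = participants[:500]
    control_group = participants[500:]

    print(len(treatment_group), len(control_group))  # 500 500

Random selection governs who enters the experiment and mainly affects how generalizable the results are; random assignment governs who receives the treatment and is what makes the causal comparison fair.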

An Example

The Scenario: Your company wants to evaluate the impact of a new marketing campaign aimed at increasing sales in a specific region of the US.  

The Problem: You cannot randomize the marketing campaign roll-out across regions because the campaign targets only one region. In this scenario, your company’s testing team can neither randomly select customers from the broader population nor randomly assign them to receive the campaign or not.

Potential Alternative to Traditional A/B Testing: Difference-in-Differences (DiD)

DiD is a quasi-experimental design for inferring causal relationships from natural experiments, in situations where the hallmarks of controlled experiments, such as randomization to ensure group equivalence, are not possible.

Here are the basic steps for a DiD:

1. Identify Groups. Identify a region that could serve as the “control” group. This region should not be slated to receive the campaign and should be similar to the region receiving it.

2. Collect Outcomes Pre-Campaign and Post-Campaign. Collect data on your outcome metric (e.g., sales) for both regions before and after the campaign launch.

3. Compute the Difference in Differences. There are several differences to calculate in a DiD design:

  • First, calculate the change in your outcome metric for the treatment group from before to after the campaign.
  • Second, calculate the same change for the control group.
  • Third, the “difference in differences” is the difference between those two changes (a small worked sketch follows below).
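
To make the arithmetic concrete, here is a minimal worked sketch in Python. The regions and sales figures are hypothetical; in practice these values would come from your own pre- and post-campaign data.

    # Hypothetical average monthly sales (in $K) before and after the campaign launch
    treatment_pre, treatment_post = 520.0, 610.0   # region that received the campaign
    control_pre, control_post = 500.0, 540.0       # comparable region with no campaign

    # Change within each group
    treatment_change = treatment_post - treatment_pre   # +90.0
    control_change = control_post - control_pre         # +40.0

    # The "difference in differences": the campaign's estimated effect
    did_estimate = treatment_change - control_change    # +50.0

    print(f"Treatment change: {treatment_change:+.1f}")
    print(f"Control change:   {control_change:+.1f}")
    print(f"DiD estimate:     {did_estimate:+.1f}")

Here the control region's +40 change stands in for what would have happened in the treatment region without the campaign, so the remaining +50 is attributed to the campaign itself.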

If the outcome metric increases more in the treatment region after the campaign than it does in the control region, this suggests the campaign had a positive impact on sales. If the change is similar in both regions, the campaign likely had minimal impact.

As with the methods in Part 1, DiD comes with a set of assumptions that must be met for valid inference. Additionally, there are two types of DiD—standard and reverse—as well as special cases (such as interrupted time-series). Knowing when to use the DiD method and which version to implement depends on your specific scenario and the data available. Because DiD is a quasi-experimental method, it provides weaker evidence for causal relationships than a controlled A/B experiment. Therefore, quasi-experiments should be used only when it is impossible or impractical to run a true experiment.
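
In practice, the standard DiD estimate is often obtained from an ordinary least squares regression with a treatment-by-period interaction term, which also makes it straightforward to add covariates and confidence intervals. The sketch below uses pandas and statsmodels on hypothetical weekly sales data; the column names and figures are assumptions for illustration, and it presumes the key assumptions behind DiD (notably, parallel pre-campaign trends between the two regions) have been checked.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per region per observation period.
    # 'treated' = 1 for the campaign region, 'post' = 1 for periods after launch.
    df = pd.DataFrame({
        "region": ["control"] * 4 + ["treatment"] * 4,
        "post":   [0, 0, 1, 1, 0, 0, 1, 1],
        "sales":  [498, 502, 538, 542, 518, 522, 608, 612],
    })
    df["treated"] = (df["region"] == "treatment").astype(int)

    # The coefficient on the treated:post interaction is the DiD estimate
    # of the campaign's effect on sales.
    model = smf.ols("sales ~ treated * post", data=df).fit()
    print(model.params["treated:post"])  # about +50 with these illustrative numbers

With these made-up numbers, the regression reproduces the same point estimate as the hand calculation above; the regression form simply becomes more convenient when you need to adjust for other variables or quantify uncertainty.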

Contact Concord

Concord's team specializes in handling even the most complex and unconventional experimentation cases, delivering precision and actionable insights from your data. With deep expertise in implementing Difference-in-Differences (DiD) designs, we can help you draw reliable conclusions from experiments conducted in dynamic, real-world environments. Whether you're evaluating policy impacts, measuring treatment effects, or navigating challenging data conditions, we have the experience and tools to deliver results you can trust. Let us help you turn your data into a strategic advantage.
