Constants in an Experiment: What They Are

Most experiments don't fail because of bad data or wrong math.
They fail because something that should have stayed the same didn't. That's the core argument this article makes: constants in an experiment — the conditions you deliberately hold fixed — are not a formality. They are the mechanism that makes your results mean anything at all.
Get them wrong, and you can't tell whether your treatment caused the outcome or whether something else shifted in the background.
This article is for engineers, PMs, and data teams who run experiments — whether in a lab context or, more likely, in product development through A/B tests. If you've ever shipped a feature based on a test result you later couldn't explain or reproduce, this is the article that explains why. Here's what you'll learn:
- What constants are and how they differ from independent and dependent variables
- The difference between physical constants (fixed by nature) and control constants (fixed by you)
- Why controlling constants is the foundation of valid, reproducible results
- How to identify and document constants before an experiment launches — and the specific mistakes that break them
- How constants translate directly to A/B testing, including which settings must stay locked and what happens when they don't
The article moves from concept to practice. It starts with the core definition and logic, then covers the two types of constants and why only one of them requires your active attention, then walks through how to actually identify and maintain them — with specific examples from both scientific and product experimentation contexts.
Constants in an experiment: the condition that makes causation legible
Every experiment rests on a simple logical premise: if you want to know what caused a change, you need to be certain that only one thing changed. That certainty comes from constants.
A constant in an experiment is any quantity deliberately held unchanged throughout the experiment so that observed effects can be attributed solely to the variable being tested — not to background noise, shifting conditions, or uncontrolled factors.
As one chemistry resource puts it bluntly: "Remove the control variables, and you basically have no experiment." That's not hyperbole. It's the logical foundation of valid experimental design.
You'll encounter the term used interchangeably with "control variable" and "constant variable" across scientific literature. These are synonyms for the same concept. This article uses "constant" throughout, but don't be thrown when you see the other terms in the wild.
They all refer to the same thing: a condition you've committed to keeping stable so your results mean something.
Constants are about what you're allowed to conclude, not just procedural tidiness
When you hold a factor steady across all trials or conditions in your experiment, you're making a deliberate claim: this factor is not the explanation for what I'm observing.
Every constant you maintain is one fewer alternative explanation for your results.
Without constants, you're not running an experiment — you're running an observation with too many moving parts to interpret. If you're testing how a chemical reacts to different compounds but you're also varying the temperature, the volume, and the purity of your reagents between trials, you have no basis for concluding that the compound choice drove the outcome.
The constants are what make the independent variable's effect legible.
Constants occupy a distinct logical position from the variables you test and measure
The three-variable framework is worth stating precisely, because conflating these categories is one of the most common errors in experimental design.
The independent variable is what the researcher deliberately changes — the thing being tested. In a chemistry experiment, this might be which compound is added to a solution. The dependent variable is what the researcher measures — the observed outcome, such as the reaction that follows.
The constant is everything else that could plausibly affect the outcome but is intentionally held steady: temperature, sample volume, reaction time, chemical purity.
The constant sits in a distinct logical position from both other variable types. It's neither the cause being tested nor the effect being measured. It's the stable background against which the experiment runs.
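To make the three roles concrete, here's the chemistry example expressed as a plain data structure. A minimal sketch: the names are illustrative, and nothing here is specific to any tool or field.

```python
# Illustrative only: the three-variable framework for the chemistry
# example above, expressed as a plain data structure.
design = {
    "independent_variable": "compound_added",   # deliberately changed
    "dependent_variable": "reaction_observed",  # measured outcome
    "constants": [                              # held steady by the researcher
        "temperature",
        "sample_volume",
        "reaction_time",
        "chemical_purity",
    ],
}

# A factor can occupy only one role. If "temperature" appeared both as
# a constant and as something you vary, the design is broken.
roles = [design["independent_variable"], design["dependent_variable"], *design["constants"]]
assert len(roles) == len(set(roles)), "each factor must have exactly one role"
```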
Researchers who treat constants as an afterthought — something to mention in a methods section rather than actively manage — tend to produce results that don't replicate and conclusions that don't hold.
Two categories of constants, only one of which demands your attention
Not all constants are the same kind of thing, and the distinction matters for how you work with them. The next section of this article covers both categories in depth, but a brief preview is useful here.
The first category is physical constants — values like the speed of light, pi, or Avogadro's number that are universal and unchanging by nature. These aren't decisions a researcher makes; they're features of reality that show up in calculations and models.
The second category is control constants — the researcher-imposed decisions to hold specific experimental conditions steady. Temperature, pH, sample size, measurement timing: these are all control constants. They don't stay fixed because the universe requires it.
They stay fixed because the researcher decided they should, and then enforced that decision throughout the experiment.
For anyone designing an experiment — whether in a lab, a clinical setting, or a product analytics context — the category that demands active attention is the second one. Physical constants take care of themselves. Control constants don't.
They require planning, documentation, and discipline to maintain. The rest of this article focuses primarily on that work.
Physical constants vs. control constants: two distinct categories
Not all constants in an experiment belong to the same category. Treating them as a single undifferentiated concept creates confusion about what a researcher actually controls versus what they simply rely on.
There are two fundamentally different types, and understanding which one you're working with determines whether you have any design work to do at all.
Physical constants: fixed by nature, not by choice
Physical constants are universal values that exist independently of any experiment. Pi, the speed of light, Avogadro's number — these are "unchanging values fundamental to scientific calculations and theories," forming the bedrock of scientific laws and principles.
No researcher decides to hold pi constant. No lab protocol needs to specify that the speed of light will remain unchanged between trials. These values are given. A researcher relies on physical constants; they do not manage them.
This is the defining characteristic of a physical constant: it is the same in every lab, in every country, in every era. It requires no decision, no documentation, and no monitoring. It simply is.
Control constants: deliberate decisions that require active maintenance
Control constants — also called control variables in the scientific literature, with both terms used interchangeably — are a different matter entirely. These are quantities that researchers intentionally hold steady throughout an experiment so that any observed changes in the outcome can be attributed to the variable being tested, not to shifting background conditions.
A concrete list of what control constants look like in chemistry and the broader sciences includes: temperature, humidity, pressure, experiment duration, sample volume, the technique used to conduct the experiment, species selection, and chemical purity.
What these have in common is that none of them hold themselves steady. A researcher must decide to control them, and then must actively maintain that control across every trial.
The plain-language summary captures the scope well: "Essentially, anything that you keep the same between two or more experiments is something you control." That breadth is worth sitting with.
The volume of solution used, the time allowed for a reaction, the specific instrument technique — all of it is up for grabs unless a researcher explicitly locks it down.
The practical test: does the value exist without you, or only because you decided it should?
The practical test is straightforward: ask whether the value exists independently of the experiment, or whether a researcher must actively decide to hold it steady. If the answer is the former, it's a physical constant. If the answer is the latter, it's a control constant.
There's another useful signal. A control constant in one experiment can become the independent variable in another. Temperature might be held constant in a study examining the effect of pH on a reaction rate, but temperature itself becomes the variable under investigation in a different study.
Physical constants don't work this way — pi is never the independent variable. That context-dependence is the fingerprint of a control constant.
Control constants are active design decisions; physical constants are not
The practical implication is direct: when designing, running, or auditing an experiment, the constants that require your attention are control constants. Physical constants are background infrastructure. Control constants are active design decisions that can succeed or fail depending on how carefully they're identified and maintained.
The entire subsequent work of experimental design — identifying what to hold constant, documenting it, and maintaining it throughout execution — operates in the control constant category. Physical constants don't ask anything of you. Control constants ask quite a lot.
Uncontrolled constants don't just add noise — they make results uninterpretable
The prior section established what control constants are and who is responsible for maintaining them. This section addresses what actually happens when that responsibility is neglected: not in the abstract, but in the specific, recognizable ways that experimental results break down.
The failure is not that results become noisier. It's that the logical connection between your treatment and your outcome breaks entirely — you can no longer claim that what you changed caused what you measured.
Constants and internal validity: the logical foundation of any experiment
Internal validity is the degree to which you can confidently attribute an observed outcome to the independent variable you changed, rather than to something else that happened to shift at the same time. Constants are what make that attribution possible.
The mechanism is straightforward. If two experimental conditions differ in more than one way, you cannot know which difference caused the outcome. Suppose you're testing the effect of a new fertilizer on plant growth, but across your trials you also vary the volume of water each plant receives.
Now any difference in growth could be explained by the fertilizer, the water, or some interaction between them. The question you set out to answer becomes unanswerable.
This is why constants aren't mere procedure: they're the logical structure that gives an experiment its meaning. Without them, observed effects cannot be reliably attributed to the independent variable alone.
Constants allow researchers to "be sure that any changes in the outcome are due to the variable they're interested in." That's not a minor benefit. That's the point.
Reproducibility: why inconsistent constants undermine trust in results
Even if an experiment produces a compelling result, that result is only scientifically meaningful if someone else — or the same team, six months later — can run the same experiment and arrive at the same conclusion. Reproducibility is what separates a reliable finding from a one-time observation.
Constants are the mechanism that makes reproducibility possible. If experimental conditions weren't documented and held steady, there's no stable procedure to replicate. The need for constants stems from "duplication of results or consistency in results."
If uncontrolled variables were allowed to shift between runs, the results would shift with them, which "would completely defeat the purpose of experimenting."
This matters not just for scientific credibility but for institutional trust. When a team reports a result that can't be reproduced, the problem is rarely the analysis — it's usually that the conditions weren't actually the same the second time around.
Inconsistent constants are one of the most common and least-examined culprits.
The cost of getting this wrong: wasted effort and bad decisions
For researchers and product teams, the practical stakes are significant. Failing to control constants doesn't just add noise — it can produce false positives (acting on a result that isn't real) or inconclusive results (running an experiment that can't answer the question it was designed to answer).
Both outcomes waste resources and, over time, erode confidence in the entire experimentation program.
Pre-experiment guidance from the GrowthBook team is built explicitly around preventing these failure modes. The framing is direct: "Poorly planned experiments waste time and lead to bad decisions." Rigorous pre-experiment planning — which includes identifying and locking down experimental conditions — is positioned not as bureaucratic overhead, but as the prerequisite for moving fast with reliable data.
That framing is worth internalizing. Teams that treat constant-control as optional often discover its importance only after they've shipped a feature based on a result they can no longer reproduce or explain.
The discipline of controlling constants isn't what slows experimentation down — it's what makes the results worth acting on.
Identifying constants is a formal design step, not an implicit one
Identifying constants is not an automatic step. It requires deliberate work before an experiment begins, and teams that skip it tend to produce results that are either inconclusive or actively misleading.
The practical question is how to avoid that outcome — which starts with a systematic process for identifying what must be held constant before a single data point is collected.
Enumerate every factor that could independently explain your outcome
Before you can decide what to hold constant, you need to enumerate every factor that could plausibly affect your dependent variable. In a chemistry experiment, that list might include temperature, pH, sample volume, reaction time, and chemical purity.
Biology experiments add species selection, environmental conditions, and measurement instruments to that inventory. Product experiments extend it further still — traffic sources, user segments, device types, time of day, experiment duration, and how user exposure is defined all belong on the list.
The goal of this step is completeness. Any factor you fail to identify cannot be deliberately controlled — and an uncontrolled factor that happens to shift between your test and control groups becomes an alternative explanation for whatever outcome you observe.
The identification process is essentially asking: what else, besides the treatment, could explain a difference in results?
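As a sketch of what that inventory can look like for a hypothetical product experiment (the factor names are illustrative, and a real list would be longer):

```python
# Hypothetical factor inventory for a product experiment on a new
# checkout flow. Completeness is the goal: a factor missing from this
# list cannot be deliberately controlled.
candidate_factors = [
    "checkout_flow_version",  # the treatment under test
    "conversion_rate",        # the outcome to be measured
    "traffic_source",
    "user_segment",
    "device_type",
    "time_of_day",
    "experiment_duration",
    "exposure_definition",    # which users count as having seen the treatment
]
```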
Assign each factor a role: independent variable, dependent variable, or constant
Once you have a complete list, you need to decide what role each factor plays: independent variable (intentionally changed), dependent variable (measured), or constant (held fixed). The decision rule is straightforward: if a factor could independently explain the outcome and it isn't the variable you're testing, hold it constant.
In practice, this is where specific decisions get made. Experiment duration, for example, must be fixed because traffic patterns vary across days of the week. Ending a test before capturing a full week of data — including weekend behavior — introduces a systematic bias that has nothing to do with the treatment.
Similarly, the definition of user exposure must be held constant: including users who never actually encountered the treatment inflates noise and dilutes any real signal. In chemistry, reaction time and reagent purity must be fixed for the same reason — they are known to affect outcomes independently of whatever variable is being tested.
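One way to make that assignment explicit, and to catch factors that were enumerated but never given a role, is a small audit step before launch. A minimal sketch built on the hypothetical inventory above, not any platform's API:

```python
def audit_roles(factors, independent, dependent, constants):
    """Raise if any enumerated factor was never assigned a role.

    A factor with no role is an uncontrolled factor: if it happens to
    shift between groups, it becomes an alternative explanation.
    """
    assigned = {independent, dependent} | set(constants)
    unassigned = [f for f in factors if f not in assigned]
    if unassigned:
        raise ValueError(f"assign a role before launch: {unassigned}")

audit_roles(
    factors=[
        "checkout_flow_version", "conversion_rate", "traffic_source",
        "user_segment", "device_type", "time_of_day",
        "experiment_duration", "exposure_definition",
    ],
    independent="checkout_flow_version",
    dependent="conversion_rate",
    constants=[
        "traffic_source", "user_segment", "device_type", "time_of_day",
        "experiment_duration", "exposure_definition",
    ],
)
```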
Documentation before launch is the only reliable enforcement mechanism
Identifying constants is only half the work. They must be formally documented before the experiment launches and actively maintained throughout its run. Informal agreement or shared memory is not sufficient — teams change, experiments run longer than expected, and undocumented decisions get revisited at exactly the wrong moment.
Some platforms enforce certain constants at the infrastructure level. GrowthBook, for instance, uses a consistent hashing algorithm to ensure that the same user always receives the same variation as long as the experiment settings remain unchanged. That handles assignment consistency automatically.
But duration, exposure definition, and minimum sample size thresholds — a reasonable baseline is at least 200 conversion events per variation — still require deliberate human decisions made before launch and recorded somewhere the team can reference them.
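What "documented before launch" can look like in practice is a settings record the team commits to, plus a check that refuses to launch without it. A minimal sketch with invented field names, using the one-full-week and 200-events baselines discussed above:

```python
# Hypothetical pre-launch record of the constants a team has committed to.
# Writing these down as config makes the documentation step enforceable
# rather than a matter of shared memory.
locked_settings = {
    "statistical_method": "bayesian",      # chosen before launch, never switched
    "primary_metric": "conversion_rate",
    "min_duration_days": 14,               # must cover at least one full week
    "min_conversions_per_variation": 200,  # baseline discussed above
    "exposure_definition": "viewed_checkout_page",
}

def ready_to_launch(settings: dict) -> bool:
    """Refuse to launch until every required constant is documented."""
    required = [
        "statistical_method", "primary_metric", "min_duration_days",
        "min_conversions_per_variation", "exposure_definition",
    ]
    missing = [key for key in required if not settings.get(key)]
    if missing:
        raise ValueError(f"document these constants before launch: {missing}")
    if settings["min_duration_days"] < 7:
        raise ValueError("duration must cover at least one full week of traffic")
    return True

assert ready_to_launch(locked_settings)
```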
The failure modes that appear when constants are never explicitly locked in
Several failure modes appear consistently in practice. The most common is under-specifying test duration — ending an experiment before it captures a representative sample of traffic, including weekends. A related mistake is defining exposure too broadly, pulling in users who never saw the treatment and thereby adding noise that makes real effects harder to detect.
A subtler error is changing experiment settings mid-run. Modifying the experiment seed or hashing ID after a test has started breaks consistent user assignment — what was a constant becomes a variable, and the integrity of the entire dataset is compromised.
Finally, many teams fail to pre-specify a minimum sample size, which leads to premature calls on results that haven't reached statistical reliability.
Each of these mistakes shares a common root: the constants were never explicitly identified, documented, and locked in before the experiment began. The fix is not complicated, but it does require treating constant identification as a formal step in experiment design rather than something that happens implicitly.
Constants in A/B testing: the same logic, different vocabulary
If you've ever run an A/B test, you've already been working with constants in an experiment. You probably just haven't called them that. Every time you configure a statistical engine, set a minimum test duration, or decide which users count as exposed to a treatment, you're making the same kind of decision a chemist makes when fixing the temperature and pH of a reaction.
The principle is identical: hold the right conditions stable so that any difference you observe can be attributed to the one thing you changed.
From lab to product: mapping scientific constants to A/B testing equivalents
In a chemistry experiment, control constants are the conditions you deliberately hold fixed — temperature, sample volume, reaction time — so that the independent variable does the explanatory work. In an A/B test, the independent variable is the change you're testing (a new checkout flow, a different headline, a revised pricing page).
The dependent variable is the metric you're measuring (conversion rate, revenue per user, retention). Everything else that could influence the outcome needs to be held constant.
In product experimentation, those constants include the randomization methodology used to assign users to variants, the statistical engine selected for analysis, the primary metric you've committed to measuring, and the rules governing which users are included in the experiment.
Control constants are "quantities that researchers intentionally keep constant during an experiment" so that "any changes in the outcome are due to the variable they're testing." That definition applies just as cleanly to a software experiment as it does to a lab bench.
Analysis settings that must stay fixed throughout a test
The specific settings that function as constants in A/B testing are more numerous than most teams consciously track. The statistical method, whether Bayesian, frequentist, or sequential, must be selected before the test launches and held fixed.
Some platforms, for example, default to Bayesian statistics; switching to a frequentist approach after peeking at interim results doesn't just change the math. It invalidates the analysis entirely.
The same logic applies to statistical adjustment techniques — methods that reduce noise by accounting for pre-experiment differences between user groups. These must be selected before the test launches, not applied retroactively to improve the look of results.
Applying them after the fact is a form of result manipulation, even when unintentional.
Test duration is another constant that deserves explicit treatment. A minimum of one to two weeks is a reasonable rule of thumb — a test that starts on a Friday and ends on a Monday captures a traffic slice that looks nothing like a typical week.
Stopping early because results look promising is functionally the same as violating a control constant: you've changed the conditions under which the experiment runs.
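The full-week requirement is easy to check mechanically. A minimal sketch, assuming you track the test's start and end dates:

```python
from datetime import date, timedelta

def covers_full_week(start: date, end: date) -> bool:
    """True only if the test window spans every day of the week at least once.

    A test that runs Friday through Monday samples a traffic slice that
    looks nothing like a typical week, regardless of how many users it saw.
    """
    days_seen = set()
    day = start
    while day <= end:
        days_seen.add(day.weekday())  # 0 = Monday ... 6 = Sunday
        day += timedelta(days=1)
    return len(days_seen) == 7

# A Friday-to-Monday test fails the check; a two-week test passes.
assert not covers_full_week(date(2024, 5, 3), date(2024, 5, 6))
assert covers_full_week(date(2024, 5, 3), date(2024, 5, 17))
```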
The risks of changing constants mid-experiment
A practitioner observation from a widely discussed thread on A/B testing put it plainly: "I don't think the mathematics is what gets most people into trouble. What gets people are incorrect procedures." That observation cuts to the heart of why mid-experiment constant changes are so damaging. The math is often fine. The procedure is where things break.
Changing traffic allocation mid-test disrupts the randomization balance between variants. Changing the statistical method after seeing preliminary data introduces selection bias into the analysis.
Changing the primary metric mid-run is equivalent to deciding, halfway through a chemistry experiment, that you're now measuring a different reaction product. None of these changes are recoverable through statistical adjustment after the fact.
GrowthBook includes sticky bucketing as part of its experimentation platform precisely because this risk is real — when experiment settings must change mid-run, consistent user assignment still needs to be guaranteed.
The fact that the platform built a dedicated capability to handle this edge case is itself evidence that changing constants mid-experiment is a recognized failure mode with genuine consequences.
Platform-level controls that enforce constant conditions
Modern experimentation platforms operationalize the principle of constants through specific technical mechanisms. GrowthBook's consistent hashing algorithm ensures that the same user always receives the same variant, as long as the experiment seed and user hashing ID remain unchanged.
That guarantee is a platform-enforced constant — the kind of control that would otherwise require manual discipline to maintain.
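The property is easy to see in a sketch. What follows is generic hash-based bucketing, not GrowthBook's actual algorithm; the point is the behavior, including what breaks when someone changes the seed mid-run:

```python
import hashlib

def assign_variant(user_id: str, experiment_seed: str, num_variants: int = 2) -> int:
    """Deterministically bucket a user: same inputs, same variant, every time.

    A generic sketch of hash-based assignment, not any platform's
    actual algorithm. The point is the property, not the implementation.
    """
    digest = hashlib.sha256(f"{experiment_seed}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % num_variants

# The same user always lands in the same variant...
assert assign_variant("user-42", "exp-checkout-v1") == assign_variant("user-42", "exp-checkout-v1")

# ...until someone changes the seed mid-run, which silently reassigns
# a large share of users and turns a constant into a variable.
reassigned = sum(
    assign_variant(f"user-{i}", "exp-checkout-v1") != assign_variant(f"user-{i}", "exp-checkout-v2")
    for i in range(10_000)
) / 10_000
print(f"share of users reassigned after a seed change: {reassigned:.0%}")  # roughly 50%
```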
Before a test even begins, running an A/A test is a sound pre-flight check: split traffic between two identical variants and confirm that the platform produces statistically valid results with no spurious differences. This is a direct verification that the constants are correctly configured before any real variation is introduced.
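Here's why an A/A test is informative, as a toy simulation: with two identical variants, a correctly configured analysis should flag a "significant" difference only about 5% of the time at the conventional threshold. This sketch uses a simple two-proportion z-test; a real A/A test runs through your actual assignment and analysis pipeline instead:

```python
import random
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z-statistic for the difference between two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return ((conv_a / n_a) - (conv_b / n_b)) / se

# A/A test: both "variants" are identical (true rate 10% in each), so
# roughly 95% of runs should produce |z| < 1.96. A setup that fails this
# has a broken constant somewhere in assignment or analysis.
random.seed(0)
runs, false_positives = 1000, 0
for _ in range(runs):
    a = sum(random.random() < 0.10 for _ in range(5000))
    b = sum(random.random() < 0.10 for _ in range(5000))
    if abs(two_proportion_z(a, 5000, b, 5000)) > 1.96:
        false_positives += 1
print(f"false positive rate: {false_positives / runs:.1%}")  # expect about 5%
```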
Activation metrics serve a related function — they filter out users who were assigned to a variant but never actually exposed to it, preserving the integrity of the exposure constant when assignment and exposure are unavoidably separated.
The discipline of locking these settings before launch, and leaving them untouched until the experiment concludes, is what separates results you can act on from results that only look convincing.
Your results are only as trustworthy as the conditions you held steady
The through-line of this article is simple: your results are only as trustworthy as the conditions you held steady. Not the analysis, not the statistical method, not the sample size — the conditions.
Every failed experiment that produced a result you couldn't explain or reproduce almost certainly had a constant that drifted without anyone noticing.
The pre-launch work is the same whether you're in a lab or running an A/B test
Before any experiment launches, the work is the same whether you're in a lab or running an A/B test: enumerate every factor that could independently explain your outcome, decide which ones you're holding fixed, write those decisions down, and don't touch them.
The teams that skip the documentation step are the ones who end up debating, mid-run, whether the test duration was always supposed to be two weeks or three.
The terminology shifts across contexts; the underlying logic doesn't
The vocabulary shifts across contexts — control variables in a chemistry lab, analysis settings in a product experiment — but the underlying logic doesn't. Temperature in a reaction and statistical method in an A/B test are the same kind of thing: a condition that, if it changes, makes your results uninterpretable.
The translation from lab to product is direct, and recognizing it means you can apply decades of experimental design thinking to the work you're already doing.
The upfront cost of rigor is small; the downstream cost of skipping it isn't
The honest tension here is that rigor takes time upfront, and most teams feel pressure to move fast. The discipline of locking constants before launch can feel like friction.
But the teams that skip it don't actually move faster — they just discover the cost later, when they're trying to explain a result they can't reproduce or defend a decision based on a test that was quietly broken from the start. The upfront investment is small. The downstream cost of skipping it isn't.
This article was written to make that tradeoff concrete and give you the vocabulary to act on it. If it helps you run one cleaner experiment — one where you can actually trust the result — it's done its job.
What to do next: Pull up the last experiment you ran or the next one you're planning. List every factor that could plausibly affect your primary metric. For each one, ask: is this the independent variable, the dependent variable, or something I need to hold constant?
If you can't answer that question for every factor on the list, you're not ready to launch. That exercise — not the statistics, not the tooling — is where rigorous experimentation actually starts.
If you're running an A/B test, that same question applies to every configuration decision you make before launch: statistical method, test duration, exposure definition, and traffic allocation. Lock them down. Write them down. Don't revisit them until the experiment concludes.