# Building a culture of experimentation

This topic explains how to establish experimentation as an organizational practice, not a one-time activity owned by a single team.

<div class="contrast">

**Without culture-building:** A single team runs a few experiments, but no one else adopts the practice. The pilot stalls when that team shifts priorities, and the organization loses the investment it made in tooling and training.

**With culture-building:** Experimentation becomes a shared capability. Multiple teams propose and run tests independently. Knowledge compounds across the organization, and the practice sustains itself even as teams and priorities change.

</div>

## Identify stakeholders and roles

Every responsibility needs an owner. In smaller organizations, one person may fill multiple roles.

The following table describes common experimentation roles:

| Role | Responsibility |
| :--- | :--- |
| Executive sponsor | Champions experimentation at the leadership level, connects outcomes to business KPIs, and removes organizational blockers. |
| Program owner | Owns the experimentation roadmap, coordinates across teams, and tracks the overall health of the practice. |
| Test designer | Writes test plans, defines hypotheses, and selects metrics for individual experiments. |
| Developer | Implements feature flag variations, instruments metrics, and ensures experiments deploy correctly. |
| Data analyst | Monitors experiment results, validates statistical significance, and provides interpretation for stakeholders. |
| Product manager | Prioritizes experiment ideas against the product backlog and decides how to act on results. |

<div class="exercise">

**Your turn:** Map these roles to people in your organization. If a role has no owner, that is a gap to address before launching your practice.

| Role | Person or team | Notes |
| :--- | :--- | :--- |
| Executive sponsor | | |
| Program owner | | |
| Test designer | | |
| Developer | | |
| Data analyst | | |
| Product manager | | |

</div>

## Build awareness across the organization

Cover the following topics when introducing experimentation to new teams:

- What experimentation is and how it differs from feature flagging
- How LaunchDarkly Experimentation works at a high level
- What kinds of questions experimentation answers well
- How experiment results inform product decisions
- Real examples of experiments that produced meaningful outcomes

Tailor examples to your audience: technical metrics for engineering, conversion rates for product, revenue impact for leadership.

## Gain leadership support

Leadership support determines whether experimentation survives past a pilot phase. When leaders actively sponsor the practice, teams treat it as core work.

To gain and maintain leadership support:

- Connect experiment outcomes to existing business KPIs leadership already tracks.
- Present early wins from one or two high-visibility experiments.
- Quantify the cost of not experimenting by highlighting past releases that underperformed.
- Report quarterly on experiments run, key findings, and business impact.

<div class="exercise">

**Your turn:** Identify the business KPIs your leadership tracks and map each one to an experiment opportunity. This mapping gives you the language to pitch experimentation in terms leadership already cares about.

| Business KPI | Potential experiment opportunity |
| :--- | :--- |
| | |
| | |
| | |

</div>

## Expand beyond the initial team

Start with a motivated team that has the technical foundation to run experiments. Use that team as a proof of concept for the rest of the organization.

### Validate with an A/A test

Before the pilot team runs its first real experiment, run an A/A test. An A/A test serves both groups the same experience and validates your experimentation stack end to end: SDK integration, flag evaluation, metric event delivery, and results analysis.

Run an A/A test when:

- A team runs its first experiment on a new application or service
- You deploy a new SDK or update metric instrumentation
- You onboard a new team to Experimentation
- You migrate to a new context schema or change identity resolution

Always use frequentist statistics for A/A tests. Bayesian priors can nudge results toward a "winning" variation even when both groups receive the same experience. Frequentist analysis tests a clear null hypothesis and reports a p-value you interpret directly.
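The frequentist analysis of an A/A test reduces to a two-proportion z-test against the null hypothesis that both groups convert at the same rate. The sketch below, using made-up conversion counts, shows the calculation for intuition; an experimentation dashboard normally computes this for you.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Test the null hypothesis that both groups convert at the same rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical A/A results: both variations served the same experience,
# so any difference should be attributable to chance
z, p = two_proportion_z_test(conv_a=510, n_a=10_000, conv_b=498, n_b=10_000)
print(f"z = {z:.3f}, p = {p:.3f}")  # a healthy A/A test yields p well above 0.05
```

If `p` falls below your significance threshold (commonly 0.05) in an A/A test, treat it as a signal to debug your instrumentation, not as a real effect.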

A successful A/A test shows no statistically significant difference between the two groups. If you see a significant result, investigate before running real experiments. Common causes include duplicate metric events, inconsistent context keys across SDKs, incorrect flag evaluation logic, and metric events firing before the SDK initializes.
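One of these causes, inconsistent context keys, is easier to see with a sketch of deterministic bucketing. The hashing scheme below is illustrative only, not LaunchDarkly's actual assignment algorithm, but it shows why assignment depends entirely on the key:

```python
import hashlib

def bucket(context_key: str, flag_key: str, variations=("control", "treatment")) -> str:
    # Assignment is a pure function of the flag key and context key,
    # so the same key always lands in the same variation
    digest = hashlib.sha256(f"{flag_key}:{context_key}".encode()).hexdigest()
    return variations[int(digest, 16) % len(variations)]

# Same key, same assignment: stable across sessions, services, and SDKs
assert bucket("user-123", "checkout-redesign") == bucket("user-123", "checkout-redesign")

# A key that differs only in casing (for example, "User-123" sent by another
# SDK) hashes differently and can land in the other variation, skewing results
print(bucket("user-123", "checkout-redesign"), bucket("User-123", "checkout-redesign"))
```

This is why an A/A test across multiple SDKs is a good way to surface key mismatches: the same person bucketed inconsistently shows up as inflated traffic or divergent metrics between identical variations.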

After the pilot team has run a successful A/A test and several real experiments, expand:

1. Document the pilot team's process, including templates for test plans and intake forms.
2. Run cross-team workshops on problem-solution mapping and test planning.
3. Pair experienced experimenters with new teams for their first experiments.
4. Create a shared library of completed experiments.
5. Establish a regular review cadence for presenting results and sharing lessons.

## Common challenges

The following table describes common challenges and strategies for addressing them:

| Challenge | Strategy |
| :--- | :--- |
| Teams do not have time to experiment. | Start small. Run experiments on changes already in the backlog rather than creating net-new work. |
| Leadership does not see the value. | Present results in business terms. Show the revenue or efficiency impact of experiment-informed decisions. |
| Experiments produce inconclusive results. | Review test plans for clear hypotheses and adequate sample sizes before launch. Inconclusive results still provide learning. |
| Only one team runs experiments. | Create visibility by sharing results in company-wide channels. Run workshops to lower the barrier for new teams. |
| Teams skip the planning process. | Make test plan review a required step before development begins. Provide templates to reduce friction. |
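The "adequate sample sizes" check in the table above can be estimated before launch with the standard two-proportion sample-size formula. The sketch below fixes significance at 0.05 (two-sided) and power at 0.80; the baseline rate and minimum detectable effect are made-up inputs, not defaults from any particular tool:

```python
from math import sqrt, ceil

def sample_size_per_variation(p_base, relative_mde):
    """Approximate users needed per variation to detect a relative lift of
    `relative_mde` over baseline conversion rate `p_base`.
    Fixed at alpha = 0.05 (two-sided) and power = 0.80."""
    z_alpha = 1.96  # critical value for alpha = 0.05, two-sided
    z_beta = 0.84   # critical value for power = 0.80
    p_var = p_base * (1 + relative_mde)
    numerator = (z_alpha * sqrt(2 * p_base * (1 - p_base))
                 + z_beta * sqrt(p_base * (1 - p_base) + p_var * (1 - p_var))) ** 2
    return ceil(numerator / (p_var - p_base) ** 2)

# Example: 5% baseline conversion, detecting a 10% relative lift
# requires tens of thousands of users per variation
print(sample_size_per_variation(0.05, 0.10))
```

Running this calculation during test plan review catches underpowered experiments before development begins, which directly reduces the "inconclusive results" challenge above.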

<div class="exercise">

**Your turn:** Review the challenges above and check the ones your organization faces today. For each one you check, write one concrete next step you plan to take.

| Challenge applies to us | Challenge | Our next step |
| :--- | :--- | :--- |
| &square; | Teams do not have time to experiment. | |
| &square; | Leadership does not see the value. | |
| &square; | Experiments produce inconclusive results. | |
| &square; | Only one team runs experiments. | |
| &square; | Teams skip the planning process. | |

</div>

<div class="page-metadata"><table class="page-metadata-table" aria-label="Page metadata"><thead><tr><th scope="col">Last modified</th><th scope="col">Last reviewed</th><th scope="col">Review due</th></tr></thead><tbody><tr><td><time datetime="2026-03-19">2026-03-19</time></td><td></td><td></td></tr><tr><td>Last commit: <a href="https://github.com/launchdarkly-labs/ps-flag-book/commit/d319a7d">d319a7d</a></td><td></td><td></td></tr></tbody></table></div>