Introduction
This comprehensive guide is designed to help your organization successfully implement and adopt LaunchDarkly feature flags as part of your Center of Excellence initiative. Whether you’re a program manager setting objectives or a developer implementing SDKs, this guide provides structured, step-by-step guidance to ensure successful LaunchDarkly adoption across your development teams.
What You’ll Find Here
Program Management
Establish clear objectives, key results, and measurable success criteria for your LaunchDarkly implementation. This section helps program managers and leadership:
- Define OKRs aligned with business objectives
- Set up governance and best practices
- Track adoption metrics and success indicators
SDK Implementation
Technical guidance for developers implementing LaunchDarkly SDKs in their applications. This section includes:
- Preflight Checklists - Step-by-step implementation guides
- Configuration best practices
- Security and compliance considerations
- Testing and validation procedures
Getting Started
New to LaunchDarkly? Start with the Program Management section to understand the strategic approach and objectives.
Ready to implement? Jump to the SDK Preflight Checklist for hands-on technical guidance.
Looking for specific guidance? Use the search functionality to quickly find relevant information.
How to Use This Guide
This guide is structured as a progressive journey:
- Plan - Establish program objectives and success criteria
- Prepare - Complete preflight checklists for your technology stack
- Implement - Follow step-by-step SDK integration guides
- Validate - Test and verify your implementation
- Scale - Apply best practices across your organization
Each section builds upon the previous one, but you can also jump to specific topics based on your immediate needs.
Need Help?
- LaunchDarkly Documentation: launchdarkly.com/docs
- LaunchDarkly Developer Hub: developers.launchdarkly.com
- LaunchDarkly Help Center: support.launchdarkly.com
- LaunchDarkly Academy: launchdarkly.com/academy
Overview
This topic explains the objectives and key results for the LaunchDarkly Program. We recommend using the following guidelines when creating objectives and key results:
Objectives
Write objectives as clear, inspirational statements that describe what you want to achieve. Each objective should be:
- Measurable through specific key results
- Aligned with broader program goals
- Reviewed and updated regularly to reflect changing priorities
Limit each objective to three to five key results.
Key results
Key results measure progress toward your objective. They must be specific, measurable, and verifiable.
Use quantitative metrics:
- State exact numbers, percentages, or timeframes
- Define what you measure and how you measure it
- Include baseline values when comparing improvements
- Specify the time period for measurement
Focus on outcomes:
- Measure results, not activities
- Track what changes, not what you do
- Use leading indicators that predict success
- Include lagging indicators that confirm achievement
Ensure clarity:
- Use clear language that anyone can understand
- Avoid vague terms like “improve” or “better”
- Include specific targets or thresholds
- Define success criteria explicitly
Set realistic targets:
- Aim for 70% to 80% achievement probability
- Set stretch goals that require effort to reach
- Review historical performance to inform targets
Include time-bound elements:
- Set clear deadlines or timeframes
- Define measurement periods
- Align timing with objective review cycles
Active OKRs
This section contains the current OKRs for the LaunchDarkly Program.
Application teams onboard quickly and correctly with minimal support
Key Results
- Time to provision and access LaunchDarkly with correct permissions is less than one day per team
- Less than three hours of ad hoc support requested per team during onboarding
- Less than five support tickets created per team during onboarding
- All critical onboarding tasks complete and first application live in a customer-facing environment within two sprints
Tasks
To achieve this objective, complete the following tasks:
- Create self-service documentation with step-by-step guides and video tutorials when necessary
- Configure single sign-on (SSO) and identity provider (IdP) integration
- Define teams and member mapping
- Assign all flag lifecycle related actions to at least one member in each project
- Define how SDK credentials are stored and made available to the application
- Create application onboarding checklist and a method to track completion across teams
Team members find flags related to their current task unambiguously
Key Results
- More than 95% of new flags created per quarter comply with naming convention
- More than 95% of active users access dashboard filters and shortcuts at least once per month
- More than 95% of new flags created per quarter include descriptions with at least 20 characters and at least one tag
- More than 95% of release flags created per quarter link to a Jira ticket
- Zero incidents of incorrect flag changes due to ambiguity per quarter
Tasks
To achieve this objective, complete the following tasks:
- Create naming convention document
- Document flag use cases and when and when not to use flags
- Create a method to track compliance with naming convention
- Enforce approvals in critical environments
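As a sketch of the compliance-tracking task, a validator for a hypothetical naming convention (lowercase, dot-separated `area.purpose.flag-name` segments — the exact pattern is an assumption; align it with your own naming convention document):

```javascript
// Validate flag keys against a hypothetical three-segment convention,
// e.g. "checkout.release.new-payment-flow". The regex is an assumption;
// adapt it to your naming convention document.
function isCompliantFlagKey(key) {
  return /^[a-z0-9-]+(\.[a-z0-9-]+){2}$/.test(key);
}
```

A check like this can run in CI or against a flag export to produce the compliance percentage used in the key results above.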
Team members de-risk releases consistently
Key Results
- Starting Q1 2026, 90% of features requiring subject matter expert (SME) testing released per quarter are behind feature flags
- More than 75% of P1/P2 incidents related to new features per quarter are remediated without new deploys
- Mean time to repair (MTTR) reduced by 50% compared to baseline for issues related to new features by end of Q2 2026
Tasks
To achieve this objective, complete the following tasks:
- Define and document release strategies:
- Who does what when
- How to implement in the platform using targeting rules and release pipelines
- How to implement in code
- Define and document incident response strategies
- Integrate with software development lifecycle (SDLC) tooling
- Enable project management
- Enable communication
- Enable observability and application performance monitoring (APM)
- Enable governance and change control
LaunchDarkly usage is sustainable with minimal flag-related technical debt
Key Results
- More than 95% of active flags have a documented owner at any point in time
- More than 95% of active flags have an up-to-date description and tags and comply with naming conventions at any point in time
- Median time to archive feature flags after release is less than 12 weeks
- 100% of flags older than six months are reviewed quarterly
- Flag cleanup service level agreements (SLAs) are established and followed for 100% of projects
Tasks
To achieve this objective, complete the following tasks:
- Implement actionable dashboards to visualize flag status including new, active, stale, and launched
- Define flag archival and cleanup policies
- Implement code references integration
Archived OKRs
This section contains previous OKRs and their outcomes.
Projects and environments
Map your applications and deployment environments to LaunchDarkly projects and environments. The key principle: fewer is better.
Quick answers
How many projects do I need? Start with one project per product or tightly coupled set of services. Only create separate projects when teams need complete independence.
How many environments do I need? Most organizations need 2-3 environments: Development, Staging, and Production. Use targeting rules to handle multiple deployment environments within these.
Data model
LaunchDarkly organizes feature flags using a hierarchical structure:
- Projects contain feature flags and environments
- Feature flags exist at the project level in all environments of that project
- Environments contain the targeting rules for each flag
- SDKs connect to a single environment and receive the state of every flag for that environment
Applications exist as a top-level resource outside of projects.
Key concepts
Projects group related applications that need to coordinate releases:
- Share feature flags across all applications in the project
- Enable prerequisites and approvals for coordinated launches
- Define who can access and modify flags
Environments represent stages in your development lifecycle:
- Each has its own targeting rules and approval workflows
- SDKs connect to one environment at a time
- Use targeting rules to consolidate multiple deployment environments
Next steps
- Learn how to map applications to projects
- Learn how to map deployment environments to LaunchDarkly environments
Projects
Default recommendation: Start with one project per product. Create separate projects only when teams have no release dependencies.
Decision framework
Use this framework to determine whether to create a separate project:
Use an existing project when
Code executes in the same process, application, or web page:
- Frontend and backend for a single-page application
- Multiple services in a monolith
- Components within the same web application
Applications are tightly coupled:
- Frontend depends on specific backend API versions
- Services communicate via internal APIs with hard dependencies
- Components must be released in lockstep
Create a separate project when
Applications are loosely coupled:
- Services communicate via public APIs with version management
- Services can be released independently
- Applications serve different products or business units
No coordination is needed:
- Applications have no release dependencies
- Teams work independently with no shared releases
- Applications serve different customer segments
Key benefits of shared projects
Sharing projects for tightly coupled applications provides:
- Prerequisites for coordinated releases across frontend and backend
- Approvals for collaborative change management
- Single pane of glass for release visibility
Important: Each flag needs a clear owner, even in shared projects. Avoid shared responsibility for individual flags.
What projects contain
Projects scope these resources:
- Feature flags: Shared across all environments in the project
- Flag templates: Standardized flag configurations
- Release pipelines: Automated release workflows
- Context kinds: Custom context type definitions
- Permissions: Project-scoped roles control access
Coordinating releases across projects
When applications in separate projects need to coordinate releases, use these strategies:
- Request metadata: Pass API version or client metadata in requests for server-side coordination
- Delegated authority: Grant teams permission to manage flags in other projects
To learn more about coordination strategies, read Coordinating releases.
Corner cases
Global flags
Scenario: Many loosely coupled microservices in separate projects need to change behavior based on shared state.
Solution: Pass shared state as custom context attributes. Identify a single source of truth and propagate evaluation results from the owner of the state, rather than duplicating the state across projects.
Onboarding new teams
Scenario: A new team joins and needs to integrate with existing applications.
Process:
- Determine if the new team has dependencies on flags in existing projects
- If dependencies exist, include the team in the shared project
- If no dependencies exist, create a separate project
- Document the coordination strategy if projects are separate
Organizing flags within projects
Use views to organize flags by tags and metadata within projects. This provides flexible grouping without creating additional projects.
To learn more, read Views.
Environments
Key principle: Less is more. Most organizations need only 2-3 LaunchDarkly environments regardless of how many deployment environments they have.
Default recommendation
Start with these environments:
- Development: No approvals, rapid iteration
- Staging: Optional approvals, testing before production
- Production: Required approvals, critical environment
Use targeting rules with custom attributes to handle multiple deployment environments within each LaunchDarkly environment.
Why consolidate environments?
Each LaunchDarkly environment adds operational overhead:
- Maintain targeting rules across multiple environments
- Synchronize flag states manually
- Manage separate SDK credentials
- Review and approve changes in each environment
Consolidate deployment environments using targeting rules instead.
When to create separate environments
| Scenario | Create separate environment? | Solution |
|---|---|---|
| Per developer | No | Use individual user targeting or custom attributes |
| Per PR or branch | No | Pass PR number or branch name as custom attributes |
| Per tenant or customer | No | Use targeting rules with tenant context attributes |
| Per geographic region | No | Pass region as a custom attribute |
| Federal vs public cloud with compliance requirements | Yes | Compliance mandates complete isolation |
| Production vs development with different approval workflows | Yes | Different stakeholders and approval requirements |
| Different data residency requirements | Yes | Legal or regulatory requirements mandate separation |
Solutions for common scenarios
Per-developer testing
Instead of creating developer environments, use:
Individual user targeting: Create rules like If user equals "alice@example.com" then serve Available
Custom attributes: Pass developer, workstation, or branch_name as context attributes
Local overrides: Use SDK wrappers or test frameworks to mock flag values
Per-PR or ephemeral environments
Pass deployment metadata as custom attributes:
Context:
- pr_number: "1234"
- hostname: "pr-1234.staging.example.com"
- git_sha: "abc123"
Rule: If pr_number equals "1234" then serve Available
Multi-tenant deployments
Pass tenant information as context attributes:
Context:
- tenant_id: "acme"
- subscription_tier: "enterprise"
Rule: If tenant_id is one of ["acme", "initech"] then serve Available
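In application code, the contexts above might be constructed like this (the `service` context kind and attribute names mirror the examples and are illustrative):

```javascript
// Build an evaluation context carrying tenant and deployment metadata,
// so one LaunchDarkly environment can serve per-tenant and per-PR rules.
function buildServiceContext({ tenantId, prNumber, hostname }) {
  return {
    kind: 'service',               // custom context kind (assumption)
    key: `${tenantId}:${hostname}`,
    tenant_id: tenantId,
    pr_number: prNumber,
    hostname: hostname,
  };
}
```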
Mapping process
Follow these steps:
- List deployment environments: Document where your application runs
- Group by compliance and approval requirements: Separate only when compliance mandates it or approval workflows differ significantly
- Define custom attributes: Document attributes needed to distinguish consolidated environments
- Mark production as critical: Enable required approvals and UI warnings
Mark production environments as critical to require approvals and prevent accidental changes. To learn more, read Critical environments.
Real-world examples
Multiple regions → Single production environment
You have: US-East, US-West, EU, APAC production deployments
Create: 1 Production environment
Pass context attribute: region: "eu"
Create rules when needed: If region equals "eu" then serve Available
Federal compliance → Separate environments
You have: Federal and Public clouds with different compliance requirements
Create: 2 environments (Federal, Public)
Why separate: Compliance mandates complete isolation
Within each: Use environment: "dev" attribute to distinguish dev/staging/prod deployments
Team-specific staging → Single staging environment
You have: Staging-TeamA, Staging-TeamB, Staging-TeamC
Create: 1 Staging environment
Pass context attribute: team: "team-a"
Create rules when needed: If team equals "team-a" then serve Available
SDK Preflight Checklist
Audience: Developers who are implementing the LaunchDarkly SDKs
Init and Config
- SDK is initialized once as a singleton early in the application’s lifecycle
- Application does not block indefinitely for initialization
- SDK configuration integrated with existing configuration/secrets management
- Bootstrapping strategy defined and implemented
Client-side SDKs
Browser SDKs
Mobile SDKs
Serverless functions
Using Flags
- Define context kinds and attributes
- Define and document fallback strategy
- Use variation/variationDetail, not allFlags/allFlagsState for evaluation
- Flags are evaluated only where a change of behavior is exposed
- The behavior changes are encapsulated and well-scoped
- Subscribe to flag changes
Init and Config
Baseline Recommendations
This table shows baseline recommendations for SDK initialization and configuration:
| Area | Recommendation | Notes |
|---|---|---|
| Client-side init timeout | 100–500 ms | Don’t block UI. Render with fallbacks or bootstrap. |
| Server-side init timeout | 1–5 s | Keep short for startup. Continue with fallback values after timeout. |
| Private attributes | Configured | Redact PII. Consider allAttributesPrivate where appropriate. |
| JavaScript SDK bootstrapping | localStorage | Reuse cached values between sessions. |
SDK is initialized once as a singleton early in the application’s lifecycle
Applies to: All SDKs
Prevent duplicate connections, conserve resources, and ensure consistent caching/telemetry.
Implementation
- MUST Expose exactly one ldClient per process/tab via a shared module/DI container (root provider in React).
- SHOULD Make init idempotent: reuse the existing client if already created.
- SHOULD Close the client cleanly on shutdown. In serverless, create the client outside the handler for container reuse.
- NICE-TO-HAVE Emit a single startup log summarizing effective LD config (redacted).
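The first two items can be sketched as a lazily initialized module-level client; `createClient` stands in for the SDK's init call and is injected here to keep the sketch self-contained:

```javascript
// Module-level singleton: repeated imports or calls reuse one client.
let ldClient = null;

function getLDClient(createClient) {
  if (ldClient === null) {
    ldClient = createClient(); // first call initializes the SDK
  }
  return ldClient;             // subsequent calls reuse the same instance
}
```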
Validation
- Pass if metrics/inspector show one stream connection per process/tab.
- Pass if event volume and resource usage do not scale with repeated imports/renders.
Application does not block indefinitely for initialization
Applies to: All SDKs
A LaunchDarkly SDK is initialized when it connects to the service and is ready to evaluate flags. If variation is called before initialization, the SDK returns the fallback value you provide. Do not block your app indefinitely while waiting for initialization. The SDK continues connecting in the background. Calls to variation always return the most recent flag value.
Implementation
- MUST Set an initialization timeout
- Client-side: 100–500 ms.
- Server-side: 1–5 s.
- SHOULD Race initialization against a timer if using an SDK that lacks a native timeout parameter.
- MAY Render/serve using bootstrapped or fallback values, then update when flags are ready.
- MAY Subscribe to change events to proactively respond to flag updates.
- MAY Configure a persistent data store to avoid fallback values in the event that the SDK is unable to connect to LaunchDarkly services.
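For SDKs without a native timeout parameter, the race can be sketched as follows; `initPromise` stands in for something like the server SDK's `waitForInitialization()`:

```javascript
// Race initialization against a timer so the app never blocks indefinitely.
// If the timer wins, continue with the fallback; the SDK keeps connecting
// in the background and later variation calls pick up live values.
function raceInit(initPromise, timeoutMs, fallbackValue) {
  const timer = new Promise((resolve) => {
    setTimeout(() => resolve(fallbackValue), timeoutMs);
  });
  return Promise.race([initPromise, timer]);
}
```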
Validation
- Pass if with endpoints blocked the app renders using fallbacks or bootstrapped values within the configured timeout.
- Pass if restoring connectivity updates values without a restart.
How to emulate:
- Point streaming/base/polling URIs to an invalid host
- Block the SDK domains in the container or host running the tests: stream.launchdarkly.com, sdk.launchdarkly.com, clientsdk.launchdarkly.com, app.launchdarkly.com
- In browsers, block clientstream.launchdarkly.com, clientsdk.launchdarkly.com, and/or app.launchdarkly.com in DevTools.
For implementation strategies, read the Emulating LaunchDarkly Downtime section in the cookbook.
SDK configuration integrated with existing configuration/secrets management
Applies to: All SDKs
Use your existing configuration pipeline so LD settings are centrally managed and consistent across environments. Avoid requiring code changes to set common SDK options.
Implementation
- MUST Load SDK credentials from existing configuration/secrets management system.
- MUST NOT Expose the server-side SDK Key to client applications
- SHOULD Use configuration management system to set common SDK configuration options such as:
- HTTP Proxy settings
- Log verbosity
- Enabling/disabling events in integration testing or load testing environments
- Private attributes
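A sketch of wiring common options through your configuration system; the environment variable names (`LD_SDK_KEY`, `LD_SEND_EVENTS`, `LD_LOG_LEVEL`) are illustrative, not official:

```javascript
// Build SDK options from an injected environment map so settings are
// centrally managed and no code change is needed per environment.
function buildLDOptions(env) {
  if (!env.LD_SDK_KEY) {
    throw new Error('LD_SDK_KEY must be provided by the secrets manager');
  }
  return {
    sdkKey: env.LD_SDK_KEY,
    sendEvents: env.LD_SEND_EVENTS !== 'false', // disable in load tests
    logLevel: env.LD_LOG_LEVEL || 'info',
  };
}
```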
Validation
- Pass if rotating the SDK key in the vault results in successful rollout and the old key is revoked.
- Pass if a repository scan finds no SDK keys or environment IDs committed.
- Pass if startup logs (redacted) show expected config per environment and egress connectivity succeeds with 200/OK or open stream.
Bootstrapping strategy defined and implemented
Applies to: JS Client-Side SDK in browsers, React SDK, Vue SDK
Prevent UI flicker by rendering with known values before the SDK connects and retrieves flags.
Implementation
- SHOULD Enable bootstrap: 'localStorage' for SPAs/PWAs to reuse cached values between sessions.
- SHOULD For SSR or static HTML, embed a server-generated flags JSON and pass it to the client SDK at init.
- MUST Document which strategy each app uses and when caches expire.
- SHOULD Reconcile bootstrapped values with live updates and re-render when differences appear.
Validation
- Pass if under offline/slow network the first paint uses bootstrapped values with no visible flash of wrong content.
- Pass if clearing storage falls back to safe defaults and live updates correct the UI on reconnect.
- Pass if evaluations are recorded after successful initialization.
Client-side SDKs
The following items apply to all client-side and mobile SDKs.
Application does not block on identify
Calls to identify return a promise that resolves when flags for the new context have been retrieved. In many applications, using the existing flags is acceptable and preferable to blocking in a situation where flags cannot be retrieved.
Implementation
- MAY Continue without waiting for the promise to resolve
- SHOULD Implement a timeout when identify is called
Validation
- Pass if the application is able to function after calling identify while the SDK domains are blocked: clientsdk.launchdarkly.com or app.launchdarkly.com
Application does not rapidly call identify
In mobile and client-side SDKs, identify results in a network call to the evaluation endpoint. Make calls to identify sparingly. For example:
Good times to call identify:
- During a state transition from unauthenticated to authenticated
- When an attribute of a context changes
- When switching users
Bad times to call identify:
- To implement a currentTime attribute in your context that updates every second
- To implement contexts that appear multiple times on a page, such as per-product contexts
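One way to guard against accidental rapid calls is to diff the context before calling identify. A minimal sketch (the JSON comparison is a simplification, and the client is injected so the example stays self-contained):

```javascript
// Call identify only when the context actually changed, avoiding
// redundant network calls to the evaluation endpoint.
let lastContextJson = null;

function identifyIfChanged(client, context) {
  const json = JSON.stringify(context);
  if (json === lastContextJson) {
    return false;              // unchanged: skip the network call
  }
  lastContextJson = json;
  client.identify(context);
  return true;
}
```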
Browser SDKs
The following items apply only to the following SDKs:
- JavaScript Client SDK
- React SDK
- Vue SDK
Send events only for variation
Avoid sending spurious events when allFlags is called. Sending evaluation events for allFlags will cause flags to never report as stale and may cause inaccuracies in guarded rollouts and experiments with false impressions.
Implementation
- MUST Set sendEventsOnlyForVariation: true in the SDK options
Validation
- Pass if calls to allFlags do not generate evaluation/summary events
Bootstrapping strategy defined and implemented
Prevent UI flicker by rendering with known values before the SDK connects and retrieves flags. To learn more about bootstrapping, read Bootstrapping.
Implementation
- SHOULD Enable bootstrap: 'localStorage' or bootstrap from a server-side SDK
Validation
- Pass if under offline/slow network the first paint uses bootstrapped values with no visible flash of wrong content.
Mobile SDKs
The following items apply to mobile SDKs.
Configure application identifier
The Mobile SDKs automatically capture device and application metadata. To learn more about automatic environment attributes, read Automatic Environment Attributes.
We recommend that you set the application identifier to a different value for each separately distributed software binary.
For example, suppose you have two mobile apps, one for iOS and one for Android. If you set the application identifier to “example-app” and the version to “1.0” in both SDKs, then when you create a flag targeting rule based only on application information, the flag will target both the iOS and Android application. This may not be what you intend.
We recommend using different application identifiers in this situation, for instance, by setting “example-app-ios” and “example-app-android” in your application metadata configuration.
Implementation
- MUST Configure the application identifier in the SDK configuration to a unique value for each platform
You can override the application identifier using the application metadata options when configuring your SDK. To learn how to set a custom application identifier, read Application Metadata.
Validation
- Pass if separate applications appear in the LaunchDarkly dashboard for each platform, for example android and ios
Examples
Apple iOS/iPadOS/WatchOS
// Fetch the current CFBundleIdentifier
let defaultIdentifier = Bundle.main.object(forInfoDictionaryKey: "CFBundleIdentifier") as? String ?? "UnknownIdentifier"
// Create the ApplicationInfo object
var appInfo = ApplicationInfo()
// Override applicationIdentifier to include the -apple suffix
appInfo.applicationIdentifier("\(defaultIdentifier)-apple")
// Create an LDConfig object and set the applicationInfo property
var config = LDConfig(mobileKey: "your-mobile-key", autoEnvAttributes: .enabled)
config.applicationInfo = appInfo
Android
import com.launchdarkly.sdk.android.Components;
import com.launchdarkly.sdk.android.integrations.ApplicationInfoBuilder;
import com.launchdarkly.sdk.android.LDConfig;
// Fetch the current package name (application identifier)
String defaultPackageName = context.getPackageName(); // replace 'context' with your Context object
// Create the ApplicationInfoBuilder object
ApplicationInfoBuilder appInfoBuilder = Components.applicationInfo();
// Override applicationIdentifier to include the "-android" suffix
appInfoBuilder.applicationId(defaultPackageName + "-android");
// Create an LDConfig object and pass the ApplicationInfoBuilder to applicationInfo
LDConfig ldConfig = new LDConfig.Builder()
    .mobileKey("your-mobile-key")
    .applicationInfo(appInfoBuilder)
    .build();
Serverless functions
The following applies to SDKs running in serverless environments such as AWS Lambda, Azure Functions, and Google Cloud Functions.
Initialize the SDK outside of the handler
Many serverless environments reuse execution environments across invocations of the same function. Initialize the SDK outside of the handler so the client is reused, avoiding duplicate connections and wasted resources.
Implementation
- MUST Initialize the SDK outside of the function handler
- MUST NOT Close the SDK in the handler
Leverage LD Relay to reduce initialization latency
Serverless functions spawn many instances to handle concurrent requests. Deploy LD Relay to reduce outgoing network connections, outbound traffic, and initialization latency.
Implementation
- SHOULD Deploy LD Relay in the same region as the serverless function
- SHOULD Configure LD Relay as an event forwarder and configure the SDK’s event URI to point to LD Relay
- SHOULD Configure the SDK in proxy mode or daemon mode instead of connecting directly to LaunchDarkly
- MAY Call flush at the end of invocation to ensure all events are sent
- MAY Call flush/close when the runtime is being permanently terminated in environments that support this signal. Lambda does not provide this signal to functions themselves, only to extensions.
Consider daemon mode if you have a particularly large initialization payload and only need a couple of flags for the function.
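A sketch of the handler pattern above; `initClient` stands in for the SDK's init call and is injected to keep the example self-contained:

```javascript
// Create the client once per execution environment (cold start) and
// reuse it across invocations; flush events before the runtime freezes.
let client = null;

function getClient(initClient) {
  if (client === null) {
    client = initClient(); // runs on cold start only
  }
  return client;
}

async function handler(event, initClient) {
  const ld = getClient(initClient);
  const enabled = ld.variation('my-flag', { kind: 'user', key: event.userKey }, false);
  await ld.flush(); // MAY: ensure events are sent before returning
  return { enabled };
}
```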
Using Flags
Define context kinds and attributes
Choose context kinds/attributes that enable safe targeting, deterministic rollouts, and cross-service alignment.
Implementation
- MUST Define context kinds, for example user, organization, or device. Use multi-contexts when both person and account matter.
- MUST NOT Derive context keys from PII, secrets, or other sensitive data.
- MUST Mark sensitive attributes as private. Context keys cannot be private.
- SHOULD Use keys that are unique, opaque, and high-entropy
- SHOULD Document the source/type for all attributes. Normalize formats, for example ISO country codes.
- SHOULD Provide shared mapping utilities to transform domain objects → LaunchDarkly contexts consistently across services.
- SHOULD Avoid targeting on sensitive information or secrets.
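The shared mapping utility might look like this sketch (domain field names such as `plan` are illustrative; `_meta.privateAttributes` follows the SDK context schema):

```javascript
// One shared mapper so every service builds contexts identically.
// Keys are opaque IDs, never PII; the organization name is marked private.
function toLDContext(user, org) {
  return {
    kind: 'multi',
    user: {
      key: user.id,
      plan: user.plan,
    },
    organization: {
      key: org.id,
      name: org.name,
      _meta: { privateAttributes: ['name'] },
    },
  };
}
```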
Validation
- Pass if a 50/50 rollout yields consistent allocations across services for the same context.
- Pass if sample contexts evaluated in a harness match expected targets/segments.
- Pass if a PII audit finds no PII in keys and private attributes are redacted in events.
- Pass if applications create/define contexts consistently across services
Define and document fallback strategy
Every flag must specify a safe fallback value that is used when the flag is unavailable. For more information on fallback values, read Maintaining fallback values.
Implementation
- MUST Pass the fallback value as the last argument to variation()/variationDetail() with correct types.
- MUST Define a strategy for determining when to audit and update fallback values.
- MUST Implement automated tests to validate the application is able to function in an at most degraded state when flags are unavailable.
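A sketch of the first point; the flag key and wrapper function are illustrative, and `client.variation` mirrors the SDK's (key, context, fallback) signature:

```javascript
// Always pass a safe, correctly typed fallback as the last argument.
// The SDK returns this value whenever the flag cannot be evaluated.
function isNewSearchEnabled(client, userKey) {
  return client.variation(
    'enable-new-search',            // flag key (illustrative)
    { kind: 'user', key: userKey }, // evaluation context
    false                           // safe boolean fallback
  );
}
```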
Validation
- Pass if blocking SDK network causes the application to use the fallback path safely with no critical errors.
Use variation/variationDetail, not allFlags/allFlagsState for evaluation
Direct evaluation emits accurate usage events required for flag statuses, experiments, and rollout tracking.
Implementation
- MUST Call variation()/variationDetail() at the decision point
- MUST NOT Implement an additional layer of caching for calls to variation that would prevent accurate flag evaluation telemetry from being generated
Validation
- Pass if accurate flag evaluation data is shown in the Flag Monitoring dashboard
Flags are evaluated only where a change of behavior is exposed
Generate evaluation events only when a change in behavior is exposed to the end user. This ensures that features such as experimentation and guarded rollouts function correctly.
Implementation
- MUST Evaluate flags only when the value is used
- SHOULD Evaluate flags as close to the decision point as possible
The behavior changes are encapsulated and well-scoped
Isolate new vs. old logic to ease future cleanup. A rule of thumb: never store the result of a boolean flag in a variable. This ensures that the behavior impacted by the flag is fully contained within the branches of the if statement.
Implementation
- SHOULD Place new/old logic in separate functions/components; avoid mixed branches.
- SHOULD Evaluate the flag inside the decision point (if) to simplify later removal.
// Example: evaluation scoped to the component
export function CheckoutPage() {
if (ldClient.variation('enable-new-checkout', false)) {
return <NewCheckoutComponent />;
}
return <LegacyCheckoutComponent />;
}
Subscribe to flag changes
In UI applications, or in server-side use cases where you need to respond to a flag change, use the SDK's update/change events to update the state of the application.
Implementation
- SHOULD Use the subscription mechanism provided by the SDK to respond to updates. To learn more about subscribing to flag changes, read Subscribing to flag changes.
- SHOULD Unregister temporary handlers to avoid memory leaks.
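A sketch of a subscription helper that also returns an unsubscribe function; the `on`/`off` event names mimic the JS SDK's `change:<flag-key>` events:

```javascript
// Subscribe to updates for one flag and return a cleanup function so
// temporary handlers can be unregistered, avoiding memory leaks.
function watchFlag(client, flagKey, onUpdate) {
  const handler = (current) => onUpdate(current);
  client.on(`change:${flagKey}`, handler);
  return () => client.off(`change:${flagKey}`, handler);
}
```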
Validation
- Pass if the application responds to flag changes
Contexts
Contexts are the foundation of feature flag targeting in LaunchDarkly. Understanding how to define and use contexts effectively is critical for successful feature flag implementation.
What are contexts
Contexts represent the entities you want to target with feature flags. A context can be a user, session, device, application, request, or any other entity relevant to your use case.
Each context consists of:
- Key: A unique identifier for the context
- Attributes: Additional data used for targeting and rollouts
Targeting rules evaluate against one or more contexts to determine which variation of a feature flag to serve.
Outcomes
By understanding contexts, you will:
- Know more about contexts, what information to pass, and how to organize it
- Understand the best practices for passing data to the LaunchDarkly SDK
- Be able to successfully pass information to the SDK to be leveraged for feature flags
Topics
This section covers:
- Context fundamentals: Learn about keys, attributes, and meta attributes
- Client vs server SDKs: Understand how context handling differs between SDK types
- Choosing keys and attributes: Best practices for selecting identifiers and attributes
- Automatic attributes: Platform-provided context data
- Best practices: Do’s and don’ts for context implementation
- Context types: Detailed documentation of each context type used in your organization
Context fundamentals
Contexts are composed of three core elements: a unique key, custom attributes, and optional meta attributes.
Key
A string that uniquely identifies a context. The key:
- May represent an individual user, session, device, or any other entity you wish to target
- Must be unique for each context instance
- Cannot be marked as private
- Is used for individual targeting, experimentation, and as the default value for rollouts
Attributes
Attributes carry additional data used for targeting. You can define custom attributes with any information you wish to target on; each attribute can have one or more values.
Supported attribute types
Attributes can contain one or more values of any of these supported types:
- String
- Boolean
- Number
- Semantic Version (string format)
- Date (RFC3339 or Unix Timestamp in milliseconds)
Some operations within rule clauses such as “less than” and “greater than” only support specific types.
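As an illustration, a single context can mix all of the supported types. The attribute names below are hypothetical:

```javascript
// Illustrative context exercising each supported attribute type.
const typedContext = {
  kind: "user",
  key: "user-123",
  plan: "enterprise",        // String
  betaTester: true,          // Boolean
  loginCount: 42,            // Number
  appVersion: "2.5.0",       // Semantic version (string format)
  createdAt: 1640995200000   // Date (Unix timestamp in milliseconds)
};
```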
Nested attributes
You can target on nested objects in contexts using a JSON path notation. For example, to target on an iOS version within a nested device object, use /os/ios/version as the attribute path.
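For example, a context shaped like the following (a hypothetical device context) could be targeted with the attribute path /os/ios/version:

```javascript
// Hypothetical device context with a nested os object; rule clauses can
// reference the nested value via the JSON path /os/ios/version.
const deviceContext = {
  kind: "device",
  key: "device-789",
  os: {
    ios: {
      version: "17.2"
    }
  }
};
```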
Meta attributes
Meta attributes are reserved by LaunchDarkly and may have special meaning or usage in the platform.
Key meta attributes
Key meta attributes include:
| Name | Description |
|---|---|
| key | Required. Cannot be private. Used for individual targeting, experimentation and the default value for rollouts |
| anonymous | When true, the context will be hidden from the dashboard and will not appear in autocomplete |
| name | Used in the contexts dashboard and autocomplete search |
| _meta/privateAttributes | List of attribute names whose values will be redacted from events sent to LaunchDarkly |
For a complete list, see Built-in and custom attributes in the LaunchDarkly documentation.
Code examples
Server SDK
In server-side SDKs, pass the context on every variation call:
ldclient.variation("release-widget", context, fallback);
Client SDK
In client-side SDKs, provide context at initialization and update via identify():
const ldclient = LaunchDarkly.initialize(clientId, context);
// Later, update the context
ldclient.identify(newContext);
Usage and billing
Usage is based on the number of unique context keys seen for your primary (most used) context, deduplicated across your entire account.
Limits are not hard caps. Overages do not impact the evaluation of feature flags in your application.
To learn more, read Usage metrics.
Client SDK vs server SDK
Context handling differs significantly between client-side and server-side SDKs due to their different operational models.
Context handling comparison
Client-side SDKs
Client-side SDKs handle one active user or session:
- Provide context at initialization and update via identify()
- SDK fetches variations for that specific context and caches them
- When calling variation(), there is no need to provide the context again
- Evaluations happen remotely via an evaluation endpoint
Server-side SDKs
Server-side SDKs evaluate for many users:
- No context required at initialization
- SDK downloads all flag rules at startup
- Pass context on every variation() call: variation(flag, context, fallback)
- SDK calculates variations against locally cached rules
Evaluation flow
Server-side evaluation
Application Request → Create Context → Call variation(flag, context, fallback)
↓
SDK evaluates locally using cached rules
↓
Return variation
Client-side evaluation
Page Load → Initialize SDK with context → SDK fetches user's variations
↓
Cache variations locally
↓
Call variation(flag) // No context needed
When to use each
Use server-side SDKs when
- Evaluating flags for multiple different users or entities
- Running backend services or APIs
- Need to keep flag rules private
- Evaluating flags in high-security contexts
Use client-side SDKs when
- Evaluating flags for a single active session
- Running in browsers or mobile apps
- Need real-time flag updates for the current user
- Implementing user-specific feature rollouts
Updating contexts
Client-side context updates
Use identify() to update the context:
const ldclient = LaunchDarkly.initialize(clientId, context);
// User logs in
ldclient.identify({
kind: "user",
key: "user-123",
name: "Jane Doe",
email: "jane@example.com"
});
The SDK will fetch new variations for the updated context.
Server-side context changes
Simply pass a different context to variation():
// Evaluate for user 1
const variation1 = ldclient.variation("flag-key", userContext1, false);
// Evaluate for user 2
const variation2 = ldclient.variation("flag-key", userContext2, false);
No SDK reconfiguration needed.
Performance considerations
Client-side SDKs
- Initial page load includes SDK initialization time
- Subsequent evaluations are instant (served from cache)
- identify() calls require a network round-trip
- Consider bootstrapping to eliminate initialization delay
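The bootstrapping suggestion can be sketched as follows. This assumes the JavaScript client-side SDK's bootstrap initialization option and a hypothetical window.__LD_FLAGS__ global injected by the server with pre-rendered flag values:

```javascript
// Hypothetical helper: build client-side initialization options that
// bootstrap the SDK from server-rendered flag values, so the first
// variation() calls do not wait on a network round-trip.
function buildInitOptions(serverRenderedFlags) {
  return {
    bootstrap: serverRenderedFlags // object mapping flag keys to values
  };
}

// Usage sketch (assumes the LaunchDarkly JS SDK is loaded):
// const client = LaunchDarkly.initialize(clientId, context,
//   buildInitOptions(window.__LD_FLAGS__));
```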
Server-side SDKs
- Initialization happens once at application startup
- All evaluations are local and extremely fast
- No per-request network overhead
- Consider persistent stores for daemon mode deployments
Choosing keys and attributes
Selecting appropriate keys and attributes is critical for effective feature flag targeting and progressive rollouts.
Choosing a key
A well-chosen key balances consistency and risk distribution during progressive rollouts.
Key characteristics
Unique: Static 1:1 mapping to a context
Each context instance must have a unique key that always identifies the same entity.
Opaque: Non-sequential, not derived from sensitive information
Keys should not be predictable or contain sensitive data like email addresses or social security numbers.
High cardinality: Many unique values will be seen by the application
Ensure the key space is large enough to support meaningful percentage rollouts. Low cardinality (like “true”/“false”) prevents fine-grained rollout control.
Rollout consistency
The key is used as the default attribute for percentage rollouts. The same characteristics apply to any other attribute you plan to use for rollouts.
Example: Session consistency
To maintain consistency pre- and post-login, use a session context instead of a user context for rollouts:
// Session context persists across authentication
const sessionContext = {
kind: "session",
key: generateSessionId(), // UUID stored in session storage
anonymous: true
};
Configure rollouts by session key to ensure users see consistent behavior whether logged in or not.
Defining attributes
Create attributes that support your targeting and rollout use cases.
Do
Create multiple identifiers for different contexts and consistency boundaries
Define separate context kinds for user, session, device, etc., each with appropriate attributes:
const multiContext = {
kind: "multi",
user: {
key: "user-123",
name: "Jane Doe",
createdAt: 1640000000000
},
session: {
key: "session-abc",
anonymous: true,
startedAt: Date.now()
}
};
Create attributes that support targeting and rollout use-cases
Include attributes you’ll actually use for targeting:
const userContext = {
kind: "user",
key: "user-123",
email: "jane@example.com",
plan: "enterprise",
region: "us-west",
betaTester: true
};
Define private attributes when targeting on sensitive information
Mark sensitive attributes as private to prevent them from being sent to LaunchDarkly:
const userContext = {
kind: "user",
key: "user-123",
email: "jane@example.com",
_meta: {
privateAttributes: ["email", "ipAddress"]
}
};
Do not
Use any values derived from PII or sensitive values as keys
Never use email addresses, phone numbers, or other PII directly as keys:
// Bad
const context = { kind: "user", key: "jane@example.com" };
// Good
const context = {
kind: "user",
key: "user-123",
email: "jane@example.com",
_meta: { privateAttributes: ["email"] }
};
Rapidly change attributes in client-side SDKs
Avoid using current timestamp or frequently changing values as attributes:
// Bad - causes excessive events
const context = {
kind: "user",
key: "user-123",
lastActivity: Date.now() // Changes every render
};
// Good - use stable attributes
const context = {
kind: "user",
key: "user-123",
sessionStart: sessionStartTime // Stable for session
};
Mix value types or sources for an attribute within the same project
Keep attribute types consistent across your codebase:
// Bad - inconsistent types
// iOS app sends: { plan: "enterprise" }
// Web app sends: { plan: 3 }
// Good - consistent types
// All apps send: { plan: "enterprise" }
Attribute naming conventions
Follow these conventions for consistency:
- Use camelCase for attribute names: userId, planType, isActivated
- Use clear, descriptive names: prefer accountCreationDate over acd
- Prefix boolean attributes with is, has, or can: isActive, hasAccess, canEdit
- Use standard date formats: RFC3339 strings or Unix timestamps in milliseconds
Multi-kind contexts
For complex targeting scenarios, use multi-kind contexts to evaluate against multiple entities simultaneously:
const multiContext = {
kind: "multi",
user: {
key: "user-123",
email: "jane@example.com",
plan: "enterprise"
},
organization: {
key: "org-456",
name: "Acme Corp",
industry: "technology"
},
device: {
key: "device-789",
platform: "iOS",
model: "iPhone 14"
}
};
This allows targeting rules like “serve to users in enterprise plan OR organizations in technology industry”.
Automatic environment attributes
Some SDKs automatically collect environment metadata and make it available as context attributes. This reduces boilerplate and provides consistent targeting capabilities.
ld_application
Automatically collected application metadata available in mobile and client-side SDKs.
Attributes
The ld_application context includes these attributes:
| Name | Description |
|---|---|
| key | Automatically generated by the SDK |
| id | Bundle Identifier |
| locale | Locale of the device, in IETF BCP 47 Language Tag format |
| name | Human-friendly name of the application |
| version | Version of the application used for update comparison |
| versionName | Human-friendly name of the version |
Use cases
Disable features on known bad builds
Target specific application versions to disable features on buggy releases:
IF ld_application version is one of 1.2.3, 1.2.4
THEN serve "Off"
This is valuable for mobile applications or heavily cached SPAs, where users may not update immediately.
Application-level configuration and customization
Serve different configurations based on application bundle ID or locale:
IF ld_application locale is one of es, es-MX, es-ES
THEN serve spanish-config
Determine when to sunset legacy behavior
Export context metrics to understand what application versions are still in use:
IF ld_application version < 2.0.0
THEN serve legacy-behavior
ELSE serve new-behavior
Use LaunchDarkly’s Data Export to analyze version distribution. This helps you decide when to drop support for older versions.
ld_device
Information about the platform, operating system, and device automatically collected by mobile SDKs.
Attributes
The ld_device context includes these attributes:
| Name | Description |
|---|---|
| key | Automatically generated by the SDK |
| manufacturer | Manufacturer of the device (Apple, Samsung, etc.) |
| model | Model of the device (iPhone, iPad, Galaxy S21) |
| /os | Operating system of the device. Includes properties for family, name, and version |
Use cases
Roll out by platform to reduce platform-specific issues
Start rollouts on platforms where you have stronger test coverage:
Rollout 10% by ld_device key
IF ld_device /os/family is iOS
Release to tier 1 supported platforms before testing on lower tiers
Prioritize your primary platforms:
IF ld_device manufacturer is one of Apple, Samsung
THEN serve 20% rollout
ELSE serve 5% rollout
Platform-specific feature or hardware targeting
Target features that require specific hardware capabilities:
IF ld_device model is one of iPhone 14, iPhone 15
AND custom-attribute has-nfc is true
THEN serve nfc-payment-feature
Operating system targeting
Access nested OS information using JSON paths:
IF ld_device /os/name is Android
AND ld_device /os/version >= 13
THEN serve android-13-features
To learn more, read the LaunchDarkly documentation on automatic environment attributes.
Best practices
Guidelines for implementing contexts effectively and avoiding common pitfalls.
Context design
Do
Create attributes that support targeting and rollout use cases
Only add attributes you’ll actually use for targeting or analytics:
// Good - actionable attributes
const context = {
kind: "user",
key: "user-123",
plan: "enterprise", // For entitlement targeting
region: "us-west", // For regional rollouts
betaTester: true // For beta feature access
};
// Bad - unused attributes
const context = {
kind: "user",
key: "user-123",
favoriteColor: "blue", // Not used for targeting
shoeSize: 10 // Not used for targeting
};
Create multiple identifiers for different contexts and consistency boundaries
Define separate contexts for user, session, device, etc.:
const multiContext = {
kind: "multi",
user: {
key: "user-123",
plan: "enterprise"
},
session: {
key: "session-abc",
anonymous: true
},
device: {
key: "device-789",
platform: "iOS"
}
};
Define private attributes when targeting on sensitive information
Mark any PII or sensitive data as private:
const context = {
kind: "user",
key: "user-123",
email: "jane@example.com",
ipAddress: "192.168.1.1",
_meta: {
privateAttributes: ["email", "ipAddress"]
}
};
Do not
Use values derived from PII or sensitive values as keys
Never use email, phone numbers, or other PII directly as keys:
// Bad - PII as key
const context = { kind: "user", key: "jane@example.com" };
// Good - opaque key, PII as private attribute
const context = {
kind: "user",
key: "user-123",
email: "jane@example.com",
_meta: { privateAttributes: ["email"] }
};
Rapidly change attributes in client-side SDKs
Avoid timestamp or frequently changing attributes:
// Bad - changes every render
const context = {
kind: "user",
key: "user-123",
currentTime: Date.now()
};
// Good - stable attributes
const context = {
kind: "user",
key: "user-123",
sessionStartTime: sessionStart
};
Mix value types or sources for an attribute within the same project
Keep attribute types consistent:
// Bad - inconsistent types across applications
// iOS: { accountType: "premium" }
// Web: { accountType: 1 }
// Good - consistent types
// All apps: { accountType: "premium" }
Flag evaluation
Do
Call variation/variationDetail where the flag will be used
Evaluate flags at the point of use:
// Good - evaluate where needed
function renderButton() {
const showNewButton = ldClient.variation("new-button-ui", context, false);
return showNewButton ? <NewButton /> : <OldButton />;
}
// Bad - evaluate unnecessarily
function loadPage() {
const allFlags = ldClient.allFlags(); // Evaluates all flags
// Only use one flag
return allFlags['new-button-ui'];
}
Maintain fallback values that allow the application to function
Choose safe fallback values:
// Good - safe fallbacks
const maxRetries = ldClient.variation("max-retries", context, 3);
const featureEnabled = ldClient.variation("new-feature", context, false);
// Bad - no fallback or unsafe fallback
const maxRetries = ldClient.variation("max-retries", context); // undefined
const criticalFeature = ldClient.variation("payment-enabled", context, true); // unsafe default
Write code with cleanup in mind
Minimize flag usage to simplify cleanup:
// Good - single evaluation point
function PaymentForm() {
const useNewPaymentFlow = ldClient.variation("new-payment-flow", context, false);
return useNewPaymentFlow ? <NewPaymentForm /> : <OldPaymentForm />;
}
// Bad - multiple evaluation points
function PaymentForm() {
if (ldClient.variation("new-payment-flow", context, false)) {
// New flow code
}
const buttonText = ldClient.variation("new-payment-flow", context, false)
? "Pay Now"
: "Submit Payment";
// More evaluations...
}
Do not
Use allFlags/allFlagsState for use cases other than passing values to another application
Only use allFlags when absolutely necessary:
// Bad - unnecessary allFlags call
const flags = ldClient.allFlags();
if (flags['feature-x']) {
// Use feature
}
// Good - targeted evaluation
if (ldClient.variation("feature-x", context, false)) {
// Use feature
}
// Good - passing to another application
const flagState = ldClient.allFlagsState(context);
bootstrapFrontend(flagState);
Call variation/variationDetail without using the flag value
Don’t evaluate flags you won’t use:
// Bad - unused evaluation
ldClient.variation("feature-flag", context, false);
// Flag value never used
// Good - use the value
const enabled = ldClient.variation("feature-flag", context, false);
if (enabled) {
enableFeature();
}
Use flags without a plan
Have a clear purpose and cleanup plan:
// Bad - unclear purpose
const flag1 = ldClient.variation("temp-flag", context, false);
const flag2 = ldClient.variation("test-something", context, false);
// Good - clear purpose and naming
const useNewCheckoutFlow = ldClient.variation(
"checkout-flow-v2-rollout", // Clear name
context,
false // Safe fallback to old flow
);
// TODO: Remove this flag after 100% rollout - JIRA-123
Code references
Add ld-find-code-refs to your CI pipeline to track flag usage:
# .github/workflows/launchdarkly-code-refs.yml
name: LaunchDarkly Code References
on: push
jobs:
find-code-refs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: LaunchDarkly Code References
uses: launchdarkly/find-code-references@v2
with:
project-key: my-project
access-token: ${{ secrets.LD_ACCESS_TOKEN }}
Code references enable you to:
- Identify where flags are used in your codebase
- Understand the impact before making flag changes
- Find and remove flags when they’re no longer needed
- Manage technical debt associated with feature flags
Flag planning
Make flag planning part of feature design. Before creating a flag, ask:
- Is this flag temporary or permanent?
- Who is responsible for maintaining the targeting rules in each environment?
- How will this flag be targeted? (By user? By session? By organization?)
- Does this feature have any dependencies? (Other flags? External services?)
- What is the cleanup plan? (When can this flag be removed?)
Minimize flag reach
Use flags in as few places as possible:
// Good - single evaluation point
function App() {
const useNewUI = ldClient.variation("new-ui", context, false);
return <AppLayout newUI={useNewUI} />;
}
// Bad - evaluated throughout the application
function Header() {
if (ldClient.variation("new-ui", context, false)) { /* ... */ }
}
function Sidebar() {
if (ldClient.variation("new-ui", context, false)) { /* ... */ }
}
function Footer() {
if (ldClient.variation("new-ui", context, false)) { /* ... */ }
}
A flag should have a single, well-defined scope. For larger features, consider breaking them into multiple flags using prerequisites.
Failure mode resilience
Connection loss
If the connection to LaunchDarkly is lost:
Client-side SDKs: Context cannot be updated via identify() until connectivity is re-established. Values from the in-memory cache will be served.
Server-side SDKs: Evaluation for any context can still take place using the in-memory feature store.
Initialization failure
If the SDK is unable to initialize:
Both SDK types: Locally cached values will be served if available, otherwise fallback values will be used.
Always provide sensible fallback values that allow your application to function:
// Good - safe degradation
const featureEnabled = ldClient.variation("new-feature", context, false);
// Bad - application breaks if SDK fails
const criticalConfig = ldClient.variation("api-endpoint", context); // undefined
Context types
This section documents the context types used in your organization. Each context type represents a different entity you can target with feature flags.
Available context types
Session
Unauthenticated user sessions that maintain consistency pre- and post-login. Use for anonymous user tracking and experimentation.
User
Authenticated users with consistent experience across devices. Use for user-level entitlements and cross-device feature rollouts.
Build
Build metadata for version-aware feature management across applications and services. Use for build-specific targeting and microservice coordination.
Browser
Browser and platform detection for web applications. Use for browser-specific rollouts and progressive enhancement.
Request
Request or transaction-specific metadata for API services. Use for API versioning and endpoint-specific behavior.
Multi-kind contexts
You can combine multiple context types in a single evaluation for sophisticated targeting:
const multiContext = {
kind: "multi",
session: {
key: sessionId,
anonymous: true
},
user: {
key: userId,
name: userName
},
build: {
key: "api-orders-2.5.0",
version: "2.5.0"
}
};
This enables targeting rules like “roll out to 20% of sessions OR all enterprise users on build version 2.5.0+”.
Adding custom context types
To document a new context type for your organization:
- Create a new markdown file in this directory named after your context kind
- Follow the table structure from existing context types
- Include implementation examples
- Document specific use cases for your context
- Add the new context type to the navigation in SUMMARY.md
Session
Unauthenticated user sessions that maintain consistency pre- and post-login.
Session contexts use these attributes:
| Attribute | Type | Source | Example | Private | Notes |
|---|---|---|---|---|---|
| key | String | Local/Session Storage | 7D97F0D3-B18C-4305-B110-0317BDB745DC | FALSE | Random UUID stored in session-bound storage |
| anonymous | Boolean | Static: true | true | FALSE | Always true for session contexts |
| startedAt | Number | Current time at session creation | 1640995200000 | FALSE | Unix timestamp in milliseconds |
Implementation example
Implement a session context like this:
const sessionContext = {
kind: "session",
key: getOrCreateSessionId(),
anonymous: true,
startedAt: Date.now()
};
function getOrCreateSessionId() {
let sessionId = sessionStorage.getItem('ld-session-id');
if (!sessionId) {
sessionId = crypto.randomUUID();
sessionStorage.setItem('ld-session-id', sessionId);
}
return sessionId;
}
Use cases
Progressive release for unauthenticated users
Use this when you roll out features to users before they log in. This ensures anonymous visitors see new features without authentication.
Maintain consistency across authentication
Use this when features should remain consistent before and after login. For example, a user sees a new checkout flow while browsing anonymously. They continue to see it after logging in during the same session.
Kill switch for resource-intensive features
Use this when you disable expensive features for unauthenticated users. For example, disable AI-powered recommendations or real-time chat for anonymous sessions. This reduces infrastructure costs.
Experimentation for pre-authentication paths
Use this when you run experiments on registration flows, shopping cart behavior, or add-to-cart functionality. Rolling out by session ensures consistent experience throughout the visitor’s journey.
User
Authenticated users with consistent experience across devices.
User contexts use these attributes:
| Attribute | Type | Source | Example | Private | Notes |
|---|---|---|---|---|---|
| key | String | User Database | 7D97F0D3-B18C-4305-B110-0317BDB745DC | FALSE | User identifier from database (UUID) |
| anonymous | Boolean | Static: false | false | FALSE | Always false for authenticated users |
| name | String | User Profile | Jane Doe | FALSE | User’s first and last name |
| email | String | User Profile | jane@example.com | TRUE | User’s email address |
| createdAt | Number | User Database | 1640995200000 | FALSE | Unix timestamp in milliseconds |
Implementation example
Implement a user context like this:
const userContext = {
kind: "user",
key: currentUser.id,
anonymous: false,
name: `${currentUser.firstName} ${currentUser.lastName}`,
email: currentUser.email,
createdAt: currentUser.createdAt,
_meta: {
privateAttributes: ["email"]
}
};
Use cases
Progressive release for features consistent across devices
Use this when you roll out features that work the same way on web, mobile, and tablet. For example, a new user profile layout or account settings feature.
User-level entitlements and overrides
Use this when you enable features for specific users or user groups. For example, beta features for internal employees or premium features for paying customers.
Experimentation for post-authentication activities
Use this when you run experiments on authenticated features like account management, saved preferences, or personalized recommendations. This ensures consistent experience across sessions and devices.
Enable legacy behavior based on user creation date
Use this when you maintain backward compatibility for existing users. For example, grandfather users created before a certain date into the old pricing model or feature set.
Build
Build metadata for version-aware feature management across applications and services.
Build contexts use these attributes:
| Attribute | Type | Source | Example | Private | Notes |
|---|---|---|---|---|---|
| key | String | Composite | api-orders-2.5.0 | FALSE | Service/App ID + Version |
| id | String | Build Configuration | api-orders | FALSE | Unique identifier for the service/app |
| name | String | Build Configuration | Order API | FALSE | Human-friendly name |
| version | String | Build Metadata | 2.5.0 | FALSE | Build version (semantic version) |
| versionName | String | Build Metadata | 2.5 | FALSE | Human-friendly version name |
| buildDate | Number | Build Metadata | 1640995200000 | FALSE | Unix timestamp when build was created |
| commit | String | Git Metadata | a1b2c3d | FALSE | Git commit SHA (short or full) |
Implementation example
Implement a build context like this:
const buildContext = {
kind: "build",
key: `${SERVICE_ID}-${VERSION}`,
id: process.env.SERVICE_ID,
name: "Order API",
version: process.env.VERSION,
versionName: "2.5",
buildDate: parseInt(process.env.BUILD_TIMESTAMP),
commit: process.env.GIT_COMMIT
};
Use cases
Disable features on known bad builds
Use this when you quickly disable features on buggy releases. For example, version 2.5.0 has a critical bug. Disable the problematic feature only for that version while users update.
Application-level configuration and customization
Use this when different applications in your ecosystem need different feature sets. For example, enable advanced analytics in the admin portal. Do not enable it in the customer-facing app.
Export metrics to determine when to sunset legacy behavior
Use this when you plan to remove old code paths. Export context metrics to understand what application versions are still in use. This helps you decide when to drop support for older versions.
Coordinate microservice deployments
Use this when you roll out features that require specific service versions. For example, enable a new API endpoint only when both frontend and backend deploy compatible versions.
Target by git commit
Use this when you enable or disable features for specific code commits. For example, a regression occurs in commit a1b2c3d. Disable the problematic feature only for that commit while you prepare a fix.
Browser
Browser and platform detection for web applications.
Browser contexts use these attributes:
| Attribute | Type | Source | Example | Private | Notes |
|---|---|---|---|---|---|
| key | String | Composite | chrome-120.0.6099.109 | FALSE | Browser Identifier + Version String |
| userAgent | String | Navigator API | Mozilla/5.0… | FALSE | Browser’s user-agent string |
| appName | String | Browser Detection | Chrome | FALSE | Browser app name (Firefox, Safari, Chrome) |
| /app/firefox/version | String | Browser Detection | 121.0 | FALSE | Firefox version |
| /app/chrome/version | String | Browser Detection | 120.0.6099.109 | FALSE | Chrome version |
| /app/safari/version | String | Browser Detection | 17.2 | FALSE | Safari version |
| /locale/tag | String | Navigator API | en | FALSE | Language code (en, es, de) |
Implementation example
Implement a browser context like this:
const browserContext = {
kind: "browser",
key: `${browserName}-${browserVersion}`,
userAgent: navigator.userAgent,
appName: browserName,
app: {
[browserName.toLowerCase()]: {
version: browserVersion
}
},
locale: {
tag: navigator.language.split('-')[0]
}
};
Use cases
Roll out by browser to reduce browser-specific issues
Use this when you start a rollout on browsers where you have strong test coverage. For example, roll out to Chrome at 20% while Safari remains at 5% until you verify compatibility.
Progressive enhancement based on browser capabilities
Use this when you enable features that require modern browser APIs. For example, enable WebGL-based visualizations only in browsers that support it. You can also use Service Workers only in compatible browsers.
Browser version targeting
Use this when your features require minimum browser versions. For example, enable features that use CSS Grid only on browsers with full Grid support. Use modern JavaScript features only on browsers with ES2020+ support.
Localization testing
Use this when you roll out internationalization features. For example, enable new translations for users with specific locale settings. You can also test RTL layout for Arabic or Hebrew language users.
Request
Request or transaction-specific metadata for API services.
Request contexts use these attributes:
| Attribute | Type | Source | Example | Private | Notes |
|---|---|---|---|---|---|
| key | String | Generated UUID | 7D97F0D3-B18C-4305-B110-0317BDB745DC | FALSE | Random UUID per request |
| anonymous | Boolean | Static: true | true | FALSE | Always true for request contexts |
| path | String | HTTP Request | /api/users | FALSE | HTTP request path |
| method | String | HTTP Request | POST | FALSE | HTTP request method |
| api-version | String | HTTP Header | v2 | FALSE | API version from X-API-Version header |
| request-client | String | HTTP Header | mobile-ios | FALSE | Name of the requesting client application |
Implementation example
Implement a request context like this:
const requestContext = {
kind: "request",
key: crypto.randomUUID(),
anonymous: true,
path: req.path,
method: req.method,
"api-version": req.headers['x-api-version'],
"request-client": req.headers['x-client-name']
};
Use cases
Coordinate breaking changes across projects
Use this when you manage API versioning with feature flags. For example, serve new response formats only to requests specifying api-version: v2 or higher. This allows gradual migration.
Distribute risk by request or transaction
Use this when you roll out new behavior for specific endpoints. For example, enable new validation logic on the /api/orders endpoint at 10% while other endpoints remain unchanged.
Request-level rate limiting or special handling
Use this when you apply different behavior to specific request types. For example, enable aggressive caching only for GET requests. You can also apply stricter validation for POST/PUT/DELETE operations.
Client-specific behavior
Use this when you serve different responses based on the client. For example, return simplified responses for mobile clients to reduce bandwidth. You can also enable experimental features for internal testing tools.
Resiliency
This topic covers strategies for making your application resilient when feature flags are unavailable due to network partitions or outages.
Application Resilience Goals
Design your application to be resilient even if LaunchDarkly or the network is degraded. Your application must:
- Start successfully even if the SDK cannot connect during initialization
- Function in at most a degraded state when LaunchDarkly’s service is unavailable
A degraded state is acceptable when new features are temporarily disabled or optimizations are bypassed. A degraded state is not acceptable when core functionality breaks, errors occur, or user data is corrupted.
Focus Areas
Resiliency can be achieved through SDK implementation and via external infrastructure components such as LD Relay. This section covers both strategies and their tradeoffs. Our general recommendation is to start with SDK implementation and only add LD Relay if you have a specific use case that requires it.
Future Improvements
LaunchDarkly’s ongoing work in Feature Delivery v2 (FDv2) focuses on strengthening SDK and service capabilities so customers can achieve true resilience without depending on additional infrastructure components.
Key Concepts
This section defines the fundamental concepts and metrics used when designing resilient LaunchDarkly integrations.
Initialization
Initialization is the process by which a LaunchDarkly SDK establishes a connection to LaunchDarkly’s service and retrieves the current flag rules for your environment. During initialization, the SDK performs these steps:
- Establishes a connection to LaunchDarkly’s streaming or polling endpoints
- Retrieves all flag definitions and rules for your environment
- Stores these rules in an in-memory cache
- Begins listening for real-time updates to flag rules
Until initialization completes, the SDK cannot evaluate flags using the latest rules from LaunchDarkly. If a flag evaluation is requested before initialization completes, the SDK returns the fallback value you provide.
Initialization is a one-time process that occurs when the SDK client is first created. After initialization, the SDK maintains a persistent connection in streaming mode or periodically polls in polling mode to receive updates to flag rules.
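Server-side SDKs typically expose a promise that resolves when initialization completes (for example, `waitForInitialization()` in the Node.js server SDK). A non-blocking startup can race that promise against a timeout; this is a sketch against a simplified client shape, not the full SDK interface:

```typescript
// Race SDK initialization against a timeout so application startup is
// never blocked indefinitely. The SDK keeps retrying in the background
// even after the timeout fires; fallback values are served until then.
async function initWithTimeout(
  client: { waitForInitialization(): Promise<void> },
  timeoutMs: number
): Promise<boolean> {
  const timedOut = new Promise<boolean>((resolve) =>
    setTimeout(() => resolve(false), timeoutMs)
  );
  const ready = client.waitForInitialization().then(
    () => true,
    () => false // initialization failed; proceed with fallbacks
  );
  return Promise.race([ready, timedOut]);
}
```

If `initWithTimeout` returns false, continue starting up anyway: evaluations return your fallback values until the SDK connects.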
Fallback/Default Values
Fallback values, also called default values, are the values your application provides to the SDK when calling variation() or variationDetail(). These values are returned by the SDK when:
- The SDK has not yet initialized
- The SDK cannot connect to LaunchDarkly’s service
- A flag does not exist or has been deleted
- The SDK is in offline mode
Fallback values are defined in your application code and represent the safe, default behavior your application should exhibit when flag data is unavailable. These values ensure your application continues to function even when LaunchDarkly’s service is unreachable.
Critical principle: Every flag evaluation must provide a fallback value that represents a safe, degraded state for your application. Never assume flag data is always available.
Key Metrics
Understanding these metrics helps you measure and improve the resilience of your LaunchDarkly integration.
Initialization Availability
Initialization Availability measures how often an SDK can successfully retrieve flag rules (at worst, stale ones) when it initializes.
High initialization availability means your application rarely serves fallback values.
Low initialization availability means your application frequently starts without flag data and must rely entirely on fallback values, potentially leading to degraded functionality.
Initialization Latency
Initialization Latency measures the time between creating an SDK client instance and when it successfully retrieves flag rules and is ready to evaluate flags.
This metric is critical for:
- Application startup time - Long initialization latency can delay application readiness
- User experience - Client-side applications may show incorrect UI if initialization takes too long
- Serverless cold starts - High latency can impact function execution time
Best practices aim to minimize initialization latency through:
- Non-blocking initialization with appropriate timeouts
- Bootstrapping strategies for client-side SDKs
- Relay Proxy deployment for serverless and high-scale environments
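For client-side SDKs, bootstrapping means the server evaluates flags during page render and embeds the values in the page, so the browser SDK can start with data immediately (the JavaScript SDK accepts these values via its `bootstrap` option). A sketch of the server-rendering side; the `window.ldBootstrap` property name is illustrative:

```typescript
// Serialize server-evaluated flag values into a script tag the page can
// embed; the client-side SDK reads them on startup instead of waiting
// for its first network round trip.
function bootstrapScript(flagValues: Record<string, unknown>): string {
  const json = JSON.stringify(flagValues);
  return `<script>window.ldBootstrap = ${json};</script>`;
}
```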
Evaluation Latency
Evaluation Latency measures the time it takes for the SDK to evaluate a flag and return a value after variation() is called.
This metric is typically very low, under 1ms, because:
- Flag rules are cached in memory after initialization
- Evaluation is a local computation that does not require network calls
- The SDK uses efficient in-memory data structures
Evaluation latency can increase if:
- The SDK is evaluating many flags simultaneously under high load
- Flag rules are extremely complex with many targeting rules or large segments
- The SDK is using external stores like Redis that introduce network latency
Update Propagation Latency
Update Propagation Latency measures the time between when a flag change is made in the LaunchDarkly UI and when that change is reflected in SDK evaluations.
This metric is important for:
- Real-time feature rollouts - Understanding how quickly changes reach your applications
- Emergency rollbacks - Knowing how quickly you can disable a feature across all instances
- Consistency requirements - Ensuring multiple services see flag changes at roughly the same time
Update propagation latency depends on:
- SDK mode - Streaming mode provides near-instant updates typically under 200ms, while polling mode adds delay based on polling interval
- Network conditions - Latency between your infrastructure and LaunchDarkly’s service
- Relay Proxy configuration - Additional hop when using Relay Proxy, usually minimal
- Geographic distribution - Applications in different regions may receive updates at slightly different times
In streaming mode, updates typically propagate in under 200ms. In polling mode, updates propagate within the configured polling interval, typically 30-60 seconds.
SDK Implementation
This guide explains how to implement LaunchDarkly SDKs for maximum resilience and minimal latency. Following these practices ensures your application can start reliably and function in a degraded state when LaunchDarkly’s service is unavailable.
Key Points
- The SDK automatically retries connectivity in the background - Even if the initialization timeout was exceeded, the SDK continues attempting to establish connectivity and update flag rules automatically.
- You do not need to manually manage connections or implement your own cache on top of the SDK - The SDK handles all connection management and caching automatically.
- Fallback values are only served when the SDK hasn’t initialized or the flag doesn’t exist - Once initialization completes, the SDK uses cached flag rules for all evaluations. Fallback values are returned only during the initial connection period or when a flag key is invalid.
- You can subscribe to flag updates to update application state - Most SDKs provide a mechanism to fire a callback when flag rules change. Use this to have your application react to updates when connectivity is restored instead of waiting for the next `variation` call. This is common in Single-Page Applications and Mobile Applications.
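The update-subscription pattern from the last point can be sketched with a Node-style event emitter. The `update:<flagKey>` event name follows the Node.js server SDK's convention, but treat it as an assumption and check your SDK's documentation:

```typescript
import { EventEmitter } from "events";

// Re-evaluate and apply a flag whenever the SDK reports a rule change,
// instead of waiting for the next variation() call.
function onFlagChange(
  client: EventEmitter,
  flagKey: string,
  handler: () => void
): void {
  client.on(`update:${flagKey}`, handler);
}
```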
Preflight Checklist Items for Application Latency
The following items from the SDK Preflight Checklist are particularly important for improving application latency and resilience:
- Application does not block on initialization - Set timeouts: 100-500ms client-side, 1-5s server-side. Prevents extended startup delays.
- Bootstrapping strategy defined and implemented - For client-side SDKs. Provides flag values immediately on page load, eliminating initialization delay for first paint.
- SDK is initialized once as a singleton early in the application’s lifecycle - Prevents duplicate connections and ensures efficient resource usage.
- Define and document fallback strategy - Every flag evaluation must provide a safe fallback value. Enables immediate evaluation without waiting for initialization.
- Use `variation`/`variationDetail`, not `allFlags`/`allFlagsState`, for evaluation - Direct evaluation is faster and provides better telemetry.
- Leverage LD Relay to reduce initialization latency - For serverless functions and high-scale applications. Reduces initialization time from hundreds to tens of milliseconds.
- Initialize the SDK outside of the handler - For serverless functions. Allows container reuse, eliminating initialization latency for warm invocations.
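The last two checklist items combine in serverless runtimes: create the client once at module scope so warm invocations reuse it. A sketch with a generic factory; the client type is deliberately simplified:

```typescript
// Simplified client shape for illustration
type LDClientLike = {
  variation(flagKey: string, ctx: object, fallback: unknown): unknown;
};

// Module-scope cache: the client survives across warm invocations of the
// same container, so only cold starts pay the initialization cost.
let cachedClient: LDClientLike | undefined;

function getClient(create: () => LDClientLike): LDClientLike {
  if (!cachedClient) {
    cachedClient = create(); // runs once per container
  }
  return cachedClient;
}
```

In a real Lambda or Cloud Function, call `getClient` (or initialize the SDK directly) outside the handler body and only await readiness inside it.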
Data Saving Mode
Currently in Early Access, Data Saving Mode introduces several key changes to the LaunchDarkly data system that greatly improve resilience to outages. To learn more, read Data Saving Mode in the LaunchDarkly documentation.
Initialization Caching
In data saving mode, the SDK polls for initialization and subsequently opens a streaming connection to receive real-time flag configuration changes. We can achieve high initialization availability by adding a caching proxy between the SDK and the polling endpoint while leaving the direct connection to the streaming endpoint in place.
The standard data system also allows falling back to polling when streaming is not available.
Fallback Values
Fallback values are critical to application resilience. They ensure your application continues functioning when LaunchDarkly’s service is unavailable or flag data cannot be retrieved. However, fallback values can become stale over time, leading to incorrect behavior during outages. This guide explains how to choose appropriate fallback values and maintain them effectively.
- Choosing Fallback Values - Strategies for selecting appropriate fallback values
- Monitoring Fallback Values - Runtime monitoring with SDK hooks and data export
- Maintaining Fallback Values - Processes and centralized management
- Automating Fallback Updates - Build-time automation
Choosing Fallback Values
The fallback value you choose depends on the risk and impact of the feature being unavailable. Consider these strategies:
Fail Closed
Definition: Turn off the feature when flag data is unavailable.
When to use:
- New features that haven’t been fully validated
- Features that have not been released to all users
- Features that could introduce significant load or stability issues if released to everyone at once
Example:
// New checkout flow - fail closed if flag unavailable
const useNewCheckout = ldClient.variation('new-checkout-flow', false);
if (useNewCheckout) {
return <NewCheckoutComponent />;
}
return <LegacyCheckoutComponent />;
Fail Open
Definition: Enable the feature for everyone when flag data is unavailable.
When to use:
- Features that have been generally available, also known as GA, for a while and are stable
- Circuit breakers/operational flags where the on state is the norm
- Features that would have significant impact if disabled for everyone at once
Example:
// Enable caching when flag is unavailable. Failing closed would cause a significant performance degradation.
const enableCache = ldClient.variation('enable-caching', true);
For temporary flags that you intend to remove, consider cleaning up and archiving the flag instead of updating the fallback value to true.
Dynamic Fallback Values
Definition: Implement logic to provide different fallback values to different users based on context.
When to use:
- Configuration/operational flags that override values from another source (environment variables, configuration, etc.)
- Advanced scenarios requiring sophisticated fallback logic
Example:
function getRateLimit(request: RequestContext): number {
// dynamic rate limit based on the request method
return ldClient.variation('config-rate-limit', request, request.method === 'GET' ? 100 : 10);
}
Maintaining Fallback Values
Fallback values can become stale as flags evolve. Use these methods to ensure fallback values remain accurate and up-to-date.
Create a Formal Process
Establish a formal process for defining and updating fallback values:
Process steps:
- Define fallback values at flag creation - Require fallback values when creating flags
- Document fallback strategy - Document why each fallback value was chosen (failing closed vs. failing open)
- Review fallback values during flag lifecycle - Review fallback values when:
- Flags are promoted from development to production
- Flags are modified or targeting rules change
- Flags are deprecated or removed
- Update fallback values as flags mature - Update fallback values when flags become GA or stable
- Test fallback values - Include fallback value testing in your testing strategy
Documentation template:
Flag: new-checkout-flow
Fallback value: false - failing closed
Rationale: New feature not yet validated in production. Safer to use legacy checkout during outages.
Review date: 2024-01-15
Next review: When flag reaches 50% rollout
Monitoring Fallback Values
Monitor fallback value usage and identify stale or incorrect fallback values.
Runtime
SDK Hooks
- Implement a before evaluation hook
- Record the fallback value for each evaluation in a telemetry system
- Compare fallback values to current flag state and desired behavior
Example SDK Hook:
class FallbackMonitoringHook implements Hook {
getMetadata() {
return { name: 'fallback-monitoring-hook' };
}
beforeEvaluation(seriesContext: EvaluationSeriesContext, data: EvaluationSeriesData) {
// Always log the fallback value being used
this.logFallbackValue(
seriesContext.flagKey,
seriesContext.defaultValue,
);
return data;
}
private logFallbackValue(flagKey: string, fallbackValue: unknown) {
// Send to your telemetry system; console logging shown for brevity
console.log(`fallback for ${flagKey}: ${JSON.stringify(fallbackValue)}`);
}
}
API
Use the LaunchDarkly API to generate reports on fallback values:
Example: This fallback-report script demonstrates how to:
- Retrieve flag definitions from the LaunchDarkly API
- Compare flag fallthrough/off variations with fallback values in code
- Generate reports identifying mismatches
Use cases:
- Scheduled reports comparing flag definitions with code fallback values
- CI/CD integration to detect fallback value mismatches
- Periodic audits of fallback value accuracy
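At its core, such a report reduces to comparing two maps: the fallback values found in your code and the values the flag definitions would serve. A simplified sketch; the input shapes are assumptions, and a real script must also account for targeting rules and rollouts:

```typescript
// Return flag keys whose fallback value in code differs from the value
// the flag definition would serve.
function findFallbackMismatches(
  codeFallbacks: Record<string, unknown>,
  flagValues: Record<string, unknown>
): string[] {
  return Object.keys(codeFallbacks).filter(
    (key) => key in flagValues && flagValues[key] !== codeFallbacks[key]
  );
}
```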
Note: This approach relies on telemetry from SDKs generated when variation/variationDetail are called. The API only reports one fallback value and cannot reliably handle situations where different fallback values are used for different users or applications.
Static Analysis
Use static analysis to analyze fallback values:
- Scan codebases for `variation()` calls
- Extract fallback values from source code
- Compare with flag definitions
AI Tools
Use AI tools to analyze fallback values:
- Use AI to analyze code and suggest fallback value updates. You can find an example prompt in the LaunchDarkly Labs Agent Prompts repository.
- Identify patterns in fallback value usage
- Generate recommendations based on flag lifecycle stage
- Use `ldcli` or the LaunchDarkly MCP to enable the agent to compare fallback values to flag definitions
Automating Fallback Updates
This approach centralizes fallback values in a configuration file and updates them automatically during your build process. The automation queries LaunchDarkly for current flag definitions and determines the appropriate fallback value for each flag.
Centralize Fallback Management
Wrapper Functions
Create wrapper functions around variation() and variationDetail() that load fallback values from a centralized configuration:
Example:
// fallback-config.json
{
"new-checkout-flow": false,
"cache-optimization-v2": true,
"experimental-feature": false
}
// wrapper.ts
import fallbackConfig from './fallback-config.json';
export function variationWithFallback(
client: LDClient,
flagKey: string,
context: LDContext
): LDEvaluationDetail {
let fallbackValue = fallbackConfig[flagKey];
if (fallbackValue === undefined) {
// you may want to make this an error in preproduction environments to catch missing fallback values early
console.warn(`No fallback value defined for flag: ${flagKey}`);
// you can use naming convention or other logic to determine a default fallback value
// for example, release flags may default to failing closed
if (flagKey.startsWith('release-')) {
fallbackValue = false;
}
}
return client.variationDetail(flagKey, context, fallbackValue);
}
Benefits:
- Single source of truth for fallback values
- Easier to automate fallback value updates
- Logic can be shared across applications and services
Tradeoffs:
- Difficult to implement dynamic fallback values (e.g., different fallback values for different users or applications)
- Loss of locality: fallback values are no longer present in the variation call and require checking the fallback definition file
Process
- During build, query LaunchDarkly SDK polling endpoint for all flag definitions in an environment
- For each flag, intelligently determine the appropriate fallback value:
- If flag is OFF → use off variation
- If prerequisites exist and would not pass for all users → use off variation
- If flag has multiple variations in targeting (rules, rollouts, etc.) → use off variation
- If all targeting serves a single variation → use that variation
- Update fallback configuration files with determined values
- Validate that fallback values match expected types
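Setting prerequisites aside, the selection rules above can be sketched as a small function over a simplified flag shape (the full jq script below also handles prerequisites and rollouts; the `servedVariations` field here is a precomputed assumption, not part of the real payload):

```typescript
interface SimpleFlag {
  on: boolean;
  offVariation: number;
  variations: unknown[];
  // Distinct variation indices that targeting (targets, rules, fallthrough)
  // can serve when the flag is on.
  servedVariations: number[];
}

// Pick the "safest" fallback value for an outage, per the rules above.
function selectFallback(flag: SimpleFlag): unknown {
  if (!flag.on) return flag.variations[flag.offVariation];
  if (flag.servedVariations.length === 1) {
    // All targeting serves a single variation: use that variation.
    return flag.variations[flag.servedVariations[0]];
  }
  // Multiple possible variations: fall back to the off variation.
  return flag.variations[flag.offVariation];
}
```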
Example Build Script
Fetch flag data from LaunchDarkly SDK polling endpoint:
#!/bin/bash
# Update fallback values from LaunchDarkly flags with intelligent fallback selection
# Uses the SDK polling endpoint which provides all flags for an environment
# Get the directory where this script is located
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
curl -s -H "Authorization: ${LD_SDK_KEY}" \
"https://sdk.launchdarkly.com/sdk/latest-all" \
| jq -f "${SCRIPT_DIR}/parse-fallbacks.jq" \
> fallback-config.json
Parse fallback values using intelligent logic (handles prerequisites, rollouts, and targeting rules):
# Recursive function to get recommended fallback for a flag
def get_fallback($flag; $allFlags):
# Helper function to get off variation value
def get_off_variation_value:
if $flag.offVariation != null then
$flag.variations[$flag.offVariation]
else
(debug("WARNING: Flag '\($flag.key)' has no offVariation set, omitting from output")) | empty
end;
# Helper function to check if flag has multi-variation rollouts
def has_multi_rollout:
(
# Check rules for multi-variation rollouts (with null safety)
(($flag.rules // []) | map(
if .rollout and .rollout.variations then
(.rollout.variations | map(select(.weight > 0)) | length) > 1
else
false
end
) | any) or
# Check fallthrough for multi-variation rollout (with null safety)
(if $flag.fallthrough.rollout and $flag.fallthrough.rollout.variations then
($flag.fallthrough.rollout.variations | map(select(.weight > 0)) | length) > 1
else
false
end)
);
# Helper function to get all unique variations served
def get_all_variations:
(
# Legacy target variations
([$flag.targets[]?.variation] // []) +
# Context target variations
([$flag.contextTargets[]?.variation] // []) +
# Rule variations (either direct or from single-variation rollout)
(($flag.rules // []) | map(
if .variation != null then
.variation
elif .rollout and .rollout.variations then
(.rollout.variations[] | select(.weight > 0) | .variation)
else
empty
end
)) +
# Fallthrough variation
(if $flag.fallthrough.variation != null then
[$flag.fallthrough.variation]
elif $flag.fallthrough.rollout and $flag.fallthrough.rollout.variations then
[$flag.fallthrough.rollout.variations[] | select(.weight > 0) | .variation]
else
[]
end)
) | unique;
# Step 1: Check prerequisites first
if ($flag.prerequisites and ($flag.prerequisites | length) > 0) then
# Check each prerequisite
($flag.prerequisites | map(
. as $prereq |
# Find the prerequisite flag in all flags
($allFlags[] | select(.key == $prereq.key)) as $prereqFlag |
# Get the fallback value for the prerequisite flag (recursive call)
get_fallback($prereqFlag; $allFlags) as $prereqFallback |
# Check if prerequisite passes:
# 1. If prereq flag is OFF, prerequisite fails
# 2. If prereq fallback does not match expected variation value, prerequisite fails
if ($prereqFlag.on == false) then
false
elif ($prereqFallback != $prereqFlag.variations[$prereq.variation]) then
false
else
true
end
) | all) as $allPrereqsPass |
# If any prerequisite fails, return off variation
if $allPrereqsPass == false then
get_off_variation_value
else
# All prerequisites pass, continue with normal evaluation
if $flag.on == false then
get_off_variation_value
elif has_multi_rollout then
get_off_variation_value
else
get_all_variations as $variations |
if ($variations | length) == 1 then
$flag.variations[$variations[0]]
else
get_off_variation_value
end
end
end
else
# No prerequisites, use normal evaluation logic
if $flag.on == false then
get_off_variation_value
elif has_multi_rollout then
get_off_variation_value
else
get_all_variations as $variations |
if ($variations | length) == 1 then
$flag.variations[$variations[0]]
else
get_off_variation_value
end
end
end;
# Main processing: convert flags object to array and process each one
.flags | to_entries | map(.value) as $allFlags |
[$allFlags[] |
. as $flag |
{
name: .key,
value: get_fallback($flag; $allFlags)
}
] | from_entries
Benefits
- Ensures fallback values match current flag definitions
- Reduces manual maintenance overhead
- Catches drift between code and flag definitions
- Supports automated flag lifecycle management
Considerations
- Uses the SDK polling endpoint (https://sdk.launchdarkly.com/sdk/latest-all), which provides all flags for an environment
- Requires an SDK key (not an API key) to authenticate
- The logic for determining fallback values is complex and handles prerequisites, rollouts, and targeting rules
- Prerequisites are evaluated recursively to determine if they would pass for all users
- Ensure your team is aware of how fallback values are generated from targeting state
- The generated fallback values represent the “safest” value for each flag during an outage
LD Relay
The LaunchDarkly Relay Proxy (LD Relay) can be deployed in your infrastructure to provide a local endpoint for SDKs, reducing outbound connections and potentially improving initialization availability. However, LD Relay introduces operational complexity and new failure modes that must be carefully managed.
Risks and Operational Burden
Using LD Relay introduces:
- Additional infrastructure: More services to deploy, monitor, scale, and secure
- Resource constraints: Insufficiently provisioned relay instances can become bottlenecks or points of failure
- Maintenance overhead: Your team must handle responsibilities previously managed by LaunchDarkly’s platform
Operationalizing LD Relay
Deploy Highly Available Infrastructure
Load balancer:
- Implement a highly available internal load balancer as the entry point for all flag delivery traffic
- If the load balancer is not highly available, it becomes a single point of failure
- Support routing to LaunchDarkly’s primary streaming network and LD Relay instances
Relay instances:
- Deploy multiple Relay Proxy instances across different availability zones
- Ensure each instance is properly sized and monitored
- Implement health checks and automatic failover
Persistent Storage
LD Relay can operate with or without persistent storage. Each approach has different tradeoffs:
With persistent storage such as Redis:
- Benefits:
- Enables scaling LD Relay instances during outages
- Allows restarting LD Relay instances without losing flag data
- Provides durable cache that survives Relay Proxy restarts
- Tradeoffs:
- Increases operational complexity. Additional service to manage.
- Requires configuring infinite cache TTLs to prevent lower availability and incorrect evaluations
- Prevents using AutoConfig. AutoConfig requires in-memory only operation.
- Additional monitoring and alerting requirements for cache health
Without persistent storage, in-memory only:
- Benefits:
- Simpler architecture with fewer components to manage
- Supports AutoConfig for dynamic environment configuration
- Lower operational overhead
- Tradeoffs:
- Relies on LD Relay cluster being able to service production traffic without restarting or adding instances during outages
- Lost cache on Relay Proxy restart. Requires re-initialization from LaunchDarkly’s service.
- Must ensure sufficient capacity and redundancy to handle outages without scaling
Monitor and Alert
Key metrics to monitor:
- Initialization latency and errors
- CPU/Memory utilization
- Network utilization
- Persistent store availability
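LD Relay exposes a `/status` endpoint you can wire into these checks. A sketch of evaluating its response; the payload shape and the "connected" status string are simplified assumptions, so consult the LD Relay documentation for the exact schema:

```typescript
// Simplified shapes for the /status payload (assumed, not the full schema)
interface RelayEnvStatus {
  status: string;
}
interface RelayStatus {
  environments: Record<string, RelayEnvStatus>;
}

// True only when every configured environment reports a live connection
// to LaunchDarkly.
function allEnvironmentsConnected(status: RelayStatus): boolean {
  return Object.values(status.environments).every(
    (env) => env.status === "connected"
  );
}
```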
When to use LD Relay
LD Relay can improve initialization availability in these scenarios:
Frequent Deployments or Restarts
Use LD Relay when: You deploy or restart services frequently, at least once per day.
Why: Frequent restarts mean frequent SDK initializations. LD Relay reduces initialization latency and provides cached flag data even if LaunchDarkly’s service is temporarily unavailable during a restart window.
Example scenarios:
- Kubernetes deployments with rolling restarts
- Serverless functions with frequent cold starts
- Containers that restart frequently for configuration updates
Critical Consistency Requirements
Use LD Relay when: Multiple services or instances must evaluate flags consistently, even during short outages of initialization availability.
Why: LD Relay provides a shared cache that multiple SDK instances can use, ensuring consistent flag evaluations across services even when LaunchDarkly’s service is temporarily unavailable.
Example scenarios:
- Microservices that must all evaluate the same flag consistently
- Multi-region deployments requiring consistent feature rollouts
- Applications where inconsistent flag evaluations cause data corruption or business logic errors
High Impact of Fallback Values
Use LD Relay when: Fallback values cause significant business impact, such as loss of business, not just degraded UX.
Why: When fallback values cause significant business impact such as payment processing failures, data loss, or compliance violations, LD Relay provides cached flag data to avoid serving fallbacks.
Example scenarios:
- Payment processing systems where fallback values cause transaction failures
- Compliance-critical features where fallback values violate regulations
- Safety-critical systems where degraded functionality is unacceptable
Additional information
For detailed information on LD Relay configuration, scaling, and performance guidelines, refer to the LD Relay chapter.
Overview
This topic explains the LaunchDarkly Relay Proxy, a small service that runs in your own infrastructure. It connects to LaunchDarkly and provides endpoints to service SDK requests and the ability to populate persistent stores.
Use cases
This table lists common use cases for the Relay Proxy:
| Name | Description |
|---|---|
| Restart resiliency for server-side SDKs | LD Relay acts as an external cache to server-side SDKs to provide flag and segments rules |
| Reduce egress and outbound connections | LD Relay can service initialization or streaming requests from SDKs instead of having them connect to LaunchDarkly directly. In event forwarding mode, LD Relay can buffer and compress event payloads from multiple SDK instances |
| Air-gapped environments and snapshots | LD Relay can load flags or segments from an archive exported via the LaunchDarkly API. Available for Enterprise plans only |
| Reduce initialization latency | LD Relay acts as a local cache for initialization requests |
| Support PHP and serverless environments | LD Relay can service short-lived processes via proxy mode and populate persistent stores for daemon-mode clients |
| Syncing big segments to a persistent store | LD Relay can populate big segment stores with membership information for use with server-side SDKs |
Modes of operation
Proxy Mode: SDKs connect to LD Relay to receive flags and updates.
Daemon Mode: LD Relay syncs flags to a persistent store. SDKs retrieve flags as needed directly from the store and do not establish their own streaming connection.
Daemon Mode is used in environments where the SDK cannot establish a long-lived connection to LaunchDarkly. This is common in serverless environments, where the function is terminated after a certain amount of time, and in PHP.
Additional features
Event forwarding: LD Relay buffers, compresses, and forwards events from the SDKs to LaunchDarkly
Big Segment Syncing: LD Relay syncs big segment data to a persistent store
Scaling and Performance
Overview
This topic explains scaling and performance considerations for the Relay Proxy.
The computational requirements for LD Relay are fairly minimal when serving server-side SDKs or when used to populate a persistent store. In this configuration, the biggest scaling bottleneck is network bandwidth and throughput. Provision LD Relay as you would for an HTTPS proxy and tune for at least twice the number of concurrent connections you expect to see.
You should leverage monitoring and alerting to ensure that the LD Relay cluster has the capacity to handle your workload and scale it as needed.
Out of the box, LD Relay is fairly lightweight. At a minimum you can expect:
- 1 long-lived HTTPS SSE connection to LaunchDarkly’s streaming endpoint per configured environment
- 1 long-lived HTTPS SSE connection to the AutoConfiguration endpoint when automatic configuration is enabled
Memory usage increases with the number of configured environments, the payload size of the flags and segments, and the number of connected SDKs. Client-side SDKs have higher computation requirements as the evaluation occurs in LD Relay.
Event forwarding
LD Relay handles the following event forwarding patterns:
- Approximately 1 incoming HTTPS request every 2 seconds per connected SDK. This may vary based on flush interval and event capacity settings in the SDK.
- Approximately 1 outgoing HTTPS request every 2 seconds per configured environment. This may vary based on LD Relay’s configured flush interval and event capacity.
Memory usage increases with event capacity and the number of connected SDKs.
Scaling strategies
Each LD Relay instance maintains connections and manages configuration for the environments you assign to it. The number of environments a single instance can handle depends on your memory, CPU, and network resources. Monitor resource usage to determine when to scale.
When your environment count or size exceeds the limits of a single Relay instance, use one of these scaling approaches:
- Horizontal scaling: Add more Relay instances to share the load across your environments. This approach provides greater resilience and easier dynamic scaling.
- Vertical scaling: Increase the memory and CPU resources allocated to each Relay instance.
- Environment sharding: Distribute environments across multiple Relay instances so each Relay manages a subset of environments rather than all of them.
Environment sharding
Sharding distributes environment configurations across multiple Relay instances. Each Relay manages a subset of environments rather than all of them.
Use sharding in the following situations:
- Your environment count or size exceeds the limits of a single Relay instance.
- You need to separate instances by failure or compliance domains.
- You need to simplify health checks for load balancers and container orchestrators.
Sharding provides the following advantages:
- Reduces memory and CPU load per Relay instance.
- Limits the impact of failures or configuration errors.
- Improves cache efficiency and stability by isolating workloads.
Separation of concerns
LD Relay can perform several functions, such as providing rules to server-side SDKs, evaluating flags for client-side SDKs, forwarding events, and populating persistent stores. You can configure LD Relay to perform one or more of these functions.
You may want to consider using separate LD Relay instances for different functions based on their scaling characteristics and criticality. For example, you might have separate clusters for:
- Server-side SDKs
- Client-side SDKs (Evaluation)
- Event forwarding
- Populating persistent stores for daemon-mode or syncing big segments
This approach provides the following advantages:
- Easier to individually scale components and predict resource utilization
- Separate concerns and increase the reliability of critical components (for example, serving rules to server-side SDKs is more critical than event forwarding)
- Prevent client-side workloads from impacting server-side SDKs
This approach generally increases the total cost of ownership of the deployment because you must deploy and manage multiple instances. It is most applicable to large deployments with a mix of use cases.
Proxy Mode
Overview
This topic explains proxy mode configuration for the Relay Proxy.
This table shows recommended configuration options:
| Configuration | Recommendation | Notes |
|---|---|---|
| Automatic configuration | Enable if not using a persistent store for restart or scale resiliency | Automatic configuration allows you to avoid re-deploys when adding new projects or environments or rotating SDK keys. LD Relay cannot start if the automatic configuration endpoint is down, even when a persistent store is used. If the ability to restart or add LD Relay nodes while LaunchDarkly is unavailable is critical, do not use automatic configuration |
| Event forwarding | Enable | Allow SDKs to forward events to LD Relay to reduce outbound connections and offload compression |
| Metrics | Enable | Enable at least one of the supported metrics integrations such as Prometheus |
| Persistent Stores | Optional | Persistent stores allow you to deploy additional LD Relay instances while LaunchDarkly is unavailable. Alternatively, provision LD Relay to handle at least 1.5x-2x of your production traffic so you do not need to scale LD Relay during short outages. |
| Disconnected Status Time | Optional | Time to wait before marking an environment as disconnected. The default is 1m and is sufficient for instances with reliable networks. If you are running LD Relay in an unreliable network environment, consider increasing this value |
Here is an example:
[Main]
; Time to wait before marking a client as disconnected.
; Impacts the status page
; disconnectedStatusTime=1m
;; For automatic configuration
[AutoConfig]
key=rel-abc-123
;; Event forwarding when setting the event uri to LD Relay in your SDK
[Events]
enable=true
;; Metrics integration for monitoring and alerting
[Prometheus]
enabled=true
Persistent Stores
Persistent stores can be used to improve reboot and restart resilience for LD Relay in the event of a network partition between your application and LaunchDarkly.
This table shows persistent store configuration options:
| Configuration | Recommendation | Notes |
|---|---|---|
| Cache TTL | Infinite with a negative number, localTtl=-1s | An infinite cache TTL means LD Relay maintains its in-memory cache of all flags, mitigating the risk of persistent store downtime. If the persistent store goes down with a non-infinite cache TTL, you may see partial or invalid evaluations due to missing flags or segments. |
| Prefix or Table name | Use the client-side ID | The prefix or table name must be unique per environment. When using automatic configuration, you can use the placeholder $CID, which is replaced with the client-side ID of the environment. Using the same scheme when statically configuring environments allows for consistency if you switch between these options |
| Ignore Connection Errors | ignoreConnectionErrors=true when AutoConfig is disabled | By default, LD Relay shuts down if it cannot reach LaunchDarkly after the initialization timeout is exceeded. To have LD Relay begin serving requests from the persistent store after the timeout, you must set ignore connection errors to true |
| Initialization Timeout | Tune to your needs, default is 10s | This setting controls how long LD Relay attempts to initialize its environments. Until initialization succeeds, LD Relay serves 503 errors to any SDKs that attempt to connect. What happens when the timeout is exceeded depends on the setting of ignore connection errors. When ignore connection errors is false, which is the default, LD Relay shuts down after the timeout is exceeded. Otherwise, LD Relay begins servicing SDK requests using the data in the persistent store. |
Here is an example:
[Main]
;; You must set ignoreConnectionErrors=true in order for LD Relay to start without a connection to LaunchDarkly. You should set this when using a persistent store for flag availability.
ignoreConnectionErrors=true
;; How long LD Relay waits for a connection to LaunchDarkly before serving requests.
;; If ignoreConnectionErrors is false, LD Relay exits with an error if it cannot connect within the timeout.
;; Default is 10 seconds.
initTimeout=10s
;; NOTE: If you are using Automatic Configuration, LD Relay cannot start without a connection to LaunchDarkly.
[AutoConfig]
key=rel-abc-123
; When using Automatic Configuration with a persistent store, you must use the $CID placeholder.
; Use a separate DynamoDB table per environment
envDatastoreTableName="ld-$CID"
; Or use a single table with unique prefix per environment
;envDatastorePrefix="ld-$CID"
;envDatastoreTableName="ld"
; When not using automatic configuration, set the prefix and/or table name for each environment.
; see https://github.com/launchdarkly/ld-relay/blob/v8/docs/configuration.md#file-section-environment-name
[Redis]
enabled=true
url=redis://host:port
; Always use an infinite cache TTL for persistent stores
localTtl=-1s
[DynamoDB]
enabled=true
; Always use an infinite cache TTL for persistent stores
localTtl=-1s
SDK Configuration
Configure the endpoints you want to handle using LD Relay. Events must be enabled if you set the events URI. To learn more about configuring endpoints, read Proxy mode.
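As an illustrative sketch, the endpoint overrides can be built as follows. The URI parameter names follow the LaunchDarkly Python server SDK (`base_uri`, `stream_uri`, `events_uri`); check your SDK's documentation for its equivalents, and treat the relay URL as a placeholder for your deployment:

```python
# Sketch: pointing a server-side SDK at LD Relay in proxy mode.
# Parameter names follow the Python server SDK; other SDKs use
# different names. RELAY_URL is a placeholder.
RELAY_URL = "https://ld-relay.internal.example.com"

def relay_service_endpoints(relay_url, forward_events=True):
    """Build the endpoint overrides for an SDK running behind LD Relay."""
    endpoints = {
        "base_uri": relay_url,    # polling endpoints
        "stream_uri": relay_url,  # SSE streaming endpoints
    }
    if forward_events:
        # Only set the events URI if [Events] enable=true in LD Relay
        endpoints["events_uri"] = relay_url
    return endpoints

endpoints = relay_service_endpoints(RELAY_URL)
```

Pass these values to your SDK's configuration object; omit the events URI if you have not enabled event forwarding in LD Relay.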
Infrastructure
- Minimum 3 instances in at least two availability zones per region.
  - On-prem: Consider failure domains in your deployment, such as separate racks and power
  - Cloud: Follow the guidelines of your hosting provider to qualify for SLAs
- Highly available load balancer with support for SSE connections

You should target 99.99 percent availability, which matches LaunchDarkly’s SLA for flag delivery.
Scaling and performance
In addition to the base scaling and performance guidelines, Proxy Mode has the following considerations:
Server-side SDKs
1 incoming long-lived HTTPS SSE connection per connected server-side SDK instance
Client-side SDKs
- 1 outgoing long-lived HTTPS SSE connection per connected client-side SDK instance with streaming enabled
- 1 incoming HTTPS request per connected client-side SDK every time a flag or segment is updated
LD Relay does not scale well with streaming client-side connections, so avoid routing them through it. It can handle polling requests to the base URI without issue. For browser SDKs, you can use LD Relay for initialization and the SaaS endpoints for streaming.
Do not point the stream URI of client-side mobile SDKs at LD Relay without careful consideration.
Daemon Mode
Overview
This topic explains daemon mode, a workaround for environments where normal operation is not possible. Avoid using it unless strictly necessary. Daemon Mode is not a solution for increasing flag availability.
- Using Daemon Mode
- Using Redis as a persistent feature store
- Using DynamoDB as a persistent feature store
Guidelines
Set useLDD=true in your SDK configuration
Daemon mode requires that you both enable daemon mode and configure the persistent store. Enabling daemon mode tells the SDK not to establish a streaming or polling connection for flags and instead rely on the persistent store. For SDK-specific configuration examples, see Using daemon mode.
Choose a unique prefix
It is critical that each environment has a unique prefix to avoid corrupting the data. When using automatic configuration, you can achieve this with the $CID placeholder. See the example below.
Restart LD Relay when persistent store is cleared
If the persistent store is missing information, either because updates were lost or the store was cleared, restart LD Relay to repopulate the data. Consider automatically restarting LD Relay, or running an LD Relay instance in one-shot mode, every time the persistent store starts.
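As an illustrative sketch of one-shot mode, the exitAlways option tells LD Relay to populate the store and then exit, which suits a job that runs whenever the store restarts:

```
[Main]
; One-shot mode: connect, write the latest flag data to the
; persistent store, then exit instead of staying running
exitAlways=true
```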
Choosing a cache TTL for your SDK
The Cache TTL controls how long the SDK will cache rules for flags and segments in memory. Once the cache expires for a particular flag or segment, the SDK fetches it from the persistent store at evaluation time.
This table shows cache TTL options and their trade-offs:
| Option | Resiliency | Evaluation Latency | Update Propagation Latency |
|---|---|---|---|
| Higher TTL | Higher | Lower | Higher |
| Lower TTL | Lower | Higher† | Lower |
| No Cache with TTL=0 | Lowest, persistent store must always be available | Highest, all evaluations require one or more network calls | Lowest, updates are seen as soon as they are written to the store |
| Infinite Cache with TTL=-1†† | Highest | Lowest with flags in memory | Updates are never seen by the SDK unless you configure stale-while-revalidate††† |
† In select SDKs, such as the Java server-side SDK, you can configure stale-while-revalidate semantics so that flags are always served from the in-memory cache and refreshed from the persistent store asynchronously in the background.
†† Only supported in some SDKs.
††† Unless you have configured stale-while-revalidate semantics in a supported SDK.
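The TTL semantics above can be sketched as follows. This is illustrative only, not LaunchDarkly SDK code: a TTL of 0 disables caching, a positive TTL expires entries, and a negative TTL caches forever.

```python
import time

class FlagCache:
    """Minimal sketch of SDK-side cache TTL semantics.

    ttl > 0: entries expire after ttl seconds.
    ttl == 0: never cache; every read goes to the persistent store.
    ttl < 0: cache forever (infinite TTL).
    """

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._entries = {}  # key -> (value, stored_at)

    def put(self, key, value):
        self._entries[key] = (value, self.clock())

    def get(self, key):
        """Return the cached value, or None on a miss (forcing a store read)."""
        if self.ttl == 0 or key not in self._entries:
            return None
        value, stored_at = self._entries[key]
        if self.ttl > 0 and self.clock() - stored_at > self.ttl:
            del self._entries[key]  # expired: next read hits the store
            return None
        return value
```

With an infinite TTL, a flag stays served from memory even if the store becomes unavailable, which is why updates are never seen without stale-while-revalidate semantics.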
LD Relay Configuration
Here is an example:
; If using AutoConfig
[AutoConfig]
key=rel-abc-123
; Use a separate DynamoDB table per environment
envDatastoreTableName="ld-$CID"
; Or use a single table with unique prefix per environment
;envDatastorePrefix="ld-$CID"
;envDatastoreTableName="ld"
[Events]
; Enable event forwarding if using LD Relay as the event uri in your SDK
enable=true
[Prometheus]
enabled=true
[DynamoDB]
; Change to true if using DynamoDB
enabled=false
localTtl=-1s
[Redis]
; Change to true if using Redis
enabled=false
url=redis://host:port
localTtl=-1s
;tls=true
;password=
SDK Configuration
In Daemon Mode, you configure both daemon mode and the persistent store in the SDK. To learn more about configuring daemon mode and the persistent store, read Daemon mode.
Scaling and performance
In addition to the base scaling and performance guidelines, Daemon Mode has the following additional characteristics:
In Daemon Mode, you only need one instance of LD Relay to keep the persistent store up to date. Multiple instances do not help scale with the number of connected clients. You should take steps to ensure at least one instance is running.
The persistent store has a read-heavy workload and contains the flag and segment rules for your configured environments. The amount of data is typically small and matches the information returned by the streaming and polling endpoints. Here is how to check the payload size:
curl -sLH "Authorization: $LD_SDK_KEY" https://sdk.launchdarkly.com/sdk/latest-all | wc -c
If you are using big segments in this environment, all of the user keys for every big segment in configured environments sync to the store. This can cause an increased write workload to the persistent store.
Reverse Proxies
Overview
This topic explains reverse proxy configuration settings for the Relay Proxy.
HTTP proxies such as corporate proxies and WAFs, and reverse proxies in front of Relay such as nginx, HAProxy, and ALB are common in LD Relay deployments. This table lists settings to configure:
| Setting | Configuration | Notes |
|---|---|---|
| Response buffering for SSE endpoints | Disable | It is common for reverse proxies to buffer the entire response from the origin server before sending it to the client. Since an SSE stream is effectively an HTTP response that never ends, buffering prevents the SDK from seeing events sent over the stream until the buffer fills or the request closes due to a timeout. Relay sends a special header that disables response buffering in nginx automatically: X-Accel-Buffering: no |
| Forced Gzip compression for SSE endpoints | Disable if the proxy is not SSE aware | Gzip compression buffers responses, which delays SSE events just as response buffering does |
| Response Timeout or Max Connection Time for SSE Endpoints | Minimum 10 minutes | Set at least 10 minutes to avoid frequent reconnects, but avoid excessively long timeouts: the SDK client reconnects automatically, and very long timeouts waste load balancer resources on clients that have silently disconnected |
| Upstream or Proxy Read Timeout | 5 minutes | The timeout between successful read requests. In nginx, this setting is called proxy_read_timeout |
| CORS Headers | Restrict to only your domains | Send CORS headers restricted to only your domains when using LD Relay with browser SDK endpoints |
| Status endpoint | Restrict access | Restrict access to this endpoint from public access |
| Metrics or Prometheus port | Restrict access | Restrict access to this port |
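As an illustrative nginx sketch of several of these settings, where the upstream name and timeout values are placeholders to adapt to your deployment:

```
# Illustrative nginx settings for proxying LD Relay SSE endpoints
location / {
    proxy_pass http://ld_relay_upstream;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;           # do not buffer SSE responses
    proxy_read_timeout 300s;       # 5 minutes between successful reads
}
```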
AutoConfig
Overview
This topic explains AutoConfig for the Relay Proxy, which allows you to configure Relay Proxy instances automatically.
Custom Roles
These custom role policies allow members to create and modify LD Relay AutoConfigs.
All instances
For an LD Relay Admin type role with access to all LD Relay instances:
Here is an example:
[
{
"effect": "allow",
"resources": ["relay-proxy-config/*"],
"actions": ["*"]
}
]
Specific instance by ID
Here is an example:
[
{
"effect": "allow",
"resources": ["relay-proxy-config/60be765280f9560e5cac9d4b"],
"actions": ["*"]
}
]
You can find your auto-configuration ID in the URL when editing its settings, or by using the API.
Using AutoConfig
Selecting Environments
You can select the environments to include in a Relay Proxy configuration.
Examples
Production environment in all projects
Here is an example:
[
{
"actions": ["*"],
"effect": "allow",
"resources": ["proj/*:env/production"]
}
]
Production environment in foo project
[
{
"actions": ["*"],
"effect": "allow",
"resources": ["proj/foo:env/production"]
}
]
All environments in projects starting with foo-
[
{
"actions": ["*"],
"effect": "allow",
"resources": ["proj/foo-*:env/*"]
}
]
Production in projects tagged “bar”
[
{
"actions": ["*"],
"effect": "allow",
"resources": ["proj/*;bar:env/production"]
}
]
All non-production environments in any project not tagged “federal”
Deny takes precedence within a single policy.
[
{
"actions": ["*"],
"effect": "allow",
"resources": ["proj/*:env/*"]
},
{
"actions": ["*"],
"effect": "deny",
"resources": ["proj/*:env/production", "proj/*;federal:env/*"]
}
]
Experimentation
This topic explains how to build and sustain an experimentation practice using LaunchDarkly Experimentation. A strong experimentation practice turns product decisions into measurable outcomes, reduces risk, and accelerates learning across your organization.
Why invest in an experimentation practice
Without a practice: Teams ship features based on intuition. Some changes underperform, others cause regressions, and there is no systematic way to know which is which until revenue or engagement data arrives weeks later.
With a practice: Every significant change has a measurable outcome before full rollout. Teams catch regressions early, double down on what works, and build a compounding knowledge base that makes each decision faster and more confident than the last.
Without a deliberate practice, experimentation efforts stall after an initial pilot. The guidance in this section addresses the organizational, strategic, and operational foundations that make experimentation succeed.
Your turn: List your top business goals and your current experimentation baseline. This snapshot helps you measure progress as you build your practice.
| Business goal | How you measure it today | Experiments run in the last quarter |
|---|---|---|
Focus areas
This section covers four focus areas. We recommend working through them in order:
- Building a culture of experimentation: Identify stakeholders, build awareness, gain leadership support, and scale beyond a single team.
- Problem-solution mapping: Generate experiment ideas tied to business goals and convert them into testable hypotheses.
- Test planning: Write complete test plans with hypotheses, SMART goals, treatment designs, metrics, and risk assessments.
- Experimentation process design: Define decision ownership, build a RACI, integrate with development workflows, and communicate results.
Prerequisites
Before running experiments in LaunchDarkly, your SDK implementation must support Experimentation. To learn more, read Preflight Checklist.
Building a culture of experimentation
This topic explains how to establish experimentation as an organizational practice, not a one-time activity owned by a single team.
Without culture-building: A single team runs a few experiments, but no one else adopts the practice. The pilot stalls when that team shifts priorities, and the organization loses the investment it made in tooling and training.
With culture-building: Experimentation becomes a shared capability. Multiple teams propose and run tests independently. Knowledge compounds across the organization, and the practice sustains itself even as teams and priorities change.
Identify stakeholders and roles
Every responsibility needs an owner. In smaller organizations, one person may fill multiple roles.
The following table describes common experimentation roles:
| Role | Responsibility |
|---|---|
| Executive sponsor | Champions experimentation at the leadership level, connects outcomes to business KPIs, and removes organizational blockers. |
| Program owner | Owns the experimentation roadmap, coordinates across teams, and tracks the overall health of the practice. |
| Test designer | Writes test plans, defines hypotheses, and selects metrics for individual experiments. |
| Developer | Implements feature flag variations, instruments metrics, and ensures experiments deploy correctly. |
| Data analyst | Monitors experiment results, validates statistical significance, and provides interpretation for stakeholders. |
| Product manager | Prioritizes experiment ideas against the product backlog and decides how to act on results. |
Your turn: Map these roles to people in your organization. If a role has no owner, that is a gap to address before launching your practice.
| Role | Person or team | Notes |
|---|---|---|
| Executive sponsor | ||
| Program owner | ||
| Test designer | ||
| Developer | ||
| Data analyst | ||
| Product manager |
Build awareness across the organization
Cover the following topics when introducing experimentation to new teams:
- What experimentation is and how it differs from feature flagging
- How LaunchDarkly Experimentation works at a high level
- What kinds of questions experimentation answers well
- How experiment results inform product decisions
- Real examples of experiments that produced meaningful outcomes
Tailor examples to your audience: technical metrics for engineering, conversion rates for product, revenue impact for leadership.
Gain leadership support
Leadership support determines whether experimentation survives past a pilot phase. When leaders actively sponsor the practice, teams treat it as core work.
To gain and maintain leadership support:
- Connect experiment outcomes to existing business KPIs leadership already tracks.
- Present early wins from one or two high-visibility experiments.
- Quantify the cost of not experimenting by highlighting past releases that underperformed.
- Report quarterly on experiments run, key findings, and business impact.
Your turn: Identify the business KPIs your leadership tracks and map each one to an experiment opportunity. This mapping gives you the language to pitch experimentation in terms leadership already cares about.
| Business KPI | Potential experiment opportunity |
|---|---|
Expand beyond the initial team
Start with a motivated team that has the technical foundation to run experiments. Use that team as a proof of concept for the rest of the organization.
Validate with an A/A test
Before the pilot team runs its first real experiment, run an A/A test. An A/A test serves both groups the same experience and validates your experimentation stack end to end: SDK integration, flag evaluation, metric event delivery, and results analysis.
Run an A/A test when:
- A team runs its first experiment on a new application or service
- You deploy a new SDK or update metric instrumentation
- You onboard a new team to Experimentation
- You migrate to a new context schema or change identity resolution
Always use frequentist statistics for A/A tests. Bayesian priors can nudge results toward a “winning” variation even when both groups receive the same experience. Frequentist analysis tests a clear null hypothesis and reports a p-value you interpret directly.
A successful A/A test shows no statistically significant difference between the two groups. If you see a significant result, investigate before running real experiments. Common causes: duplicate metric events, inconsistent context keys across SDKs, incorrect flag evaluation logic, or metric events firing before SDK initialization.
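As an illustrative sketch of the frequentist check (not LaunchDarkly's analysis engine), a two-proportion z-test over A/A results looks like this; the conversion counts are made up:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    Returns (z, p_value). For a healthy A/A test, expect a p-value
    well above your significance threshold (commonly 0.05).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative (made-up) A/A counts: conversions and sample size per group
z, p = two_proportion_z_test(conv_a=512, n_a=10000, conv_b=498, n_b=10000)
```

A p-value below your threshold on an A/A test is a signal to investigate instrumentation, not evidence of a real difference.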
After the pilot team has run a successful A/A test and several real experiments, expand:
- Document the pilot team’s process, including templates for test plans and intake forms.
- Run cross-team workshops on problem-solution mapping and test planning.
- Pair experienced experimenters with new teams for their first experiments.
- Create a shared library of completed experiments.
- Establish a regular review cadence for presenting results and sharing lessons.
Common challenges
The following table describes common challenges and strategies:
| Challenge | Strategy |
|---|---|
| Teams do not have time to experiment. | Start small. Run experiments on changes already in the backlog rather than creating net-new work. |
| Leadership does not see the value. | Present results in business terms. Show the revenue or efficiency impact of experiment-informed decisions. |
| Experiments produce inconclusive results. | Review test plans for clear hypotheses and adequate sample sizes before launch. Inconclusive results still provide learning. |
| Only one team runs experiments. | Create visibility by sharing results in company-wide channels. Run workshops to lower the barrier for new teams. |
| Teams skip the planning process. | Make test plan review a required step before development begins. Provide templates to reduce friction. |
Your turn: Review the challenges above and check the ones your organization faces today. For each one you check, write one concrete next step you plan to take.
| Challenge applies to us | Challenge | Our next step |
|---|---|---|
| □ | Teams do not have time to experiment. | |
| □ | Leadership does not see the value. | |
| □ | Experiments produce inconclusive results. | |
| □ | Only one team runs experiments. | |
| □ | Teams skip the planning process. | |
Problem-solution mapping
This topic explains how to run a problem-solution mapping workshop to generate experiment ideas tied to your business goals. Problem-solution mapping produces a prioritized library of validated problems and testable hypotheses that feed directly into your experimentation backlog.
Without problem-solution mapping: Teams brainstorm experiment ideas ad hoc. Ideas lack clear ties to business goals, making it hard to prioritize and harder to demonstrate impact when results arrive.
With problem-solution mapping: Every experiment traces back to a ranked business goal. Prioritization is straightforward, results map directly to KPIs leadership tracks, and the team always has a backlog of high-value ideas ready to test.
Align to goals and KPIs
Problem-solution mapping is a structured brainstorming exercise for 5 to 15 participants from diverse roles. Participants identify problems that block business goals, then convert those problems into hypotheses. A successful session produces a categorized problem list, validation notes, and draft hypotheses ready for test planning.
Start with your organization’s goals. Ask leadership for the current quarter or year’s top priorities, ranked by importance. Use goals that are specific and measurable. “Increase revenue” is too broad. “Increase mobile checkout conversion rate” gives participants a clear target.
Focus on no more than three goals per session to maintain quality.
Your turn: Write your organization’s top goals ranked by priority, along with the KPIs that measure them. These goals become the anchor for your brainstorming session.
| Rank | Business goal | Primary KPI | Current baseline |
|---|---|---|---|
| 1 | |||
| 2 | |||
| 3 |
Define problem categories
Create 3 to 5 categories that relate to your goals and work across multiple goals when possible. Categories focus brainstorming and help you organize the problem library afterward.
Examples of effective categories:
- User experience, checkout process, inventory management
- Acquisition, activation, retention
- People, process, technology
Your turn: Draft your 3 to 5 problem categories. Choose categories that relate to your goals above and give participants a clear focus for each brainstorming round.
| Category | Why this category matters for our goals |
|---|---|
Generate problems
Run the problem generation exercise in timed rounds. Each round focuses on one category for one goal.
Follow these steps to run a round:
- Present the goal and the category for the current round.
- Give participants 5 to 10 minutes of silent, individual brainstorming. Each person writes one problem per note.
- Collect and group similar problems as participants write them.
- After the round, review notable submissions and invite brief clarification from authors.
- Repeat for each category, then move to the next goal.
Keep these principles in mind during the exercise:
- Lead with empathy. Do not attribute contributions by name. Encourage open sharing.
- Limit leadership participation. Invite leaders for a brief opening, then ask them to leave. Their presence can stifle candid discussion.
- Enforce silent working. Individual brainstorming produces more diverse ideas than group discussion.
- Frame problems positively. Treat every submission as valuable input, not a complaint.
Build a problem library
A well-organized problem library gives your team a persistent backlog of experiment ideas, each tied to evidence and a business goal. After the session, organize all problems into a shared tracking document.
Your turn: Use the following template to start your problem library. Fill in at least three problems from your brainstorming session or from known pain points:
| Problem statement | Category | Goal | Validation notes |
|---|---|---|---|
After the session, deduplicate entries and prioritize problems with strong data support.
Convert problems to hypotheses
For each prioritized problem, draft a hypothesis that describes a proposed change and its expected impact.
Use SMART criteria to evaluate each hypothesis before promoting it to test planning:
- Specific: The hypothesis names a concrete change and a measurable outcome.
- Measurable: You have access to the data needed to evaluate the outcome.
- Achievable: The proposed change is technically and organizationally feasible.
- Relevant: The hypothesis ties back to a prioritized business goal.
- Time-bound: The experiment has a realistic timeline for reaching statistical significance.
Examples of well-formed hypotheses:
Problem: Mobile app users abandon the checkout flow at the shipping step. Hypothesis: Reducing the number of required shipping fields from 8 to 5 increases mobile checkout completion rate by 10% within 30 days.
Problem: New users do not complete the onboarding tutorial. Hypothesis: Adding a progress indicator to the onboarding flow increases tutorial completion rate by 15% within 2 weeks.
Your turn: Pick one problem from your library and convert it into a testable hypothesis using the template below.
| Component | Your hypothesis |
|---|---|
| Problem statement | |
| Proposed change | |
| Target audience | |
| Primary metric | |
| Expected impact | |
| Time period | |
| Rationale | |
Full hypothesis sentence: If we [proposed change] for [target audience], then [primary metric] will [increase/decrease] by [expected impact] within [time period], because [rationale].
To learn more about writing complete test plans for these hypotheses, read Test planning.
Test planning
This topic explains how to write a complete test plan for an experiment. Writing the plan before development begins prevents wasted effort and produces higher quality experiments.
Without test planning: Experiments launch with vague success criteria, missing metrics, or unclear rollback plans. Stakeholders disagree on what “success” means, and development effort is wasted.
With test planning: Every experiment launches with a shared definition of success, validated instrumentation, and a documented decision framework.
Elements of a test plan
A complete test plan contains the following elements:
- A validated problem statement
- A testable hypothesis
- SMART goals
- Treatment design for control and variation
- Primary, secondary, and guardrail metrics
- A risk assessment with early stopping criteria
- A review checklist
Define the problem and hypothesis
Start with the problem your experiment addresses, then write a hypothesis using the following format:
If we [make this specific change] for [this audience], then [this metric] will [increase/decrease] by [target amount] within [time period], because [rationale based on evidence].
Your turn: Draft the hypothesis for your next experiment using the template below. Fill in each component, then combine them into the full sentence.
| Component | Your value |
|---|---|
| Specific change | |
| Target audience | |
| Primary metric | |
| Target amount | |
| Time period | |
| Rationale | |
Your hypothesis: If we ______ for ______, then ______ will ______ by ______ within ______, because ______.
Set SMART goals
The following table defines each SMART component:
| Component | Definition | Example |
|---|---|---|
| Specific | Name the exact metric and direction of change. | Increase checkout completion rate. |
| Measurable | Confirm you have instrumentation to track the metric. | Checkout completion events fire on the confirmation page. |
| Achievable | Validate that the expected change is realistic based on prior data. | Similar changes in the industry produced 5 to 15% lifts. |
| Relevant | Connect the metric to a business goal. | Checkout completion directly impacts quarterly revenue targets. |
| Time-bound | Set a duration for the experiment based on traffic and expected effect size. | Run for 4 weeks to reach 95% statistical power. |
Your turn: Fill in the SMART components for the hypothesis you drafted above.
| Component | Your experiment |
|---|---|
| Specific | |
| Measurable | |
| Achievable | |
| Relevant | |
| Time-bound | |
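To sanity-check the time-bound component, you can estimate the required sample size per variation. This is a standard textbook approximation (alpha = 0.05 two-sided, 80% power), not LaunchDarkly's sample size logic, and the baseline and lift values are illustrative:

```python
from math import ceil

# Standard normal critical values for alpha=0.05 (two-sided) and 80% power
Z_ALPHA = 1.96
Z_BETA = 0.84

def sample_size_per_variation(baseline, relative_lift):
    """Approximate per-group sample size for a two-proportion test."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((Z_ALPHA + Z_BETA) ** 2 * variance / (p1 - p2) ** 2)

# Example: 5% baseline conversion, detecting a 10% relative lift
n = sample_size_per_variation(baseline=0.05, relative_lift=0.10)
```

Divide the result by your eligible weekly traffic per variation to estimate the experiment duration for your time-bound goal.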
Design treatments
Follow these guidelines when designing treatments:
- Test one change at a time. Multiple simultaneous changes make attribution impossible.
- Match the control to the current experience. The control group must see exactly what users see today.
- Document variations clearly. Include screenshots, copy, or specifications for developers.
- Ensure both treatments are functional. Do not ship broken or partial variations.
Select metrics
Choose three types of metrics for every experiment:
- Primary metric: The single metric your hypothesis predicts will change. This determines whether the experiment succeeded.
- Secondary metrics: Related metrics that reveal the full impact. For example, average order value alongside checkout completion.
- Guardrail metrics: Metrics that should not degrade. For example, page load time, error rate, or support ticket volume.
Confirm that all metrics are instrumented and reporting correct data before launch.
Validate instrumentation with an A/A test
If this is your first experiment on a new application, service, or SDK integration, run an A/A test first. An A/A test serves both groups the identical experience and validates that your metrics pipeline, flag evaluation, and user assignment work correctly end to end. Always analyze A/A tests using frequentist statistics, not Bayesian. Bayesian priors can report a “winning” variation even when both groups receive the same experience.
A successful A/A test shows no statistically significant difference between the two groups. If you see a significant result, investigate before proceeding. Common causes include duplicate metric events, inconsistent context keys, metric events that fire before the SDK initializes, or incorrect flag evaluation logic. To learn more about when to run A/A tests, read the A/A testing guidance in Building a culture of experimentation.
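The "no statistically significant difference" check for a conversion metric can be approximated with a two-proportion z-test. The sketch below is a planning aid, not a LaunchDarkly feature; the function name is an assumption:

```javascript
// Two-proportion z-test for an A/A sanity check (illustrative sketch,
// not a LaunchDarkly API). Returns the z statistic; |z| > 1.96 indicates
// a significant difference at the 95% level, which in an A/A test
// signals an instrumentation problem rather than a real effect.
function aaTestZScore(conversionsA, totalA, conversionsB, totalB) {
  const pA = conversionsA / totalA;
  const pB = conversionsB / totalB;
  // Pooled conversion rate under the null hypothesis of no difference
  const pPool = (conversionsA + conversionsB) / (totalA + totalB);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / totalA + 1 / totalB));
  return (pA - pB) / se;
}

// Example: nearly identical conversion rates produce a small |z|
const z = aaTestZScore(510, 10000, 498, 10000);
console.log(Math.abs(z) < 1.96 ? 'A/A looks healthy' : 'Investigate instrumentation');
```

If |z| exceeds the threshold in an A/A test, investigate the common causes listed above before running a real experiment.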
Your turn: Identify the metrics for your experiment. Defining these now ensures your instrumentation is complete before development starts.
| Metric type | Metric name | How it is measured | Current baseline |
|---|---|---|---|
| Primary | |||
| Secondary | |||
| Secondary | |||
| Guardrail | |||
| Guardrail |
Assess risks
Identify risks before launch. Common technical risks include performance regressions from the flag implementation, incomplete metric instrumentation, and insufficient sample size. Common business risks include a negative user experience, conflicts with active experiments, and premature decisions based on early results.
For each risk, define a mitigation strategy and early stopping criteria. Document your rollback plan, which typically means turning off the flag variation.
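Insufficient sample size is the easiest of these risks to catch up front. The rough sketch below uses the standard two-proportion formula with fixed z-values for 95% confidence and 80% power; it is a planning approximation under those stated assumptions, not a LaunchDarkly feature:

```javascript
// Rough per-variation sample size for a two-proportion test
// (normal approximation, 95% confidence, 80% power -- assumed settings).
// baseline: current conversion rate, e.g. 0.05
// lift: relative lift you want to detect, e.g. 0.10 for +10%
function sampleSizePerVariation(baseline, lift) {
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const p1 = baseline;
  const p2 = baseline * (1 + lift);
  const pBar = (p1 + p2) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil((numerator ** 2) / ((p2 - p1) ** 2));
}

// Detecting a 10% relative lift on a 5% baseline needs tens of
// thousands of users per variation
console.log(sampleSizePerVariation(0.05, 0.10));
```

Dividing the result by your daily eligible traffic gives a first estimate of experiment duration.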
Review checklist
Use this checklist on your actual test plan before starting development:
- The problem is validated with data, not assumptions
- The hypothesis follows the standard format and includes a rationale
- SMART goals are defined with specific targets and a time period
- The control matches the current production experience
- Each variation changes only one variable from the control
- Primary, secondary, and guardrail metrics are defined
- All metrics are instrumented and firing correctly
- Sample size and experiment duration are estimated
- Technical and business risks are documented
- Early stopping criteria and rollback plans are in place
- If this is the first experiment on this application, an A/A test has passed
- The test plan has been reviewed by at least one other team member
Your turn: Review your test plan draft against this checklist. For any item you marked “no,” note the action needed to resolve it before development begins.
| Checklist item | Status | Action needed |
|---|---|---|
| Problem validated with data | | |
| Hypothesis follows standard format | | |
| SMART goals defined | | |
| Control matches production | | |
| Single variable per variation | | |
| Metrics defined | | |
| Metrics instrumented | | |
| Sample size estimated | | |
| Risks documented | | |
| Stopping criteria in place | | |
| A/A test passed (if first experiment) | | |
| Peer review completed | | |
To learn more about building the process around experiment intake and review, read Experimentation process design.
Experimentation process design
This topic explains how to design the operational process that supports experimentation at your organization.
Without process design: No one is sure who approves a test, when it should end, or how to act on results. Decisions stall and results sit in a dashboard no one checks.
With process design: Every experiment follows a defined path from idea to outcome. Decision ownership is explicit, handoffs are smooth, and results reach the people who act on them.
Map decision points
Before building a process, identify who owns each decision in the experiment lifecycle.
Your turn: Write the name or role of the person who owns each decision today. If there is no clear owner, leave the cell blank. Blank cells represent gaps in your current process.
| Decision | Owner |
|---|---|
| Who reviews and approves new experiment ideas? | |
| Who decides when an experiment starts and when it ends? | |
| Who analyzes the results and determines whether the experiment succeeded? | |
| Who decides what action to take based on the results? | |
| Who needs to be notified when an experiment launches or concludes? | |
Use the answers to build a RACI for experimentation.
Build a RACI for experimentation
A RACI assigns each activity an owner at one of four levels: R (Responsible, performs the work), A (Accountable, owns the outcome), C (Consulted, provides input), I (Informed, notified after). Only one person is Accountable per activity.
The following table provides a starting template:
| Activity | Program owner | Product manager | Developer | Data analyst | Executive sponsor |
|---|---|---|---|---|---|
| Submit experiment idea | C | R | C | C | I |
| Write test plan | C | R | C | C | I |
| Review test plan | A | R | C | R | I |
| Implement experiment | I | C | R | C | I |
| Launch experiment | A | R | C | C | I |
| Monitor experiment | I | C | I | R | I |
| Analyze results | I | C | I | R | I |
| Decide next action | C | A | I | C | I |
| Communicate results | R | C | I | C | I |
Your turn: Create your own RACI using the blank template below. Replace the column headers with the actual roles or names from your organization. Use R, A, C, and I to fill in each cell.
| Activity | ______ | ______ | ______ | ______ | ______ |
|---|---|---|---|---|---|
| Submit experiment idea | |||||
| Write test plan | |||||
| Review test plan | |||||
| Implement experiment | |||||
| Launch experiment | |||||
| Monitor experiment | |||||
| Analyze results | |||||
| Decide next action | |||||
| Communicate results |
Integrate with development workflows
Experimentation works best when it fits into existing development processes rather than running as a separate track.
Backlog and planning. Add experiment work items to your existing backlog with clear acceptance criteria, estimated effort, and a linked test plan. Allocate capacity for experiment work during sprint or iteration planning.
Development and deployment. Implement experiments using feature flags. Each variation in the test plan maps to a flag variation in LaunchDarkly. Deploy all variations to production before launching. Validate that metric events fire correctly and that each variation renders as designed. For first experiments on a new application, run an A/A test as a required deployment step. To learn more, read the A/A testing guidance in Building a culture of experimentation.
Review and retrospective. After each experiment concludes, review results with the team and add findings to your shared experiment library. Include experimentation in your regular retrospectives.
Create an experiment intake process
The following five-step flow provides a starting point:
- Idea submission: Anyone submits an experiment idea using a standard template that captures the problem, draft hypothesis, target audience, and expected impact.
- Triage: The program owner evaluates submissions on a regular cadence for feasibility, goal alignment, and conflicts with active experiments.
- Test plan development: Approved ideas move to test plan writing using the guidance in the test planning section.
- Review and approval: The completed test plan goes through a checklist review, then moves to the development backlog.
- Execution and closeout: The team implements, launches, monitors, and analyzes the experiment. The program owner records the outcome.
Your turn: Sketch your intake flow by filling in who owns each step and what artifact they produce. Adapt the steps to match your organization’s workflow.
| Step | Owner | Artifact produced | Cadence |
|---|---|---|---|
| Idea submission | |||
| Triage | |||
| Test plan development | |||
| Review and approval | |||
| Execution and closeout |
Tell data stories
A well-told data story turns raw numbers into organizational momentum. Structure data stories around these four elements:
- The problem: The original problem and the business goal it connects to.
- The experiment: What you tested, who was included, and how long the experiment ran.
- The results: Primary metric outcome first, then secondary and guardrail metrics.
- The recommendation: What action the results support and what the team does next.
Combine quantitative results with qualitative context like user research or support feedback.
Your turn: Use this template to draft the data story for your first experiment. Fill in each section with one or two sentences.
| Section | Your data story |
|---|---|
| The problem | |
| The experiment | |
| The results | |
| The recommendation | |
To learn more about generating experiment ideas, read Problem-solution mapping.
Coordinating releases
Coordinate releases across applications and teams using LaunchDarkly feature flags. These strategies enable autonomous teams to manage dependent releases with minimal overhead.
Overview
When multiple applications or teams need to coordinate releases, LaunchDarkly provides several strategies to manage dependencies without requiring synchronous communication or manual coordination.
Each coordination strategy describes:
- What people need to do: How teams work in LaunchDarkly (create flags, configure approvals, grant permissions)
- What technical setup is required: How developers implement the coordination (evaluate flags, pass metadata, handle dependencies)
Coordination strategies
Prerequisite flags
Use prerequisite flags when applications share a LaunchDarkly project and need to coordinate releases.
Best for:
- Tightly coupled applications and services
- Teams that share a project
- Large initiatives with multiple dependent releases
- Frontend and backend components that must be released together
To learn more, read Prerequisite flags.
Request metadata
Use request metadata when services communicate via APIs and need to manage breaking changes or versioning.
Best for:
- API services with external consumers
- Microservices communicating via HTTP or RPC
- Applications that need to maintain backward compatibility
- Services in separate projects
To learn more, read Request metadata.
Delegated authority
Use delegated authority when cross-functional teams need to manage flags across multiple projects.
Best for:
- Cross-functional collaboration with customer success, support, or sales teams
- Database administrators managing schema migrations
- Security engineers managing security features
- Teams that need to control flags in projects they do not own
To learn more, read Delegated authority.
Choosing a strategy
Use this guide to select the appropriate coordination strategy:
| Scenario | Strategy | Why |
|---|---|---|
| Frontend and backend for the same product | Prerequisite flags | Same project, tightly coupled |
| API with third-party consumers | Request metadata | Different projects, version management |
| Multiple microservices, same product | Prerequisite flags or Request metadata | Depends on coupling and API design |
| Customer success managing early access | Delegated authority | Cross-functional collaboration |
| Database team managing migrations | Delegated authority | Cross-project coordination |
Combining strategies
Strategies can be combined for complex scenarios:
Delegated authority with prerequisites:
Grant customer success teams permission to manage prerequisite flags that control early access programs across multiple applications.
Request metadata with delegated authority:
Allow API consumers to request custom targeting rules for their specific API versions or client configurations.
Project architecture considerations
Coordination strategy selection affects project architecture decisions:
- Applications using prerequisite flags should share a project
- Applications using request metadata or delegated authority can be in separate projects
- Applications in the same project have access to all coordination strategies
To learn more about project architecture, read Projects and environments.
Prerequisite flags
Coordinate releases for tightly coupled applications using prerequisite flags. Prerequisites enable teams to maintain autonomy over their releases while managing technical and business dependencies.
Overview
Prerequisite flags create dependencies between feature flags within a single project. When a flag has a prerequisite, the prerequisite must evaluate to a specific variation before the dependent flag can be enabled.
When to use prerequisite flags
Use prerequisite flags when:
- Applications share the same LaunchDarkly project
- Teams need to coordinate releases with minimal overhead
- Dependencies exist between frontend and backend components
- Multiple teams contribute to a single product release
How prerequisite flags work
Each coordination strategy has two parts:
Platform
What teams do in LaunchDarkly: Create prerequisites and allow teams to request, review, and apply targeting changes through approval workflows.
Application
What developers implement: Evaluate flags in the application. LaunchDarkly handles the prerequisite logic automatically. No additional code is required.
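LaunchDarkly performs the prerequisite check server-side during evaluation, but the logic can be modeled simply: if any prerequisite is not serving its required variation, the dependent flag short-circuits to its off variation. The following is a simplified illustration of that behavior, not SDK code:

```javascript
// Simplified model of prerequisite evaluation (illustrative only;
// the LaunchDarkly SDKs handle this automatically, and the flag shape
// below is an assumption for the sketch).
// flags: { [key]: { on, variation, offVariation, prerequisites: [...] } }
function evaluateFlag(flags, key) {
  const flag = flags[key];
  if (!flag.on) return flag.offVariation;
  for (const prereq of flag.prerequisites || []) {
    // A prerequisite must be serving the required variation
    if (evaluateFlag(flags, prereq.key) !== prereq.requiredVariation) {
      return flag.offVariation;
    }
  }
  return flag.variation;
}

const flags = {
  'widget-api': { on: false, variation: 'on', offVariation: 'off', prerequisites: [] },
  'widget-mobile': {
    on: true, variation: 'on', offVariation: 'off',
    prerequisites: [{ key: 'widget-api', requiredVariation: 'on' }],
  },
};
console.log(evaluateFlag(flags, 'widget-mobile')); // 'off' until widget-api is enabled
```

Once `widget-api` is turned on, the same evaluation returns the dependent flag's enabled variation.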
Prerequisites scope
Prerequisite flags work within specific boundaries:
| Architecture | Supported |
|---|---|
| Within a single project | Yes |
| Across multiple projects | No |
| Outside of projects | No |
Use cases
Technical dependencies
Coordinate releases when backend changes must be deployed before frontend changes.
Example scenario:
Release a new widget that requires both API and frontend changes.
Setup:
- Create flag: Release: Widget API
- Create flag: Release: Widget Mobile
- Add prerequisite to Release: Widget Mobile: requires Release: Widget API to be On
Result:
Teams maintain autonomy over their own domains without coordination overhead. The mobile team can enable their flag when ready, knowing the prerequisite ensures the API is available.
Business dependencies
Group multiple releases under a keystone flag to coordinate a product launch.
Example scenario:
Spring product launch requires new reporting features and AI assistant functionality.
Setup:
- Create keystone flag: Release: Spring Launch Event
- Create flag: Release: Advanced Reporting
- Create flag: Release: AI Assistant
- Add prerequisite to both feature flags: requires Release: Spring Launch Event to be On
- Delegate authority to product and marketing teams to control the keystone flag
Result:
Engineering teams can develop and test features independently. Product and marketing teams control the coordinated launch by managing only the keystone flag.
Implementation steps
Step 1: Create feature flags
Create flags for each component or feature that needs coordination:
- Navigate to the project in LaunchDarkly
- Click Create flag
- Provide a descriptive name and key
- Select the appropriate flag type
- Click Save flag
Step 2: Configure prerequisites
Add prerequisites to flags that depend on other flags:
- Navigate to the dependent flag’s settings
- Locate the Prerequisites section
- Click Add prerequisite
- Select the prerequisite flag
- Choose the required variation
- Click Save
Step 3: Test coordination
Verify that prerequisites work correctly:
- Ensure the prerequisite flag is Off
- Attempt to enable the dependent flag
- Verify that the dependent flag remains Off or serves the fallback variation
- Enable the prerequisite flag
- Verify that the dependent flag now evaluates correctly
Step 4: Document dependencies
Create documentation for your team:
- List all prerequisite relationships
- Document the intended release order
- Identify the owners of each flag
- Specify the communication plan for coordinated releases
Example configuration
Scenario: E-commerce checkout redesign
Flags:
| Flag | Description | Prerequisites |
|---|---|---|
| Release: Checkout | Keystone flag for the overall release | None |
| Release: Checkout API | Backend API changes | Release: Checkout must be On |
| Release: Checkout Web | Web frontend changes | Release: Checkout and Release: Checkout API must be On |
| Release: Checkout Mobile | Mobile app changes | Release: Checkout and Release: Checkout API must be On |
Configuration in LaunchDarkly:
Release: Checkout (Keystone)
└── Release: Checkout API
    ├── Release: Checkout Web
    └── Release: Checkout Mobile
Release process:
- Backend team enables Release: Checkout API when ready
- Web team enables Release: Checkout Web when ready
- Mobile team enables Release: Checkout Mobile when ready
- Product team controls the overall release through Release: Checkout
Best practices
Use clear naming conventions
Name flags to indicate their role in the dependency chain:
- Keystone flags: Release: [Feature Name]
- Component flags: Release: [Feature Name] [Component]
Avoid deep chains
Limit prerequisite chains to 2-3 levels deep. Deeper chains increase complexity and make troubleshooting difficult.
Document ownership
Clearly document who owns each flag in the prerequisite chain. Teams should know who to contact when coordinating releases.
Test in lower environments first
Verify prerequisite relationships in development and staging environments before configuring them in production.
Use flag statuses
Track prerequisite flags with flag statuses to monitor when they are safe to remove.
To learn more, read Flag statuses.
Troubleshooting
Dependent flag not evaluating correctly
Symptom: A flag with a prerequisite does not evaluate as expected even though the prerequisite appears to be enabled.
Possible causes:
- Prerequisite is enabled in a different environment
- Prerequisite requires a specific variation that is not being served
- Targeting rules conflict with the prerequisite
Solution:
- Verify the prerequisite flag is On in the same environment
- Check that the prerequisite serves the required variation
- Review targeting rules for conflicts
Unable to create prerequisite
Symptom: LaunchDarkly prevents creating a prerequisite relationship.
Possible causes:
- Creating a circular dependency
- Flags are in different projects
- Insufficient permissions
Solution:
- Review the dependency chain for circular references
- Ensure both flags are in the same project
- Verify you have permission to modify both flags
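Reviewing the dependency chain for circular references can be scripted if you export flag prerequisites as data. A hedged sketch, where the input shape is an assumption rather than a LaunchDarkly API response:

```javascript
// Detect circular prerequisite references via depth-first search.
// prereqs maps each flag key to the keys of its prerequisite flags
// (an assumed export format, not a LaunchDarkly API response).
function hasCycle(prereqs) {
  const visiting = new Set();
  const done = new Set();
  function visit(key) {
    if (done.has(key)) return false;
    if (visiting.has(key)) return true; // found a back edge: cycle
    visiting.add(key);
    for (const dep of prereqs[key] || []) {
      if (visit(dep)) return true;
    }
    visiting.delete(key);
    done.add(key);
    return false;
  }
  return Object.keys(prereqs).some((key) => visit(key));
}

console.log(hasCycle({ 'checkout-web': ['checkout-api'], 'checkout-api': [] })); // false
console.log(hasCycle({ a: ['b'], b: ['a'] })); // true
```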
Related strategies
Combine prerequisite flags with other coordination strategies:
- Delegated authority: Grant product teams permission to control keystone flags
- Request metadata: Use request metadata to coordinate with external services while using prerequisites internally
To learn more about project architecture, read Projects.
Request metadata
Coordinate releases across services using request metadata passed in API calls, headers, or RPC messages. This strategy enables version management and backward compatibility without requiring consumers to use LaunchDarkly.
Overview
Request metadata allows API providers to make targeting decisions based on information passed in requests. This enables coordinated releases with external consumers and loosely coupled services without requiring all participants to integrate with LaunchDarkly.
When to use request metadata
Use request metadata when:
- Services communicate via APIs, HTTP, or RPC
- API consumers do not use LaunchDarkly
- Services are in separate LaunchDarkly projects
- Services need to maintain backward compatibility
- Services have external or third-party consumers
How request metadata works
Each coordination strategy has two parts:
Platform
What teams do in LaunchDarkly: Create targeting rules based on API versions or client metadata to coordinate releases with external consumers.
Application
What developers implement:
Consumers attach metadata to requests through:
- HTTP headers
- Query parameters
- Request body fields
- RPC metadata
Providers use the metadata to define LaunchDarkly contexts for evaluation and targeting.
Request metadata scope
Request metadata works across boundaries:
| Architecture | Supported |
|---|---|
| Within a single project | Yes |
| Across multiple projects | Yes |
| Outside of projects | Yes |
Only the providing service needs to use LaunchDarkly. Consumers do not require LaunchDarkly integration.
Use cases
API version management
Manage breaking changes in public APIs by targeting based on API version.
Example scenario:
API service introduces breaking change to use UUID identifiers instead of integer IDs.
Setup:
- API consumers include version in header:
X-API-Version: 2.0.0
- API service extracts version and creates context:
const context = { kind: 'request', key: requestId, apiVersion: req.headers['x-api-version'] };
- Create targeting rule:
If apiVersion >= 2.0.0 then serve Available
Result:
Newer API clients automatically receive the new identifier format. Legacy clients continue using integer IDs until they upgrade.
Client-specific feature rollouts
Target features to specific mobile app versions or web browser versions.
Example scenario:
Mobile app releases new offline mode that requires minimum app version 3.5.0.
Setup:
- Mobile app includes version in API calls
- Backend service creates context with app version
- Create targeting rule:
If appVersion >= 3.5.0 then serve Available
Result:
Backend enables offline sync features only for app versions that support offline mode.
Tenant-specific releases
Enable features for specific tenants in multi-tenant services.
Example scenario:
SaaS application wants to roll out advanced analytics to premium tier customers.
Setup:
- API gateway includes tenant information in header:
X-Tenant-Id: acme-corp
- Backend service extracts tenant and subscription tier
- Create targeting rule:
If subscriptionTier equals "premium" then serve Available
Result:
Premium tier customers see advanced analytics. Standard tier customers do not see the feature.
Implementation steps
Step 1: Define metadata schema
Determine what metadata consumers should provide:
- API version
- Client version
- Client type (web, mobile, desktop)
- Tenant identifier
- User tier or subscription level
Step 2: Update API consumers
Instruct consumers to include metadata in requests:
HTTP headers:
X-API-Version: 2.1.0
X-Client-Type: mobile-ios
X-Client-Version: 3.5.2
Query parameters:
GET /api/widgets?api_version=2.1.0&client=mobile-ios
Request body:
{
"data": { ... },
"metadata": {
"apiVersion": "2.1.0",
"clientType": "mobile-ios"
}
}
Step 3: Extract metadata in provider
Extract metadata from requests and create LaunchDarkly contexts:
Node.js example:
const express = require('express');
const { init } = require('@launchdarkly/node-server-sdk');
const app = express();
const ldClient = init(process.env.LD_SDK_KEY);
app.get('/api/widgets', async (req, res) => {
// Extract metadata from request
const apiVersion = req.headers['x-api-version'] || '1.0.0';
const clientType = req.headers['x-client-type'] || 'unknown';
// Create context for flag evaluation
const context = {
kind: 'request',
key: req.headers['x-request-id'] || 'anonymous', // fall back when no request ID header is present
apiVersion: apiVersion,
clientType: clientType
};
// Evaluate feature flag
const useNewFormat = await ldClient.variation(
'use-new-response-format',
context,
false
);
// Return appropriate response
if (useNewFormat) {
res.json({ data: getNewFormatData() });
} else {
res.json({ data: getLegacyFormatData() });
}
});
Step 4: Create targeting rules
Create targeting rules based on the extracted metadata:
- Navigate to the feature flag in LaunchDarkly
- Select the environment
- Create a new targeting rule
- Use the context attribute (for example, apiVersion)
- Configure the targeting logic (for example, apiVersion >= 2.0.0)
- Set the variation to serve
- Save the targeting rule
Step 5: Monitor and deprecate
Track usage of old API versions using flag evaluation metrics:
- Monitor which variations are being served
- Identify clients still using old versions
- Communicate deprecation timeline to affected clients
- Remove backward compatibility code when safe
Example configuration
Scenario: GraphQL API versioning
Context schema:
{
"kind": "api-consumer",
"key": "consumer-xyz",
"apiVersion": "2023-11-01",
"consumerType": "mobile-app",
"consumerVersion": "4.2.0"
}
Targeting rules:
Flag: use-new-schema
Rule 1: Early access beta testers
If consumerType equals "internal" then serve Available
Rule 2: New API version
If apiVersion >= "2023-11-01" then serve Available
Rule 3: Opt-in consumers
If consumer is one of ["consumer-abc", "consumer-xyz"] then serve Available
Default: Unavailable
Result:
- Internal testers always get the new schema
- Consumers using API version 2023-11-01 or later get the new schema
- Specific consumers can opt in to the new schema early
- All other consumers get the legacy schema
Best practices
Use semantic versioning
Use semantic versioning for API versions to enable meaningful comparisons:
- Major version: Breaking changes
- Minor version: Backward-compatible features
- Patch version: Backward-compatible fixes
Document metadata requirements
Provide clear documentation for API consumers:
- Required and optional metadata fields
- Format and valid values
- How metadata affects feature availability
- Deprecation timelines
Provide defaults
Handle missing or invalid metadata gracefully:
const apiVersion = req.headers['x-api-version'] || '1.0.0';
const parsed = parseVersion(apiVersion) || { major: 1, minor: 0, patch: 0 };
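The parseVersion helper referenced above is not defined in the snippet. One possible implementation is sketched below, along with a comparison function suitable for "apiVersion >= 2.0.0" style checks; both names are assumptions, not part of any SDK:

```javascript
// Possible implementation of the parseVersion helper referenced above
// (an illustrative sketch, not an SDK function).
function parseVersion(raw) {
  const match = /^(\d+)\.(\d+)\.(\d+)$/.exec(String(raw).trim());
  if (!match) return null; // caller falls back to a default version
  return { major: +match[1], minor: +match[2], patch: +match[3] };
}

// Numeric comparison: positive if a > b, negative if a < b, 0 if equal
function compareVersions(a, b) {
  for (const part of ['major', 'minor', 'patch']) {
    if (a[part] !== b[part]) return a[part] - b[part];
  }
  return 0;
}

const requested = parseVersion('2.1.0') || { major: 1, minor: 0, patch: 0 };
const cutoff = { major: 2, minor: 0, patch: 0 };
console.log(compareVersions(requested, cutoff) >= 0); // true: serve the new behavior
```

Parsing to numeric components avoids the pitfalls of comparing version strings lexically, where "10.0.0" sorts before "2.0.0".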
Use flag statuses
Track deprecated API versions with flag statuses. Monitor usage to determine when old versions can be safely removed.
To learn more, read Flag statuses.
Communicate deprecation timelines
Provide advance notice to API consumers:
- Include deprecation headers in API responses
- Send notifications to registered consumers
- Provide migration guides
- Offer support during the transition
Example deprecation header:
Deprecation: version="1.0", date="2024-06-01"
Sunset: date="2024-12-01"
Link: <https://api.example.com/docs/migration-guide>; rel="deprecation"
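In an Express-style service, headers like the example above could be built in one place and attached to every response served to a legacy version. A sketch; the function name is an assumption and the values simply mirror the example headers:

```javascript
// Build deprecation headers for responses served to legacy API versions.
// Values mirror the example above; the function name is an assumption.
function deprecationHeaders({ version, deprecationDate, sunsetDate, guideUrl }) {
  return {
    Deprecation: `version="${version}", date="${deprecationDate}"`,
    Sunset: `date="${sunsetDate}"`,
    Link: `<${guideUrl}>; rel="deprecation"`,
  };
}

const headers = deprecationHeaders({
  version: '1.0',
  deprecationDate: '2024-06-01',
  sunsetDate: '2024-12-01',
  guideUrl: 'https://api.example.com/docs/migration-guide',
});
console.log(headers.Sunset); // date="2024-12-01"
```

Centralizing header construction keeps the deprecation announcement consistent across every endpoint that still serves the old version.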
Test with multiple client versions
Verify targeting rules work correctly for all supported client versions:
- Test with minimum supported version
- Test with latest version
- Test with versions exactly at targeting rule boundaries
- Test with missing or invalid metadata
Troubleshooting
All clients receiving the same variation
Symptom: Targeting rules based on metadata do not differentiate between clients.
Possible causes:
- Metadata not being extracted from requests
- Context not being created with the correct attributes
- Targeting rules using incorrect attribute names
Solution:
- Log extracted metadata to verify it is present
- Verify context creation includes the expected attributes
- Check targeting rule attribute names match context attribute names
Legacy clients breaking after rollout
Symptom: Clients using old API versions receive errors or unexpected behavior.
Possible causes:
- Default variation serves new behavior
- Targeting rule comparison logic is incorrect
- Missing fallback handling for old versions
Solution:
- Ensure default variation serves legacy behavior
- Verify version comparison logic (for example, >= vs >)
- Add explicit targeting rules for old versions if needed
Related strategies
Combine request metadata with other coordination strategies:
- Delegated authority: Grant API consumers permission to request custom targeting rules
- Prerequisite flags: Use prerequisites for internal coordination while using request metadata for external coordination
To learn more about context attributes, read Contexts.
Delegated authority
Enable cross-functional collaboration by granting teams authority to manage flags in projects they do not own. This strategy empowers support, security, and operations teams to coordinate releases without bottlenecking engineering teams.
Overview
Delegated authority uses LaunchDarkly’s custom roles and approval workflows to grant specific teams permission to manage subsets of flags across multiple projects. This enables cross-functional coordination while maintaining security and governance.
When to use delegated authority
Use delegated authority when:
- Cross-functional teams need to coordinate releases
- Customer success or support teams manage early access programs
- Database administrators control schema migration flags
- Security engineers manage security features
- Teams need to modify flags in projects they do not own
How delegated authority works
Each coordination strategy has two parts:
Platform
What teams do in LaunchDarkly:
Team members are granted delegated authority to manage a subset of flags or segments inside other teams’ projects through:
- Custom roles with scoped permissions
- Approval workflows for change review
- API access for automation
Delegated authority can be combined with prerequisite flags.
Application
What developers implement: Applications evaluate flags as usual. No additional code is required.
Delegated authority scope
Delegated authority works across boundaries:
| Architecture | Supported |
|---|---|
| Within a single project | Yes |
| Across multiple projects | Yes |
| Outside of projects | No |
Both teams must have access to LaunchDarkly.
Use cases
Customer success early access programs
Empower customer success teams to manage early access enrollments without engineering involvement.
Example scenario:
Customer success team manages beta access to new analytics dashboard.
Setup:
- Create flag: Release: Analytics Dashboard
- Create custom role: Customer Success - Early Access
- Grant role permission to modify targeting rules for flags tagged with early-access
- Tag Release: Analytics Dashboard with early-access
- Assign role to customer success team members
Result:
Customer success can add customers to early access beta programs by modifying targeting rules. Engineering maintains control over flag creation and removal.
Database administrators schema migrations
Enable database administrators to signal migration completion across multiple services.
Example scenario:
Database team manages schema migration flags across multiple microservice projects.
Setup:
- Create flags in each project: DB: User Table Migration Complete
- Create custom role: DBA - Schema Migrations
- Grant role permission to manage flags tagged with schema-migration
- Assign role to database administrators
Result:
Database administrators can signal when migrations are complete. Engineering teams evaluate these flags to enable new code that depends on schema changes.
Security engineers rapid response
Allow security team to quickly enable security mitigations across multiple projects.
Example scenario:
Security team needs ability to enable rate limiting or security features in response to incidents.
Setup:
- Create flags in each project: Security: Enhanced Rate Limiting
- Create custom role: Security - Incident Response
- Grant role permission to manage flags tagged with security
- Configure approval workflow with security team as reviewers
- Assign role to security engineers
Result:
Security engineers can request activation of security features during incidents. Approvals ensure changes are reviewed while enabling rapid response.
Frontend team segment management
Grant frontend teams authority to manage browser support segments across backend services.
Example scenario:
Frontend team maintains list of supported browsers across multiple services.
Setup:
- Create segment: Supported Browsers in each relevant project
- Create custom role: Frontend - Browser Support
- Grant role permission to modify segments tagged with browser-support
- Assign role to frontend team members
Result:
Frontend team can update supported browser list once. Backend services use the segment for targeting decisions.
Implementation steps
Step 1: Identify delegated permissions
Determine what permissions to delegate:
- Which flags or segments can be modified
- Which environments can be accessed
- Whether changes require approval
- Which projects are in scope
Step 2: Create custom role
Create a custom role with scoped permissions:
- Navigate to Account settings → Roles
- Click Create role
- Provide a descriptive name
- Configure permissions using policies
Example policy for early access management:
[
{
"effect": "allow",
"actions": ["updateOn", "updateFallthrough", "updateTargets"],
"resources": ["proj/*:env/production:flag/*;early-access"]
}
]
This policy allows:
- Updating targeting rules
- Only in production environment
- Only for flags tagged with early-access
- Across all projects
Step 3: Configure approval workflows
Require approvals for delegated changes in production:
- Navigate to environment settings
- Enable Approvals
- Configure approval requirements:
- Minimum number of reviewers
- Required reviewers
- Service tokens excluded from approvals
- Save configuration
Step 4: Assign role to team
Grant the custom role to appropriate team members:
- Navigate to Account settings → Team
- Select team member
- Click Add role
- Select the custom role
- Save changes
Step 5: Document delegation
Create documentation for delegated teams:
- Which flags they can modify
- How to request changes
- Approval process and timeline
- Escalation procedures
- Examples of appropriate changes
Example configuration
Scenario: Customer support troubleshooting
Custom role: Support - Customer Overrides
Policy:
[
  {
    "effect": "allow",
    "actions": ["updateTargets", "updateOn"],
    "resources": ["proj/support-enabled:env/production:flag/*"]
  }
]
Tagged flags:
- Enable Advanced Logging
- Enable Debug Mode
- Enable Experimental Features
Workflow:
- Support receives customer issue requiring advanced diagnostics
- Support creates approval request to enable advanced logging for specific customer
- Engineering reviews and approves request
- Support enables logging for customer
- After troubleshooting, support disables logging
Best practices
Use tag-based permissions
Grant permissions based on flag tags rather than specific flag names. This allows new flags to inherit permissions automatically:
{
  "resources": ["proj/*:env/production:flag/*;customer-success"]
}
Require approvals in production
Always require approvals for delegated changes in production environments. This provides an audit trail and ensures changes are reviewed.
Limit environment access
Grant delegated access only to necessary environments:
{
  "resources": ["proj/*:env/production:flag/*"]
}
Avoid granting access to development or test environments unless specifically needed.
Document flag ownership
Maintain clear documentation of flag ownership:
- Primary owner responsible for flag lifecycle
- Delegated teams authorized to modify targeting
- Change approval process
- Escalation contacts
Use flag descriptions
Document delegation in flag descriptions:
Flag: Enable Advanced Logging
Owner: Platform Engineering
Delegated to: Customer Support for troubleshooting
Requires approval: Yes
Approval reviewers: @platform-oncall
Monitor delegated changes
Track changes made through delegated authority:
- Review approval requests regularly
- Monitor flag evaluation metrics
- Audit delegated access quarterly
- Revoke access when no longer needed
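The quarterly audit can be partly automated with the LaunchDarkly REST API audit log endpoint (GET /api/v2/auditlog). The helper below is a sketch that only summarizes entries already fetched; the date, member, and titleVerb field names follow the documented response shape but should be verified against the API reference:

```javascript
// Summarize audit log entries (as returned by GET /api/v2/auditlog)
// that occurred at or after a cutoff timestamp.
function summarizeAuditEntries(items, sinceMs) {
  return items
    .filter((item) => item.date >= sinceMs)
    .map((item) => ({
      when: new Date(item.date).toISOString(),
      who: item.member ? item.member.email : "(service token)",
      what: item.titleVerb,
    }));
}

// Usage sketch: fetch the latest entries, then report the last 7 days.
// const res = await fetch("https://app.launchdarkly.com/api/v2/auditlog?limit=20",
//   { headers: { Authorization: apiToken } });
// const { items } = await res.json();
// console.log(summarizeAuditEntries(items, Date.now() - 7 * 86400000));
```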
Troubleshooting
Team member cannot modify flags
Symptom: Team member with delegated role cannot modify flags they should have access to.
Possible causes:
- Flags not tagged correctly
- Role policy scope too restrictive
- Approval workflow blocking direct changes
- Insufficient base permissions
Solution:
- Verify flags have the required tags
- Review role policy to ensure it matches flag resources
- Check if approval workflow requires request instead of direct modification
- Verify team member has base permissions to access the project
Approval requests not reaching reviewers
Symptom: Approval requests are created but reviewers do not receive notifications.
Possible causes:
- Reviewers not configured in approval workflow
- Notification settings disabled
- Role does not require approvals
- Approval workflow not enabled
Solution:
- Verify approval workflow configuration includes required reviewers
- Check reviewer notification settings in account preferences
- Review role policy for approval requirements
- Confirm approvals are enabled in the environment
Excessive approval requests
Symptom: Delegated team creates many approval requests that engineering teams struggle to review.
Possible causes:
- Delegated permissions too restrictive
- Approval workflow too strict
- Poor communication about appropriate changes
- Missing self-service capabilities
Solution:
- Review if delegated permissions should allow direct changes for low-risk modifications
- Adjust approval requirements such as reducing required reviewers
- Provide training and documentation on appropriate use
- Consider expanding delegated permissions for routine changes
Combining with other strategies
Delegated authority works well with other coordination strategies:
Delegated authority with prerequisite flags
Scenario: Product marketing team controls release timing for coordinated product launches.
Setup:
- Engineering creates prerequisite flags for feature components
- Engineering creates keystone flag with product-marketing tag
- Custom role grants product marketing team permission to modify keystone flag
- Product marketing controls launch timing by enabling keystone flag
Result:
Engineering maintains control over component flags and technical implementation. Product marketing controls the coordinated release without engineering involvement.
Delegated authority with request metadata
Scenario: API consumers can request custom targeting rules for their client versions.
Setup:
- API service evaluates flags using request metadata
- Custom role grants API consumers permission to create targeting rules
- Approval workflow requires API team review
- Consumers create approval requests for custom targeting
Result:
API consumers can request accommodations for their specific client versions. API team reviews and approves requests without custom code changes.
Related documentation
To learn more about roles and permissions:
To learn about project architecture:
Targeting
This section covers recipes for targeting contexts with feature flags in scenarios that go beyond the standard operators.
Recipes
- Precalculated attributes - Target based on calculated values when built-in operators cannot express the required logic
Precalculated attributes
This recipe explains how to target contexts based on calculated values that LaunchDarkly’s built-in operators cannot express directly.
Use case
You want to target users based on relative time periods or calculated metrics. For example:
- Users who logged in more than 90 days ago
- Users whose subscription expires within 7 days
- Users with an account balance below a threshold percentage
LaunchDarkly provides before and after operators for date comparisons. These operators require absolute dates as arguments. There are no operators for relative date comparisons or mathematical calculations.
Solution
Precalculate the derived value in your application and pass it to LaunchDarkly as a context attribute. Your application code performs the calculation before the SDK evaluates any flags.
The pattern follows these steps:
- Calculate the value in your application code when building the context.
- Add the calculated value as a custom attribute on the context.
- Create targeting rules using standard numeric operators.
Example: days since last login
Your application calculates the number of days between the current date and the user’s last login date. Store this value as daysSinceLastLogin on the context.
In LaunchDarkly, create a targeting rule that checks if daysSinceLastLogin is greater than 90. The flag serves users who have not logged in for more than 90 days.
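The calculation itself is ordinary application code. A minimal sketch, assuming the user record exposes a lastLoginMs timestamp; the field name, context kind, and flag key in the comment are illustrative:

```javascript
// Compute whole days elapsed since a past timestamp.
function daysSince(pastMs, nowMs = Date.now()) {
  const msPerDay = 1000 * 60 * 60 * 24;
  return Math.floor((nowMs - pastMs) / msPerDay);
}

// Build the LaunchDarkly context with the precalculated attribute.
function buildContext(user, nowMs = Date.now()) {
  return {
    kind: "user",
    key: user.id,
    daysSinceLastLogin: daysSince(user.lastLoginMs, nowMs),
  };
}

// The context is then evaluated as usual, for example with a server-side SDK:
// const show = await client.variation("re-engagement-banner", buildContext(user), false);
```

Because the attribute is computed before evaluation, the targeting rule itself only needs the standard greater-than operator.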
The following screenshot shows a targeting rule using the precalculated daysSinceLastLogin attribute:

Other examples
This pattern applies to many scenarios:
- daysUntilExpiration - target users whose subscriptions expire soon
- storageUsedPercent - target users approaching storage limits
- lifetimeSpend - target high-value customers for loyalty programs
- distanceToNearestStore - target users far from physical locations
- featuresUsedCount - target users who might benefit from plan upgrades
- riskScore - target based on fraud or churn risk calculated from multiple signals
When to use this pattern
This pattern applies when your targeting logic requires:
- Relative date comparisons instead of absolute dates
- Mathematical operations like subtraction, addition, or percentage calculations
- Derived values that combine multiple source fields
- Complex business logic that cannot map to built-in operators
Considerations
Calculate the attribute value at the appropriate point in your application. For server-side applications, calculate the value when constructing the context before flag evaluation. For client-side applications, calculate the value before initializing the SDK.
Keep attribute names descriptive. Names like daysSinceLastLogin communicate the attribute’s purpose clearly.
Resources
Testing
This section covers testing strategies for applications using LaunchDarkly feature flags.
Testing Recipes
Integration Testing
- Emulating LaunchDarkly Downtime - Validate your application behaves correctly when LaunchDarkly is unavailable
Unit Testing
- Unit Testing React Components - Test React components that use LaunchDarkly hooks in isolation
Choosing a Testing Strategy
Integration testing validates that your application handles LaunchDarkly service unavailability gracefully. Use these patterns to test:
- Application startup with LD unreachable
- Fallback value behavior
- Timeout and retry logic
Unit testing validates component behavior with specific flag values. Use these patterns to test:
- Component rendering with different flag states
- Flag-dependent logic and branches
- Component interactions with LaunchDarkly hooks
Emulating LaunchDarkly Downtime
This topic covers strategies for emulating LaunchDarkly downtime in your tests.
Use Case
Validate that your application behaves correctly when LaunchDarkly is unavailable. The goal of these tests is to ensure:
- The application does not block indefinitely waiting for SDK initialization
- Fallback values are served correctly and remain well maintained
Methods
Emulating LaunchDarkly Downtime in Playwright
You can use a Playwright fixture to abort all network calls to LaunchDarkly before each test runs. This gives you the flexibility to:
- Run tests that expect different application behavior when the LaunchDarkly connection is blocked
- Run existing tests unchanged when the connection to LaunchDarkly succeeds
- Run the same test against both scenarios. For example, your application should return a 200 whether or not it can connect to LaunchDarkly
Create a fixture
const { test: base } = require('@playwright/test');

const blockLaunchDarkly = base.extend({
  page: async ({ page }, use) => {
    // Block all requests to LaunchDarkly before the test runs
    await page.route('**/*launchdarkly.com/**', route => route.abort());
    await use(page);
  },
});

const connectedLaunchDarkly = base;

module.exports = {
  blockLaunchDarkly,
  connectedLaunchDarkly,
  expect: base.expect,
};
Example test
const { blockLaunchDarkly, connectedLaunchDarkly, expect } = require('./fixtures');

/**
 * Shared tests that run against BOTH scenarios:
 * - LaunchDarkly blocked (unreachable)
 * - LaunchDarkly connected (working)
 *
 * This ensures the app works regardless of LaunchDarkly availability.
 */
const scenarios = [
  { testFn: blockLaunchDarkly, name: 'LD blocked' },
  { testFn: connectedLaunchDarkly, name: 'LD connected' },
];

// Run the same test for each scenario
for (const { testFn, name } of scenarios) {
  testFn(`app returns 200 (${name})`, async ({ page }) => {
    const response = await page.goto('/');
    expect(response.status()).toBe(200);
  });
}
Unit Testing React Components
This topic covers strategies for unit testing React components that use LaunchDarkly feature flags.
Use Case
Test React components that use LaunchDarkly hooks (useFlags, useLDClient, etc.) in isolation without requiring a live connection to LaunchDarkly. This enables:
- Fast, reliable unit tests that don’t depend on network connectivity
- Testing component behavior with specific flag values
- Testing flag-dependent logic and conditional rendering
Challenge
Unlike Jest, which has built-in module mocking, Mocha requires external libraries or code patterns to mock the LaunchDarkly React SDK.
Methods
- Mocha - Three approaches for mocking LaunchDarkly in Mocha tests
Unit Testing React Components with Mocha
Three approaches for mocking LaunchDarkly React SDK hooks in Mocha tests.
Background
Mocha doesn’t include built-in module mocking like Jest. To test React components that use LaunchDarkly hooks (useFlags, useLDClient), you need to either:
- Mock the module using a library
- Inject dependencies as props
- Use a mock context provider
Each approach has tradeoffs in terms of code changes required and test complexity.
Approach 1: Monkey Patch Dependencies
Replace the LaunchDarkly module at import time using a mocking library. This allows testing components without any code modifications.
Using ESMock (ES Modules)
import esmock from "esmock";
import { render, screen, cleanup } from "@testing-library/react";
import { expect } from "chai";

describe("App with ESMock", () => {
  afterEach(() => {
    cleanup();
  });

  it("renders new header when simpleToggle is true", async () => {
    const { default: App } = await esmock("../src/App.jsx", {
      "launchdarkly-react-client-sdk": {
        useFlags: () => ({ simpleToggle: true })
      }
    });
    render(<App />);
    expect(screen.getByTestId("header").textContent).to.equal("New Header Experience");
  });

  it("renders legacy header when simpleToggle is false", async () => {
    const { default: App } = await esmock("../src/App.jsx", {
      "launchdarkly-react-client-sdk": {
        useFlags: () => ({ simpleToggle: false })
      }
    });
    render(<App />);
    expect(screen.getByTestId("header").textContent).to.equal("Legacy Header Experience");
  });
});
Component (unchanged):
import React from "react";
import { useFlags } from "launchdarkly-react-client-sdk";

export default function App() {
  const { simpleToggle } = useFlags();
  return (
    <div>
      <h1 data-testid="header">
        {simpleToggle ? "New Header Experience" : "Legacy Header Experience"}
      </h1>
    </div>
  );
}
Using Proxyquire (CommonJS)
const proxyquire = require("proxyquire");
const { render, screen, cleanup } = require("@testing-library/react");
const { expect } = require("chai");
const React = require("react");

describe("App with Proxyquire", () => {
  afterEach(() => {
    cleanup();
  });

  it("renders new header when simpleToggle is true", () => {
    const App = proxyquire("../src/App", {
      "launchdarkly-react-client-sdk": {
        useFlags: () => ({ simpleToggle: true })
      }
    });
    render(React.createElement(App));
    expect(screen.getByTestId("header").textContent).to.equal("New Header Experience");
  });

  it("renders legacy header when simpleToggle is false", () => {
    const App = proxyquire("../src/App", {
      "launchdarkly-react-client-sdk": {
        useFlags: () => ({ simpleToggle: false })
      }
    });
    render(React.createElement(App));
    expect(screen.getByTestId("header").textContent).to.equal("Legacy Header Experience");
  });
});
Component (unchanged):
const React = require("react");
const { useFlags } = require("launchdarkly-react-client-sdk");

function App() {
  const { simpleToggle } = useFlags();
  return React.createElement(
    "div",
    null,
    React.createElement(
      "h1",
      { "data-testid": "header" },
      simpleToggle ? "New Header Experience" : "Legacy Header Experience"
    )
  );
}

module.exports = App;
Important: When mocking the entire module, you must provide all exports used in your component (e.g., both useFlags and useLDClient if both are used).
Pros:
- No code changes required
- Test-runner agnostic
- Component remains production-focused
Cons:
- Async-only (must use await with ESMock)
- Requires the --import=esmock loader flag (can be slow)
- Must mock all used exports from the module
Approach 2: Hook/Flag Map Injection
Pass the hook function or flag values as props instead of importing directly.
Component with dependency injection:
import React from "react";
import { useFlags } from "launchdarkly-react-client-sdk";

export default function App({ useFlags: useFlags_prop }) {
  // Use injected hook for testing, or real hook for production
  const useFlagsHook = useFlags_prop || useFlags;
  const { simpleToggle } = useFlagsHook();
  return (
    <div>
      <h1 data-testid="header">
        {simpleToggle ? "New Header Experience" : "Legacy Header Experience"}
      </h1>
    </div>
  );
}
Test:
import React from "react";
import { render, screen, cleanup } from "@testing-library/react";
import { expect } from "chai";
import App from "./hook-injection.jsx";

describe("App with Hook Injection", () => {
  afterEach(() => {
    cleanup();
  });

  it("renders new header when simpleToggle is true", () => {
    const mockUseFlags = () => ({ simpleToggle: true });
    render(<App useFlags={mockUseFlags} />);
    expect(screen.getByTestId("header").textContent).to.equal("New Header Experience");
  });

  it("renders legacy header when simpleToggle is false", () => {
    const mockUseFlags = () => ({ simpleToggle: false });
    render(<App useFlags={mockUseFlags} />);
    expect(screen.getByTestId("header").textContent).to.equal("Legacy Header Experience");
  });
});
Pros:
- No mocking library required
- Clear dependency injection pattern
- Fast test execution
Cons:
- Requires component code changes
- Additional props to pass through
- Props only used for testing
Approach 3: Mock Context Provider
Create a mock React context that matches the LaunchDarkly structure. Requires modifying components to accept an optional test context.
Mock Provider:
import React, { createContext } from "react";

const MockLDContext = createContext({ flags: {} });

function MockLDProvider({ flags = {}, children }) {
  const value = { flags };
  return React.createElement(MockLDContext.Provider, { value }, children);
}

export { MockLDProvider, MockLDContext };
Component with optional context:
import React, { useContext } from "react";
import { useFlags } from "launchdarkly-react-client-sdk";

export default function App({ testContext } = {}) {
  let flags;
  if (testContext) {
    // In TEST: use the mock context
    const ctx = useContext(testContext);
    flags = ctx.flags;
  } else {
    // In PROD: use the real LaunchDarkly hook
    flags = useFlags();
  }
  const { simpleToggle } = flags;
  return (
    <div>
      <h1 data-testid="header">
        {simpleToggle ? "New Header Experience" : "Legacy Header Experience"}
      </h1>
    </div>
  );
}
Test:
import React from "react";
import { render, screen, cleanup } from "@testing-library/react";
import { expect } from "chai";
import App from "./mock-context.jsx";
import { MockLDProvider, MockLDContext } from "./MockLDProvider.jsx";

describe("App with Mock Context Provider", () => {
  afterEach(() => {
    cleanup();
  });

  it("renders new header when simpleToggle is true", () => {
    render(
      React.createElement(
        MockLDProvider,
        { flags: { simpleToggle: true } },
        React.createElement(App, { testContext: MockLDContext })
      )
    );
    expect(screen.getByTestId("header").textContent).to.equal("New Header Experience");
  });

  it("renders legacy header when simpleToggle is false", () => {
    render(
      React.createElement(
        MockLDProvider,
        { flags: { simpleToggle: false } },
        React.createElement(App, { testContext: MockLDContext })
      )
    );
    expect(screen.getByTestId("header").textContent).to.equal("Legacy Header Experience");
  });
});
Pros:
- No mocking library required
- Can test provider wrapping behavior
- Reusable mock provider across tests
Cons:
- Requires component modifications
- Adds conditional logic to production code
- More complex test setup
Choosing an Approach
| Criterion | Monkey Patch | Hook Injection | Mock Context |
|---|---|---|---|
| Code changes required | None | Minimal | Moderate |
| Test complexity | Moderate | Low | Moderate |
| Test speed | Slow (ESMock) | Fast | Fast |
| External dependencies | Yes | No | No |
| Production code impact | None | Minor (extra prop) | Moderate (conditional logic) |
Recommendations:
- Monkey Patch: Choose when you cannot modify production code or need to test existing components
- Hook Injection: Choose for new components where dependency injection is acceptable
- Mock Context: Choose when you need to test multiple components with provider wrapping