Overview

This topic explains the objectives and key results (OKRs) for the LaunchDarkly Program, along with the tasks that support them. We recommend the following guidelines when creating objectives and key results:

Objectives

Write objectives as clear, inspirational statements that describe what you want to achieve. Each objective should be:

  • Measurable through specific key results
  • Aligned with broader program goals
  • Reviewed and updated regularly to reflect changing priorities

Limit each objective to three to five key results.

Key results

Key results measure progress toward your objective. They must be specific, measurable, and verifiable.

Use quantitative metrics:

  • State exact numbers, percentages, or timeframes
  • Define what you measure and how you measure it
  • Include baseline values when comparing improvements
  • Specify the time period for measurement

Focus on outcomes:

  • Measure results, not activities
  • Track what changes, not what you do
  • Use leading indicators that predict success
  • Include lagging indicators that confirm achievement

Ensure clarity:

  • Use clear language that anyone can understand
  • Avoid vague terms like "improve" or "better"
  • Include specific targets or thresholds
  • Define success criteria explicitly

Set realistic targets:

  • Aim for 70% to 80% achievement probability
  • Set stretch goals that require effort to reach
  • Review historical performance to inform targets

Include time-bound elements:

  • Set clear deadlines or timeframes
  • Define measurement periods
  • Align timing with objective review cycles

Program OKRs

Objective: Application teams onboard quickly and correctly with minimal support

Key Results

  • Time to provision and access LaunchDarkly with correct permissions is less than one day per team
  • Less than three hours of ad hoc support requested per team during onboarding
  • Less than five support tickets created per team during onboarding
  • All critical onboarding tasks complete and first application live in a customer-facing environment within two sprints

Tasks

To achieve this objective, complete the following tasks:

  • Create self-service documentation with step-by-step guides and video tutorials when necessary
  • Configure single sign-on (SSO) and identity provider (IdP) integration
  • Define teams and member mapping
  • Assign all flag lifecycle-related actions to at least one member in each project
  • Define how SDK credentials are stored and made available to the application
  • Create application onboarding checklist and a method to track completion across teams

Objective: Team members create and manage flags consistently

Key Results

  • More than 95% of new flags created per quarter comply with naming convention
  • More than 95% of active users access dashboard filters and shortcuts at least once per month
  • More than 95% of new flags created per quarter include descriptions with at least 20 characters and at least one tag
  • More than 95% of release flags created per quarter link to a Jira ticket
  • Zero incidents of incorrect flag changes due to ambiguity per quarter
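
Most of these metadata targets can be checked mechanically once flag data has been exported, for example from the LaunchDarkly REST API. A minimal sketch, assuming flags have already been fetched into plain dictionaries with `description` and `tags` fields:

```python
def meets_metadata_bar(flag: dict) -> bool:
    """True if a flag has a description of at least 20 characters
    and at least one tag, per the key results above."""
    description = (flag.get("description") or "").strip()
    tags = flag.get("tags") or []
    return len(description) >= 20 and len(tags) >= 1

# Illustrative flag records, not real API output.
flags = [
    {"key": "checkout.payments.enable-apple-pay",
     "description": "Gates the Apple Pay button on checkout.",
     "tags": ["release"]},
    {"key": "temp-flag", "description": "wip", "tags": []},
]

# Share of compliant flags, for the "more than 95%" targets.
rate = sum(meets_metadata_bar(f) for f in flags) / len(flags)
```

Running a check like this on a schedule and reporting the rate per quarter gives you the compliance trend directly.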

Tasks

To achieve this objective, complete the following tasks:

  • Create naming convention document
  • Document flag use cases, including when and when not to use flags
  • Create a method to track compliance with naming convention
  • Enforce approvals in critical environments
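
The compliance-tracking method above is typically a small script run against exported flag keys. A minimal sketch, assuming a hypothetical convention of lowercase, dot-separated `team.component.feature-name` keys:

```python
import re

# Hypothetical convention: three lowercase, dot-separated segments,
# e.g. "checkout.payments.enable-apple-pay" (team.component.feature-name).
FLAG_KEY_PATTERN = re.compile(
    r"^[a-z0-9]+(-[a-z0-9]+)*(\.[a-z0-9]+(-[a-z0-9]+)*){2}$"
)

def is_compliant(flag_key: str) -> bool:
    """Return True if the flag key matches the naming convention."""
    return bool(FLAG_KEY_PATTERN.match(flag_key))

def compliance_rate(flag_keys: list[str]) -> float:
    """Share of compliant flags, for the 'more than 95%' key result."""
    if not flag_keys:
        return 1.0
    return sum(is_compliant(k) for k in flag_keys) / len(flag_keys)
```

Adjust the pattern to whatever convention your naming document defines; the point is that the convention is testable, not merely written down.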

Objective: Team members de-risk releases consistently

Key Results

  • Starting in Q1 2026, 90% of features released per quarter that require subject matter expert (SME) testing are behind feature flags
  • More than 75% of P1/P2 incidents related to new features per quarter are remediated without new deploys
  • Mean time to repair (MTTR) reduced by 50% compared to baseline for issues related to new features by end of Q2 2026

Tasks

To achieve this objective, complete the following tasks:

  • Define and document release strategies:
    • Who does what when
    • How to implement in the platform using targeting rules and release pipelines
    • How to implement in code
  • Define and document incident response strategies
  • Integrate with software development lifecycle (SDLC) tooling:
    • Enable project management
    • Enable communication
    • Enable observability and application performance monitoring (APM)
    • Enable governance and change control

Objective: Flags remain healthy and well maintained throughout their lifecycle

Key Results

  • More than 95% of active flags have a documented owner at any point in time
  • More than 95% of active flags have an up-to-date description and tags and comply with naming conventions at any point in time
  • Median time to archive feature flags after release is less than 12 weeks
  • 100% of flags older than six months are reviewed quarterly
  • Flag cleanup service level agreements (SLAs) are established and followed for 100% of projects
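
The "median time to archive" target can be computed from flag timestamps using only the standard library. A minimal sketch, with hypothetical `released_on` and `archived_on` fields standing in for whatever dates your export provides:

```python
from datetime import date
from statistics import median

def median_weeks_to_archive(flags: list[dict]) -> float:
    """Median weeks between release and archival, for the
    'less than 12 weeks' key result. Flags not yet archived
    are excluded from the median."""
    durations = [
        (f["archived_on"] - f["released_on"]).days / 7
        for f in flags
        if f.get("archived_on")
    ]
    return median(durations)

# Illustrative records, not real API output.
example = [
    {"released_on": date(2025, 1, 6), "archived_on": date(2025, 3, 3)},   # 8 weeks
    {"released_on": date(2025, 1, 6), "archived_on": date(2025, 4, 14)},  # 14 weeks
    {"released_on": date(2025, 2, 3), "archived_on": None},               # still live
]
```

Excluding unarchived flags keeps the metric well defined, but consider reporting the count of long-lived unarchived flags alongside it so survivors don't hide in the median.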

Tasks

To achieve this objective, complete the following tasks:

  • Implement actionable dashboards to visualize flag status including new, active, stale, and launched
  • Define flag archival and cleanup policies
  • Implement code references integration
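
A flag-status dashboard needs a classification rule behind it. A minimal sketch of one possible bucketing into the statuses named above, with illustrative field names rather than the LaunchDarkly API's:

```python
from datetime import date, timedelta

def classify(flag: dict, today: date, stale_after_days: int = 90) -> str:
    """Bucket a flag as new, active, stale, or launched for a status
    dashboard. Field names here are hypothetical stand-ins for
    whatever your export provides."""
    if flag.get("serving_default_everywhere"):
        return "launched"  # fully rolled out; candidate for cleanup
    if today - flag["created_on"] <= timedelta(days=14):
        return "new"
    if today - flag["last_evaluated_on"] > timedelta(days=stale_after_days):
        return "stale"     # no recent evaluations
    return "active"
```

The thresholds (14 days for "new", 90 days for "stale") are arbitrary defaults; align them with the archival and cleanup policies you define.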