
Git-Based PDP/Spec Process Proposal

Bob Matsuoka | Updated 2026-03-11


Purpose: Define a structured, git-based product development process that supports LLM augmentation through machine-readable formats and clear traceability from initiative through implementation.

Tooling: GitHub + JIRA only (no additional tools required)


TL;DR: Philosophy Evolution

| Aspect | Original (Clarity-Optimized) | New (Learning-Optimized) |
|--------|------------------------------|--------------------------|
| Starting Point | Spec-first: write complete spec before code | Hypothesis-first: validate before scaling |
| Primary Artifact | PRD (approved before implementation) | Initiative (the bet we're making) |
| Discovery | Implicit in hypothesis validation | First-class citizen with experiments |
| Success Measure | Spec approved, feature shipped | Metric moved, learning captured |
| Failure Mode | Shipped feature nobody uses | Learned what doesn't work (celebrated) |

Key Change: Make discovery artifacts (initiatives, experiments) first-class citizens in Git alongside specs.


1. Process Overview

1.1 The PDP Pipeline (Outcome-Driven)

INITIATIVE → EXPERIMENTS → PRD → EPIC/STORIES → CODE → MEASURE → LEARN
    │            │           │        │           │        │         │
  The Bet    Discovery    Review   Track       Test    Verify    Iterate/
 (Metric)   (hypothesis   Approve  Progress    Ship    Impact      Kill
            + learning)

Experiment = Hypothesis + Learning

Each Experiment contains:
┌──────────────────────┬──────────────────────┬─────────────────────┐
│  BEFORE (Hypothesis) │  DURING (Execution)  │  AFTER (Learning)   │
│  - What we believe   │  - What we did       │  - What we learned  │
│  - How we'll test    │  - What we observed  │  - What we recommend│
└──────────────────────┴──────────────────────┴─────────────────────┘

OLD Flow (Spec-First):

PM writes spec → Spec reviewed → Spec approved → Engineer implements → Code shipped → Done

NEW Flow (Hypothesis-First):

Team defines metric → PM creates initiative → Discovery (experiments with embedded hypotheses) →
Learnings inform spec → Spec reviewed/approved → MVP implemented →
Deploy with measurement → Learn from data → Iterate, pivot, or kill

1.2 Core Principles

| Principle | Description |
|-----------|-------------|
| Outcome-First | Every initiative ties to a team metric we're trying to move |
| Hypothesis-Driven | Start with a testable bet, not a solution |
| Evidence-Based | Progression requires evidence, not opinions |
| Time-Boxed Discovery | Max 6 weeks (3 experiments x 2 weeks) before a decision |
| Git-Native | All artifacts live in version control with full history |
| Machine-Readable | YAML frontmatter enables LLM parsing and automation |
| Traceable | Bidirectional links from initiative → experiment → PRD → JIRA → commits |
| Learning-Captured | Killed initiatives are celebrated; learnings are preserved |
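
The Machine-Readable principle can be exercised with ordinary shell tools. A minimal sketch, assuming the flat "key: value" frontmatter layout shown in the templates below (nested fields would need a real YAML parser such as yq):

```shell
# Pull a top-level frontmatter field ("key: value") from a spec file.
# Sketch only: reads lines between the first pair of "---" delimiters.
frontmatter_field() {
  local file="$1" key="$2"
  awk -v key="$key" '
    /^---$/ { fence++; next }          # count the frontmatter delimiters
    fence == 1 && $1 == key":" {       # match keys only inside the block
      sub(/^[^:]*: */, ""); print; exit
    }
  ' "$file"
}
```

For example, frontmatter_field initiatives/2026/I-2026-001.md status prints the initiative's current status, which is enough to drive the dashboards and health checks described later in this document.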

2. Artifact Types

2.1 Initiative (The Bet)

New artifact type. An initiative represents a strategic bet that a specific action will move a team metric.

---
id: I-2026-NNN
type: initiative
status: discovery | validated | delivery | shipped | killed
metric_target: "[Team metric this moves]"
hypothesis: "We believe [action] will move [metric] by [amount]"
confidence: low | medium | high
max_experiments: 3
time_box: "6 weeks max for discovery"
created: YYYY-MM-DD
updated: YYYY-MM-DD
author: [PM Name]
team: [Team Charter ID]
tags:
  - [product-area]
  - [quarter]
---

2.2 Experiment (Unified: Hypothesis + Learning)

Simplified artifact type. An experiment is a time-boxed discovery activity that contains both the hypothesis (before) and thesis/learning (after). This follows Kartik's insight: "Experiment is a Hypothesis before you start it and a Thesis (Learning) when done."

---
id: E-2026-NNN
type: experiment
parent_initiative: I-2026-NNN
status: planned | running | completed

# HYPOTHESIS (the "before" - what we believe)
hypothesis:
  statement: "We believe [X] will [Y] because [Z]"
  confidence: low | medium | high
  validation_method: interview | analytics | prototype | a_b_test | spike

# EXPERIMENT DESIGN
time_box: "2 weeks"
success_criteria: "[threshold that validates/invalidates]"

# THESIS/LEARNING (the "after" - what we learned)
learning:
  outcome: validated | invalidated | inconclusive
  key_insight: "[primary learning]"
  recommendation: "[next step]"
  evidence: []  # links to artifacts

created: YYYY-MM-DD
updated: YYYY-MM-DD
author: [PM Name]
tags:
  - [product-area]
---

Status semantics:
- planned: Hypothesis section filled, Learning section empty
- running: Actively gathering data
- completed: Both Hypothesis and Learning sections filled
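
These semantics can be spot-checked mechanically. A hedged sketch, relying on the exact template layout above (a top-level status line and the indented outcome line under learning); the CI schema validation in Section 8.6 is the robust version:

```shell
# Flag experiments marked completed whose learning.outcome is still empty.
# Sketch only: grep-based and tied to the template's exact indentation.
check_experiment() {
  local file="$1" status outcome
  status=$(grep -m1 '^status:' "$file" | awk '{print $2}')
  outcome=$(grep -m1 '^  outcome:' "$file" | awk '{print $2}')
  if [ "$status" = "completed" ] && { [ -z "$outcome" ] || [ "$outcome" = "null" ]; }; then
    echo "INCOMPLETE: $file is completed but learning.outcome is empty"
    return 1
  fi
}
```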

2.3 PRD (Existing - Enhanced)

---
id: PRD-2026-NNN
type: prd
status: draft | review | approved | in-development | shipped | learning  # NEW: learning status
parent_initiative: I-2026-NNN    # NEW: Link to initiative
validated_experiments: [E-2026-NNN]  # Links to experiments that validated the approach
epic_id: null
version: 1.0.0
created: YYYY-MM-DD
updated: YYYY-MM-DD
author: [PM Name]
approvers:
  - CPO
  - CTO
  - Engineering Lead
stakeholders:
  - CS Lead
  - Sales Lead
tags:
  - [product-area]
  - [priority]
  - [quarter]
---

2.4 Team Charter (New)

First-class artifact defining team metrics and ownership.

---
id: TC-NNN
type: team-charter
team_name: "[Team Name]"
mission: "[One-line team mission]"
metrics:
  - name: "[Primary Metric]"
    baseline: "[Current value]"
    target: "[Goal value]"
    measurement: "[How measured]"
  - name: "[Secondary Metric]"
    baseline: "[Current value]"
    target: "[Goal value]"
created: YYYY-MM-DD
updated: YYYY-MM-DD
owner: [Engineering Manager]
---

3. Repository Structure

llm-supported-pdp-sdlc/
├── README.md                     # Process overview and quick start
├── CLAUDE.md                     # AI assistant instructions
├── CONTRIBUTING.md               # How to create/modify specs
│
├── initiatives/                  # NEW: The bets (I-2026-NNN)
│   ├── 2026/
│   │   ├── I-2026-001.md
│   │   └── I-2026-002.md
│   └── archive/                  # Killed or completed initiatives
│
├── experiments/                  # Discovery artifacts with embedded hypothesis + learning
│   ├── 2026/
│   │   ├── E-2026-001.md         # Contains hypothesis (before) + learning (after)
│   │   └── E-2026-002.md
│   └── archive/
│
├── team-charters/                # Team metrics as first-class artifacts
│   ├── TC-001-revenue-team.md
│   └── TC-002-platform-team.md
│
├── prds/                         # Actionable specifications
│   ├── active/                   # In development
│   │   └── PRD-2026-001/
│   │       ├── PRD-2026-001.md   # Main PRD document
│   │       └── assets/           # Diagrams, wireframes
│   ├── shipped/                  # Completed and released
│   └── archive/                  # Deprecated or superseded
│
├── retrospectives/               # Learning capture
│   └── 2026/
│       └── RETRO-PRD-2026-001.md
│
├── templates/                    # Document templates
│   ├── initiative-template.md   # The bet
│   ├── experiment-template.md   # Unified: hypothesis + learning
│   ├── team-charter-template.md # Team metrics
│   ├── prd-template.md          # Actionable spec
│   └── retrospective-template.md
│
├── schemas/                      # Frontmatter validation
│   └── frontmatter.schema.json
│
├── scripts/                      # Automation scripts
│   ├── create-spec-pr.sh         # Create properly-formatted spec PR
│   ├── sync-reviewers.sh         # Sync PR reviewers with frontmatter
│   └── update-status.sh          # Update spec status from PR state
│
└── .github/
    ├── PULL_REQUEST_TEMPLATE/
    │   ├── initiative.md
    │   ├── experiment.md         # Unified: hypothesis + learning
    │   └── prd.md
    └── workflows/
        ├── validate-specs.yml    # CI validation
        ├── auto-assign-reviewers.yml
        └── jira-integration.yml

3.1 File Naming Conventions

| Type | Pattern | Example |
|------|---------|---------|
| Initiative | I-YYYY-NNN.md | I-2026-001.md |
| Experiment | E-YYYY-NNN.md | E-2026-001.md (contains hypothesis + learning) |
| Team Charter | TC-NNN-team-name.md | TC-001-revenue-team.md |
| PRD | PRD-YYYY-NNN.md | PRD-2026-001.md |
| Retrospective | RETRO-PRD-YYYY-NNN.md | RETRO-PRD-2026-001.md |

Removed: Separate H-YYYY-NNN.md (hypothesis) and T-YYYY-NNN.md (thesis) files. These are now sections within experiments.
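
These patterns are regular enough to lint before a PR is opened. A sketch using shell case globs (the patterns mirror the table above; it checks the basename only, not the directory):

```shell
# Return 0 if a path's basename matches one of the naming conventions.
# Sketch only: directory placement is not checked here.
valid_spec_name() {
  case "$(basename "$1")" in
    I-[0-9][0-9][0-9][0-9]-[0-9][0-9][0-9].md)          return 0 ;;  # Initiative
    E-[0-9][0-9][0-9][0-9]-[0-9][0-9][0-9].md)          return 0 ;;  # Experiment
    TC-[0-9][0-9][0-9]-*.md)                            return 0 ;;  # Team Charter
    PRD-[0-9][0-9][0-9][0-9]-[0-9][0-9][0-9].md)        return 0 ;;  # PRD
    RETRO-PRD-[0-9][0-9][0-9][0-9]-[0-9][0-9][0-9].md)  return 0 ;;  # Retrospective
    *)                                                  return 1 ;;
  esac
}
```

A side effect of this check is that the deprecated H- and T- filenames fail it, which makes the consolidation into experiments enforceable.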


4. Status Workflows

4.1 Initiative (NEW)

discovery → validated → delivery → shipped
         └→ killed (with learnings)
| Status | Description |
|--------|-------------|
| discovery | Running experiments to validate hypothesis |
| validated | Experiments confirm hypothesis; ready for spec |
| delivery | PRD approved, implementation in progress |
| shipped | Feature released, measuring impact |
| killed | Hypothesis invalidated or deprioritized |

4.2 Experiment (Unified)

planned → running → completed
   ↓         ↓          ↓
 (hypo-   (gather    (learning
  thesis    data)     filled)
  filled)
| Status | Description | Sections |
|--------|-------------|----------|
| planned | Experiment designed, hypothesis captured | Hypothesis: filled; Learning: empty |
| running | Actively gathering data | Hypothesis: filled; Learning: in progress |
| completed | Learning documented with outcome | Hypothesis: filled; Learning: filled |

Experiment Outcomes:
- validated: Evidence supports the hypothesis → proceed toward PRD
- invalidated: Evidence contradicts the hypothesis → pivot or kill
- inconclusive: Need more data → run another experiment

4.3 PRD (Enhanced)

draft → review → approved → in-development → shipped → learning
             └→ rejected → archive/
| Status | Description |
|--------|-------------|
| draft | Work in progress |
| review | Submitted for approval |
| approved | Ready for implementation |
| in-development | JIRA epic in progress |
| shipped | Released to production |
| learning | NEW: Measuring outcomes, gathering data |

5. Document Templates

5.1 Initiative Template (NEW)

---
title: "[Initiative title - the bet]"
id: I-2026-NNN
type: initiative
status: discovery
metric_target: "[Team metric this moves]"
hypothesis: "We believe [action] will move [metric] by [amount]"
confidence: low
max_experiments: 3
time_box: "6 weeks"
created: YYYY-MM-DD
updated: YYYY-MM-DD
author: [PM Name]
team: TC-NNN
related_experiments: []
related_prd: null
tags:
  - [product-area]
  - [quarter]
---

# Initiative: [Title]

## The Bet

**Hypothesis:** We believe that [specific action/capability] will [move metric] by [amount/percentage] because [underlying assumption].

**Team Metric:** [Which metric from team charter this targets]

**Current Baseline:** [Current metric value]
**Target:** [Goal metric value]
**Measurement:** [How we'll measure]

## Why This Bet?

[1-2 paragraphs on why this is worth investigating]

### Evidence/Signal
- [Signal 1: Customer feedback, data pattern, competitor move]
- [Signal 2]
- [Signal 3]

### Strategic Alignment
- **OKR:** [Which OKR this serves]
- **Company Priority:** [How it aligns]

## Discovery Plan

### Time Box
- **Max Duration:** 6 weeks
- **Max Experiments:** 3
- **Decision Date:** [Date by which we decide: ship MVP, pivot, or kill]

### Planned Experiments
1. **E-2026-NNN:** [Brief description]
2. **E-2026-NNN:** [Brief description]
3. **E-2026-NNN:** [Contingency if first two inconclusive]

## Exit Criteria

### Validate If:
- [Specific threshold that validates hypothesis]
- [e.g., "60%+ of interviewed users express willingness to pay"]

### Kill If:
- [Specific threshold that invalidates hypothesis]
- [e.g., "Technical feasibility requires >6 months"]
- After 6 weeks without clear signal

## Decision Log

| Date | Decision | Rationale | Next Step |
|------|----------|-----------|-----------|
| [Date] | [Decision] | [Why] | [What's next] |

## Outcome

**Status:** [discovery | validated | killed]
**Key Learning:** [What we learned regardless of outcome]
**Next Step:** [PRD-2026-NNN | Pivot to I-2026-NNN | Archive]

5.2 Experiment Template (Unified: Hypothesis + Learning)

---
title: "[Experiment title]"
id: E-2026-NNN
type: experiment
parent_initiative: I-2026-NNN
status: planned

# HYPOTHESIS (the "before" - what we believe)
hypothesis:
  statement: "We believe [X] will [Y] because [Z]"
  confidence: low  # low | medium | high
  validation_method: interview  # interview | analytics | prototype | a_b_test | spike

# EXPERIMENT DESIGN
time_box: "2 weeks"
success_criteria: "[threshold that validates/invalidates]"

# THESIS/LEARNING (the "after" - what we learned)
learning:
  outcome: null  # validated | invalidated | inconclusive
  key_insight: null
  recommendation: null
  evidence: []  # links to artifacts

created: YYYY-MM-DD
updated: YYYY-MM-DD
author: [PM Name]
tags:
  - [product-area]
---

# Experiment: [Title]

## Parent Initiative
- **Initiative:** [I-2026-NNN](../initiatives/2026/I-2026-NNN.md)

---

## HYPOTHESIS (Before)

### Problem Statement
[What problem are we solving? Who has it? How painful is it?]

### Hypothesis Statement
**We believe that** [target user segment]
**Will** [take action / achieve outcome]
**If we** [provide capability/feature]
**Because** [underlying assumption about user behavior]

### Confidence Level
- **Confidence:** [low | medium | high]
- **Key Assumptions:**
  1. [Assumption about user behavior]
  2. [Assumption about technical feasibility]
  3. [Assumption about market timing]

---

## EXPERIMENT DESIGN (During)

### Objective
[What specific question are we trying to answer?]

### Method
- **Type:** [Interview | Survey | Prototype | A/B Test | Analytics | Technical Spike]
- **Duration:** [X days/weeks, max 2 weeks]
- **Sample Size:** [N participants/data points needed]

### Success Criteria
| Criterion | Validates If | Invalidates If |
|-----------|--------------|----------------|
| [Metric 1] | [Threshold] | [Threshold] |
| [Metric 2] | [Threshold] | [Threshold] |

### Resources Required
- **People:** [Who's involved]
- **Tools:** [What's needed]
- **Budget:** [If any]

### Execution Log
| Date | Activity | Observation |
|------|----------|-------------|
| [Date] | [What happened] | [What we noticed] |

### Raw Data
[Summary of data collected - to be filled during/after execution]

---

## THESIS / LEARNING (After)

*Complete this section when the experiment is done.*

### Results Analysis
[What the data tells us]

### Outcome
**Result:** [validated | invalidated | inconclusive]

### Key Learning
[Primary insight - the most important thing we learned]

### Evidence Summary
| Source | Finding | Confidence |
|--------|---------|------------|
| [Data source] | [What we learned] | High/Med/Low |

### Recommendation
**Next Step:** [Continue to next experiment | Proceed to PRD | Kill initiative | Pivot]

**Rationale:** [Why this is the right next step]

### Evidence Artifacts
- [Link to interview notes]
- [Link to survey results]
- [Link to prototype/recording]

5.3 Team Charter Template

---
title: "[Team Name] Charter"
id: TC-NNN
type: team-charter
team_name: "[Team Name]"
mission: "[One-line team mission]"
created: YYYY-MM-DD
updated: YYYY-MM-DD
owner: [Engineering Manager]
tags:
  - [product-area]
---

# [Team Name] Charter

## Mission
[One-line description of what this team exists to do]

## Team Metrics

### Primary Metric
| Attribute | Value |
|-----------|-------|
| **Metric** | [Name] |
| **Definition** | [How it's calculated] |
| **Baseline** | [Current value] |
| **Target** | [Goal for period] |
| **Measurement** | [Data source/tool] |
| **Cadence** | [How often measured] |

### Secondary Metrics
| Metric | Baseline | Target | Measurement |
|--------|----------|--------|-------------|
| [Metric 1] | [Value] | [Goal] | [How] |
| [Metric 2] | [Value] | [Goal] | [How] |

## Scope

### In Scope
- [Area 1]
- [Area 2]

### Out of Scope
- [Explicitly excluded area 1]
- [Explicitly excluded area 2]

## Active Initiatives
| ID | Initiative | Status | Metric Target |
|----|------------|--------|---------------|
| [I-2026-NNN] | [Title] | [Status] | [Which metric] |

## Team Members
| Role | Person | Responsibilities |
|------|--------|------------------|
| EM | [Name] | Overall delivery |
| PM | [Name] | Product direction |
| Tech Lead | [Name] | Technical decisions |

## Stakeholders
- [Stakeholder 1]: [Interest/touchpoint]
- [Stakeholder 2]: [Interest/touchpoint]

5.4 PRD Template (Enhanced)

---
title: "[Feature name]"
id: PRD-2026-NNN
type: prd
status: draft
version: 1.0.0
created: YYYY-MM-DD
updated: YYYY-MM-DD
author: [PM Name]
parent_initiative: I-2026-NNN
validated_experiments: [E-2026-NNN]  # Experiments that validated the approach
epic_id: null                 # JIRA Epic - populated on approval
approvers:
  - CPO
  - CTO
  - Engineering Lead
stakeholders:
  - CS Lead
  - Sales Lead
tags:
  - [product-area]
  - [priority]
  - [quarter]
  - [theme]
---

# PRD: [Feature Name]

## 1. Overview

### 1.1 Initiative Link
- **Initiative:** [I-2026-NNN](../initiatives/2026/I-2026-NNN.md)
- **Validated Experiments:** [E-2026-NNN](../experiments/2026/E-2026-NNN.md)
- **Metric Target:** [Team metric this PRD aims to move]
- **Discovery Summary:** [1-2 sentences on what experiments validated]

### 1.2 Problem Statement
[From thesis - concise problem description]

### 1.3 Solution Summary
[What we're building - high level]

### 1.4 Success Metrics
| Metric | Baseline | Target | Measurement | Timeline |
|--------|----------|--------|-------------|----------|
| [KPI] | [Current] | [Goal] | [How measured] | [When to measure] |

## 2. User Stories

*[Sections 2-8 unchanged from original]*

## 9. Post-Ship Learning Plan (NEW)

### 9.1 Measurement Plan
| Metric | Tool | Baseline | Check Date |
|--------|------|----------|------------|
| [Metric] | [Analytics/Survey] | [Value] | [Date] |

### 9.2 Success Threshold
- **Hit Target:** Metric improved by [X]% - continue iteration
- **Miss Target:** Metric unchanged - retrospective required
- **Negative Impact:** Metric declined - rollback consideration

### 9.3 Learning Checkpoint
- **Date:** [2 weeks post-ship]
- **Owner:** [PM]
- **Output:** Update initiative status, create retro if needed

6. Frontmatter Standards

6.1 Required Fields (All Documents)

---
title: "Example Initiative"
id: I-2026-001               # Unique identifier
type: initiative             # initiative | experiment | prd | retrospective | team-charter
status: discovery            # See status workflows in Section 4
created: 2026-02-07          # ISO date
updated: 2026-02-07          # Last modified
author: PM Name              # Primary owner
tags:                        # For discovery and filtering
  - product-area
  - priority
  - quarter
---

6.2 Extended Fields by Type

Initiative additional fields:

metric_target: "[Team metric]"
hypothesis: "We believe..."
confidence: low | medium | high
max_experiments: 3
time_box: "6 weeks"
team: TC-NNN
related_experiments: []
related_prd: null

Experiment additional fields (unified hypothesis + learning):

# Hypothesis (before)
hypothesis:
  statement: "We believe [X] will [Y] because [Z]"
  confidence: low | medium | high
  validation_method: interview | analytics | prototype | a_b_test | spike

# Experiment design
parent_initiative: I-2026-NNN
time_box: "2 weeks"
success_criteria: "[Threshold]"

# Learning (after)
learning:
  outcome: validated | invalidated | inconclusive | null
  key_insight: "[Result]"
  recommendation: "[Next step]"
  evidence: []

PRD additional fields:

parent_initiative: I-2026-NNN
validated_experiments: [E-2026-NNN]  # Replaces thesis link
epic_id: EPIC-123
approvers:
  - CPO
  - CTO
  - Engineering Lead
stakeholders:
  - CS Lead
  - Sales Lead

6.3 Tags Taxonomy

# Product areas
product_area:
  - rate-management
  - revenue-optimization
  - reporting
  - integrations
  - platform
  - data-pipeline

# Priority
priority:
  - p0-critical
  - p1-high
  - p2-medium
  - p3-low

# Timeline
quarter:
  - q1-2026
  - q2-2026
  - q3-2026
  - q4-2026

# Themes
theme:
  - self-service
  - automation
  - ai-ml
  - ux-improvement
  - scalability
  - tech-debt

7. JIRA Integration

7.1 Spec-to-JIRA Mapping (Updated)

SPEC DOCUMENT              JIRA
─────────────────────────────────────────
Initiative (I-2026-001) →   Theme (Portfolio level)
Experiment (E-2026-001) →   Story (if tracking needed)
PRD (PRD-2026-001)      →   Epic
User Story (US-001)     →   Story
Acceptance Criteria     →   Story description

Note: Hypothesis and Thesis are now embedded in Experiments, not separate JIRA items.

7.2 Custom Fields (NEW)

Add to JIRA for full traceability:

| Field Name | Type | Description |
|------------|------|-------------|
| initiative_id | Text | I-2026-NNN |
| metric_target | Text | Team metric being moved |
| experiment_count | Number | Experiments run for this initiative |
| spec_url | URL | Link to PRD in GitHub |
| prd_id | Text | PRD-2026-NNN |

7.3 Bidirectional Linking

In Spec (frontmatter):

epic_id: EPIC-123
jira_stories:
  - STORY-456
  - STORY-457

In JIRA (Custom Fields or Labels):
- spec_url: Link to PRD in GitHub
- prd_id: PRD-2026-001
- initiative_id: I-2026-001

7.4 JIRA Workflow

  1. Initiative Created - Create JIRA Theme (optional)
  2. Discovery Phase - Track experiments as Stories (optional)
  3. PRD Approved - Create JIRA Epic with link to PRD
  4. Stories Written - Create JIRA Stories from PRD user stories
  5. Development - Stories link back to PRD for requirements
  6. Completion - Update PRD status to shipped, then learning

8. GitHub Workflow

8.1 Branching Strategy

main                           # Approved specs only
├── initiative/I-2026-001      # Initiative in discovery
├── experiment/E-2026-001      # Experiment (with hypothesis + learning)
└── prd/PRD-2026-001           # PRD in development

Note: Hypothesis and Thesis no longer have separate branches - they are sections within experiments.

8.2 Review Process

  1. Create branch for new spec
  2. Submit PR with spec content
  3. Request reviews from approvers (listed in frontmatter)
  4. Address feedback in PR comments
  5. Merge when approved (all approvers must approve)
  6. Update status in frontmatter

8.3 PR Template Example

## Spec Type
- [ ] Initiative
- [ ] Experiment (with hypothesis + learning)
- [ ] PRD

## Checklist
- [ ] Frontmatter complete and valid
- [ ] All required sections filled
- [ ] Links to related specs included
- [ ] Approvers listed in frontmatter
- [ ] Parent initiative linked (if applicable)

## Summary
[Brief description of what this spec proposes]

## Initiative Link
Initiative: [I-2026-NNN or N/A]
Metric Target: [What metric this moves]

## Validation Evidence (for hypotheses/experiments)
[Summary of validation results]

8.4 GitHub CLI Commands

[Commands from Section 6.4 of original document apply here]

8.5 Automation Scripts

[Scripts from Section 6.5 of original document apply here]

8.6 GitHub Actions Integration (Updated)

CI Enforcement Change

Original rule: No implementation PR without approved spec

New rule: No implementation PR without:
1. An approved spec (PRD in approved status), AND
2. The parent initiative in validated or delivery status

Workflow: .github/workflows/validate-specs.yml

name: Validate Spec Frontmatter

on:
  pull_request:
    paths:
      - 'initiatives/**/*.md'
      - 'experiments/**/*.md'
      - 'prds/**/*.md'
      - 'team-charters/**/*.md'

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # fetch full history so the PR base SHA is available for git diff

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm install -g ajv-cli gray-matter-cli

      - name: Extract and validate frontmatter
        run: |
          # Find all changed spec files
          CHANGED_FILES=$(git diff --name-only ${{ github.event.pull_request.base.sha }} ${{ github.sha }} | grep -E '^(initiatives|experiments|prds|team-charters)/.*\.md$' || true)

          for file in $CHANGED_FILES; do
            if [[ -f "$file" ]]; then
              echo "Validating: $file"

              # Extract frontmatter to temp file
              gray-matter "$file" --output json > /tmp/frontmatter.json

              # Determine schema based on type
              TYPE=$(jq -r '.data.type' /tmp/frontmatter.json)

              case "$TYPE" in
                initiative)
                  SCHEMA="schemas/initiative.schema.json"
                  ;;
                experiment)
                  SCHEMA="schemas/experiment.schema.json"
                  ;;
                prd)
                  SCHEMA="schemas/prd.schema.json"
                  # Validate parent initiative status
                  PARENT_INIT=$(jq -r '.data.parent_initiative' /tmp/frontmatter.json)
                  if [[ -n "$PARENT_INIT" && "$PARENT_INIT" != "null" ]]; then
                    INIT_FILE="initiatives/2026/${PARENT_INIT}.md"
                    if [[ -f "$INIT_FILE" ]]; then
                      INIT_STATUS=$(grep "^status:" "$INIT_FILE" | awk '{print $2}')
                      if [[ "$INIT_STATUS" != "validated" && "$INIT_STATUS" != "delivery" ]]; then
                        echo "ERROR: PRD parent initiative $PARENT_INIT is in '$INIT_STATUS' status, must be 'validated' or 'delivery'"
                        exit 1
                      fi
                    fi
                  fi
                  ;;
                team-charter)
                  SCHEMA="schemas/team-charter.schema.json"
                  ;;
                *)
                  echo "Unknown type: $TYPE in $file"
                  continue
                  ;;
              esac

              # Validate against schema
              if [[ -f "$SCHEMA" ]]; then
                ajv validate -s "$SCHEMA" -d /tmp/frontmatter.json
              fi
              echo "Valid: $file"
            fi
          done

      - name: Check required fields
        run: |
          CHANGED_FILES=$(git diff --name-only ${{ github.event.pull_request.base.sha }} ${{ github.sha }} | grep -E '^(initiatives|experiments|prds|team-charters)/.*\.md$' || true)

          ERRORS=""
          for file in $CHANGED_FILES; do
            if [[ -f "$file" ]]; then
              # Check for required fields
              if ! grep -q "^id:" "$file"; then
                ERRORS+="Missing 'id' in $file\n"
              fi
              if ! grep -q "^type:" "$file"; then
                ERRORS+="Missing 'type' in $file\n"
              fi
              if ! grep -q "^status:" "$file"; then
                ERRORS+="Missing 'status' in $file\n"
              fi
              if ! grep -q "^author:" "$file"; then
                ERRORS+="Missing 'author' in $file\n"
              fi
            fi
          done

          if [[ -n "$ERRORS" ]]; then
            echo -e "$ERRORS"
            exit 1
          fi

9. Risk Mitigations

Risk 1: Discovery Becomes Waterfall

Symptom: Teams spend months in "discovery" without shipping anything.

Mitigations:
- Hard time-box: max 3 experiments per initiative, 2 weeks each (6 weeks total)
- Forcing function: if still uncertain after 6 weeks, force a decision: ship an MVP or kill
- Tracking: monitor discovery-to-delivery cycle time; flag if >8 weeks
- Escalation: an initiative in discovery for >6 weeks requires leadership review to continue
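
The escalation trigger can key off the created date in frontmatter rather than file modification time (which the find -mtime check in the cheat sheet uses, and which churns on every edit). A sketch assuming GNU date (Linux runners; macOS would need gdate):

```shell
# Days since an initiative's frontmatter "created:" date (ISO YYYY-MM-DD).
# Sketch only: assumes GNU date. Pair with a "status: discovery" grep to
# flag initiatives past the 6-week (42-day) time box.
initiative_age_days() {
  local created
  created=$(grep -m1 '^created:' "$1" | awk '{print $2}')
  echo $(( ( $(date +%s) - $(date -d "$created" +%s) ) / 86400 ))
}
```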

Risk 2: Hypothesis Theater

Symptom: Teams write hypotheses to justify predetermined solutions.

Mitigations:
- Review mechanism: make hypotheses the subject of 1:1s and sprint reviews
- Celebrate kills: explicitly celebrate killed initiatives ("we learned this won't work")
- Health metric: track the kill rate; if fewer than 30% of initiatives hit their targets, hypotheses may be wishful thinking
- Evidence standards: require structured experiment results, not just "we talked to some users"

Risk 3: Experiment Rigor Collapse

Symptom: Experiments become rubber-stamp approvals rather than real tests.

Mitigations:
- Minimum evidence thresholds: define what constitutes sufficient evidence per experiment type
- Structured templates: require success/failure criteria BEFORE the experiment runs
- Peer review: experiment designs are reviewed by another PM before execution
- Outcome tracking: track the experiment outcome distribution (expect roughly a 40-60% invalidation rate)
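
The outcome-distribution check is a one-liner per outcome. A sketch matching the indented outcome line in the experiment template (directory layout as in Section 3):

```shell
# Count experiment outcomes across a directory to watch the
# validation/invalidation ratio (expect roughly 40-60% invalidated).
# Sketch only: greps the template's "  outcome:" line.
outcome_distribution() {
  local dir="$1" o
  for o in validated invalidated inconclusive; do
    printf '%s: %s\n' "$o" "$(grep -rl "^  outcome: $o" "$dir" 2>/dev/null | wc -l | tr -d ' ')"
  done
}
```

Run against experiments/2026/ at the end of a quarter; a near-zero invalidated count is the "rubber stamp" symptom this risk describes.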

Risk 4: Metric Gaming

Symptom: Teams choose easy-to-move metrics or manipulate measurement.

Mitigations:
- Team charter review: metrics approved by leadership annually
- Counter-metrics: require secondary metrics that would catch gaming
- External validation: periodic audits of metric calculation


10. Definition of Done

10.1 Initiative Done

  • [ ] Hypothesis clearly stated with metric target
  • [ ] Time-box defined (max 6 weeks discovery)
  • [ ] At least one experiment completed (with hypothesis + learning)
  • [ ] Status updated (validated/killed)
  • [ ] Key learning documented
  • [ ] If validated: PRD created and linked

10.2 Experiment Done (Unified)

An experiment is done when both the Hypothesis (before) and Learning (after) sections are complete:

Hypothesis section (before running):
  • [ ] Parent initiative linked
  • [ ] Problem statement clear
  • [ ] Hypothesis statement in "We believe... will... if... because..." format
  • [ ] Confidence level set
  • [ ] Success criteria defined (validates if / invalidates if)

Learning section (after running):
  • [ ] Data collected and documented
  • [ ] Outcome recorded (validated/invalidated/inconclusive)
  • [ ] Key insight captured
  • [ ] Evidence summary with sources
  • [ ] Recommendation made (continue/ship/kill/pivot)
  • [ ] Evidence artifacts linked

10.3 PRD Done

  • [ ] Parent initiative linked and in validated/delivery status
  • [ ] Validated experiments linked
  • [ ] All user stories with acceptance criteria
  • [ ] Requirements clear and testable
  • [ ] Dependencies identified
  • [ ] All approvers signed off
  • [ ] JIRA Epic created and linked

10.4 Feature Done

  • [ ] All JIRA stories completed
  • [ ] Acceptance criteria verified
  • [ ] PRD status -> shipped -> learning
  • [ ] Post-ship metrics captured
  • [ ] Learning checkpoint completed
  • [ ] Retrospective created if target missed

11. LLM Augmentation Points

11.1 Where AI Assists

| Stage | AI Capability | Human Responsibility |
|-------|---------------|----------------------|
| Initiative | Generate hypothesis alternatives | Select and commit to bet |
| Initiative | Identify relevant metrics | Validate metric appropriateness |
| Experiment (Hypothesis) | Generate hypothesis statements from problem | Select and validate |
| Experiment (Design) | Design experiment structure | Ensure rigor and feasibility |
| Experiment (Learning) | Summarize research findings | Interpret meaning |
| Experiment (Learning) | Structure evidence into thesis | Verify accuracy |
| PRD | Generate user stories from goals | Refine and prioritize |
| PRD | Draft acceptance criteria | Validate testability |
| PRD | Identify edge cases | Assess coverage |
| Tickets | Decompose to JIRA stories | Estimate effort |
| Review | Check completeness | Make decisions |
| Learning | Synthesize learnings across initiatives | Apply to future work |

11.2 Machine-Readable Patterns

Structured YAML frontmatter enables:
- Automated status dashboards
- Spec search and filtering
- Progress tracking
- Initiative health monitoring
- LLM context injection

Gherkin acceptance criteria enable:
- Automated test generation
- Unambiguous requirements
- LLM-assisted QA
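
For reference, a Gherkin-style acceptance criterion looks like the following (an illustrative scenario with made-up names, not taken from an actual PRD):

```gherkin
Feature: Rate plan publishing
  Scenario: PM publishes a draft rate plan
    Given a rate plan in "draft" status
    When the PM publishes the rate plan
    Then the rate plan status is "published"
    And downstream integrations receive the updated rates
```

The Given/When/Then structure is what makes the criterion both unambiguous for reviewers and parseable for test generation.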


12. Implementation Plan

Phase 1: Foundation (Week 1-2)

  • [ ] Set up repository structure (add initiatives/, experiments/, team-charters/)
  • [ ] Create templates (initiative, unified experiment, team-charter)
  • [ ] Remove deprecated templates (hypothesis-template.md, thesis-template.md)
  • [ ] Update PRD template with validated_experiments link
  • [ ] Add PR templates for new artifact types
  • [ ] Document in README

Phase 2: Pilot (Week 3-4)

  • [ ] Create one team charter as pilot
  • [ ] Migrate one active feature through the full pipeline:
      • Create an initiative from an existing idea
      • Document experiments with embedded hypothesis + learning
      • Convert the existing PRD to the new template with initiative and experiment links

Phase 3: Rollout (Month 2)

  • [ ] Team training on outcome-driven process
  • [ ] Create team charters for all teams
  • [ ] JIRA custom fields for initiative/metric linking
  • [ ] GitHub Actions for validation
  • [ ] Dashboards for initiative health

Phase 4: Optimization (Month 3+)

  • [ ] Review initiative kill rate (target: 30-50%)
  • [ ] Review discovery-to-delivery cycle time
  • [ ] Refine time-boxes based on learnings
  • [ ] Automate metric tracking from analytics

13. Success Metrics

| Metric | Baseline | Target | Measurement |
|--------|----------|--------|-------------|
| Initiative-to-ship time | Unknown | Measure baseline, then improve | Git timestamps |
| Discovery phase duration | Unknown | <6 weeks average | Initiative status dates |
| Initiative kill rate | Unknown | 30-50% (healthy learning) | Status counts |
| Spec-to-ship time | Unknown | Reduce by 20% | Git timestamps |
| PRD revision cycles | Unknown | Reduce by 30% | PR comment threads |
| Feature success rate | Unknown | Improve hypothesis validation rate | Retro outcomes |
| Traceability | None | 100% of features have an initiative → ship chain | Frontmatter links |
| Post-ship learning capture | Unknown | 100% of shipped features have a learning checkpoint | Status transitions |
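
The "Git timestamps" measurement can be a small helper. A sketch only: it assumes status transitions are committed (as the cheat sheet prescribes) and that it runs inside the spec repo; --follow covers the move to archive/:

```shell
# Days between the first and most recent commit touching a spec file,
# as a proxy for initiative-to-ship time. Sketch only: assumes status
# changes are committed and the command runs inside the spec repo.
initiative_cycle_days() {
  local file="$1" first last
  first=$(git log --follow --format=%ct -- "$file" | tail -1)
  last=$(git log -1 --format=%ct -- "$file")
  echo $(( (last - first) / 86400 ))
}
```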

14. CLI Cheat Sheet

Quick reference for common spec operations using gh CLI.

New Initiative

# 1. Create from template
cp templates/initiative-template.md initiatives/2026/I-2026-XXX.md

# 2. Edit the file with your initiative details

# 3. Create branch and PR
git checkout -b initiative/I-2026-XXX
git add initiatives/2026/I-2026-XXX.md
git commit -m "feat(initiative): Add I-2026-XXX - Brief Description"
git push -u origin initiative/I-2026-XXX

# 4. Create PR
gh pr create \
  --title "Initiative: Brief Description" \
  --body-file initiatives/2026/I-2026-XXX.md \
  --reviewer "pm-lead-username" \
  --label "initiative,discovery"

New Experiment

# 1. Create from template
cp templates/experiment-template.md experiments/2026/E-2026-XXX.md

# 2. Edit with experiment details, link to parent initiative

# 3. Create branch and PR
git checkout -b experiment/E-2026-XXX
git add experiments/2026/E-2026-XXX.md
git commit -m "feat(experiment): Add E-2026-XXX for I-2026-YYY"
git push -u origin experiment/E-2026-XXX

# 4. Create PR
gh pr create \
  --title "Experiment: Brief Description" \
  --body-file experiments/2026/E-2026-XXX.md \
  --label "experiment,planned"

Validate Initiative (Move to Delivery)

# 1. Update initiative status
# Note: -i '' is macOS/BSD sed syntax; on GNU/Linux use sed -i without the ''
sed -i '' 's/status: discovery/status: validated/' initiatives/2026/I-2026-XXX.md
git add initiatives/2026/I-2026-XXX.md
git commit -m "chore: Mark I-2026-XXX as validated"
git push

# 2. Create PRD linked to initiative
cp templates/prd-template.md prds/active/PRD-2026-XXX/PRD-2026-XXX.md
# Edit to add: parent_initiative: I-2026-XXX

Kill Initiative

# 1. Update status and document learning
sed -i '' 's/status: discovery/status: killed/' initiatives/2026/I-2026-XXX.md
# Add learning to ## Outcome section

git add initiatives/2026/I-2026-XXX.md
git commit -m "chore: Kill I-2026-XXX - [brief reason]"
git push

# 2. Move to archive
git mv initiatives/2026/I-2026-XXX.md initiatives/archive/
git commit -m "chore: Archive killed initiative I-2026-XXX"
git push

Query by Initiative Status

# All initiatives in discovery
gh pr list --label "initiative,discovery" --state open

# All validated initiatives
grep -l "status: validated" initiatives/2026/*.md

# Initiatives older than 6 weeks in discovery (needs attention)
find initiatives/2026 -name "I-*.md" -mtime +42 -exec grep -l "status: discovery" {} \;

Daily Workflow

# Morning: Check initiative health
echo "=== Initiatives in Discovery ==="
grep -l "status: discovery" initiatives/2026/*.md 2>/dev/null | wc -l

echo "=== Experiments Running ==="
grep -l "status: running" experiments/2026/*.md 2>/dev/null | wc -l

echo "=== PRs Awaiting My Review ==="
gh pr list --search "review-requested:@me"

echo "=== My Open PRs ==="
gh pr list --author "@me" --state open

Status: DRAFT | Created: 2026-02-07 | Updated: 2026-02-07 | Author: Bob Matsuoka