
Analysis: Why Metrics Show 0.0 Values

Date: 2025-11-17
Analysis Type: Data Quality Investigation


Executive Summary

All teams are showing 0.0 values for Flow Time, P85 Flow Time, Lead Time, and MTTR metrics. This investigation reveals the root cause and provides recommendations.


Root Cause Identified

Issues with transition history: 0 for ALL teams

ACQREG:  0 out of 1000 issues have transition history
LGMT:    0 out of 1000 issues have transition history
O2C:     0 out of 1000 issues have transition history
SSH:     0 out of 1000 issues have transition history
UAS:     0 out of 1000 issues have transition history

What This Means

The JIRA data fetch is NOT capturing workflow transition history. The calculators require this data to compute accurate metrics:

  1. Flow Time (Flow Framework)

    • Requires: Status transition timestamps
    • Current behavior: Falls back to cycle time (created → resolved)
    • Why it's 0: Fallback calculation not working properly
  2. P85 Flow Time

    • Requires: List of flow times
    • Current behavior: Returns 0 when flow_times list is empty
  3. Lead Time for Changes (DORA)

    • Requires: First "in progress" transition → resolution
    • Current behavior: Falls back to cycle time without transitions
    • Why it's 0: Same as Flow Time
  4. MTTR (DORA)

    • Requires: Incident creation → resolution timestamps
    • Current behavior: Filters by severity levels
    • Why it's 0: No incidents match the configured severity filter
  5. Deployment Frequency (DORA)

    • Should be calculated from completed features
    • Why it's 0: Calculator is using period_issues (issues created in period) instead of completed_in_period
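The P85 behavior in item 2 can be reproduced in isolation. A minimal sketch of a percentile helper (linear interpolation between closest ranks; this is a hypothetical stand-in, not the project's actual implementation) that returns 0 for an empty list, exactly as described:

```python
from typing import List

def p85(values: List[float]) -> float:
    """85th percentile of a list of flow times (days).

    Returns 0.0 for an empty list, mirroring the calculator's reported
    behavior: no transition data -> empty flow_times -> 0.0.
    """
    if not values:
        return 0.0
    ordered = sorted(values)
    rank = 0.85 * (len(ordered) - 1)
    lower = int(rank)
    upper = min(lower + 1, len(ordered) - 1)
    fraction = rank - lower
    # Interpolate between the two closest ranks.
    return ordered[lower] + (ordered[upper] - ordered[lower]) * fraction
```

So even a correct P85 implementation reports 0.0 whenever the upstream flow-time list is empty, which is why fixing the fallback (or fetching transitions) must come first.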

What IS Working

Despite missing transition data, we DO have:

  1. Issue created_at timestamps: 100% coverage
  2. Issue resolved_at timestamps: 44-62% coverage (varies by team)
  3. Completed issues: 440-620 per team
  4. Defect tracking: 60-120 defects per team
  5. Cycle time calculation: Can calculate created → resolved

This is sufficient data to calculate meaningful metrics, but the calculators need adjustment.
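The cycle-time fallback mentioned in item 5 is simple date arithmetic. A sketch of what the fallback computes (an illustrative function, not the project's actual `calculate_cycle_time_days`):

```python
from datetime import datetime
from typing import Optional

def cycle_time_days(created_at: Optional[datetime],
                    resolved_at: Optional[datetime]) -> Optional[float]:
    """Created -> resolved duration in days; None when either timestamp
    is missing (e.g. unresolved issues)."""
    if created_at is None or resolved_at is None:
        return None
    return (resolved_at - created_at).total_seconds() / 86400.0

cycle_time_days(datetime(2025, 10, 1), datetime(2025, 10, 8))  # 7.0
```

Because created_at coverage is 100% and resolved_at coverage is 44-62%, this fallback can produce non-zero flow and lead times for every resolved issue today.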


Why Deployment Frequency Shows 0.0

Looking at the DORA calculator (src/calculators/dora_metrics.py:44-48):

# Filter issues created in period
period_issues = [
    issue for issue in all_issues
    if issue.created_at and issue.created_at > period_start
]

The problem: it selects issues created in the period, then later filters by done status. Deployment frequency should count features completed in the period (resolved_at within the window), not features created in it.
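A small self-contained demonstration of why the created_at filter miscounts. The `Issue` dataclass here is a minimal stand-in for the project's model, with invented keys for illustration:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Issue:
    """Minimal stand-in for the project's Issue model."""
    key: str
    created_at: Optional[datetime]
    resolved_at: Optional[datetime]

period_start = datetime(2025, 10, 1)
period_end = datetime(2025, 11, 1)

issues = [
    # Created before the period but resolved inside it: a real completion
    # for this window, yet the created_at filter drops it.
    Issue("FEAT-1", datetime(2025, 9, 15), datetime(2025, 10, 10)),
    # Created inside the period but still unresolved: counted by the
    # created_at filter even though nothing shipped.
    Issue("FEAT-2", datetime(2025, 10, 20), None),
]

by_created = [i for i in issues
              if i.created_at and i.created_at > period_start]
by_resolved = [i for i in issues
               if i.resolved_at and period_start < i.resolved_at <= period_end]

print([i.key for i in by_created])   # misses the shipped feature
print([i.key for i in by_resolved])  # counts the actual completion
</```

In short: most work completed in a reporting window was created before it, so filtering by created_at systematically drops real deployments while admitting unfinished work.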


Recommendations

Immediate Fix (High Priority)

  1. Update JIRA data fetch to include transition history
    • Add expand=changelog to JIRA API request
    • Parse changelog into StatusTransition objects
    • This will enable accurate flow time calculations

Short-term Workaround (Can implement now)

  1. Fix calculators to use cycle time when transitions are missing

    • Update FlowMetricsCalculator._calculate_flow_times()
    • Ensure fallback to cycle time works properly
    • Currently returns an empty list; it should return cycle times
  2. Fix Deployment Frequency calculation

    • Change from "created in period" to "resolved in period"
    • This matches how velocity is calculated
  3. Fix MTTR calculation

    • Make severity filtering optional
    • Include all defects when no severity filter configured
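The optional-severity behavior in recommendation 3 can be sketched as follows. Field names and the dict shape are assumptions for illustration, not the project's actual MTTR code:

```python
from typing import List, Optional

def filter_incidents(defects: List[dict],
                     severity_filter: Optional[List[str]] = None) -> List[dict]:
    """Keep defects matching the configured severities.

    When no severity filter is configured (None or empty), include every
    defect, so MTTR can still be computed from all resolved defects.
    """
    if not severity_filter:
        return list(defects)
    return [d for d in defects if d.get("severity") in severity_filter]
```

With this change, teams that never populate a severity field still get an MTTR figure from their 60-120 tracked defects instead of an empty incident set.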

Code Changes Required

File: src/adapters/jira_adapter.py

# Current
response = self.session.get(url, params={'maxResults': batch_size, ...})

# Needed
response = self.session.get(url, params={
    'maxResults': batch_size,
    'expand': 'changelog',  # ADD THIS
    ...
})

Then parse changelog:

from typing import Dict, List  # at module top, alongside the StatusTransition import

def _parse_transitions(self, jira_issue: Dict) -> List[StatusTransition]:
    """Parse JIRA changelog into transitions"""
    transitions = []
    changelog = jira_issue.get('changelog', {}).get('histories', [])

    for history in changelog:
        for item in history.get('items', []):
            if item.get('field') == 'status':
                transitions.append(StatusTransition(
                    from_status=item.get('fromString', ''),
                    to_status=item.get('toString', ''),
                    timestamp=self._parse_date(history.get('created')),
                    author=history.get('author', {}).get('displayName')
                ))

    return transitions
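For reference, a trimmed example of the changelog shape that expand=changelog returns, run through a standalone version of the parsing loop above (plain tuples instead of StatusTransition, raw timestamp strings instead of parsed dates, so the sketch has no project dependencies):

```python
sample_issue = {
    "changelog": {
        "histories": [
            {
                "created": "2025-10-02T09:00:00.000+0000",
                "author": {"displayName": "A. Dev"},
                "items": [
                    # Only 'status' items become transitions; other field
                    # changes in the same history entry are skipped.
                    {"field": "status",
                     "fromString": "To Do", "toString": "In Progress"},
                    {"field": "assignee",
                     "fromString": None, "toString": "A. Dev"},
                ],
            }
        ]
    }
}

transitions = [
    (item.get("fromString", ""), item.get("toString", ""),
     history.get("created"),
     history.get("author", {}).get("displayName"))
    for history in sample_issue.get("changelog", {}).get("histories", [])
    for item in history.get("items", [])
    if item.get("field") == "status"
]

print(transitions)
# [('To Do', 'In Progress', '2025-10-02T09:00:00.000+0000', 'A. Dev')]
```

Note that the timestamp lives on the history entry, not on the item, which is why the parser reads history.get('created') for every status item.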

File: src/calculators/dora_metrics.py:44-52

# Current (WRONG - filters by created_at)
period_issues = [
    issue for issue in all_issues
    if issue.created_at and issue.created_at > period_start
]

# Fixed (filters by resolved_at)
period_issues = [
    issue for issue in all_issues
    if issue.resolved_at and period_start < issue.resolved_at <= period_end
]

File: src/calculators/flow_metrics.py:110-135

# Ensure fallback actually works
def _calculate_flow_times(self, completed_issues: List[Issue]) -> List[float]:
    flow_times = []
    method = self.config.kpi_preferences.lead_time_method

    for issue in completed_issues:
        if method == "first_in_progress" and issue.transitions:
            # Accurate method with transitions
            flow_time = issue.calculate_lead_time_days(...)
            if flow_time is not None:
                flow_times.append(flow_time)
        else:
            # FALLBACK: Use cycle time (this branch should work!)
            flow_time = issue.calculate_cycle_time_days()
            if flow_time is not None:  # <-- This check may be too restrictive
                flow_times.append(flow_time)

    return flow_times

Impact Assessment

Current State

  • ❌ Flow Time: Inaccurate (0 days)
  • ❌ P85 Flow Time: Inaccurate (0 days)
  • ❌ Lead Time: Inaccurate (0 days)
  • ❌ MTTR: Inaccurate (0 hours)
  • ❌ Deployment Frequency: Incorrect (0.0/week)
  • ✅ Velocity: CORRECT (uses completed_in_period correctly)
  • ✅ Load (WIP): CORRECT
  • ✅ Distribution: CORRECT
  • ✅ Change Failure Rate: CORRECT

After Transition History Fix

  • ✅ All metrics accurate with proper workflow data
  • ✅ Can distinguish active work time from wait time
  • ✅ Flow efficiency can be calculated accurately

After Cycle Time Fallback Fix (without transition history)

  • ⚠️ Flow Time: Approximate (created → resolved)
  • ⚠️ Lead Time: Approximate (created → resolved)
  • ⚠️ Less accurate but better than 0
  • ✅ Deployment Frequency: Correct
  • ✅ MTTR: Correct for tracked incidents

Recommendation: Priority Order

  1. HIGH PRIORITY: Fix deployment frequency calculation (1 line change)
  2. HIGH PRIORITY: Fix cycle time fallback logic (ensure it returns values)
  3. MEDIUM PRIORITY: Add transition history to JIRA fetch (enables accurate metrics)
  4. LOW PRIORITY: Fix MTTR severity filtering

Validation After Fixes

After implementing fixes, verify:

  1. Run python investigate_and_report_outliers.py
  2. Check "Data Availability Analysis" section
  3. Confirm:
    • Flow Time > 0 for teams with resolved issues
    • Lead Time > 0 for teams with resolved issues
    • Deployment Frequency > 0 for teams with completed features
    • MTTR > 0 for teams with resolved defects

Status: Analysis Complete
Next Step: Implement recommended fixes