## What Periscope tracks

Periscope computes cycle time from GitHub PR merge events. The dashboard shows:

- Percentiles — p50, p75, and p95 cycle times (in hours)
- Average cycle time
- Weekly trends showing how cycle time changes over time
- Individual PR data for identifying outliers
## How it is calculated

Periscope records a cycle time when it receives a pull_request webhook event for a PR that has been closed and merged. Cycle time is the difference between the created_at and merged_at timestamps reported by GitHub.
Percentiles are computed across all merged PRs in the selected time range for your monitored repositories.
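The calculation above can be sketched as follows. This is a minimal illustration, not Periscope's implementation: the timestamp format matches GitHub's ISO-8601 payload fields, but the helper functions and the nearest-rank percentile convention are assumptions.

```python
from datetime import datetime, timezone

FMT = "%Y-%m-%dT%H:%M:%SZ"  # GitHub's ISO-8601 timestamp format

def cycle_time_hours(created_at: str, merged_at: str) -> float:
    """Hours between PR open and merge, from GitHub's created_at/merged_at."""
    opened = datetime.strptime(created_at, FMT).replace(tzinfo=timezone.utc)
    merged = datetime.strptime(merged_at, FMT).replace(tzinfo=timezone.utc)
    return (merged - opened).total_seconds() / 3600

def percentile(sorted_hours: list[float], p: float) -> float:
    """A simple nearest-rank-style percentile over an ascending list."""
    idx = max(0, min(len(sorted_hours) - 1, round(p / 100 * len(sorted_hours)) - 1))
    return sorted_hours[idx]

# Three merged PRs: 8 h, 24 h, and 96 h open-to-merge.
hours = sorted(cycle_time_hours(c, m) for c, m in [
    ("2024-05-01T09:00:00Z", "2024-05-01T17:00:00Z"),
    ("2024-05-02T09:00:00Z", "2024-05-03T09:00:00Z"),
    ("2024-05-03T09:00:00Z", "2024-05-07T09:00:00Z"),
])
p50, p95 = percentile(hours, 50), percentile(hours, 95)  # 24.0 and 96.0
```

For a production-grade version, Python's standard-library `statistics.quantiles` offers interpolated percentiles rather than nearest-rank.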
## Interpreting the data
- A p50 under 24 hours is a strong indicator of healthy review practices and good team flow.
- A p50 over 72 hours typically signals bottlenecks — slow reviews, large PRs, or CI pipeline issues.
- Large gap between p50 and p95 means most PRs flow well but some get stuck. Investigate the tail — are they large PRs, PRs from specific contributors, or PRs to specific services?
- Increasing weekly trend may indicate growing team size (more review load), accumulating tech debt, or process friction.
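One way to operationalize the p50-vs-p95 gap heuristic above is a tail ratio. The 3x threshold here is an illustrative assumption for the sketch, not a Periscope setting:

```python
def tail_ratio(p50: float, p95: float) -> float:
    """Ratio of p95 to p50 cycle time; a large value means a long tail of stuck PRs."""
    return p95 / p50

def has_long_tail(p50: float, p95: float, threshold: float = 3.0) -> bool:
    # threshold is an assumption: flag when p95 exceeds 3x the median
    return tail_ratio(p50, p95) > threshold
```

For example, `has_long_tail(20.0, 180.0)` flags a team whose median PR merges in under a day but whose slowest PRs sit for over a week, while `has_long_tail(20.0, 40.0)` does not.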
## Common causes of long cycle times
- Large PRs that are hard to review (see size vs time)
- Insufficient reviewer capacity or unclear ownership
- Slow CI pipelines blocking merge
- Timezone misalignment between author and reviewers
- PRs waiting for manual QA or product sign-off
## Reducing cycle time
- Break work into smaller PRs (under 400 lines)
- Set review SLAs and use PR assignment or CODEOWNERS
- Invest in faster CI — flaky or slow tests are the biggest hidden tax
- Use draft PRs to get early feedback before the full review
- Automate what you can — auto-merge when CI passes and approvals are met
## Cycle time vs lead time

These two metrics are related but measure different things:

| Metric | Measures | Data source |
|---|---|---|
| PR cycle time | PR open to merge | GitHub |
| Lead time for changes | PR merge to production deploy | GitHub + CI/CD |
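To make the distinction concrete, here is a sketch computing both metrics for a single change. The timestamps and the deploy event are illustrative; per the table, Periscope itself only measures the first interval, while the second requires data from your CI/CD system.

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%SZ"

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two ISO-8601 UTC timestamps."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 3600

pr_opened = "2024-05-01T09:00:00Z"  # created_at (GitHub)
pr_merged = "2024-05-02T15:00:00Z"  # merged_at (GitHub)
deployed  = "2024-05-02T18:00:00Z"  # illustrative deploy-finished time (CI/CD)

cycle_time = hours_between(pr_opened, pr_merged)  # 30.0 h: open -> merge
lead_time  = hours_between(pr_merged, deployed)   #  3.0 h: merge -> production
```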