Engineering Metrics for Startups — A Free, Practical Guide
What to measure, why it matters, and how to get started in 5 minutes
Why Metrics Matter
There is an old management adage: what gets measured gets improved. In engineering, this is especially true. Without metrics, conversations about delivery health devolve into gut feelings and anecdotes. "I think we're shipping faster." "It feels like PRs are taking longer." "Are we on track? I'm not sure."
Metrics replace guesswork with data. They give you a shared language for discussing performance, a baseline to improve against, and an early warning system for problems that are easier to fix when you catch them early.
But here is the catch: most startups do not track engineering metrics because the tools are expensive, complex, or both. You should not need an enterprise license and a dedicated platform team to know how fast your team ships.
This guide covers the metrics that matter, what good looks like, and how to start tracking them for free using GitHub and Octoboard.
The DORA Metrics
The DevOps Research and Assessment (DORA) program — now part of Google Cloud — identified four key metrics that predict software delivery performance. These are the gold standard, backed by years of research across thousands of teams.
Deployment Frequency
How often does your team deploy to production? High-performing teams deploy on demand, often multiple times per day. Lower performers deploy weekly, monthly, or less often.
Why it matters: Frequent deployments mean smaller batches, lower risk per deploy, and faster feedback loops. If you are deploying once a month, every release is a big, scary event.
How to track it: Count your production deployment workflow runs in GitHub Actions over time. Octoboard does this automatically.
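If you want to compute this yourself, the counting is simple once you have the timestamps. Here is a minimal sketch that buckets successful production deploy runs by ISO week; the sample timestamps are illustrative, and it assumes you have already fetched your workflow runs from the GitHub Actions API.

```python
# Sketch: weekly deployment frequency from workflow-run timestamps.
# Assumes the successful production deploy runs have already been
# fetched from the GitHub Actions API; sample data is illustrative.
from collections import Counter
from datetime import datetime

def deploys_per_week(run_timestamps):
    """Count deploys, bucketed by (ISO year, ISO week)."""
    weeks = Counter()
    for ts in run_timestamps:
        year, week, _ = ts.isocalendar()
        weeks[(year, week)] += 1
    return dict(weeks)

runs = [
    datetime(2024, 5, 6, 9, 30),    # Monday, ISO week 19
    datetime(2024, 5, 7, 14, 0),    # same week
    datetime(2024, 5, 13, 11, 15),  # ISO week 20
]
print(deploys_per_week(runs))  # {(2024, 19): 2, (2024, 20): 1}
```

Tracking the weekly count over a quarter is usually more useful than any single number: the trend tells you whether your batches are shrinking.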
Lead Time for Changes
How long does it take from a commit being made to that commit running in production? This measures the efficiency of your entire delivery pipeline — from code to production.
Why it matters: Long lead times mean slow feedback. A bug fix that takes three days to reach production is three days of impact on users.
How to track it: Measure the time between PR merge and successful deployment. Octoboard calculates this from your GitHub Actions workflow data.
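The calculation itself is just a median over merge-to-deploy gaps. A minimal sketch, assuming you have already paired each PR's merge time with the finish time of the deploy that shipped it (the pairs below are illustrative):

```python
# Sketch: median lead time for changes, given (merged_at, deployed_at)
# datetime pairs. Pairing PRs with deploys is the real work; this
# sample data is illustrative.
from datetime import datetime
from statistics import median

def median_lead_time(changes):
    """changes: list of (merged_at, deployed_at) datetime pairs."""
    return median(deployed - merged for merged, deployed in changes)

changes = [
    (datetime(2024, 5, 6, 10, 0), datetime(2024, 5, 6, 12, 0)),  # 2h
    (datetime(2024, 5, 7, 9, 0),  datetime(2024, 5, 7, 18, 0)),  # 9h
    (datetime(2024, 5, 8, 15, 0), datetime(2024, 5, 9, 15, 0)),  # 24h
]
print(median_lead_time(changes))  # 9:00:00
```

Prefer the median over the mean here: one stuck PR can drag an average badly, while the median reflects the typical change.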
Mean Time to Recovery (MTTR)
When something breaks in production, how long does it take to restore service? This is not about preventing failures — failures are inevitable. It is about how quickly you bounce back.
Why it matters: Teams with low MTTR can afford to ship faster because they know they can recover quickly. Teams with high MTTR ship cautiously, which paradoxically makes things worse.
Benchmark: Elite teams recover in under one hour. Low performers take days or weeks.
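MTTR is an average over incident durations, so all you need is a detected-at and resolved-at timestamp per incident. A sketch under that assumption; the incident log format below is illustrative, not a specific incident tool's API:

```python
# Sketch: mean time to recovery from (detected_at, resolved_at) pairs.
# How incidents get logged is up to you; this data is illustrative.
from datetime import datetime, timedelta

def mttr(incidents):
    """incidents: list of (detected_at, resolved_at) datetime pairs."""
    if not incidents:
        return timedelta(0)
    total = sum((resolved - detected for detected, resolved in incidents),
                timedelta(0))
    return total / len(incidents)

incidents = [
    (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 10, 45)),  # 45 min
    (datetime(2024, 5, 9, 2, 0),  datetime(2024, 5, 9, 3, 15)),   # 75 min
]
print(mttr(incidents))  # 1:00:00
```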
Change Failure Rate
What percentage of deployments cause a failure in production? This measures the quality of your delivery process — are you shipping reliable changes?
Why it matters: A high change failure rate erodes trust. Teams stop deploying frequently because every deploy feels risky. This creates a vicious cycle of large, infrequent, high-risk releases.
Benchmark: Elite teams have a change failure rate of 0-15%. Low performers exceed 45%.
Beyond DORA: Practical Engineering Metrics
Cycle Time
The total time from when work begins on an issue to when it is completed (merged and deployed). Cycle time is the single best metric for understanding how fast your team delivers value.
Break it down into phases: time in progress, time in review, time waiting for merge. This tells you where the bottlenecks are. If most of your cycle time is "waiting for review," you have a review bandwidth problem, not a coding speed problem.
Benchmark: High-performing teams typically have a median cycle time of 1-3 days. If your median is over a week, there are improvements to be made.
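The phase breakdown is simple subtraction once you line up the right events per work item. A sketch, assuming you can recover four timestamps per item; the event names (started, review_requested, approved, merged) are illustrative:

```python
# Sketch: splitting one work item's cycle time into phases.
# The four event names are assumed conventions, not a fixed schema.
from datetime import datetime

def cycle_time_phases(work_item):
    """Return per-phase durations for one completed work item."""
    return {
        "in_progress":       work_item["review_requested"] - work_item["started"],
        "in_review":         work_item["approved"] - work_item["review_requested"],
        "waiting_for_merge": work_item["merged"] - work_item["approved"],
    }

item = {
    "started":          datetime(2024, 5, 6, 9, 0),
    "review_requested": datetime(2024, 5, 6, 16, 0),  # 7h coding
    "approved":         datetime(2024, 5, 7, 15, 0),  # 23h in review
    "merged":           datetime(2024, 5, 7, 16, 0),  # 1h to merge
}
for phase, duration in cycle_time_phases(item).items():
    print(phase, duration)
```

In this example the item spent three times longer in review than in development, which is exactly the kind of bottleneck the breakdown is meant to expose.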
Throughput
How many issues or PRs does your team complete per week? Throughput is a simple, honest measure of output. It is most useful when tracked over time to spot trends — are you shipping more or less than you were three months ago?
Tip: Do not use throughput to compare teams or individuals. Use it to understand your own team's capacity and trends.
Review Time
How long does a pull request wait before it gets its first review? Long review times are one of the most common bottlenecks in engineering teams. A PR that sits for two days waiting for review is two days of wasted cycle time.
Benchmark: Aim for first review within 4-8 hours. If PRs routinely wait more than 24 hours, it is time to address your review process.
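Measuring this is the same timestamp arithmetic as lead time: PR creation to first review. A sketch, assuming you have already pulled each PR's created-at time and its first review's timestamp from the GitHub API (sample data is illustrative):

```python
# Sketch: time-to-first-review per PR, from (created_at,
# first_review_at) pairs. Sample data is illustrative.
from datetime import datetime
from statistics import median

def first_review_waits(prs):
    """prs: list of (created_at, first_review_at) datetime pairs."""
    return [review - created for created, review in prs]

prs = [
    (datetime(2024, 5, 6, 9, 0),  datetime(2024, 5, 6, 12, 0)),  # 3h
    (datetime(2024, 5, 6, 14, 0), datetime(2024, 5, 7, 16, 0)),  # 26h
    (datetime(2024, 5, 7, 10, 0), datetime(2024, 5, 7, 15, 0)),  # 5h
]
print(median(first_review_waits(prs)))  # 5:00:00
```

Look at the outliers as well as the median: the 26-hour PR above is the one worth asking about in standup.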
Stale Work
How many open PRs have not been updated in the last 7 days? How many assigned issues have seen no activity? Stale work is a leading indicator of delivery problems — it means things are stuck, and nobody is noticing.
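Detecting stale PRs is a single filter over each open PR's last-updated timestamp. A sketch using the 7-day threshold from above; the PR records are illustrative stand-ins for what the GitHub API returns:

```python
# Sketch: flagging open PRs with no activity in the last 7 days.
# The 'updated_at' field mirrors GitHub's PR data; sample records
# are illustrative.
from datetime import datetime, timedelta

def stale_prs(open_prs, now, threshold=timedelta(days=7)):
    """Return open PRs whose last activity is older than the threshold."""
    return [pr for pr in open_prs if now - pr["updated_at"] > threshold]

now = datetime(2024, 5, 20, 12, 0)
open_prs = [
    {"number": 101, "updated_at": datetime(2024, 5, 19, 9, 0)},  # fresh
    {"number": 87,  "updated_at": datetime(2024, 5, 2, 16, 0)},  # stale
]
for pr in stale_prs(open_prs, now):
    print(f"PR #{pr['number']} is stale")  # PR #87 is stale
```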
What Good Looks Like
Based on DORA research and industry benchmarks, here is what high-performing teams typically look like:
Deployment frequency: Multiple deploys per day or on-demand
Lead time: Less than one day from commit to production
MTTR: Under one hour
Change failure rate: 0-15%
Cycle time (median): 1-3 days
First review time: Under 8 hours
Stale PRs: Near zero
You do not need to hit all of these on day one. The point is to know where you stand, pick one or two metrics to focus on, and improve incrementally. Small, consistent improvements compound over time.
How to Get These Metrics for Free
If your team uses GitHub, you already have all the raw data you need. GitHub tracks issue creation and close times, PR open and merge times, workflow run results, assignees, labels, and milestones. The problem is that GitHub does not surface this data as actionable metrics.
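To see how raw the raw data is, here is a sketch that builds (but does not send) a GitHub REST API request for a repo's closed PRs. The owner, repo, and token are placeholders you would supply yourself; paginating through results and joining them into metrics is the part that takes real work.

```python
# Sketch: building a GitHub REST API request for closed PRs.
# OWNER, REPO, and YOUR_TOKEN are placeholders; the request is
# constructed here but intentionally not sent.
import urllib.request

def build_pulls_request(owner, repo, token):
    url = (f"https://api.github.com/repos/{owner}/{repo}/pulls"
           "?state=closed&per_page=100")
    return urllib.request.Request(url, headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {token}",
    })

req = build_pulls_request("OWNER", "REPO", "YOUR_TOKEN")
print(req.full_url)
```

Each PR in the response carries `created_at`, `merged_at`, and `updated_at` fields, which is everything the timestamp arithmetic in this guide needs.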
Octoboard connects to your GitHub org and automatically calculates all of the metrics described in this guide. There is no manual data entry, no CSV exports, and no spreadsheet formulas. You connect your org, and within minutes you have a live dashboard showing DORA metrics, cycle time breakdowns, throughput trends, review time, and stale work detection.
Getting Started: 5 Minutes
Step 1: Sign up
Go to app.octoboard.io and create an account.
Step 2: Connect your GitHub org
Authorize Octoboard to read your GitHub organization data. It syncs metadata only — issues, PRs, milestones, and workflow runs. Never your source code.
Step 3: Explore your metrics
Your dashboard populates automatically. See your DORA metrics, cycle time, throughput, and risk signals. Identify your biggest bottleneck and start improving.
Step 4: Share with your team
Use the AI-powered board summaries for standups and leadership updates. Everyone sees the same data, and conversations shift from "what happened?" to "what should we do next?"
Start Measuring, Start Improving
Engineering metrics are not about surveillance or micromanagement. They are about giving your team the data it needs to get better at shipping software. The best teams measure relentlessly, discuss openly, and improve incrementally.
The good news: if you use GitHub, you are five minutes away from having all of this.
Start tracking your engineering metrics
Connect your GitHub org and get DORA metrics, cycle time, and more in minutes.