
GitHub Actions CI Is Slow? Here’s What’s Actually Wasting Your Time

The top 5 time wasters in GitHub Actions pipelines — and how to fix each one with real workflow examples.

13 min read

Your GitHub Actions pipeline takes 20 minutes. Your team runs it 50 times a day. That’s over 16 hours of CI compute daily — and most of it is waste. Developers context-switch while waiting, merge queues back up, and by the end of the week your team has lost an entire engineer’s worth of productive time to a slow pipeline.

The fix isn’t “buy bigger runners.” It’s eliminating the waste that’s already in your pipeline. Here are the five biggest time wasters and how to fix each one.

The hidden cost of slow CI

Slow CI doesn’t just waste compute. It creates a cascade of productivity losses that compound across your team:

  • Developer wait time: A developer waiting 20 minutes for CI is not coding. They’re checking Slack, reading Hacker News, or starting a second task that creates costly context-switching when CI finishes.
  • Context switching: Studies show it takes 23 minutes to fully refocus after a context switch. A 20-minute CI wait often creates a 43-minute productivity gap.
  • Merge queue bottlenecks: When CI takes 20 minutes, your merge queue can process 3 PRs per hour at most (serially). With a team of 10 developers, PRs stack up and block each other.
  • Deployment velocity: Slow CI means fewer deployments per day, which means larger batch sizes, which means more risk per deploy. It’s a vicious cycle.

The math is simple: cutting a 20-minute pipeline to 8 minutes saves 12 minutes per run. If each of your 10 developers waits on just one run a day, that’s 2 hours of recovered wait time daily. At $150/hour loaded engineering cost, that’s $300/day — roughly $78,000/year.
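
That back-of-the-envelope model is easy to reproduce. A quick sketch using the example numbers above — substitute your own:

```shell
# CI cost model (all inputs are the article's example numbers)
OLD_MIN=20          # current pipeline duration, minutes
NEW_MIN=8           # optimized duration, minutes
DEVS=10             # developers each waiting on one run per day
RATE=150            # loaded engineering cost, $/hour
WORKDAYS=260        # working days per year

SAVED_HOURS_PER_DAY=$(( (OLD_MIN - NEW_MIN) * DEVS / 60 ))
DAILY_SAVINGS=$(( SAVED_HOURS_PER_DAY * RATE ))
YEARLY_SAVINGS=$(( DAILY_SAVINGS * WORKDAYS ))

echo "Daily: \$${DAILY_SAVINGS}, yearly: \$${YEARLY_SAVINGS}"
```

Running it prints `Daily: $300, yearly: $78000` — the same figures as above.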

How much is your CI actually wasting?

Use our Flaky Test Cost Calculator to plug in your team’s numbers and see the dollar impact. Or install Kleore for an automated analysis of your actual CI history.

Time waster #1: Flaky test reruns

This is the single biggest source of CI waste, and it’s the one most teams underestimate. When a flaky test fails, developers re-run the entire pipeline. That re-run wastes 100% of the compute — you’re running the same tests again just to get a different roll of the dice.

The numbers are staggering. In our analysis of 10,000 GitHub Actions workflow runs, we found that 15-25% of CI compute is wasted on flaky test reruns. That means if you spend $10,000/month on GitHub Actions, $1,500 to $2,500 is literally burned on re-running tests that aren’t actually broken.

The fix: Identify and quarantine flaky tests.

You can’t fix what you can’t measure. Start by identifying which tests are flaky, then quarantine them so they don’t block CI while you fix the root causes.

.github/workflows/test.yml — retry with reporting
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version-file: ".node-version"
      - run: npm ci
      - name: Run tests with retry reporting
        run: |
          # Run tests, capture exit code
          npm test -- --json --outputFile=test-results.json || true

          # If tests failed, check if it's a known flaky test
          if [ -f test-results.json ]; then
            node scripts/check-flaky.js test-results.json
          fi
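
The `check-flaky.js` script is left to your implementation. As a minimal sketch of the same idea in shell — the file names `known-flaky.txt` and `failed-tests.txt`, and the test names in them, are hypothetical — a quarantine check can be as simple as an exact-line diff against a curated flaky list:

```shell
#!/bin/sh
# Hypothetical sketch: block CI only on failures that are NOT on the
# curated flaky list. In a real pipeline you'd extract failed test
# names from test-results.json (e.g. with jq); here the inputs are
# inlined for illustration.
printf 'login retries on slow network\n' > known-flaky.txt
printf 'login retries on slow network\ncart total is correct\n' > failed-tests.txt

# Keep only failures that don't exactly match a quarantined test name
real_failures=$(grep -vxFf known-flaky.txt failed-tests.txt || true)

if [ -n "$real_failures" ]; then
  echo "Real (non-flaky) failures:"
  echo "$real_failures"
  # exit 1  # uncomment in CI to block the build on real failures
fi
```

Here `real_failures` ends up containing only `cart total is correct`: the quarantined flaky test still runs and still gets reported, but it no longer blocks the build.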

For a deeper dive on fixing flaky tests specifically, see our guides for Jest and pytest.

Time waster #2: No dependency caching

Every CI run that starts with npm install or pip install -r requirements.txt from scratch is downloading the same packages over and over. For a typical Node.js project, this wastes 1-3 minutes per run. Multiply that by 50 runs/day and you’re losing 1-2.5 hours daily.

The fix: Use actions/cache or built-in caching.

Node.js — built-in npm download cache
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version-file: ".node-version"
          cache: "npm"  # Caches ~/.npm downloads, keyed on your lockfile
      - run: npm ci    # Installs from the restored cache when available
Python — cache pip packages
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version-file: ".python-version"
          cache: "pip"  # Built-in pip cache support
      - run: pip install -r requirements.txt
Custom cache for monorepos or complex setups
- name: Cache dependencies
  uses: actions/cache@v4
  with:
    path: |
      node_modules
      ~/.cache/Cypress
      .next/cache
    key: deps-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      deps-${{ runner.os }}-

Pro tip: npm ci is faster than npm install in CI because it installs exactly what the lockfile specifies, skipping dependency resolution entirely. Always use npm ci when you have a lockfile.

Time waster #3: Running all tests on every PR

If a PR only changes a README file, there’s no reason to run your entire test suite. Yet most teams configure their pipeline to run everything on every push. For large monorepos, this wastes enormous amounts of compute.

The fix: Use path filters and affected test detection.

Path filters — skip tests for docs-only changes
on:
  pull_request:
    paths:
      # Only run tests when code files change. GitHub Actions doesn't
      # allow paths and paths-ignore on the same event, so express
      # excludes as "!" negation patterns instead.
      - "src/**"
      - "tests/**"
      - "package.json"
      - "package-lock.json"
      - ".github/workflows/test.yml"
      - "!**.md"
Conditional jobs based on changed files
jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      backend: ${{ steps.filter.outputs.backend }}
      frontend: ${{ steps.filter.outputs.frontend }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            backend:
              - "api/**"
              - "tests/api/**"
            frontend:
              - "web/**"
              - "tests/web/**"

  test-backend:
    needs: changes
    if: ${{ needs.changes.outputs.backend == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run test:backend

  test-frontend:
    needs: changes
    if: ${{ needs.changes.outputs.frontend == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run test:frontend

Time waster #4: Sequential jobs that could be parallel

Many teams structure their pipeline as a linear chain: lint, then type-check, then unit tests, then integration tests, then e2e tests. If linting takes 2 minutes and tests take 15 minutes, you’re waiting 17 minutes total. But lint and tests don’t depend on each other — they can run simultaneously.

The fix: Parallelize independent jobs and use matrix strategy.

Parallel independent jobs
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run lint

  typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run typecheck

  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4]  # Split tests across 4 runners
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx jest --shard=${{ matrix.shard }}/4

  # Gate deployment on all checks passing
  deploy:
    needs: [lint, typecheck, test]
    runs-on: ubuntu-latest
    steps:
      - run: echo "All checks passed, deploying..."

With this setup, lint (2 min), typecheck (1 min), and 4 parallel test shards (4 min each instead of 16 min total) all run simultaneously. Total wall time drops from 19 minutes to about 4 minutes. You pay for more compute-minutes, but your developers get feedback 5x faster.

Time waster #5: Oversized Docker images

If your CI builds Docker images, the image size directly impacts build time, push time, and pull time. A 2GB image takes minutes to push to a registry and minutes to pull on every deploy. Most of that size is build dependencies and tooling that aren’t needed at runtime.

The fix: Multi-stage builds with slim base images.

Dockerfile — multi-stage build
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Production (only runtime dependencies)
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production

# Only copy what's needed to run
COPY --from=builder /app/package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist

# Result: ~150MB instead of ~1.5GB
CMD ["node", "dist/server.js"]
GitHub Actions — Docker layer caching
# The gha cache backend requires Buildx
- uses: docker/setup-buildx-action@v3

- name: Build and push Docker image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: myapp:latest
    cache-from: type=gha    # Restore layers from the GitHub Actions cache
    cache-to: type=gha,mode=max

How to measure CI waste

Before optimizing, measure where your time actually goes. GitHub provides a built-in usage report, but it only shows total minutes. To understand why those minutes are being spent, you need more granularity.

  1. GitHub Actions usage report: Go to Settings → Billing → Actions to see total minutes consumed. This gives you the dollar baseline.
  2. Workflow run duration trends: Use the GitHub API or gh run list to track how your workflow duration has changed over time. If it’s trending up, something is degrading.
  3. Job-level timing: Look at individual job durations in the Actions tab. The longest job is your bottleneck — that’s where optimization has the biggest impact.
  4. Flaky test cost: Kleore specifically measures the cost of flaky test reruns — how many minutes are wasted re-running workflows that failed due to flaky tests rather than real bugs.
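
For item 2, the gh CLI gives you duration trends in one command. A sketch — the workflow file name `test.yml` is a placeholder for yours, and durations come out in seconds:

```shell
# jq expression: mean of (updatedAt - startedAt) across runs
JQ_AVG='map((.updatedAt | fromdate) - (.startedAt | fromdate)) | add / length'

# In your repo (requires an authenticated gh CLI):
#   gh run list --workflow test.yml --limit 50 \
#     --json startedAt,updatedAt --jq "$JQ_AVG"

# The same expression against inline sample data: two runs of
# 10 and 20 minutes average out to 900 seconds.
printf '%s' '[{"startedAt":"2024-01-01T00:00:00Z","updatedAt":"2024-01-01T00:10:00Z"},{"startedAt":"2024-01-01T01:00:00Z","updatedAt":"2024-01-01T01:20:00Z"}]' \
  | jq "$JQ_AVG"
```

Run it weekly and chart the number; a steady upward drift means something in the pipeline is degrading.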

Quick wins checklist

Here’s a prioritized checklist you can work through this week. Each item is independent — start with whichever is easiest for your setup.

  1. Enable dependency caching — 5 minutes to set up, saves 1-3 minutes per run. Use actions/setup-node or actions/setup-python with the cache option.
  2. Parallelize lint/typecheck/test — 15 minutes to restructure your workflow. Independent jobs run simultaneously instead of sequentially.
  3. Add path filters — 10 minutes to add paths and paths-ignore to your workflow trigger. Docs-only PRs skip CI entirely.
  4. Shard your test suite — 20 minutes to set up matrix strategy. Split tests across 2-4 runners for a proportional speedup.
  5. Identify and quarantine flaky tests — 5 minutes to install Kleore. Get a ranked list of every flaky test, then quarantine the worst offenders to stop wasting reruns.
  6. Use multi-stage Docker builds — 30 minutes to refactor your Dockerfile. Cuts image size by 50-90%, which speeds up both build and deploy.

See how much your CI is wasting.

Kleore scans your GitHub Actions history and shows you exactly where your CI minutes go — flaky reruns, slow tests, and wasted compute. You get a dollar amount and a prioritized fix list.


Stop guessing.
Start measuring.

Two minutes from now, you’ll know exactly how much your CI flakes cost. No credit card. No config changes.

Scan my repos — free

Free to start. No credit card required.