Mehdi Abaakouk
April 10, 2026 · 8 min read

Your Monorepo's Merge Queue Has a Concurrency Problem

How scoped parallel queues turn a serial bottleneck into a DAG, so your frontend team stops waiting for backend CI to finish.

A merge queue is supposed to protect your main branch. In a monorepo, it also becomes the bottleneck that makes your entire team wait.

You have a frontend team and a backend team pushing PRs into the same repository. PR #1 changes a React component, PR #2 rewrites a database migration. They don’t share a single line of code. But in a serial merge queue, PR #2 sits idle for 20 minutes while PR #1 runs its CI pipeline.

Yes, you can run only the affected tests per PR. Nx, Bazel, and Turborepo all do this well, and you should. But selective CI reduces the time each PR spends running tests. It doesn’t change the fact that PRs still wait in line, one behind the other. The bottleneck isn’t CI duration, it’s CI serialization.

Why serial queues break down in monorepos

A traditional merge queue works like a linked list. Each PR waits for the PR ahead of it to pass CI, then runs its own CI on top of those changes. If PR #1 fails, PR #2 gets rebuilt. This guarantees that main never breaks, because every PR is tested against the exact state it will be merged into.

The problem is that this model assumes every PR can conflict with every other PR. In a single-project repository, that’s a reasonable assumption. In a monorepo with apps/dashboard/ and services/api/, it’s wasteful. Your CSS fix doesn’t conflict with a database migration. But the queue doesn’t know that.

The cost scales with team size. Five teams pushing PRs into the same queue means each team is waiting for four other teams’ CI to finish. A queue that takes 15 minutes for a single-project repo can easily stretch past an hour in a monorepo. Engineers start batching their work into larger PRs to avoid the queue, which makes code review harder, which slows everything down further.

Zuul (from the OpenStack ecosystem) introduced project-based gating years ago. GitHub’s merge queue supports a parallel mode that tests multiple PRs concurrently. But these approaches either require a specific CI system or lack awareness of which PRs actually conflict with each other.

Scopes: teaching the queue what can conflict

The fix is to give the merge queue information about which parts of the codebase each PR touches. We call these scopes.

A scope is a label you attach to a PR. If PR #1 has scope frontend and PR #2 has scope backend, the queue knows they’re independent. If PR #3 has scopes frontend and backend, the queue knows it depends on both.

You can define scopes in a few ways:

File path patterns. You define patterns in your Mergify configuration. Changes under apps/dashboard/** get scope frontend. Changes under services/api/** get scope backend. Simple and works for most monorepo layouts.

merge_queue:
  mode: parallel
  scopes:
    source:          # where scope information comes from
      file_patterns: # map directory patterns to scope names
        frontend:
          - "apps/dashboard/**"
        backend:
          - "services/api/**"

Monorepo tools. If you use Nx or Bazel, you already have a dependency graph. Your CI can ask nx affected or bazel query which projects changed and pass them as scopes. This is more accurate than file patterns because it understands cross-project dependencies.

Manual labels. Your CI pipeline sets scopes explicitly, useful when the relationship between files and components isn’t a simple directory mapping.

Scope detection runs during CI on the merge queue’s draft PRs. The queue reads the resulting scopes and uses them to decide what depends on what.

Scopes need to reflect real dependencies. If your file patterns miss a shared utility library that crosses scope boundaries, you can end up merging incompatible changes. If your monorepo boundaries are blurry, monorepo-tool-based detection (nx affected, bazel query) is safer than static file patterns because it follows the actual dependency graph.
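As a rough illustration of the monorepo-tool approach, your CI could take the project list printed by something like `nx show projects --affected` (the exact command and output format depend on your Nx version, so treat this as an assumption) and turn it into scope names. The helper below is hypothetical, not part of Mergify:

```python
def scopes_from_nx(output: str) -> list[str]:
    """One scope per affected project name printed by the tool,
    one project per line (hypothetical glue code, not a Mergify API)."""
    return sorted(line.strip() for line in output.splitlines() if line.strip())

scopes_from_nx("dashboard\napi\n")  # → ["api", "dashboard"]
```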

From a linked list to a DAG

Once the queue knows each PR’s scopes, it stops being a list and becomes a directed acyclic graph.

Two batches depend on each other only if they share at least one scope. Batches with non-overlapping scopes are independent and run CI at the same time.

Five PRs enter the queue:

PR   Scopes           Description
#1   frontend         Update dashboard layout
#2   backend          Add new API endpoint
#3   frontend         Fix login page bug
#4   backend, infra   Add database index + Terraform
#5   infra            Update monitoring config

PRs #1 and #3 share the frontend scope, so #3 depends on #1. PR #4 shares backend with #2 and infra with #5, so #4 depends on both #2 and #5. But #1 and #2 share nothing. They’re independent.

The queue builds this dependency graph:

graph RL
    PR3["#3 (frontend)"] --> PR1["#1 (frontend)"]
    PR4["#4 (backend, infra)"] --> PR2["#2 (backend)"]
    PR4 --> PR5["#5 (infra)"]
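The conflict rule is simple enough to sketch in a few lines of Python. This is an illustration using the PR data from the table, not Mergify's actual implementation:

```python
from itertools import combinations

# Scope sets from the example table.
prs = {1: {"frontend"}, 2: {"backend"}, 3: {"frontend"},
       4: {"backend", "infra"}, 5: {"infra"}}

def conflicts(a, b):
    """Two PRs can conflict iff their scope sets share at least one scope."""
    return bool(prs[a] & prs[b])

# Every pair of PRs linked in the dependency graph.
edges = [(a, b) for a, b in combinations(sorted(prs), 2) if conflicts(a, b)]
# → [(1, 3), (2, 4), (4, 5)]
```

Only three of the ten possible pairs conflict; every other pair runs independently.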

In serial mode, these five PRs run one after another. If each CI run takes 10 minutes, you’re looking at 50 minutes total.

In parallel mode, the queue starts CI for #1, #2, and #5 simultaneously (they have no dependencies). Once #1 finishes, #3 starts. Once both #2 and #5 finish, #4 starts. Total wall-clock time: 20 minutes.

gantt
    title Serial — 50 minutes
    dateFormat HH:mm
    axisFormat %M min
    section Queue
    PR 1 :pr1, 00:00, 10m
    PR 2 :pr2, after pr1, 10m
    PR 3 :pr3, after pr2, 10m
    PR 4 :pr4, after pr3, 10m
    PR 5 :pr5, after pr4, 10m

gantt
    title Parallel — 20 minutes
    dateFormat HH:mm
    axisFormat %M min
    section frontend
    PR 1 :pr1, 00:00, 10m
    PR 3 :pr3, after pr1, 10m
    section backend
    PR 2 :pr2, 00:00, 10m
    PR 4 :pr4, after pr2, 10m
    section infra
    PR 5 :pr5, 00:00, 10m
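The timings in the charts fall out of the dependency graph: parallel wall-clock time is the longest chain through the DAG, while serial time is the sum of all runs. A minimal sketch, using the dependencies from the example above:

```python
from functools import lru_cache

# Dependencies from the example: #3 waits on #1; #4 waits on #2 and #5.
deps = {1: (), 2: (), 3: (1,), 4: (2, 5), 5: ()}
DURATION = 10  # minutes per CI run, as in the example

@lru_cache(maxsize=None)
def finish(pr):
    """A batch finishes after all its dependencies, plus its own CI time."""
    return DURATION + max((finish(d) for d in deps[pr]), default=0)

wall_clock = max(finish(pr) for pr in deps)  # parallel: 20 minutes
serial = DURATION * len(deps)                # serial: 50 minutes
```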

The actual improvement depends on how many PRs share scopes. If every PR touches shared-libs, you’re back to serial. The gain comes from independence between scopes, and real monorepos with clear project boundaries have plenty of it.

Speculative merges with multiple parents

The queue doesn’t just run CI on each PR in isolation. Like serial merge queues, it runs speculative CI: each batch is tested against the state of main after all its dependencies have merged.

In a serial queue, that’s simple. PR #3 gets tested on top of PR #2’s changes, which include PR #1’s changes. It’s a chain.

In a DAG, a batch can have multiple parents. PR #4 (backend, infra) depends on both PR #2 (backend) and PR #5 (infra), which ran independently. The queue builds PR #4’s speculative branch by starting from the current base SHA, merging PR #2’s changes, then PR #5’s changes, then PR #4’s own changes.
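The merge order for a speculative branch is just a depth-first walk of the batch's dependencies, parents before the batch itself. A sketch of that ordering logic (an illustration, not Mergify's implementation):

```python
# Dependencies from the example: #3 waits on #1; #4 waits on #2 and #5.
deps = {1: [], 2: [], 3: [1], 4: [2, 5], 5: []}

def speculative_plan(pr):
    """Merges applied on top of the base SHA, dependencies first."""
    plan = []
    def visit(p):
        for d in deps[p]:
            visit(d)
        if p not in plan:
            plan.append(p)
    visit(pr)
    return plan

speculative_plan(4)  # → [2, 5, 4]: merge #2, then #5, then #4 itself
```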

If either parent fails CI, PR #4 gets requeued. PR #1 and PR #3 keep running. They share no scope with the failed batch, so there’s nothing to invalidate.

Failure stays in its lane

In a serial queue, a failure at position 2 invalidates everything behind it. Positions 3, 4, 5 all get rebuilt. In a busy monorepo, one flaky test in the backend can send the entire frontend team back to square one.

With scoped parallel queues, failure cascades only through the dependency graph. If PR #2 (backend) fails CI:

  • PR #4 (backend, infra) gets invalidated, because it depends on #2
  • PR #1, #3, #5 are completely unaffected
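Computing the blast radius of a failure is a transitive-dependents walk over the same graph. A small sketch, again using the example's dependencies:

```python
def invalidated(failed, deps):
    """Everything that transitively depends on the failed batch."""
    hit = {failed}
    changed = True
    while changed:
        changed = False
        for pr, ds in deps.items():
            if pr not in hit and any(d in hit for d in ds):
                hit.add(pr)
                changed = True
    return hit - {failed}

# Dependencies from the example: #3 waits on #1; #4 waits on #2 and #5.
deps = {1: [], 2: [], 3: [1], 4: [2, 5], 5: []}
invalidated(2, deps)  # → {4}: only #4 is requeued; #1, #3, #5 keep running
```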

The frontend team doesn’t even notice. Their PRs keep running, pass CI, and merge on schedule.

Accurate scopes matter here. If a backend failure genuinely breaks a frontend PR but the scopes don’t capture that dependency, you’ve got a false green. Static file patterns are approximate. For teams where cross-scope dependencies are common, using nx affected or bazel query to compute scopes dynamically is worth setting up.

Batching still works

PRs with identical scopes still batch together. If four frontend PRs enter the queue, the queue groups them into a single batch and tests them as one CI run, just like serial mode. The batch inherits the shared scope, so the DAG treats it as a single node.

Batching reduces the number of CI runs within a scope. Parallelism eliminates the wait between scopes.
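Grouping by identical scope sets is straightforward to sketch. This is an illustration of the batching rule, not Mergify's implementation:

```python
def batch_by_scopes(queue):
    """Group queued PRs whose scope sets are identical into one batch."""
    batches = {}
    for pr, scopes in queue:
        batches.setdefault(frozenset(scopes), []).append(pr)
    return batches

queue = [(1, {"frontend"}), (2, {"backend"}), (3, {"frontend"}), (5, {"infra"})]
batch_by_scopes(queue)
# frontend PRs #1 and #3 land in one batch; #2 and #5 each get their own
```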

Getting started

Switch the merge queue mode to parallel and define your scopes in the Mergify configuration:

merge_queue:
  mode: parallel
  scopes:
    source:
      file_patterns:
        frontend:
          - "apps/dashboard/**"
          - "packages/ui/**"
        backend:
          - "services/**"

If you’re already using Nx or Bazel, you can pass affected projects as scopes through your CI instead of maintaining file patterns manually.

Teams with cleanly separated projects see the biggest gains. Teams with lots of cross-cutting code see less. But even partial independence helps: if half your PRs are in a scope nobody else touches, that’s half your queue that runs without waiting.

Your frontend team shouldn’t have to wait for your backend team’s tests. Now they don’t have to.
