What Is a Merge Queue?
A merge queue tests every pull request against the future state of main before it merges. If your team has ever shipped two PRs that each passed CI but together broke main, a merge queue makes that impossible by construction. This guide covers how it works in practice and which features actually matter.
In one paragraph
A merge queue sits between "PR approved" and "PR merged." It rebases each PR onto the latest main (plus any PRs ahead of it in the queue) and only merges the PR if CI passes against that combined state. If CI fails, the PR is rejected and main is never touched. The bigger your team, the more you need this.
The problem a merge queue solves
flowchart LR
PR1["PR #1<br/>CI ✓ (vs old main)"]
PR2["PR #2<br/>CI ✓ (vs old main)"]
Main["main<br/>✗ BROKEN"]
PR1 --> Main
PR2 --> Main
style PR1 fill:#E6F8F2,stroke:#1CB893,color:#1A1D24
style PR2 fill:#E6F8F2,stroke:#1CB893,color:#1A1D24
style Main fill:#FDECEA,stroke:#E53935,color:#1A1D24
No merge conflict. Just a semantic clash CI never saw because each PR was tested against an outdated main.
Two engineers open pull requests on Tuesday morning. Both pass CI. Both get approved. Both merge within the hour. Main is now red. Neither change is "wrong" in isolation. They were tested against an outdated snapshot of main, not against each other.
This is a standing risk whenever PRs are developed in parallel. A function gets renamed in one PR while another PR adds a new caller. A config value changes in one branch while another adds code that reads the old one. Git sees no merge conflict and CI was happy on each PR individually, but main is broken anyway.
The usual responses do not hold up at scale:
- "Just rebase before merging" turns into a full-time job once you have more than a handful of in-flight PRs.
- "Block merges until CI passes on the latest main" works only if exactly one person merges at a time. Everyone else queues by hand.
A merge queue is the systematic answer. Instead of testing PRs against the snapshot of main they were branched off of, it tests them against the state main will actually have at merge time. The race condition disappears.
How a merge queue works
flowchart LR
A["PR approved<br/>+ PR CI ✓"] --> B["Enter queue"]
B --> C["Build test branch:<br/>main + queued PRs + this PR"]
C --> D{"Run queue CI"}
D -->|pass| E["Merge ✓<br/>main stays green"]
D -->|fail| F["Reject<br/>main untouched"]
style E fill:#E6F8F2,stroke:#1CB893,color:#1A1D24
style F fill:#FDECEA,stroke:#E53935,color:#1A1D24
The PR is tested as if it were already merged. Pass: the merge proceeds. Fail: only the bad PR's author has work to do.
A pull request goes through four stages once it enters the queue.
1. Entering the queue
The PR has been approved and PR-level CI has passed. The author (or an automation rule) adds it to the queue. The queue checks that the PR is eligible (required checks present, not a draft, branch sufficiently up to date), assigns a position based on priority and arrival time, and records the current state of the target branch.
2. Building the test branch
The queue creates a temporary branch that represents "what main will look like after this PR merges." This is the key insight. The test branch contains every commit on main, every PR ahead in the queue (when speculative checks are enabled, meaning the queue tests PRs in parallel rather than one at a time), and the PR's own changes merged in.
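The construction above can be sketched in a few lines. This is a simplified model, not any real queue's API: `build_test_branch`, the branch names, and the PR numbers are all illustrative.

```python
# Sketch of how a queue assembles the speculative test branch.
# All names here are hypothetical, for illustration only.

def build_test_branch(main_head: str, queued_ahead: list[int], pr: int) -> list[str]:
    """Return the ordered list of contents merged into the temporary branch."""
    # Start from the current tip of main...
    contents = [main_head]
    # ...merge every PR already ahead in the queue (speculative checks on)...
    contents += [f"PR #{n}" for n in queued_ahead]
    # ...then merge the PR under test itself.
    contents.append(f"PR #{pr}")
    return contents

# PR #103, queued behind #101 and #102:
print(build_test_branch("main@abc123", [101, 102], 103))
# → ['main@abc123', 'PR #101', 'PR #102', 'PR #103']
```

If CI passes on that branch, merging #103 is safe by definition: the exact tree that will become main has already been tested.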
3. Running CI
The queue triggers CI on the test branch. This is often called "queue CI" to distinguish it from the CI that ran on the PR itself. The queue waits for all required checks to complete and watches their statuses.
On GitHub Actions, queue CI runs on the merge_group event. If your existing workflows trigger on push or pull_request, adding merge_group is a one-line change. No workflow rewrite required.
A common pattern is two-step CI: lightweight checks on the PR (lint, unit tests, type check) and the full expensive suite (E2E, integration, browser tests) only in the queue. You only pay the full CI cost on PRs that are actually about to land.
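A minimal sketch of the two-step split on GitHub Actions, shown as two workflow files in one snippet. The workflow names, file paths, and `make` targets are placeholders; only the `pull_request` and `merge_group` triggers are the point.

```yaml
# .github/workflows/pr-checks.yml — lightweight checks on every PR
name: pr-checks
on: [pull_request]
jobs:
  quick:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make lint unit-tests        # fast feedback on every push

# .github/workflows/queue-checks.yml — full suite, only in the merge queue
name: queue-checks
on: [merge_group]
jobs:
  full:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make e2e integration-tests  # expensive, runs only on queue test branches
```

Both workflows can be marked as required checks; the queue waits on `queue-checks` before merging.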
4. Merging or failing
If CI passes, the queue merges the PR using your configured strategy (merge commit, squash, rebase, or fast-forward) and notifies the next PR in line. If CI fails, the PR is removed from the queue, main stays untouched, and the author gets a comment explaining why. PRs behind the failed one are automatically re-evaluated, since they were being tested against a state that will never exist.
What this costs you in time and CI
Queue CI adds one CI run of latency per PR. With a 30-minute pipeline and a serial queue, a PR queued at 9:00 merges around 9:30. Two features change that math:
- Batching groups multiple PRs into a single CI run. Batches of 4 mean roughly 25% as many CI runs, at the cost of one extra bisection round when a batch fails. Most teams cut their queue CI bill by 50 to 75% with this alone.
- Speculative checks run multiple PRs in parallel instead of one at a time. Per-PR latency drops back toward the single-CI floor even at 50+ PRs a day. The cost is wasted CI when an early speculation fails and the PRs behind it have to restart.
Most teams turn both on. The combined math gets you the throughput of a parallel queue at the cost of a serial one.
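The batching arithmetic is easy to check. The numbers below are illustrative assumptions (40 PRs a day, batches of 4, a 5% per-PR failure rate, two extra bisection runs per failed batch), not measurements:

```python
# Back-of-envelope queue CI cost, under the stated assumptions.
prs_per_day = 40
batch_size = 4
failure_rate = 0.05  # per-PR probability that queue CI fails

# Serial queue, no batching: one CI run per PR.
serial_runs = prs_per_day

# Batching: one run per batch, plus bisection runs when a batch fails.
batches = prs_per_day / batch_size
# A batch fails if any of its PRs fails.
failed_batches = batches * (1 - (1 - failure_rate) ** batch_size)
# Assume a failed batch of 4 bisects into two halves: ~2 extra runs.
bisection_runs = failed_batches * 2
batched_runs = batches + bisection_runs

print(f"serial: {serial_runs} runs, batched: {batched_runs:.1f} runs")
# → serial: 40 runs, batched: 13.7 runs
print(f"savings: {1 - batched_runs / serial_runs:.0%}")
# → savings: 66%
```

A roughly 66% reduction under these assumptions, consistent with the 50 to 75% range quoted above; the savings shrink as the failure rate rises, since more batches pay the bisection penalty.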
See the Mergify merge queue docs for the configuration syntax that controls each of these stages.
What the queue looks like in motion
At any given moment, the queue holds several PRs in different states. A simplified view:
| Position | PR | Status | Tested against |
|---|---|---|---|
| 1 | #101 | Testing | main |
| 2 | #102 | Testing | main + #101 |
| 3 | #103 | Pending | main + #101 + #102 |
| 4 | #104 | Pending | main + #101 + #102 + #103 |
Each PR's "tested against" base includes every PR ahead of it. That is how the queue guarantees PRs are tested against the future state of main, not the past.
When a PR fails mid-queue (say #102), the queue removes it and re-tests everything behind it. PR #103 was being tested against main + #101 + #102; that world no longer exists, so the queue rebuilds the test branch as main + #101 and re-runs CI. All of this happens automatically: manual queue management is exactly the problem a merge queue exists to eliminate.
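The rebuild step can be sketched as a pure function over the queue. The data structures here are hypothetical, chosen to mirror the table above:

```python
# Sketch: drop a failed PR and recompute each remaining PR's test base.
# Hypothetical model, not a real queue's internals.

def rebuild_after_failure(queue: list[int], failed: int) -> dict[int, list[int]]:
    """Map each surviving PR to the PRs it must now be tested on top of."""
    remaining = [pr for pr in queue if pr != failed]
    # Each PR's base is main plus every PR still ahead of it in line.
    return {pr: remaining[:i] for i, pr in enumerate(remaining)}

queue = [101, 102, 103, 104]
print(rebuild_after_failure(queue, failed=102))
# → {101: [], 103: [101], 104: [101, 103]}
```

After #102 is ejected, #103's base shrinks from main + #101 + #102 to main + #101, and #104's shrinks accordingly; both get fresh CI runs on their new test branches.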
The features that make a queue actually scale
A serial merge queue solves the correctness problem, but on its own it caps your throughput at one PR per CI run. The features below are what separate a basic queue from one that can keep up with a real engineering org.
Two-step CI
Run lightweight checks on every PR, save the full suite for when the PR is actually about to land. Faster feedback, cheaper CI bill.
Read more →
Batching
Group multiple PRs into a single CI run, then bisect on failure. The fewer CI runs your queue does, the cheaper your CI bill.
Read more →
Speculative checks
Test PRs in parallel by assuming the ones ahead of them will pass. Cuts queue latency from hours to minutes when the failure rate is low.
Read more →
Parallel queues
Independent merge lanes per scope. A frontend CSS fix should not wait behind a backend API migration.
Read more →
GitHub merge queue and the alternatives
GitHub launched its native merge queue in 2023. It works for the basic case: PRs queue up, get tested via the merge_group event in Actions, and merge in order. For small teams on a single repo, it can be enough.
It runs into limits quickly. There is no parallel scoping (a CSS PR waits behind every other PR), no batching with automatic bisection, no two-step CI pattern, no priority queues, and the analytics are minimal. There has also been at least one well-publicized incident where the native queue silently reverted merged code. For monorepos, regulated environments, or teams over about 30 engineers, the gaps start to bite.
When GitHub's native queue is the right answer
A single repo with one team, fewer than ~30 engineers, modest PR volume, and no monorepo complexity. It is free and integrates natively. If that describes your team, start there. The native queue is good enough for the team it was designed for. The dedicated tools exist for the teams that hit its ceiling.
The dedicated merge queue tools (Mergify, Trunk, Aviator, and a handful of others) cover those gaps with different tradeoffs. The most common comparison is the one against GitHub's native queue:
Comparison
Mergify vs GitHub merge queue
Side-by-side: parallel queues, batching, two-step CI, monorepo support, queue analytics. Where GitHub's queue is enough and where it stops being enough.
Read the comparison →
For the other paid options, see Mergify vs Trunk or Mergify vs Aviator.
Do you need one yet?
Two scenarios. If you fit both, the second one wins.
You probably don't need a queue
- ○ You ship fewer than 5 PRs a day.
- ○ Main rarely breaks from in-flight PRs.
Good tests and pulling main before you push will carry you. A queue would add complexity without solving a problem you have.
A queue starts paying for itself
- ● Main breaks more than once a week from "two green PRs together."
- ● One team's slow CI blocks another team's fast merges.
Either one and the queue pays for itself within a sprint.
FAQ
What is a merge queue?
A merge queue is a system that sits between approved pull requests and the main branch. It tests each PR against the actual state main will have at merge time (including PRs ahead of it in the queue), then only merges when CI passes. The result is that main stays green even when many engineers ship in parallel.
What is GitHub merge queue?
GitHub merge queue is GitHub's built-in feature for serializing pull request merges. It became generally available in 2023. It supports basic queueing and group merges but lacks features like parallel scopes, batching with bisection, two-step CI, and queue analytics that mature merge queues offer.
How is Mergify different from GitHub merge queue?
Mergify supports parallel queues by scope (so a frontend PR does not wait behind a backend migration), batches multiple PRs into a single CI run with automatic bisection on failure, runs two-step CI (lightweight checks on PRs, full suite in the queue), and gives you queue analytics. GitHub's queue does not. The full comparison lives at /compare/github-merge-queue/.
Does GitHub Actions have a merge queue?
GitHub Actions itself does not, but it integrates with GitHub merge queue (a separate GitHub feature). Workflows can be triggered by the merge_group event when a PR is being tested in the queue. Most production teams pair Actions with a dedicated merge queue tool like Mergify for the features GitHub's native queue does not cover.
Do I need a merge queue for a small team?
If you ship fewer than 5 PRs a day on a small repo with a single team, you probably do not. The pain shows up once two PRs in flight start breaking main on merge, or once your CI pipeline takes long enough that serial rebasing wastes real time. That usually happens around 8 to 10 engineers actively merging.
What happens when a PR fails in the merge queue?
The PR is removed from the queue and main is left untouched. The PR author is notified. If the queue runs speculative checks or batches, PRs behind the failing one are automatically re-tested against the new queue state. No manual triage and no broken main.
Is Mergify a replacement for GitHub merge queue?
Yes. Teams use Mergify instead of GitHub's native queue when they need parallel queues, batching with bisection, two-step CI, queue analytics, monorepo support, or priority queues. Mergify also works on top of any CI provider and supports more merge strategies. See /compare/github-merge-queue/ for the side by side.
See Mergify's merge queue.
The page customers usually start on. Real screenshots, real customers, real pricing.