Vitest's `threads` pool is fast. It is also why your suite leaks state.
Why module singletons survive across tests inside a Vitest worker thread, what the failure looks like, and the two-line config change that gives you per-file isolation.
A Vitest test that mutates a module-level Set passes alone and fails when it runs alongside another test file that imports the same module. The failure looks like a logic bug in the second test. It is not. It is the threads pool doing what it advertises: keeping module state alive across tests in the same worker.
We see this pattern often enough on Mergify Test Insights that it earned its own slot in our flaky Vitest catalog. The cause is the default pool, the failure mode crosses files, and the fix is a config decision — not a code rewrite.
What you see
// counter.ts
export const seen = new Set<string>();
export function record(id: string) { seen.add(id); }

// a.test.ts
import { test, expect } from "vitest";
import { record, seen } from "./counter";

test("records once", () => {
  record("u-1");
  expect(seen.size).toBe(1); // passes
});

// b.test.ts
import { test, expect } from "vitest";
import { seen } from "./counter";

test("starts empty", () => {
  expect(seen.size).toBe(0); // fails: 1
});
b.test.ts expects seen to be empty. The module-level Set is created once when counter.ts first loads. Under Vitest’s default threads pool, a worker thread loads counter.ts once and keeps it loaded for every test file that worker handles. a.test.ts adds "u-1" to the set. b.test.ts runs on the same worker, imports the same module, sees the set populated.
The frustrating part: b.test.ts did not import record. It only reads seen. From the test author’s perspective, the test should be impossible to break by accident. The module is supposed to start clean every time.
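The mechanics fit in a few lines. Node evaluates a module once, caches the result, and a long-lived worker reuses that cache for every file it runs. The simulation below stands in for a Vitest worker; moduleCache and loadCounter are illustrative names, not Vitest internals:

```typescript
// A plain Map standing in for the worker's module cache.
const moduleCache = new Map<string, { seen: Set<string> }>();

function loadCounter(): { seen: Set<string> } {
  // First load: evaluate the module body (create the Set) and cache it.
  if (!moduleCache.has("counter.ts")) {
    moduleCache.set("counter.ts", { seen: new Set<string>() });
  }
  // Every subsequent "import" returns the cached instance.
  return moduleCache.get("counter.ts")!;
}

// "a.test.ts" runs first on this worker and mutates the singleton.
const a = loadCounter();
a.seen.add("u-1");

// "b.test.ts" runs next on the same worker: same cached module, same Set.
const b = loadCounter();
console.log(b.seen.size); // 1, not the 0 a fresh module would give
```

The forks pool avoids this by throwing the whole cache away with the process; the threads pool keeps it because throwing it away is exactly the cost it exists to avoid.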
Why threads pool is different from forks
Vitest’s pool option defaulted to threads through the 1.x line (Vitest 2.0 switched the default to forks, but many suites still opt into threads for speed). Each worker is a long-lived Node.js worker thread that loads modules into a shared module graph and reuses them across test files. This is fast because the cost of module loading is amortized: parse once, run many times.
The forks pool gives each test file its own Node.js process. Modules load fresh per file. State cannot leak across files because there is nothing shared. The trade-off is process startup cost: a few hundred milliseconds per file, which adds up on suites with thousands of files.
Vitest picked threads as the default because the speed gain is real. The trap is that the default model breaks the implicit assumption most engineers carry from Jest: that test files run in isolation.
The naive fix and why it is incomplete
// counter.ts
let seen = new Set<string>();
export { seen };
export function record(id: string) { seen.add(id); }
export function reset() { seen = new Set(); }

// b.test.ts
import { beforeEach } from "vitest";
import { reset } from "./counter";

beforeEach(() => reset());
This works for b.test.ts if the author remembers to call reset(). It does not help when a third test file runs after a.test.ts and reads seen without calling reset() first. The fix shifts the burden from one bug-prone test author to the next. Every test that touches shared module state has to remember the reset, forever, including tests written by people who do not know the pattern exists.
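If you adopt the reset pattern anyway, one improvement is to move the call out of individual test files and into a global setup file so no author has to remember it. A hedged sketch, assuming a vitest.setup.ts registered via test.setupFiles; the setup file still has to enumerate every stateful module, so this mitigates the problem rather than fixing it:

```typescript
// vitest.setup.ts -- registered once in vitest.config.ts via
// test: { setupFiles: ["./vitest.setup.ts"] }
// Sketch only: assumes counter.ts exports the reset() shown above.
import { beforeEach } from "vitest";
import { reset } from "./counter";

beforeEach(() => {
  // Runs before every test in every file, so no individual
  // test file has to remember the reset.
  reset();
});
```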
The fix that holds
Three options, depending on how much speed you can give up.
Per-file process isolation (slowest, safest):
// vitest.config.ts
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    pool: "forks",
    poolOptions: {
      forks: { singleFork: false }, // the default, shown explicitly: one fork per file
    },
  },
});
Each test file runs in its own process. Module state cannot cross files. You pay 200-500 ms of process startup per file. For a suite of 200 test files on a 4-CPU machine, that is roughly 200 × 500 ms ÷ 4 ≈ 25 extra seconds at the high end. For most teams, that is the right trade.
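Before committing the config change for the whole team, you can measure the trade-off directly: Vitest accepts the pool as a CLI flag, so one timed run under each pool shows the real cost on your suite.

```shell
# Time one full run under process isolation, then compare with threads.
time npx vitest run --pool=forks
time npx vitest run --pool=threads
```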
Per-file/glob pool selection (faster, more discipline):
Keep the threads pool, but isolate the offending modules. Vitest’s poolMatchGlobs option (available in 1.x; deprecated in later releases in favor of splitting the suite into projects) lets you pin specific files to forks while the rest stay on threads:
// vitest.config.ts
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    pool: "threads",
    poolMatchGlobs: [
      // route integration tests to forks; everything else stays on threads
      ["**/integration/*.test.ts", "forks"],
    ],
  },
});
This is the right answer when you have a small number of test files that touch shared singletons (database tests, Redis tests, anything with a connection pool) and a much larger number of pure-function tests that benefit from threads.
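On newer Vitest versions where poolMatchGlobs is gone, the same split is expressed as separate projects. A hedged sketch of a workspace-based equivalent; the file name, project names, and globs are assumptions, and the exact API varies by Vitest version:

```typescript
// vitest.workspace.ts -- per-project pools instead of poolMatchGlobs.
import { defineWorkspace } from "vitest/config";

export default defineWorkspace([
  {
    // Fast path: pure-function tests stay on the threads pool.
    test: {
      name: "unit",
      pool: "threads",
      include: ["**/*.test.ts"],
      exclude: ["**/integration/**"],
    },
  },
  {
    // Tests that touch shared singletons get per-file processes.
    test: {
      name: "integration",
      pool: "forks",
      include: ["**/integration/*.test.ts"],
    },
  },
]);
```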
Avoid the shared state entirely:
If you control the production code, replace the module-level Set with a factory. Each consumer creates its own. The flake disappears because there is nothing to leak.
// counter.ts
export function createSeen() { return new Set<string>(); }
Not always possible — production code sometimes legitimately needs a process-wide cache — but when it is, this is the cheapest fix.
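A minimal sketch of the factory in use, with record reworked to take the set it mutates (that signature change is my assumption, not part of the original module):

```typescript
// Each consumer builds its own Set, so ordering across files cannot matter.
function createSeen(): Set<string> {
  return new Set<string>();
}

function record(seen: Set<string>, id: string): void {
  seen.add(id);
}

// Two independent consumers, standing in for two test files:
const first = createSeen();
record(first, "u-1");

const second = createSeen();
console.log(first.size, second.size); // 1 0 -- no state crosses over
```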
How Mergify catches this before you ship
Without instrumentation, thread-pool leakage looks like the kind of failure where one engineer says “weird, passes for me” and the other says “yeah, but it failed in CI three times this week.” The failing test is not the test with the bug. Manual triage usually blames the wrong file.
Test Insights catches the cross-file signature: file B fails consistently when file A ran on the same worker thread, and never alone. The dashboard tags the dependency and surfaces both files together. You see the actual culprit (the test that mutated the module-level Set) on the same screen as the symptom.
Quarantine kicks in once the pattern is confirmed. The merge queue keeps moving while you decide between forks and a refactor.
Point Mergify at your Vitest suite: the native @mergifyio/vitest plugin installs with a single npm install.
More patterns like this
Thread-pool state leakage is one of the eight patterns in the flaky-tests-in-Vitest guide. The others are variants of the same theme: shared state that survives between tests because Vitest’s defaults optimize for speed. vi.mock hoisting traps, isolate: false sharing the module cache, fake-timer leakage, snapshot races inside test.concurrent. Different APIs, same shared-state trap.
Once the failure mode is named, the patterns are finite, and most have a config-only fix. You almost never need to rewrite test code.