Mehdi Abaakouk

Oct 8, 2025 · 4 min read

Why WARNING Has No Place in Modern Logging

Most systems drown in meaningless WARNING logs. They waste money, obscure real errors, and help no one. Here’s why your next logging cleanup should start by deleting WARNING — and how structured logs make your production systems clearer, cheaper, and safer.

Every production system has it: a stream of WARNING logs that nobody looks at, nobody acts on, and nobody dares to delete. They clutter dashboards, inflate storage bills, and, worst of all, dilute the signal when something truly bad happens.

WARNING is the junk drawer of logging — full of "just in case" messages that neither help developers debug nor help ops keep the system healthy.

It's time we admit it: WARNING is useless.

How We Got Here

The intent behind WARNING made sense at first: it’s not an error, but it’s also not fine — that "something feels weird" state.

The problem? Nobody agrees on what that means. One team uses WARNING for deprecations. Another for retries. Another for validation edge cases. Over time, WARNING becomes a dumping ground for anything that feels uncomfortable but doesn't clearly belong in ERROR or INFO.

The result: log files and dashboards filled with yellow noise.

And when everything is WARNING, nothing is.

The Problem With WARNING

  1. No actionability – Who's supposed to act on it? If it doesn't trigger monitoring, nobody does.

  2. Noise overload – When millions of WARNINGs pile up, engineers stop reading them. They scroll past, eyes trained only to catch red ERRORs.

  3. Cost – Indexing useless logs in Datadog, Splunk, or ELK burns real money.

  4. Dilution – Real signals get buried under half-important noise.

A Better Way to Think About Logs

If you kill WARNING, you're forced to get intentional.

ERROR / CRITICAL → Unexpected Failures

If it's unexpected, it's an ERROR. Period. It should alert someone, always. These are the logs that deserve a human's attention.
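
As a sketch of what that can look like in practice, here's an unexpected failure logged with structlog (the function, event, and field names are illustrative, not from a real codebase):

    import structlog

    log = structlog.get_logger()

    def charge_customer(order_id: int) -> None:
        # Stand-in for a real payment call; it fails here to show the ERROR path.
        raise TimeoutError("payment gateway did not respond")

    try:
        charge_customer(order_id=123)
    except TimeoutError as exc:
        # One ERROR event with enough context for a human to act on it.
        log.error("payment_charge_failed", order_id=123, error=str(exc))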

This isn’t just text — it’s structured, filterable, and actionable. Datadog, Sentry, or PagerDuty can wake someone up for it.

INFO → Business-Relevant Events

INFO should tell the story of the business, not the code. You don't need to know that the function calculateTax() ran; you need to know that an invoice was created.
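
A hedged sketch of that kind of INFO event, again with structlog (the invoice fields are made up for illustration):

    import structlog

    log = structlog.get_logger()

    # A business event, not a code step: what happened, for whom, and for how much.
    log.info("invoice_created", invoice_id="INV-2025-001", customer_id=42, amount_cents=15000, currency="EUR")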

These make dashboards meaningful and help correlate application health with user-facing impact. This helps ops understand system behavior without needing to read source code.

DEBUG → Developer Insights

DEBUG is where code-level details belong. Retries, stack traces, execution paths — everything useful for debugging but irrelevant (and noisy) in production monitoring.

DEBUG logs should be off by default in production. Developers can turn them on when troubleshooting.

DEBUG: Retrying payment request [attempt=3, order_id=123]

This is great for local debugging, but ops should never have to care.
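
Keeping DEBUG quiet in production is usually a one-line configuration. A minimal sketch with the standard library, assuming a LOG_LEVEL environment variable (the variable name is an assumption, not something from the original post):

    import logging
    import os

    # INFO by default in production; developers export LOG_LEVEL=DEBUG when troubleshooting.
    logging.basicConfig(level=os.environ.get("LOG_LEVEL", "INFO"))

    logging.getLogger(__name__).debug("Retrying payment request [attempt=3, order_id=123]")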

What About "Almost Errors"?

That's where WARNING used to live. Instead of vague yellow noise, use structured metadata:
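
For example, a minimal sketch with structlog (the event and field names are illustrative):

    import structlog

    log = structlog.get_logger()

    # A retry that will be attempted again is an INFO event with the nuance in its fields;
    # a retry that exhausts its attempts is a real ERROR.
    log.info("payment_retried", order_id=123, attempt=3, max_attempts=5, will_retry=True)
    log.error("payment_failed", order_id=123, attempts=5, reason="gateway_timeout")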

This preserves nuance without creating a meaningless log level.

What Happens When You Kill WARNING

When WARNING is gone, the log landscape gets clearer:

  • Ops know that ERROR means "wake up now," INFO means "business events," DEBUG means "developer-only."

  • Developers get cleaner dashboards — no yellow noise hiding real failures.

  • Costs go down because fewer useless logs are stored and indexed.

  • Everyone has more trust in logs because each one has a clear purpose.

Lessons Learned

  • Use logs for state, not steps. Logs should capture what happened, not narrate the code path. If you want code timings or call stacks, use APM tracing.

  • Separate ops visibility (INFO, ERROR) from developer visibility (DEBUG). They serve different needs.

  • Structured logging beats WARNING every time. Key/value logs give you more power than vague severity labels.

  • Every log line has a cost — in attention, in clarity, in money. Make sure each one buys you something valuable.

Final Reflection

I used to sprinkle WARNINGs everywhere, thinking I was being cautious. In reality, I was just adding noise.

Now my rule is simple:

  • ERROR for unexpected failures.

  • INFO for meaningful events.

  • DEBUG for developers.

Everything else? Log pollution.

Logs are not for therapy — they're for clarity.
