How to read a CSP violation report (every field, in plain English)
An evergreen reference covering violated-directive, blocked-uri, source-file, document-uri, sample, disposition.

You finally deployed a Content Security Policy (CSP), and the reports are flooding in. But instead of the satisfying "Security improved" notification you expected, your dashboard is a wall of JSON blobs that look like script-src 'self' ... blocked-uri: data:. Is this an active XSS attack, or did someone just add a new analytics pixel without telling you? Maybe it's just a browser extension gone rogue? If you're staring at a terminal or a log aggregator at 2 AM trying to figure out if your site is actually broken or just being noisy, you aren't alone. Content Security Policy is one of the most powerful security tools in a developer's arsenal, but its reporting mechanism was seemingly designed for browser engine architects rather than the developers who actually have to fix the errors.
The browser is trying to tell you something important, but it's speaking in a dialect of RFC-speak and JSON keys that don't always map directly to your source code. This guide acts as the Rosetta Stone for the CSP report, mapping cryptic field names to real-world debugging actions. We'll move beyond "what the field is" to "what the field tells you about your security posture," cutting through the noise so you can act on the signal.
The Anatomy of a CSP Violation Report
When a browser encounters an action that violates your policy—like a script trying to load from an unauthorized domain or an inline style being blocked—it generates a JSON object. Depending on how you've configured your headers, the browser will POST this JSON to a URL you specify. There are two main ways the browser handles this: the legacy report-uri directive and the modern Report-To header. While the delivery mechanism differs, the core data inside the csp-report object remains largely consistent.
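In header form, the two delivery styles look roughly like this (a sketch; the /api/csp-report path is just the illustrative endpoint used in this article, and the Report-To group name is arbitrary):

```
# Legacy: point report-uri straight at your collection endpoint
Content-Security-Policy: default-src 'self'; report-uri /api/csp-report

# Modern: define a named reporting group, then reference it with report-to
Report-To: {"group":"csp","max_age":86400,"endpoints":[{"url":"https://example.com/api/csp-report"}]}
Content-Security-Policy: default-src 'self'; report-to csp
```

Note that the Reporting API delivers the same underlying data in a slightly different JSON envelope, so a collection endpoint should be prepared to accept both shapes.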
Let's look at what a raw JSON violation report from a modern version of Chrome actually looks like. This is the "standard" format you'll see in most logging tools:
```json
{
  "csp-report": {
    "document-uri": "https://example.com/checkout",
    "referrer": "https://example.com/cart",
    "violated-directive": "script-src-elem",
    "effective-directive": "script-src-elem",
    "original-policy": "default-src 'self'; script-src 'self' https://trusted.cdn.com; report-uri /api/csp-report",
    "disposition": "enforce",
    "blocked-uri": "https://malicious-ads.net/track.js",
    "line-number": 42,
    "column-number": 12,
    "source-file": "https://example.com/static/js/bundle.js",
    "status-code": 200,
    "script-sample": ""
  }
}
```

At first glance, it's a lot of data. But every key here serves a specific purpose in your investigation. The report is essentially trying to answer four questions: Where did it happen? What was blocked? Why was it blocked? And what were the consequences?
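Those four questions can be pulled out of the payload mechanically. Here is a minimal sketch of a parser for the legacy csp-report envelope (the "where/what/why/enforced" labels are our own, not part of any spec):

```python
import json

def summarize(report_json: str) -> dict:
    """Pull the four key questions out of a legacy csp-report payload."""
    report = json.loads(report_json).get("csp-report", {})
    return {
        "where": report.get("document-uri"),
        "what": report.get("blocked-uri"),
        # Prefer effective-directive; fall back for older browsers.
        "why": report.get("effective-directive") or report.get("violated-directive"),
        "enforced": report.get("disposition") == "enforce",
    }

raw = ('{"csp-report": {"document-uri": "https://example.com/checkout", '
       '"blocked-uri": "https://malicious-ads.net/track.js", '
       '"effective-directive": "script-src-elem", "disposition": "enforce"}}')
print(summarize(raw))
```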
The "Where": document-uri and referrer
The first step in any triage is identifying the blast radius. The document-uri field tells you exactly which page the user was on when the violation occurred. This is critical because your CSP might be global, but your application logic is local. If you see a violation only occurring on /checkout, it might be related to a third-party payment iframe. If it's on /blog, it could be a legacy social sharing widget.
The referrer field provides context on how the user arrived at that page. While it might seem secondary, the referrer is a vital "security breadcrumb." For example, if you see a violation on a specific page that only happens when the referrer is an external search engine, you might be looking at a DOM-based XSS attack where the malicious payload is being passed through a URL parameter that your app is unsafely rendering. Alternatively, if the referrer is https://example.com/admin, you know the issue is contained within your authenticated staff area.
Consider a scenario where a violation is triggered only on /checkout. If the blocked-uri is a script from a known analytics provider, you might realize that your marketing team added a "Conversion Pixel" specifically for the checkout success page, but forgot to update the CSP whitelist. Without document-uri, you'd be hunting through every page in your app to find the offending script tag.
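At scale, the fastest way to find that kind of pattern is to count violations per page. A sketch, assuming `reports` is a list of already-parsed csp-report dicts:

```python
from collections import Counter

def pages_by_violation_count(reports):
    """Count how many violations each page produced, most-affected first."""
    counts = Counter(r.get("document-uri", "unknown") for r in reports)
    return counts.most_common()

reports = [
    {"document-uri": "https://example.com/checkout"},
    {"document-uri": "https://example.com/checkout"},
    {"document-uri": "https://example.com/blog"},
]
print(pages_by_violation_count(reports))
# → [('https://example.com/checkout', 2), ('https://example.com/blog', 1)]
```

A page that suddenly jumps to the top of this list after a deploy is usually the one where someone added an unwhitelisted resource.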
The "What": blocked-uri and source-file
The blocked-uri is arguably the most important field in the entire report. It identifies the resource that was denied access. This could be a full URL (https://fonts.gstatic.com/s/inter/v12/abc.woff2), a scheme (data:), or even a keyword like inline.
When blocked-uri contains a full URL, your job is relatively easy: you either recognize the domain or you don't. Recognize https://www.google-analytics.com? You probably just need to update your script-src. Don't recognize https://evil-hacker.ru/logger.php? You might have an active XSS vulnerability where someone has successfully injected a script tag into your database-backed content.
The "inline" problem is where things get tricky. If blocked-uri is empty, or specifically says "inline", it means the browser blocked an inline <script> tag, an onclick handler, or an inline <style> block. Because there is no external source file to point to, the browser simply tells you it was "inline."
This is where source-file, line-number, and column-number come to the rescue. For external scripts that trigger a violation (like an authorized script trying to load an unauthorized worker), source-file points to the script that initiated the request. For inline violations, source-file usually points to the HTML document itself, with the line number indicating exactly where the offending inline code starts. Note that if you are using a minified bundle, line 1, column 54321 isn't very helpful unless you have source maps handy—but it at least confirms that the violation is coming from your own code and not a third-party script.
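The blocked-uri shapes described above can also be sorted mechanically before a human ever looks at them. A sketch (the category names are our own, not part of the spec):

```python
def classify_blocked_uri(blocked_uri: str) -> str:
    """Rough classification of the common shapes of the blocked-uri field."""
    if blocked_uri in ("", "inline"):
        return "inline"        # inline <script>, onclick handler, or <style>
    if "://" in blocked_uri:
        return "external-url"  # full URL: do you recognize the domain?
    if blocked_uri.endswith(":"):
        return "scheme"        # e.g. "data:" or "blob:"
    return "other"             # keywords such as "eval"

print(classify_blocked_uri("data:"))                              # → scheme
print(classify_blocked_uri("inline"))                             # → inline
print(classify_blocked_uri("https://evil-hacker.ru/logger.php"))  # → external-url
```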
The "Why": violated-directive vs. effective-directive
CSP logic uses a fallback system, and understanding the difference between violated-directive and effective-directive is key to fixing your policy efficiently. The effective-directive is the specific directive the browser consulted for this resource type, for example font-src for a font load. The violated-directive is the rule as it was enforced: in older CSP Level 2 reports it could contain the fallback rule exactly as written in your policy (such as default-src 'self'), while modern browsers simply set it to the same value as effective-directive.
Wait, why would they ever differ? It comes down to the default-src fallback. If you define default-src 'self' but don't define a font-src, and then try to load a font from Google Fonts, the effective-directive is still font-src, but an older report's violated-directive may read default-src 'self'. The browser is essentially saying: "I looked for a font-src rule, didn't find one, so I fell back to default-src, which told me to block this."
Understanding this helps you decide how to fix the policy. Should you add Google Fonts to your default-src (bad idea, too broad) or should you explicitly define a font-src (good idea, follows principle of least privilege)?
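In header terms, the two fixes look like this (illustrative policies; the Google Fonts origin is the one from the example above):

```
# Too broad: loosening the fallback affects every resource type
Content-Security-Policy: default-src 'self' https://fonts.gstatic.com

# Least privilege: only fonts may come from the extra origin
Content-Security-Policy: default-src 'self'; font-src 'self' https://fonts.gstatic.com
```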
The original-policy field is also included in the report, which is a snapshot of the exact header the browser received. This is incredibly helpful when you're in the middle of a rolling deployment and want to know if the report you're seeing came from the "old" policy or the "new" one you just pushed to production ten minutes ago.
The "Security Breadcrumbs": sample and script-sample
For a long time, debugging inline violations was a nightmare. You'd get a report saying "inline script blocked on line 200," you'd look at line 200, and it would be a blank line or a closing tag because the HTML was dynamically generated. Chrome and other modern browsers introduced the sample (or script-sample) field to solve this.
When enabled, the browser includes the first 40 characters of the blocked inline script or style in the report. This is a game-changer for identification. Instead of guessing, you see "onclick: window.analytics.track('click'..." and immediately know it's a legacy tracking snippet. Or you see "var _0x4f2a=..." and realize you have an obfuscated malicious script injection.
However, there is a catch: privacy. Because the sample field could theoretically contain sensitive data (like a CSRF token embedded in an inline script), browsers will only include the sample if you explicitly opt-in by adding 'report-sample' to your directive. For example:
```
Content-Security-Policy: script-src 'self' 'report-sample'; report-uri /api/csp-report
```

Without 'report-sample', the script-sample field in your JSON will be empty, and you'll be back to manual sleuthing. If you're serious about fixing CSP violations, you need to enable this during your rollout phase.
Disposition and Status: Are you actually blocking?
Not every report represents a blocked action. The disposition field tells you if the resource was actually blocked (enforce) or if it was just reported (report). This maps to which header you used: Content-Security-Policy vs. Content-Security-Policy-Report-Only.
The Report-Only mode is a developer's best friend. It allows you to test a strict policy against real-world traffic without breaking the site for a single user. You can see what would have been blocked. If you see thousands of reports with disposition: report for a critical script, you can fix your policy before ever moving it to enforce mode.
Finally, the status-code field tells you the HTTP status of the document where the violation happened. Why does this matter? Often, 404 pages or 500 error pages have different layouts or inline scripts than your standard app pages. If you see a cluster of violations with a 404 status code, you know you only need to look at your "Page Not Found" template.
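Both fields are cheap to cluster on. A sketch over already-parsed reports, so that report-only runs and error-page noise stand out immediately:

```python
from collections import Counter

def cluster_by_disposition_and_status(reports):
    """Group reports by (disposition, status-code) to expose error-page noise."""
    return Counter(
        (r.get("disposition", "enforce"), r.get("status-code", 0)) for r in reports
    )

reports = [
    {"disposition": "report", "status-code": 200},
    {"disposition": "report", "status-code": 404},
    {"disposition": "report", "status-code": 404},
]
print(cluster_by_disposition_and_status(reports))
```

A spike in the `("report", 404)` bucket, for instance, tells you the new policy only breaks your error-page template, and nothing in enforce mode is affected.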
The Diagnosis: Attack vs. Misconfiguration
Once you understand the fields, you can move to triage. Every report generally falls into one of three categories. Here is a 3-step decision tree to handle any report:
1. Is the blocked-uri a domain you recognize? If it's https://cdn.stripe.com or https://www.clarity.ms, it's a misconfiguration. You just need to whitelist it. If it's a domain you've never heard of, move to step 2.
2. Does the violated-directive match the resource type? If script-src is blocking a .jpg file, you might have a mis-typed directive or a default-src issue. If the directive and the resource type match (e.g., script-src blocking a .js file from an unknown domain), this is a high-priority security investigation.
3. Check the sample for common "noise." Browser extensions are the primary source of CSP noise. Tools like Grammarly, LastPass, or ad blockers often inject scripts into the DOM that trigger violations. If the sample looks like extension code (e.g., references to chrome-extension:// or specific extension IDs), you can safely ignore it. Your CSP is doing its job by blocking the extension from messing with your site, but it's not a bug in your code.
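The steps above can be sketched as a single triage function. Everything here is illustrative: the known-domain set, the extension-to-directive mapping, and the verdict strings are our own conventions, not a standard:

```python
KNOWN_GOOD_DOMAINS = {"cdn.stripe.com", "www.clarity.ms", "www.google-analytics.com"}

# Map each directive family to the file extensions it normally governs.
EXPECTED_EXTENSIONS = {"script-src": (".js", ".mjs"), "style-src": (".css",)}

def triage(report: dict) -> str:
    """Apply the 3-step decision tree from the article to one parsed csp-report."""
    blocked = report.get("blocked-uri", "")
    directive = report.get("effective-directive", "").split("-elem")[0]
    sample = report.get("script-sample", "")

    # Step 3 first: extension noise is safe to ignore regardless of domain.
    if "chrome-extension://" in blocked or "chrome-extension://" in sample:
        return "ignore: browser extension noise"

    # Step 1: a recognized domain means the policy just needs updating.
    host = blocked.split("//")[-1].split("/")[0]
    if host in KNOWN_GOOD_DOMAINS:
        return "misconfiguration: whitelist this domain"

    # Step 2: directive/resource-type match on an unknown domain is serious.
    expected = EXPECTED_EXTENSIONS.get(directive, ())
    if expected and blocked.endswith(expected):
        return "investigate: possible injection, high priority"
    return "investigate: unknown domain or mismatched directive"

print(triage({"blocked-uri": "https://cdn.stripe.com/v3.js",
              "effective-directive": "script-src-elem"}))
# → misconfiguration: whitelist this domain
```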
Distinguishing between an XSS attack and a browser extension can be difficult. A true XSS attack will usually appear across many different users with different browsers but on the same document-uri. A browser extension violation will often be isolated to a single user but might appear across every page they visit on your site.
The Shortcut: Decoding reports in one click
Manual JSON parsing is fine for the first three reports. It is soul-crushing for the three-hundredth. When you're looking at a batch of violations, you don't want to be cross-referencing RFCs to remember if script-src-elem is a subset of script-src (it is) or why the source-file is null.
We built the GlitchReplay CSP Violation Decoder to solve this exact problem. Instead of squinting at a blob of text, you can paste your raw JSON report directly into the tool. It instantly translates the RFC jargon into "Developer Action Needed" language. It categorizes the report as a "Potential Threat" or a "Likely Misconfiguration," highlights the exact line in your policy that needs changing, and provides the exact snippet you need to add to your header to fix it.
For example, if you paste a report where default-src 'self' is the effective directive blocking a font, the decoder will tell you: "Your default policy is too strict for fonts. Suggestion: Add font-src 'self' https://fonts.gstatic.com to your policy." It turns a five-minute investigation into a five-second fix.
Effective security shouldn't feel like a chore. Content Security Policy is a "defense in depth" layer that can stop an XSS attack even when your code has a vulnerability. By learning to read these reports—or using a tool to do it for you—you turn that wall of JSON from an annoyance into a roadmap for a more secure application.
Stop squinting at JSON and start securing your site. If you're ready to move from "What happened?" to "I fixed it," use the GlitchReplay CSP Violation Decoder for a plain-English explanation and an instant security verdict.
GlitchReplay is Sentry-SDK compatible, includes session replay and security signals, and never charges per event. Free to start, five minutes to first event.