Migrating from Sentry to GlitchReplay in one afternoon

Step-by-step DSN swap, source-map upload, alert rule mapping, and rollback plan. No code changes beyond a single env var.

GlitchReplay team
migration · sentry · tutorial

It starts with a "Usage Alert" email from Sentry at 10:00 AM. By 2:00 PM, you've hit your monthly event cap, and you're faced with a choice: pay a $400 overage fee or fly blind until the next billing cycle. It is a stressful, high-stakes game of whack-a-mole where every success in your application's growth is punished by an increasing observability tax. We've all been there, staring at a dashboard, wondering if we should disable certain error types just to keep the bill under control.

What if you could swap that variable-cost stress for a flat-rate infrastructure that uses the exact same SDK you've already spent years tuning? At GlitchReplay, we believe that you shouldn't have to decide which bugs are "worth" tracking based on your remaining budget for the month. Because we are wire-compatible with the Sentry SDK, the actual code change to migrate is often just a single environment variable. You can move from consumption-based anxiety to fixed-cost clarity in less than an afternoon.

The "Drop-in" Reality: Why a Migration Doesn't Mean a Rewrite

The biggest hurdle in switching observability tools is usually the "rip and replace" fear. You've already integrated Sentry into your Next.js middleware, your background workers, and your frontend React components. You've set up custom tags, user context, and breadcrumb scrubbing. Redoing all of that work for a new API is a non-starter for most teams. This is why we built GlitchReplay to be a compatible ingest worker for existing Sentry SDKs like @sentry/nextjs, @sentry/browser, and @sentry/node.

Wire-compatibility explained: The /api/[project_id]/envelope/ endpoint

Sentry SDKs communicate with the server using a protocol known as the "Envelope" format. When an error occurs, the SDK doesn't just send a simple JSON POST; it packages an envelope header, an item header, and the event payload (plus items like breadcrumbs, attachments, or session metadata) into a newline-delimited envelope and sends it to an endpoint that usually looks like /api/[project_id]/envelope/. GlitchReplay implements the same endpoint structure and ingests the exact same data packets that the Sentry SDK produces.
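If you're curious what that traffic looks like on the wire, here's a rough, hand-rolled sketch of an envelope (the DSN, project ID, and event ID are placeholders, and in practice the SDK assembles and sends all of this for you):

// Hypothetical hand-rolled envelope: three newline-separated JSON lines
// (envelope header, item header, event payload) POSTed to the envelope endpoint
const event = { event_id: "9ec79c33ec9942ab8353589fcb2e04dc", level: "error", message: "smoke test" };

const envelope = [
  JSON.stringify({ dsn: "https://your_api_token@glitchreplay.com/project_id", sent_at: new Date().toISOString() }),
  JSON.stringify({ type: "event" }),
  JSON.stringify(event),
].join("\n");

await fetch("https://glitchreplay.com/api/project_id/envelope/", {
  method: "POST",
  headers: { "Content-Type": "application/x-sentry-envelope" },
  body: envelope,
});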

Because we support the same wire protocol, you don't need to change your Sentry.init() calls or hunt through your codebase for every instance of Sentry.captureException(). You simply point the existing SDK at a different destination. It's the equivalent of changing the SMTP server in your email configuration—the messages stay the same, but the delivery truck changes.

Why you don't need to change your Sentry.init() calls

Maintaining your existing initialization logic is crucial for stability. Your Sentry.init() likely contains complex configuration for beforeSend hooks, PII scrubbing rules, and environment detection. By keeping the Sentry SDK in place, you ensure that your application's internal logic for capturing errors remains untouched. You get to keep the battle-tested reliability of the Sentry SDK while benefiting from GlitchReplay's flat-rate backend and integrated session replays.

// Your existing initialization remains exactly the same
Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  integrations: [new Sentry.Replay()],
  // All your custom logic stays here
  beforeSend(event) {
    if (event.user) {
      delete event.user.email;
    }
    return event;
  },
});

Step 1: The DSN Swap and Environment Configuration

The Data Source Name (DSN) is the primary configuration string that tells the SDK where to send data and which project it belongs to. A standard Sentry DSN looks something like https://abc123def456@o789.ingest.sentry.io/450123. It contains a public key, an organization identifier, and a project ID. GlitchReplay uses a simplified version of this format that is still recognized by the SDK's internal parser.
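If you've never unpacked a DSN, its pieces map cleanly onto a standard URL, which is exactly how the SDK treats it (the GlitchReplay DSN below is a placeholder):

// Anatomy of a DSN, using the platform URL parser
const dsn = new URL("https://your_api_token@glitchreplay.com/project_id");
dsn.username; // the public key / API token the SDK attaches to every request
dsn.host;     // the ingest host: glitchreplay.com instead of oXXX.ingest.sentry.io
dsn.pathname; // "/project_id": the project that should receive the events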

Creating your first project in GlitchReplay

When you sign up for GlitchReplay, the first thing you'll do is create a project. We'll provide you with a new DSN immediately. This DSN is mapped to our ingest servers, which are distributed across the Cloudflare network to ensure minimal latency for your users, regardless of where they are in the world. Once you have this string, you're 90% of the way to a finished migration.

Updating .env.production: From sentry.io to glitchreplay.com

In most modern frameworks, the DSN lives in an environment variable (SENTRY_DSN on the server, or NEXT_PUBLIC_SENTRY_DSN for client-side Next.js code, as in the init snippet above). To migrate, you simply update your .env.production (or your Vercel/Netlify/Cloudflare dashboard) with the new GlitchReplay DSN. Here is a comparison of how the configuration changes:

# Before: Sentry
SENTRY_DSN=https://public_key@o0.ingest.sentry.io/12345

# After: GlitchReplay
SENTRY_DSN=https://your_api_token@glitchreplay.com/project_id

Handling "Tunneling" and proxying

If you previously configured a Sentry "tunnel" to bypass ad-blockers (which often block requests to sentry.io), you'll need to ensure your proxy endpoint is updated to forward requests to glitchreplay.com. Since GlitchReplay is often hosted on the same domain as your application (if you're using our Cloudflare integration), many ad-blockers won't even see the request as a third-party tracking script, which can actually improve your error capture rates significantly.
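If you use the SDK's built-in tunnel option, the client keeps posting to a first-party path and a small proxy forwards the raw envelope upstream. Here is a minimal sketch as a Next.js App Router route handler; the /api/bug-tunnel path and the project ID in the upstream URL are placeholders for your own values:

// In Sentry.init: send envelopes to a first-party path instead of a third-party host
Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  tunnel: "/api/bug-tunnel",
});

// app/api/bug-tunnel/route.js: forward the raw envelope to GlitchReplay unchanged
export async function POST(request) {
  const envelope = await request.text();
  const upstream = await fetch("https://glitchreplay.com/api/project_id/envelope/", {
    method: "POST",
    headers: { "Content-Type": "application/x-sentry-envelope" },
    body: envelope,
  });
  return new Response(null, { status: upstream.status });
}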

Step 2: Source Map Parity (Build-time Integration)

Error tracking is only half as useful if you're staring at a stack trace that points to main.a7b2c.js:1:4502. To get meaningful insights, the server needs your source maps to de-obfuscate the minified production code. While Sentry has a proprietary CLI and various Webpack plugins, GlitchReplay uses a standard multipart/form-data POST API that is easy to integrate into any CI/CD pipeline.

Generating a GlitchReplay API Token for CI/CD

Go to your Project Settings in GlitchReplay and generate an API Token. This token allows your build server to upload source maps securely without needing to expose your primary account credentials. Store this as GLITCHREPLAY_API_TOKEN in your CI secrets.

Updating your GitHub Actions or Vercel build scripts

Instead of using sentry-cli, you can use a simple curl command or a small Node.js script to upload your maps. If you're looking for a deeper dive on how this works, check out our guide on fixing minified stack traces. Here is an example of what a deployment script might look like:

# Example of uploading source maps in a build script
curl -X POST https://glitchreplay.com/api/v1/sourcemaps \
  -H "Authorization: Bearer $GLITCHREPLAY_API_TOKEN" \
  -F "release=my-app@1.0.0" \
  -F "dist=production" \
  -F "file=@./dist/main.js.map"

Verifying stack trace de-obfuscation

Once you've uploaded your maps, trigger a test error in your production environment. Within seconds, you should see the error appear in the GlitchReplay dashboard with the original TypeScript or JavaScript source code highlighted. This "Time to First Event" is typically less than 5 minutes after the DSN swap. If the maps aren't resolving, check that the release name in your Sentry.init matches the release name used during the upload.
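Concretely, the release string has to be identical on both sides. The value below is the same placeholder used in the upload examples; in a real pipeline you'd inject it from package.json or the git SHA at build time:

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  // Must exactly match the "release" field sent with the source-map upload
  release: "my-app@1.0.0",
});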

Step 3: Mapping Alert Rules and Team Notifications

A common concern when leaving Sentry is losing the sophisticated alerting logic you've built. Error tracking is useless if nobody sees the errors until the next morning. GlitchReplay provides a robust alerting engine that mimics the core functionality you rely on, without the complexity of Sentry's "Issue States," which can often become cluttered.

Setting up Slack and Microsoft Teams webhooks

Most teams just want a ping when something is broken. We support standard webhooks for Slack and Microsoft Teams. You can configure these in the "Alerts" section of your dashboard. You can define rules like "Notify me when a new error is first seen" or "Notify me if an error occurs more than 50 times in 1 minute."

Recreating "First seen" and "Regression" alerts

GlitchReplay automatically tracks the first time a specific error fingerprint is seen. We also track when an error that was previously marked as "Resolved" reappears in a new release—this is a "Regression." These are the two most important alerts for any engineering team, and they are active by default in GlitchReplay.

Grouping logic: Fingerprints vs. Defaults

Sentry uses a complex, often opaque set of rules to group errors into "Issues." Sometimes it groups things that shouldn't be grouped, and other times it creates ten different issues for the same root cause. GlitchReplay uses a transparent fingerprinting system based on the stack trace. If you need more control, you can pass a fingerprint array in the Sentry SDK, and we will honor it exactly. This ensures that your grouping logic remains consistent across the migration.
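For example, to force every checkout failure into a single issue no matter which component throws it, the standard Sentry fingerprint API carries over unchanged (submitOrder and the fingerprint value are purely illustrative):

try {
  await submitOrder(cart);
} catch (err) {
  Sentry.withScope((scope) => {
    // Every event with this fingerprint lands in one GlitchReplay issue
    scope.setFingerprint(["checkout-failure"]);
    Sentry.captureException(err);
  });
}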

Step 4: Enabling Session Replay (The "Free Upgrade")

This is where the migration usually pays for itself. In Sentry, Session Replay is an add-on that gets expensive fast. It often feels like you have to choose between "knowing what happened" and "not going over budget." In GlitchReplay, session replay is a core feature included in our flat-rate pricing. We don't charge you extra for "Replay Units."

Configuring replaysSessionSampleRate

Because you are already using the Sentry SDK, enabling replays in GlitchReplay is just a matter of adjusting the sample rates in your initialization code. Unlike Sentry, where you might keep these rates low to save money, in GlitchReplay you can safely set replaysOnErrorSampleRate to 1.0, capturing a replay for every single error, without fearing a massive bill.

Sentry.init({
  dsn: "https://your_api_token@glitchreplay.com/project_id",
  replaysSessionSampleRate: 0.1, // Sample 10% of all sessions
  replaysOnErrorSampleRate: 1.0, // Always capture a replay when an error occurs
});

The "30-second pre-error window"

One of the ways we optimize storage (and keep our prices flat) is by focusing on the 30-second window leading up to an error. This is usually all you need to see the user's clicks and navigation that led to the crash. GlitchReplay handles this buffer automatically, ensuring you have the context you need to reproduce the bug without storing hours of idle video data.

Privacy first: Masking and Blocking

Since we use the rrweb engine under the hood (the same as Sentry), all your privacy settings like maskAllText or blockAllMedia carry over perfectly. You don't need to re-audit your PII compliance. If you've already configured the Sentry SDK to hide sensitive input fields, those fields will remain hidden in GlitchReplay.
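For example, a replay configuration like the one below behaves identically after the DSN swap; the .billing-form selector is just an illustration of blocking one sensitive region:

Sentry.init({
  dsn: "https://your_api_token@glitchreplay.com/project_id",
  integrations: [
    new Sentry.Replay({
      maskAllText: true,        // replace all visible text with asterisks
      blockAllMedia: true,      // never record images, video, or canvas pixels
      block: [".billing-form"], // illustrative: exclude one sensitive region entirely
    }),
  ],
  replaysOnErrorSampleRate: 1.0,
});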

Step 5: The Dual-Run Strategy (Zero-Risk Rollback)

If you are managing a high-traffic production application, the idea of "flipping a switch" on your error tracking might feel risky. What if events are dropped during the cutover? What if the new system doesn't group things the way you expect? For these cases, we recommend a "Shadowing" technique where you send events to both Sentry and GlitchReplay for 24 hours.

The "Shadowing" technique

You can initialize two instances of the Sentry client (or use a simple beforeSend hook to duplicate the event). This lets you compare the dashboards side by side and verify that event counts match and that metadata is being captured correctly, giving your team confidence that the new system is reliable before you finally pull the plug on your Sentry subscription.

// Shadow events to both backends during migration: a second, standalone client
// forwards a copy of each event to GlitchReplay (@sentry/browser v7+ APIs)
const shadow = new Sentry.BrowserClient({
  dsn: "https://your_api_token@glitchreplay.com/project_id",
  transport: Sentry.makeFetchTransport,
  stackParser: Sentry.defaultStackParser,
  integrations: [],
});
// The primary init keeps its Sentry DSN; beforeSend duplicates each event
Sentry.init({ dsn: "sentry_dsn", beforeSend: (e) => (shadow.captureEvent(e), e) });

The Final Cutover

Once you've verified that GlitchReplay is capturing everything you need, you can perform the final cutover. Remove the old Sentry DSN, cancel your Sentry subscription, and breathe a sigh of relief. You've just moved from a world of "usage-based anxiety" to a world where observability is a fixed, predictable part of your stack.

Conclusion: From Consumption-Based Anxiety to Fixed-Cost Clarity

The shift from Sentry to GlitchReplay is about more than just saving money—it is about changing your relationship with your production data. When you aren't being charged for every click and every crash, you start using your tools differently. You stop worrying about "cluttering" the dashboard and start focusing on actually fixing the bugs. You can learn more about why we think this is the future of observability in our comparison of flat-rate vs per-event pricing.

By 3:00 PM, you can be checking your first real production bug in GlitchReplay, watching a high-fidelity replay of exactly what the user did, and knowing that your bill at the end of the month won't change by a single cent, no matter how many errors you find. Stop paying the "success tax" on your application. Switch to GlitchReplay today for Sentry-compatible tracking that actually lets you sleep at night.

Stop watching your error bill spike.

GlitchReplay is Sentry-SDK compatible, includes session replay and security signals, and never charges per event. Free to start, five minutes to first event.