The combination of Grafana OnCall’s archival and Atlassian’s premium‑only integrations forces SRE teams to rebuild notification plumbing they hoped to avoid.

When Opsgenie’s pricing rose and its data‑center offering narrowed, many engineering groups turned to self‑hosted alternatives—most prominently Grafana OnCall OSS. That hope was short‑lived. In March 2026 Grafana announced the archival of the open‑source project and the removal of Cloud Connection support for phone, SMS, and push notifications, effectively pushing teams back toward heavyweight, custom‑built notification stacks. At the same time Atlassian’s migration documentation makes clear that only Premium customers can enjoy bidirectional integrations, Twilio number porting, and automatic chat reconnects. The net result is a thinner, more fragmented self‑hosted on‑call market precisely when incident‑response owners need reliable, low‑maintenance tooling.

Below we unpack the timeline, the technical fallout, and the strategic choices SRE leads face when the “self‑hosted” promise collapses.


What caused Grafana OnCall OSS to disappear, and how does it affect self‑hosted teams?

Grafana launched its cloud‑first on‑call service in late 2021 and open‑sourced the product in 2022 to give teams a self‑managed, on‑premises alternative to commercial SaaS tools. The open‑source edition was marketed as “simple to use, accessible to everyone, and allows engineers to actually sleep well at night” — a direct appeal to teams that wanted control without the operational overhead of building their own notification pipeline.

However, the safety net did not last. At the original OSS launch, Grafana's cloud team provided a global 24/7 monitoring layer for SMS, Slack, and other channels; by March 2024 the "Cloud Connection" feature that automatically routed phone, SMS, and push notifications through Grafana's managed service had been sunset, leaving OSS users to provision their own Twilio, Vonage, or similar providers.

When the project was finally archived on 24 March 2026, the official Grafana page stopped listing a download option and the repository entered maintenance‑only mode. For teams that had relied on the OSS promise of “no‑ops notification plumbing,” the change translates into new engineering work, vendor lock‑in risk, and a loss of the original cost‑saving narrative.

In short, the disappearance of Grafana OnCall OSS removes one of the few truly open‑source on‑call options, forcing groups to either pay for a commercial tier or rebuild the stack from scratch.


How do Atlassian’s migration docs increase the pain of self‑hosting?

Atlassian’s own documentation for migrating from Data Center to Cloud now explicitly restricts critical integrations to Premium licenses. The migration guide lists bidirectional sync with external alerting tools, Twilio number porting for voice/SMS alerts, and automatic reconnection of chat channels (Slack, Microsoft Teams) as premium‑only features.

For teams that have already invested in self‑hosted Opsgenie or PagerDuty replacements, this creates a double‑lock: not only is the open‑source on‑call engine disappearing, but the downstream integrations they need to keep functional are now gated behind a higher‑priced tier. The result is a technical debt spiral—engineers must either:

  • upgrade to Atlassian Premium (and absorb the cost), or
  • build and maintain custom adapters for each integration point, a task that historically required deep knowledge of both the notification provider APIs and Grafana’s own webhook format.
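The second option above, a custom adapter, boils down to translating the on‑call engine's webhook payload into each provider's API request. A minimal sketch in Python, assuming hypothetical payload field names (real Grafana OnCall webhooks use a different schema, so adapt the mapping to your engine's actual format):

```python
import json
from urllib import request

def translate_alert(payload: dict) -> dict:
    """Map a generic on-call webhook payload to an SMS provider request.

    The field names below ("oncall_phone", "severity", "title") are
    assumptions for illustration, not Grafana OnCall's real schema.
    """
    return {
        "to": payload["oncall_phone"],
        "body": f"[{payload.get('severity', 'unknown').upper()}] "
                f"{payload.get('title', 'untitled alert')}",
    }

def send_sms(provider_url: str, message: dict, api_key: str) -> int:
    """POST the translated message to a generic provider endpoint."""
    req = request.Request(
        provider_url,
        data=json.dumps(message).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with request.urlopen(req) as resp:
        return resp.status
```

Every new provider means another `translate_alert`/`send_sms` pair, plus tests, plus on‑call for the adapter itself, which is exactly the maintenance burden the OSS project used to absorb.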

Kindalame's article on self‑hosted documentation after Atlassian's data‑center cutoff illustrates how similar lock‑ins have already forced teams to re‑evaluate their tooling stacks, often opting for hybrid solutions that retain some SaaS components while keeping core data on‑prem.


What does heavier self‑managed notification plumbing look like?

When Grafana’s Cloud Connection disappears, teams must provision and maintain their own telephony and SMS providers. That typically involves:

  • Setting up a Twilio (or alternative) account, buying numbers, configuring webhook endpoints, and handling failover logic.
  • Building a retry and escalation engine that mirrors the reliability guarantees previously offered by Grafana’s global monitoring team.
  • Integrating with chat platforms via custom bots that can reconnect after network interruptions—a problem Atlassian’s premium tier now solves automatically.
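The retry‑and‑escalation engine in the second bullet can be sketched provider‑agnostically. In this illustrative Python sketch, `send` is a placeholder for whatever Twilio, Vonage, or chat call you wire in; the loop structure is the part you have to own once Grafana's managed layer is gone:

```python
import time
from typing import Callable, Optional, Sequence

def notify_with_escalation(
    send: Callable[[str], bool],      # returns True on confirmed delivery
    escalation_chain: Sequence[str],  # contacts in escalation order
    retries_per_contact: int = 3,
    backoff_seconds: float = 2.0,
) -> Optional[str]:
    """Try each contact in order, retrying with exponential backoff.

    Returns the contact that acknowledged delivery, or None if the
    whole escalation chain was exhausted.
    """
    for contact in escalation_chain:
        delay = backoff_seconds
        for _ in range(retries_per_contact):
            if send(contact):
                return contact
            time.sleep(delay)
            delay *= 2  # exponential backoff between retries
    return None
```

A production version also needs persistence (so an engine restart doesn't drop an in‑flight escalation), delivery‑receipt handling, and alerting on the notifier itself, each of which is another service to run.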

The self‑hosted observability article on Kindalame shows how even a seemingly straightforward stack—ClickHouse, Prometheus scrapers, Grafana dashboards—requires a dedicated ops team to keep it healthy. The same principle applies to on‑call: what was once a “plug‑and‑play” OSS component becomes a full‑blown service that must be monitored, scaled, and secured.

For many SRE leads, that added burden outweighs the perceived cost savings of staying off‑cloud. The opportunity cost—time spent on notification reliability instead of core product work—quickly erodes any financial advantage.


Should teams double‑down on commercial SaaS or double‑up on self‑hosting?

The decision now hinges on three practical considerations:

| Consideration | Self‑Hosted (post‑Grafana) | Commercial SaaS (Opsgenie / PagerDuty) |
| --- | --- | --- |
| Up‑front cost | Low (software free) but high ops overhead | Predictable subscription fees |
| Reliability | Dependent on in‑house telephony/SMS setup | Vendor‑managed global redundancy |
| Compliance & data residency | Full control, but requires audit effort | Vendor may offer compliant zones, but at a premium |

If your organization already runs a robust platform engineering team that can absorb the notification plumbing work, staying self‑hosted may still make sense—especially for regulated environments where data residency is non‑negotiable. However, for most mid‑size SRE groups, the combined friction of Grafana’s OSS sunset and Atlassian’s premium‑only integrations tips the scales toward a commercial solution, even if that means paying for a higher tier.


How can incident‑response owners future‑proof their on‑call strategy?

  1. Abstract the integration layer – Use a thin middleware (e.g., an open‑source webhook router) that can swap out providers without rewriting core escalation logic.
  2. Invest in observability of the notification pipeline – Treat SMS/voice delivery as first‑class metrics; Grafana’s own blog stresses the importance of a global 24/7 test harness for these channels. Replicate that monitoring internally.
  3. Maintain a vendor‑agnostic contract – When negotiating with Atlassian or any other SaaS partner, ask for contractual guarantees that keep critical integrations available at the lower tier, or secure an exit clause that protects you from future feature lock‑ins.
  4. Plan for a migration runway – Keep a lightweight “fallback” on‑call tool (e.g., a simple PagerDuty free tier) ready to take over if your primary self‑hosted stack fails.
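Step 1 above, the abstracted integration layer, can be as small as a protocol plus a router. In this sketch, `LoggingNotifier` and `WebhookRouter` are illustrative names rather than any real product's API; a real deployment would register Twilio, Vonage, or Slack clients behind the same interface:

```python
from typing import Protocol

class Notifier(Protocol):
    """Any channel (SMS, voice, chat) that can deliver an alert."""
    def send(self, target: str, message: str) -> bool: ...

class LoggingNotifier:
    """Stand-in provider for illustration; records messages in memory."""
    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []

    def send(self, target: str, message: str) -> bool:
        self.sent.append((target, message))
        return True

class WebhookRouter:
    """Routes alerts to named channels. Swapping providers means
    registering a different Notifier; escalation logic is untouched."""
    def __init__(self) -> None:
        self._channels: dict[str, Notifier] = {}

    def register(self, name: str, notifier: Notifier) -> None:
        self._channels[name] = notifier

    def dispatch(self, channel: str, target: str, message: str) -> bool:
        return self._channels[channel].send(target, message)
```

Because escalation code only ever talks to the router, replacing a gated or discontinued provider becomes a one‑line `register` change rather than a rewrite, which is the whole point of the abstraction.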

By building modular, observable, and contract‑aware on‑call pipelines today, teams can avoid being caught in the next wave of SaaS feature gating.


What’s the bottom line for SRE leads evaluating PagerDuty, Opsgenie, or a rebuilt self‑hosted stack?

The market for self‑hosted on‑call management has thinned precisely when reliability demands are highest. Grafana OnCall OSS’s archival removes a key open‑source pillar, while Atlassian’s migration docs effectively price‑gate the integrations that keep a self‑hosted stack functional.

If your organization values full control and data sovereignty above all else, you’ll need to budget for the extra engineering effort and accept the higher operational risk. If you prioritize speed, reliability, and predictable cost, moving to a commercial SaaS tier—despite the price—may be the safer path.

The choice is no longer about “which tool replaces Opsgenie?” but about whether the self‑hosted model still delivers the promised ROI in a landscape where the supporting ecosystem is receding.


What’s your experience with the recent Grafana and Atlassian changes? Have you rebuilt your notification pipeline, or are you shifting to a commercial solution? Share your thoughts below—let’s discuss how the on‑call community can adapt together.