
Draft an internal launch readiness check before the public release

Your team is two weeks from a public launch and the question keeps coming up at standup: are we actually ready? This prompt drafts an internal launch readiness check that puts product, engineering, support, sales, marketing, and legal on the same artifact, with explicit pass/fail conditions per row, so the go decision is not a vibe.

Delivery
1 use · Published 5/8/2026 · Updated 5/8/2026

A Public Launch Is the Easy Part; Internal Readiness Is Where Launches Actually Fail

Public launches that fail rarely fail because the product is broken. They fail because support was not trained, sales did not have the demo, the analytics dashboard did not ship, the legal review came in late, or the on-call rotation was thin. Each of those gaps was preventable; the team just did not have a single artifact that forced every function to commit to a pass condition before launch day.

The internal readiness check is that artifact. It is not a status doc. It is a contract.

Atlassian's writing on product launches and agile-at-scale practices frames the same point: cross-functional launches require explicit ownership and an explicit pass condition per function. A general "we are ready" never holds up at scale.

Why a status matrix beats a checklist

Status checklists drift to all green because owners self-attest and there is no peer review. A matrix that requires a specific pass criterion per row, plus a peer sanity check, blocks the drift. The team cannot land a launch with five "looks good" rows; each row has to point to evidence.

Three discipline rules make the matrix work:

  • Specific pass criteria. "Support is ready" is not a criterion. "5 macros are deployed and tested with two real tickets" is.
  • Peer sanity check on greens. Self-attestation produces optimistic greens. A peer reads the evidence and confirms.
  • No green without evidence. Evidence can be a screenshot, a test report, a deploy URL, or a brief written confirmation. It cannot be a verbal "yes."

How the "Draft an internal launch readiness check" prompt works

Step 1 sets the launch tier. Tier 1 is broad public with press; tier 2 is segment or beta graduation; tier 3 is silent or flag-gated. The tier scopes the matrix so a flag-gated rollout does not need the marketing rows.

Step 2 builds the matrix. Each row has a function, an owner, and a pass criterion. The standard set covers engineering, QA, support, sales, marketing, legal, analytics, billing, security, and comms. Tier 2 and 3 trim the rows that do not apply.

Step 3 defines hard pass/fail per row. The prompt forces specific tests. "Telemetry exists" is replaced by "the launch dashboard shows live data for the 5 key events." Specificity is what prevents drift to green.
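One way to make "specific pass criterion" and "no green without evidence" concrete is to model a matrix row as a small record that refuses to go green without evidence and a named peer. This is an illustrative sketch, not part of the prompt itself; the `ReadinessRow` name, fields, and example values are assumptions:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    RED = "red"
    YELLOW = "yellow"
    GREEN = "green"

@dataclass
class ReadinessRow:
    function: str            # e.g. "support", "analytics"
    owner: str
    pass_criterion: str      # a specific, testable statement, not "ready"
    status: Status = Status.RED
    evidence: list[str] = field(default_factory=list)  # links, screenshots, test reports
    peer: str = ""           # who sanity-checked the green
    peer_checked: bool = False

    def mark_green(self, evidence: list[str], peer: str) -> None:
        """A row can only go green with evidence and a named peer reviewer."""
        if not evidence:
            raise ValueError(f"{self.function}: no green without evidence")
        self.evidence = evidence
        self.peer = peer
        self.peer_checked = True
        self.status = Status.GREEN

row = ReadinessRow(
    function="support",
    owner="dana",
    pass_criterion="5 macros deployed and tested with two real tickets",
)
row.mark_green(["https://helpdesk.example.com/macros/launch"], peer="lee")
```

The point of the structure is that the optimistic shortcut (flipping a status flag without evidence) is not expressible.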

Step 4 schedules dry runs. T-7 days pulls every owner together with the matrix. Reds and yellows get a blocker and a date. Greens get a peer sanity check. T-3 days revisits anything that was yellow.

Step 5 defines the go/no-go criteria. Tier 1 demands all green at T-1 except for at most two yellows with documented mitigations; any red is no-go. The criterion is in writing so the launch decision is not a debate the day before.
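The Tier 1 rule above (all green at T-1, at most two yellows with documented mitigations, any red is no-go) is mechanical enough to write down as a function, which is the spirit of "the criterion is in writing." A minimal sketch, assuming each row is a dict with a `status` key and an optional `mitigation` key (names are illustrative):

```python
def tier1_go(rows: list[dict]) -> tuple[bool, list[str]]:
    """Tier 1 go/no-go: any red blocks launch; at most two yellows,
    and every yellow must carry a documented mitigation."""
    blockers = [f"red: {r['function']}" for r in rows if r["status"] == "red"]
    yellows = [r for r in rows if r["status"] == "yellow"]
    if len(yellows) > 2:
        blockers.append(f"{len(yellows)} yellows (max 2 allowed)")
    blockers += [
        f"yellow without mitigation: {r['function']}"
        for r in yellows
        if not r.get("mitigation")
    ]
    return (len(blockers) == 0, blockers)

rows = [
    {"function": "engineering", "status": "green"},
    {"function": "support", "status": "yellow", "mitigation": "brown-bag at T-2"},
    {"function": "legal", "status": "green"},
]
go, blockers = tier1_go(rows)  # go: one mitigated yellow, no reds
```

Because the function returns the list of blockers rather than a bare boolean, the T-1 conversation is about named gaps, not a debate.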

Step 6 defines the day-1 watchlist: what the team will monitor in the first 24-72 hours and the thresholds that trigger a rollback. SVPG's writing on team objectives and its book Transformed both emphasize that launches are observed, not declared. The watchlist is what makes observation systematic.
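A day-1 watchlist can likewise be written down as data rather than prose: metrics, rollback thresholds, and a check that names the breaches. The metric names and limits below are illustrative assumptions, not values from the prompt:

```python
# Hypothetical day-1 rollback thresholds: metric name -> upper limit.
WATCHLIST = {
    "error_rate_pct": 2.0,    # rollback if error rate exceeds 2%
    "p95_latency_ms": 800,    # rollback if p95 latency exceeds 800 ms
    "signup_drop_pct": 30.0,  # rollback if signups drop more than 30%
}

def breached(observed: dict) -> list[str]:
    """Return the watchlist metrics whose observed value exceeds its limit."""
    return [m for m, limit in WATCHLIST.items() if observed.get(m, 0) > limit]

alerts = breached({"error_rate_pct": 3.5, "p95_latency_ms": 420})
```

Writing the thresholds down before launch means the rollback decision at 2 a.m. is a lookup, not a negotiation.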

When the matrix saves a launch

Three patterns where the matrix earns its keep:

  • Late legal review. Legal sees the matrix at T-7 and flags a privacy update needed for two markets. The team ships those markets a week later instead of pulling the launch globally.
  • Untrained support. Support flags that only 4 of 12 reps have been briefed. The team adds a brown-bag at T-2 and avoids launch-day chaos.
  • Telemetry gap. Analytics flags that two of five key events do not log to the dashboard. Engineering ships the fix at T-3 and day-1 monitoring is intact.

Atlassian's writing on sprint planning reinforces the underlying discipline: large initiatives that touch many teams require explicit ownership artifacts, not implicit shared understanding.

When to use it

  • A public launch is two to four weeks out and the team has been answering "are we ready" with a feeling.
  • A previous launch suffered from a preventable gap (support was untrained, telemetry shipped late, legal flagged something at the last minute).
  • A new product is going to market and the team has not run a launch at this scale before.
  • A regulator-sensitive launch needs explicit legal and security sign-off.
  • A leadership review is asking which launches are go and which are at risk.

Common pitfalls

  • All green by self-attestation. Peer sanity checks are non-negotiable.
  • Generic pass criteria. "Ready" is not a criterion. Specific tests are.
  • No tier scoping. A silent flag rollout does not need the marketing rows; a Tier 1 public launch needs all of them.

Sources

  1. Product launch (Atlassian)
  2. Agile at scale (Atlassian)
  3. Sprint planning (Atlassian)
  4. Team objectives overview (Silicon Valley Product Group)
  5. Transformed (Silicon Valley Product Group)


Ready to try the prompt?

Open the live prompt detail page for the full workflow.

