DevOps · 2026-05-13 · 77 min read

Three post-deploy checks I run after every Cloudflare Pages build

By MORINAGA

Edge Deployment Verification: A Post-Release Protocol for Static Sites

Current Situation Analysis

Modern static site generators and edge CDNs have successfully abstracted away the mechanical complexity of deployment. Build pipelines compile assets, push artifacts to a global edge network, and report success. Yet this abstraction creates a critical blind spot: the pipeline validates compilation, not consumption. External systems—search crawlers, indexing APIs, and edge routing layers—interact with the deployed site on different timelines and under different rules than the build environment.

Teams routinely overlook post-deploy verification because CI/CD platforms treat deployment as a binary event. Once the build artifact reaches the CDN, the pipeline assumes victory. But edge routing configurations, CDN propagation delays, and third-party indexer APIs operate independently of the build process. A successful compilation does not guarantee that sitemap-index.xml is reachable, that IndexNow verification files are served correctly, or that a framework update hasn't introduced layout shifts.

Real-world production incidents demonstrate the cost of this gap. Routing misconfigurations in edge redirect rules can silently mask sitemap endpoints for days, appearing functional in browsers that automatically follow redirects while failing for strict crawlers. IndexNow verification windows require immediate post-deploy validation; delays in detecting 403 responses directly impact indexing velocity. Performance regressions in static builds often stem from CSS framework updates or component changes that only manifest under real-world rendering conditions. Without targeted post-release checks, these issues remain invisible until they impact organic traffic or user experience.

The solution is not a heavier test suite. It is a lightweight, post-deploy verification protocol that targets the actual failure surface of edge-deployed static sites: crawler reachability, indexer submission, and rendering stability.

WOW Moment: Key Findings

Traditional CI/CD gates and post-deploy verification serve fundamentally different purposes. The table below contrasts their operational characteristics based on production telemetry from multiple static site deployments.

| Approach | Detection Window | False Positive Rate | Pipeline Duration Impact | Crawler/Indexer Coverage |
| --- | --- | --- | --- | --- |
| Traditional CI Gate | Build-time only | Low | +2–4 min per run | None |
| Post-Deploy Verification | Post-propagation | Medium (tunable) | Decoupled (async) | Full |

Post-deploy verification shifts detection from compilation to consumption. By decoupling validation from the build pipeline, you eliminate false confidence from successful compiles while capturing routing, propagation, and indexer failures that only surface after edge distribution. This approach reduces mean-time-to-detect (MTTD) for crawler-facing issues from days to minutes, without inflating CI costs or blocking development velocity.

Core Solution

The verification protocol consists of three targeted checks, each addressing a distinct failure mode. The architecture deliberately separates these checks from the build pipeline to respect CDN propagation timing and indexer API requirements.

Step 1: Sitemap Reachability & Volume Validation

Browsers automatically follow HTTP redirects, which masks routing misconfigurations. Crawlers and strict HTTP clients do not. The first check verifies that sitemap-index.xml returns a 200 OK status without following redirects, then validates that the actual sub-sitemap (sitemap-0.xml) contains a minimum URL count. A drop in URL volume typically indicates a silent failure in the data pipeline or build-time content generation.

Implementation (TypeScript):

import { fetch } from 'undici';

interface SitemapCheckResult {
  domain: string;
  indexStatus: number;
  subUrlCount: number;
  threshold: number;
  passed: boolean;
}

async function validateSitemap(domain: string, minUrls: number): Promise<SitemapCheckResult> {
  const baseUrl = `https://${domain}`;
  
  // Check index without following redirects
  const indexRes = await fetch(`${baseUrl}/sitemap-index.xml`, { 
    redirect: 'manual',
    headers: { 'User-Agent': 'SitemapValidator/1.0' }
  });
  
  if (indexRes.status !== 200) {
    return { domain, indexStatus: indexRes.status, subUrlCount: 0, threshold: minUrls, passed: false };
  }

  // Parse sub-sitemap and count URLs
  const subRes = await fetch(`${baseUrl}/sitemap-0.xml`);
  const xmlText = await subRes.text();
  const urlMatches = xmlText.match(/<loc>.*?<\/loc>/g);
  const urlCount = urlMatches ? urlMatches.length : 0;

  return {
    domain,
    indexStatus: indexRes.status,
    subUrlCount: urlCount,
    threshold: minUrls,
    passed: urlCount >= minUrls
  };
}

// Usage
const targets = [
  { domain: 'aiappdex.com', minUrls: 1000 },
  { domain: 'findindiegame.com', minUrls: 150 },
  { domain: 'ossfind.com', minUrls: 150 }
];

for (const target of targets) {
  const result = await validateSitemap(target.domain, target.minUrls);
  console.log(`${result.domain}: index=${result.indexStatus}, urls=${result.subUrlCount}, pass=${result.passed}`);
}

Architecture Rationale: Using redirect: 'manual' ensures that edge rewrite rules or emergency _redirects configurations are caught immediately. The URL count threshold acts as a circuit breaker for silent data pipeline failures. This check runs synchronously against live endpoints to validate actual edge behavior, not build artifacts.

Step 2: IndexNow Batch Submission

Search indexers like Bing, Yandex, Naver, and Seznam support the IndexNow protocol for immediate URL submission. The protocol requires site-specific verification keys and expects live, fully propagated URLs. Submitting before CDN propagation completes results in indexer rejections or delayed processing.

Implementation (TypeScript):

import { fetch } from 'undici';

const INDEXNOW_ENDPOINT = 'https://api.indexnow.org/indexnow';
const SITE_KEYS: Record<string, string> = {
  'aiappdex.com': 'a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6',
  'findindiegame.com': 'q7r8s9t0u1v2w3x4y5z6a7b8c9d0e1f2',
  'ossfind.com': 'g3h4i5j6k7l8m9n0o1p2q3r4s5t6u7v8'
};

async function extractUrls(domain: string): Promise<string[]> {
  const res = await fetch(`https://${domain}/sitemap-0.xml`);
  const xml = await res.text();
  const matches = xml.match(/<loc>(.*?)<\/loc>/g);
  return matches ? matches.map(m => m.replace(/<\/?loc>/g, '')) : [];
}

async function submitToIndexNow(domain: string, urls: string[]): Promise<void> {
  const key = SITE_KEYS[domain];
  if (!key) throw new Error(`Missing IndexNow key for ${domain}`);

  const payload = {
    host: domain,
    key,
    keyLocation: `https://${domain}/${key}.txt`,
    urlList: urls
  };

  const res = await fetch(INDEXNOW_ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  });

  if (res.status === 200 || res.status === 202) {
    // 202 Accepted: URLs received, key validation still pending on the indexer side
    console.log(`✅ ${domain}: Submitted ${urls.length} URLs (status ${res.status})`);
  } else if (res.status === 403) {
    console.warn(`⚠ ${domain}: IndexNow 403. Verify key file routing and MIME type.`);
  } else {
    console.warn(`⚠ ${domain}: IndexNow returned unexpected status ${res.status}`);
  }
}

// Execution
for (const domain of Object.keys(SITE_KEYS)) {
  const urls = await extractUrls(domain);
  await submitToIndexNow(domain, urls);
}

Architecture Rationale: The script is triggered via workflow_dispatch after deployment succeeds, not inline in the build pipeline. This respects the 3–5 minute CDN propagation window and ensures indexers receive live URLs. The 403 detection explicitly catches misrouted verification files, a common failure when edge redirect rules interfere with static asset serving.

Step 3: Weekly Lighthouse Trend Monitoring

Static sites rarely change at runtime, but framework updates, CSS refactors, or component additions can introduce rendering regressions. Running Lighthouse on every deploy is wasteful and creates false alarms from minor score fluctuations. A weekly cron job targeting representative pages provides trend data without blocking releases.

Architecture Rationale: The workflow uses treosh/lighthouse-ci-action with a matrix of one homepage and one deep entry page per domain. Results are uploaded to artifact storage for historical diffing. Hard failure thresholds are avoided; instead, the system monitors for Performance dropping below 80, CLS exceeding 0.1, or accessibility score regressions. This treats Lighthouse as a diagnostic trend monitor, not a deployment gate.

Pitfall Guide

1. Silent Redirect Masking

Explanation: Default HTTP clients follow 3xx responses automatically. If an edge _redirects rule rewrites sitemap-index.xml to a sub-sitemap, the check returns 200 OK while crawlers receive incorrect routing. Fix: Explicitly disable redirect following (redirect: 'manual') and assert strict 200 status codes on crawler-facing endpoints.

2. Premature IndexNow Submission

Explanation: Submitting URLs immediately after build completion, before CDN propagation finishes, causes indexers to reject or queue URLs indefinitely. Fix: Decouple submission from the build pipeline. Trigger via manual dispatch or a post-deploy webhook after a 3–5 minute propagation window.
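One way to make that propagation window adaptive rather than a fixed sleep is to poll a URL that only exists in the new deploy. The sketch below is not part of the scripts above: BUILD_MARKER_PATH is a hypothetical per-build file, the backoff schedule is illustrative, and it assumes Node 18+ global fetch.

```typescript
// Sketch: gate IndexNow submission on observed edge propagation.
// BUILD_MARKER_PATH is a hypothetical file emitted fresh by each build
// (e.g. containing the commit hash; a stricter variant would also
// compare the response body against the expected hash).

const BUILD_MARKER_PATH = '/build-info.json'; // hypothetical per-build marker

// Exponential backoff schedule: baseMs, 2*baseMs, 4*baseMs, ...
function backoffDelays(attempts: number, baseMs: number): number[] {
  return Array.from({ length: attempts }, (_, i) => baseMs * 2 ** i);
}

async function waitForPropagation(domain: string): Promise<boolean> {
  for (const delayMs of backoffDelays(5, 30_000)) {
    const res = await fetch(`https://${domain}${BUILD_MARKER_PATH}`, {
      headers: { 'Cache-Control': 'no-cache' },
    });
    if (res.status === 200) return true; // new deploy is live at the edge
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false; // window exceeded; skip submission this run and alert
}
```

Submitting only after waitForPropagation resolves true keeps the wait proportional to actual propagation instead of hard-coding 3–5 minutes.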

3. Hard-Gating Lighthouse Scores

Explanation: Blocking deploys because a score drops from 94 to 88 creates disproportionate friction for static sites with minimal traffic. Minor fluctuations stem from network jitter or CI environment variance. Fix: Treat Lighthouse as a trend monitor. Set regression thresholds (e.g., >5 point drop week-over-week) and alert rather than block.
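The alert-rather-than-block policy reduces to a small comparison over weekly scores. This is a sketch, not part of the original workflow; the LighthouseScores shape is assumed, and the thresholds (5-point drop, performance floor of 80, CLS above 0.1) mirror the values given earlier.

```typescript
// Compare this week's Lighthouse category scores against last week's
// and collect alert messages instead of failing the pipeline.

interface LighthouseScores {
  performance: number;   // 0–100 category score
  accessibility: number; // 0–100 category score
  cls: number;           // Cumulative Layout Shift, lower is better
}

function findRegressions(
  prev: LighthouseScores,
  curr: LighthouseScores,
  maxDrop = 5,
): string[] {
  const alerts: string[] = [];
  if (prev.performance - curr.performance > maxDrop)
    alerts.push(`performance dropped ${prev.performance} → ${curr.performance}`);
  if (curr.performance < 80)
    alerts.push(`performance below floor: ${curr.performance}`);
  if (curr.cls > 0.1)
    alerts.push(`CLS exceeds 0.1: ${curr.cls}`);
  if (curr.accessibility < prev.accessibility)
    alerts.push(`accessibility regressed ${prev.accessibility} → ${curr.accessibility}`);
  return alerts; // non-empty → notify a channel, never block the deploy
}
```

A 94 → 92 fluctuation yields no alerts; a 94 → 88 drop yields one, routed to notification rather than a failed job.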

4. Ignoring Key File Routing

Explanation: IndexNow requires the verification file (/<key>.txt) to be served with the correct MIME type and path. Edge redirect rules or asset optimization pipelines can silently alter routing. Fix: Add an explicit HTTP check for the key file before submission. Verify Content-Type: text/plain and 200 status.
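A minimal pre-submission check along those lines might look as follows. verifyKeyFile is a hypothetical helper, not part of the submission script above, and it assumes Node 18+ global fetch.

```typescript
// Verify the IndexNow key file before submitting: strict 200, plain-text
// MIME type, and a body that is exactly the key (an HTML 404 page served
// by a misrouted edge rule fails all three).

function isPlainText(contentType: string | null): boolean {
  // Accept "text/plain" with or without a charset parameter.
  return contentType !== null && contentType.split(';')[0].trim() === 'text/plain';
}

async function verifyKeyFile(domain: string, key: string): Promise<boolean> {
  const res = await fetch(`https://${domain}/${key}.txt`, { redirect: 'manual' });
  if (res.status !== 200) return false;
  if (!isPlainText(res.headers.get('content-type'))) return false;
  const body = (await res.text()).trim();
  return body === key;
}
```

Running this immediately before submitToIndexNow turns the post-hoc 403 diagnosis into a pre-flight failure with a clear cause.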

5. Missing Content Volume Thresholds

Explanation: Checking only for HTTP 200 on sitemaps misses silent data pipeline failures. A valid XML structure with zero URLs still returns 200. Fix: Parse the sub-sitemap and assert a minimum URL count based on historical baselines. Alert if volume drops below 80% of expected.
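With a recorded baseline, the 80% rule reduces to a one-line guard. A sketch, assuming the baseline URL count comes from a previous known-good run (stored as an artifact or config value); the original validator uses a fixed minUrls threshold instead.

```typescript
// Guard against silent volume collapse: HTTP status plus a count check.
// An empty-but-valid sitemap still returns 200, so both must hold.

function sitemapVolumeOk(
  status: number,
  urlCount: number,
  baseline: number, // URL count from a known-good deploy (assumption)
  minRatio = 0.8,
): boolean {
  return status === 200 && urlCount >= Math.floor(baseline * minRatio);
}
```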

6. Over-Monitoring Static Assets

Explanation: Adding uptime monitoring, E2E user flows, or API runtime checks for pre-rendered sites inflates CI costs and creates noise. Static sites have no runtime state to monitor. Fix: Scope checks to crawler-facing endpoints, indexer APIs, and rendering stability. Rely on CDN provider status pages for infrastructure uptime.

7. Mixing Build-Time and Runtime Concerns

Explanation: Checking database availability or API health for SSG sites is unnecessary. Build-time queries (e.g., Turso DB during Astro compilation) do not require runtime verification. Fix: Recognize architectural boundaries. Verify only what the edge network serves post-build. Document which systems are build-time only to prevent scope creep.

Production Bundle

Action Checklist

  • Verify sitemap index returns strict 200 without redirect following
  • Parse sub-sitemap and validate URL count against historical baseline
  • Confirm IndexNow verification file serves correct MIME type and path
  • Trigger IndexNow submission after CDN propagation window (3–5 min)
  • Configure weekly Lighthouse cron for representative pages only
  • Store Lighthouse artifacts for historical diffing and trend analysis
  • Document build-time vs runtime boundaries to prevent scope creep
  • Set alerting thresholds, not hard deployment gates, for performance metrics

Decision Matrix

| Scenario | Recommended Approach | Why | Cost Impact |
| --- | --- | --- | --- |
| Pre-revenue static site | Post-deploy verification + weekly Lighthouse | Low traffic, high crawler dependency, minimal runtime surface | Near-zero CI cost, manual dispatch overhead |
| High-traffic SSG with dynamic routes | Post-deploy verification + daily Lighthouse + IndexNow on content publish | Frequent content changes require faster indexer feedback | Moderate CI cost, requires webhook integration |
| Hybrid SSR/SSG site | Post-deploy verification + runtime health checks + Lighthouse on PR | Mixed rendering requires both build and runtime validation | Higher CI cost, requires infrastructure monitoring |
| Internal documentation site | Sitemap check only | Crawlers are primary consumers; performance/indexing less critical | Minimal cost, simplified pipeline |

Configuration Template

GitHub Actions: Post-Deploy Verification

name: Post-Deploy Verification
on:
  workflow_dispatch:
  push:
    branches: [main]

jobs:
  verify-sitemap:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci
      - run: npx tsx scripts/validate-sitemap.ts

  submit-indexnow:
    needs: verify-sitemap
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci
      - run: npx tsx scripts/submit-indexnow.ts

  lighthouse-trend:
    if: github.event_name == 'schedule'
    runs-on: ubuntu-latest
    strategy:
      matrix:
        site:
          - { domain: aiappdex.com, sample: /models/timm-vit-base-patch16-clip-224-openai/ }
          - { domain: findindiegame.com, sample: /games/dredge-1562430/ }
          - { domain: ossfind.com, sample: /alternatives/ghost/ }
    steps:
      - uses: treosh/lighthouse-ci-action@v11
        with:
          urls: |
            https://${{ matrix.site.domain }}
            https://${{ matrix.site.domain }}${{ matrix.site.sample }}
          uploadArtifacts: true
          temporaryPublicStorage: true

Cron Schedule: Add the schedule trigger to the workflow file (GitHub Actions defines cron schedules in the workflow itself, not in repository settings; the lighthouse-trend job's `if: github.event_name == 'schedule'` condition depends on it):

on:
  schedule:
    - cron: '30 4 * * 1' # Monday 04:30 UTC

Quick Start Guide

  1. Initialize verification scripts: Create scripts/validate-sitemap.ts and scripts/submit-indexnow.ts using the templates above. Install undici and tsx as dev dependencies.
  2. Configure site baselines: Update the targets array in the sitemap validator with your domains and minimum URL thresholds based on current content volume.
  3. Set up IndexNow keys: Generate verification keys for each domain, place <key>.txt in the public/ directory, and update the SITE_KEYS mapping in the submission script.
  4. Deploy and trigger: Push to main, wait 3–5 minutes for CDN propagation, then manually trigger the Post-Deploy Verification workflow via GitHub Actions UI or API.
  5. Schedule Lighthouse monitoring: Add the cron schedule to your workflow file. Verify that artifacts upload successfully and review the first trend report after one week.