DevOps · 2026-05-05 · 37 min read

Incident Report: How a Kubernetes 1.36 Node OOM and KEDA 2.15 Scaling Flaw Caused Our API to Crash

By Ankush Choudhary Johal

Current Situation Analysis

The production REST API experienced a complete 47-minute outage driven by two compounding infrastructure failures. The primary failure was a silent behavioral change in Kubernetes 1.36's kubelet memory accounting: page cache is now counted toward the "used memory" figure that feeds eviction thresholds. During a 2x traffic surge from a customer batch job, the inflated readings triggered premature OOM kills on 3 of 5 worker nodes, instantly removing 60% of cluster compute capacity.

Standard autoscaling and monitoring safeguards failed to contain the cascade because KEDA 2.15's ScaledObject controller ships a safety mechanism that halts reconciliation when more than 30% of cluster nodes report NotReady. With 60% of nodes down, KEDA paused scaling entirely despite Redis queue depth spiking to 10x normal levels. The remaining 2 nodes saturated, resulting in 100% traffic unavailability and 12,432 failed API requests. This failure mode exposes a critical architectural blind spot: relying on a single autoscaling controller without fallback reconciliation paths or node-health-aware scaling policies turns a standard safety threshold into a single point of failure.

Key Findings

Post-incident simulation and staging validation revealed the exact inflection points where traditional scaling logic breaks under compounded node failures. The table below compares the pre-incident configuration, manual intervention workarounds, and the implemented resilient architecture across critical operational metrics.

| Approach | Memory Accounting Accuracy | Scaling Reconciliation Latency | API Availability (2x Surge) | MTTR |
|---|---|---|---|---|
| Pre-Incident (K8s 1.36 + KEDA 2.15) | Low (page cache miscounted) | Infinite (blocked at >30% NotReady) | 0% | 47 min |
| Traditional Mitigation (Manual Scale) | Low | N/A | 85% (partial recovery) | 13 min |
| Post-Fix Architecture (K8s 1.36.1 + KEDA 2.15.1) | High (corrected page cache logic) | <5 s (continuous reconciliation) | 99.9% | <2 min |

  • Kubelet's page cache inclusion directly correlated with a 40% reduction in effective pod memory headroom during traffic spikes.
  • KEDA's NotReady node threshold acts as a hard circuit breaker; scaling targets remain frozen until cluster node health recovers above the 70% threshold.
  • Increasing node memory reserves from 15% to 20% and patching the KEDA reconciliation loop eliminates the cascade failure entirely under identical load profiles.

Core Solution

The resolution required a multi-layered approach addressing kernel-level memory accounting, controller reconciliation logic, and architectural fallback mechanisms.

1. Kubelet Memory Threshold Correction

Kubernetes 1.36.1 reverts the page cache inclusion in eviction calculations. We enforced this via node-level kubelet configuration and cluster-wide upgrade policies:

# kubelet-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# KubeletConfiguration has no fractional memory-reserve knob; systemReserved
# takes an absolute quantity. Size it to ~20% of node memory (e.g. 13Gi on a
# 64Gi worker), up from the previous ~15%, to buffer page cache volatility.
systemReserved:
  memory: "13Gi"
evictionHard:
  memory.available: "500Mi"
  nodefs.available: "10%"

2. KEDA Controller Reconciliation Patch

The KEDA 2.15.1 release resolves the NotReady node reconciliation block. For immediate mitigation, we deployed a custom build patching the controller's node health check:

// Patched reconciliation logic in KEDA scaledobject_controller.go.
// Original flaw: reconciliation paused whenever >30% of cluster nodes
// reported NotReady, freezing every scaling target mid-incident.
func (r *ScaledObjectReconciler) shouldPauseReconciliation() bool {
	// Fixed: never pause. Reconciliation continues against whatever nodes
	// remain; a degraded scaling decision beats a total scaling freeze.
	return false
}
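
Shipping the hotfix was a kustomize image override on the stock install. A minimal sketch, assuming the patched operator image lives in a hypothetical internal registry and that your kustomize version permits remote resources:

# kustomization.yaml -- overlays the stock KEDA 2.15.0 install with our build
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: keda
resources:
  - https://github.com/kedacore/keda/releases/download/v2.15.0/keda-2.15.0.yaml
images:
  - name: ghcr.io/kedacore/keda
    newName: registry.internal/keda-operator   # hypothetical internal registry
    newTag: 2.15.0-hotfix1                     # hypothetical patched tag

The override is temporary; the post-fix architecture runs stock KEDA 2.15.1.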

3. Architecture & Monitoring Adjustments

  • Fallback Scaling Strategy: Decoupled critical API deployments from exclusive KEDA dependency. Implemented a secondary HPA targeting CPU/memory with a hard replica floor (min: 6, max: 24) to guarantee baseline capacity during controller outages; a manifest sketch follows this list.
  • Node Health Alerting: Adjusted Prometheus alerting rules to trigger at 20% cluster capacity loss, providing a 10% buffer before KEDA's reconciliation pause threshold; the rule expression appears in the second sketch below.
  • KEDA Health Integration: Added custom metrics for keda_scaled_object_reconciliation_errors and keda_controller_paused to the existing monitoring stack, routing to PagerDuty with immediate runbook escalation.
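
A minimal sketch of the fallback HPA referenced above. The namespace, the Deployment name (api-server), and the utilization targets are placeholders, not our production values:

# fallback-hpa.yaml -- guarantees baseline capacity independent of KEDA
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-fallback
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-server        # placeholder deployment name
  minReplicas: 6            # the replica floor that holds during controller outages
  maxReplicas: 24
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # representative target, tune per workload
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 75   # representative target, tune per workload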
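
And a sketch of the corresponding Prometheus rules. Note the capacity-loss expression uses sum() over the status="false" series rather than count(): kube-state-metrics emits one 0/1 series per node per status, so a plain count() would tally every node regardless of health. Alert names and windows are our conventions, not anything mandated by the stack:

# capacity-alerts.yaml -- fires 10% before KEDA 2.15's pause threshold
groups:
  - name: cluster-capacity
    rules:
      - alert: NodeCapacityLossNearKedaPause
        expr: |
          sum(kube_node_status_condition{condition="Ready",status="false"})
            / count(kube_node_status_condition{condition="Ready",status="true"}) > 0.20
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: ">20% of nodes NotReady; KEDA pauses reconciliation at 30%"
      - alert: KedaReconciliationPaused
        expr: keda_controller_paused == 1   # custom metric from our patched build
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "KEDA controller reconciliation is paused"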

Pitfall Guide

  1. Ignoring Kernel Page Cache in Memory Limits: Assuming requests/limits perfectly map to actual node consumption is dangerous. Newer kubelet versions dynamically account for page cache, which can silently consume reserved memory during I/O-heavy workloads. Always validate memory accounting behavior in staging after minor version upgrades.
  2. Over-Reliance on Controller Safety Mechanisms: Autoscaling controllers often include circuit breakers (e.g., pausing reconciliation during high NotReady rates) to prevent cascading failures. These safety features become liability points when they block recovery. Never treat a single autoscaler as the sole scaling mechanism for stateless APIs.
  3. Missing Fallback Scaling Strategies: Relying exclusively on event-driven scalers (like KEDA) without a baseline HPA or static replica floor leaves clusters vulnerable during controller failures. Implement a dual-scaling architecture where HPA maintains minimum capacity while KEDA handles burst elasticity.
  4. Inadequate Node Health Alert Thresholds: Alerting at 30%+ node failure is too late for autoscaling systems that pause at that exact threshold. Set node health alerts at 20% to trigger manual intervention or fallback scaling before the controller enters a paused state.
  5. Skipping Staging Traffic Simulation for Minor Upgrades: Kubernetes minor releases frequently alter resource accounting, eviction policies, or API behavior. Always run production-replicated traffic surges in staging post-upgrade to catch silent behavioral changes before they hit production (a sample surge job is sketched after this list).
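
A minimal sketch of the surge job referenced in pitfall 5. The load-generator image, concurrency figure, and staging endpoint are illustrative stand-ins, not our production values:

# surge-test-job.yaml -- replays a ~2x traffic surge in staging after upgrades
apiVersion: batch/v1
kind: Job
metadata:
  name: post-upgrade-surge-test
  namespace: staging
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: load
          image: williamyeh/hey:latest   # illustrative load-generator image
          args:
            - "-z"
            - "10m"    # sustain the surge for 10 minutes
            - "-c"
            - "400"    # concurrency sized to ~2x normal peak (stand-in value)
            - "https://api.staging.internal/orders"   # hypothetical endpoint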

Deliverables

  • Kubernetes & KEDA Resilience Blueprint: A comprehensive architecture guide detailing dual-scaling patterns (HPA + KEDA), kubelet memory reservation strategies, and controller reconciliation fallback configurations. Includes node health-aware scaling decision trees.
  • Incident Response & Scaling Verification Checklist: A step-by-step operational checklist for validating autoscaler health during node failures, including manual scale bypass procedures, KEDA reconciliation status verification commands, and post-upgrade memory accounting validation steps.
  • Configuration Templates: Pre-validated kubelet-config.yaml, ScaledObject manifests with degradation-aware settings, and Prometheus alerting rules for node capacity loss and KEDA controller health monitoring. Ready for direct deployment into production clusters; an example ScaledObject is sketched below.
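
As a taste of the degradation-aware settings, a minimal ScaledObject sketch using KEDA's fallback block, which holds a known-safe replica count when the scaler itself fails repeatedly (a different failure mode from the node-health pause this incident hit). The Redis address, queue key, and Deployment name are placeholders:

# scaledobject.yaml -- burst elasticity with a degradation-aware fallback
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: api-queue-scaler
  namespace: production
spec:
  scaleTargetRef:
    name: api-worker                 # placeholder deployment name
  minReplicaCount: 6
  maxReplicaCount: 24
  fallback:
    failureThreshold: 3              # after 3 failed scaler polls...
    replicas: 12                     # ...hold a known-safe replica count
  triggers:
    - type: redis
      metadata:
        address: redis.production.svc:6379   # placeholder address
        listName: api_jobs                   # placeholder queue key
        listLength: "50"                     # target queue depth per replica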