DevOps · 2026-05-14 · 79 min read

We Rebuilt the Kubernetes Dashboard From Scratch — Here's What We Added

By Gregory Griffin

Architecting a Zero-Dependency Kubernetes Dashboard: Security, Observability, and Go-First Design

Current Situation Analysis

The archival of the official Kubernetes Dashboard in January 2026 marked a significant inflection point for cluster observability. The legacy codebase, built on an aging Angular framework, had become unmaintainable, and the toolchain fragility forced upstream maintainers to discontinue development. This left a void in the ecosystem, particularly for teams requiring a hardened, in-cluster solution rather than a desktop client or a multi-cluster management platform.

The industry faces a persistent tension between feature richness and security auditability. Many modern dashboards rely on dynamic plugin runtimes, such as JavaScript execution environments, to extend functionality. While this enables rapid feature iteration, it introduces a substantial attack surface. Dynamic code loading bypasses static analysis, complicates supply chain security, and makes the dashboard binary non-deterministic. Furthermore, legacy dashboards were often designed around kubectl proxy or NodePort services, making integration with modern in-cluster gateways and load balancers like MetalLB cumbersome or impossible without significant workarounds.

Data from the archival analysis indicates that the original dashboard's Go backend contracts were stable, but the frontend layer contributed disproportionately to maintenance overhead and security vulnerabilities. Teams operating in regulated environments or home labs with strict security postures found themselves choosing between insecure, feature-rich tools and secure, minimalistic CLI workflows. The gap demanded a dashboard that could run as a hardened pod, integrate seamlessly with service meshes and gateways, and provide advanced security features without requiring external operators or CRD controllers.

WOW Moment: Key Findings

A comparative analysis of dashboard architectures reveals that eliminating the plugin runtime and adopting a Go-first backend yields significant improvements in security posture and operational simplicity. The following table contrasts the legacy approach, dynamic plugin models, and the zero-dependency architecture described in this guide.

| Dimension | Legacy Dashboard | Dynamic Plugin Model | Zero-Dependency Go Dashboard |
| --- | --- | --- | --- |
| Runtime Environment | Angular / Go | React / Go + JS VM | React 19 / Go (Static) |
| Plugin Architecture | None | Dynamic JS Loading | None (Native Go Modules) |
| Security Surface | High (NodePort/Proxy) | Medium (Plugin Runtime) | Low (Hardened Pod) |
| Auditability | Medium | Low (Dynamic Code) | High (Static Binary) |
| Gateway Integration | Limited | Variable | Native (Kong/MetalLB) |
| External Dependencies | Moderate | High (Plugin Registry) | None (K8s API Only) |
| Feature Density | Standard | High | High (Native Features) |

This finding matters because it demonstrates that advanced features—such as policy auditing, certificate tracking, and RBAC visualization—can be implemented natively within the dashboard backend. This eliminates the need for third-party operators like Polaris, Goldilocks, or cert-manager, reducing cluster footprint and version coupling. The result is a deterministic, auditable binary that integrates cleanly with modern infrastructure components like Kong and MetalLB.

Core Solution

The architecture centers on a decoupled frontend and backend, both designed for security and performance. The frontend leverages React 19 with Material UI v6 and Vite for a responsive, type-safe user interface. The backend consists of four Go modules: API, Auth, Metrics Scraper, and Common. Kong 3.6 in DBless mode serves as the in-cluster API gateway, handling authentication and routing.

Architecture Decisions

  1. Go-First Backend: All business logic resides in Go. This ensures that features like policy auditing and certificate parsing are compiled into the binary, eliminating runtime dependencies. It also allows direct interaction with the Kubernetes API without intermediate operators.
  2. Kong DBless Gateway: Kong is configured in DBless mode, meaning configuration is loaded from a static file or ConfigMap. This aligns with GitOps workflows and ensures the gateway state is reproducible. Kong handles bearer token validation and CSRF protection, offloading these concerns from the application.
  3. RBAC-Aware UI: The frontend queries the Kubernetes API using SelfSubjectAccessReview to determine user permissions before rendering action buttons. This prevents unauthorized actions and improves UX by greying out inaccessible controls.
  4. Hardened Deployment: The dashboard runs as a pod with readOnlyRootFilesystem, dropped capabilities, and seccompProfile: RuntimeDefault. Network policies enforce default-deny traffic, with explicit allow rules for API server communication.
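
The hardening settings in decision 4 map onto pod-spec fields roughly as follows. This is an illustrative fragment, not the project's actual manifest; the pod name and image are placeholders:

```yaml
# Illustrative hardened pod spec; names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: dashboard-api
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault        # syscall filtering via the default profile
  containers:
    - name: api
      image: dashboard-api:latest
      securityContext:
        readOnlyRootFilesystem: true
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]           # no Linux capabilities by default
```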

Implementation Details

1. Go Backend Module Structure

The backend is organized into modular packages to promote separation of concerns and testability.

// pkg/api/server.go
package api

import (
	"net/http"
	"github.com/your-org/dashboard/pkg/auth"
	"github.com/your-org/dashboard/pkg/scraper"
)

type Server struct {
	AuthHandler   *auth.Handler
	MetricsClient *scraper.Client
}

func NewServer(authHandler *auth.Handler, metricsClient *scraper.Client) *Server {
	return &Server{
		AuthHandler:   authHandler,
		MetricsClient: metricsClient,
	}
}

func (s *Server) RegisterRoutes(mux *http.ServeMux) {
	mux.HandleFunc("/api/v1/pods", s.AuthHandler.RequireAuth(s.handlePods))
	mux.HandleFunc("/api/v1/audit/policy", s.AuthHandler.RequireAuth(s.handlePolicyAudit))
}
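
The RequireAuth wrapper referenced above is not shown in the excerpt. The following is a minimal, self-contained sketch of the same middleware pattern, assuming only that a Bearer token must be present; real validation (e.g. a TokenReview call against the API server) is elided:

```go
package main

import (
	"net/http"
	"strings"
)

// bearerToken extracts the token from an Authorization header value,
// returning ok=false for a missing or malformed header.
func bearerToken(authz string) (token string, ok bool) {
	const prefix = "Bearer "
	if !strings.HasPrefix(authz, prefix) || len(authz) == len(prefix) {
		return "", false
	}
	return authz[len(prefix):], true
}

// requireAuth rejects requests without a Bearer token before invoking
// the wrapped handler.
func requireAuth(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if _, ok := bearerToken(r.Header.Get("Authorization")); !ok {
			http.Error(w, "missing bearer token", http.StatusUnauthorized)
			return
		}
		// Validate the token against the Kubernetes API here (not shown).
		next(w, r)
	}
}
```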

2. RBAC-Aware React Hook

The frontend uses a custom hook to check permissions dynamically. This hook calls the SelfSubjectAccessReview API and caches the result to minimize API server load.

// src/hooks/useK8sPermission.ts
import { useState, useEffect } from 'react';
import { k8sApi } from '../api/client';

interface PermissionCheck {
  resource: string;
  verb: string;
  namespace?: string;
}

const permissionCache = new Map<string, boolean>();

export function useK8sPermission({ resource, verb, namespace }: PermissionCheck) {
  const [allowed, setAllowed] = useState<boolean | null>(null);

  useEffect(() => {
    // Cache by verb/resource/namespace so repeated renders do not
    // re-query the API server.
    const cacheKey = `${verb}:${resource}:${namespace ?? ''}`;
    const cached = permissionCache.get(cacheKey);
    if (cached !== undefined) {
      setAllowed(cached);
      return;
    }

    const checkPermission = async () => {
      try {
        const response = await k8sApi.post('/apis/authorization.k8s.io/v1/selfsubjectaccessreviews', {
          // Direct posts to the API server require kind and apiVersion.
          apiVersion: 'authorization.k8s.io/v1',
          kind: 'SelfSubjectAccessReview',
          spec: {
            resourceAttributes: {
              resource,
              verb,
              namespace: namespace || undefined,
            },
          },
        });
        permissionCache.set(cacheKey, response.data.status.allowed);
        setAllowed(response.data.status.allowed);
      } catch {
        setAllowed(false);
      }
    };

    checkPermission();
  }, [resource, verb, namespace]);

  return allowed;
}

3. Policy Audit Implementation

The policy audit module evaluates pod specifications against a set of security checks without requiring an external Polaris deployment.

// pkg/audit/policy.go
package audit

import (
	"fmt"
	corev1 "k8s.io/api/core/v1"
)

type PolicyAuditor struct {
	Checks []Check
}

type Check struct {
	Name     string
	Evaluate func(pod *corev1.Pod) Result
}

type Result struct {
	Status   string // "Pass", "Warning", "Danger"
	Message  string
}

func (a *PolicyAuditor) AuditPod(pod *corev1.Pod) []Result {
	var results []Result
	for _, check := range a.Checks {
		results = append(results, check.Evaluate(pod))
	}
	return results
}

// Example check: Privilege Escalation
func CheckPrivilegeEscalation(pod *corev1.Pod) Result {
	// Scan init containers as well as regular containers; both can escalate.
	containers := append(append([]corev1.Container{}, pod.Spec.InitContainers...), pod.Spec.Containers...)
	for _, container := range containers {
		sc := container.SecurityContext
		if sc != nil && sc.AllowPrivilegeEscalation != nil && *sc.AllowPrivilegeEscalation {
			return Result{
				Status:  "Danger",
				Message: fmt.Sprintf("Container %s allows privilege escalation", container.Name),
			}
		}
	}
	return Result{Status: "Pass", Message: "No privilege escalation detected"}
}

4. Certificate Tracker Logic

The certificate tracker parses TLS secrets using Go's standard library, providing expiry tracking without cert-manager dependencies.

// pkg/certs/tracker.go
package certs

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"time"
)

type CertInfo struct {
	CommonName    string
	SANs          []string
	Issuer        string
	Expiry        time.Time
	DaysRemaining int
	Status        string // "Valid", "Warning", "Critical", "Expired"
}

func ParseTLSSecret(data []byte) (*CertInfo, error) {
	block, _ := pem.Decode(data)
	if block == nil {
		return nil, fmt.Errorf("failed to decode PEM block")
	}

	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return nil, err
	}

	days := int(time.Until(cert.NotAfter).Hours() / 24)

	// Order matters: an expired certificate also satisfies days <= 7,
	// so the expiry check must come first.
	status := "Valid"
	switch {
	case days < 0:
		status = "Expired"
	case days <= 7:
		status = "Critical"
	case days <= 30:
		status = "Warning"
	}

	return &CertInfo{
		CommonName:    cert.Subject.CommonName,
		SANs:          cert.DNSNames,
		Issuer:        cert.Issuer.CommonName,
		Expiry:        cert.NotAfter,
		DaysRemaining: days,
		Status:        status,
	}, nil
}

Pitfall Guide

  1. Dynamic Plugin Injection Risks

    • Explanation: Allowing dynamic JavaScript plugins introduces XSS vulnerabilities and supply chain risks. Malicious plugins can exfiltrate tokens or execute arbitrary code within the dashboard context.
    • Fix: Adopt a static build approach. Implement all features natively in the backend. If extensibility is required, use a compiled plugin system with strict sandboxing, though this adds complexity.
  2. Token Caching Leaks

    • Explanation: Caching API responses by URL alone can lead to data leakage between users. If User A requests a resource and User B requests the same URL, User B might receive User A's cached data.
    • Fix: Key the cache by SHA256(token) + URL. This ensures that cached data is isolated per user token. Additionally, never log or cache tokens in plain text.
  3. RBAC Over-Exposure

    • Explanation: Rendering action buttons without checking permissions leads to a poor UX and potential security confusion. Users may attempt actions they cannot perform, resulting in API errors.
    • Fix: Use SelfSubjectAccessReview to pre-check permissions. Grey out or hide buttons based on the result. This provides immediate feedback and reduces unnecessary API calls.
  4. Certificate Parsing Failures

    • Explanation: TLS secrets may contain multiple certificates or legacy formats. Naive parsing can fail or miss SANs, leading to incorrect expiry warnings.
    • Fix: Use crypto/x509 robustly. Handle PEM blocks with multiple certificates. Validate SANs and Common Name fields. Implement fallback logic for legacy certificate formats.
  5. Gateway Misconfiguration

    • Explanation: Kong in DBless mode requires precise configuration. Missing plugins or incorrect routes can break authentication or routing.
    • Fix: Use a declarative configuration file. Validate it before deployment with kong config parse kong.yml. Ensure the JWT plugin is correctly configured to validate Kubernetes service account tokens.
  6. Resource Exhaustion from Metrics Scraping

    • Explanation: Aggressive metrics scraping can overwhelm the Kubernetes API server or the metrics server, leading to rate limiting and degraded performance.
    • Fix: Implement rate limiting in the metrics scraper. Use caching with appropriate TTLs. Offload historical data to VictoriaMetrics to reduce real-time API load.
  7. CSRF Vulnerabilities in SPAs

    • Explanation: Single-page applications making mutating requests to the Kubernetes API are susceptible to CSRF attacks if tokens are stored in cookies.
    • Fix: Use bearer tokens stored in memory or secure storage. Implement CSRF protection on all mutating endpoints. Kong can handle CSRF token validation via plugins.
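
The cache-keying fix in pitfall 2 takes only a few lines of Go: hashing the token keeps credentials out of cache keys while still isolating entries per user. CacheKey is an illustrative helper, not code from the project:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
)

// CacheKey derives a per-user cache key from a bearer token and request
// URL. Only the SHA-256 digest of the token appears in the key, so cache
// keys can be logged or inspected without leaking credentials.
func CacheKey(token, url string) string {
	sum := sha256.Sum256([]byte(token))
	return hex.EncodeToString(sum[:]) + ":" + url
}
```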

Production Bundle

Action Checklist

  • Deploy Hardened Manifests: Apply raw Kubernetes manifests with readOnlyRootFilesystem, drop: [ALL] capabilities, and seccompProfile: RuntimeDefault.
  • Configure NetworkPolicy: Set default-deny ingress/egress. Allow traffic only to the Kubernetes API server and necessary internal services.
  • Setup Kong Gateway: Deploy Kong 3.6 in DBless mode. Configure routes and authentication plugins. Verify MetalLB integration.
  • Create ServiceAccount: Generate a dedicated ServiceAccount with least-privilege RBAC rules. Bind only required permissions.
  • Enable CSRF Protection: Configure Kong to validate CSRF tokens on mutating endpoints. Ensure frontend sends tokens in headers.
  • Implement Cache Keying: Verify that API response cache keys include SHA256(token). Test with multiple users to ensure isolation.
  • Deploy VictoriaMetrics (Optional): If historical metrics are needed, deploy a VictoriaMetrics StatefulSet and configure the dashboard to push metrics via remote write.
  • Audit RBAC UI: Test SelfSubjectAccessReview integration. Verify that buttons are correctly greyed out for restricted users.

Decision Matrix

| Scenario | Recommended Approach | Why | Cost Impact |
| --- | --- | --- | --- |
| Single Cluster Security | Zero-Dependency Dashboard | Hardened, low footprint, native features. | Low |
| Multi-Cluster Operations | Rancher or Headlamp | Federation and cross-cluster management required. | High |
| Desktop Development | Lens | Optimized UX for local development. | Freemium |
| Regulated Environments | Zero-Dependency Dashboard | Auditable, no plugin runtime, static binary. | Low |
| High Availability | Zero-Dependency + Kong | Scalable gateway, stateless backend pods. | Medium |

Configuration Template

Kong DBless Configuration (kong.yml)

_format_version: "3.0" # Kong 3.x requires the 3.0 declarative format
services:
  - name: dashboard-api
    url: http://dashboard-api:8080
    routes:
      - name: api-route
        paths:
          - /api/
        strip_path: false
    plugins:
      - name: jwt
        config:
          claims_to_verify:
            - exp
          key_claim_name: kid
          secret_is_base64: true
      - name: cors
        config:
          origins:
            - "*" # wildcard for bring-up only; pin to the dashboard origin in production
          methods:
            - GET
            - POST
            - PUT
            - DELETE
          headers:
            - Authorization
            - X-CSRF-Token

Go Config (config.yaml)

server:
  port: 8080
  read_timeout: 10s
  write_timeout: 10s

auth:
  token_validation: true
  csrf_protection: true

metrics:
  scraper_interval: 30s
  victoria_metrics_url: "" # Optional

security:
  policy_audit:
    enabled: true
    checks:
      - privilege_escalation
      - host_network
      - resource_limits

Quick Start Guide

  1. Clone and Build: Clone the repository and run make build to compile the Go backend and React frontend.
  2. Deploy Manifests: Apply the raw Kubernetes manifests using kubectl apply -f deploy/. This includes the dashboard pods, ServiceAccount, and Kong gateway.
  3. Access Dashboard: Retrieve the MetalLB IP for the Kong service. Open the dashboard URL in your browser.
  4. Authenticate: Use a Kubernetes ServiceAccount token to log in. The token can be obtained via kubectl create token <sa-name>.
  5. Verify Features: Check the Policy Audit, Certificate Tracker, and RBAC Viewer sections. Ensure all features are functioning without external dependencies.