Docker Is Not the Only Option: I Tested Podman, containerd, and nerdctl So You Don't Have To
Beyond the Daemon: Engineering a Secure, License-Neutral Container Runtime Strategy
Current Situation Analysis
The container runtime landscape has shifted from a convenience-driven monoculture to a strategic infrastructure decision. For years, Docker Engine and Docker Desktop operated as the default assumption across development and production pipelines. That assumption is no longer tenable for mid-to-large engineering organizations. Two converging pressures have forced a reevaluation: licensing thresholds that trigger commercial subscriptions for companies exceeding 250 employees or $10M in annual revenue, and the persistent security debt of running a privileged root daemon as the default execution model.
The licensing change was the catalyst, but the underlying technical reality is what demands attention. Docker's architecture relies on a long-running root daemon that manages namespaces, cgroups, and network interfaces. While convenient for local development, this model creates a direct privilege escalation path: a misconfigured volume mount, a vulnerable base image, or a container breakout exploit can compromise the host kernel. Security teams have flagged this boundary for years, but migration was historically deprioritized due to ecosystem lock-in, tooling dependencies, and the perceived complexity of retraining engineering workflows.
Data from real-world migrations shows that the friction isn't in core container operations. Pulling images, building artifacts, and running isolated processes work identically across modern runtimes. The actual migration cost lives in the edges: Compose specification drift, rootless UID/GID mapping behavior, BuildKit cache mount syntax differences, and CI/CD socket sharing patterns. Organizations that treat runtime migration as a simple binary swap consistently hit production failures. Those that approach it as a security and toolchain architecture decision achieve measurable reductions in attack surface and licensing overhead without sacrificing developer velocity.
The problem is overlooked because container runtimes are treated as implementation details rather than security boundaries. Engineering teams optimize for muscle memory and existing CI/CD templates, while security and platform teams absorb the long-term risk. A license-neutral, rootless-first strategy requires deliberate architectural choices, explicit network configuration, and updated operational runbooks. This article provides the technical blueprint for making that transition without disrupting delivery pipelines.
WOW Moment: Key Findings
Runtime selection is rarely about raw feature parity. It's about aligning security posture, platform constraints, and team operational maturity. The following comparison isolates the metrics that actually impact production reliability and migration feasibility.
| Approach | Security Model | CLI Parity | Compose Spec Coverage | Local K8s Capability | Platform Support | Licensing |
|---|---|---|---|---|---|---|
| Docker Desktop | Root daemon (VM-isolated on macOS) | Native | 100% (official plugin) | Single-node kubeadm cluster | macOS, Linux, Windows | Free <250 emp / <$10M; $21-$35/user/mo otherwise |
| Podman 4.x | Rootless by default, daemonless | High (alias-compatible) | ~90% (native + v2 delegation) | podman play kube (pod-level, not cluster) | macOS (QEMU VM), Linux, Windows (preview) | Apache 2.0 |
| containerd + nerdctl | Rootless optional, daemon-based | High (drop-in CLI) | ~80% (misses profiles/extends) | Requires k3s or external cluster | Linux, macOS (manual), Windows (WSL2) | Apache 2.0 |
| Lima | Rootless VM guest, daemonless host | Depends on guest tooling | Guest-dependent | No native cluster support | macOS only | Apache 2.0 |
The critical insight is that CLI compatibility masks deeper architectural differences. Podman's daemonless design eliminates the root daemon attack surface entirely, but requires explicit network and volume configuration. containerd + nerdctl aligns directly with Kubernetes production runtimes, making it ideal for teams already standardizing on CRI-compatible workflows. Lima solves the macOS virtualization gap without Docker's licensing overhead, but demands manual guest configuration. Docker Desktop remains the path of least resistance for small teams or organizations where the licensing threshold hasn't been crossed, but its root daemon and proprietary licensing make it a strategic liability for scaled engineering groups.
This finding matters because it shifts runtime selection from a "drop-in replacement" mindset to a platform architecture decision. Teams that recognize the 10% edge-case friction early can design migration runbooks, update CI/CD socket policies, and train developers on rootless volume semantics before hitting production blockers.
Core Solution
Migrating to a license-neutral runtime requires a phased approach that prioritizes security boundaries, network isolation, and CI/CD compatibility. The following implementation path assumes a microservices stack with multi-stage builds, persistent volumes, and local Kubernetes validation.
Step 1: Select the Runtime Based on Platform & Security Requirements
For Linux workstations and CI runners, Podman 4.x provides the smoothest transition due to its daemonless architecture and native rootless enforcement. For teams already standardizing on Kubernetes CRI workflows, containerd + nerdctl eliminates runtime translation layers. On macOS, Lima provides a lightweight QEMU-based VM that runs containerd natively, bypassing Docker Desktop's licensing and resource overhead.
Step 2: Configure Rootless Execution & Namespace Mapping
Rootless execution requires explicit UID/GID mapping to prevent permission drift during bind mounts. Podman handles this automatically via user namespaces, but containerd requires manual configuration.
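To see the mapping a rootless runtime will use on a given host, you can inspect the subordinate ID ranges directly. This is a sketch assuming a Linux host with shadow-utils-style `/etc/subuid` entries (`user:start:count`):

```shell
# Show the host UID and the subordinate UID range a rootless runtime
# (Podman, rootless containerd) will map container UIDs onto.
user="$(id -un)"
host_uid="$(id -u)"
# Subordinate range assigned to this user; empty if none is configured.
subuid_range="$(grep "^${user}:" /etc/subuid 2>/dev/null || true)"
echo "host uid: ${host_uid}"
echo "subuid range: ${subuid_range:-<none configured>}"
# Inside the rootless namespace, container UID 0 maps to ${host_uid}:
#   podman unshare cat /proc/self/uid_map
```

If the subuid range is missing, bind mounts for non-root container users will fail before containerd configuration even matters.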
```shell
#!/usr/bin/env bash
# containerd rootless bootstrap script
set -euo pipefail

CONTAINERD_HOME="${HOME}/.local/share/containerd"
CONTAINERD_SOCKET="${CONTAINERD_HOME}/containerd.sock"

mkdir -p "${CONTAINERD_HOME}/root" "${CONTAINERD_HOME}/run" "${CONTAINERD_HOME}/state"

# Generate rootless containerd config
cat > "${CONTAINERD_HOME}/config.toml" <<EOF
version = 2
root = "${CONTAINERD_HOME}/root"
state = "${CONTAINERD_HOME}/state"

[grpc]
  address = "${CONTAINERD_SOCKET}"

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
EOF

# Start containerd as the invoking user. For fully isolated rootless
# networking, prefer containerd-rootless.sh (shipped with nerdctl's
# full bundle), which wraps containerd in rootlesskit + slirp4netns.
containerd --config "${CONTAINERD_HOME}/config.toml" &

export CONTAINERD_ADDRESS="${CONTAINERD_SOCKET}"
echo "containerd listening on ${CONTAINERD_SOCKET}"
```
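Once the bootstrap script is running, a quick smoke test confirms the socket answers. This assumes nerdctl is installed and the socket path matches the script above:

```shell
# Verify the rootless containerd instance is reachable via nerdctl.
CONTAINERD_SOCKET="${HOME}/.local/share/containerd/containerd.sock"
if command -v nerdctl >/dev/null 2>&1 && [ -S "${CONTAINERD_SOCKET}" ]; then
  # `nerdctl --address` targets a specific containerd socket.
  nerdctl --address "unix://${CONTAINERD_SOCKET}" info
else
  echo "containerd not ready at ${CONTAINERD_SOCKET}; run the bootstrap script first"
fi
```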
Step 3: Align Compose & Build Workflows
Compose specification drift is the most common migration failure point. Instead of relying on implicit network bridges, explicitly define pod-level networking and volume ownership.
```yaml
# compose.runtime.yml
version: "3.9"

services:
  api-server:
    image: internal-registry/api-service:latest
    build:
      context: ./src/api
      dockerfile: Dockerfile.prod
      args:
        NODE_ENV: production
    volumes:
      - app-data:/var/lib/app/data:Z
    networks:
      - backend
    environment:
      - DB_HOST=postgres-primary
      - CONTAINER_RUNTIME=nerdctl

  postgres-primary:
    image: postgres:16-alpine
    volumes:
      - pg-storage:/var/lib/postgresql/data
    networks:
      - backend
    environment:
      - POSTGRES_USER=app_user
      - POSTGRES_DB=app_production

volumes:
  app-data:
  pg-storage:

networks:
  backend:
    driver: bridge
```
Step 4: Validate CI/CD Socket & Namespace Isolation
Shared Docker sockets in CI environments create privilege escalation risks. Replace socket mounts with runtime-specific namespace isolation and image signing verification.
```typescript
// runtime-validator.ts
import { execSync } from 'child_process';

interface RuntimeConfig {
  binary: string;
  socketPath: string;
  namespace: string;
  requiresRootless: boolean;
}

const RUNTIMES: Record<string, RuntimeConfig> = {
  podman: {
    binary: 'podman',
    socketPath: `${process.env.HOME}/.local/share/containers/podman/machine/qemu/podman.sock`,
    namespace: 'default',
    requiresRootless: true,
  },
  nerdctl: {
    binary: 'nerdctl',
    socketPath: '/run/containerd/containerd.sock',
    namespace: 'k8s.io',
    requiresRootless: false,
  },
};

function validateRuntime(runtime: keyof typeof RUNTIMES): boolean {
  const config = RUNTIMES[runtime];
  if (!config) throw new Error(`Unknown runtime: ${runtime}`);

  // `which` exits non-zero when the binary is missing, which makes
  // execSync throw -- catch that rather than testing an empty string.
  try {
    execSync(`which ${config.binary}`, { stdio: 'pipe' });
  } catch {
    console.error(`❌ ${config.binary} not found in PATH`);
    return false;
  }

  if (config.requiresRootless) {
    const rootless = execSync(
      `${config.binary} info --format '{{.Host.Security.Rootless}}'`,
      { stdio: 'pipe' },
    ).toString().trim();
    if (rootless !== 'true') {
      console.error(`⚠️ ${runtime} is not running in rootless mode`);
      return false;
    }
  }

  console.log(`✅ ${runtime} validated successfully`);
  return true;
}

export { validateRuntime, RUNTIMES };
```
Architecture Decisions & Rationale
- Daemonless vs Daemon-based: Podman's fork-exec model eliminates the root daemon, reducing the attack surface to the container process itself. containerd remains daemon-based but aligns with Kubernetes CRI, making it preferable for teams standardizing on production-grade orchestration.
- Explicit Network Definitions: Implicit bridge networks cause service discovery failures during migration. Defining networks and pod-level DNS resolution prevents silent routing breaks.
- Volume SELinux Relabeling: The `:Z` flag ensures host directories are correctly labeled for container access under rootless execution, preventing write failures in Fedora/RHEL environments.
- Namespace Isolation in CI: Sharing the host socket in CI runners violates least-privilege principles. Runtime-specific namespaces and image signing (cosign) provide verifiable, isolated build environments.
Pitfall Guide
1. The 1:1 CLI Mirage
Explanation: Assuming `alias docker=podman` or `alias docker=nerdctl` covers all workflows leads to silent failures in advanced build flags, multi-platform emulation, and Compose spec extensions.
Fix: Audit CI/CD pipelines for `--build-arg`, `--platform`, and Compose `profiles`/`extends` usage. Replace with runtime-specific equivalents or abstract build steps through Makefiles or Taskfiles.
2. Rootless UID/GID Mapping Blind Spots
Explanation: Containers running as UID 0 inside a rootless namespace map to the host user's UID. Bind mounts owned by root on the host will fail with "permission denied" errors.
Fix: Use `:Z` for SELinux relabeling, or explicitly set volume ownership via `chown` in the container entrypoint. Validate mount permissions with `podman unshare cat /proc/self/uid_map`.
3. Compose Network Isolation Assumptions
Explanation: Docker Compose automatically creates a shared bridge network with DNS resolution by service name. Podman Compose and nerdctl compose may isolate services unless explicitly configured.
Fix: Define networks explicitly in the Compose file. Use `podman network create` or `nerdctl network create` before deployment. Verify DNS resolution with `nslookup` inside containers.
4. BuildKit Emulation & Cache Mount Drift
Explanation: `--mount=type=cache` and multi-architecture builds behave differently across runtimes. Podman delegates to Buildah, which may not support all BuildKit cache syntax in older versions.
Fix: Pin Podman to 4.2+. Use `podman build --cache-from` for explicit cache management. For multi-platform builds, use `podman build --platform linux/amd64,linux/arm64` and verify manifest lists with `podman manifest inspect`.
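A minimal sketch of the multi-platform flow, assuming Podman 4.2+ with qemu-user-static installed for cross-arch emulation; the image name is a placeholder:

```shell
IMAGE="internal-registry/api-service:multiarch"  # placeholder name

# Guarded so the sketch is a no-op without podman and a build context.
if command -v podman >/dev/null 2>&1 && [ -f Containerfile ]; then
  # --manifest collects the per-arch images into one manifest list.
  podman build --platform linux/amd64,linux/arm64 --manifest "${IMAGE}" .
  # Confirm both architectures appear in the manifest list.
  podman manifest inspect "${IMAGE}"
else
  echo "skipping: podman and a Containerfile are required"
fi
```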
5. Socket Path & Tooling Integration Gaps
Explanation: Tools like Testcontainers, VS Code Dev Containers, and CI runners expect the default Docker socket. Runtime-specific socket paths cause detection failures.
Fix: Export `CONTAINER_HOST` or `DOCKER_HOST` to the runtime socket path. Update tooling configuration files to reference the correct socket. Validate with `curl -s --unix-socket "${CONTAINER_HOST#unix://}" http://localhost/_ping` (curl takes the bare socket path, without the `unix://` scheme).
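For Podman, the systemd user unit shipped with Podman 4.x exposes a Docker-compatible API socket under `$XDG_RUNTIME_DIR`. A sketch, assuming that default socket location:

```shell
# Docker-compatible API socket for legacy tooling (Testcontainers, etc.).
sock="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/podman/podman.sock"
export DOCKER_HOST="unix://${sock}"

# Enable the API socket; no-op where systemd user services are unavailable.
systemctl --user enable --now podman.socket 2>/dev/null || true

# Smoke-test the Docker-compatible endpoint (expects the literal reply "OK"):
#   curl -s --unix-socket "${sock}" http://localhost/_ping
echo "DOCKER_HOST=${DOCKER_HOST}"
```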
6. Local K8s Cluster vs Pod Play Confusion
Explanation: `podman play kube` executes Kubernetes YAML as isolated pods, not a full cluster with an API server, RBAC, or CRDs. Teams expecting `kubectl` cluster behavior will encounter missing resources.
Fix: Use k3s or kind for local Kubernetes clusters. Configure nerdctl to point at k3s's containerd socket: `export CONTAINERD_ADDRESS=/run/k3s/containerd/containerd.sock`. Validate with `kubectl get nodes`.
7. CI/CD Privilege Escalation via Shared Sockets
Explanation: Mounting the host container socket in CI runners grants build jobs full host control, violating security compliance and enabling supply chain attacks.
Fix: Use rootless CI runners with runtime-specific sockets. Implement image signing with cosign and verify signatures in pipeline stages. Isolate build environments using bubblewrap or firejail.
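A key-pair-based signing stage might look like the following sketch; `cosign.key`/`cosign.pub` are assumed to come from a prior `cosign generate-key-pair` stored in CI secrets, and the image name is a placeholder:

```shell
IMAGE="internal-registry/api-service:latest"  # placeholder

# Guarded so the sketch is a no-op without cosign and key material.
if command -v cosign >/dev/null 2>&1 && [ -f cosign.key ]; then
  # Sign the pushed image and attach the signature to the registry.
  cosign sign --key cosign.key "${IMAGE}"
  # Verification belongs in the deploy stage; it fails the pipeline
  # if the signature is missing or does not match the public key.
  cosign verify --key cosign.pub "${IMAGE}"
else
  echo "cosign or key material missing; skipping signing stage"
fi
```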
Production Bundle
Action Checklist
- Audit current Docker usage against licensing thresholds (>250 employees or >$10M revenue)
- Identify rootless-compatible workloads and map UID/GID requirements for bind mounts
- Replace implicit Compose networks with explicit bridge or macvlan definitions
- Update CI/CD pipelines to use runtime-specific sockets instead of host Docker socket
- Validate BuildKit cache mounts and multi-platform builds against target runtime
- Implement image signing and verification in build stages
- Train engineering teams on rootless volume semantics and namespace isolation
- Deploy runtime health checks and socket validation in CI pre-flight stages
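The last checklist item can be sketched as a CI pre-flight script; the `RUNTIME` and `SOCKET` defaults below assume a rootless Podman runner and should be overridden per runner image:

```shell
# CI pre-flight: fail the pipeline early if the runtime is missing,
# its socket is absent, or it is not running rootless.
RUNTIME="${RUNTIME:-podman}"
SOCKET="${SOCKET:-${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/podman/podman.sock}"

preflight() {
  if ! command -v "${RUNTIME}" >/dev/null 2>&1; then
    echo "preflight: ${RUNTIME} not on PATH" >&2; return 1
  fi
  if [ ! -S "${SOCKET}" ]; then
    echo "preflight: socket ${SOCKET} missing" >&2; return 1
  fi
  rootless="$("${RUNTIME}" info --format '{{.Host.Security.Rootless}}' 2>/dev/null || true)"
  if [ "${rootless}" != "true" ]; then
    echo "preflight: ${RUNTIME} is not rootless" >&2; return 1
  fi
  echo "preflight: OK"
}

# In the pipeline's pre-flight stage:
#   preflight || exit 1
```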
Decision Matrix
| Scenario | Recommended Approach | Why | Cost Impact |
|---|---|---|---|
| Small team (<250 emp, <$10M rev), macOS-heavy | Docker Desktop | Lowest migration friction, native GUI, full Compose/K8s support | Free under threshold; scales poorly |
| Linux workstations, security-compliant, rootless required | Podman 4.x | Daemonless, rootless by default, high CLI parity | Zero licensing cost; minimal training overhead |
| Kubernetes-aligned stack, CRI standardization | containerd + nerdctl | Production runtime parity, namespace isolation, k3s integration | Zero licensing cost; requires CNI/network config |
| macOS developers, Docker licensing triggered | Lima + containerd | Lightweight VM, bypasses Docker Desktop, free | Zero licensing cost; manual guest setup |
| CI/CD runners, supply chain security focus | Rootless Podman + cosign | Isolated builds, verifiable artifacts, no host socket sharing | Zero licensing cost; pipeline refactoring required |
Configuration Template
```toml
# containerd-rootless-config.toml
# Note: containerd does not expand shell variables in its config file;
# replace ${HOME} with the literal absolute path before use.
version = 2
root = "${HOME}/.local/share/containerd/root"
state = "${HOME}/.local/share/containerd/state"

[grpc]
  address = "${HOME}/.local/share/containerd/containerd.sock"

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  stream_server_address = "127.0.0.1"
  stream_server_port = "0"

[plugins."io.containerd.snapshotter.v1.devmapper"]
  pool_name = "containerd-pool"
  root_path = "${HOME}/.local/share/containerd/devmapper"
  base_image_size = "10GB"
  discard_blocks = true
```
```shell
#!/usr/bin/env bash
# runtime-bootstrap.sh
set -euo pipefail

RUNTIME="${1:-podman}"
SOCKET_DIR="${HOME}/.local/share/containers/runtime"
mkdir -p "${SOCKET_DIR}"

case "${RUNTIME}" in
  podman)
    podman machine init --cpus 4 --memory 8192 --disk-size 50
    podman machine start
    # Adjust to the socket path reported by `podman machine start`.
    export CONTAINER_HOST="unix://${SOCKET_DIR}/podman.sock"
    ;;
  nerdctl)
    containerd --config "${HOME}/.config/containerd/config.toml" &
    export CONTAINERD_ADDRESS="${SOCKET_DIR}/containerd.sock"
    ;;
  *)
    echo "Unsupported runtime: ${RUNTIME}" >&2
    exit 1
    ;;
esac

echo "✅ ${RUNTIME} initialized. Socket: ${CONTAINER_HOST:-${CONTAINERD_ADDRESS}}"
```
Quick Start Guide
- Install the target runtime: `brew install podman` (macOS) or `sudo dnf install podman` (RHEL/Fedora). For containerd, use your distribution's package manager or install via the containerd.io repository.
- Initialize the execution environment: Run `podman machine init && podman machine start` on macOS, or execute the `runtime-bootstrap.sh` script for containerd/nerdctl. Export the appropriate socket environment variable.
- Validate rootless execution: Run `podman info --format '{{.Host.Security.Rootless}}'` or `nerdctl info --format '{{.SecurityOptions}}'`. Confirm the output indicates rootless mode.
- Test a multi-service workload: Deploy the `compose.runtime.yml` template with `podman compose up -d` or `nerdctl compose -f compose.runtime.yml up -d`. Verify network isolation and volume permissions.
- Integrate into CI/CD: Replace Docker socket mounts with runtime-specific socket paths. Add `runtime-validator.ts` to pipeline pre-flight stages. Implement cosign signing for production images.
