Docker to Kubernetes: A Software Engineer's Production Journey
You've written a Dockerfile. It works on your machine. But "works on my machine" is not a deployment strategy. The gap between a running container and a production-grade deployment is vast — and that's where Kubernetes comes in.
This guide takes you from Docker basics to production Kubernetes, covering the real decisions and patterns you'll face along the way.
Part 1: Docker Done Right
The Dockerfile Everyone Writes First
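A typical first attempt looks something like this (a sketch — the base image, port, and entry point are assumptions):

```dockerfile
FROM node:20
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]
```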
This works. It's also terrible for production. Here's why:
- Image size: ~1.2GB. You're shipping the entire Node.js development environment.
- Build cache: Any file change invalidates `npm install`, re-downloading every dependency.
- Security: Running as root. Dev dependencies included. Node.js headers included.
The Multi-Stage Build
Multi-stage builds use multiple FROM statements. Each stage can copy artifacts from previous stages, discarding everything else.
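A sketch of the multi-stage version (the stage name, build output directory, and entry point are assumptions — adjust to your project):

```dockerfile
# Stage 1: install everything and build
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: ship only what production needs
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
# Run as the unprivileged user the node image ships with
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

Copying `package*.json` before the rest of the source means the dependency layer is rebuilt only when dependencies actually change.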
Result: ~150MB image (down from 1.2GB). Non-root user. No dev dependencies. Cached dependency layer.
Container Best Practices
1. One process per container. Don't run your app, database, and Redis in the same container. Each should be its own service.
2. Handle signals properly. Kubernetes sends SIGTERM before killing containers. Your app should handle graceful shutdown:
3. Use .dockerignore. Keep node_modules, .git, and .env out of your build context:
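A reasonable starting point for `.dockerignore`:

```
node_modules
.git
.env
dist
npm-debug.log
Dockerfile
```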
4. Pin your base image versions. node:20-alpine is good. node:latest is a ticking time bomb.
Part 2: Enter Kubernetes
Why Kubernetes?
Docker runs containers. Kubernetes orchestrates them. It handles:
- Scaling — Run 3 replicas? 30? Auto-scale based on CPU? Kubernetes does it.
- Self-healing — Container crashes? Kubernetes restarts it. Node dies? Kubernetes reschedules the pods.
- Rolling updates — Deploy new versions with zero downtime.
- Service discovery — Containers find each other by name, not IP.
- Config management — Secrets, environment variables, and config files managed declaratively.
The Core Concepts
- Pod — the smallest deployable unit: one or more containers sharing a network namespace and storage.
- Deployment — manages a set of identical pod replicas and drives rolling updates.
- Service — a stable name and virtual IP in front of an ever-changing set of pods.
- ConfigMap / Secret — configuration and sensitive values, injected into pods as env vars or files.
- Ingress — HTTP(S) routing from outside the cluster to services.
- Namespace — a logical partition for grouping and isolating resources.
Your First Deployment
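A minimal Deployment plus Service — the app name, image, and resource numbers here are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 3000
```

Apply it with `kubectl apply -f deployment.yaml`, then watch the pods come up with `kubectl get pods`.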
Part 3: Production Patterns
Health Checks
Kubernetes needs to know if your container is healthy. Two probes matter:
- Liveness probe — Is the process alive? If it fails, Kubernetes restarts the container.
- Readiness probe — Can the container serve traffic? If it fails, Kubernetes removes it from the service load balancer.
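Both probes are declared on the container. The paths and timings below are assumptions — point them at cheap endpoints your app actually exposes:

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /ready
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 5
```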
Rolling Deployments
By default, Kubernetes performs rolling updates — replacing pods one at a time:
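The pace is tunable in the Deployment spec; a cautious sketch:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1        # at most one extra pod during the rollout
    maxUnavailable: 0  # never drop below the desired replica count
```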
Deploy a new version:
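Assuming a Deployment named `myapp` (a name carried over from the earlier examples):

```shell
kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.1.0
kubectl rollout status deployment/myapp   # watch the rollout complete
kubectl rollout undo deployment/myapp     # roll back if something looks wrong
```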
Helm: Package Manager for Kubernetes
Raw YAML gets repetitive. Helm lets you template and package Kubernetes manifests:
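For example, a chart template can pull the replica count and image tag out of `values.yaml` (the chart layout here is an assumption):

```yaml
# templates/deployment.yaml (excerpt)
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

```yaml
# values.yaml
replicaCount: 3
image:
  repository: registry.example.com/myapp
  tag: "1.1.0"
```

Deploy or upgrade with `helm upgrade --install myapp ./chart`, overriding values per environment with `--set` or `-f values-prod.yaml`.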
Helm charts let you parameterize everything — replicas, resource limits, env vars, ingress rules — and version your infrastructure alongside your code.
Horizontal Pod Autoscaler (HPA)
Scale based on metrics:
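An `autoscaling/v2` manifest targeting the Deployment above (it requires metrics-server to be installed in the cluster; the replica bounds are assumptions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 30
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```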
When average CPU across pods exceeds 70%, Kubernetes adds more pods. When it drops, it scales down. Simple, effective.
Part 4: Observability
Running in production without observability is flying blind.
The Three Pillars
1. Logs — What happened?
Use structured logging (JSON) so log aggregators (Loki, ELK, Datadog) can parse and query them.
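A dependency-free sketch of the idea (real apps typically use a library such as pino or winston):

```javascript
// Emit one JSON object per line so aggregators can index every field.
function log(level, msg, fields = {}) {
  const entry = {
    ts: new Date().toISOString(),
    level,
    msg,
    ...fields,
  };
  console.log(JSON.stringify(entry));
  return entry; // returned for convenience in tests
}

log('info', 'order created', { orderId: 'abc123', userId: 42, durationMs: 18 });
```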
2. Metrics — How is it performing?
Expose Prometheus metrics from your app:
3. Traces — Where is the bottleneck?
Distributed tracing (OpenTelemetry, Jaeger) shows you the full journey of a request across services.
The CI/CD Pipeline
Putting it all together:
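The deploy stage might look like this (the registry, chart path, and variable names are assumptions — adapt to your CI system):

```shell
# Build and push an image tagged with the commit SHA
docker build -t registry.example.com/myapp:${GIT_SHA} .
docker push registry.example.com/myapp:${GIT_SHA}

# Deploy that exact tag via the Helm chart
helm upgrade --install myapp ./chart \
  --set image.tag=${GIT_SHA} \
  --atomic \
  --timeout 5m
```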
--atomic ensures the deployment rolls back automatically if it fails. No half-deployed states.
Wrapping Up
The journey from Docker to Kubernetes is really a journey from "it runs" to "it runs reliably at scale":
- Docker — Package your app consistently
- Multi-stage builds — Keep images small and secure
- Kubernetes Deployments — Run replicas with self-healing
- Health checks — Let Kubernetes know when something's wrong
- Rolling updates — Deploy without downtime
- Helm — Manage complexity with templates
- Autoscaling — Handle traffic spikes automatically
- Observability — Know what's happening in production
Start simple. Add complexity only when you need it. And always, always have a rollback plan.