


Docker to Kubernetes: A Software Engineer's Production Journey

Phoenix · February 8, 2026 · 25 min read


You've written a Dockerfile. It works on your machine. But "works on my machine" is not a deployment strategy. The gap between a running container and a production-grade deployment is vast — and that's where Kubernetes comes in.

This guide takes you from Docker basics to production Kubernetes, covering the real decisions and patterns you'll face along the way.


Part 1: Docker Done Right

The Dockerfile Everyone Writes First

```dockerfile
FROM node:20
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
```

This works. It's also terrible for production. Here's why:

  • Image size: ~1.2GB. You're shipping the entire Node.js development environment.
  • Build cache: Any file change invalidates npm install, re-downloading every dependency.
  • Security: The container runs as root and ships dev dependencies plus the full Debian-based Node.js toolchain, all of it needless attack surface.
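The cache problem, for instance, disappears once the dependency manifest is copied before the rest of the source. A sketch of the idea with plain npm:

```dockerfile
# Copy only the manifests first: this layer and the install layer
# below it are reused until package.json / package-lock.json change.
COPY package.json package-lock.json ./
RUN npm ci

# Source edits now invalidate only the layers from here down.
COPY . .
RUN npm run build
```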

The Multi-Stage Build

Multi-stage builds use multiple FROM statements. Each stage can copy artifacts from previous stages, discarding everything else.

```dockerfile
# Stage 1: Install dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile

# Stage 2: Build the application
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Prune after the build so the runner gets production modules only
RUN corepack enable && pnpm build && pnpm prune --prod

# Stage 3: Production image
FROM node:20-alpine AS runner
WORKDIR /app

# Don't run as root
RUN addgroup --system --gid 1001 appgroup
RUN adduser --system --uid 1001 appuser

# Copy only what's needed
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./

USER appuser
EXPOSE 3000
CMD ["node", "dist/main.js"]
```

Result: ~150MB image (down from 1.2GB). Non-root user. No dev dependencies. Cached dependency layer.

Container Best Practices

1. One process per container. Don't run your app, database, and Redis in the same container. Each should be its own service.

2. Handle signals properly. Kubernetes sends SIGTERM before killing containers. Your app should handle graceful shutdown:

```typescript
process.on('SIGTERM', async () => {
  console.log('SIGTERM received, shutting down gracefully')
  await server.close()
  await db.disconnect()
  process.exit(0)
})
```
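One detail worth pairing with the handler above: after SIGTERM, Kubernetes waits `terminationGracePeriodSeconds` (30 seconds by default) before sending SIGKILL. If your shutdown can legitimately take longer, raise it in the pod spec:

```yaml
spec:
  terminationGracePeriodSeconds: 60  # default is 30
```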

3. Use .dockerignore. Keep node_modules, .git, and .env out of your build context:

```
node_modules
.git
.env*
dist
*.md
```

4. Pin your base image versions. node:20-alpine is good. node:latest is a ticking time bomb.


Part 2: Enter Kubernetes

Why Kubernetes?

Docker runs containers. Kubernetes orchestrates them. It handles:

  • Scaling — Run 3 replicas? 30? Auto-scale based on CPU? Kubernetes does it.
  • Self-healing — Container crashes? Kubernetes restarts it. Node dies? Kubernetes reschedules the pods.
  • Rolling updates — Deploy new versions with zero downtime.
  • Service discovery — Containers find each other by name, not IP.
  • Config management — Secrets, environment variables, and config files managed declaratively.
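The service-discovery point is worth making concrete: cluster DNS gives every Service a stable name of the form `<service>.<namespace>.svc.cluster.local` (within the same namespace, the short name alone works). A minimal sketch of building such a URL, using the hypothetical `my-api` Service from this guide:

```typescript
// Kubernetes DNS resolves <service>.<namespace>.svc.cluster.local to the
// Service's ClusterIP, which load-balances across the matching pods.
function serviceUrl(service: string, namespace: string, port: number, path: string): string {
  return `http://${service}.${namespace}.svc.cluster.local:${port}${path}`
}

// From another pod in the cluster:
// fetch(serviceUrl('my-api', 'default', 80, '/health'))
```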

The Core Concepts

```
Cluster
  └── Nodes (machines)
       └── Pods (smallest deployable unit)
            └── Containers (your Docker images)

Services    → expose Pods to the network
Deployments → manage Pod replicas and updates
ConfigMaps  → configuration data
Secrets     → sensitive data (base64-encoded; enable encryption at rest separately)
Ingress     → HTTP routing (domain → service)
```

Your First Deployment

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: registry.example.com/my-api:1.0.0
          ports:
            - containerPort: 3000
          resources:
            requests:
              memory: '128Mi'
              cpu: '100m'
            limits:
              memory: '256Mi'
              cpu: '500m'
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: my-api-secrets
                  key: database-url
```
```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP
```
```bash
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl get pods  # See your 3 replicas running
```

Part 3: Production Patterns

Health Checks

Kubernetes needs to know if your container is healthy. Two probes matter:

  • Liveness probe — Is the process alive? If it fails, Kubernetes restarts the container.
  • Readiness probe — Can the container serve traffic? If it fails, Kubernetes removes it from the service load balancer.
```yaml
containers:
  - name: my-api
    livenessProbe:
      httpGet:
        path: /health
        port: 3000
      initialDelaySeconds: 10
      periodSeconds: 15
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /health/ready
        port: 3000
      initialDelaySeconds: 5
      periodSeconds: 5
```
```typescript
// Health check endpoints
app.get('/health', (req, res) => {
  res.status(200).json({ status: 'alive' })
})

app.get('/health/ready', async (req, res) => {
  try {
    await db.query('SELECT 1')
    res.status(200).json({ status: 'ready' })
  } catch {
    res.status(503).json({ status: 'not ready' })
  }
})
```

Rolling Deployments

By default, Kubernetes performs rolling updates — replacing pods one at a time:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # At most 1 pod down during update
      maxSurge: 1        # At most 1 extra pod during update
```

Deploy a new version:

```bash
kubectl set image deployment/my-api my-api=registry.example.com/my-api:2.0.0
kubectl rollout status deployment/my-api  # Watch the rollout
kubectl rollout undo deployment/my-api    # Rollback if something breaks
```

Helm: Package Manager for Kubernetes

Raw YAML gets repetitive. Helm lets you template and package Kubernetes manifests:

```bash
# Create a chart
helm create my-api

# Install it
helm install my-api ./my-api --set image.tag=2.0.0

# Upgrade
helm upgrade my-api ./my-api --set image.tag=3.0.0

# Rollback
helm rollback my-api 1
```

Helm charts let you parameterize everything — replicas, resource limits, env vars, ingress rules — and version your infrastructure alongside your code.
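As a sketch of what that templating looks like (field names hypothetical, but typical of a freshly generated chart), values defined in `values.yaml` are injected into manifests at render time via `{{ .Values }}`:

```yaml
# templates/deployment.yaml (excerpt)
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: my-api
          image: '{{ .Values.image.repository }}:{{ .Values.image.tag }}'
```

`--set image.tag=2.0.0` on the command line overrides the corresponding value without editing any file.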

Horizontal Pod Autoscaler (HPA)

Scale based on metrics:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

When average CPU across pods exceeds 70%, Kubernetes adds more pods. When it drops, it scales down. Simple, effective.
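The controller's core arithmetic is simple. A sketch of the documented scaling rule, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), clamped to the min/max bounds (the real controller adds tolerances and stabilization windows on top):

```typescript
// HPA scaling rule from the Kubernetes docs, with min/max clamping.
function desiredReplicas(
  current: number,      // replicas running now
  currentUtil: number,  // average CPU utilization across pods (%)
  targetUtil: number,   // target utilization from the HPA spec (%)
  min: number,
  max: number
): number {
  const desired = Math.ceil(current * (currentUtil / targetUtil))
  return Math.min(max, Math.max(min, desired))
}

// e.g. 3 pods at 140% average CPU against a 70% target → 6 pods
```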


Part 4: Observability

Running in production without observability is flying blind.

The Three Pillars

1. Logs — What happened?

```bash
kubectl logs -f deployment/my-api       # Stream logs
kubectl logs my-api-pod-xyz --previous  # Logs from crashed container
```

Use structured logging (JSON) so log aggregators (Loki, ELK, Datadog) can parse and query them.
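A hand-rolled sketch of the idea (in practice you'd reach for a library like pino): one JSON object per line, with searchable fields instead of string interpolation.

```typescript
// Emit one JSON object per line so log aggregators can index
// fields (status, path, ...) without regex parsing.
function logInfo(message: string, fields: Record<string, unknown> = {}): string {
  const entry = JSON.stringify({
    level: 'info',
    time: new Date().toISOString(),
    message,
    ...fields,
  })
  console.log(entry)
  return entry
}

// logInfo('request served', { status: 200, path: '/health' })
```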

2. Metrics — How is it performing?

Expose Prometheus metrics from your app:

```typescript
import { Counter, Histogram, register } from 'prom-client'

const httpRequests = new Counter({
  name: 'http_requests_total',
  help: 'Total HTTP requests',
  labelNames: ['method', 'path', 'status'],
})

const httpDuration = new Histogram({
  name: 'http_request_duration_seconds',
  help: 'HTTP request duration',
  labelNames: ['method', 'path'],
})
```

3. Traces — Where is the bottleneck?

Distributed tracing (OpenTelemetry, Jaeger) shows you the full journey of a request across services.
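Trace context travels between services in the W3C `traceparent` HTTP header. A minimal parsing sketch to show its shape (real services let the OpenTelemetry SDK handle propagation; this sketch also skips the spec's all-zero-ID checks):

```typescript
// W3C traceparent: version-traceId-spanId-flags, e.g.
// 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
function parseTraceparent(header: string): { traceId: string; spanId: string } | null {
  const m = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(header)
  return m ? { traceId: m[2], spanId: m[3] } : null
}
```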


The CI/CD Pipeline

Putting it all together:

```yaml
# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build & push Docker image
        run: |
          docker build -t registry.example.com/my-api:${{ github.sha }} .
          docker push registry.example.com/my-api:${{ github.sha }}
      - name: Deploy to Kubernetes
        run: |
          helm upgrade my-api ./helm/my-api \
            --set image.tag=${{ github.sha }} \
            --atomic \
            --timeout 300s
```

--atomic ensures the deployment rolls back automatically if it fails. No half-deployed states.


Wrapping Up

The journey from Docker to Kubernetes is really a journey from "it runs" to "it runs reliably at scale":

  1. Docker — Package your app consistently
  2. Multi-stage builds — Keep images small and secure
  3. Kubernetes Deployments — Run replicas with self-healing
  4. Health checks — Let Kubernetes know when something's wrong
  5. Rolling updates — Deploy without downtime
  6. Helm — Manage complexity with templates
  7. Autoscaling — Handle traffic spikes automatically
  8. Observability — Know what's happening in production

Start simple. Add complexity only when you need it. And always, always have a rollback plan.

