Introduction
Next.js has become one of the most popular frameworks for building full-stack web applications, but deploying it to Kubernetes in production requires careful attention to container optimization, health checks, graceful shutdown, and autoscaling. This guide covers everything needed to take a Next.js 16 project to a production-ready Kubernetes deployment on K3s.
We assume familiarity with Next.js basics and basic Kubernetes concepts. By the end, we will have a complete Dockerfile, Kubernetes manifests, Horizontal Pod Autoscaler, and Cloudflare Tunnel integration.
1. Configuring Next.js for Standalone Output
The standalone output mode is essential for containerized deployments. It bundles only the files needed to run the application, drastically reducing image size.
In next.config.ts:
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  output: "standalone",
};

export default nextConfig;
2. Multi-Stage Dockerfile
This Dockerfile produces a minimal production image:
# Stage 1: Install dependencies
FROM node:22-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
# If using Prisma, copy the schema before install
COPY prisma ./prisma/
RUN npm ci --ignore-scripts
# Generate Prisma client if needed
RUN npx prisma generate
# Stage 2: Build the application
FROM node:22-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Build-time environment variables
ARG NEXT_PUBLIC_API_URL
ENV NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL}
RUN npm run build
# Stage 3: Production image
FROM node:22-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# Copy the standalone output
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
# If using Prisma, copy the generated client
COPY --from=deps /app/node_modules/.prisma ./node_modules/.prisma
COPY --from=deps /app/node_modules/@prisma ./node_modules/@prisma
USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
# Exec-form CMD runs node as PID 1, so it receives SIGTERM directly on pod shutdown
CMD ["node", "server.js"]
Key points:
- Three stages keep the final image small (typically 150-200 MB vs 1 GB+ with a naive Dockerfile).
- Non-root user for security.
- Static files copied separately because the standalone output does not include them.
- Prisma client must be explicitly copied to the runner stage.
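Keeping the Docker build context lean also helps: without a .dockerignore, the local node_modules and .next directories get sent to the daemon on every build. An illustrative starting point (adjust if you bake .env files into the image):

```
# .dockerignore (illustrative)
node_modules
.next
.git
*.md
.env*
```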
3. Handling Environment Variables
Next.js has two types of environment variables:
- NEXT_PUBLIC_*: inlined into the client bundle at build time. These must be available during npm run build.
- Server-side variables: read at runtime via process.env.
For Kubernetes, we pass build-time variables as Docker build args and runtime variables through the Kubernetes Deployment:
# Build with public env vars
docker build \
--build-arg NEXT_PUBLIC_API_URL=https://api.example.com \
-t myapp:v1.0.0 .
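Runtime variables are easy to misconfigure across environments, and a missing value often surfaces only deep inside a request handler. A small guard (a hypothetical lib/env.ts helper, not part of Next.js itself) makes the pod fail fast at startup instead:

```typescript
// lib/env.ts — hypothetical runtime guard for required server-side variables
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Resolving once at module load crashes the pod immediately on misconfiguration,
// which the Deployment's restart policy and probes will surface quickly:
// export const DATABASE_URL = requireEnv("DATABASE_URL");
```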
4. Kubernetes Deployment Manifests
Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: myapp
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: myapp
spec:
  replicas: 3
  revisionHistoryLimit: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: myapp
          image: registry.example.com/myapp:v1.0.0
          ports:
            - containerPort: 3000
              name: http
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: database-url
            - name: REDIS_URL
              value: "redis://redis-svc.myapp:6379"
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
          readinessProbe:
            httpGet:
              path: /api/health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /api/health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 30
            failureThreshold: 3
          lifecycle:
            preStop:
              exec:
                # Delay SIGTERM briefly so the pod is removed from Service
                # endpoints before it stops accepting new connections
                command: ["/bin/sh", "-c", "sleep 5"]
Service
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 3000
      name: http
  type: ClusterIP
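The Deployment references a Secret named myapp-secrets; it can be created out-of-band so credentials never land in the manifests (the connection string here is a placeholder):

```
kubectl create secret generic myapp-secrets -n myapp \
  --from-literal=database-url="postgresql://user:password@postgres-svc:5432/myapp"
```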
Health Check Endpoint
Create a simple API route for health checks:
// app/api/health/route.ts
import { NextResponse } from "next/server";

export async function GET() {
  // Add database connectivity check if needed
  return NextResponse.json({ status: "ok" }, { status: 200 });
}
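The comment above hints at dependency checks. One framework-agnostic sketch (a hypothetical helper, not a Next.js API) runs named async checks in parallel and bounds each with a timeout, so a hung database connection cannot stall the probe past its deadline:

```typescript
// Hypothetical aggregate health check with a per-check timeout
type Check = () => Promise<void>;

export async function runChecks(
  checks: Record<string, Check>,
  timeoutMs = 2000
): Promise<{ ok: boolean; failed: string[] }> {
  const failed: string[] = [];
  await Promise.all(
    Object.entries(checks).map(async ([name, check]) => {
      const timeout = new Promise<never>((_, reject) => {
        const t = setTimeout(() => reject(new Error(`${name} timed out`)), timeoutMs);
        // Don't keep the process alive just for the timer
        (t as { unref?: () => void }).unref?.();
      });
      try {
        await Promise.race([check(), timeout]);
      } catch {
        failed.push(name);
      }
    })
  );
  return { ok: failed.length === 0, failed };
}
```

The route handler can then return 503 when `ok` is false, which makes the readiness probe pull the pod out of rotation without killing it.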
5. Horizontal Pod Autoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
  namespace: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 15
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Pods
          value: 1
          periodSeconds: 60
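The HPA needs resource metrics to act on. K3s bundles metrics-server by default, so no extra installation is required; it is still worth confirming metrics are flowing before relying on autoscaling:

```
# metrics-server ships with K3s; verify pod metrics and HPA status
kubectl top pods -n myapp
kubectl get hpa myapp -n myapp
```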
6. Cloudflare Tunnel Integration
Instead of exposing the cluster with a cloud load balancer, we use Cloudflare Tunnel:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared
  namespace: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cloudflared
  template:
    metadata:
      labels:
        app: cloudflared
    spec:
      containers:
        - name: cloudflared
          # Pin a specific cloudflared version in production instead of :latest
          image: cloudflare/cloudflared:latest
          args:
            - tunnel
            - --no-autoupdate
            - run
            - --token
            - $(TUNNEL_TOKEN)
          env:
            - name: TUNNEL_TOKEN
              valueFrom:
                secretKeyRef:
                  name: cloudflare-tunnel
                  key: token
Configure the tunnel to route traffic to the myapp Service. With the token-based deployment above, the tunnel is remotely managed and these ingress rules are set in the Cloudflare Zero Trust dashboard; the equivalent config.yml for a locally managed tunnel looks like this:

tunnel: <TUNNEL_ID>
ingress:
  - hostname: myapp.example.com
    service: http://myapp.myapp.svc.cluster.local:80
  - service: http_status:404
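The cloudflared Deployment expects the token in a Secret named cloudflare-tunnel; one way to create it, with the token value copied from the Cloudflare dashboard:

```
kubectl create secret generic cloudflare-tunnel -n myapp \
  --from-literal=token="<TUNNEL_TOKEN>"
```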
7. Prisma in Containers
When using Prisma ORM, we need to handle migrations carefully:
# Run migrations as a one-off Kubernetes Job before deploying.
# Note: this ad-hoc Job has no DATABASE_URL; the manifest below supplies it from the Secret.
kubectl create job prisma-migrate -n myapp --image=registry.example.com/myapp:v1.0.0 \
  -- npx prisma migrate deploy
Or as a pre-deploy Job in the manifests:
apiVersion: batch/v1
kind: Job
metadata:
  name: prisma-migrate
  namespace: myapp
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: registry.example.com/myapp:v1.0.0
          command: ["npx", "prisma", "migrate", "deploy"]
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: database-url
      restartPolicy: Never
  backoffLimit: 3
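Since the new application version should not roll out against an unmigrated schema, the deploy process can block until the Job completes. Assuming the manifest above is saved as prisma-migrate-job.yaml (a filename chosen here for illustration):

```
# Job specs are immutable, so remove any previous run first
kubectl delete job prisma-migrate -n myapp --ignore-not-found
kubectl apply -f prisma-migrate-job.yaml
kubectl wait --for=condition=complete job/prisma-migrate -n myapp --timeout=120s
```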
8. Build and Deploy Script
#!/usr/bin/env bash
set -euo pipefail
TAG=$(git rev-parse --short HEAD)
REGISTRY="registry.example.com/myapp"
echo "Building image with tag: $TAG"
docker build \
--build-arg NEXT_PUBLIC_API_URL=https://api.example.com \
-t "$REGISTRY:$TAG" \
-t "$REGISTRY:latest" .
echo "Pushing image..."
docker push "$REGISTRY:$TAG"
docker push "$REGISTRY:latest"
echo "Updating deployment..."
kubectl set image deployment/myapp myapp="$REGISTRY:$TAG" -n myapp
echo "Waiting for rollout..."
kubectl rollout status deployment/myapp -n myapp --timeout=300s
echo "Deployment complete!"
Conclusion
Deploying Next.js to Kubernetes requires attention to standalone output configuration, proper Dockerfile layering, health checks, and graceful shutdown. With the manifests in this guide, we get rolling deployments with zero downtime, automatic scaling based on CPU and memory, and secure exposure via Cloudflare Tunnel. The complete setup runs reliably on even a single K3s node for smaller projects and scales horizontally across multiple nodes for high-traffic applications.