Introduction
Zero-downtime deployments are no longer a luxury reserved for large engineering organizations. With K3s (a lightweight Kubernetes distribution) and ArgoCD (a GitOps continuous delivery tool), even small teams can ship code multiple times a day without dropping a single request. This guide walks through the full setup from a bare server to a working GitOps pipeline.
Prerequisites
- A Linux server (Ubuntu 22.04 or 24.04) with at least 2 CPU cores and 4 GB RAM
- A Git repository for Kubernetes manifests
- A container registry (Docker Hub, GitHub Container Registry, or a private registry)
- kubectl installed on the local machine
1. Install K3s
K3s installs in under 60 seconds:
curl -sfL https://get.k3s.io | sh -s - \
  --disable traefik \
  --write-kubeconfig-mode 644
# Verify the cluster is running
sudo kubectl get nodes
We disable Traefik because we will use our own ingress controller (or Cloudflare Tunnel) later.
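The same options can also live in K3s's configuration file, where they persist across upgrades and reinstalls (a minimal sketch; create the file before running the installer if it does not exist):

```yaml
# /etc/rancher/k3s/config.yaml
disable:
  - traefik
write-kubeconfig-mode: "644"
</imports>
```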
Copy the kubeconfig for remote access:
cat /etc/rancher/k3s/k3s.yaml
Replace 127.0.0.1 with the server's public IP and save it as ~/.kube/config on the local machine.
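The rewrite step can be scripted with sed; here is a minimal sketch in which the IP and file path are placeholders, and the stand-in file represents the kubeconfig already copied down from the server:

```shell
SERVER_IP=203.0.113.10        # placeholder: the server's public IP
KUBECONFIG_FILE=/tmp/k3s.yaml # placeholder: the copied kubeconfig

# Stand-in for the copied file so this snippet is self-contained
printf 'server: https://127.0.0.1:6443\n' > "$KUBECONFIG_FILE"

# Point the kubeconfig at the server instead of loopback
sed -i "s/127\.0\.0\.1/${SERVER_IP}/" "$KUBECONFIG_FILE"
cat "$KUBECONFIG_FILE"
```

In practice the result is saved to ~/.kube/config (with mode 600, since it contains cluster credentials).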
2. Install ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Wait for all pods to be ready
kubectl -n argocd rollout status deployment argocd-server
# Get the initial admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
Expose the ArgoCD UI (for initial setup only; in production, use an ingress or port-forward):
kubectl -n argocd port-forward svc/argocd-server 8080:443
3. Structure the GitOps Repository
We use a simple directory layout:
k8s-manifests/
  base/
    deployment.yaml
    service.yaml
    hpa.yaml
  overlays/
    production/
      kustomization.yaml
      patch-replicas.yaml
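The overlay ties the base manifests together. A sketch of what overlays/production/kustomization.yaml might look like (the patch file contents and the image tag are illustrative):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: patch-replicas.yaml
images:
  - name: ghcr.io/myorg/myapp
    newTag: v1.0.0   # illustrative; CI bumps this on each release
```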
base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  revisionHistoryLimit: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: myapp
          image: ghcr.io/myorg/myapp:latest  # CI replaces this with a unique tag per release
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
          readinessProbe:
            httpGet:
              path: /healthz
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /healthz
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 20
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 5"]
Key settings for zero downtime:
- maxUnavailable: 0 ensures no pod is terminated before a replacement is ready.
- readinessProbe tells Kubernetes when a new pod is ready to receive traffic.
- The preStop lifecycle hook adds a 5-second delay before SIGTERM, giving the ingress controller time to remove the pod from the load balancer.
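Rolling updates cover deployments, but voluntary disruptions (node drains, K3s upgrades) can still evict pods. A PodDisruptionBudget keeps a floor under availability; a minimal sketch matching the labels above:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: myapp
```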
base/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP
base/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
The HPA relies on metrics-server, which K3s bundles by default. Note that once the HPA owns scaling, the replicas field in the Deployment can fight it: ArgoCD will keep resetting the replica count to the value in Git unless the field is removed from the manifest or excluded via ignoreDifferences.
4. Create the ArgoCD Application
argocd-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/k8s-manifests.git
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
Apply it:
kubectl apply -f argocd-app.yaml
Now whenever we push a change to the main branch of k8s-manifests, ArgoCD detects the drift and syncs the cluster to match the desired state.
5. The Deployment Workflow
- Build a new container image in CI and push it with a unique tag (e.g., the Git SHA).
- Update the image tag in deployment.yaml and push to the manifest repo.
- ArgoCD detects the change and begins a rolling update.
- Kubernetes creates new pods with the updated image.
- Readiness probes confirm the new pods are healthy.
- Old pods are gracefully drained and terminated.
- Zero requests are dropped throughout the process.
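The tag-update step is usually automated in CI. A hedged sketch as a GitHub Actions step (the repo name, MANIFESTS_TOKEN secret, and bot identity are placeholders), using kustomize edit so the tag lives in the overlay rather than being edited directly in deployment.yaml:

```yaml
- name: Bump image tag in manifest repo
  run: |
    git clone "https://x-access-token:${{ secrets.MANIFESTS_TOKEN }}@github.com/myorg/k8s-manifests.git"
    cd k8s-manifests/overlays/production
    kustomize edit set image ghcr.io/myorg/myapp=ghcr.io/myorg/myapp:${{ github.sha }}
    git config user.name "ci-bot"
    git config user.email "ci-bot@example.com"
    git commit -am "Deploy myapp ${{ github.sha }}"
    git push
```

ArgoCD picks up the resulting commit on its next poll of the manifest repo, so no direct cluster credentials are needed in CI.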
6. Testing Zero Downtime
We can verify zero downtime with a simple load test during a deployment:
# In one terminal, run a continuous request loop
while true; do
curl -s -o /dev/null -w "%{http_code}\n" http://myapp.example.com/healthz
sleep 0.1
done
# In another terminal, trigger a rolling update. Note: with selfHeal enabled,
# ArgoCD will revert this manual change shortly afterward — it is fine for a
# quick test, but in normal operation push the new tag to Git instead.
kubectl set image deployment/myapp myapp=ghcr.io/myorg/myapp:v2.0.0
Every response should return 200. If we see 502 or 503 errors, we need to check the readiness probe configuration and the preStop hook.
K3s gives us a production-grade Kubernetes cluster in under a minute. ArgoCD gives us a GitOps workflow that keeps the cluster in sync with our repository. Together, they provide a deployment pipeline where shipping new code is as simple as merging a pull request, and users never notice a thing.