A production-grade Node.js HTTP server with Redis, deployed on Kubernetes.
All manifests are bundled and attached to each GitHub Release. Apply the latest release with:
```sh
kubectl apply -f https://github.com/pPrecel/claude-warmup/releases/latest/download/manifests.yaml
```
To pin to a specific version:
```sh
kubectl apply -f https://github.com/pPrecel/claude-warmup/releases/download/v1.0.0/manifests.yaml
```
| File | Purpose |
|---|---|
| `deployment.yaml` | App Deployment — 2 replicas, probes, securityContext, RBAC ServiceAccount |
| `service.yaml` | ClusterIP Service, port 80 → 3000 |
| `redis.yaml` | Redis Deployment + PersistentVolumeClaim + ClusterIP Service |
| `secret.yaml.example` | Template for the Redis credentials Secret |
| `network-policy.yaml` | Ingress and egress NetworkPolicy rules |
| `rbac.yaml` | ServiceAccount, Role, and RoleBinding for the app |
Redis credentials are stored in a Kubernetes Secret named `redis-credentials`. Never commit `secret.yaml` — it is gitignored.
Create the secret manually before deploying:
```sh
kubectl create secret generic redis-credentials \
  --from-literal=redis-password=$(openssl rand -base64 24)
```
Or copy and fill in `k8s/secret.yaml.example`:

```sh
cp k8s/secret.yaml.example k8s/secret.yaml
kubectl apply -f k8s/secret.yaml
```
For production, use an external secrets manager (Vault, AWS Secrets Manager, Sealed Secrets, External Secrets Operator) instead of managing the secret manually.
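Inside the pod, the password can be exposed to the app as an environment variable. A minimal sketch of that wiring — the variable name `REDIS_PASSWORD` is an assumption here; check `k8s/deployment.yaml` for the actual field:

```yaml
# Sketch: consuming the redis-credentials Secret as an env var.
# The key "redis-password" matches the kubectl create secret command above.
env:
  - name: REDIS_PASSWORD          # assumed variable name
    valueFrom:
      secretKeyRef:
        name: redis-credentials
        key: redis-password
```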
The app runs as UID 1000 (the built-in node user):
```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop:
      - ALL
```
Two policies are defined in `k8s/network-policy.yaml`:

- `redis-allow-app` — Redis only accepts ingress from pods labelled `app: claude-warmup` on port 6379; all other ingress to Redis is denied.
- `app-egress` — App pods may only send traffic to DNS (port 53) and Redis (port 6379); all other egress is denied.

A dedicated ServiceAccount (`claude-warmup`) is created with an empty Role — the app makes no Kubernetes API calls, so no API permissions are granted.
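As a rough sketch, the ingress half (`redis-allow-app`) can be expressed like this — the Redis pod label `app: redis` is an assumption; only the `app: claude-warmup` label and port 6379 come from the description above:

```yaml
# Sketch of the redis-allow-app policy, not copied from k8s/network-policy.yaml.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: redis-allow-app
spec:
  podSelector:
    matchLabels:
      app: redis              # assumed label on the Redis pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: claude-warmup
      ports:
        - protocol: TCP
          port: 6379
```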
Every response includes:
```
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
Content-Security-Policy: default-src 'none'
```
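In the Node.js server these headers would typically be set on every response before writing the body. A minimal sketch — the helper name is hypothetical, not taken from the actual server source:

```javascript
// Security headers applied to every response (values from the list above).
const SECURITY_HEADERS = {
  'X-Content-Type-Options': 'nosniff',
  'X-Frame-Options': 'DENY',
  'Content-Security-Policy': "default-src 'none'",
};

// Hypothetical helper: copies each header onto a Node.js ServerResponse.
function applySecurityHeaders(res) {
  for (const [name, value] of Object.entries(SECURITY_HEADERS)) {
    res.setHeader(name, value);
  }
}
```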
Both liveness and readiness probes call GET /healthz:
```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /healthz
    port: 3000
  initialDelaySeconds: 3
  periodSeconds: 5
```
Redis uses exec probes with an authenticated `redis-cli ping`.
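A sketch of what such an exec probe can look like — the exact flags, timings, and the `REDIS_PASSWORD` env var name are assumptions, not copied from `k8s/redis.yaml`:

```yaml
# Sketch: authenticated exec probe against Redis.
livenessProbe:
  exec:
    command:
      - sh
      - -c
      - redis-cli -a "$REDIS_PASSWORD" ping   # assumed env var name
  initialDelaySeconds: 5
  periodSeconds: 10
```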
Both deployments have explicit requests and limits:
| Resource | Request | Limit |
|---|---|---|
| CPU | 50m | 200m |
| Memory | 64Mi | 128Mi |
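Expressed as a container `resources` block, the table above corresponds to:

```yaml
resources:
  requests:
    cpu: 50m
    memory: 64Mi
  limits:
    cpu: 200m
    memory: 128Mi
```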
Redis data is stored on a 1Gi PersistentVolumeClaim (redis-data) mounted at /data. Counter values survive pod restarts and rescheduling.
The app runs with 2 replicas by default. Since the counter is stored in Redis (not in-process), all replicas share the same counter state correctly.
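The reasoning can be illustrated with a small simulation: two "replicas" each keep a local hit count, but both write to one shared counter standing in for Redis (where `INCR` is atomic server-side). The shared counter stays correct; either in-process counter alone sees only its own traffic. This is a toy sketch, not the app's actual code:

```javascript
// Shared counter standing in for Redis: one source of truth for all replicas.
function makeSharedCounter() {
  let value = 0;
  return { incr: () => ++value, get: () => value };
}

const shared = makeSharedCounter();
const replicaA = { localHits: 0 };
const replicaB = { localHits: 0 };

// Requests are load-balanced across replicas, but all increments
// land on the shared counter.
for (let i = 0; i < 10; i++) {
  const replica = i % 2 === 0 ? replicaA : replicaB;
  replica.localHits++;   // an in-process counter sees only half the traffic
  shared.incr();         // the shared counter sees every request
}
```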
The CI pipeline pushes two tags on every build:
- `ghcr.io/pprecel/claude-warmup:latest`
- `ghcr.io/pprecel/claude-warmup:<git-sha>`

For production deployments, always reference the SHA tag for reproducibility. The deployment uses `imagePullPolicy: Always` to ensure the latest image is pulled on pod restarts.
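Pinning to the SHA tag means setting the container's `image` field explicitly — a sketch, where the container name `claude-warmup` is an assumption:

```yaml
containers:
  - name: claude-warmup                            # assumed container name
    image: ghcr.io/pprecel/claude-warmup:<git-sha> # substitute the real commit SHA
    imagePullPolicy: Always
```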
Note: the `redis-credentials` Secret must exist before running `kubectl apply`.