Kubernetes Deployment
Deploy Ovyxa on Kubernetes for high availability, auto-scaling, and enterprise-grade reliability.
Prerequisites
- Kubernetes cluster 1.25+ (EKS, GKE, AKS, or self-managed)
- kubectl configured
- Helm 3.x installed
- 3+ worker nodes recommended for high availability
- Persistent volume provisioner / default StorageClass (e.g. gp3 on AWS, pd-ssd on GCP)
Quick Install with Helm
1. Add Helm Repository
helm repo add ovyxa https://charts.ovyxa.com
helm repo update
2. Create Namespace
kubectl create namespace ovyxa
3. Install Chart
helm install ovyxa ovyxa/ovyxa \
--namespace ovyxa \
--set ingress.hosts[0].host=analytics.yourdomain.com \
--set ingress.tls[0].secretName=ovyxa-tls \
--set ingress.tls[0].hosts[0]=analytics.yourdomain.com \
--set postgresql.auth.password=$(openssl rand -hex 32) \
--set redis.auth.password=$(openssl rand -hex 32)
4. Wait for Pods
kubectl wait --for=condition=ready pod \
-l app.kubernetes.io/instance=ovyxa \
-n ovyxa \
--timeout=300s
5. Access Dashboard
# Get external IP
kubectl get ingress -n ovyxa
# Visit https://analytics.yourdomain.com
Custom values.yaml
Create a values.yaml file for advanced configuration:
# Global settings
global:
  domain: analytics.yourdomain.com

# Ingress
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: analytics.yourdomain.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: ovyxa-tls
      hosts:
        - analytics.yourdomain.com

# API (ingestion + stats)
api:
  replicaCount: 3
  image:
    repository: ovyxa/api
    tag: latest  # pin a specific version in production
    pullPolicy: IfNotPresent
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: 2000m
      memory: 2Gi
  autoscaling:
    enabled: true
    minReplicas: 3
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
  env:
    - name: NODE_ENV
      value: production
    - name: LOG_LEVEL
      value: info

# Web dashboard
web:
  replicaCount: 2
  image:
    repository: ovyxa/web
    tag: latest  # pin a specific version in production
  resources:
    requests:
      cpu: 200m
      memory: 256Mi
    limits:
      cpu: 1000m
      memory: 512Mi

# PostgreSQL (metadata)
postgresql:
  enabled: true
  auth:
    database: ovyxa
    username: ovyxa
    password: ""  # Auto-generated if empty
  primary:
    persistence:
      enabled: true
      size: 20Gi
      storageClass: gp3  # AWS example
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: 2000m
        memory: 4Gi

# ClickHouse (analytics)
clickhouse:
  enabled: true
  replicaCount: 3  # For HA
  persistence:
    enabled: true
    size: 500Gi
    storageClass: gp3
  resources:
    requests:
      cpu: 2000m
      memory: 4Gi
    limits:
      cpu: 8000m
      memory: 16Gi
  zookeeper:
    enabled: true  # Required for replication
    replicaCount: 3

# Redis (cache)
redis:
  enabled: true
  architecture: standalone  # or 'replication' for HA
  auth:
    enabled: true
    password: ""  # Auto-generated
  master:
    resources:
      requests:
        cpu: 200m
        memory: 256Mi
      limits:
        cpu: 1000m
        memory: 1Gi
    persistence:
      enabled: true
      size: 10Gi
Install with custom values:
helm install ovyxa ovyxa/ovyxa \
--namespace ovyxa \
--values values.yaml
High Availability Setup
For production HA:
# values-ha.yaml
api:
  replicaCount: 5
  podAntiAffinity: hard  # Spread across nodes

web:
  replicaCount: 3
  podAntiAffinity: soft

postgresql:
  architecture: replication
  replicaCount: 3

clickhouse:
  replicaCount: 3
  shards: 2  # Horizontal sharding

redis:
  architecture: replication
  sentinel:
    enabled: true
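Anti-affinity spreads replicas across nodes, but a PodDisruptionBudget is what keeps a minimum number of them running during voluntary disruptions such as node drains and cluster upgrades. A minimal sketch, assuming the chart labels API pods with `app.kubernetes.io/component: api` (verify with `kubectl get pods -n ovyxa --show-labels`):

```yaml
# pdb-api.yaml -- keep at least 2 API pods up during drains/upgrades
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: ovyxa-api
  namespace: ovyxa
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app.kubernetes.io/instance: ovyxa
      app.kubernetes.io/component: api   # assumed label; match your pods
```

Apply with kubectl apply -f pdb-api.yaml.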
Monitoring
Prometheus + Grafana
Enable metrics:
# values.yaml
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    namespace: monitoring
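With serviceMonitor.enabled, the chart should render an object roughly like the sketch below; the label selector and port name are assumptions, so check the real output with `kubectl get servicemonitor -n monitoring -o yaml`:

```yaml
# Roughly what the chart renders for Prometheus Operator
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ovyxa
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: ovyxa
  namespaceSelector:
    matchNames:
      - ovyxa
  endpoints:
    - port: metrics   # assumed service port name
      interval: 30s
```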
Install the Prometheus Operator (add the community chart repo first):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--create-namespace
Access Grafana:
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
# Visit http://localhost:3000
# Default: admin/prom-operator
Import Ovyxa dashboard (ID: 12345) from Grafana.com.
Scaling
Manual Scaling
# Scale API pods
kubectl scale deployment ovyxa-api \
--replicas=10 \
-n ovyxa
# Scale ClickHouse
kubectl scale statefulset ovyxa-clickhouse \
--replicas=5 \
-n ovyxa
Auto-scaling (HPA)
The Helm chart already configures an HPA for the API (the autoscaling block in values.yaml). Monitor it with:
kubectl get hpa -n ovyxa
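The chart's autoscaling block corresponds to an HPA roughly like the sketch below, which is useful if you prefer to manage the HPA yourself rather than through the chart (the Deployment name is an assumption):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ovyxa-api
  namespace: ovyxa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ovyxa-api        # assumed deployment name
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```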
Backup & Restore
Using Velero
Install Velero:
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.8.0 \
--bucket ovyxa-backups \
--backup-location-config region=us-east-1 \
--snapshot-location-config region=us-east-1
Schedule backups:
velero schedule create ovyxa-daily \
--schedule="0 2 * * *" \
--include-namespaces ovyxa \
--ttl 720h0m0s
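The velero schedule create command above is equivalent to applying a Schedule resource, which is easier to keep in version control. A sketch mirroring the flags above:

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: ovyxa-daily
  namespace: velero
spec:
  schedule: "0 2 * * *"      # daily at 02:00
  template:
    includedNamespaces:
      - ovyxa
    ttl: 720h0m0s            # retain backups for 30 days
```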
Restore from a backup (list backup names with velero backup get; scheduled backups are named with a full timestamp):
velero restore create --from-backup ovyxa-daily-20250113020000
Upgrade
# Update Helm repo
helm repo update
# Upgrade release
helm upgrade ovyxa ovyxa/ovyxa \
--namespace ovyxa \
--values values.yaml
# Note: avoid combining --values with --reuse-values; the merge behavior is surprising
# Check status
helm status ovyxa -n ovyxa
Troubleshooting
Pods Not Starting
# Check pod status
kubectl get pods -n ovyxa
# Describe problematic pod
kubectl describe pod ovyxa-api-xxx -n ovyxa
# View logs
kubectl logs -f ovyxa-api-xxx -n ovyxa
PVC Not Binding
Check storage class:
kubectl get storageclass
kubectl get pvc -n ovyxa
Ensure the storage class exists and its provisioner (e.g. the CSI driver) is installed and healthy; PVCs stay Pending otherwise.
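On AWS with the EBS CSI driver, a gp3 storage class looks roughly like this (an illustrative sketch; adjust parameters to your cluster):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer   # bind when the pod is scheduled
allowVolumeExpansion: true
```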
High Memory Usage
Check ClickHouse queries:
kubectl exec -it ovyxa-clickhouse-0 -n ovyxa -- \
clickhouse-client --query "SELECT query, formatReadableSize(memory_usage) AS mem FROM system.query_log WHERE type = 'QueryFinish' ORDER BY memory_usage DESC LIMIT 10"
Cost Optimization
AWS EKS Example
# Use spot instances for stateless, non-critical workloads
web:
  nodeSelector:
    kubernetes.io/lifecycle: spot

# Keep databases on on-demand (or reserved) instances
clickhouse:
  nodeSelector:
    kubernetes.io/lifecycle: on-demand
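Spot node groups are often tainted so that only workloads that opt in are scheduled there; in that case the web pods also need a matching toleration. The taint key and value below are assumptions; match them to how your spot node group is actually tainted:

```yaml
web:
  nodeSelector:
    kubernetes.io/lifecycle: spot
  tolerations:
    - key: "lifecycle"       # assumed taint key on the spot node group
      operator: "Equal"
      value: "spot"
      effect: "NoSchedule"
```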
Right-sizing
Monitor actual usage:
kubectl top pods -n ovyxa
Adjust requests/limits accordingly.
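If the Vertical Pod Autoscaler components are installed in the cluster, a VPA in recommendation-only mode suggests requests/limits from observed usage without restarting pods. A sketch, assuming the API deployment is named ovyxa-api:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: ovyxa-api
  namespace: ovyxa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ovyxa-api        # assumed deployment name
  updatePolicy:
    updateMode: "Off"      # recommend only, never evict
```

Read the recommendations with kubectl describe vpa ovyxa-api -n ovyxa.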
Next Steps
- Configuration Reference - All environment variables
- Backup & Restore - Detailed backup procedures
- Docker Compose - Simpler alternative
Kubernetes is ideal for 100M+ events/month and enterprise requirements.