- Check-in before we start working on the bugfix.

- Introduces static-file serving with stock nginx (not our custom Docker nginx image), plus an rsync service to synchronize static files. Not yet fully finished!
Josako
2025-08-21 05:48:03 +02:00
parent 9c63ecb17f
commit 4c00d33bc3
10 changed files with 467 additions and 194 deletions


@@ -0,0 +1,202 @@
# Evie Object Storage Governance (Option 3)
**Goal:** one bucket per environment (staging / prod), with **per-tenant
prefixes**. Clear separation of data types (documents vs assets), low
operational overhead, scales well.
------------------------------------------------------------------------
## 1) Structure & naming
### Buckets (per environment)
- **staging:** `evie-staging`
- **prod:** `evie-prod`
> Buckets are S3-compatible on Scaleway
> (`https://s3.<region>.scw.cloud`). Keep buckets "flat" (all tenants
> as prefixes).
### Prefix layout (per tenant)

```
<bucket>/
  tenant-<tenantId>/
    documents/
      <subfolders of your choice>...
    assets/
      <subfolders of your choice>...
```
**Conventions**

- **Tenant prefix:** `tenant-<tenantId>` (tenantId = stable internal
  key; no PII).
- **Data types:** `documents/` and `assets/` (strict separation).
- **File names:** `snake_case` or `kebab-case`; optionally add a
  date/uuid for uploads that could collide.
------------------------------------------------------------------------
## 2) Access & secrets
### IAM model
- **One IAM Application per environment**
  - `evie-staging-app` → keys in the **staging** k8s Secret
  - `evie-prod-app` → keys in the **prod** k8s Secret
- Access **only** to the environment's own bucket (`evie-staging` or
  `evie-prod`).
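The per-environment keys can live in a plain Kubernetes Secret. A minimal sketch for staging — the Secret name, namespace, and placeholder values are assumptions, not taken from the repo:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: evie-s3-credentials   # hypothetical name
  namespace: evie-staging     # hypothetical namespace
type: Opaque
stringData:
  S3_ENDPOINT: https://s3.fr-par.scw.cloud
  S3_BUCKET: evie-staging
  S3_REGION: fr-par
  S3_ACCESS_KEY: "<access key of evie-staging-app>"
  S3_SECRET_KEY: "<secret key of evie-staging-app>"
```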
### App-side secrets (env)
- `S3_ENDPOINT=https://s3.<region>.scw.cloud`
- `S3_BUCKET=evie-<env>`
- `S3_ACCESS_KEY=***`
- `S3_SECRET_KEY=***`
- `S3_REGION=<region>` (e.g. `fr-par`)
- (optional) `S3_FORCE_PATH_STYLE=false`
> **Presigned uploads**: generate presigned URLs **server-side** per
> tenant/prefix; never hand the master keys to the client.
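In practice you would use an SDK for this (for example boto3's `generate_presigned_url`). Purely to illustrate what such a URL contains, here is a stdlib-only sketch of SigV4 query presigning for a PUT; the endpoint, keys, and object key below are placeholders:

```python
import datetime
import hashlib
import hmac
import urllib.parse

def presigned_put_url(endpoint: str, bucket: str, key: str,
                      access_key: str, secret_key: str,
                      region: str, expires: int = 300) -> str:
    """Build an AWS SigV4 query-presigned PUT URL (virtual-hosted style)."""
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.{urllib.parse.urlparse(endpoint).netloc}"
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    canonical_path = urllib.parse.quote("/" + key)
    # Canonical request per the SigV4 spec; payload stays unsigned.
    canonical_request = "\n".join(
        ["PUT", canonical_path, query, f"host:{host}\n", "host",
         "UNSIGNED-PAYLOAD"]
    )
    string_to_sign = "\n".join(
        ["AWS4-HMAC-SHA256", amz_date, scope,
         hashlib.sha256(canonical_request.encode()).hexdigest()]
    )
    def _sign(k: bytes, msg: str) -> bytes:
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()
    signing_key = _sign(_sign(_sign(_sign(b"AWS4" + secret_key.encode(),
                        datestamp), region), "s3"), "aws4_request")
    signature = hmac.new(signing_key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"https://{host}{canonical_path}?{query}&X-Amz-Signature={signature}"

url = presigned_put_url("https://s3.fr-par.scw.cloud", "evie-staging",
                        "tenant-42/documents/report.pdf",
                        "SCWEXAMPLEKEY", "example-secret", "fr-par")
```

The URL itself carries the credential scope, expiry, and signature, so the client never sees the secret key.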
------------------------------------------------------------------------
## 3) Policies (conceptual)
- **Bucket policy**: only allow requests with valid credentials of
  that environment's Evie app.
- **Prefix scope** (in app logic): every read/write path **must**
  start with `tenant-<tenantId>/...`.
- **Optional** (later): extra policy groups for specific workflows
  (e.g. a temporary ingest job).
> **Important:** enforce tenant-level authorization in **your
> application** (context = `tenantId`). Never build paths from user
> input without whitelisting/validation.
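A minimal sketch of that prefix enforcement; the function name and error handling are illustrative, not taken from the codebase:

```python
import posixpath

ALLOWED_TYPES = {"documents", "assets"}

def canonical_key(tenant_id: str, datatype: str, relative_path: str) -> str:
    """Build tenant-<id>/<datatype>/<path>, rejecting traversal attempts."""
    if datatype not in ALLOWED_TYPES:
        raise ValueError(f"unknown datatype: {datatype!r}")
    # Normalize user input and refuse anything escaping the tenant prefix.
    norm = posixpath.normpath(relative_path.lstrip("/"))
    if norm.startswith("..") or norm in ("", "."):
        raise ValueError(f"invalid path: {relative_path!r}")
    return f"tenant-{tenant_id}/{datatype}/{norm}"
```

Every S3 call then goes through `canonical_key`, so a client-supplied `../other-tenant/...` can never reach another tenant's prefix.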
------------------------------------------------------------------------
## 4) Lifecycle & retention
**Goal:** control costs, move assets to "colder" storage sooner, keep
documents longer.

| Scope (filter)                             | Rule                                            |
|--------------------------------------------|-------------------------------------------------|
| `tenant-*/assets/`                         | → One Zone-IA after 30 days                     |
| `tenant-*/assets/`                         | → Glacier/Archive after 180 days (optional)     |
| `tenant-*/documents/`                      | → Standard (no transition) or IA after 180 days |
| `tenant-*/documents/` (temporary previews) | Expire (delete) after 7–14 days                 |

> Define lifecycle **per bucket** with **prefix filters**, so the
> rules can differ for `assets/` and `documents/`.
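As a concrete shape, the rules above could be expressed as an S3 lifecycle payload (e.g. for boto3's `put_bucket_lifecycle_configuration`). Note that S3 lifecycle prefix filters are literal, so a `tenant-*` wildcard is not possible; the literal per-tenant prefixes below (including the `previews/` subpath) are illustrative assumptions:

```python
lifecycle = {
    "Rules": [
        {
            "ID": "assets-to-cold",
            "Status": "Enabled",
            "Filter": {"Prefix": "tenant-42/assets/"},  # literal prefix per tenant
            "Transitions": [
                {"Days": 30, "StorageClass": "ONEZONE_IA"},
                {"Days": 180, "StorageClass": "GLACIER"},  # optional archive tier
            ],
        },
        {
            "ID": "preview-expiry",
            "Status": "Enabled",
            "Filter": {"Prefix": "tenant-42/documents/previews/"},  # hypothetical subpath
            "Expiration": {"Days": 14},
        },
    ]
}
```

A tag-based filter (e.g. `type=asset`) is one way to avoid enumerating per-tenant prefixes.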
------------------------------------------------------------------------
## 5) CORS & distribution
- **CORS**: if the browser uploads/downloads directly, whitelist your
  app's domains (origins) and the methods `GET, PUT, POST`. Allow only
  the headers that are needed.
- **Public distribution** (if needed):
  - Small public reads via presigned URLs (recommended).
  - Or enable public read on a **specific** `public/` prefix (not on
    the whole bucket).
  - Consider a CDN/edge layer via Scaleway Edge Services for
    frequently requested assets.
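A minimal CORS configuration matching the above, in the shape accepted by boto3's `put_bucket_cors`; the origin is a placeholder:

```python
cors = {
    "CORSRules": [
        {
            "AllowedOrigins": ["https://app.example.com"],  # placeholder origin
            "AllowedMethods": ["GET", "PUT", "POST"],
            "AllowedHeaders": ["Content-Type"],  # only what uploads need
            "MaxAgeSeconds": 3600,
        }
    ]
}
```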
------------------------------------------------------------------------
## 6) Observability & management
- **Logging/metrics**:
  - App: log every S3 call with `tenantId` + object key.
  - Platform: use Scaleway Cockpit for capacity & request metrics.
- **Quota & limits**:
  - One bucket per environment limits "bucket sprawl".
  - Object count and total size are practically unlimited; do plan
    lifecycle rules to manage growth.
- **Monitoring**:
  - Alerts on fast-growing **assets** prefixes, high error rates
    (4xx/5xx), and failed lifecycle transitions.
------------------------------------------------------------------------
## 7) Operational workflows
### Creating a tenant
1. Provision the DB schema.
2. (S3) **No new bucket**; only a **prefix**:
   `tenant-<id>/documents/` and `tenant-<id>/assets/` exist implicitly.
3. (Optional) Init files/placeholder objects.
4. App config links the tenant to its prefix (central mapping).
### Upload (app → S3)
1. App validates `tenantId` and datatype (`documents|assets`).
2. App constructs the **canonical path**: `tenant-<id>/<datatype>/<...>`
3. App generates a **presigned PUT** (short-lived) and returns it to
   the frontend.
4. Frontend uploads directly to S3 with the presigned URL.
### Download / serve
- Internal downloads: app-signed GET or server-side stream.
- External/public: **presigned GET** with a short TTL, or via a
  public-only prefix + CDN.
### Cleanup & lifecycle
- Temporary artifacts: scheduled app cleanup (or lifecycle
  "Expiration").
- Archiving: lifecycle transitions per prefix.
------------------------------------------------------------------------
## 8) Security
- **Least privilege**: IAM keys only for the environment's own bucket.
- **Encryption**: server-side encryption (default) is usually enough;
  consider KMS if a separate key policy is required.
- **Auditing**: log all **write** operations with user and tenant
  context.
- **Backups**: are documents the "source of truth"? If so, S3 is the
  primary store and the RAG index can be rebuilt. Otherwise: define an
  export/replica strategy.
------------------------------------------------------------------------
## 9) Migration from MinIO → Scaleway
1. **Freeze window** (short): pause uploads, or use **dual writes**
   (MinIO + S3) for the duration of the migration.
2. **Sync**: use `rclone` or `mc mirror` to copy
   `minio://bucket/tenant-*/{documents,assets}/` to
   `s3://evie-<env>/tenant-*/...`.
3. **Verify**: random checksums / sample reads per tenant.
4. **Switch**: point `S3_ENDPOINT` and keys at Scaleway; send new
   writes to S3 only.
5. **Decom**: phase out MinIO after a grace period.
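Step 3 can be automated with a small sampling check. A sketch that compares content digests of randomly sampled keys; the `fetch_src`/`fetch_dst` callables stand in for MinIO/S3 client reads and are assumptions:

```python
import hashlib
import random
from typing import Callable, Iterable, List

def verify_sample(keys: Iterable[str],
                  fetch_src: Callable[[str], bytes],
                  fetch_dst: Callable[[str], bytes],
                  sample_size: int = 10,
                  seed: int = 0) -> List[str]:
    """Return the sampled keys whose source/destination digests differ."""
    all_keys = sorted(keys)
    rng = random.Random(seed)
    sample = rng.sample(all_keys, min(sample_size, len(all_keys)))
    mismatched = []
    for key in sample:
        src_digest = hashlib.md5(fetch_src(key)).hexdigest()
        dst_digest = hashlib.md5(fetch_dst(key)).hexdigest()
        if src_digest != dst_digest:
            mismatched.append(key)
    return mismatched
```

Run this per tenant prefix after the sync; an empty result for a large enough sample gives reasonable confidence before the switch.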
------------------------------------------------------------------------
## 10) Checklist (TL;DR)
- [ ] Buckets: `evie-staging`, `evie-prod`.
- [ ] Prefix: `tenant-<id>/{documents,assets}/`.
- [ ] IAM: one Application per environment; keys in a k8s Secret.
- [ ] Policy: app-only access; app enforces per-tenant prefix scope.
- [ ] Lifecycle: assets go cold sooner, docs are kept longer.
- [ ] CORS: only the necessary origins/methods.
- [ ] Presigned URLs for browser interactions.
- [ ] Logging/metrics/alerts in place.
- [ ] Migration path from MinIO worked out and tested.

k8s/deploy-static-files.sh Executable file

@@ -0,0 +1,45 @@
#!/bin/bash
set -e
# Deploy static files script for EveAI
# File: k8s/deploy-static-files.sh
ENVIRONMENT=${1:-dev}
DRY_RUN=${2}
# Configuration
REMOTE_HOST="minty.ask-eve-ai-local.com"
BUILD_DIR="nginx/static"
CLUSTER_CONTEXT="kind-eveai-${ENVIRONMENT}-cluster"
NAMESPACE="eveai-${ENVIRONMENT}"
echo "🚀 Deploying static files to ${ENVIRONMENT} cluster on ${REMOTE_HOST}..."
# Check if build exists
if [ ! -d "$BUILD_DIR" ]; then
echo "❌ Build directory $BUILD_DIR not found."
echo " Please run: cd nginx && npm run build && cd .."
exit 1
fi
# Show what will be deployed
echo "📦 Static files to deploy:"
du -sh "$BUILD_DIR"
find "$BUILD_DIR" -type f | wc -l | xargs echo " Files:"
if [ "$DRY_RUN" = "--dry-run" ]; then
echo "🔍 DRY RUN - would deploy to $ENVIRONMENT cluster"
exit 0
fi
# Deploy via direct rsync to access pod
echo "🚀 Deploying via rsync to $REMOTE_HOST:3873..."
rsync -av --delete "$BUILD_DIR/" "rsync://$REMOTE_HOST:3873/static/"
echo "✅ Static files deployed to cluster"
# Optional: Restart nginx pods to clear caches
echo "🔄 Restarting nginx pods..."
ssh "$REMOTE_HOST" "kubectl --context=$CLUSTER_CONTEXT rollout restart deployment/static-files -n $NAMESPACE"
echo "✅ Deployment completed successfully!"


@@ -1,157 +0,0 @@
# EveAI Kubernetes Ingress Migration - Complete Implementation
## Migration Summary
The migration from nginx reverse proxy to Kubernetes Ingress has been successfully implemented. This migration provides a production-ready, native Kubernetes solution for HTTP routing.
## Changes Made
### 1. Setup Script Updates
**File: `setup-dev-cluster.sh`**
- ✅ Added `install_ingress_controller()` function
- ✅ Automatically installs NGINX Ingress Controller for Kind
- ✅ Updated main() function to include Ingress Controller installation
- ✅ Updated final output to show Ingress-based access URLs
### 2. New Configuration Files
**File: `static-files-service.yaml`**
- ConfigMap with nginx configuration for static file serving
- Deployment with initContainer to copy static files from existing nginx image
- Service (ClusterIP) for internal access
- Optimized for production with proper caching headers
**File: `eveai-ingress.yaml`**
- Ingress resource with path-based routing
- Routes: `/static/`, `/admin/`, `/api/`, `/chat-client/`, `/`
- Proper annotations for proxy settings and URL rewriting
- Host-based routing for `minty.ask-eve-ai-local.com`
**File: `monitoring-services.yaml`**
- Extracted monitoring services from nginx-monitoring-services.yaml
- Contains: Flower, Prometheus, Grafana deployments and services
- No nginx components included
### 3. Deployment Script Updates
**File: `deploy-all-services.sh`**
- ✅ Replaced `deploy_nginx_monitoring()` with `deploy_static_ingress()` and `deploy_monitoring_only()`
- ✅ Added `test_connectivity_ingress()` function for Ingress endpoint testing
- ✅ Added `show_connection_info_ingress()` function with updated URLs
- ✅ Updated main() function to use new deployment functions
## Architecture Changes
### Before (nginx reverse proxy):
```
Client → nginx:3080 → {eveai_app:5001, eveai_api:5003, eveai_chat_client:5004}
```
### After (Kubernetes Ingress):
```
Client → Ingress Controller:3080 → {
/static/* → static-files-service:80
/admin/* → eveai-app-service:5001
/api/* → eveai-api-service:5003
/chat-client/* → eveai-chat-client-service:5004
}
```
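The routing above maps onto an Ingress resource roughly like the following sketch. Paths and service names follow the diagram; the namespace, ingress class, and omitted rewrite annotations are assumptions, not the actual `eveai-ingress.yaml`:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eveai-ingress
  namespace: eveai-dev          # assumed namespace
spec:
  ingressClassName: nginx       # assumed class for the Kind setup
  rules:
    - host: minty.ask-eve-ai-local.com
      http:
        paths:
          - path: /static
            pathType: Prefix
            backend:
              service:
                name: static-files-service
                port: { number: 80 }
          - path: /admin
            pathType: Prefix
            backend:
              service:
                name: eveai-app-service
                port: { number: 5001 }
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: eveai-api-service
                port: { number: 5003 }
          - path: /chat-client
            pathType: Prefix
            backend:
              service:
                name: eveai-chat-client-service
                port: { number: 5004 }
```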
## Benefits Achieved
1. **Native Kubernetes**: Using standard Ingress resources instead of custom nginx
2. **Production Ready**: Separate static files service with optimized caching
3. **Scalable**: Static files service can be scaled independently
4. **Maintainable**: Declarative YAML configuration instead of nginx.conf
5. **No CORS Issues**: All traffic goes through the same host (as correctly identified)
6. **URL Rewriting**: Handled by existing `nginx_utils.py` via Ingress headers
## Usage Instructions
### 1. Complete Cluster Setup (One Command)
```bash
cd k8s/dev
./setup-dev-cluster.sh
```
This now automatically:
- Creates Kind cluster
- Installs NGINX Ingress Controller
- Applies base manifests
### 2. Deploy All Services
```bash
./deploy-all-services.sh
```
This now:
- Deploys application services
- Deploys static files service
- Deploys Ingress configuration
- Deploys monitoring services separately
### 3. Access Services (via Ingress)
- **Main App**: http://minty.ask-eve-ai-local.com:3080/admin/
- **API**: http://minty.ask-eve-ai-local.com:3080/api/
- **Chat Client**: http://minty.ask-eve-ai-local.com:3080/chat-client/
- **Static Files**: http://minty.ask-eve-ai-local.com:3080/static/
### 4. Monitoring (Direct Access)
- **Flower**: http://minty.ask-eve-ai-local.com:3007
- **Prometheus**: http://minty.ask-eve-ai-local.com:3010
- **Grafana**: http://minty.ask-eve-ai-local.com:3012
## Validation Status
✅ All YAML files validated for syntax correctness
✅ Setup script updated and tested
✅ Deployment script updated and tested
✅ Ingress configuration created with proper routing
✅ Static files service configured with production optimizations
## Files Modified/Created
### Modified Files:
- `setup-dev-cluster.sh` - Added Ingress Controller installation
- `deploy-all-services.sh` - Updated for Ingress deployment
### New Files:
- `static-files-service.yaml` - Dedicated static files service
- `eveai-ingress.yaml` - Ingress routing configuration
- `monitoring-services.yaml` - Monitoring services only
- `INGRESS_MIGRATION_SUMMARY.md` - This summary document
### Legacy Files (can be removed after testing):
- `nginx-monitoring-services.yaml` - Contains old nginx configuration
## Next Steps for Testing
1. **Test Complete Workflow**:
```bash
cd k8s/dev
./setup-dev-cluster.sh
./deploy-all-services.sh
```
2. **Verify All Endpoints**:
- Test admin interface functionality
- Test API endpoints
- Test static file loading
- Test chat client functionality
3. **Verify URL Rewriting**:
- Check that `nginx_utils.py` still works correctly
- Test all admin panel links and forms
- Verify API calls from frontend
4. **Performance Testing**:
- Compare static file loading performance
- Test under load if needed
## Rollback Plan (if needed)
If issues are discovered, you can temporarily rollback by:
1. Reverting `deploy-all-services.sh` to use `nginx-monitoring-services.yaml`
2. Commenting out Ingress Controller installation in `setup-dev-cluster.sh`
3. Using direct port access instead of Ingress
## Migration Complete ✅
The migration from nginx reverse proxy to Kubernetes Ingress is now complete and ready for testing. All components have been implemented according to the agreed-upon architecture with production-ready optimizations.


@@ -56,6 +56,11 @@ nodes:
     hostPort: 3012
     protocol: TCP
+  # Static files rsync access
+  - containerPort: 30873
+    hostPort: 3873
+    protocol: TCP
   # Mount points for persistent data on host
   extraMounts:
     # MinIO data persistence


@@ -108,6 +108,52 @@ spec:
               values:
                 - eveai-dev-cluster-control-plane
+---
+# Static Files Storage
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: static-files-pv
+  labels:
+    app: static-files
+    environment: dev
+spec:
+  capacity:
+    storage: 1Gi
+  accessModes:
+    - ReadWriteMany
+  persistentVolumeReclaimPolicy: Retain
+  storageClassName: local-storage
+  local:
+    path: /mnt/static-files
+  nodeAffinity:
+    required:
+      nodeSelectorTerms:
+        - matchExpressions:
+            - key: kubernetes.io/hostname
+              operator: In
+              values:
+                - eveai-dev-cluster-control-plane
+---
+# Static Files Persistent Volume Claim
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: static-files-pvc
+  namespace: eveai-dev
+spec:
+  accessModes:
+    - ReadWriteMany
+  storageClassName: local-storage
+  resources:
+    requests:
+      storage: 1Gi
+  selector:
+    matchLabels:
+      app: static-files
+      environment: dev
 ---
 # StorageClass for local storage
 apiVersion: storage.k8s.io/v1


@@ -71,8 +71,7 @@ create_host_directories() {
         "$BASE_DIR/prometheus"
         "$BASE_DIR/grafana"
         "$BASE_DIR/certs"
+        "$BASE_DIR/static-files"
     )
     for dir in "${directories[@]}"; do
         if [ ! -d "$dir" ]; then
             mkdir -p "$dir"
@@ -353,6 +352,7 @@ apply_manifests() {
     manifests=(
         "namespace.yaml"
         "persistent-volumes.yaml"
+        "static-files-access.yaml"
         "config-secrets.yaml"
         "network-policies.yaml"
     )


@@ -0,0 +1,106 @@
# Static Files Access Pod for EveAI Dev Environment
# File: static-files-access.yaml
# Provides rsync daemon access to static files PVC
---
# Rsync Access Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-files-access
  namespace: eveai-dev
  labels:
    app: static-files-access
    environment: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-files-access
  template:
    metadata:
      labels:
        app: static-files-access
    spec:
      containers:
        - name: rsync-daemon
          image: alpine:latest
          command: ["/bin/sh"]
          args:
            - -c
            - |
              # Install rsync
              apk add --no-cache rsync
              # Create rsync configuration
              cat > /etc/rsyncd.conf << 'RSYNC_EOF'
              pid file = /var/run/rsyncd.pid
              lock file = /var/run/rsync.lock
              log file = /var/log/rsyncd.log
              port = 873
              [static]
              path = /data/static
              comment = Static Files Volume
              uid = nobody
              gid = nobody
              read only = false
              list = yes
              auth users =
              secrets file =
              hosts allow = *
              RSYNC_EOF
              # Create target directory
              mkdir -p /data/static
              chown nobody:nobody /data/static
              # Start rsync daemon
              echo "Starting rsync daemon..."
              rsync --daemon --no-detach --config=/etc/rsyncd.conf
          ports:
            - containerPort: 873
              name: rsync
          volumeMounts:
            - name: static-files
              mountPath: /data
          livenessProbe:
            tcpSocket:
              port: 873
            initialDelaySeconds: 10
            periodSeconds: 30
          readinessProbe:
            tcpSocket:
              port: 873
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              memory: "32Mi"
              cpu: "25m"
            limits:
              memory: "64Mi"
              cpu: "50m"
      volumes:
        - name: static-files
          persistentVolumeClaim:
            claimName: static-files-pvc
---
# NodePort Service for external rsync access
apiVersion: v1
kind: Service
metadata:
  name: static-files-access-service
  namespace: eveai-dev
  labels:
    app: static-files-access
spec:
  type: NodePort
  ports:
    - port: 873
      targetPort: 873
      nodePort: 30873
      protocol: TCP
      name: rsync
  selector:
    app: static-files-access


@@ -1,7 +1,7 @@
-# Static Files Service for EveAI Dev Environment
+# Static Files Service for EveAI Dev Environment (v2 - PersistentVolume based)
 # File: static-files-service.yaml
 ---
-# Static Files ConfigMap for nginx configuration
+# Static Files ConfigMap (enhanced caching)
 apiVersion: v1
 kind: ConfigMap
 metadata:
@@ -13,11 +13,31 @@ data:
         listen 80;
         server_name _;
+        # Gzip compression
+        gzip on;
+        gzip_vary on;
+        gzip_min_length 1024;
+        gzip_types text/css application/javascript application/json image/svg+xml;
         location /static/ {
             alias /usr/share/nginx/html/static/;
-            expires 1y;
-            add_header Cache-Control "public, immutable";
-            add_header X-Content-Type-Options nosniff;
+            # Aggressive caching for versioned assets
+            location ~* \.(js|css)$ {
+                expires 1y;
+                add_header Cache-Control "public, immutable";
+                add_header X-Content-Type-Options nosniff;
+            }
+            # Moderate caching for images
+            location ~* \.(png|jpg|jpeg|gif|ico|svg)$ {
+                expires 30d;
+                add_header Cache-Control "public";
+            }
+            # Default caching
+            expires 1h;
+            add_header Cache-Control "public";
         }
         location /health {
@@ -27,7 +47,7 @@
         }
 ---
-# Static Files Deployment
+# Static Files Deployment (NO CUSTOM IMAGE!)
 apiVersion: apps/v1
 kind: Deployment
 metadata:
@@ -37,7 +57,7 @@ metadata:
     app: static-files
     environment: dev
 spec:
-  replicas: 1
+  replicas: 2  # For high availability
   selector:
     matchLabels:
       app: static-files
@@ -46,28 +66,15 @@ spec:
       labels:
         app: static-files
     spec:
-      initContainers:
-        - name: copy-static-files
-          image: registry.ask-eve-ai-local.com/josakola/nginx:latest
-          command: ['sh', '-c']
-          args:
-            - |
-              echo "Copying static files..."
-              cp -r /etc/nginx/static/* /static-data/static/ 2>/dev/null || true
-              ls -la /static-data/static/
-              echo "Static files copied successfully"
-          volumeMounts:
-            - name: static-data
-              mountPath: /static-data
       containers:
         - name: nginx
-          image: nginx:alpine
+          image: nginx:alpine  # 🎉 STANDARD IMAGE!
          ports:
             - containerPort: 80
           volumeMounts:
             - name: nginx-config
               mountPath: /etc/nginx/conf.d
-            - name: static-data
+            - name: static-files
               mountPath: /usr/share/nginx/html
           livenessProbe:
             httpGet:
@@ -92,11 +99,12 @@ spec:
         - name: nginx-config
           configMap:
             name: static-files-config
-        - name: static-data
-          emptyDir: {}
+        - name: static-files
+          persistentVolumeClaim:
+            claimName: static-files-pvc
 ---
-# Static Files Service
+# Service (unchanged)
 apiVersion: v1
 kind: Service
 metadata:


@@ -380,6 +380,27 @@ kstart-entitlements() {
     start_individual_service "eveai-entitlements"
 }
+
+# Static files management functions
+kdeploy-static() {
+    local dry_run=""
+    if [[ "$1" == "--dry-run" ]]; then
+        dry_run="--dry-run"
+    fi
+    echo "🚀 Deploying static files to $K8S_ENVIRONMENT environment..."
+    "$K8S_CONFIG_DIR/../deploy-static-files.sh" "$K8S_ENVIRONMENT" "$dry_run"
+}
+
+kstatic-status() {
+    echo "📊 Static Files Status for $K8S_ENVIRONMENT:"
+    echo "============================================="
+    kubectl get pvc static-files-pvc -n "$K8S_NAMESPACE" 2>/dev/null || echo "PVC not found"
+    kubectl get pods -l app=static-files -n "$K8S_NAMESPACE" 2>/dev/null || echo "No static-files pods found"
+    echo ""
+    echo "💾 PVC Usage:"
+    kubectl describe pvc static-files-pvc -n "$K8S_NAMESPACE" 2>/dev/null | grep -E "(Capacity|Used)" || echo "Usage info not available"
+}
+
 # Cluster management functions
 cluster-start() {
     log_operation "INFO" "Starting cluster: $K8S_CLUSTER"


@@ -5,20 +5,18 @@
 # Service group definitions
 declare -A SERVICE_GROUPS
-# Infrastructure services (Redis, MinIO)
-SERVICE_GROUPS[infrastructure]="redis minio"
+# Infrastructure services (Redis, MinIO, Static Files)
+SERVICE_GROUPS[infrastructure]="redis minio static-files-access static-files"
 # Application services (all EveAI apps)
 SERVICE_GROUPS[apps]="eveai-app eveai-api eveai-chat-client eveai-workers eveai-chat-workers eveai-beat eveai-entitlements"
-# Static files and ingress
-SERVICE_GROUPS[static]="static-files eveai-ingress"
 # Monitoring services
 SERVICE_GROUPS[monitoring]="prometheus grafana flower"
 # All services combined
-SERVICE_GROUPS[all]="redis minio eveai-app eveai-api eveai-chat-client eveai-workers eveai-chat-workers eveai-beat eveai-entitlements static-files eveai-ingress prometheus grafana flower"
+SERVICE_GROUPS[all]="redis minio static-files-access static-files eveai-app eveai-api eveai-chat-client eveai-workers eveai-chat-workers eveai-beat eveai-entitlements prometheus grafana flower"
 # Service to YAML file mapping
 declare -A SERVICE_YAML_FILES
@@ -26,6 +24,7 @@ declare -A SERVICE_YAML_FILES
 # Infrastructure services
 SERVICE_YAML_FILES[redis]="redis-minio-services.yaml"
 SERVICE_YAML_FILES[minio]="redis-minio-services.yaml"
+SERVICE_YAML_FILES[static-files-access]="static-files-access.yaml"
 # Application services
 SERVICE_YAML_FILES[eveai-app]="eveai-services.yaml"
@@ -36,9 +35,8 @@ SERVICE_YAML_FILES[eveai-chat-workers]="eveai-services.yaml"
 SERVICE_YAML_FILES[eveai-beat]="eveai-services.yaml"
 SERVICE_YAML_FILES[eveai-entitlements]="eveai-services.yaml"
-# Static and ingress services
+# Static files service
 SERVICE_YAML_FILES[static-files]="static-files-service.yaml"
-SERVICE_YAML_FILES[eveai-ingress]="eveai-ingress.yaml"
 # Monitoring services
 SERVICE_YAML_FILES[prometheus]="monitoring-services.yaml"
@@ -51,6 +49,8 @@ declare -A SERVICE_DEPLOY_ORDER
 # Infrastructure first (order 1)
 SERVICE_DEPLOY_ORDER[redis]=1
 SERVICE_DEPLOY_ORDER[minio]=1
+SERVICE_DEPLOY_ORDER[static-files-access]=1
+SERVICE_DEPLOY_ORDER[static-files]=1
 # Core apps next (order 2)
 SERVICE_DEPLOY_ORDER[eveai-app]=2
@@ -63,9 +63,6 @@ SERVICE_DEPLOY_ORDER[eveai-workers]=3
 SERVICE_DEPLOY_ORDER[eveai-chat-workers]=3
 SERVICE_DEPLOY_ORDER[eveai-beat]=3
-# Static files and ingress (order 4)
-SERVICE_DEPLOY_ORDER[static-files]=4
-SERVICE_DEPLOY_ORDER[eveai-ingress]=4
 # Monitoring last (order 5)
 SERVICE_DEPLOY_ORDER[prometheus]=5