- Set static URL (for staging and production) to bunny.net static storage

Josako
2025-09-05 07:55:57 +02:00
parent 54a9641440
commit 6115cc7e13
4 changed files with 135 additions and 43 deletions

View File

@@ -330,6 +330,9 @@ class DevConfig(Config):
    EVEAI_CHAT_LOCATION_PREFIX = '/chat'
    CHAT_CLIENT_PREFIX = 'chat-client/chat/'
    # Define the static path
    STATIC_URL = 'https://evie-staging-static.askeveai.com'
    # PATH settings
    ffmpeg_path = '/usr/bin/ffmpeg'
@@ -354,6 +357,9 @@ class StagingConfig(Config):
    EVEAI_CHAT_LOCATION_PREFIX = '/chat'
    CHAT_CLIENT_PREFIX = 'chat-client/chat/'
    # Define the static path
    STATIC_URL = 'https://evie-staging-static.askeveai.com'
    # PATH settings
    ffmpeg_path = '/usr/bin/ffmpeg'
@@ -383,9 +389,8 @@ class ProdConfig(Config):
    EVEAI_APP_LOCATION_PREFIX = '/admin'
    EVEAI_CHAT_LOCATION_PREFIX = '/chat'
    # flask-mailman settings
    MAIL_USERNAME = 'eveai_super@flow-it.net'
    MAIL_PASSWORD = '$6xsWGbNtx$CFMQZqc*'
    # Define the static path
    STATIC_URL = 'https://evie-staging-static.askeveai.com'
    # PATH settings
    ffmpeg_path = '/usr/bin/ffmpeg'

View File

@@ -465,7 +465,7 @@ kubectl -n tools port-forward svc/pgadmin-pgadmin4 8080:80
### Phase 8: RedisInsight Tool Deployment
### Phase 9: Enable Scaleway Registry
### Phase 10: Application Services Deployment
#### Create Scaleway Registry Secret
Create docker pull secret via External Secrets (once):
@@ -509,9 +509,105 @@ Use the staging overlay to deploy apps with registry rewrite and imagePullSecret
```bash
kubectl apply -k scaleway/manifests/overlays/staging/
```
Notes:
- Base manifests keep generic images (josakola/...). The overlay rewrites them to rg.fr-par.scw.cloud/eveai-staging/josakola/...:staging and adds imagePullSecrets to all Pods.
- Staging uses imagePullPolicy: Always, so new pushes to :staging are pulled automatically.
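To confirm the rewrite before applying, the overlay can be rendered locally (a quick sketch using kubectl's built-in Kustomize):
```bash
# Render the staging overlay and list the image references it produces
kubectl kustomize scaleway/manifests/overlays/staging/ | grep 'image:'
# Expect entries like rg.fr-par.scw.cloud/eveai-staging/josakola/<service>:staging
```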
##### Deploy backend workers
```bash
kubectl apply -k scaleway/manifests/base/applications/backend/
kubectl -n eveai-staging get deploy | egrep 'eveai-(workers|chat-workers|entitlements)'
# Optional: quick logs
kubectl -n eveai-staging logs deploy/eveai-workers --tail=100 || true
kubectl -n eveai-staging logs deploy/eveai-chat-workers --tail=100 || true
kubectl -n eveai-staging logs deploy/eveai-entitlements --tail=100 || true
```
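Optionally wait until the workers report ready before moving on (a small sketch over the deployment names listed above):
```bash
# Block until each backend deployment has rolled out, or time out after 3 minutes
for d in eveai-workers eveai-chat-workers eveai-entitlements; do
  kubectl -n eveai-staging rollout status deploy/"$d" --timeout=180s
done
```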
##### Deploy frontend services
```bash
kubectl apply -k scaleway/manifests/base/applications/frontend/
kubectl -n eveai-staging get deploy,svc | egrep 'eveai-(app|api|chat-client)'
```
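To check that the frontend Services actually have pods behind them, the endpoints can be listed as well (sketch, reusing the service names above):
```bash
# Services without ready pods show <none> in the ENDPOINTS column
kubectl -n eveai-staging get endpoints | egrep 'eveai-(app|api|chat-client)'
```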
##### Verify Ingress routes (Ingress managed separately)
Ingress is intentionally not managed by the staging Kustomize overlay. Apply or update it manually with the existing manifest, following the cluster-install.md guide:
```bash
kubectl apply -f scaleway/manifests/base/networking/ingress-https.yaml
kubectl -n eveai-staging describe ingress eveai-staging-ingress
```
Then verify the routes:
```bash
curl -k https://evie-staging.askeveai.com/verify/health
curl -k https://evie-staging.askeveai.com/admin/healthz/ready
curl -k https://evie-staging.askeveai.com/api/healthz/ready
curl -k https://evie-staging.askeveai.com/client/healthz/ready
```
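A quick status sweep over the same four routes (sketch; -k mirrors the commands above and skips certificate verification):
```bash
# Print the HTTP status code per health route
for p in /verify/health /admin/healthz/ready /api/healthz/ready /client/healthz/ready; do
  printf '%s -> ' "$p"
  curl -sk -o /dev/null -w '%{http_code}\n' "https://evie-staging.askeveai.com$p"
done
```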
#### Updating the staging deployment
- If you have pushed the images again under the same tag (e.g. :staging) and your staging pods use imagePullPolicy: Always (as described in this guide), you only need to trigger a rollout so the pods restart and pull the latest image.
- Do this in the correct namespace (most likely eveai-staging) with kubectl rollout restart.
##### Fastest way (all deployments at once)
```bash
# Staging namespace (adjust if you use a different one)
kubectl -n eveai-staging rollout restart deployment
# Optional: follow the status until everything is ready
kubectl -n eveai-staging rollout status deploy --all
# Check which image each pod is running
kubectl -n eveai-staging get pods -o=jsonpath='{range .items[*]}{@.metadata.name}{"\t"}{range .spec.containers[*]}{@.image}{" "}{end}{"\n"}{end}'
```
This restarts all Deployments in the namespace. Because imagePullPolicy: Always is set, Kubernetes pulls the latest image for the tag in use (e.g. :staging).
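If in doubt, the pull policy can be listed per deployment (sketch; plain jsonpath, no extra tooling):
```bash
# Show imagePullPolicy for every container in every Deployment
kubectl -n eveai-staging get deploy \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{range .spec.template.spec.containers[*]}{.imagePullPolicy}{" "}{end}{"\n"}{end}'
```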
##### Restart specific services
To restart only certain services:
```bash
kubectl -n eveai-staging rollout restart deployment/eveai-app
kubectl -n eveai-staging rollout restart deployment/eveai-api
kubectl -n eveai-staging rollout restart deployment/eveai-chat-client
kubectl -n eveai-staging rollout restart deployment/eveai-workers
kubectl -n eveai-staging rollout restart deployment/eveai-chat-workers
kubectl -n eveai-staging rollout restart deployment/eveai-entitlements
kubectl -n eveai-staging rollout status deployment/eveai-app
```
##### Alternative: (re)apply the manifests
The guide places the manifests in scaleway/manifests and describes the use of Kustomize overlays. You can also simply apply them again:
```bash
# Overlay that rewrites images to the Scaleway registry and adds imagePullSecrets
kubectl apply -k scaleway/manifests/overlays/staging/
# Backend and frontend (if you apply the base separately)
kubectl apply -k scaleway/manifests/base/applications/backend/
kubectl apply -k scaleway/manifests/base/applications/frontend/
```
Note: an apply by itself does not always trigger a rollout when there is no actual spec change. Combine it with a rollout restart as described above if needed.
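To see up front whether a re-apply would change anything, the rendered overlay can be diffed against the cluster (sketch):
```bash
# Empty output (exit code 0) means apply would change nothing; use a rollout restart instead
kubectl diff -k scaleway/manifests/overlays/staging/
```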
##### If you work with version tags (production-like)
- If you use a fixed, versioned tag (e.g. :v1.2.3) with imagePullPolicy: IfNotPresent rather than a channel tag (:staging/:production), you must either:
  - update the tag in your manifest/overlay to the new version and apply again, or
  - force a new ReplicaSet with a one-off set-image:
```bash
kubectl -n eveai-staging set image deploy/eveai-api eveai-api=rg.fr-par.scw.cloud/<namespace>/josakola/eveai-api:v1.2.4
kubectl -n eveai-staging rollout status deploy/eveai-api
```
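With versioned tags a bad release can also be reverted to the previous ReplicaSet (sketch, using eveai-api as the example deployment):
```bash
# Inspect the rollout history and roll back one revision if the new version misbehaves
kubectl -n eveai-staging rollout history deploy/eveai-api
kubectl -n eveai-staging rollout undo deploy/eveai-api
```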
##### Troubleshooting
- Check whether the registry pull secret is present (as described in this guide):
```bash
kubectl apply -f scaleway/manifests/base/secrets/scaleway-registry-secret.yaml
kubectl -n eveai-staging get secret scaleway-registry-cred
```
- Inspect events/logs if pods do not come up:
```bash
kubectl get events -n eveai-staging --sort-by=.lastTimestamp
kubectl -n eveai-staging describe pod <pod-name>
kubectl -n eveai-staging logs deploy/eveai-api --tail=200
```
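- If pods sit in ImagePullBackOff, also confirm that they actually reference the pull secret (sketch; field path per the Pod spec):
```bash
# Print the imagePullSecrets attached to each pod
kubectl -n eveai-staging get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.imagePullSecrets[*].name}{"\n"}{end}'
```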
## Verification and Testing
@@ -570,40 +666,7 @@ curl https://evie-staging.askeveai.com/verify/
- Change A-record to CNAME pointing to CDN endpoint
- Or update A-record to CDN IP
## Key Differences from Old Setup
### Advantages of New Modular Approach
1. **Modular Structure**: Separate infrastructure from applications
2. **Environment Management**: Easy staging/production separation
3. **HTTPS-First**: TLS certificates managed automatically
4. **Monitoring Integration**: Prometheus/Grafana via Helm charts
5. **Scaleway Integration**: Managed services secrets support
6. **Maintainability**: Clear separation of concerns
### Migration Benefits
- **Organized**: Base configurations with environment overlays
- **Scalable**: Easy to add new services or environments
- **Secure**: HTTPS-only from deployment start
- **Observable**: Built-in monitoring stack
- **Automated**: Less manual intervention required
## Troubleshooting
### Common Issues
```bash
# Certificate not issued
kubectl describe certificate evie-staging-tls -n eveai-staging
kubectl logs -n cert-manager deployment/cert-manager
# Ingress not accessible
kubectl describe ingress eveai-staging-ingress -n eveai-staging
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller
# Check events for issues
kubectl get events -n eveai-staging --sort-by='.lastTimestamp'
```
For detailed troubleshooting, refer to the main deployment guide: `documentation/scaleway-deployment-guide.md`

View File

@@ -1,6 +1,6 @@
import logging
import os
from flask import Flask, jsonify
from flask import Flask, jsonify, url_for
from flask_security import SQLAlchemyUserDatastore
from flask_security.signals import user_authenticated
from werkzeug.middleware.proxy_fix import ProxyFix
@@ -87,6 +87,18 @@ def create_app(config_file=None):
    # Register Cache Handlers
    register_cache_handlers(app)

    # Custom url_for function for templates
    @app.context_processor
    def override_url_for():
        # Make templates use the wrapper below instead of Flask's default url_for
        return dict(url_for=_url_for)

    def _url_for(endpoint, **values):
        # Serve static files from STATIC_URL (the external static storage) when it is configured
        static_url = app.config.get('STATIC_URL')
        if endpoint == 'static' and static_url:
            filename = values.get('filename', '')
            return f"{static_url}/{filename}"
        return url_for(endpoint, **values)

    # Debugging settings
    if app.config['DEBUG'] is True:
        app.logger.setLevel(logging.DEBUG)

View File

@@ -1,7 +1,7 @@
import logging
import os
from flask import Flask, jsonify, request
from flask import Flask, jsonify, request, url_for
from werkzeug.middleware.proxy_fix import ProxyFix
import logging.config
@@ -63,6 +63,18 @@ def create_app(config_file=None):
    # Register Cache Handlers
    register_cache_handlers(app)

    # Custom url_for function for templates
    @app.context_processor
    def override_url_for():
        # Make templates use the wrapper below instead of Flask's default url_for
        return dict(url_for=_url_for)

    def _url_for(endpoint, **values):
        # Serve static files from STATIC_URL (the external static storage) when it is configured
        static_url = app.config.get('STATIC_URL')
        if endpoint == 'static' and static_url:
            filename = values.get('filename', '')
            return f"{static_url}/{filename}"
        return url_for(endpoint, **values)

    # Debugging settings
    if app.config['DEBUG'] is True:
        app.logger.setLevel(logging.DEBUG)