142 Commits

Author SHA1 Message Date
92edbeacb2 .gitignore adaptation for Linux 2025-12-29 09:44:09 +01:00
Josako
30bfecc135 Merge branch 'feature/Convert_Git_Flow_Process_to_own_scripts' into develop 2025-12-11 09:57:58 +01:00
Josako
2c8347c91b - Writing custom git flow scripts - finishing up without extensive testing 2025-12-11 09:47:19 +01:00
Josako
fe9fc047ff - Writing custom git flow scripts - a start 2025-12-11 09:27:21 +01:00
Josako
0f8bda0aef - Ensure users cannot log in when their valid_to date has expired. 2025-12-08 16:54:59 +01:00
Josako
bab9e89117 Merge tag 'v3.1.36-beta' into develop
Tagging version v3.1.36-beta v3.1.36-beta
2025-12-02 13:08:59 +01:00
Josako
e25698d6cf Merge branch 'release/v3.1.36-beta' 2025-12-02 13:08:59 +01:00
Josako
e30fe7807c - Release Notes for 3.1.36-beta 2025-12-02 13:08:39 +01:00
Josako
94b805e0eb - TRA-89 - Problem solved where connection could get lost in sync between client and backend
- TRA-98 - End user could continue without accepting dpa & terms
- TRA-96 - Multiple-choice questions in mobile client not scrolling -> Solved by introducing new client layout
- TRA-101 - DPA-link was not working
- TRA-102 - Wrong responses when looking for affirmative answers.
2025-12-02 12:15:50 +01:00
Josako
9b86a220b1 - Introduction of Shells for Mobile client and Desktop client. Extensible with additional shells in the future 2025-12-01 14:07:16 +01:00
Josako
5a5d6b03af Merge branch 'feature/Introduce_tabs_in_mobile_chat_client' into develop 2025-11-28 10:04:49 +01:00
Josako
b1d8c9a17d - Small changes to allow for keyboard input, not finished 2025-11-28 10:04:31 +01:00
Josako
14273b8a70 - Full implementation of tab bar next to logo in mobile client
- Customisation option in Tenant Make
- Splitting all controls in the newly created tabs
2025-11-27 11:32:46 +01:00
Josako
5e25216b66 Merge branch 'release/v3.1.26-beta' 2025-11-26 11:40:01 +01:00
Josako
d68dfde52a Merge tag 'v3.1.26-beta' into develop
Tagging version v3.1.26-beta v3.1.26-beta
2025-11-26 11:40:01 +01:00
Josako
4bc2292c4c - Release Notes for 3.1.26-beta 2025-11-26 11:39:05 +01:00
Josako
f10bb6f395 - TRA-99 Solved. Unable to create a new Tenant Make
- Generic improvement of initialisation of Dynamic Forms, ensuring correct form processing
2025-11-26 11:31:25 +01:00
Josako
0d3c3949de - Wrap client in @vueuse/core to abstract mobile client dimensions 2025-11-26 08:01:33 +01:00
Josako
25adb4213b Merge branch 'release/v3.1.24-beta' 2025-11-25 13:23:24 +01:00
Josako
73125887a3 Merge tag 'v3.1.24-beta' into develop
Tagging version v3.1.24-beta v3.1.24-beta
2025-11-25 13:23:24 +01:00
Josako
c29ed37c09 - Release notes for 3.1.24-beta 2025-11-25 13:22:21 +01:00
Josako
9b1f9e8a3b Merge branch 'release/v3.1.23-beta' 2025-11-25 13:16:11 +01:00
Josako
e167df3032 Merge tag 'v3.1.23-beta' into develop
Tagging version v3.1.23-beta v3.1.23-beta
2025-11-25 13:16:11 +01:00
Josako
20fb2eee70 - Correction of behaviour where boolean fields were not properly initialised
- Ensure that primary and financial contact fields are properly saved
2025-11-25 13:15:11 +01:00
Josako
3815399a7e - Specialist Tuning now in a separate editor
- typeBadge formatter completed
2025-11-24 15:54:47 +01:00
Josako
f2bd90e6ae Merge branch 'release/v3.1.16-beta' 2025-11-13 10:25:09 +01:00
Josako
95c8282eb8 Merge tag 'v3.1.16-beta' into develop
Tagging version v3.1.16-beta v3.1.16-beta
2025-11-13 10:25:09 +01:00
Josako
04c9d8cf98 - Release notes v3.1.16-beta 2025-11-13 10:24:51 +01:00
Josako
03f6ef4408 Merge branch 'release/v3.1.15-beta' 2025-11-13 10:20:24 +01:00
Josako
e8bb66c2c2 Merge tag 'v3.1.15-beta' into develop
Tagging version v3.1.15-beta v3.1.15-beta
2025-11-13 10:20:24 +01:00
Josako
5dd711bcd2 - Add human_message_inactive_text_color 2025-11-13 10:19:39 +01:00
Josako
ee13de7fde - Release Notes 3.1.15-beta 2025-10-29 17:10:43 +01:00
Josako
82ca6b537a - small bugfix on old form being shown when no form was sent back. 2025-10-29 17:06:12 +01:00
Josako
af37aa7253 - Release notes for 3.1.14-beta 2025-10-28 17:43:50 +01:00
Josako
1748aebd38 Merge branch 'feature/Adding_Additional_configuration_and_capabilities_to_RAG_Agent' into develop 2025-10-28 17:35:55 +01:00
Josako
d6041ebb27 - Specialist Editor Change (all components in same overview), modal editors to allow for more complex configuration of Agents, Tasks and Tools
- Strengthening dynamic forms
2025-10-28 17:35:36 +01:00
Josako
b3ee2f7ce9 Bug Fix where - in exceptional cases - a connection without correct search path could be used (out of the connection pool). 2025-10-24 11:42:50 +02:00
Josako
c523250ccb - Forgotten migration? 2025-10-24 10:18:24 +02:00
Josako
a43825f5f0 - Ensure correct editing of additional Agent configuration possibilities when editing a specialist. 2025-10-24 10:17:20 +02:00
Josako
fb261ca0b9 Merge branch 'feature/Add_User_Actions_to_Specialist_Interaction' into develop 2025-10-23 10:56:33 +02:00
Josako
3ca2e0a3a9 Merge branch 'bugfix/Improve_and_Correct_Migration_Process' into develop 2025-10-23 10:18:56 +02:00
Josako
3aa2158a17 - Updated manifest.json 2025-10-23 10:18:31 +02:00
Josako
2bc5832db6 - Temporarily remove PartnerRagRetriever model (as it is not used yet)
- Ensure errors are being logged when migrating tenants
- Ensure migrations directory is copied into eveai_app
2025-10-23 10:17:57 +02:00
Josako
1720ddfa11 - cleanup of old TASKs, AGENTs and SPECIALISTs
- Add additional configuration options to agent (temperature and model choice)
- Define new PROOFREADING Agents and Tasks
2025-10-23 09:10:52 +02:00
Josako
59febb7fbb Merge branch 'bugfix/TRA-86_Sometimes_client_becomes_unresponsive' into develop
# Conflicts:
#	config/static-manifest/manifest.json
2025-10-22 14:03:08 +02:00
Josako
4ec1099925 - Changes to PROFESSIONAL_CONTACT_FORM
- Introducing first user action SHARE_PROFESSIONAL_CONTACT_FORM
2025-10-22 09:57:09 +02:00
Josako
8d1a8d9645 Merge branch 'feature/Improvement_of_RAG_Specialist' into develop 2025-10-21 14:00:20 +02:00
Josako
1d79a19981 - Refinement of improved RAG_SPECIALIST
- Changed label of RetrieverType in Retriever form
2025-10-21 11:00:26 +02:00
Josako
aab766fe5e - New version of RAG_SPECIALIST and RAG_AGENT, including definition of conversation_purpose and response_depth. 2025-10-20 15:37:36 +02:00
Josako
05241ecdea Merge branch 'release/v3.1.13-beta' 2025-10-17 17:25:30 +02:00
Josako
451f95fbc1 Merge tag 'v3.1.13-beta' into develop
Tagging version v3.1.13-beta v3.1.13-beta
2025-10-17 17:25:30 +02:00
Josako
842429a659 - Release notes for v3.1.13-beta 2025-10-17 16:54:37 +02:00
Josako
225d494e15 Merge branch 'feature/Consent_for_DPA_and_T_C' into develop 2025-10-17 16:41:00 +02:00
Josako
5501061dd1 - Show markdown when signing a document
- Introduce consent history
- Centralise consent and content services and config
2025-10-17 14:06:51 +02:00
Josako
eeb76d57b7 - Consent giving UI introduced
- Possibility to view the document version the consent is given to
- Blocking functionality if no valid consent
2025-10-15 18:35:28 +02:00
Josako
3ea3a06de6 - Check for consent before allowing users to perform activities in the administrative app. 2025-10-14 16:20:30 +02:00
Josako
37819cd7e5 - Corrected reset password and confirm email address by adapting the prefixed_url_for to use a config setting
- Adaptation of DPA and T&Cs
- Refer to the document as a DPA, not a privacy statement
- Start enforcing a signed DPA and T&Cs
- Adaptation of eveai_chat_client to ensure we retrieve correct DPA & T&Cs
2025-10-13 14:28:09 +02:00
Josako
a798217091 Merge branch 'release/v3.1.12-beta' 2025-10-03 11:56:51 +02:00
Josako
83272a4e2a Merge tag 'v3.1.12-beta' into develop
Tagging version v3.1.12-beta v3.1.12-beta
2025-10-03 11:56:51 +02:00
Josako
b66e2e99ed - Changelog for v3.1.12-beta 2025-10-03 11:56:36 +02:00
Josako
aeee22b305 - Allowing additional_positive_answer in KOQuestions asset & selection specialist interaction. 2025-10-03 11:50:59 +02:00
Josako
5f387dcef8 - nginx build error solved 2025-10-03 09:54:25 +02:00
Josako
b499add891 Merge branch 'release/v3.1.11-beta' 2025-10-03 09:50:52 +02:00
Josako
2f815616b1 Merge tag 'v3.1.11-beta' into develop
Tagging version v3.1.11-beta v3.1.11-beta
2025-10-03 09:50:52 +02:00
Josako
f23214bb6d - Changelog for v3.1.11-beta 2025-10-03 09:49:52 +02:00
Josako
6df9aa9c7e - Added some extra space for form rendering. 2025-10-03 09:43:03 +02:00
Josako
5465dae52f - Optimisation and streamlining of messages in ExecutionProgressTracker (ept)
- Adaptation of ProgressTracker to handle these optimised messages
- Hardening SSE-streaming in eveai_chat_client
2025-10-03 08:58:44 +02:00
Josako
79a3f94ac2 - improvement of marked editor in eveai_chat_client by modernising options approach
- removal of old and obsolete HTML files
- change of package.json to point to a specific version of marked
2025-10-03 07:59:43 +02:00
Josako
06586a1312 Merge branch 'release/v3.1.7-beta' 2025-09-30 17:52:20 +02:00
Josako
7b0e3cee7f Merge tag 'v3.1.7-beta' into develop
Tagging version v3.1.7-beta v3.1.7-beta
2025-09-30 17:52:20 +02:00
Josako
7bef4e69df - Changelog update for v3.1.7-beta 2025-09-30 17:51:36 +02:00
Josako
a3e18cb4db - Maximum height for AI messages in ChatInput now applies to both desktop and mobile devices.
- Corrected marked component in SideBarExplanation.vue
- AI messages now support markdown. Markdown rendering is defined as a central utility.
2025-09-30 17:38:28 +02:00
Josako
471b8dd8c3 Merge branch 'feature/Activate_Pushgateway_Scraping' into develop 2025-09-30 15:23:59 +02:00
Josako
030d1b0e90 - cleaning script for monitoring namespace 2025-09-30 14:58:08 +02:00
Josako
fa452e4934 - Change manifests for Prometheus installation
- Change instructions for deploying Prometheus stack and Pushgateway
- Additional grouping to pushgateway to avoid overwriting of metrics in different pods / processes
- Bugfix to ensure correct retrieval of css and js files in eveai_app
2025-09-30 14:56:08 +02:00
Josako
e24e7265b9 Merge branch 'release/v3.1.3-beta' 2025-09-25 17:38:48 +02:00
Josako
a76f87ba75 Merge tag 'v3.1.3-beta' into develop
Tagging version v3.1.3-beta v3.1.3-beta
2025-09-25 17:38:48 +02:00
Josako
c6fc8ca09a Release notes for 3.1.3-beta 2025-09-25 17:38:30 +02:00
Josako
16ce59ae98 - Introduce cache busting (to circumvent aggressive caching on iOS, but useful in other contexts as well)
- Change the build process to allow cache busting
- Optimisations to the build process
- Several UI improvements geared towards the mobile experience
2025-09-25 17:28:01 +02:00
Josako
cc47ce2d32 - Adaptation of the static url to be used.
- Solved problem of using pushgateway in the k8s cluster
2025-09-23 16:44:08 +02:00
Josako
b1e9fb71cb Merge branch 'release/v3.1.2-beta' 2025-09-23 10:14:09 +02:00
Josako
a57662db3f Merge tag 'v3.1.2-beta' into develop
Tagging version v3.1.2-beta v3.1.2-beta
2025-09-23 10:14:09 +02:00
Josako
66433f19b3 - Adaptation of push_to_scaleway.sh script 2025-09-23 10:13:52 +02:00
Josako
e7397a6d0d - Changelog update for 3.1.2-beta 2025-09-23 07:00:26 +02:00
Josako
d097451d42 Merge branch 'bugfix/Mobile_Chat_Client_Improvements' into develop 2025-09-23 06:57:06 +02:00
Josako
44e5dd5d02 - Ensuring good display of the eveai logo in the mobile version. 2025-09-23 06:55:31 +02:00
Josako
3b23be0ea4 - Ensure long messages do not take all available space, rendering the UI unusable. We now have limits built in in the chat-input as well as in the message history. 2025-09-22 22:41:43 +02:00
Josako
61ae9c3174 - Adaptation of the form message layout so that labels are shown above their values instead of to the left, allowing decent rendering on mobile devices
- Refactoring of message-content CSS
2025-09-22 22:24:46 +02:00
Josako
b6512b2d8c - Adjusted chat-input layout. Character counter also removed on desktop. Scrollbar only visible when needed. More available space in the mobile client. Smaller corner radius.
- Changed height-calculation logic for chat-input and message history so the mobile client also functions correctly.
2025-09-22 16:54:39 +02:00
Josako
0cd12a8491 Merge branch 'release/3.1.1-alfa' 2025-09-22 14:57:24 +02:00
Josako
ae36791ffe Merge tag '3.1.1-alfa' into develop
Tagging version 3.1.1-alfa 3.1.1-alfa
2025-09-22 14:57:24 +02:00
Josako
53bfc6bb23 A few final small bugfixes 2025-09-22 14:56:48 +02:00
Josako
2afee41c2a Release notes for 3.1.1-alfa 2025-09-16 11:25:58 +02:00
Josako
79b1fef5b6 - TRA-77 - Scroll behaviour in the Message History adapted to support both scrolling by the end user, and ensuring the last message is shown when new messages are added, or resizing is done. 2025-09-16 11:14:09 +02:00
Josako
2b04692fab - TRA-76 - Send Button color changes implemented
- TRA-72 - Translation of privacy statement and T&C
- TRA-73 - Strange characters in Tenant Make Name
- Addition of meta information in Specialist Form Fields
2025-09-15 17:57:13 +02:00
Josako
541d3862e6 Merge branch 'release/3.1.0-alfa' 2025-09-12 10:39:35 +02:00
Josako
43fd4ce9c1 Merge tag '3.1.0-alfa' into develop
Tagging version 3.1.0-alfa 3.1.0-alfa
2025-09-12 10:39:35 +02:00
Josako
14ba53e26b - adaptation of changelog for 3.1.0-alfa 2025-09-12 10:39:08 +02:00
Josako
4ab8b2a714 Merge branch 'feature/Scaleway_k8s_Integration' into develop 2025-09-12 10:26:15 +02:00
Josako
42cb1de0fd - eveai_chat_client updated to retrieve static files from the correct (bunny.net) location when a STATIC_URL is defined.
- Defined locations for crewai crew memory. This failed in k8s.
- Redis connection for pub/sub in ExecutionProgressTracker adapted to conform to TLS-enabled connections
2025-09-12 10:18:43 +02:00
Josako
a325fa5084 - error handling now uses a more comprehensive error communication system. 2025-09-11 14:46:28 +02:00
Josako
7cb19ca21e - Migration of the test environment to the new reality 2025-09-10 14:59:07 +02:00
Josako
6ccba7d1e3 - Add test environment to __init__.py for all eveai services
- Add postgresql certificate to secrets for secure communication in staging and production environments
- Adapt for TLS communication with PostgreSQL
- Adapt tasks to handle invalid connections from the connection pool
- Migrate to psycopg3 for connection to PostgreSQL
2025-09-10 11:40:38 +02:00
Josako
6fbaff45a8 - Addition of FLASK_ENV setting for all eveai services
- Addition of flower to the monitoring stack
2025-09-09 21:07:10 +02:00
Josako
10ca344c84 - Adapted chat client to use correct apiPrefix. 2025-09-09 09:25:14 +02:00
Josako
a9bbd1f466 - Ensure prefix is passed for all services
- Add eveai-tem secret (Scaleway Transactional Email) to allow sending emails
- Adapted security URLs
- Certificate problem in regions solved
- Redis insight added to tools in k8s
- Introduced new way of connection pooling for Redis
- TRA-79 - internal server error when registering a catalog
2025-09-09 08:45:45 +02:00
Josako
804486664b - cleanup healthz logging in before_request
- Security and csrf added to eveai_ops. Otherwise the initialize_data.py script cannot initialize the Super User...
2025-09-07 16:19:53 +02:00
Josako
36575c17a8 - further healthz improvements 2025-09-07 14:55:01 +02:00
Josako
575bfa259e - further healthz improvements 2025-09-07 14:45:47 +02:00
Josako
362b2fe753 - healthz improvements 2025-09-07 08:28:02 +02:00
Josako
5c20e6c1f9 - eveai_app adapted to handle removal of complex rewrite rules in nginx.conf, which cannot be achieved in Ingress 2025-09-06 16:53:51 +02:00
Josako
b812aedb81 - Filtering healthz from logs in Scaleway cockpit
- Removing startup functionality from eveai_app (race conditions possible!)
- Adapting blueprints to point to admin (removed from Ingress)
2025-09-05 16:13:48 +02:00
Josako
d6ea3ba46c - Correcting SSL Certificate error in celery @startup 2025-09-05 14:03:07 +02:00
Josako
a6edd5c663 - Trying to solve database initialisation problem (no tables in tenant schema). 2025-09-05 11:11:08 +02:00
Josako
6115cc7e13 - Set static url (for staging and production) to (bunny.net) static storage 2025-09-05 07:55:57 +02:00
Josako
54a9641440 - TLS Refactoring 2025-09-04 15:22:45 +02:00
Josako
af8b5f54cd - Definition and Improvements to job-system
- Definition of k8s pods for application services
2025-09-04 11:49:19 +02:00
Josako
2a0c92b064 - Definition of extra eveai_ops service to run (db) jobs
- Definition of manifests for all jobs
- Definition of manifests for all eveai services
2025-09-03 15:20:54 +02:00
Josako
898bb32318 - Added PgAdmin4 tool to the cluster setup. 2025-09-02 16:42:21 +02:00
Josako
b0e1ad6e03 - removed obsolete run-scripts and start-scripts 2025-09-02 10:27:10 +02:00
Josako
84afc0b2ee - Debugging of redis setup issues
- Debugging of celery startup
- Moved flower to a standard image instead of our own build
2025-09-02 10:25:17 +02:00
Josako
593dd438aa - New build and startup procedures for all services, compatible with docker, podman, and k8s 2025-09-01 19:58:28 +02:00
Josako
35f58f0c57 - Adaptations to support secure Redis Access
- Redis Connection Pooling set up for Celery, dogpile caching and flask session
2025-08-31 17:43:30 +02:00
Josako
25ab9ccf23 - Staging cluster working up to phase 6 of cluster-install.md, including HTTPS, Bunny, and the verification service. 2025-08-29 17:50:14 +02:00
Josako
2a4c9d7b00 - Preliminary (working) setup up to and including the verification service, bunny integration, ... 2025-08-28 03:36:43 +02:00
Josako
e6c3c24bd8 - In Scaleway, we only have one bucket, and store information for each tenant in separate folders
- Added staging configuration to scaleway
2025-08-22 10:47:03 +02:00
Josako
481157fb31 Merge branch 'release/3.0.1-beta' 2025-08-21 15:25:07 +02:00
Josako
376ad328ca Merge tag '3.0.1-beta' into develop
Tagging version 3.0.1-beta 3.0.1-beta
2025-08-21 15:25:07 +02:00
Josako
2bb9d4b0be - Update of Changelog for 3.0.1-beta 2025-08-21 15:24:28 +02:00
Josako
6eae0ab1a3 - bug TRA-69 solution provided - potential problem detected in Role Definition Specialist not returning plain text. But the AI may still generate an incorrect answer (chances are lower). 2025-08-21 14:26:52 +02:00
Josako
4395d2e407 - bug TRA-70 solved - MiB convertor was not applied in edit_asset. 2025-08-21 08:49:12 +02:00
Josako
da61f5f9ec - bug TRA-68 solved - bug in javascript code did not pass changed json content. 2025-08-21 08:30:14 +02:00
Josako
53283b6687 - bug TRA-67 solved by re-introducing a 2-step process. Dynamic Attributes cannot be added to a non-existing, newly created object, it seems. 2025-08-21 07:38:25 +02:00
Josako
5d715a958c Merge branch 'feature/refinement_selection_specialist' into develop 2025-08-21 06:40:29 +02:00
Josako
0f969972d6 Merge branch 'feature/k8s_migration' into develop 2025-08-21 06:39:25 +02:00
Josako
4c00d33bc3 - Check-in before we start working on the bugfix.
- Introduction of static-file serving with standard nginx (not our own docker nginx image), and an rsync service to synchronise static files. Not yet fully finished!
2025-08-21 05:48:03 +02:00
Josako
9c63ecb17f - Metrics service added
- Application services start up, except eveai_chat_client
- Connectivity to admin / eveai_app not functional
2025-08-20 11:49:19 +02:00
Josako
d6a2635e50 - Cluster setup works
- Redis and minio start up
- Working on starting the actual apps ... not working yet.
2025-08-19 18:08:59 +02:00
Josako
84a9334c80 - Functional control plane 2025-08-18 11:44:23 +02:00
Josako
066f579294 - changes toward a fully functional k8s cluster. First running version of cluster, addition of services works, additional changes to app required. 2025-08-14 16:58:09 +02:00
Josako
ebf92b0474 - Finalised podman migration
- Some minor feature requests in the selection specialist
2025-08-13 07:39:21 +02:00
Josako
7e35549262 Migration to podman. Dev is OK, certificate problem with test 2025-08-12 06:33:17 +02:00
366 changed files with 29769 additions and 5288 deletions

.gitignore vendored

@@ -55,3 +55,8 @@ scripts/__pycache__/run_eveai_app.cpython-312.pyc
/nginx/node_modules/
/nginx/.parcel-cache/
/nginx/static/
/docker/build_logs/
/content/.Ulysses-Group.plist
/content/.Ulysses-Settings.plist
/.python-version
/q

check_running_services.sh Normal file

@@ -0,0 +1,32 @@
#!/bin/bash
# Diagnostic script to check what services are running
echo "=== KIND CLUSTER STATUS ==="
echo "Namespaces:"
kubectl get namespaces | grep eveai
echo -e "\nPods in eveai-dev:"
kubectl get pods -n eveai-dev
echo -e "\nServices in eveai-dev:"
kubectl get services -n eveai-dev
echo -e "\n=== TEST CONTAINERS STATUS ==="
echo "Running test containers:"
podman ps | grep eveai_test
echo -e "\n=== PORT ANALYSIS ==="
echo "What's listening on port 3080:"
lsof -i :3080 2>/dev/null || echo "Nothing found"
echo -e "\nWhat's listening on port 4080:"
lsof -i :4080 2>/dev/null || echo "Nothing found"
echo -e "\n=== SOLUTION ==="
echo "The application you see is from TEST CONTAINERS (6 days old),"
echo "NOT from the Kind cluster (3 minutes old)."
echo ""
echo "To test Kind cluster:"
echo "1. Stop test containers: podman stop eveai_test_nginx_1 eveai_test_eveai_app_1"
echo "2. Deploy Kind services: kup-all-structured"
echo "3. Restart test containers if needed"


@@ -122,6 +122,8 @@ class EveAIAgent(db.Model):
role = db.Column(db.Text, nullable=True)
goal = db.Column(db.Text, nullable=True)
backstory = db.Column(db.Text, nullable=True)
temperature = db.Column(db.Float, nullable=True)
llm_model = db.Column(db.String(50), nullable=True)
tuning = db.Column(db.Boolean, nullable=True, default=False)
configuration = db.Column(JSONB, nullable=True)
arguments = db.Column(JSONB, nullable=True)
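The two new columns (temperature, llm_model) also need a schema migration; the later commit "Forgotten migration?" suggests one was added separately. A minimal sketch of what such a migration could look like, assuming Alembic and a table named eveai_agent (both assumptions, not confirmed by this diff):

# Hypothetical Alembic migration for the new EveAIAgent columns.
# The table name 'eveai_agent' is an assumption.
from alembic import op
import sqlalchemy as sa

def upgrade():
    op.add_column('eveai_agent', sa.Column('temperature', sa.Float(), nullable=True))
    op.add_column('eveai_agent', sa.Column('llm_model', sa.String(length=50), nullable=True))

def downgrade():
    op.drop_column('eveai_agent', 'llm_model')
    op.drop_column('eveai_agent', 'temperature')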


@@ -1,4 +1,5 @@
from datetime import date
from enum import Enum
from common.extensions import db
from flask_security import UserMixin, RoleMixin
@@ -121,7 +122,6 @@ class User(db.Model, UserMixin):
def has_roles(self, *args):
return any(role.name in args for role in self.roles)
class TenantDomain(db.Model):
__bind_key__ = 'public'
__table_args__ = {'schema': 'public'}
@@ -311,6 +311,49 @@ class PartnerTenant(db.Model):
updated_by = db.Column(db.Integer, db.ForeignKey('public.user.id'), nullable=True)
class TenantConsent(db.Model):
__bind_key__ = 'public'
__table_args__ = {'schema': 'public'}
id = db.Column(db.Integer, primary_key=True)
tenant_id = db.Column(db.Integer, db.ForeignKey('public.tenant.id'), nullable=False)
partner_id = db.Column(db.Integer, db.ForeignKey('public.partner.id'), nullable=True)
partner_service_id = db.Column(db.Integer, db.ForeignKey('public.partner_service.id'), nullable=True)
user_id = db.Column(db.Integer, db.ForeignKey('public.user.id'), nullable=False)
consent_type = db.Column(db.String(50), nullable=False)
consent_date = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
consent_version = db.Column(db.String(20), nullable=False, default="1.0.0")
consent_data = db.Column(db.JSON, nullable=False)
# Tracking
created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
created_by = db.Column(db.Integer, db.ForeignKey('public.user.id'), nullable=True)
updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
updated_by = db.Column(db.Integer, db.ForeignKey('public.user.id'), nullable=True)
class ConsentVersion(db.Model):
__bind_key__ = 'public'
__table_args__ = {'schema': 'public'}
id = db.Column(db.Integer, primary_key=True)
consent_type = db.Column(db.String(50), nullable=False)
consent_version = db.Column(db.String(20), nullable=False)
consent_valid_from = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
consent_valid_to = db.Column(db.DateTime, nullable=True)
# Tracking
created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
created_by = db.Column(db.Integer, db.ForeignKey('public.user.id'), nullable=True)
updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
updated_by = db.Column(db.Integer, db.ForeignKey('public.user.id'), nullable=True)
class ConsentStatus(str, Enum):
CONSENTED = 'CONSENTED'
NOT_CONSENTED = 'NOT_CONSENTED'
RENEWAL_REQUIRED = 'RENEWAL_REQUIRED'
CONSENT_EXPIRED = 'CONSENT_EXPIRED'
UNKNOWN_CONSENT_VERSION = 'UNKNOWN_CONSENT_VERSION'
class SpecialistMagicLinkTenant(db.Model):
__bind_key__ = 'public'
__table_args__ = {'schema': 'public'}
@@ -343,14 +386,14 @@ class TranslationCache(db.Model):
last_used_at = db.Column(db.DateTime, nullable=True)
class PartnerRAGRetriever(db.Model):
__bind_key__ = 'public'
__table_args__ = (
db.PrimaryKeyConstraint('tenant_id', 'retriever_id'),
db.UniqueConstraint('partner_id', 'tenant_id', 'retriever_id'),
{'schema': 'public'},
)
partner_id = db.Column(db.Integer, db.ForeignKey('public.partner.id'), nullable=False)
tenant_id = db.Column(db.Integer, db.ForeignKey('public.tenant.id'), nullable=False)
retriever_id = db.Column(db.Integer, nullable=False)
# class PartnerRAGRetriever(db.Model):
# __bind_key__ = 'public'
# __table_args__ = (
# db.PrimaryKeyConstraint('tenant_id', 'retriever_id'),
# db.UniqueConstraint('partner_id', 'tenant_id', 'retriever_id'),
# {'schema': 'public'},
# )
#
# partner_id = db.Column(db.Integer, db.ForeignKey('public.partner.id'), nullable=False)
# tenant_id = db.Column(db.Integer, db.ForeignKey('public.tenant.id'), nullable=False)
# retriever_id = db.Column(db.Integer, nullable=False)


@@ -19,6 +19,7 @@ class SpecialistServices:
@staticmethod
def execute_specialist(tenant_id, specialist_id, specialist_arguments, session_id, user_timezone) -> Dict[str, Any]:
current_app.logger.debug(f"Before sending task for {specialist_id} with arguments {specialist_arguments}")
task = current_celery.send_task(
'execute_specialist',
args=[tenant_id,
@@ -29,6 +30,7 @@ class SpecialistServices:
],
queue='llm_interactions'
)
current_app.logger.debug(f"Task sent for {specialist_id}, task ID: {task.id}")
return {
'task_id': task.id,


@@ -1,5 +1,6 @@
from common.services.user.user_services import UserServices
from common.services.user.partner_services import PartnerServices
from common.services.user.tenant_services import TenantServices
from common.services.user.consent_services import ConsentServices
__all__ = ['UserServices', 'PartnerServices', 'TenantServices']
__all__ = ['UserServices', 'PartnerServices', 'TenantServices', 'ConsentServices']


@@ -0,0 +1,254 @@
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime as dt, timezone as tz
from typing import List, Optional, Tuple, Dict
from flask import current_app, request, session
from flask_security import current_user
from sqlalchemy import desc
from sqlalchemy.exc import SQLAlchemyError, IntegrityError
from common.extensions import db
from common.models.user import TenantConsent, ConsentVersion, ConsentStatus, PartnerService, PartnerTenant, Tenant
@dataclass
class TypeStatus:
consent_type: str
status: ConsentStatus
active_version: Optional[str]
last_version: Optional[str]
class ConsentServices:
@staticmethod
def get_required_consent_types() -> List[str]:
return list(current_app.config.get("CONSENT_TYPES", []))
@staticmethod
def get_active_consent_version(consent_type: str) -> Optional[ConsentVersion]:
try:
# Active version: the one with consent_valid_to IS NULL, latest for this type
return (ConsentVersion.query
.filter_by(consent_type=consent_type, consent_valid_to=None)
.order_by(desc(ConsentVersion.consent_valid_from))
.first())
except SQLAlchemyError as e:
current_app.logger.error(f"DB error in get_active_consent_version({consent_type}): {e}")
return None
@staticmethod
def get_tenant_last_consent(tenant_id: int, consent_type: str) -> Optional[TenantConsent]:
try:
return (TenantConsent.query
.filter_by(tenant_id=tenant_id, consent_type=consent_type)
.order_by(desc(TenantConsent.id))
.first())
except SQLAlchemyError as e:
current_app.logger.error(f"DB error in get_tenant_last_consent({tenant_id}, {consent_type}): {e}")
return None
@staticmethod
def evaluate_type_status(tenant_id: int, consent_type: str) -> TypeStatus:
active = ConsentServices.get_active_consent_version(consent_type)
if not active:
current_app.logger.error(f"No active ConsentVersion found for type {consent_type}")
return TypeStatus(consent_type, ConsentStatus.UNKNOWN_CONSENT_VERSION, None, None)
last = ConsentServices.get_tenant_last_consent(tenant_id, consent_type)
if not last:
return TypeStatus(consent_type, ConsentStatus.NOT_CONSENTED, active.consent_version, None)
# If last consent equals active → CONSENTED
if last.consent_version == active.consent_version:
return TypeStatus(consent_type, ConsentStatus.CONSENTED, active.consent_version, last.consent_version)
# Else: last refers to an older version; check its ConsentVersion to see grace period
prev_cv = ConsentVersion.query.filter_by(consent_type=consent_type,
consent_version=last.consent_version).first()
if not prev_cv:
current_app.logger.error(f"Tenant {tenant_id} references unknown ConsentVersion {last.consent_version} for {consent_type}")
return TypeStatus(consent_type, ConsentStatus.UNKNOWN_CONSENT_VERSION, active.consent_version, last.consent_version)
if prev_cv.consent_valid_to:
now = dt.now(tz.utc)
if prev_cv.consent_valid_to >= now:
# Within transition window
return TypeStatus(consent_type, ConsentStatus.RENEWAL_REQUIRED, active.consent_version, last.consent_version)
else:
return TypeStatus(consent_type, ConsentStatus.NOT_CONSENTED, active.consent_version, last.consent_version)
else:
# Should not happen if a newer active exists; treat as unknown config
current_app.logger.error(f"Previous ConsentVersion without valid_to while a newer active exists for {consent_type}")
return TypeStatus(consent_type, ConsentStatus.UNKNOWN_CONSENT_VERSION, active.consent_version, last.consent_version)
@staticmethod
def aggregate_status(type_statuses: List[TypeStatus]) -> ConsentStatus:
# Priority: UNKNOWN > NOT_CONSENTED > RENEWAL_REQUIRED > CONSENTED
priorities = {
ConsentStatus.UNKNOWN_CONSENT_VERSION: 4,
ConsentStatus.NOT_CONSENTED: 3,
ConsentStatus.RENEWAL_REQUIRED: 2,
ConsentStatus.CONSENTED: 1,
}
if not type_statuses:
return ConsentStatus.CONSENTED
worst = max(type_statuses, key=lambda ts: priorities.get(ts.status, 0))
return worst.status
@staticmethod
def get_consent_status(tenant_id: int) -> ConsentStatus:
statuses = [ConsentServices.evaluate_type_status(tenant_id, ct) for ct in ConsentServices.get_required_consent_types()]
return ConsentServices.aggregate_status(statuses)
@staticmethod
def _is_tenant_admin_for(tenant_id: int) -> bool:
try:
return current_user.is_authenticated and current_user.has_roles('Tenant Admin') and getattr(current_user, 'tenant_id', None) == tenant_id
except Exception:
return False
@staticmethod
def _is_management_partner_for(tenant_id: int) -> Tuple[bool, Optional[int], Optional[int]]:
"""Return (allowed, partner_id, partner_service_id) for management partner context."""
try:
if not (current_user.is_authenticated and current_user.has_roles('Partner Admin')):
return False, None, None
# Check PartnerTenant relationship via MANAGEMENT_SERVICE
ps = PartnerService.query.filter_by(type='MANAGEMENT_SERVICE').all()
if not ps:
return False, None, None
ps_ids = [p.id for p in ps]
pt = PartnerTenant.query.filter_by(tenant_id=tenant_id).filter(PartnerTenant.partner_service_id.in_(ps_ids)).first()
if not pt:
return False, None, None
the_ps = PartnerService.query.get(pt.partner_service_id)
return True, the_ps.partner_id if the_ps else None, the_ps.id if the_ps else None
except Exception as e:
current_app.logger.error(f"Error in _is_management_partner_for: {e}")
return False, None, None
@staticmethod
def can_consent_on_behalf(tenant_id: int) -> Tuple[bool, str, Optional[int], Optional[int]]:
# Returns: allowed, mode('tenant_admin'|'management_partner'), partner_id, partner_service_id
if ConsentServices._is_tenant_admin_for(tenant_id):
return True, 'tenant_admin', None, None
allowed, partner_id, partner_service_id = ConsentServices._is_management_partner_for(tenant_id)
if allowed:
return True, 'management_partner', partner_id, partner_service_id
return False, 'none', None, None
@staticmethod
def _resolve_consent_content(consent_type: str, version: str) -> Dict:
"""Resolve canonical file ref and hash for a consent document.
Uses configurable base dir, type subpaths, and patch-dir strategy.
Defaults:
- base: 'content'
- map: {'Data Privacy Agreement':'dpa','Terms & Conditions':'terms'}
- strategy: 'major_minor' -> a.b.c => a.b/a.b.c.md
- ext: '.md'
"""
import hashlib
from pathlib import Path
cfg = current_app.config if current_app else {}
base_dir = cfg.get('CONSENT_CONTENT_BASE_DIR', 'content')
type_paths = cfg.get('CONSENT_TYPE_PATHS', {
'Data Privacy Agreement': 'dpa',
'Terms & Conditions': 'terms',
})
strategy = cfg.get('CONSENT_PATCH_DIR_STRATEGY', 'major_minor')
ext = cfg.get('CONSENT_MARKDOWN_EXT', '.md')
type_dir = type_paths.get(consent_type, consent_type.lower().replace(' ', '_'))
subpath = ''
filename = f"{version}{ext}"
try:
parts = version.split('.')
if strategy == 'major_minor' and len(parts) >= 2:
subpath = f"{parts[0]}.{parts[1]}"
filename = f"{parts[0]}.{parts[1]}.{parts[2] if len(parts)>2 else '0'}{ext}"
# Build canonical path
if subpath:
canonical_ref = f"{base_dir}/{type_dir}/{subpath}/{filename}"
else:
canonical_ref = f"{base_dir}/{type_dir}/{filename}"
except Exception:
canonical_ref = f"{base_dir}/{type_dir}/{version}{ext}"
# Read file and hash
content_hash = ''
try:
# project root = parent of app package
root = Path(current_app.root_path).parent if current_app else Path('.')
fpath = root / canonical_ref
content_bytes = fpath.read_bytes() if fpath.exists() else b''
content_hash = hashlib.sha256(content_bytes).hexdigest() if content_bytes else ''
except Exception:
content_hash = ''
return {
'canonical_document_ref': canonical_ref,
'content_hash': content_hash,
}
@staticmethod
def record_consent(tenant_id: int, consent_type: str) -> TenantConsent:
# Validate type
if consent_type not in ConsentServices.get_required_consent_types():
raise ValueError(f"Unknown consent type: {consent_type}")
active = ConsentServices.get_active_consent_version(consent_type)
if not active:
raise RuntimeError(f"No active ConsentVersion for type {consent_type}")
allowed, mode, partner_id, partner_service_id = ConsentServices.can_consent_on_behalf(tenant_id)
if not allowed:
raise PermissionError("Not authorized to record consent for this tenant")
# Idempotency: if already consented for active version, return existing
existing = (TenantConsent.query
.filter_by(tenant_id=tenant_id, consent_type=consent_type, consent_version=active.consent_version)
.first())
if existing:
return existing
# Build consent_data with audit info
ip = request.headers.get('X-Forwarded-For', '').split(',')[0].strip() or request.remote_addr or ''
ua = request.headers.get('User-Agent', '')
locale = session.get('locale') or request.accept_languages.best or ''
content_meta = ConsentServices._resolve_consent_content(consent_type, active.consent_version)
consent_data = {
'source_ip': ip,
'user_agent': ua,
'locale': locale,
**content_meta,
}
tc = TenantConsent(
tenant_id=tenant_id,
partner_id=partner_id,
partner_service_id=partner_service_id,
user_id=getattr(current_user, 'id', None) or 0,
consent_type=consent_type,
consent_version=active.consent_version,
consent_data=consent_data,
)
try:
db.session.add(tc)
db.session.commit()
current_app.logger.info(f"Consent recorded: tenant={tenant_id}, type={consent_type}, version={active.consent_version}, mode={mode}, user={getattr(current_user, 'id', None)}")
return tc
except IntegrityError as e:
db.session.rollback()
# In case of race, fetch existing
current_app.logger.warning(f"IntegrityError on consent insert, falling back: {e}")
existing = (TenantConsent.query
.filter_by(tenant_id=tenant_id, consent_type=consent_type, consent_version=active.consent_version)
.first())
if existing:
return existing
raise
except SQLAlchemyError as e:
db.session.rollback()
current_app.logger.error(f"DB error in record_consent: {e}")
raise
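Taken together, these methods support the blocking behaviour mentioned in the commit messages ("Blocking functionality if no valid consent"). A minimal sketch of a request guard built on get_consent_status; the endpoint name, session key, and app wiring are assumptions, not part of this file:

# Hypothetical Flask guard using ConsentServices; names are placeholders.
from flask import Flask, redirect, session, url_for
from common.models.user import ConsentStatus
from common.services.user.consent_services import ConsentServices

app = Flask(__name__)  # stands in for the existing eveai_app instance

@app.before_request
def enforce_tenant_consent():
    tenant_id = session.get('tenant_id')  # session key assumed
    if tenant_id is None:
        return None  # no tenant context, nothing to enforce
    status = ConsentServices.get_consent_status(tenant_id)
    if status in (ConsentStatus.NOT_CONSENTED, ConsentStatus.CONSENT_EXPIRED):
        return redirect(url_for('consent.show'))  # endpoint assumed
    return None  # CONSENTED or RENEWAL_REQUIRED (grace period) may proceed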


@@ -6,7 +6,6 @@ from sqlalchemy.exc import SQLAlchemyError
from common.models.entitlements import PartnerServiceLicenseTier
from common.utils.eveai_exceptions import EveAINoManagementPartnerService, EveAINoSessionPartner
from common.utils.security_utils import current_user_has_role
class PartnerServices:


@@ -1,15 +1,16 @@
from typing import Dict, List
from flask import session, current_app
from sqlalchemy import desc
from sqlalchemy.exc import SQLAlchemyError
from common.extensions import db, cache_manager
from common.models.user import Partner, PartnerTenant, PartnerService, Tenant
from common.models.user import Partner, PartnerTenant, PartnerService, Tenant, TenantConsent, ConsentStatus, \
ConsentVersion
from common.utils.eveai_exceptions import EveAINoManagementPartnerService
from common.utils.model_logging_utils import set_logging_information
from datetime import datetime as dt, timezone as tz
from common.utils.security_utils import current_user_has_role
class TenantServices:
@@ -173,3 +174,9 @@ class TenantServices:
except Exception as e:
current_app.logger.error(f"Error checking specialist type access: {str(e)}")
return False
@staticmethod
def get_consent_status(tenant_id: int) -> ConsentStatus:
# Delegate to centralized ConsentService to ensure consistent logic
from common.services.user.consent_services import ConsentServices
return ConsentServices.get_consent_status(tenant_id)


@@ -1,4 +1,6 @@
import json
import copy
import re
from typing import Dict, Any, Optional
from flask import session
@@ -50,8 +52,8 @@ class TranslationServices:
if isinstance(config_data, str):
config_data = json.loads(config_data)
# Make a copy of the original data to modify
translated_config = config_data.copy()
# Make a deep copy of the original data to modify and avoid input mutation
translated_config = copy.deepcopy(config_data)
# Fetch type and version for the Business Event span
config_type = config_data.get('type', 'Unknown')
@@ -65,71 +67,124 @@ class TranslationServices:
if not context and 'metadata' in config_data and 'description' in config_data['metadata']:
description_context = config_data['metadata']['description']
# Helper functions
def is_nonempty_str(val: Any) -> bool:
return isinstance(val, str) and val.strip() != ''
def safe_translate(text: str, ctx: Optional[str]):
try:
res = cache_manager.translation_cache.get_translation(
text=text,
target_lang=target_language,
source_lang=source_language,
context=ctx
)
return res.translated_text if res else None
except Exception as e:
if current_event:
current_event.log_error('translation_error', {
'tenant_id': tenant_id,
'config_type': config_type,
'config_version': config_version,
'field_config': field_config,
'error': str(e)
})
return None
tag_pair_pattern = re.compile(r'<([a-zA-Z][\w-]*)>[\s\S]*?<\/\1>')
def extract_tag_counts(text: str) -> Dict[str, int]:
counts: Dict[str, int] = {}
for m in tag_pair_pattern.finditer(text or ''):
tag = m.group(1)
counts[tag] = counts.get(tag, 0) + 1
return counts
def tags_valid(source: str, translated: str) -> bool:
return extract_tag_counts(source) == extract_tag_counts(translated)
# Counters
meta_consentRich_translated_count = 0
meta_aria_translated_count = 0
meta_inline_tags_invalid_after_translation_count = 0
# Loop door elk veld in de configuratie
for field_name, field_data in fields.items():
# Translate name if it exists and is not empty
if 'name' in field_data and field_data['name']:
# Use context if provided, otherwise description_context
# Translate name if it exists and is not empty (strings only)
if 'name' in field_data and is_nonempty_str(field_data['name']):
field_context = context if context else description_context
translated_name = cache_manager.translation_cache.get_translation(
text=field_data['name'],
target_lang=target_language,
source_lang=source_language,
context=field_context
)
if translated_name:
translated_config[field_config][field_name]['name'] = translated_name.translated_text
t = safe_translate(field_data['name'], field_context)
if t:
translated_config[field_config][field_name]['name'] = t
if 'title' in field_data and field_data['title']:
# Use context if provided, otherwise description_context
if 'title' in field_data and is_nonempty_str(field_data.get('title')):
field_context = context if context else description_context
translated_title = cache_manager.translation_cache.get_translation(
text=field_data['title'],
target_lang=target_language,
source_lang=source_language,
context=field_context
)
if translated_title:
translated_config[field_config][field_name]['title'] = translated_title.translated_text
t = safe_translate(field_data['title'], field_context)
if t:
translated_config[field_config][field_name]['title'] = t
# Translate description if it exists and is not empty
if 'description' in field_data and field_data['description']:
# Use context if provided, otherwise description_context
if 'description' in field_data and is_nonempty_str(field_data.get('description')):
field_context = context if context else description_context
translated_desc = cache_manager.translation_cache.get_translation(
text=field_data['description'],
target_lang=target_language,
source_lang=source_language,
context=field_context
)
if translated_desc:
translated_config[field_config][field_name]['description'] = translated_desc.translated_text
t = safe_translate(field_data['description'], field_context)
if t:
translated_config[field_config][field_name]['description'] = t
# Translate context if it exists and is not empty
if 'context' in field_data and field_data['context']:
translated_ctx = cache_manager.translation_cache.get_translation(
text=field_data['context'],
target_lang=target_language,
source_lang=source_language,
context=context
)
if translated_ctx:
translated_config[field_config][field_name]['context'] = translated_ctx.translated_text
if 'context' in field_data and is_nonempty_str(field_data.get('context')):
t = safe_translate(field_data['context'], context)
if t:
translated_config[field_config][field_name]['context'] = t
# translate allowed values if the field exists and the values are not empty.
if 'allowed_values' in field_data and field_data['allowed_values']:
# translate allowed_values if the field exists and values are not empty (string items only)
if 'allowed_values' in field_data and isinstance(field_data['allowed_values'], list) and field_data['allowed_values']:
translated_allowed_values = []
for allowed_value in field_data['allowed_values']:
translated_allowed_value = cache_manager.translation_cache.get_translation(
text=allowed_value,
target_lang=target_language,
source_lang=source_language,
context=context
)
translated_allowed_values.append(translated_allowed_value.translated_text)
if is_nonempty_str(allowed_value):
t = safe_translate(allowed_value, context)
translated_allowed_values.append(t if t else allowed_value)
else:
translated_allowed_values.append(allowed_value)
if translated_allowed_values:
translated_config[field_config][field_name]['allowed_values'] = translated_allowed_values
# Translate meta.consentRich and meta.aria*
meta = field_data.get('meta')
if isinstance(meta, dict):
# consentRich
if is_nonempty_str(meta.get('consentRich')):
consent_ctx = (context if context else description_context) or ''
consent_ctx = f"Consent rich text with inline tags. Keep tag names intact and translate only inner text. {consent_ctx}".strip()
t = safe_translate(meta['consentRich'], consent_ctx)
if t and tags_valid(meta['consentRich'], t):
translated_config[field_config][field_name].setdefault('meta', {})['consentRich'] = t
meta_consentRich_translated_count += 1
else:
if t and not tags_valid(meta['consentRich'], t) and current_event:
src_counts = extract_tag_counts(meta['consentRich'])
dst_counts = extract_tag_counts(t)
current_event.log_error('inline_tags_validation_failed', {
'tenant_id': tenant_id,
'config_type': config_type,
'config_version': config_version,
'field_config': field_config,
'field_name': field_name,
'target_language': target_language,
'source_tag_counts': src_counts,
'translated_tag_counts': dst_counts
})
meta_inline_tags_invalid_after_translation_count += 1
# fallback: keep original (already in deep copy)
# aria*
for k, v in list(meta.items()):
if isinstance(k, str) and k.startswith('aria') and is_nonempty_str(v):
aria_ctx = (context if context else description_context) or ''
aria_ctx = f"ARIA label for accessibility. Short, imperative, descriptive. Form '{config_type} {config_version}', field '{field_name}'. {aria_ctx}".strip()
t2 = safe_translate(v, aria_ctx)
if t2:
translated_config[field_config][field_name].setdefault('meta', {})[k] = t2
meta_aria_translated_count += 1
return translated_config
@staticmethod


@@ -0,0 +1,14 @@
from flask import current_app
class VersionServices:
@staticmethod
def split_version(full_version: str) -> tuple[str, str]:
parts = full_version.split(".")
if len(parts) < 3:
major_minor = '.'.join(parts[:2]) if len(parts) >= 2 else full_version
patch = ''
else:
major_minor = '.'.join(parts[:2])
patch = parts[2]
return major_minor, patch
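For example, the split behaves as follows (patch defaults to an empty string for two-part versions):

# Illustrative behaviour of split_version:
VersionServices.split_version("3.1.36")  # -> ('3.1', '36')
VersionServices.split_version("1.0")     # -> ('1.0', '')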


@@ -0,0 +1,22 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Unauthorized</title>
<style>
body { font-family: system-ui, -apple-system, Segoe UI, Roboto, Helvetica, Arial, sans-serif; background:#f7f7f9; color:#222; }
.wrap { max-width: 720px; margin: 10vh auto; background:#fff; border:1px solid #e5e7eb; border-radius:12px; padding:32px; box-shadow: 0 8px 24px rgba(0,0,0,0.06); }
h1 { margin: 0 0 8px; font-size: 28px; }
p { margin: 0 0 16px; line-height:1.6; }
a.btn { display:inline-block; padding:10px 16px; background:#2c3e50; color:#fff; text-decoration:none; border-radius:8px; }
</style>
</head>
<body>
<main class="wrap">
<h1>Not authorized</h1>
<p>Your session may have expired or this action is not permitted.</p>
<p><a class="btn" href="/">Go to home</a></p>
</main>
</body>
</html>


@@ -0,0 +1,22 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Forbidden</title>
<style>
body { font-family: system-ui, -apple-system, Segoe UI, Roboto, Helvetica, Arial, sans-serif; background:#f7f7f9; color:#222; }
.wrap { max-width: 720px; margin: 10vh auto; background:#fff; border:1px solid #e5e7eb; border-radius:12px; padding:32px; box-shadow: 0 8px 24px rgba(0,0,0,0.06); }
h1 { margin: 0 0 8px; font-size: 28px; }
p { margin: 0 0 16px; line-height:1.6; }
a.btn { display:inline-block; padding:10px 16px; background:#2c3e50; color:#fff; text-decoration:none; border-radius:8px; }
</style>
</head>
<body>
<main class="wrap">
<h1>Access forbidden</h1>
<p>You don't have permission to access this resource.</p>
<p><a class="btn" href="/">Go to home</a></p>
</main>
</body>
</html>


@@ -0,0 +1,22 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Page not found</title>
<style>
body { font-family: system-ui, -apple-system, Segoe UI, Roboto, Helvetica, Arial, sans-serif; background:#f7f7f9; color:#222; }
.wrap { max-width: 720px; margin: 10vh auto; background:#fff; border:1px solid #e5e7eb; border-radius:12px; padding:32px; box-shadow: 0 8px 24px rgba(0,0,0,0.06); }
h1 { margin: 0 0 8px; font-size: 28px; }
p { margin: 0 0 16px; line-height:1.6; }
a.btn { display:inline-block; padding:10px 16px; background:#2c3e50; color:#fff; text-decoration:none; border-radius:8px; }
</style>
</head>
<body>
<main class="wrap">
<h1>Page not found</h1>
<p>The page you are looking for doesn't exist or has been moved.</p>
<p><a class="btn" href="/">Go to home</a></p>
</main>
</body>
</html>


@@ -0,0 +1,22 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Something went wrong</title>
<style>
body { font-family: system-ui, -apple-system, Segoe UI, Roboto, Helvetica, Arial, sans-serif; background:#f7f7f9; color:#222; }
.wrap { max-width: 720px; margin: 10vh auto; background:#fff; border:1px solid #e5e7eb; border-radius:12px; padding:32px; box-shadow: 0 8px 24px rgba(0,0,0,0.06); }
h1 { margin: 0 0 8px; font-size: 28px; }
p { margin: 0 0 16px; line-height:1.6; }
a.btn { display:inline-block; padding:10px 16px; background:#2c3e50; color:#fff; text-decoration:none; border-radius:8px; }
</style>
</head>
<body>
<main class="wrap">
<h1>We're sorry — something went wrong</h1>
<p>Please try again later. If the issue persists, contact support.</p>
<p><a class="btn" href="/">Go to home</a></p>
</main>
</body>
</html>


@@ -0,0 +1,22 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Error</title>
<style>
body { font-family: system-ui, -apple-system, Segoe UI, Roboto, Helvetica, Arial, sans-serif; background:#f7f7f9; color:#222; }
.wrap { max-width: 720px; margin: 10vh auto; background:#fff; border:1px solid #e5e7eb; border-radius:12px; padding:32px; box-shadow: 0 8px 24px rgba(0,0,0,0.06); }
h1 { margin: 0 0 8px; font-size: 28px; }
p { margin: 0 0 16px; line-height:1.6; }
a.btn { display:inline-block; padding:10px 16px; background:#2c3e50; color:#fff; text-decoration:none; border-radius:8px; }
</style>
</head>
<body>
<main class="wrap">
<h1>Oops! Something went wrong</h1>
<p>Please try again. If the issue persists, contact support.</p>
<p><a class="btn" href="/">Go to home</a></p>
</main>
</body>
</html>


@@ -0,0 +1,45 @@
import json
import os
from functools import lru_cache
from typing import Dict
# Default manifest path inside app images; override with env
DEFAULT_MANIFEST_PATH = os.environ.get(
'EVEAI_STATIC_MANIFEST_PATH',
'/app/config/static-manifest/manifest.json'
)
@lru_cache(maxsize=1)
def _load_manifest(manifest_path: str = DEFAULT_MANIFEST_PATH) -> Dict[str, str]:
try:
with open(manifest_path, 'r', encoding='utf-8') as f:
return json.load(f)
except Exception:
return {}
def resolve_asset(logical_path: str, manifest_path: str = DEFAULT_MANIFEST_PATH) -> str:
"""
Map a logical asset path (e.g. 'dist/chat-client.js') to the hashed path
found in the Parcel manifest. If not found or manifest missing, return the
original logical path for graceful fallback.
"""
if not logical_path:
return logical_path
manifest = _load_manifest(manifest_path)
# Try several key variants as Parcel manifests may use different keys
candidates = [
logical_path,
logical_path.lstrip('/'),
logical_path.replace('static/', ''),
logical_path.replace('dist/', ''),
]
for key in candidates:
if key in manifest:
return manifest[key]
return logical_path
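A plausible way to expose this in templates is as a Jinja global; the wiring below is a sketch, and the module path and helper name are assumptions:

# Hypothetical template wiring for cache-busted assets.
from flask import Flask
from common.utils.static_manifest import resolve_asset  # module path assumed

app = Flask(__name__)
app.jinja_env.globals['asset'] = resolve_asset
# In a template: <script src="{{ asset('dist/chat-client.js') }}"></script>
# With a manifest entry {"dist/chat-client.js": "dist/chat-client.3f9a1c.js"}
# the hashed path is emitted; without a manifest, the logical path falls through.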


@@ -559,12 +559,24 @@ class BusinessEvent:
self._log_buffer = []
def _push_to_gateway(self):
# Push metrics to the gateway
# Push metrics to the gateway with grouping key to avoid overwrites across pods/processes
try:
# Determine grouping labels
pod_name = current_app.config.get('POD_NAME', current_app.config.get('COMPONENT_NAME', 'dev'))
pod_namespace = current_app.config.get('POD_NAMESPACE', current_app.config.get('FLASK_ENV', 'dev'))
worker_id = str(os.getpid())
grouping_key = {
'instance': pod_name,
'namespace': pod_namespace,
'process': worker_id,
}
push_to_gateway(
current_app.config['PUSH_GATEWAY_URL'],
job=current_app.config['COMPONENT_NAME'],
registry=REGISTRY
registry=REGISTRY,
grouping_key=grouping_key,
)
except Exception as e:
current_app.logger.error(f"Failed to push metrics to Prometheus Push Gateway: {e}")


@@ -121,7 +121,7 @@ class CacheHandler(Generic[T]):
region_name = getattr(self.region, 'name', 'default_region')
key = CacheKey({k: identifiers[k] for k in self._key_components})
return f"{region_name}_{self.prefix}:{str(key)}"
return f"{region_name}:{self.prefix}:{str(key)}"
def get(self, creator_func, **identifiers) -> T:
"""
@@ -179,7 +179,7 @@ class CacheHandler(Generic[T]):
Deletes all keys that start with the region prefix.
"""
# Construct the pattern for all keys in this region
pattern = f"{self.region}_{self.prefix}:*"
pattern = f"{self.region}:{self.prefix}:*"
# Assuming Redis backend with dogpile, use `delete_multi` or direct Redis access
if hasattr(self.region.backend, 'client'):
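The separator fix matters because keys are written with one format and deleted with a wildcard built from the other; after this change both sides agree. An illustration with placeholder names:

# After the fix, key construction and the delete pattern line up:
region_name, prefix, key = 'eveai_model', 'model_vars', 'tenant_id=12'
cache_key = f"{region_name}:{prefix}:{key}"  # 'eveai_model:model_vars:tenant_id=12'
pattern = f"{region_name}:{prefix}:*"        # matches every key of this handler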


@@ -1,48 +1,64 @@
# common/utils/cache/regions.py
import time
import redis
from dogpile.cache import make_region
from urllib.parse import urlparse
import os
import ssl
def get_redis_config(app):
"""
Create Redis configuration dict based on app config
Handles both authenticated and non-authenticated setups
Create Redis configuration dict based on app config.
Handles both authenticated and non-authenticated setups.
"""
app.logger.debug(f"Creating Redis config")
# Parse the REDIS_BASE_URI to get all components
redis_uri = urlparse(app.config['REDIS_BASE_URI'])
# redis_uri = urlparse(app.config['REDIS_BASE_URI'])
config = {
'host': redis_uri.hostname,
'port': int(redis_uri.port or 6379),
'db': 4, # Keep this for later use
'redis_expiration_time': 3600,
'distributed_lock': True,
'thread_local_lock': False,
'host': app.config['REDIS_URL'],
'port': app.config['REDIS_PORT'],
'max_connections': 20,
'retry_on_timeout': True,
'socket_keepalive': True,
'socket_keepalive_options': {},
}
# Add authentication if provided
if redis_uri.username and redis_uri.password:
un = app.config.get('REDIS_USER')
pw = app.config.get('REDIS_PASS')
if un and pw:
config.update({
'username': redis_uri.username,
'password': redis_uri.password
'username': un,
'password': pw
})
# SSL support using centralised config
cert_path = app.config.get('REDIS_CA_CERT_PATH')
redis_scheme = app.config.get('REDIS_SCHEME')
if cert_path and redis_scheme == 'rediss':
config.update({
'connection_class': redis.SSLConnection,
'ssl_cert_reqs': ssl.CERT_REQUIRED,
'ssl_check_hostname': app.config.get('REDIS_SSL_CHECK_HOSTNAME', True),
'ssl_ca_certs': cert_path,
})
app.logger.debug(f"config for Redis connection: {config}")
return config
def create_cache_regions(app):
"""Initialize all cache regions with app config"""
"""Initialise all cache regions with app config"""
redis_config = get_redis_config(app)
redis_pool = redis.ConnectionPool(**redis_config)
regions = {}
startup_time = int(time.time())
# Region for model-related caching (ModelVariables etc)
model_region = make_region(name='eveai_model').configure(
'dogpile.cache.redis',
arguments={**redis_config, 'db': 6},
arguments={'connection_pool': redis_pool},
replace_existing_backend=True
)
regions['eveai_model'] = model_region
@@ -50,7 +66,7 @@ def create_cache_regions(app):
# Region for eveai_chat_workers components (Specialists, Retrievers, ...)
eveai_chat_workers_region = make_region(name='eveai_chat_workers').configure(
'dogpile.cache.redis',
arguments=redis_config, # arguments={**redis_config, 'db': 4}, # Different DB
arguments={'connection_pool': redis_pool},
replace_existing_backend=True
)
regions['eveai_chat_workers'] = eveai_chat_workers_region
@@ -58,14 +74,14 @@ def create_cache_regions(app):
# Region for eveai_workers components (Processors, ...)
eveai_workers_region = make_region(name='eveai_workers').configure(
'dogpile.cache.redis',
arguments=redis_config, # Same config for now
arguments={'connection_pool': redis_pool}, # Same config for now
replace_existing_backend=True
)
regions['eveai_workers'] = eveai_workers_region
eveai_config_region = make_region(name='eveai_config').configure(
'dogpile.cache.redis',
arguments=redis_config,
arguments={'connection_pool': redis_pool},
replace_existing_backend=True
)
regions['eveai_config'] = eveai_config_region
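Because every region is configured with the same connection_pool, the app's Redis connection count is bounded by the pool's max_connections rather than growing per region. A short usage sketch (the key and creator function are placeholders):

# Sketch: reading through one of the shared-pool regions.
regions = create_cache_regions(app)

def load_flags_from_db():
    return {'beta_ui': True}  # placeholder creator, called only on a cache miss

value = regions['eveai_config'].get_or_create('feature_flags:tenant_12',
                                              load_flags_from_db)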


@@ -1,3 +1,5 @@
import ssl
from celery import Celery
from kombu import Queue
from werkzeug.local import LocalProxy
@@ -17,17 +19,56 @@ def init_celery(celery, app, is_beat=False):
'accept_content': app.config.get('CELERY_ACCEPT_CONTENT', ['json']),
'timezone': app.config.get('CELERY_TIMEZONE', 'UTC'),
'enable_utc': app.config.get('CELERY_ENABLE_UTC', True),
# connection pools
# 'broker_pool_limit': app.config.get('CELERY_BROKER_POOL_LIMIT', 10),
}
# Transport options (timeouts, max_connections for Redis transport)
# broker_transport_options = {
# 'master_name': None, # only relevant for Sentinel; otherwise harmless
# 'max_connections': 20,
# 'retry_on_timeout': True,
# 'socket_connect_timeout': 5,
# 'socket_timeout': 5,
# }
# celery_config['broker_transport_options'] = broker_transport_options
#
# # Backend transport options (Redis backend accepts similar timeouts)
# result_backend_transport_options = {
# 'retry_on_timeout': True,
# 'socket_connect_timeout': 5,
# 'socket_timeout': 5,
# # max_connections may be supported on newer Celery/redis backends; harmless if ignored
# 'max_connections': 20,
# }
# celery_config['result_backend_transport_options'] = result_backend_transport_options
# TLS (only when cert is provided or your URLs are rediss://)
ssl_opts = None
cert_path = app.config.get('REDIS_CA_CERT_PATH')
if cert_path:
ssl_opts = {
'ssl_cert_reqs': ssl.CERT_REQUIRED,
'ssl_ca_certs': cert_path,
'ssl_check_hostname': app.config.get('REDIS_SSL_CHECK_HOSTNAME', True),
}
app.logger.info(
"SSL configured for Celery Redis connection (CA: %s, hostname-check: %s)",
cert_path,
'enabled' if app.config.get('REDIS_SSL_CHECK_HOSTNAME', True) else 'disabled (IP)'
)
celery_config['broker_use_ssl'] = ssl_opts
celery_config['redis_backend_use_ssl'] = ssl_opts
# Beat/RedBeat
if is_beat:
# Add configurations specific to Beat scheduler
celery_config['beat_scheduler'] = 'redbeat.RedBeatScheduler'
celery_config['redbeat_lock_key'] = 'redbeat::lock'
celery_config['beat_max_loop_interval'] = 10
celery_app.conf.update(**celery_config)
# Queues for workers (note: the Redis transport ignores the routing_key and priority features that RabbitMQ supports)
if not is_beat:
celery_app.conf.task_queues = (
Queue('default', routing_key='task.#'),
@@ -60,6 +101,7 @@ def init_celery(celery, app, is_beat=False):
def make_celery(app_name, config):
# keep API but return the single instance
return celery_app
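As a standalone illustration (broker URL and CA path are assumptions), this is how broker_use_ssl and redis_backend_use_ssl pair with a rediss:// broker on a bare Celery app:
import ssl
from celery import Celery

demo = Celery('demo', broker='rediss://user:pass@redis.example:6380/0')
demo.conf.broker_use_ssl = {
    'ssl_cert_reqs': ssl.CERT_REQUIRED,
    'ssl_ca_certs': '/etc/ssl/certs/redis-ca.pem',  # assumed CA bundle location
    'ssl_check_hostname': True,
}
demo.conf.redis_backend_use_ssl = demo.conf.broker_use_ssl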

View File

@@ -36,7 +36,10 @@ def get_default_chat_customisation(tenant_customisation=None):
'ai_message_text_color': '#212529',
'human_message_background': '#212529',
'human_message_text_color': '#ffffff',
'human_message_inactive_text_color': '#808080',
'tab_background': '#0a0a0a',
'tab_icon_active_color': '#ffffff',
'tab_icon_inactive_color': '#f0f0f0',
}
# If no tenant customization is provided, return the defaults

View File

@@ -1,9 +1,9 @@
"""Database related functions"""
from os import popen
from sqlalchemy import text, event
from sqlalchemy.schema import CreateSchema
from sqlalchemy.exc import InternalError
from sqlalchemy.orm import sessionmaker, scoped_session, Session as SASession
from sqlalchemy.exc import SQLAlchemyError
from flask import current_app
@@ -16,6 +16,66 @@ class Database:
def __init__(self, tenant: str) -> None:
self.schema = str(tenant)
# --- Session / Transaction events to ensure correct search_path per transaction ---
@event.listens_for(SASession, "after_begin")
def _set_search_path_per_tx(session, transaction, connection):
"""Ensure each transaction sees the right tenant schema, regardless of
which pooled connection is used. Uses SET LOCAL so it is scoped to the tx.
"""
schema = session.info.get("tenant_schema")
if schema:
try:
connection.exec_driver_sql(f'SET LOCAL search_path TO "{schema}", public')
# Optional visibility/logging for debugging
sp = connection.exec_driver_sql("SHOW search_path").scalar()
try:
current_app.logger.info(f"DBCTX tx_begin conn_id={id(connection.connection)} search_path={sp}")
except Exception:
pass
except Exception as e:
try:
current_app.logger.error(f"Failed to SET LOCAL search_path for schema {schema}: {e!r}")
except Exception:
pass
def _log_db_context(self, origin: str = "") -> None:
"""Log key DB context info to diagnose schema/search_path issues.
Collects and logs in a single structured line:
- current_database()
- inet_server_addr(), inet_server_port()
- SHOW search_path
- current_schema()
- to_regclass('interaction')
- to_regclass('<tenant>.interaction')
"""
try:
db_name = db.session.execute(text("SELECT current_database()"))\
.scalar()
host = db.session.execute(text("SELECT inet_server_addr()"))\
.scalar()
port = db.session.execute(text("SELECT inet_server_port()"))\
.scalar()
search_path = db.session.execute(text("SHOW search_path"))\
.scalar()
current_schema = db.session.execute(text("SELECT current_schema()"))\
.scalar()
reg_unqualified = db.session.execute(text("SELECT to_regclass('interaction')"))\
.scalar()
qualified = f"{self.schema}.interaction"
reg_qualified = db.session.execute(
text("SELECT to_regclass(:qn)"),
{"qn": qualified}
).scalar()
current_app.logger.info(
"DBCTX origin=%s db=%s host=%s port=%s search_path=%s current_schema=%s to_regclass(interaction)=%s to_regclass(%s)=%s",
origin, db_name, host, port, search_path, current_schema, reg_unqualified, qualified, reg_qualified
)
except SQLAlchemyError as e:
current_app.logger.error(
f"DBCTX logging failed at {origin} for schema {self.schema}: {e!r}"
)
def get_engine(self):
"""create new schema engine"""
return db.engine.execution_options(
@@ -46,12 +106,38 @@ class Database:
def create_tables(self):
"""create tables in for schema"""
try:
db.metadata.create_all(self.get_engine())
except SQLAlchemyError as e:
current_app.logger.error(f"💔 Error creating tables for schema {self.schema}: {e.args}")
def switch_schema(self):
"""switch between tenant/public database schema"""
"""switch between tenant/public database schema with diagnostics logging"""
# Record the desired tenant schema on the active Session so events can use it
try:
db.session.info["tenant_schema"] = self.schema
except Exception:
pass
# Log the context before switching
self._log_db_context("before_switch")
try:
db.session.execute(text(f'set search_path to "{self.schema}", public'))
db.session.commit()
except SQLAlchemyError as e:
# Rollback on error to avoid InFailedSqlTransaction and log details
try:
db.session.rollback()
except Exception:
pass
current_app.logger.error(
f"Error switching search_path to {self.schema}: {e!r}"
)
# Also log context after failure
self._log_db_context("after_switch_failed")
# Re-raise to let caller decide handling if needed
raise
# Log the context after successful switch
self._log_db_context("after_switch")
def migrate_tenant_schema(self):
"""migrate tenant database schema for new tenant"""

View File

@@ -10,41 +10,54 @@ from common.utils.nginx_utils import prefixed_url_for
def not_found_error(error):
profile = current_app.config.get('ERRORS_PROFILE', 'web_app')
if profile == 'web_app':
if not current_user.is_authenticated:
return redirect(prefixed_url_for('security.login', for_redirect=True))
current_app.logger.error(f"Not Found Error: {error}")
current_app.logger.error(traceback.format_exc())
return render_template('error/404.html'), 404
def internal_server_error(error):
profile = current_app.config.get('ERRORS_PROFILE', 'web_app')
if profile == 'web_app':
if not current_user.is_authenticated:
return redirect(prefixed_url_for('security.login', for_redirect=True))
current_app.logger.error(f"Internal Server Error: {error}")
current_app.logger.error(traceback.format_exc())
return render_template('error/500.html'), 500
def not_authorised_error(error):
profile = current_app.config.get('ERRORS_PROFILE', 'web_app')
if profile == 'web_app':
if not current_user.is_authenticated:
return redirect(prefixed_url_for('security.login', for_redirect=True))
current_app.logger.error(f"Not Authorised Error: {error}")
current_app.logger.error(traceback.format_exc())
return render_template('error/401.html'), 401
def access_forbidden(error):
profile = current_app.config.get('ERRORS_PROFILE', 'web_app')
if profile == 'web_app':
if not current_user.is_authenticated:
return redirect(prefixed_url_for('security.login', for_redirect=True))
current_app.logger.error(f"Access Forbidden: {error}")
current_app.logger.error(traceback.format_exc())
return render_template('error/403.html'), 403
def key_error_handler(error):
profile = current_app.config.get('ERRORS_PROFILE', 'web_app')
# Check if the KeyError is specifically for 'tenant'
if str(error) == "'tenant'":
if profile == 'web_app':
return redirect(prefixed_url_for('security.login', for_redirect=True))
else:
current_app.logger.warning("Session tenant missing in chat_client context")
return render_template('error/401.html'), 401
# For other KeyErrors, you might want to log the error and return a generic error page
current_app.logger.error(f"Key Error: {error}")
current_app.logger.error(traceback.format_exc())
@@ -79,19 +92,24 @@ def no_tenant_selected_error(error):
"""Handle errors when no tenant is selected in the current session.
This typically happens when a session expires or becomes invalid after
a long period of inactivity. The user will be redirected to the login page (web_app)
or shown an error page (chat_client).
"""
profile = current_app.config.get('ERRORS_PROFILE', 'web_app')
current_app.logger.error(f"No Session Tenant Error: {error}")
current_app.logger.error(traceback.format_exc())
flash('Your session expired. You will have to re-enter your credentials', 'warning')
if profile == 'web_app':
# Perform logout if user is authenticated
if current_user.is_authenticated:
from flask_security.utils import logout_user
logout_user()
# Redirect to login page
return redirect(prefixed_url_for('security.login', for_redirect=True))
else:
# chat_client: render 401 page
return render_template('error/401.html'), 401
def general_exception(e):
@@ -122,7 +140,10 @@ def template_syntax_error(error):
error_details=f"Error in template '{error.filename}' at line {error.lineno}: {error.message}"), 500
def register_error_handlers(app):
def register_error_handlers(app, profile: str = 'web_app'):
# Store profile in app config to drive handler behavior
app.config['ERRORS_PROFILE'] = profile
app.register_error_handler(404, not_found_error)
app.register_error_handler(500, internal_server_error)
app.register_error_handler(401, not_authorised_error)
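A hypothetical wiring sketch for the two profiles (the app objects are assumptions):
register_error_handlers(admin_app)                        # web_app: redirect unauthenticated users to login
register_error_handlers(chat_app, profile='chat_client')  # chat_client: render 401/403 pages instead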

View File

@@ -92,6 +92,13 @@ class EveAINoActiveLicense(EveAIException):
super().__init__(message, status_code, payload)
class EveAIUserExpired(EveAIException):
"""Raised when a user account is no longer valid (valid_to expired)"""
def __init__(self, message="Your account has expired", status_code=401, payload=None):
super().__init__(message, status_code, payload)
class EveAIInvalidCatalog(EveAIException):
"""Raised when a catalog cannot be found"""

View File

@@ -4,42 +4,67 @@ from typing import Generator
from redis import Redis, RedisError
import json
from flask import current_app
import time
class ExecutionProgressTracker:
"""Tracks progress of specialist executions using Redis"""
# Normalized processing types and aliases
PT_COMPLETE = 'EVEAI_COMPLETE'
PT_ERROR = 'EVEAI_ERROR'
_COMPLETE_ALIASES = {'EveAI Specialist Complete', 'Task Complete', 'task complete'}
_ERROR_ALIASES = {'EveAI Specialist Error', 'Task Error', 'task error'}
def __init__(self):
try:
# Use shared pubsub pool (lazy connect; no eager ping)
from common.utils.redis_pubsub_pool import get_pubsub_client
self.redis = get_pubsub_client(current_app)
self.expiry = 3600 # 1 hour expiry
except RedisError as e:
current_app.logger.error(f"Failed to connect to Redis: {str(e)}")
raise
except Exception as e:
current_app.logger.error(f"Unexpected error during Redis initialization: {str(e)}")
current_app.logger.error(f"Error initializing ExecutionProgressTracker: {str(e)}")
raise
def _get_key(self, execution_id: str) -> str:
return f"specialist_execution:{execution_id}"
prefix = current_app.config.get('REDIS_PREFIXES', {}).get('pubsub_execution', 'pubsub:execution:')
return f"{prefix}{execution_id}"
def _retry(self, op, attempts: int = 3, base_delay: float = 0.1):
"""Retry wrapper for Redis operations with exponential backoff."""
last_exc = None
for i in range(attempts):
try:
return op()
except RedisError as e:
last_exc = e
if i == attempts - 1:
break
delay = base_delay * (3 ** i) # 0.1, 0.3, 0.9
current_app.logger.warning(f"Redis operation failed (attempt {i+1}/{attempts}): {e}. Retrying in {delay}s")
time.sleep(delay)
# Exhausted retries
raise last_exc
def _normalize_processing_type(self, processing_type: str) -> str:
if not processing_type:
return processing_type
p = str(processing_type).strip()
if p in self._COMPLETE_ALIASES:
return self.PT_COMPLETE
if p in self._ERROR_ALIASES:
return self.PT_ERROR
return p
def send_update(self, ctask_id: str, processing_type: str, data: dict):
"""Send an update about execution progress"""
try:
current_app.logger.debug(f"Sending update for {ctask_id} with processing type {processing_type} and data:\n"
f"{data}")
key = self._get_key(ctask_id)
processing_type = self._normalize_processing_type(processing_type)
update = {
'processing_type': processing_type,
'data': data,
@@ -48,7 +73,7 @@ class ExecutionProgressTracker:
# Log initial state
try:
orig_len = self._retry(lambda: self.redis.llen(key))
# Try to serialize the update and check the result
try:
@@ -58,13 +83,16 @@ class ExecutionProgressTracker:
raise
# Store update in list with pipeline for atomicity
def _pipeline_op():
with self.redis.pipeline() as pipe:
pipe.rpush(key, serialized_update)
pipe.publish(key, serialized_update)
pipe.expire(key, self.expiry)
        return pipe.execute()
results = self._retry(_pipeline_op)
new_len = self._retry(lambda: self.redis.llen(key))
if new_len <= orig_len:
current_app.logger.error(
@@ -81,32 +109,51 @@ class ExecutionProgressTracker:
def get_updates(self, ctask_id: str) -> Generator[str, None, None]:
key = self._get_key(ctask_id)
pubsub = self.redis.pubsub()
# Subscribe with retry
self._retry(lambda: pubsub.subscribe(key))
try:
# Hint client reconnect interval (optional but helpful)
yield "retry: 3000\n\n"
# First yield any existing updates
length = self._retry(lambda: self.redis.llen(key))
if length > 0:
updates = self._retry(lambda: self.redis.lrange(key, 0, -1))
for update in updates:
update_data = json.loads(update.decode('utf-8'))
update_data['processing_type'] = self._normalize_processing_type(update_data.get('processing_type'))
yield f"data: {json.dumps(update_data)}\n\n"
# Then listen for new updates
while True:
try:
message = pubsub.get_message(timeout=30) # message['type'] is Redis pub/sub type
except RedisError as e:
current_app.logger.warning(f"Redis pubsub get_message error: {e}. Continuing...")
time.sleep(0.3)
continue
if message is None:
yield ": keepalive\n\n"
continue
if message['type'] == 'message': # This is Redis pub/sub type
update_data = json.loads(message['data'].decode('utf-8'))
yield f"data: {message['data'].decode('utf-8')}\n\n"
update_data['processing_type'] = self._normalize_processing_type(update_data.get('processing_type'))
yield f"data: {json.dumps(update_data)}\n\n"
# Unified completion check
if update_data['processing_type'] in [self.PT_COMPLETE, self.PT_ERROR]:
# Give proxies/clients a chance to flush
yield ": closing\n\n"
break
finally:
try:
pubsub.unsubscribe()
except Exception:
pass
try:
pubsub.close()
except Exception:
pass
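A hedged sketch of exposing get_updates as a server-sent-events endpoint; the route and app wiring are assumptions:
from flask import Response

@app.route('/executions/<ctask_id>/events')
def execution_events(ctask_id):
    tracker = ExecutionProgressTracker()
    return Response(tracker.get_updates(ctask_id), mimetype='text/event-stream')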

View File

@@ -1,6 +1,6 @@
from minio import Minio
from minio.error import S3Error
from flask import Flask, current_app
import io
from werkzeug.datastructures import FileStorage
@@ -12,6 +12,7 @@ class MinioClient:
self.client = None
def init_app(self, app: Flask):
app.logger.debug(f"Initializing MinIO client with endpoint: {app.config['MINIO_ENDPOINT']} and secure: {app.config.get('MINIO_USE_HTTPS', False)}")
self.client = Minio(
app.config['MINIO_ENDPOINT'],
access_key=app.config['MINIO_ACCESS_KEY'],
@@ -21,9 +22,17 @@ class MinioClient:
app.logger.info(f"MinIO client initialized with endpoint: {app.config['MINIO_ENDPOINT']}")
def generate_bucket_name(self, tenant_id):
tenant_base = current_app.config.get('OBJECT_STORAGE_TENANT_BASE', 'bucket')
if tenant_base == 'bucket':
return f"tenant-{tenant_id}-bucket"
elif tenant_base == 'folder':
return current_app.config.get('OBJECT_STORAGE_BUCKET_NAME')
else:
raise ValueError(f"Invalid OBJECT_STORAGE_TENANT_BASE value: {tenant_base}")
def create_tenant_bucket(self, tenant_id):
tenant_base = current_app.config.get('OBJECT_STORAGE_TENANT_BASE', 'bucket')
if tenant_base == 'bucket':
bucket_name = self.generate_bucket_name(tenant_id)
try:
if not self.client.bucket_exists(bucket_name):
@@ -32,16 +41,32 @@ class MinioClient:
return bucket_name
except S3Error as err:
raise Exception(f"Error occurred while creating bucket: {err}")
elif tenant_base == 'folder': # In this case, we are working within a predefined bucket
return current_app.config.get('OBJECT_STORAGE_BUCKET_NAME')
else:
raise ValueError(f"Invalid OBJECT_STORAGE_TENANT_BASE value: {tenant_base}")
def generate_object_name(self, tenant_id, document_id, language, version_id, filename):
tenant_base = current_app.config.get('OBJECT_STORAGE_TENANT_BASE', 'bucket')
if tenant_base == 'bucket':
return f"{document_id}/{language}/{version_id}/{filename}"
elif tenant_base == 'folder':
return f"tenant-{tenant_id}/documents/{document_id}/{language}/{version_id}/{filename}"
else:
raise ValueError(f"Invalid OBJECT_STORAGE_TENANT_BASE value: {tenant_base}")
def generate_asset_name(self, tenant_id, asset_id, asset_type, content_type):
tenant_base = current_app.config.get('OBJECT_STORAGE_TENANT_BASE', 'bucket')
if tenant_base == 'bucket':
return f"assets/{asset_type}/{asset_id}.{content_type}"
elif tenant_base == 'folder':
return f"tenant-{tenant_id}/assets/{asset_type}/{asset_id}.{content_type}"
else:
raise ValueError(f"Invalid OBJECT_STORAGE_TENANT_BASE value: {tenant_base}")
def upload_document_file(self, tenant_id, document_id, language, version_id, filename, file_data):
bucket_name = self.generate_bucket_name(tenant_id)
object_name = self.generate_object_name(tenant_id, document_id, language, version_id, filename)
try:
if isinstance(file_data, FileStorage):
@@ -63,7 +88,7 @@ class MinioClient:
def upload_asset_file(self, tenant_id: int, asset_id: int, asset_type: str, file_type: str,
file_data: bytes | FileStorage | io.BytesIO | str, ) -> tuple[str, str, int]:
bucket_name = self.generate_bucket_name(tenant_id)
object_name = self.generate_asset_name(tenant_id, asset_id, asset_type, file_type)
try:
if isinstance(file_data, FileStorage):
@@ -111,7 +136,7 @@ class MinioClient:
def delete_document_file(self, tenant_id, document_id, language, version_id, filename):
bucket_name = self.generate_bucket_name(tenant_id)
object_name = self.generate_object_name(tenant_id, document_id, language, version_id, filename)
try:
self.client.remove_object(bucket_name, object_name)
return True
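To make the two layouts concrete, an illustrative example (all values assumed):
# tenant_id=42, document_id=7, language='en', version_id=3, filename='report.pdf'
# OBJECT_STORAGE_TENANT_BASE='bucket': bucket 'tenant-42-bucket', object '7/en/3/report.pdf'
# OBJECT_STORAGE_TENANT_BASE='folder': bucket OBJECT_STORAGE_BUCKET_NAME,
#                                      object 'tenant-42/documents/7/en/3/report.pdf'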

View File

@@ -6,7 +6,6 @@ from langchain_core.language_models import BaseChatModel
from common.langchain.llm_metrics_handler import LLMMetricsHandler
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_mistralai import ChatMistralAI
from flask import current_app

View File

@@ -1,27 +1,108 @@
from flask import request, url_for, current_app
from urllib.parse import urlsplit, urlunsplit
import re
VISIBLE_PREFIXES = ('/admin', '/api', '/chat-client')
def _normalize_prefix(raw_prefix: str) -> str:
"""Normalize config prefix to internal form '/admin' or '' if not set."""
if not raw_prefix:
return ''
s = str(raw_prefix).strip()
if not s:
return ''
# remove leading/trailing slashes, then add single leading slash
s = s.strip('/')
if not s:
return ''
return f"/{s}"
def _get_config_prefix() -> str:
"""Return normalized prefix from config EVEAI_APP_PREFIX (config-first)."""
try:
cfg_val = (current_app.config.get('EVEAI_APP_PREFIX') if current_app else None)
return _normalize_prefix(cfg_val)
except Exception:
return ''
def _derive_visible_prefix():
# 1) Edge-provided header (best and most explicit source)
xfp = request.headers.get('X-Forwarded-Prefix')
current_app.logger.debug(f"X-Forwarded-Prefix: {xfp}")
if xfp and any(str(xfp).startswith(p) for p in VISIBLE_PREFIXES):
return str(xfp).rstrip('/')
# 2) Referer fallback: extract the top-level segment from the Referer path
ref = request.headers.get('Referer') or ''
try:
ref_path = urlsplit(ref).path or ''
m = re.match(r'^/(admin|api|chat-client)(?:\b|/)', ref_path)
if m:
return f"/{m.group(1)}"
except Exception:
pass
# 3) No prefix known
return ''
def _visible_prefix_for_runtime() -> str:
"""Decide which prefix to use at runtime.
Priority: config EVEAI_APP_PREFIX; optional dynamic fallback if enabled.
"""
cfg_prefix = _get_config_prefix()
if cfg_prefix:
current_app.logger.debug(f"prefixed_url_for: using config prefix: {cfg_prefix}")
return cfg_prefix
# Optional dynamic fallback
use_fallback = bool(current_app.config.get('EVEAI_USE_DYNAMIC_PREFIX_FALLBACK', False)) if current_app else False
if use_fallback:
dyn = _derive_visible_prefix()
current_app.logger.debug(f"prefixed_url_for: using dynamic fallback prefix: {dyn}")
return dyn
current_app.logger.debug("prefixed_url_for: no prefix configured, no fallback enabled")
return ''
def prefixed_url_for(endpoint, **values):
"""
Gedrag:
- Default (_external=False, for_redirect=False): retourneer relatief pad (zonder leading '/')
voor templates/JS. De dynamische <base> zorgt voor correcte resolutie onder het zichtbare prefix.
- _external=True: bouw absolute URL (schema/host). Pad wordt geprefixt met config prefix (indien gezet),
of optioneel met dynamische fallback wanneer geactiveerd.
- for_redirect=True: geef root-absoluut pad inclusief zichtbaar top-prefix, geschikt
voor HTTP Location headers. Backwards compat: _as_location=True wordt behandeld als for_redirect.
"""
external = values.pop('_external', False)
# Backwards compatibility with the older parameter name
if values.pop('_as_location', False):
values['for_redirect'] = True
for_redirect = values.pop('for_redirect', False)
generated_url = url_for(endpoint, **values)  # e.g. "/user/tenant_overview"
path, query, fragment = urlsplit(generated_url)[2:5]
if external:
scheme = request.headers.get('X-Forwarded-Proto', request.scheme)
host = request.headers.get('Host', request.host)
visible_prefix = _visible_prefix_for_runtime()
new_path = (visible_prefix.rstrip('/') + path) if (visible_prefix and not path.startswith(visible_prefix)) else path
current_app.logger.debug(f"prefixed_url_for external: {scheme}://{host}{new_path}")
return urlunsplit((scheme, host, new_path, query, fragment))
if for_redirect:
visible_prefix = _visible_prefix_for_runtime()
if visible_prefix and not path.startswith(visible_prefix):
composed = f"{visible_prefix}{path}"
current_app.logger.debug(f"prefixed_url_for redirect: {composed}")
return composed
current_app.logger.debug(f"prefixed_url_for redirect (no prefix): {path}")
return path
# Default: relative path (without leading '/')
rel = path[1:] if path.startswith('/') else path
return rel
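Illustrative outcomes, assuming EVEAI_APP_PREFIX = 'admin/' and an endpoint resolving to '/user/tenant_overview':
prefixed_url_for('user_bp.tenant_overview')                     # 'user/tenant_overview' (relative, for templates)
prefixed_url_for('user_bp.tenant_overview', for_redirect=True)  # '/admin/user/tenant_overview' (Location header)
prefixed_url_for('user_bp.tenant_overview', _external=True)     # 'https://host/admin/user/tenant_overview'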

View File

@@ -0,0 +1,84 @@
import ssl
from typing import Dict, Any
import redis
from flask import Flask
def _build_pubsub_redis_config(app: Flask) -> Dict[str, Any]:
"""Build Redis ConnectionPool config for the pubsub/EPT workload using app.config.
Does not modify cache or session pools.
"""
cfg = app.config
config: Dict[str, Any] = {
'host': cfg['REDIS_URL'],
'port': cfg['REDIS_PORT'],
'db': int(cfg.get('REDIS_SPECIALIST_EXEC_DB', '0')),
'max_connections': int(cfg.get('REDIS_PUBSUB_MAX_CONNECTIONS', 200)),
'retry_on_timeout': True,
'socket_keepalive': True,
'socket_keepalive_options': {},
'socket_timeout': float(cfg.get('REDIS_PUBSUB_SOCKET_TIMEOUT', 10.0)),
'socket_connect_timeout': float(cfg.get('REDIS_PUBSUB_CONNECT_TIMEOUT', 3.0)),
}
# Authentication if present
un = cfg.get('REDIS_USER')
pw = cfg.get('REDIS_PASS')
if un and pw:
config.update({'username': un, 'password': pw})
# TLS when configured
cert_path = cfg.get('REDIS_CA_CERT_PATH')
if cfg.get('REDIS_SCHEME') == 'rediss' and cert_path:
config.update({
'connection_class': redis.SSLConnection,
'ssl_cert_reqs': ssl.CERT_REQUIRED,
'ssl_check_hostname': cfg.get('REDIS_SSL_CHECK_HOSTNAME', True),
'ssl_ca_certs': cert_path,
})
return config
def create_pubsub_pool(app: Flask) -> redis.ConnectionPool:
"""Create and store the dedicated pubsub ConnectionPool in app.extensions."""
if not hasattr(app, 'extensions'):
app.extensions = {}
# Reuse existing if already created
pool = app.extensions.get('redis_pubsub_pool')
if pool is not None:
return pool
config = _build_pubsub_redis_config(app)
pool = redis.ConnectionPool(**config)
app.extensions['redis_pubsub_pool'] = pool
# Log a concise, non-sensitive summary
try:
summary = {
'scheme': app.config.get('REDIS_SCHEME'),
'host': app.config.get('REDIS_URL'),
'port': app.config.get('REDIS_PORT'),
'db': app.config.get('REDIS_SPECIALIST_EXEC_DB', '0'),
'ssl_check_hostname': app.config.get('REDIS_SSL_CHECK_HOSTNAME'),
'ca_present': bool(app.config.get('REDIS_CA_CERT_PATH')),
'max_connections': app.config.get('REDIS_PUBSUB_MAX_CONNECTIONS'),
'socket_timeout': app.config.get('REDIS_PUBSUB_SOCKET_TIMEOUT'),
'socket_connect_timeout': app.config.get('REDIS_PUBSUB_CONNECT_TIMEOUT'),
}
app.logger.info(f"Initialized Redis pubsub pool: {summary}")
except Exception:
pass
return pool
def get_pubsub_client(app: Flask) -> redis.Redis:
"""Get a Redis client bound to the dedicated pubsub pool."""
pool = app.extensions.get('redis_pubsub_pool')
if pool is None:
pool = create_pubsub_pool(app)
return redis.Redis(connection_pool=pool)
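A minimal consumption sketch; the channel name and payload are assumptions:
pool = create_pubsub_pool(app)   # idempotent: reuses app.extensions['redis_pubsub_pool'] when present
client = get_pubsub_client(app)
client.publish('pubsub:execution:abc123', '{"processing_type": "EVEAI_COMPLETE", "data": {}}')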

View File

@@ -6,6 +6,7 @@ from common.models.entitlements import License
from common.utils.database import Database
from common.utils.eveai_exceptions import EveAITenantNotFound, EveAITenantInvalid, EveAINoActiveLicense
from datetime import datetime as dt, timezone as tz
from common.services.user import TenantServices
# Definition of Trigger Handlers
@@ -19,25 +20,29 @@ def set_tenant_session_data(sender, user, **kwargs):
# Remove partner from session if it exists
session.pop('partner', None)
session['consent_status'] = str(TenantServices.get_consent_status(user.tenant_id))
def clear_tenant_session_data(sender, user, **kwargs):
session.pop('tenant', None)
session.pop('default_language', None)
session.pop('default_llm_model', None)
session.pop('partner', None)
session.pop('consent_status', None)
def is_valid_tenant(tenant_id):
if tenant_id == 1: # The 'root' tenant, is always valid
return True
tenant = Tenant.query.get(tenant_id)
if tenant is None:
raise EveAITenantNotFound()
elif tenant.type == 'Inactive':
raise EveAITenantInvalid(tenant_id)
else:
current_date = dt.now(tz=tz.utc).date()
Database(str(tenant_id)).switch_schema()
# TODO -> Replace this check with the Active License Period check!
# active_license = (License.query.filter_by(tenant_id=tenant_id)
# .filter(and_(License.start_date <= current_date,

View File

@@ -1,8 +1,8 @@
from flask import current_app, render_template
from flask import current_app, render_template, request, redirect, session, flash
from flask_security import current_user
from itsdangerous import URLSafeTimedSerializer
from common.models.user import Role
from common.models.user import Role, ConsentStatus
from common.utils.nginx_utils import prefixed_url_for
from common.utils.mail_utils import send_email
@@ -36,7 +36,7 @@ def send_confirmation_email(user):
try:
send_email(user.email, f"{user.first_name} {user.last_name}", "Confirm your email", html)
current_app.logger.info(f'Confirmation email sent to {user.email}')
current_app.logger.info(f'Confirmation email sent to {user.email} with url: {confirm_url}')
except Exception as e:
current_app.logger.error(f'Failed to send confirmation email to {user.email}. Error: {str(e)}')
raise
@@ -51,7 +51,7 @@ def send_reset_email(user):
try:
send_email(user.email, f"{user.first_name} {user.last_name}", subject, html)
current_app.logger.info(f'Reset email sent to {user.email}')
current_app.logger.info(f'Reset email sent to {user.email} with url: {reset_url}')
except Exception as e:
current_app.logger.error(f'Failed to send reset email to {user.email}. Error: {str(e)}')
raise
@@ -96,3 +96,101 @@ def current_user_roles():
def all_user_roles():
roles = [(role.id, role.name) for role in Role.query.all()]
def is_exempt_endpoint(endpoint: str) -> bool:
"""Check if the endpoint is exempt from consent guard"""
if not endpoint:
return False
cfg = current_app.config or {}
endpoints_cfg = set(cfg.get('CONSENT_GUARD_EXEMPT_ENDPOINTS', []))
prefix_cfg = list(cfg.get('CONSENT_GUARD_EXEMPT_PREFIXES', []))
default_endpoints = {
'security_bp.login',
'security_bp.logout',
'security_bp.confirm_email',
'security_bp.forgot_password',
'security_bp.reset_password',
'security_bp.reset_password_request',
'user_bp.tenant_consent',
'user_bp.no_consent',
'user_bp.tenant_consent_renewal',
'user_bp.consent_renewal',
'user_bp.view_tenant_consents',
'user_bp.accept_tenant_consent',
'user_bp.view_consent_markdown',
'basic_bp.view_content',
}
default_prefixes = [
'security_bp.',
'healthz_bp.',
]
endpoints = default_endpoints.union(endpoints_cfg)
prefixes = default_prefixes + [p for p in prefix_cfg if isinstance(p, str)]
for p in prefixes:
if endpoint.startswith(p):
return True
if endpoint in endpoints:
return True
return False
def enforce_tenant_consent_ui():
"""Check if the user has consented to the terms of service"""
path = getattr(request, 'path', '') or ''
if path.startswith('/healthz') or path.startswith('/_healthz'):
return None
if not current_user.is_authenticated:
return None
endpoint = request.endpoint or ''
if is_exempt_endpoint(endpoint) or request.method == 'OPTIONS':
return None
# Global bypass: Super User and Partner Admin always allowed
if current_user.has_roles('Super User') or current_user.has_roles('Partner Admin'):
return None
tenant_id = getattr(current_user, 'tenant_id', None)
if not tenant_id:
tenant_id = session.get('tenant', {}).get('id') if session.get('tenant') else None
if not tenant_id:
return redirect(prefixed_url_for('security_bp.login', for_redirect=True))
raw_status = session.get('consent_status', ConsentStatus.NOT_CONSENTED)
# Coerce string to ConsentStatus enum if needed
status = raw_status
try:
if isinstance(raw_status, str):
# Accept formats like 'CONSENTED' or 'ConsentStatus.CONSENTED'
name = raw_status.split('.')[-1]
from common.models.user import ConsentStatus as CS
status = getattr(CS, name, CS.NOT_CONSENTED)
except Exception:
status = ConsentStatus.NOT_CONSENTED
if status == ConsentStatus.CONSENTED:
return None
if status == ConsentStatus.NOT_CONSENTED:
if current_user.has_roles('Tenant Admin'):
return redirect(prefixed_url_for('user_bp.tenant_consent', for_redirect=True))
return redirect(prefixed_url_for('user_bp.no_consent', for_redirect=True))
if status == ConsentStatus.RENEWAL_REQUIRED:
if current_user.has_roles('Tenant Admin'):
flash(
"You need to renew your consent to our DPA or T&Cs. Failing to do so in time will stop you from accessing our services.",
"danger")
elif current_user.has_roles('Partner Admin'):
flash(
"Please ensure renewal of our DPA or T&Cs for the current Tenant. Failing to do so in time will stop the tenant from accessing our services.",
"danger")
else:
flash(
"Please inform your administrator or partner to renew your consent to our DPA or T&Cs. Failing to do so in time will stop you from accessing our services.",
"danger")
return None
current_app.logger.debug('Unknown consent status')
return redirect(prefixed_url_for('user_bp.no_consent', for_redirect=True))
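A hypothetical wiring sketch; where the guard is registered in the real app factory is an assumption:
app.before_request(enforce_tenant_consent_ui)  # returns None to continue, or a redirect response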

View File

@@ -6,7 +6,8 @@ from common.extensions import cache_manager
def perform_startup_actions(app):
pass
# perform_startup_invalidation(app)
def perform_startup_invalidation(app):

View File

@@ -107,6 +107,44 @@ def get_pagination_html(pagination, endpoint, **kwargs):
return Markup(''.join(html))
def asset_url(logical_path: str):
"""
Resolve an asset logical path to a hashed URL using Parcel manifest when available.
Return a URL that respects STATIC_URL (CDN) when configured; otherwise serve from /static/.
Examples:
- asset_url('dist/chat-client.js') -> 'https://cdn/.../dist/chat-client.abc123.js' (when STATIC_URL set)
- asset_url('dist/chat-client.css') -> '/static/dist/chat-client.def456.css' (when STATIC_URL not set)
"""
if not logical_path:
return logical_path
try:
from common.utils.asset_manifest import resolve_asset
# Resolve logical to possibly hashed path
resolved = resolve_asset(logical_path) or logical_path
# If manifest returns an absolute URL, return as-is
if resolved.startswith('http://') or resolved.startswith('https://'):
return resolved
# Normalize: strip any leading '/static/' and leading '/'
if resolved.startswith('/static/'):
rel = resolved[len('/static/'):]
else:
rel = resolved.lstrip('/')
# Build with STATIC_URL if configured
static_base = (current_app.config.get('STATIC_URL') or '').rstrip('/')
if static_base:
return f"{static_base}/{rel}"
# Fallback to app static
return f"/static/{rel}"
except Exception:
# Conservative fallback also respecting STATIC_URL
static_base = (current_app.config.get('STATIC_URL') or '').rstrip('/')
rel = logical_path.lstrip('/')
return f"{static_base}/{rel}" if static_base else f"/static/{rel}"
def register_filters(app):
"""
Registers custom filters with the Flask app.
@@ -123,4 +161,5 @@ def register_filters(app):
app.jinja_env.globals['prefixed_url_for'] = prefixed_url_for
app.jinja_env.globals['get_pagination_html'] = get_pagination_html
app.jinja_env.globals['get_base_background_color'] = get_base_background_color
app.jinja_env.globals['asset_url'] = asset_url

View File

@@ -1,17 +0,0 @@
version: "1.0.0"
name: "Email Content Agent"
role: >
Email Content Writer
goal: >
Craft a highly personalized email that resonates with the {end_user_role}'s context and identification (personal and
company if available).
{custom_goal}
backstory: >
You are an expert in writing compelling, personalized emails that capture the {end_user_role}'s attention and drive
engagement. You are perfectly multilingual, and can write the mail in the native language of the {end_user_role}.
{custom_backstory}
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "An Agent that writes engaging emails."
changes: "Initial version"

View File

@@ -1,16 +0,0 @@
version: "1.0.0"
name: "Email Engagement Agent"
role: >
Engagement Optimization Specialist {custom_role}
goal: >
You ensure that the email includes strong CTAs and strategically placed engagement hooks that encourage the
{end_user_role} to take immediate action. {custom_goal}
backstory: >
You specialize in optimizing content to ensure that it not only resonates with the recipient but also encourages them
to take the desired action.
{custom_backstory}
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "An Agent that ensures the email is engaging and lead to maximal desired action"
changes: "Initial version"

View File

@@ -1,20 +0,0 @@
version: "1.0.0"
name: "Identification Agent"
role: >
Identification Administrative force. {custom_role}
goal: >
You are an administrative force that tries to gather identification information to complete the administration of an
end-user, the company he or she works for, through monitoring conversations and advising on questions to help you do
your job. You are responsible for completing the company's backend systems (like CRM, ERP, ...) with inputs from the
end user in the conversation.
{custom_goal}
backstory: >
You are an administrative force for {company}, and very proficient in gathering information for the company's backend
systems. You do so by monitoring conversations between one of your colleagues (e.g. sales, finance, support, ...) and
an end user. You ask your colleagues to request additional information to complete your task.
{custom_backstory}
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "An Agent that gathers administrative information"
changes: "Initial version"

View File

@@ -1,4 +1,4 @@
version: "1.0.0"
version: "1.1.0"
name: "Rag Agent"
role: >
{tenant_name} Spokesperson. {custom_role}
@@ -7,7 +7,7 @@ goal: >
of the current conversation.
{custom_goal}
backstory: >
You are the primary contact for {tenant_name}. You are known by {name}, and can be addressed by this name, or 'you'. You are
a very good communicator, and adapt to the style used by the human asking for information (e.g. formal or informal).
You always stay correct and polite, whatever happens. And you ensure no discriminating language is used.
You are perfectly multilingual in all known languages, and do your best to answer questions in {language}, whatever
@@ -15,7 +15,7 @@ backstory: >
include a salutation or closing greeting in your answer.
{custom_backstory}
full_model_name: "mistral.mistral-medium-latest"
temperature: 0.4
metadata:
author: "Josako"
date_added: "2025-01-08"

View File

@@ -0,0 +1,29 @@
version: "1.2.0"
name: "Rag Agent"
role: >
{tenant_name}'s Spokesperson. {custom_role}
goal: >
You get questions by a human correspondent, and give answers based on a given context, taking into account the history
of the current conversation.
{custom_goal}
backstory: >
You are the primary contact for {tenant_name}, and have been its spokesperson for a very long time. You are used to
addressing customers, prospects, press, ...
You are known by {name}, and can be addressed by this name, or 'you'.
You are a very good communicator who knows how to adapt your style to the audience you're interacting with.
You always stay correct and polite, whatever happens. And you ensure no discriminating language is used.
You are perfectly multilingual in all known languages, and do your best to answer questions in {language}, whatever
language the context provided to you is in. You are participating in a conversation, not writing e.g. an email or
essay. Do not include a salutation or closing greeting in your answer.
{custom_backstory}
full_model_name: "mistral.mistral-medium-latest"
allowed_models:
- "mistral.mistral-small-latest"
- "mistral.mistral-medium-latest"
- "mistral.magistral-medium-latest"
temperature: 0.3
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "An Agent that does RAG based on a user's question, RAG content & history"
changes: "Initial version"

View File

@@ -1,26 +0,0 @@
version: "1.0.0"
name: "Rag Communication Agent"
role: >
{company} Interaction Responsible. {custom_role}
goal: >
Your team has collected answers to a question asked. But it also created some additional questions to be asked. You
ensure the necessary answers are returned, and make an informed selection of the additional questions that can be
asked (combining them when appropriate), ensuring the human you're communicating to does not get overwhelmed.
{custom_goal}
backstory: >
You are the online communication expert for {company}. You handled a lot of online communications with both customers
and internal employees. You are a master in redacting one coherent reply in a conversation that includes all the
answers, and a selection of additional questions to be asked in a conversation. Although your backoffice team might
want to ask a myriad of questions, you understand that doesn't fit with the way humans communicate. You know how to
combine multiple related questions, and understand how to interweave the questions in the answers when related.
You are perfectly multilingual in all known languages, and do your best to answer questions in {language}, whatever
language the context provided to you is in. Also, ensure that questions asked do not contradict with the answers
given, or aren't obsolete given the answer provided.
You are participating in a conversation, not writing e.g. an email. Do not include a salutation or closing greeting
in your answer.
{custom_backstory}
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "An Agent that consolidates both answers and questions in a consistent reply"
changes: "Initial version"

View File

@@ -0,0 +1,24 @@
version: "1.0.0"
name: "Rag Proofreader Agent"
role: >
Proofreader for {tenant_name}. {custom_role}
goal: >
You get a prepared answer to be sent out, and adapt it to comply with best practices.
{custom_goal}
backstory: >
You are the primary contact for {tenant_name}, and have been its spokesperson for a very long time. You are used to
addressing customers, prospects, press, ...
You are known by {name}, and can be addressed by this name, or 'you'.
You review communications and ensure they are clear and follow best practices.
{custom_backstory}
full_model_name: "mistral.mistral-medium-latest"
allowed_models:
- "mistral.mistral-small-latest"
- "mistral.mistral-medium-latest"
- "mistral.magistral-medium-latest"
temperature: 0.4
metadata:
author: "Josako"
date_added: "2025-10-22"
description: "An Agent that does QA Activities on provided answers"
changes: "Initial version"

View File

@@ -1,22 +0,0 @@
version: "1.0.0"
name: "SPIN Sales Assistant"
role: >
Sales Assistant for {company} on {products}. {custom_role}
goal: >
Your main job is to help your sales specialist to analyze an ongoing conversation with a customer, and detect
SPIN-related information. {custom_goal}
backstory: >
You are a sales assistant for {company} on {products}. You are known by {name}, and can be addressed by this name, or you. You are
trained to understand and analyse ongoing conversations. You are proficient in detecting SPIN-related information in a
conversation.
SPIN stands for:
- Situation information - Understanding the customer's current context
- Problem information - Uncovering challenges and pain points
- Implication information - Exploring consequences of those problems
- Need-payoff information - Helping customers realize value of solutions
{custom_backstory}
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "An Agent that detects SPIN information in an ongoing conversation"
changes: "Initial version"

View File

@@ -1,25 +0,0 @@
version: "1.0.0"
name: "SPIN Sales Specialist"
role: >
Sales Specialist for {company} on {products}. {custom_role}
goal: >
Your main job is to do sales using the SPIN selling methodology in a first conversation with a potential customer.
{custom_goal}
backstory: >
You are a sales specialist for {company} on {products}. You are known by {name}, and can be addressed by this name,
or you. You have an assistant that provides you with already detected SPIN-information in an ongoing conversation. You
decide on follow-up questions for more in-depth information to ensure we get the required information that may lead to
selling {products}.
SPIN stands for:
- Situation information - Understanding the customer's current context
- Problem information - Uncovering challenges and pain points
- Implication information - Exploring consequences of those problems
- Need-payoff information - Helping customers realize value of solutions
{custom_backstory}
You are acquainted with the following product information:
{product_information}
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "An Agent that asks for Follow-up questions for SPIN-process"
changes: "Initial version"

View File

@@ -2,6 +2,9 @@ import os
from os import environ, path
from datetime import timedelta
import redis
import ssl
import tempfile
from ipaddress import ip_address
from common.utils.prompt_loader import load_prompt_templates
@@ -14,20 +17,145 @@ class Config(object):
SECRET_KEY = environ.get('SECRET_KEY')
COMPONENT_NAME = environ.get('COMPONENT_NAME')
# Database Settings ---------------------------------------------------------------------------
DB_HOST = environ.get('DB_HOST')
DB_USER = environ.get('DB_USER')
DB_PASS = environ.get('DB_PASS')
DB_NAME = environ.get('DB_NAME')
DB_PORT = environ.get('DB_PORT')
SQLALCHEMY_DATABASE_URI = f'postgresql+psycopg://{DB_USER}:{DB_PASS}@{DB_HOST}:{DB_PORT}/{DB_NAME}'
SQLALCHEMY_BINDS = {'public': SQLALCHEMY_DATABASE_URI}
# Database Engine Options (health checks and keepalives)
PGSQL_CERT_DATA = environ.get('PGSQL_CERT')
PGSQL_CA_CERT_PATH = None
if PGSQL_CERT_DATA:
_tmp = tempfile.NamedTemporaryFile(mode='w', delete=False, suffix='.pem')
_tmp.write(PGSQL_CERT_DATA)
_tmp.flush()
_tmp.close()
PGSQL_CA_CERT_PATH = _tmp.name
# Psycopg3 connect args (libpq parameters)
_CONNECT_ARGS = {
'connect_timeout': 5,
'keepalives': 1,
'keepalives_idle': 60,
'keepalives_interval': 30,
'keepalives_count': 5,
}
if PGSQL_CA_CERT_PATH:
_CONNECT_ARGS.update({
'sslmode': 'require',
'sslrootcert': PGSQL_CA_CERT_PATH,
})
SQLALCHEMY_ENGINE_OPTIONS = {
'pool_pre_ping': True,
'pool_recycle': 180,
'pool_use_lifo': True,
'connect_args': _CONNECT_ARGS,
}
# Redis Settings ------------------------------------------------------------------------------
REDIS_URL = environ.get('REDIS_URL')
REDIS_PORT = environ.get('REDIS_PORT', '6379')
REDIS_USER = environ.get('REDIS_USER')
REDIS_PASS = environ.get('REDIS_PASS')
REDIS_CERT_DATA = environ.get('REDIS_CERT')
REDIS_SCHEME = None
# Determine if REDIS_URL is an IP; use it to control hostname checking
REDIS_IS_IP = False
try:
ip_address(REDIS_URL)
REDIS_IS_IP = True
except Exception:
REDIS_IS_IP = False
REDIS_SSL_CHECK_HOSTNAME = not REDIS_IS_IP
# Write CA once to a file, expose path
REDIS_CA_CERT_PATH = None
if REDIS_CERT_DATA:
_tmp = tempfile.NamedTemporaryFile(mode='w', delete=False, suffix='.pem')
_tmp.write(REDIS_CERT_DATA)
_tmp.flush()
_tmp.close()
REDIS_CA_CERT_PATH = _tmp.name
if not REDIS_CERT_DATA: # We are in a simple dev/test environment
REDIS_SCHEME = 'redis'
REDIS_BASE_URI = f'redis://{REDIS_URL}:{REDIS_PORT}'
else: # We are in a scaleway environment, providing name, user and certificate
REDIS_SCHEME = 'rediss'
REDIS_BASE_URI = f'rediss://{REDIS_USER}:{REDIS_PASS}@{REDIS_URL}:{REDIS_PORT}'
# Central SSL options dict for reuse (Celery/Dogpile/etc.)
REDIS_SSL_OPTIONS = None
if REDIS_CERT_DATA and REDIS_CA_CERT_PATH:
REDIS_SSL_OPTIONS = {
'ssl_cert_reqs': ssl.CERT_REQUIRED,
'ssl_ca_certs': REDIS_CA_CERT_PATH,
'ssl_check_hostname': REDIS_SSL_CHECK_HOSTNAME,
}
# PubSub/EPT specific configuration (dedicated pool)
REDIS_SPECIALIST_EXEC_DB = environ.get('REDIS_SPECIALIST_EXEC_DB', '0')
REDIS_PUBSUB_MAX_CONNECTIONS = int(environ.get('REDIS_PUBSUB_MAX_CONNECTIONS', '200'))
REDIS_PUBSUB_SOCKET_TIMEOUT = float(environ.get('REDIS_PUBSUB_SOCKET_TIMEOUT', '10'))
REDIS_PUBSUB_CONNECT_TIMEOUT = float(environ.get('REDIS_PUBSUB_CONNECT_TIMEOUT', '3'))
REDIS_PREFIXES = {
'celery_app': 'celery:app:',
'celery_chat': 'celery:chat:',
'session': 'session:',
'cache_workers': 'cache:workers:',
'pubsub_execution': 'pubsub:execution:',
'startup_ops': 'startup:ops:',
}
# Celery Redis settings
CELERY_BROKER_URL = f'{REDIS_BASE_URI}/0'
CELERY_RESULT_BACKEND = f'{REDIS_BASE_URI}/0'
CELERY_BROKER_URL_CHAT = f'{REDIS_BASE_URI}/0'
CELERY_RESULT_BACKEND_CHAT = f'{REDIS_BASE_URI}/0'
# SSE PubSub settings
SPECIALIST_EXEC_PUBSUB = f"{REDIS_BASE_URI}/{REDIS_SPECIALIST_EXEC_DB}"
# eveai_model cache Redis setting
MODEL_CACHE_URL = f'{REDIS_BASE_URI}/0'
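# Illustrative derivations (values assumed): with REDIS_USER='u', REDIS_PASS='p',
# REDIS_URL='redis.internal', REDIS_PORT='6380' and a certificate present:
#   REDIS_BASE_URI         -> 'rediss://u:p@redis.internal:6380'
#   CELERY_BROKER_URL      -> 'rediss://u:p@redis.internal:6380/0'
#   SPECIALIST_EXEC_PUBSUB -> 'rediss://u:p@redis.internal:6380/' + REDIS_SPECIALIST_EXEC_DB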
# Session Settings with Redis -----------------------------------------------------------------
SESSION_TYPE = 'redis'
SESSION_PERMANENT = True
SESSION_USE_SIGNER = True
PERMANENT_SESSION_LIFETIME = timedelta(minutes=60)
SESSION_REFRESH_EACH_REQUEST = True
# Configure SESSION_REDIS with SSL when cert is provided
if REDIS_CERT_DATA and REDIS_CA_CERT_PATH:
SESSION_REDIS = redis.from_url(
f'{REDIS_BASE_URI}/0',  # REDIS_BASE_URI is already rediss://user:pass@host:port
ssl_cert_reqs=ssl.CERT_REQUIRED,
ssl_ca_certs=REDIS_CA_CERT_PATH,
ssl_check_hostname=REDIS_SSL_CHECK_HOSTNAME,
)
else:
SESSION_REDIS = redis.from_url(f'{REDIS_BASE_URI}/0')
SESSION_KEY_PREFIX = f'session_{COMPONENT_NAME}:'
SESSION_COOKIE_NAME = f'{COMPONENT_NAME}_session'
SESSION_COOKIE_DOMAIN = None  # Let Flask determine this automatically
SESSION_COOKIE_PATH = '/'
SESSION_COOKIE_HTTPONLY = True
SESSION_COOKIE_SECURE = False  # True for production with HTTPS
SESSION_COOKIE_SAMESITE = 'Lax'
REMEMBER_COOKIE_SAMESITE = 'strict'
WTF_CSRF_ENABLED = True
WTF_CSRF_TIME_LIMIT = None
WTF_CSRF_SSL_STRICT = False # Set to True if using HTTPS
# flask-security-too settings -----------------------------------------------------------------
# SECURITY_URL_PREFIX = '/admin'
SECURITY_LOGIN_URL = '/admin/login'
SECURITY_LOGOUT_URL = '/admin/logout'
@@ -44,7 +172,7 @@ class Config(object):
SECURITY_CONFIRMABLE = True
SECURITY_TRACKABLE = True
SECURITY_PASSWORD_COMPLEXITY_CHECKER = 'zxcvbn'
SECURITY_POST_LOGIN_VIEW = '/admin/user/tenant_overview'
SECURITY_RECOVERABLE = True
SECURITY_EMAIL_SENDER = "eveai_super@flow-it.net"
SECURITY_EMAIL_SUBJECT_PASSWORD_RESET = 'Reset Your Password'
@@ -62,10 +190,10 @@ class Config(object):
SECURITY_CSRF_HEADER = 'X-XSRF-TOKEN'
WTF_CSRF_CHECK_DEFAULT = False
# file upload settings ------------------------------------------------------------------------
MAX_CONTENT_LENGTH = 50 * 1024 * 1024
# supported languages -------------------------------------------------------------------------
SUPPORTED_LANGUAGE_DETAILS = {
"English": {
"iso 639-1": "en",
@@ -152,10 +280,10 @@ class Config(object):
SUPPORTED_LANGUAGES_FULL = list(SUPPORTED_LANGUAGE_DETAILS.keys())
SUPPORTED_LANGUAGE_ISO639_1_LOOKUP = {lang_details["iso 639-1"]: lang_name for lang_name, lang_details in SUPPORTED_LANGUAGE_DETAILS.items()}
# supported currencies ------------------------------------------------------------------------
SUPPORTED_CURRENCIES = ['', '$']
# supported LLMs & settings -------------------------------------------------------------------
# SUPPORTED_EMBEDDINGS = ['openai.text-embedding-3-small', 'openai.text-embedding-3-large', 'mistral.mistral-embed']
SUPPORTED_EMBEDDINGS = ['mistral.mistral-embed']
SUPPORTED_LLMS = ['mistral.mistral-large-latest', 'mistral.mistral-medium-latest', 'mistral.mistral-small-latest']
@@ -166,69 +294,33 @@ class Config(object):
# Environment Loaders
OPENAI_API_KEY = environ.get('OPENAI_API_KEY')
MISTRAL_API_KEY = environ.get('MISTRAL_API_KEY')
GROQ_API_KEY = environ.get('GROQ_API_KEY')
ANTHROPIC_API_KEY = environ.get('ANTHROPIC_API_KEY')
# Celery settings (see above for Redis settings) ----------------------------------------------
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TIMEZONE = 'UTC'
CELERY_ENABLE_UTC = True
# SocketIO settings
# SOCKETIO_ASYNC_MODE = 'threading'
# SOCKETIO_ASYNC_MODE = 'gevent'
# JWT settings --------------------------------------------------------------------------------
JWT_SECRET_KEY = environ.get('JWT_SECRET_KEY')
JWT_ACCESS_TOKEN_EXPIRES = timedelta(hours=1) # Set token expiry to 1 hour
JWT_ACCESS_TOKEN_EXPIRES_DEPLOY = timedelta(hours=24) # Set long-lived token for deployment
# API Encryption
# API Encryption ------------------------------------------------------------------------------
API_ENCRYPTION_KEY = environ.get('API_ENCRYPTION_KEY')
# Fallback Algorithms
FALLBACK_ALGORITHMS = [
"RAG_TENANT",
"RAG_WIKIPEDIA",
"RAG_GOOGLE",
"LLM"
]
# Interaction algorithms
INTERACTION_ALGORITHMS = {
"RAG_TENANT": {"name": "RAG_TENANT", "description": "Algorithm using only information provided by the tenant"},
"RAG_WIKIPEDIA": {"name": "RAG_WIKIPEDIA", "description": "Algorithm using information provided by Wikipedia"},
"RAG_GOOGLE": {"name": "RAG_GOOGLE", "description": "Algorithm using information provided by Google"},
"LLM": {"name": "LLM", "description": "Algorithm using information integrated in the used LLM"}
}
# Email settings for API key notifications ----------------------------------------------------
PROMOTIONAL_IMAGE_URL = 'https://askeveai.com/wp-content/uploads/2024/07/Evie-Call-scaled.jpg' # Replace with your actual URL
# Langsmith settings
LANGCHAIN_TRACING_V2 = True
LANGCHAIN_ENDPOINT = 'https://api.smith.langchain.com'
LANGCHAIN_PROJECT = "eveai"
# Type Definitions ----------------------------------------------------------------------------
TENANT_TYPES = ['Active', 'Demo', 'Inactive', 'Test']
CONSENT_TYPES = ["Data Privacy Agreement", "Terms & Conditions"]
# CONSENT_TYPE_MAP maps names with the actual base folders the consent documents are stored in
CONSENT_TYPE_MAP = {
"Data Privacy Agreement": "dpa",
"Terms & Conditions": "terms",
}
# The maximum number of seconds allowed for audio compression (to save resources)
MAX_COMPRESSION_DURATION = 60*10 # 10 minutes
@@ -261,9 +353,32 @@ class Config(object):
# Entitlement Constants
ENTITLEMENTS_MAX_PENDING_DAYS = 5 # Defines the maximum number of days a pending entitlement can be active
# Content Directory for static content like the changelog, terms & conditions, privacy statement, ...
# Content Directory for static content like the changelog, terms & conditions, dpa statement, ...
CONTENT_DIR = '/app/content'
# Ensure health check endpoints are exempt from CSRF protection
SECURITY_EXEMPT_URLS = [
r'^/healthz($|/.*)',
r'^/_healthz($|/.*)',
]
SECURITY_LOGIN_WITHOUT_VIEWS = True  # This prevents automatic redirects
# Define the nginx prefix used for the specific apps
CHAT_CLIENT_PREFIX = 'chat-client/chat/'
EVEAI_APP_PREFIX = 'admin/'
# Whether to use dynamic fallback (X-Forwarded-Prefix/Referer) when EVEAI_APP_PREFIX is empty
EVEAI_USE_DYNAMIC_PREFIX_FALLBACK = False
# Consent guard configuration (config-driven whitelist)
# List of endpoint names to exempt from the global consent guard
# Example: ['security_bp.login', 'security_bp.logout', 'user_bp.tenant_consent']
CONSENT_GUARD_EXEMPT_ENDPOINTS = []
# List of endpoint name prefixes; any endpoint starting with one of these is exempt
# Example: ['security_bp.', 'healthz_bp.']
CONSENT_GUARD_EXEMPT_PREFIXES = []
# TTL for consent status stored in session (seconds)
CONSENT_SESSION_TTL_SECONDS = int(environ.get('CONSENT_SESSION_TTL_SECONDS', '45'))
class DevConfig(Config):
DEVELOPMENT = True
@@ -271,61 +386,16 @@ class DevConfig(Config):
FLASK_DEBUG = True
EXPLAIN_TEMPLATE_LOADING = False
# Define the static path
STATIC_URL = None
# PATH settings
ffmpeg_path = '/usr/bin/ffmpeg'
# OBJECT STORAGE
OBJECT_STORAGE_TYPE = 'MINIO'
OBJECT_STORAGE_TENANT_BASE = 'folder'
OBJECT_STORAGE_BUCKET_NAME = 'eveai-tenants'
# MINIO
MINIO_ENDPOINT = 'minio:9000'
MINIO_ACCESS_KEY = 'minioadmin'
@@ -333,6 +403,56 @@ class DevConfig(Config):
MINIO_USE_HTTPS = False
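# Editor's sketch (illustrative only): DevConfig splits concerns across Redis
# databases (broker/results on 0, sessions on 2, chat on 3, worker cache on 4,
# pub/sub on 5, model cache on 6). A minimal example of consuming those settings,
# assuming this Config-object style; names below are assumptions.
import redis
from celery import Celery

cfg = DevConfig

celery_app = Celery('eveai_app',
                    broker=cfg.CELERY_BROKER_URL,       # redis://redis:6379/0
                    backend=cfg.CELERY_RESULT_BACKEND)  # redis://redis:6379/0
chat_worker_cache = redis.from_url(cfg.CHAT_WORKER_CACHE_URL)  # database 4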
class TestConfig(Config):
DEVELOPMENT = True
DEBUG = True
FLASK_DEBUG = True
EXPLAIN_TEMPLATE_LOADING = False
# Define the static path
STATIC_URL = None
# PATH settings
ffmpeg_path = '/usr/bin/ffmpeg'
# OBJECT STORAGE
OBJECT_STORAGE_TYPE = 'MINIO'
OBJECT_STORAGE_TENANT_BASE = 'folder'
OBJECT_STORAGE_BUCKET_NAME = 'eveai-tenants'
# MINIO
MINIO_ENDPOINT = 'minio:9000'
MINIO_ACCESS_KEY = 'minioadmin'
MINIO_SECRET_KEY = 'minioadmin'
MINIO_USE_HTTPS = False
class StagingConfig(Config):
DEVELOPMENT = False
DEBUG = True
FLASK_DEBUG = True
EXPLAIN_TEMPLATE_LOADING = False
# Define the static path
STATIC_URL = 'https://evie-staging-static.askeveai.com/'
# PATH settings
ffmpeg_path = '/usr/bin/ffmpeg'
# OBJECT STORAGE
OBJECT_STORAGE_TYPE = 'SCALEWAY'
OBJECT_STORAGE_TENANT_BASE = 'folder'
OBJECT_STORAGE_BUCKET_NAME = 'eveai-staging'
# MINIO
MINIO_ENDPOINT = environ.get('MINIO_ENDPOINT')
MINIO_ACCESS_KEY = environ.get('MINIO_ACCESS_KEY')
MINIO_SECRET_KEY = environ.get('MINIO_SECRET_KEY')
MINIO_USE_HTTPS = True
# Push gateway grouping elements
pod_name = os.getenv('POD_NAME')
pod_namespace = os.getenv('POD_NAMESPACE')
class ProdConfig(Config):
DEVELOPMENT = False
DEBUG = False
@@ -345,53 +465,10 @@ class ProdConfig(Config):
WTF_CSRF_SSL_STRICT = True # Set to True if using HTTPS
# Define the nginx prefix used for the specific apps
EVEAI_APP_LOCATION_PREFIX = '/admin'
EVEAI_CHAT_LOCATION_PREFIX = EVEAI_APP_LOCATION_PREFIX
# flask-mailman settings
MAIL_USERNAME = 'eveai_super@flow-it.net'
MAIL_PASSWORD = '$6xsWGbNtx$CFMQZqc*'
# file upload settings
# UPLOAD_FOLDER = '/app/tenant_files'
# Redis Settings
REDIS_USER = environ.get('REDIS_USER')
REDIS_PASS = environ.get('REDIS_PASS')
REDIS_URL = environ.get('REDIS_URL')
REDIS_PORT = environ.get('REDIS_PORT', '6379')
REDIS_BASE_URI = f'redis://{REDIS_USER}:{REDIS_PASS}@{REDIS_URL}:{REDIS_PORT}'
# Celery settings
# eveai_app Redis Settings
CELERY_BROKER_URL = f'{REDIS_BASE_URI}/0'
CELERY_RESULT_BACKEND = f'{REDIS_BASE_URI}/0'
# eveai_chat Redis Settings
CELERY_BROKER_URL_CHAT = f'{REDIS_BASE_URI}/3'
CELERY_RESULT_BACKEND_CHAT = f'{REDIS_BASE_URI}/3'
# eveai_chat_workers cache Redis Settings
CHAT_WORKER_CACHE_URL = f'{REDIS_BASE_URI}/4'
# specialist execution pub/sub Redis Settings
SPECIALIST_EXEC_PUBSUB = f'{REDIS_BASE_URI}/5'
# Session settings
SESSION_REDIS = redis.from_url(f'{REDIS_BASE_URI}/2')
# SocketIO settings
# SOCKETIO_MESSAGE_QUEUE = f'{REDIS_BASE_URI}/1'
# SOCKETIO_CORS_ALLOWED_ORIGINS = '*'
# SOCKETIO_LOGGER = True
# SOCKETIO_ENGINEIO_LOGGER = True
# SOCKETIO_PING_TIMEOUT = 20000
# SOCKETIO_PING_INTERVAL = 25000
# SOCKETIO_MAX_IDLE_TIME = timedelta(minutes=60) # Changing this value ==> change maxConnectionDuration value in
# eveai-chat-widget.js
# Google Cloud settings
GC_PROJECT_NAME = 'eveai-420711'
GC_LOCATION = 'europe-west1'
GC_KEY_RING = 'eveai-chat'
GC_CRYPTO_KEY = 'envelope-encryption-key'
# Define the static path
STATIC_URL = 'https://evie-prod-static.askeveai.com'
# PATH settings
ffmpeg_path = '/usr/bin/ffmpeg'
@@ -406,6 +483,8 @@ class ProdConfig(Config):
def get_config(config_name='dev'):
configs = {
'dev': DevConfig,
'test': TestConfig,
'staging': StagingConfig,
'prod': ProdConfig,
'default': DevConfig,
}
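# Editor's sketch: typical use of this factory with Flask's from_object. The
# FLASK_CONFIG environment variable name is an assumption for this example.
from os import environ
from flask import Flask

app = Flask(__name__)
app.config.from_object(get_config(environ.get('FLASK_CONFIG', 'dev')))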

View File

@@ -82,6 +82,26 @@ configuration:
description: "Human Message Text Color"
type: "color"
required: false
human_message_inactive_text_color:
name: "Human Message Inactive Text Color"
description: "Human Message Inactive Text Color"
type: "color"
required: false
tab_background:
name: "Tab Background Color"
description: "Tab Background Color"
type: "color"
required: false
tab_icon_active_color:
name: "Tab Icon Active Color"
description: "Tab Icon Active Color"
type: "color"
required: false
tab_icon_inactive_color:
name: "Tab Icon Inactive Color"
description: "Tab Icon Inactive Color"
type: "color"
required: false
metadata:
author: "Josako"
date_added: "2024-06-06"

View File

@@ -9,8 +9,11 @@ content: >
'{context}'
These are best practices you should follow:
- Do not translate text in between double square brackets, as these are names or terms that need to remain intact. Remove the square brackets in the translation!
- We use inline tags (custom HTML/XML-like tags). Ensure the tags themselves are not translated and remain intact in the translation. The text in between the tags should be translated, e.g. "<terms_and_conditions>Terms & Conditions</terms_and_conditions>" translates in Dutch to "<terms_and_conditions>Gebruiksvoorwaarden</terms_and_conditions>"
- Remove the triple quotes in your translation!
I only want you to return the translation. No explanation, no options. I need to be able to directly use your answer
without further interpretation. If more than one option is available, present me with the most probable one.
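Because the prompt demands that inline tags survive translation verbatim, a cheap post-check can flag violations before a translation is accepted. A sketch; the regex and helper name are illustrative, not part of the repository:

import re

TAG_RE = re.compile(r'</?[A-Za-z_][\w-]*>')

def inline_tags_preserved(source: str, translation: str) -> bool:
    """True when the translation contains exactly the same inline tags as the source."""
    return sorted(TAG_RE.findall(source)) == sorted(TAG_RE.findall(translation))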

View File

@@ -6,8 +6,11 @@ content: >
into '{target_language}'.
These are best practices you should follow:
- Do not translate text in between double square brackets, as these are names or terms that need to remain intact. Remove the square brackets in the translation!
- We use inline tags (custom HTML/XML-like tags). Ensure the tags themselves are not translated and remain intact in the translation. The text in between the tags should be translated, e.g. "<terms_and_conditions>Terms & Conditions</terms_and_conditions>" translates in Dutch to "<terms_and_conditions>Gebruiksvoorwaarden</terms_and_conditions>"
- Remove the triple quotes in your translation!
I only want you to return the translation. No explanation, no options. I need to be able to directly use your answer
without further interpretation. If more than one option is available, present me with the most probable one.

View File

@@ -24,6 +24,11 @@ fields:
type: "boolean"
description: "Consent"
required: true
meta:
kind: "consent"
consentRich: "I agree with the <terms>Terms and Conditions</terms> and the <dpa>Privacy Statement</dpa> of Ask Eve AI"
ariaPrivacy: "Open the privacy statement in a modal dialog"
ariaTerms: "Open the terms and conditions in a modal dialog"
metadata:
author: "Josako"
date_added: "2025-07-29"
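A sketch of how the consentRich inline tags might be rendered into accessible links in the client. The tag-to-URL mapping and the function name are hypothetical:

import re

# Hypothetical mapping from inline tag name to the consent document route.
CONSENT_LINKS = {'terms': '/consents/terms', 'dpa': '/consents/dpa'}

def render_consent_rich(text: str) -> str:
    """Replace <terms>...</terms> and <dpa>...</dpa> with anchor elements."""
    def repl(m: re.Match) -> str:
        tag, label = m.group(1), m.group(2)
        return f'<a href="{CONSENT_LINKS[tag]}" target="_blank">{label}</a>'
    return re.sub(r'<(terms|dpa)>(.*?)</\1>', repl, text)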

View File

@@ -11,7 +11,7 @@ fields:
email:
name: "Email"
type: "str"
description: "Your Name"
description: "Your Email"
required: true
phone:
name: "Phone Number"
@@ -28,16 +28,6 @@ fields:
type: "str"
description: "Job Title"
required: false
address:
name: "Address"
type: "str"
description: "Your Address"
required: false
zip:
name: "Postal Code"
type: "str"
description: "Postal Code"
required: false
city:
name: "City"
type: "str"

View File

@@ -0,0 +1,81 @@
version: "1.2.0"
name: "RAG Specialist"
framework: "crewai"
chat: true
configuration:
name:
name: "name"
type: "str"
description: "The name the specialist is called upon."
required: true
tone_of_voice:
name: "Tone of Voice"
description: "The tone of voice the specialist uses to communicate"
type: "enum"
allowed_values: [ "Professional & Neutral", "Warm & Empathetic", "Energetic & Enthusiastic", "Accessible & Informal", "Expert & Trustworthy", "No-nonsense & Goal-driven" ]
default: "Professional & Neutral"
required: true
language_level:
name: "Language Level"
description: "Language level to be used when communicating, relating to CEFR levels"
type: "enum"
allowed_values: [ "Basic", "Standard", "Professional" ]
default: "Standard"
required: true
response_depth:
name: "Response Depth"
description: "Response depth to be used when communicating"
type: "enum"
allowed_values: [ "Concise", "Balanced", "Detailed",]
default: "Balanced"
required: true
conversation_purpose:
name: "Conversation Purpose"
description: "Purpose of the conversation, resulting in communication style"
type: "enum"
allowed_values: [ "Informative", "Persuasive", "Supportive", "Collaborative" ]
default: "Informative"
required: true
welcome_message:
name: "Welcome Message"
type: "string"
description: "Welcome Message to be given to the end user"
required: false
arguments:
language:
name: "Language"
type: "str"
description: "Language code to be used for receiving questions and giving answers"
required: true
results:
rag_output:
answer:
name: "answer"
type: "str"
description: "Answer to the query"
required: true
citations:
name: "citations"
type: "List[str]"
description: "List of citations"
required: false
insufficient_info:
name: "insufficient_info"
type: "bool"
description: "Whether or not the query is insufficient info"
required: true
agents:
- type: "RAG_AGENT"
version: "1.2"
- type: "RAG_PROOFREADER_AGENT"
version: "1.0"
tasks:
- type: "RAG_TASK"
version: "1.1"
- type: "RAG_PROOFREADING_TASK"
version: "1.0"
metadata:
author: "Josako"
date_added: "2025-01-08"
changes: "Initial version"
description: "A Specialist that performs Q&A activities"

View File

@@ -1,183 +0,0 @@
version: "1.0.0"
name: "Spin Sales Specialist"
framework: "crewai"
chat: true
configuration:
name:
name: "name"
type: "str"
description: "The name the specialist is called upon."
required: true
company:
name: "company"
type: "str"
description: "The name of your company. If not provided, your tenant's name will be used."
required: false
products:
name: "products"
type: "List[str]"
description: "The products or services you're providing"
required: false
product_information:
name: "product_information"
type: "text"
description: "Information on the products you are selling, such as ICP (Ideal Customer Profile), Pitch, ..."
required: false
engagement_options:
name: "engagement_options"
type: "text"
description: "Engagement options such as email, phone number, booking link, ..."
tenant_language:
name: "tenant_language"
type: "str"
description: "The language code used for internal information. If not provided, the tenant's default language will be used"
required: false
nr_of_questions:
name: "nr_of_questions"
type: "int"
description: "The maximum number of questions to formulate extra questions"
required: true
default: 3
arguments:
language:
name: "Language"
type: "str"
description: "Language code to be used for receiving questions and giving answers"
required: true
query:
name: "query"
type: "str"
description: "Query or response to process"
required: true
identification:
name: "identification"
type: "text"
description: "Initial identification information when available"
required: false
results:
rag_output:
answer:
name: "answer"
type: "str"
description: "Answer to the query"
required: true
citations:
name: "citations"
type: "List[str]"
description: "List of citations"
required: false
insufficient_info:
name: "insufficient_info"
type: "bool"
description: "Whether or not the query is insufficient info"
required: true
spin:
situation:
name: "situation"
type: "str"
description: "A description of the customer's current situation / context"
required: false
problem:
name: "problem"
type: "str"
description: "The current problems the customer is facing, for which he/she seeks a solution"
required: false
implication:
name: "implication"
type: "str"
description: "A list of implications"
required: false
needs:
name: "needs"
type: "str"
description: "A list of needs"
required: false
additional_info:
name: "additional_info"
type: "str"
description: "Additional information that may be commercially interesting"
required: false
lead_info:
lead_personal_info:
name:
name: "name"
type: "str"
description: "name of the lead"
required: "true"
job_title:
name: "job_title"
type: "str"
description: "job title"
required: false
email:
name: "email"
type: "str"
description: "lead email"
required: "false"
phone:
name: "phone"
type: "str"
description: "lead phone"
required: false
additional_info:
name: "additional_info"
type: "str"
description: "additional info on the lead"
required: false
lead_company_info:
company_name:
name: "company_name"
type: "str"
description: "Name of the lead company"
required: false
industry:
name: "industry"
type: "str"
description: "The industry of the company"
required: false
company_size:
name: "company_size"
type: "int"
description: "The size of the company"
required: false
company_website:
name: "company_website"
type: "str"
description: "The main website for the company"
required: false
additional_info:
name: "additional_info"
type: "str"
description: "Additional information that may be commercially interesting"
required: false
agents:
- type: "RAG_AGENT"
version: "1.0"
- type: "RAG_COMMUNICATION_AGENT"
version: "1.0"
- type: "SPIN_DETECTION_AGENT"
version: "1.0"
- type: "SPIN_SALES_SPECIALIST_AGENT"
version: "1.0"
- type: "IDENTIFICATION_AGENT"
version: "1.0"
- type: "RAG_COMMUNICATION_AGENT"
version: "1.0"
tasks:
- type: "RAG_TASK"
version: "1.0"
- type: "SPIN_DETECT_TASK"
version: "1.0"
- type: "SPIN_QUESTIONS_TASK"
version: "1.0"
- type: "IDENTIFICATION_DETECTION_TASK"
version: "1.0"
- type: "IDENTIFICATION_QUESTIONS_TASK"
version: "1.0"
- type: "RAG_CONSOLIDATION_TASK"
version: "1.0"
metadata:
author: "Josako"
date_added: "2025-01-08"
changes: "Initial version"
description: "A Specialist that performs both Q&A as SPIN (Sales Process) activities"

File diff suppressed because one or more lines are too long

[Binary image diff: 387 KiB]

View File

@@ -0,0 +1,6 @@
{
"dist/chat-client.js": "dist/chat-client.825210dd.js",
"dist/chat-client.css": "dist/chat-client.568d7be7.css",
"dist/main.js": "dist/main.6a617099.js",
"dist/main.css": "dist/main.7182aac3.css"
}
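This manifest maps logical asset names to content-hashed builds for cache busting. A sketch of resolving the hashed filename at render time; the helper is illustrative:

import json

def hashed_asset(manifest_path: str, logical_name: str) -> str:
    """Resolve e.g. 'dist/chat-client.js' to its content-hashed counterpart."""
    with open(manifest_path, encoding='utf-8') as fh:
        manifest = json.load(fh)
    return manifest.get(logical_name, logical_name)  # fall back to the logical name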

View File

@@ -1,35 +0,0 @@
version: "1.0.0"
name: "Email Lead Draft Creation"
task_description: >
Craft a highly personalized email using the lead's name, job title, company information, and any relevant personal or
company achievements when available. The email should speak directly to the lead's interests and the needs
of their company.
This mail is the consequence of a first conversation. You have information available from that conversation in the
- SPIN-context (in between triple %)
- personal and company information (in between triple $)
Information might be missing however, as it might not be gathered in that first conversation.
Don't use any salutations or closing remarks, nor too complex sentences.
Our Company and Product:
- Company Name: {company}
- Products: {products}
- Product information: {product_information}
{customer_role}'s Identification:
$$${Identification}$$$
SPIN context:
%%%{SPIN}%%%
{custom_description}
expected_output: >
A personalized email draft that:
- Addresses the lead by name
- Acknowledges their role and company
- Highlights how {company} can meet their specific needs or interests
{customer_expected_output}
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "Email Drafting Task towards a Lead"
changes: "Initial version"

View File

@@ -1,28 +0,0 @@
version: "1.0.0"
name: "Email Lead Engagement Creation"
task_description: >
Review a personalized email and optimize it with strong CTAs and engagement hooks. Keep in mind that this email is
the consequence of a first conversation.
Don't use any salutations or closing remarks, nor too complex sentences. Keep it short and to the point.
Ensure the email encourages the lead to schedule a meeting or take
another desired action immediately.
Our Company and Product:
- Company Name: {company}
- Products: {products}
- Product information: {product_information}
Engagement options:
{engagement_options}
{custom_description}
expected_output: >
An optimized email ready for sending, complete with:
- Strong CTAs
- Strategically placed engagement hooks that encourage immediate action
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "Make an Email draft more engaging"
changes: "Initial version"

View File

@@ -1,24 +0,0 @@
version: "1.0.0"
name: "Identification Gathering"
task_description: >
You are asked to gather lead information in a conversation with a new prospect. This is information about the person
participating in the conversation, and information on the company he or she is working for. Try to be as precise as
possible.
Take into account information already gathered in the historic lead info (between triple backquotes) and add
information found in the latest reply. Also, some identification information may be given by the end user.
historic lead info:
```{historic_lead_info}```
latest reply:
{query}
identification:
{identification}
{custom_description}
expected_output: >
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "A Task that gathers identification information from a conversation"
changes: "Initial version"

View File

@@ -1,19 +0,0 @@
version: "1.0.0"
name: "Define Identification Questions"
task_description: >
Collect the identification information gathered by your team mates. Ensure no information in the historic lead
information (in between triple backquotes) and the latest reply of the user is lost.
Define questions to be asked to complete the personal and company information for the end user in the conversation.
historic lead info:
```{historic_lead_info}```
latest reply:
{query}
{custom_description}
expected_output: >
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "A Task to define identification (person & company) questions"
changes: "Initial version"

View File

@@ -1,27 +0,0 @@
version: "1.0.0"
name: "Rag Consolidation"
task_description: >
Your teams have collected answers to a user's query (in between triple backquotes), and collected additional follow-up
questions (in between triple %) to reach their goals. Ensure the answers are provided, and select a maximum of
{nr_of_questions} out of the additional questions to be asked in order not to overwhelm the user. The questions are
in no specific order, so don't just pick the first ones. Make a good mixture of different types of questions,
different topics or subjects!
Questions are to be asked when your team proposes questions. You ensure both answers and additional questions are
bundled into 1 clear communication back to the user. Use {language} for your consolidated communication.
Be sure to format your answer in markdown when appropriate. Ensure enumerations or bulleted lists are formatted as
lists in markdown.
{custom_description}
Answers:
```{prepared_answers}```
Additional Questions:
%%%{additional_questions}%%%
expected_output: >
{custom_expected_output}
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "A Task to consolidate questions and answers"
changes: "Initial version"

View File

@@ -0,0 +1,26 @@
version: "1.0.0"
name: "RAG QA Task"
task_description: >
You have to improve this first draft, which answers the following question:
£££
{question}
£££
We want you to pay extra attention and adapt to the following requirements:
- The answer uses the following Tone of Voice: {tone_of_voice}, i.e. {tone_of_voice_context}
- The answer is adapted to the following Language Level: {language_level}, i.e. {language_level_context}
- The answer is suited to be {conversation_purpose}, i.e. {conversation_purpose_context}
- And we want the answer to have the following depth: {response_depth}, i.e. {response_depth_context}
Ensure the following {language} is used.
If there was insufficient information to answer, answer "I have insufficient information to answer this
question." and give the appropriate indication.
expected_output: >
Your answer.
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "A Task that gives RAG-based answers"
changes: "Initial version"

View File

@@ -3,30 +3,43 @@ name: "RAG Task"
task_description: >
Answer the following question (in between triple £):
£££
{question}
£££
Base your answer on the context below, in between triple '$'.
Take into account the history of the conversation, in between triple '€'. The parts in the history preceded by 'HUMAN'
indicate the interactions by the end user, the parts preceded by 'AI' are your interactions.
Best Practices are:
- Focus your answer on the question at hand.
- Answer the provided question, combining elements of the provided context.
- Always focus your answer on the actual question.
- Try not to repeat your historic answers, unless absolutely necessary.
- Always be friendly and helpful for the end user.
Tune your answer with the following:
- You use the following Tone of Voice for your answer: {tone_of_voice}, i.e. {tone_of_voice_context}
- You use the following Language Level for your answer: {language_level}, i.e. {language_level_context}
- The purpose of the conversation is to be {conversation_purpose}, i.e. {conversation_purpose_context}
- We expect you to answer with the following depth: {response_depth}, i.e. {response_depth_context}
{custom_description}
Use the following {language} in your communication.
If the question cannot be answered using the given context, answer "I have insufficient information to answer this
question." and give the appropriate indication.
Context:
$$$
{context}
$$$
History:
€€€
{history}
€€€
expected_output: >
Your answer.
metadata:

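Task descriptions like the one above are placeholder templates ({question}, {context}, {history}, plus the tuning parameters). A sketch of rendering one before execution; str.format as the templating mechanism is an assumption:

def render_task(template: str, **params: str) -> str:
    """Fill the task_description placeholders; a missing value raises KeyError early."""
    return template.format(**params)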
View File

@@ -1,18 +0,0 @@
version: "1.0.0"
name: "SPIN Information Detection"
task_description: >
Complement the historic SPIN context (in between triple backquotes) with information found in the latest reply of the
end user.
{custom_description}
Use the following {tenant_language} to define the SPIN-elements.
Historic SPIN:
```{historic_spin}```
Latest reply:
{query}
expected_output: >
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "A Task that performs SPIN Information Detection"
changes: "Initial version"

View File

@@ -1,20 +0,0 @@
version: "1.0.0"
name: "SPIN Question Identification"
task_description: >
Revise the final SPIN provided by your colleague, and ensure no information is lost from the historic SPIN and the
latest reply from the user. Define the top questions that need to be asked to understand the full SPIN context
of the customer. If you think this user could be a potential customer, please indicate so.
{custom_description}
Use the following {tenant_language} to define the SPIN-elements. If you have a satisfying SPIN context, just skip and
don't ask for more information or confirmation.
Historic SPIN:
```{historic_spin}```
Latest reply:
{query}
expected_output: >
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "A Task that identifies questions to complete the SPIN context in a conversation"
changes: "Initial version"

View File

@@ -10,11 +10,22 @@ task_description: >
€€€{history}€€€
(In this history, user interactions are preceded by 'HUMAN', and your interactions with 'AI'.)
Take into account the last question asked by you, the AI.
Check if the user has given an affirmative answer to that last question or not.
Please note that this answer can be very short:
- Affirmative answers: e.g. Yes, OK, Sure, Of Course
- Negative answers: e.g. No, not really, No, I'd rather not.
Also note that users may use emoticons, emojis, or other symbols to express their affirmative answers.
- Affirmative answers: e.g. 👍🏼 , 👌🏼 , ☺️
- Negative answers: e.g. 👎🏼 , 🙅🏼 , 😒
Finally, users may use a direct answer to the last question asked:
Example 1:
- Question: "Do you have any other questions, or shall we start the interview to see if theres a match with the job?"
- Affirmative Answer: "Start the interview" or "Start please"
Example 2:
- Question: "Is there anything still on your mind, or shall we begin the conversation to explore the match?"
- Affirmative Answer: "Let's start exploring" or "Let's go"
Please consider that the answer will be given in {language}!
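The affirmative check itself is done by the LLM; purely to illustrate the short-answer signals listed above, a naive lookup sketch (the token sets are assumptions):

AFFIRMATIVE = {'yes', 'ok', 'sure', 'of course', '👍🏼', '👌🏼', '☺️'}
NEGATIVE = {'no', 'not really', "i'd rather not", '👎🏼', '🙅🏼', '😒'}

def quick_affirmative_hint(reply: str) -> bool | None:
    """True/False for an obvious short answer; None when a real model must decide."""
    text = reply.strip().lower()
    if text in AFFIRMATIVE:
        return True
    if text in NEGATIVE:
        return False
    return None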

View File

@@ -23,7 +23,7 @@ task_description: >
Create a prioritised list of the 10 most critical competencies as defined above, ranked in importance.
Treat this as a logical and professional reasoning exercise.
Respect the language of the vacancy text, and return answers / output in the same language.
Respect the language of the vacancy text, and return answers / output in the same language. Only use plain text.
{custom_description}

View File

@@ -1,32 +1,12 @@
# Agent Types
AGENT_TYPES = {
"EMAIL_CONTENT_AGENT": {
"name": "Email Content Agent",
"description": "An Agent that writes engaging emails.",
},
"EMAIL_ENGAGEMENT_AGENT": {
"name": "Email Engagement Agent",
"description": "An Agent that ensures the email is engaging and lead to maximal desired action",
},
"IDENTIFICATION_AGENT": {
"name": "Identification Agent",
"description": "An Agent that gathers identification information",
},
"RAG_AGENT": {
"name": "Rag Agent",
"description": "An Agent that does RAG based on a user's question, RAG content & history",
},
"RAG_COMMUNICATION_AGENT": {
"name": "Rag Communication Agent",
"description": "An Agent that consolidates both answers and questions in a consistent reply",
},
"SPIN_DETECTION_AGENT": {
"name": "SPIN Sales Assistant",
"description": "An Agent that detects SPIN information in an ongoing conversation",
},
"SPIN_SALES_SPECIALIST_AGENT": {
"name": "SPIN Sales Specialist",
"description": "An Agent that asks for Follow-up questions for SPIN-process",
"RAG_PROOFREADER_AGENT": {
"name": "Rag Proofreader Agent",
"description": "An Agent that checks the quality of RAG answers and adapts when required",
},
"TRAICIE_HR_BP_AGENT": {
"name": "Traicie HR BP Agent",

View File

@@ -9,10 +9,6 @@ SPECIALIST_TYPES = {
"description": "Q&A through Partner RAG Specialist (for documentation purposes)",
"partner": "evie_partner"
},
"SPIN_SPECIALIST": {
"name": "Spin Sales Specialist",
"description": "A specialist that allows to answer user queries, try to get SPIN-information and Identification",
},
"TRAICIE_ROLE_DEFINITION_SPECIALIST": {
"name": "Traicie Role Definition Specialist",
"description": "Assistant Defining Competencies and KO Criteria",

View File

@@ -1,36 +1,16 @@
# Task Types
TASK_TYPES = {
"EMAIL_LEAD_DRAFTING_TASK": {
"name": "Email Lead Draft Creation",
"description": "Email Drafting Task towards a Lead",
},
"EMAIL_LEAD_ENGAGEMENT_TASK": {
"name": "Email Lead Engagement Creation",
"description": "Make an Email draft more engaging",
},
"IDENTIFICATION_DETECTION_TASK": {
"name": "Identification Gathering",
"description": "A Task that gathers identification information from a conversation",
},
"IDENTIFICATION_QUESTIONS_TASK": {
"name": "Define Identification Questions",
"description": "A Task to define identification (person & company) questions",
},
"RAG_TASK": {
"name": "RAG Task",
"description": "A Task that gives RAG-based answers",
},
"SPIN_DETECT_TASK": {
"name": "SPIN Information Detection",
"description": "A Task that performs SPIN Information Detection",
"ADVANCED_RAG_TASK": {
"name": "Advanced RAG Task",
"description": "A Task that gives RAG-based answers taking into account previous questions, tone of voice and language level",
},
"SPIN_QUESTIONS_TASK": {
"name": "SPIN Question Identification",
"description": "A Task that identifies questions to complete the SPIN context in a conversation",
},
"RAG_CONSOLIDATION_TASK": {
"name": "RAG Consolidation",
"description": "A Task to consolidate questions and answers",
"RAG_PROOFREADING_TASK": {
"name": "Rag Proofreading Task",
"description": "A Task that performs RAG Proofreading",
},
"TRAICIE_GET_COMPETENCIES_TASK": {
"name": "Traicie Get Competencies",

View File

@@ -0,0 +1,9 @@
type: "SHARE_PROFESSIONAL_CONTACT_DATA"
version: "1.0.0"
name: "Share Professional Contact Data"
icon: "account_circle"
title: "Share Contact Data"
action_type: "specialist_form"
configuration:
specialist_form_name: "PROFESSIONAL_CONTACT_FORM"
specialist_form_version: "1.0.0"

View File

@@ -0,0 +1,671 @@
# Data Protection Impact Assessment (DPIA) Template
## Ask Eve AI
**Date of Assessment**: [Date]
**Assessed By**: [Name, Role]
**Review Date**: [Date - recommend annual review]
---
## 1. Executive Summary
| Field | Details |
|-------|---------|
| **Processing Activity Name** | [e.g., "Job Candidate Assessment Specialist"] |
| **Brief Description** | [1-2 sentence summary] |
| **Risk Level** | ☐ Low ☐ Medium ☐ High |
| **DPIA Required?** | ☐ Yes ☐ No |
| **Status** | ☐ Draft ☐ Under Review ☐ Approved ☐ Requires Revision |
---
## 2. Description of the Processing
### 2.1 Nature of the Processing
**What Personal Data will be processed?**
- [ ] Contact information (name, email, phone)
- [ ] Identification data (ID numbers, passport)
- [ ] Professional data (CV, work history, qualifications)
- [ ] Assessment results or scores
- [ ] Communication records
- [ ] Behavioral data (how users interact with the system)
- [ ] Technical data (IP addresses, device information)
- [ ] Other: _______________
**Categories of Data Subjects:**
- [ ] Job applicants/candidates
- [ ] Employees
- [ ] Customers
- [ ] End users/consumers
- [ ] Other: _______________
**Volume of Data Subjects:**
- [ ] < 100
- [ ] 100-1,000
- [ ] 1,000-10,000
- [ ] > 10,000
### 2.2 Scope of the Processing
**What is the purpose of the processing?**
[Describe the specific business purpose, e.g., "To assess job candidates' suitability for specific roles by analyzing their responses to standardized questions"]
**How will the data be collected?**
- [ ] Directly from data subjects (forms, interviews)
- [ ] From third parties (recruiters, references)
- [ ] Automated collection (web forms, chatbots)
- [ ] Other: _______________
**Where will data be stored?**
- [ ] EU (specify: France - Scaleway)
- [ ] Non-EU (specify and justify): _______________
### 2.3 Context of the Processing
**Is this processing new or existing?**
- [ ] New processing activity
- [ ] Modification of existing processing
- [ ] Existing processing (periodic review)
**Who has access to the Personal Data?**
- [ ] Ask Eve AI employees (specify roles): _______________
- [ ] Customer/Tenant employees
- [ ] Partners (specify): _______________
- [ ] Sub-Processors (list): _______________
- [ ] Other: _______________
**How long will data be retained?**
[Specify retention period and justification, e.g., "Candidate data retained for 12 months to comply with recruitment record-keeping requirements"]
---
## 3. Necessity and Proportionality Assessment
### 3.1 Lawful Basis
**What is the lawful basis for processing? (Article 6 GDPR)**
- [ ] **Consent** - Data subject has given explicit consent
- [ ] **Contract** - Processing necessary for contract performance
- [ ] **Legal obligation** - Required by law
- [ ] **Vital interests** - Necessary to protect someone's life
- [ ] **Public task** - Performing a public interest task
- [ ] **Legitimate interests** - Necessary for legitimate interests (requires balancing test)
**Justification:**
[Explain why this lawful basis applies]
### 3.2 Special Categories of Data (if applicable)
**Does the processing involve special categories of data? (Article 9 GDPR)**
- [ ] No
- [ ] Yes - racial or ethnic origin
- [ ] Yes - political opinions
- [ ] Yes - religious or philosophical beliefs
- [ ] Yes - trade union membership
- [ ] Yes - genetic data
- [ ] Yes - biometric data for identification
- [ ] Yes - health data
- [ ] Yes - sex life or sexual orientation data
**If yes, what is the additional lawful basis?**
[Article 9(2) provides specific conditions - specify which applies]
### 3.3 Automated Decision-Making
**Does the processing involve automated decision-making or profiling?**
- [ ] No
- [ ] Yes - automated decision-making WITH human oversight
- [ ] Yes - fully automated decision-making (no human intervention)
**If yes:**
**Does it produce legal effects or similarly significant effects?**
- [ ] No
- [ ] Yes (explain): _______________
**What safeguards are in place?**
- [ ] Right to obtain human intervention
- [ ] Right to express point of view
- [ ] Right to contest the decision
- [ ] Regular accuracy reviews
- [ ] Transparency about logic involved
- [ ] Other: _______________
### 3.4 Necessity Test
**Is the processing necessary to achieve the stated purpose?**
☐ Yes ☐ No
**Justification:**
[Explain why this specific processing is necessary and whether less intrusive alternatives were considered]
**Could the purpose be achieved with less data or through other means?**
☐ Yes (explain why not pursued): _______________
☐ No
### 3.5 Proportionality Test
**Is the processing proportionate to the purpose?**
☐ Yes ☐ No
**Data Minimization:**
- Are you collecting only the minimum data necessary? ☐ Yes ☐ No
- Have you considered pseudonymization or anonymization? ☐ Yes ☐ No ☐ N/A
- Can data be aggregated instead of individual records? ☐ Yes ☐ No ☐ N/A
**Storage Limitation:**
- Is the retention period justified and documented? ☐ Yes ☐ No
- Is there an automated deletion process? ☐ Yes ☐ No ☐ Planned
---
## 4. Stakeholder Consultation
### 4.1 Data Subject Consultation
**Have data subjects been consulted about this processing?**
☐ Yes ☐ No ☐ Not required
**If yes, how were they consulted?**
[Describe consultation method: surveys, focus groups, user research, etc.]
**Key concerns raised by data subjects:**
[List any concerns and how they were addressed]
### 4.2 DPO or Security Contact Consultation
**Has the DPO or security contact been consulted?**
☐ Yes ☐ No ☐ N/A (no formal DPO)
**Comments from DPO/Security Contact:**
[Record any recommendations or concerns]
---
## 5. Risk Assessment
### 5.1 Risk Identification
For each risk, assess:
- **Likelihood**: Negligible / Low / Medium / High
- **Severity**: Negligible / Low / Medium / High
- **Overall Risk**: Low / Medium / High / Very High
**Risk 1: Unauthorized Access or Data Breach**
**Description**: Personal data could be accessed by unauthorized parties due to security vulnerabilities.
| Assessment | Rating |
|------------|--------|
| Likelihood | ☐ Negligible ☐ Low ☐ Medium ☐ High |
| Severity (if occurs) | ☐ Negligible ☐ Low ☐ Medium ☐ High |
| **Overall Risk** | ☐ Low ☐ Medium ☐ High ☐ Very High |
**Risk 2: Discrimination or Bias in Automated Decisions**
**Description**: Automated processing could result in discriminatory outcomes or unfair treatment.
| Assessment | Rating |
|------------|--------|
| Likelihood | ☐ Negligible ☐ Low ☐ Medium ☐ High |
| Severity (if occurs) | ☐ Negligible ☐ Low ☐ Medium ☐ High |
| **Overall Risk** | ☐ Low ☐ Medium ☐ High ☐ Very High |
**Risk 3: Lack of Transparency**
**Description**: Data subjects may not understand how their data is processed or decisions are made.
| Assessment | Rating |
|------------|--------|
| Likelihood | ☐ Negligible ☐ Low ☐ Medium ☐ High |
| Severity (if occurs) | ☐ Negligible ☐ Low ☐ Medium ☐ High |
| **Overall Risk** | ☐ Low ☐ Medium ☐ High ☐ Very High |
**Risk 4: Inability to Exercise Data Subject Rights**
**Description**: Data subjects may have difficulty exercising their rights (access, erasure, portability, etc.).
| Assessment | Rating |
|------------|--------|
| Likelihood | ☐ Negligible ☐ Low ☐ Medium ☐ High |
| Severity (if occurs) | ☐ Negligible ☐ Low ☐ Medium ☐ High |
| **Overall Risk** | ☐ Low ☐ Medium ☐ High ☐ Very High |
**Risk 5: Data Quality Issues**
**Description**: Inaccurate or outdated data could lead to incorrect decisions or outcomes.
| Assessment | Rating |
|------------|--------|
| Likelihood | ☐ Negligible ☐ Low ☐ Medium ☐ High |
| Severity (if occurs) | ☐ Negligible ☐ Low ☐ Medium ☐ High |
| **Overall Risk** | ☐ Low ☐ Medium ☐ High ☐ Very High |
**Risk 6: Function Creep / Scope Expansion**
**Description**: Data collected for one purpose could be used for other purposes without consent.
| Assessment | Rating |
|------------|--------|
| Likelihood | ☐ Negligible ☐ Low ☐ Medium ☐ High |
| Severity (if occurs) | ☐ Negligible ☐ Low ☐ Medium ☐ High |
| **Overall Risk** | ☐ Low ☐ Medium ☐ High ☐ Very High |
**Additional Risks:**
[Add any processing-specific risks]
---
## 6. Mitigation Measures
For each identified risk, document mitigation measures:
### Risk 1: Unauthorized Access or Data Breach
**Mitigation Measures:**
- [ ] Encryption in transit (TLS 1.2+)
- [ ] Encryption at rest
- [ ] Multi-factor authentication
- [ ] Access controls (RBAC)
- [ ] Regular security audits
- [ ] WAF and DDoS protection (Bunny.net Shield)
- [ ] Multi-tenant data isolation
- [ ] Regular security training
- [ ] Incident response plan
- [ ] Other: _______________
**Residual Risk After Mitigation:** ☐ Low ☐ Medium ☐ High ☐ Very High
### Risk 2: Discrimination or Bias in Automated Decisions
**Mitigation Measures:**
- [ ] Regular bias testing of AI models
- [ ] Diverse training data sets
- [ ] Human review of automated decisions
- [ ] Clear criteria for decision-making
- [ ] Right to contest decisions
- [ ] Transparency about decision logic
- [ ] Regular fairness audits
- [ ] Monitoring of outcomes by demographic groups
- [ ] Ability to request explanation
- [ ] Other: _______________
**Residual Risk After Mitigation:** ☐ Low ☐ Medium ☐ High ☐ Very High
### Risk 3: Lack of Transparency
**Mitigation Measures:**
- [ ] Clear Privacy Policy explaining processing
- [ ] Explicit consent mechanisms
- [ ] Plain language explanations
- [ ] Information provided before data collection
- [ ] Explanation of automated decision logic
- [ ] Contact information for questions
- [ ] Regular communication with data subjects
- [ ] Privacy-by-design approach (anonymous until consent)
- [ ] Other: _______________
**Residual Risk After Mitigation:** ☐ Low ☐ Medium ☐ High ☐ Very High
### Risk 4: Inability to Exercise Data Subject Rights
**Mitigation Measures:**
- [ ] Clear procedures for rights requests
- [ ] Multiple request channels (email, helpdesk)
- [ ] 30-day response timeframe
- [ ] Technical capability to extract data
- [ ] Data portability in standard formats
- [ ] Secure deletion processes
- [ ] Account disabling/restriction capability
- [ ] Identity verification procedures
- [ ] Other: _______________
**Residual Risk After Mitigation:** ☐ Low ☐ Medium ☐ High ☐ Very High
### Risk 5: Data Quality Issues
**Mitigation Measures:**
- [ ] Data validation on input
- [ ] Regular data accuracy reviews
- [ ] Ability for data subjects to correct errors
- [ ] Clear data update procedures
- [ ] Data quality monitoring
- [ ] Source verification for third-party data
- [ ] Archiving of outdated data
- [ ] Other: _______________
**Residual Risk After Mitigation:** ☐ Low ☐ Medium ☐ High ☐ Very High
### Risk 6: Function Creep / Scope Expansion
**Mitigation Measures:**
- [ ] Documented purpose limitation
- [ ] Access controls preventing unauthorized use
- [ ] Regular compliance audits
- [ ] Privacy Policy clearly states purposes
- [ ] Consent required for new purposes
- [ ] Technical controls preventing misuse
- [ ] Staff training on data protection
- [ ] Other: _______________
**Residual Risk After Mitigation:** ☐ Low ☐ Medium ☐ High ☐ Very High
### Additional Mitigation Measures
[Document any additional mitigation measures not covered above]
---
## 7. Data Subject Rights Implementation
**How will you ensure data subjects can exercise their rights?**
### Right of Access (Article 15)
- [ ] Procedure documented
- [ ] Technical capability implemented
- [ ] Response within 30 days
- Method: _______________
### Right to Rectification (Article 16)
- [ ] Procedure documented
- [ ] Technical capability implemented
- [ ] Response within 30 days
- Method: _______________
### Right to Erasure (Article 17)
- [ ] Procedure documented
- [ ] Technical capability implemented
- [ ] Response within 30 days
- Method: _______________
- Limitations: _______________
### Right to Restriction (Article 18)
- [ ] Procedure documented
- [ ] Technical capability implemented (account disabling)
- [ ] Response within 30 days
### Right to Data Portability (Article 20)
- [ ] Procedure documented
- [ ] Technical capability implemented
- [ ] Export format: JSON / CSV / XML / Other: _______________
### Right to Object (Article 21)
- [ ] Procedure documented
- [ ] Opt-out mechanisms implemented
- [ ] Clear in Privacy Policy
### Rights Related to Automated Decision-Making (Article 22)
- [ ] Human intervention available
- [ ] Explanation of logic provided
- [ ] Right to contest implemented
- [ ] Documented in Privacy Policy
---
## 8. Privacy by Design and Default
**Privacy Enhancing Technologies Implemented:**
- [ ] Data minimization (collect only necessary data)
- [ ] Pseudonymization (where applicable)
- [ ] Anonymization (where applicable)
- [ ] Anonymous interaction until consent (privacy-by-design)
- [ ] Encryption (in transit and at rest)
- [ ] Access controls and authentication
- [ ] Audit logging
- [ ] Secure deletion
- [ ] Data isolation (multi-tenant architecture)
- [ ] Other: _______________
**Default Settings:**
- [ ] Most privacy-protective settings by default
- [ ] Opt-in (not opt-out) for non-essential processing
- [ ] Clear consent mechanisms before data collection
- [ ] Limited data sharing by default
---
## 9. Compliance with Principles
**For each GDPR principle, confirm compliance:**
### Lawfulness, Fairness, Transparency (Article 5(1)(a))
- [ ] Lawful basis identified and documented
- [ ] Processing is fair and transparent
- [ ] Privacy Policy clearly explains processing
- Evidence: _______________
### Purpose Limitation (Article 5(1)(b))
- [ ] Specific purposes documented
- [ ] Data not used for incompatible purposes
- [ ] New purposes require new consent/legal basis
- Evidence: _______________
### Data Minimization (Article 5(1)(c))
- [ ] Only necessary data collected
- [ ] Regular review of data collected
- [ ] Excess data not retained
- Evidence: _______________
### Accuracy (Article 5(1)(d))
- [ ] Mechanisms to ensure data accuracy
- [ ] Ability to correct inaccurate data
- [ ] Regular data quality reviews
- Evidence: _______________
### Storage Limitation (Article 5(1)(e))
- [ ] Retention periods defined and documented
- [ ] Automated deletion where appropriate
- [ ] Justification for retention documented
- Evidence: _______________
### Integrity and Confidentiality (Article 5(1)(f))
- [ ] Appropriate security measures implemented
- [ ] Protection against unauthorized access
- [ ] Encryption and access controls in place
- Evidence: See Annex 2 of DPA
### Accountability (Article 5(2))
- [ ] Documentation of compliance measures
- [ ] Records of processing activities maintained
- [ ] DPIA conducted and documented
- [ ] DPA in place with processors
- Evidence: This DPIA, DPA with customers
---
## 10. International Transfers
**Does this processing involve transfer to third countries?**
☐ No - all processing within EU
☐ Yes (complete below)
**If yes:**
**Country/Region:** _______________
**Transfer Mechanism:**
- [ ] Adequacy decision (Article 45)
- [ ] Standard Contractual Clauses (Article 46)
- [ ] Binding Corporate Rules (Article 47)
- [ ] Other: _______________
**Transfer Impact Assessment Completed?** ☐ Yes ☐ No
**Additional Safeguards:**
[Document supplementary measures to ensure adequate protection]
---
## 11. Documentation and Records
**Documentation Maintained:**
- [ ] This DPIA
- [ ] Privacy Policy
- [ ] Data Processing Agreement
- [ ] Consent records (if applicable)
- [ ] Records of processing activities (Article 30)
- [ ] Data breach register
- [ ] Data Subject rights request log
- [ ] Staff training records
- [ ] Sub-processor agreements
**Record of Processing Activities (Article 30) Completed?**
☐ Yes ☐ No ☐ In Progress
---
## 12. Outcomes and Recommendations
### 12.1 Overall Risk Assessment
**After implementing mitigation measures, what is the residual risk level?**
☐ Low - processing can proceed
☐ Medium - additional measures recommended
☐ High - significant concerns, consult DPO/legal counsel
☐ Very High - processing should not proceed without major changes
### 12.2 Recommendations
**Recommended Actions Before Processing Begins:**
1. [Action item 1]
2. [Action item 2]
3. [Action item 3]
**Recommended Monitoring/Review Activities:**
1. [Monitoring item 1]
2. [Monitoring item 2]
3. [Monitoring item 3]
### 12.3 Consultation with Supervisory Authority
**Is consultation with supervisory authority required?**
☐ No - residual risk is acceptable
☐ Yes - high residual risk remains despite mitigation (Article 36)
**If yes, when will consultation occur?** _______________
### 12.4 Sign-Off
**DPIA Completed By:**
Name: _______________
Role: _______________
Date: _______________
Signature: _______________
**Reviewed and Approved By:**
Name: _______________
Role: _______________
Date: _______________
Signature: _______________
**Next Review Date:** _______________
*(Recommend annual review or when significant changes occur)*
---
## Appendix A: Completed Example - Job Candidate Assessment
This appendix provides a completed example for reference.
### Example: Job Candidate Assessment Specialist
**Processing Activity**: AI-powered job candidate assessment tool
**Personal Data Processed**:
- Assessment responses (text)
- Communication records (chatbot interactions)
- Contact information (name, email) - collected AFTER assessment with consent
- Assessment scores/results
**Purpose**: To assess candidates' suitability for job roles based on their responses to standardized questions
**Lawful Basis**:
- Consent (candidates explicitly consent before providing contact information)
- Contract (processing necessary to take steps at request of data subject prior to entering into contract)
**Automated Decision-Making**: Yes, with human oversight. Candidates are assessed by AI, but:
- Contact information only collected AFTER positive assessment
- Human recruiter makes final hiring decisions
- Candidates can restart assessment at any time
- Candidates informed about AI assessment before beginning
**Key Risks Identified**:
1. Bias/discrimination in assessment algorithms - MEDIUM risk
2. Lack of transparency about assessment criteria - MEDIUM risk
3. Data breach exposing candidate information - LOW risk (after mitigation)
**Key Mitigation Measures**:
- Anonymous assessment until consent obtained
- Clear explanation of assessment process
- Right to contest results
- Human review of all final decisions
- Regular bias testing of algorithms
- Strong technical security measures (encryption, access controls)
- 12-month retention period with secure deletion
**Residual Risk**: LOW - processing can proceed
**Special Considerations**:
- Candidates must be informed about automated decision-making
- Privacy Policy must explain assessment logic
- Contact information collected only after explicit consent
- Right to human intervention clearly communicated
---
## Appendix B: Resources and References
**GDPR Articles Referenced:**
- Article 5: Principles relating to processing
- Article 6: Lawfulness of processing
- Article 9: Special categories of data
- Article 13-14: Information to be provided
- Article 15-22: Data subject rights
- Article 22: Automated decision-making
- Article 28: Processor obligations
- Article 30: Records of processing activities
- Article 33-34: Data breach notification
- Article 35: Data Protection Impact Assessment
- Article 36: Prior consultation with supervisory authority
- Article 45-46: International transfers
**Additional Guidance:**
- WP29 Guidelines on DPIAs (WP 248)
- WP29 Guidelines on Automated Decision-Making (WP 251)
- ICO DPIA Guidance
- EDPB Guidelines on processing personal data for scientific research
- Belgian DPA Guidance (https://www.gegevensbeschermingsautoriteit.be)
**Internal Documents:**
- Ask Eve AI Data Protection Agreement
- Ask Eve AI Privacy Policy
- Technical and Organizational Measures (DPA Annex 2)
---
**End of DPIA Template**

View File

@@ -0,0 +1,720 @@
## Does your organisation have an approved information security policy?
No, we do not currently have a formal information security policy document. As a small startup with 2 employees, we have chosen to focus on implementing practical security controls rather than formal documentation at this stage.
However, we do maintain several key security practices:
Product Security (Evie SaaS Platform):
- Multi-tenant architecture with strict data isolation (separate database schemas and object storage folders per tenant)
- Hosted exclusively with European providers (Scaleway, Bunny.net, Mistral) compliant with EU regulations
- Published Privacy Policy and Terms & Conditions
- GDPR-compliant data handling practices
Internal Operations Security:
- Secure password and credential management using Proton Pass
- All internal business data maintained in secure cloud environments (Proton, Dropbox, Canva)
- Code versioning and backup through GitHub
- Controlled access to all systems and services
We plan to formalise our security practices into a comprehensive security policy as our organisation scales beyond 10 employees.
## Does your organisation conduct Pre Employment Checks?
No, we do not currently conduct formal pre-employment checks. Our current team consists of 2 employees in a family business structure, where trust and accountability are inherent to the relationship.
However, we recognize the importance of vetting as we scale. For future hires, we will implement the following practices:
Planned Hiring Procedures:
- In-depth personal interviews conducted by the founder
- Professional reference checks with former colleagues and employers
- Review of professional background and experience
Access Control for New Hires:
- New employees will not have immediate access to production systems or customer data
- Onboarding will begin with development and test environments using internally-created synthetic data
- Access to production systems will only be granted after thorough training and demonstration of competency with our platform
We anticipate hiring 2-3 additional resources once we secure appropriate investment, at which point these procedures will be formalized and documented.
## Does your organisation have an information security awareness program that provides training for all its employees/contractors?
No, we do not currently have a formal information security awareness training program. As a 2-person team, security awareness is maintained through the founder's 30+ years of IT and application development experience, including hands-on security implementation.
However, we maintain ongoing security awareness through several practices:
Current Security Practices:
- Active monitoring of security updates and advisories from our infrastructure providers (Scaleway, Bunny.net, Mistral)
- Proactive system updates and patching in response to security notifications
- Implementation of industry best practices (e.g., Kubernetes security configurations)
- Regular review and updates of security measures as the threat landscape evolves
- Security-conscious architecture decisions (e.g., Bunny.net as additional security layer in front of our K8s cluster)
Planned Training for Future Hires:
- Structured onboarding covering our security practices and data handling procedures
- Training on secure credential management (Proton Pass usage)
- Introduction to our multi-tenant architecture and data isolation principles
- Access control procedures and least-privilege principles
As we scale and onboard additional resources, we will implement enhanced security controls (NPM hardening, role-based access in PgAdmin, etc.) and formalize our security awareness program with documented training materials and periodic refresher sessions.
## Does your organisation have a policy for ensuring that Customer information is protected?
No, we do not currently have a formal customer information protection policy document. As a small startup with 2 employees, we have chosen to focus on implementing practical technical and operational controls rather than formal documentation at this stage.
However, we maintain comprehensive protection measures for customer data:
Technical Protections:
- Multi-tenant architecture with strict data isolation (separate database schemas and object storage folders per tenant)
- Password hashing (no plain text storage)
- Encryption in transit across all layers (browser → Bunny.net CDN → Kubernetes cluster)
- TLS encryption for internal services (Redis, PostgreSQL)
- Managed database and storage services with automated backup procedures
- Limited superuser access in production environments
Access Controls:
- Production customer data access restricted to founder only
- Customer-controlled partner access model (customers can grant/revoke partner access as needed)
- Change tracking showing who modified data
- Secure data deletion process (removal of database schema and object storage folder)
Data Sharing & Processing:
- Third-party data sharing limited to operationally necessary information only (billing, support)
- Data Processing Agreement included in our Privacy Statement
- Privacy Statement and Terms & Conditions are interlinked and published
Infrastructure:
- Hosted exclusively with European providers (Scaleway, Bunny.net, Mistral) compliant with EU regulations
- GDPR-compliant data handling practices
We plan to formalise these protections into a comprehensive customer data protection policy, including encryption at rest, formal data retention policies, and incident response procedures, as our organization scales.
## Does your organisation have a procedure to manage the access rights of user accounts?
No, we do not currently have a formal documented procedure for managing access rights. However, we maintain practical access control measures appropriate to our current scale and structure.
Internal Access Management:
- Founder maintains sole authority for granting access to systems and environments
- Production superuser access restricted to founder only
- Access granted on a need-to-know and need-to-have basis
- Planned role-based access for future hires (e.g., operations staff access to infrastructure, developers limited to dev/test environments with no production access)
Customer/Tenant Access Management:
- Customers have self-service user account creation and management within their tenant
- Role-based access control implemented in the platform
- Customers manage their own users' permissions and roles
- User accounts can be disabled to revoke platform access
Partner Access Management:
- Partners granted access only to their specific customers' tenants
- Partner access provided for implementation and troubleshooting support
- Founder can revoke partner access as needed
- Access controlled through customer-partner agreements
As we scale and onboard additional employees, we will formalise access management procedures including:
- Documented approval workflows
- Periodic access reviews
- Formalized offboarding procedures for partners and employees
- Enhanced audit logging of access changes
## Does your organisation enforce a strong password criteria (e.g. password length, complexity) for all user/system accounts?
Yes, we enforce strong password criteria for internal systems and accounts, with planned implementation for customer accounts.
Internal Systems & Accounts:
- All passwords generated using Proton Pass password manager with strong complexity
- Secure password storage and management across all internal systems
- Multi-factor authentication (MFA) enabled on critical infrastructure systems (EuroDNS, Bunny.net, Scaleway)
- Operational passwords (databases, Kubernetes secrets) generated via Proton Pass and stored securely in cluster secrets
- Password changes implemented on a risk-based approach (when compromised or when security context requires), following NIST SP 800-63B guidelines
Customer Accounts in Evie:
- Strong password enforcement is planned for implementation in an upcoming release
- This will include minimum password length and complexity requirements
- Multi-factor authentication support is on our product roadmap
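As a sketch of those planned server-side checks, the example below combines a simple length/complexity validation with salted hashing via werkzeug (the framework family already in use). The minimum length and helper names are assumptions, not the final policy.

```python
# Hypothetical sketch of the planned password checks; MIN_LENGTH and the
# helper names are assumptions, not the shipped policy.
import re
from werkzeug.security import generate_password_hash

MIN_LENGTH = 12  # assumed floor, in line with NIST SP 800-63B's length-first advice

def validate_password(candidate: str) -> list[str]:
    """Return a list of policy violations; an empty list means acceptable."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append(f"must be at least {MIN_LENGTH} characters")
    if not re.search(r"[A-Za-z]", candidate) or not re.search(r"\d", candidate):
        problems.append("must mix letters and digits")
    return problems

def store_password(candidate: str) -> str:
    # Only the salted hash is persisted; the plain text is never stored.
    return generate_password_hash(candidate)
```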
Planned Enhancements:
- Formal password rotation policy for system accounts
- MFA implementation for customer accounts
- Enhanced password strength requirements in the platform
## Does your organisation ensure that all systems are securely configured (hardened)?
Yes, we ensure our systems are securely configured and hardened appropriate to our current scale and operational needs.
Production Infrastructure Hardening:
- Kubernetes cluster deployed in private network (Scaleway VPC)
- External access exclusively through Bunny.net Shield (WAF, advanced rate limiting, DDoS mitigation)
- Access to Scaleway resources controlled via IAM with API passwords
- Internal resource access through Kubernetes secrets
- Administrative access via secure port-forwarding only (no direct external exposure)
- TLS encryption for all internal services (PostgreSQL, Redis)
- Automated infrastructure updates through Scaleway's managed services
Application Security:
- Security headers implemented
- Standard Flask security frameworks in place
- Client-side DOM inspection for XSS prevention
- Input validation and SQL injection prevention
- Regular updates of Python package dependencies for security patches
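To make the "security headers" item above concrete, a plain Flask `after_request` hook can apply a baseline set globally. This is a simplified sketch; the exact header set in production may differ.

```python
# Simplified sketch of applying baseline security headers in Flask; the
# production header set may differ.
from flask import Flask

app = Flask(__name__)

@app.after_request
def apply_security_headers(response):
    response.headers["X-Content-Type-Options"] = "nosniff"
    response.headers["X-Frame-Options"] = "DENY"
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    return response
```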
Network Segmentation:
- Production environment isolated in private network
- Default Scaleway firewall protections applied
- Administrative access restricted and controlled
Monitoring & Observability:
- Business event monitoring (Prometheus, Grafana)
- System monitoring through Scaleway Cockpit
- Internal tools (PgAdmin, RedisInsight, Flower) accessible only within controlled environments
Planned Enhancements as We Scale:
- Formal vulnerability assessments and security scanning
- Hardening of internal tooling access (NPM with additional access controls, role-based access in PgAdmin)
- Automated security testing in CI/CD pipeline
- Regular penetration testing
We actively maintain and update container images, system components, and application dependencies.
## Does your organisation only use licensed and supported software/applications?
Yes, we exclusively use licensed and supported software and applications across our organisation.
Commercial Software & Services:
- All commercial tools and services are properly licensed (business or individual licenses as appropriate)
- Cloud infrastructure: Scaleway, Bunny.net, GitHub
- AI services: Mistral, OpenAI, Anthropic, Midjourney through paid plans (no free tier)
- Business tools: Proton (email, password management), Dropbox, Canva
- Development tools: PyCharm IDE
- Operating systems: macOS, Linux (supported distributions)
Open Source Software:
- We utilise industry-standard open source frameworks and tools (Flask, Kubernetes, Linux components, and associated libraries)
- We actively track releases and updates for all open source components
- Regular updates applied to maintain security and stability
- All open source software used in compliance with respective license terms
Software Management:
- Active monitoring of software updates and security patches
- Regular dependency updates for application components
- Use of supported and maintained versions of all software
We maintain awareness of licensing obligations and ensure compliance across both commercial and open source software usage.
## Does your organisation deploy an anti-virus and anti-malware management solution on all servers and client devices?
Our approach to anti-virus and anti-malware protection is adapted to our cloud-native, containerised architecture and the nature of modern security threats.
Production Environment:
- Cloud-native architecture running containerised workloads on Kubernetes (Scaleway)
- Traditional server-based antivirus is not applicable to our containerised infrastructure
- Infrastructure hosted on ISO/IEC 27001:2022 certified cloud provider (Scaleway) with continuous security monitoring
- Planned implementation of Harbor container registry with integrated vulnerability scanning for container images
- Multi-layered security approach including WAF, DDoS protection, and network isolation provides protection against malicious traffic
Client Devices (macOS):
- CleanMyMac X deployed with malware removal capabilities, used regularly
- macOS built-in security features enabled (XProtect, Gatekeeper, FileVault)
- Regular software updates and security patches applied
Email Security:
- Proton Mail Business with integrated spam and malware filtering
- Proton Sentinel enabled for advanced threat protection
Security Philosophy:
For modern cloud-native applications, security is achieved through:
- Immutable infrastructure and containerization
- Network segmentation and access controls
- Regular container image updates and dependency patching
- Web application firewall and DDoS protection
- Secure development practices
We recognise that cloud-native security requires a different approach than traditional antivirus solutions, focusing on container security, vulnerability management, and defense-in-depth strategies.
## Does your organisation maintain a patching program?
Yes, we maintain an active patching programme appropriate to our cloud-native infrastructure and operational scale.
Infrastructure Patching:
- Managed services (Kubernetes Kapsule, PostgreSQL, Redis) automatically patched by Scaleway
- Kubernetes cluster platform updates managed by Scaleway
- Internal cluster components patched by our team
- Critical security patches applied as soon as possible upon notification
- Regular security update notifications from infrastructure providers actively monitored
Application & Dependency Patching:
- Quarterly review and update cycle for Python dependencies
- Weekly to monthly container image rebuilds using latest available base images
- Regular deployment cycle ensures current security patches are in production
- All updates tested in dedicated test (Podman) and staging (Kubernetes) environments before production deployment
Client System Patching:
- macOS and Linux systems regularly updated with latest security patches
- Development tools (PyCharm) updated immediately upon release
- Supporting software (CleanMyMac) configured for automatic updates
Planned Enhancements:
- Implementation of automated security advisory monitoring for dependencies
- Container vulnerability scanning through planned Harbor registry deployment
- Formalised patch prioritisation process based on severity ratings
Our patching approach balances the need for security with operational stability through testing in non-production environments before deployment.
## Does your organisation have a procedure to control changes to systems?
Yes, we have established procedures to control changes to our systems, appropriate to our current scale and operational needs.
Change Management Process:
- Changes managed through YouTrack issue tracking system
- Kanban board used to track progress and status of changes
- Changes discussed and communicated with customers as appropriate
- Release cycles managed based on change type (emergency fixes, bug fixes, features), with a preference for short deployment cycles
Version Control & Release Management:
- GitFlow workflow implemented for all code changes
- GitHub-based version control with branch management
- Official releases tagged in container registry
- Rollback capability maintained for all deployments
- Documented changelog viewable within the application
Deployment Process:
- Standard deployment path: Development → Test (Podman) → Staging (Kubernetes) → Production
- Production deployments restricted to authorised personnel with appropriate access rights
- Deployment scripts used for consistent, repeatable deployments
- Release guide documentation maintained
Change Types:
- Standard changes follow full development and testing cycle
- Emergency changes (critical security patches, major bugs) can be fast-tracked whilst maintaining appropriate testing and documentation
Planned Enhancements:
- Formalisation of deployment windows and maintenance schedules as customer base grows
- Expansion of authorised deployment personnel with appropriate training
- Enhanced change approval workflow for larger team structure
Our change control approach ensures stability and traceability whilst maintaining the agility needed for responsive software development.
## Does your organisation have a web filtering control in place (URL/reputation/category/content filtering)?
No, we do not currently have formal web filtering controls in place for employee internet access. As a 2-person startup, we have focused our security investments on protecting our production infrastructure and customer data.
Current Protections:
Network Level:
- UniFi Dream Machine Pro enterprise-grade router/firewall in place
- Network segmentation between company and production environments
Endpoint Level:
- Modern browser built-in phishing and malware protection (Safari, Chrome) enabled by default
- macOS built-in security features (XProtect, Gatekeeper) providing baseline protection against known threats
Application Level (Inbound Protection):
- Bunny.net Shield provides web application firewall, DDoS protection, and malicious traffic filtering for our production platform
- Multi-layered security approach protecting customer-facing systems
Planned Enhancements as We Scale:
- Implementation of DNS-based web filtering solution
- Enhanced threat management features on network infrastructure
- Input validation and URL/file scanning for customer-submitted content in the application
- Formalised acceptable use policies for internet access
We recognise that corporate web filtering becomes increasingly important as organisations grow and will implement appropriate controls as we expand our team beyond the current 2 employees.
## Does your organisation have an email filtering control in place (SPAM/reputation/content filtering)?
Yes, we have robust email filtering controls in place through our Proton Mail Business subscription.
Email Security Features:
Advanced Spam and Phishing Protection:
- Proton's custom spam filtering system, which Proton reports to be at least 60% more accurate than popular systems such as SpamAssassin, catching millions of dangerous phishing emails every month
- PhishGuard protection to defend against phishing attacks and spoofed email addresses
- Link protection feature that displays full URLs before opening to prevent malicious link clicks
- Automatic filtering of malicious content and attachments
Proton Sentinel Advanced Security:
- 24/7 monitoring by security analysts who review suspicious events using a combination of AI and human expertise
- Advanced protection that detects and challenges suspicious login attempts and account takeover attempts
- Enhanced account security logs with detailed login information
- Protection against social engineering attacks
Email Authentication:
- SPF (Sender Policy Framework) configured to prevent email spoofing
- DKIM (DomainKeys Identified Mail) configured to ensure message integrity
- DMARC (Domain-based Message Authentication, Reporting, and Conformance) configured to prevent abuse
- All authentication protocols verified and active for both askeveal.com and flow-it.net domains
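Because SPF and DMARC records are published in public DNS, their presence can be checked independently. The sketch below uses dnspython as an assumed DNS client; it only tests that the records exist, not their exact contents.

```python
# Illustrative check that SPF and DMARC TXT records are published,
# using dnspython (an assumption; any DNS client would do).
import dns.resolver

def txt_records(name: str) -> list[str]:
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

for domain in ("askeveal.com", "flow-it.net"):
    spf = any(r.startswith("v=spf1") for r in txt_records(domain))
    dmarc = any(r.startswith("v=DMARC1") for r in txt_records(f"_dmarc.{domain}"))
    print(f"{domain}: SPF published={spf}, DMARC published={dmarc}")
```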
Additional Protections:
- End-to-end encryption for emails between Proton Mail accounts
- TLS encryption for emails to external recipients
- Two-factor authentication (2FA) available for all accounts
Our email security infrastructure provides enterprise-grade protection against spam, phishing, malware, and account compromise attempts.
## Does your organisation have mechanisms for managing information security incidents that include reporting, detection, resolution, and recovery?
No, we do not currently have a formal documented incident response procedure. However, we have established capabilities and practices that form the foundation for incident management.
Detection Capabilities:
- Comprehensive monitoring infrastructure in place (Prometheus, Grafana, Scaleway Cockpit)
- Extensive backend logging for investigation and root cause analysis
- Ability to detect anomalies in system usage and performance
- Planned implementation of automated alerting as we scale
Reporting Mechanisms:
- Customer and partner security issue reporting through established helpdesk channels
- Tiered support structure (partners handle first and second line, we provide third line support)
- Clear escalation path for security-related issues
Response & Resolution Capabilities:
- Root cause analysis capabilities using system logs and monitoring data
- Ability to patch and deploy fixes rapidly through our CI/CD pipeline
- Access to third-party infrastructure support (Scaleway) for infrastructure-level incidents
- Version control allowing rollback to previous stable versions
- Regular backup procedures for all critical systems (databases, object storage)
Recovery Capabilities:
- Managed backup services for PostgreSQL, Redis, and Object Storage
- Container rollback capabilities through versioned registry
- Ability to restore services from backups
- Disaster recovery supported by cloud infrastructure redundancy
Communication:
- Commitment to immediate notification of affected customers and partners
- Understanding of GDPR breach notification requirements (72-hour reporting to supervisory authority)
Planned Formalisation:
As we scale, we will develop and document a comprehensive incident response plan including:
- Formal incident classification and escalation procedures
- Automated alerting and monitoring rules
- Documented communication templates and timelines
- Detailed recovery procedures and runbooks
- Regular incident response training and tabletop exercises
- Contact information for relevant authorities and third-party support
Whilst we don't currently have a formal documented process, our technical capabilities and operational practices provide the essential elements needed to detect, respond to, and recover from security incidents.
## Has your organisation had a reportable information security incident in the last three (3) years?
No, we have not had any reportable information security incidents in the last three years.
Our organisation is newly operational, having just entered production with our first customers. We have not experienced any security breaches, data leaks, unauthorised access, or other security incidents that would require reporting under GDPR or other regulatory frameworks.
## Does the provided system/service use a certified Cloud Service Provider?
Yes, our entire infrastructure is hosted exclusively on certified cloud service providers, chosen specifically for their strong security certifications and European compliance.
Primary Cloud Providers & Certifications:
Scaleway (Infrastructure & Kubernetes):
- ISO/IEC 27001:2022 certified for Information Security Management Systems
- HDS (Hébergeur de Données de Santé) certified for health data hosting since July 2024
- Pursuing SecNumCloud qualification, the French certification for the highest standards of security and compliance for sensitive data
- GDPR compliant
- French cloud provider subject to European data protection regulations
Bunny.net (CDN & Security Layer):
- ISO 27001 certified (achieved January 2025)
- SOC 2 Type II certified
- GDPR compliant
- European company with full EU data routing capabilities
- PCI compliant
Mistral AI (AI Services):
- SOC 2 Type II certified
- ISO 27001/27701 certified
- GDPR compliant
- French AI provider
Strategic Provider Selection:
We deliberately selected European cloud service providers for several critical reasons:
- Full compliance with GDPR and European data protection regulations
- Data sovereignty - all customer data remains within European jurisdiction
- Subject to strict European privacy laws and oversight
- Alignment with our commitment to data protection and privacy
All our cloud providers maintain current, independently audited security certifications and undergo regular compliance assessments. This ensures that our infrastructure meets internationally recognised standards for information security, data protection, and operational reliability.
## Are communications between your organisation and cloud provider environments restricted to only authenticated and authorised connections?
Yes, all communications between our organisation and cloud provider environments are strictly restricted to authenticated and authorised connections only.
Access Control & Authentication:
Scaleway (Infrastructure):
- All access controlled through Scaleway IAM (Identity and Access Management)
- API access exclusively via IAM-authenticated credentials
- Access restricted to authorised personnel only
- All services deployed within private network infrastructure
- No direct SSH access to production systems
Bunny.net (CDN & Security):
- Multi-factor authentication (MFA) enabled and enforced
- Secure password management via Proton Pass
- Dashboard access restricted to authorised personnel only
- Administrative access tightly controlled
Mistral AI (AI Services):
- API key-based authentication for service integration
- Credentials securely stored in Scaleway Secret Manager
- API keys imported into Kubernetes secrets for runtime use
- Console access with standard authentication mechanisms
Secure Communications:
Encryption in Transit:
- All connections encrypted using HTTPS/TLS protocols
- TLS encryption for internal service communications (PostgreSQL, Redis)
- Certificate-based authentication for database connections
Network Security:
- All Scaleway services deployed in private network (VPC)
- Database and cache services accessible only within secure network perimeter
- Connection pooling for secure service-to-container communication
Credential Management:
- All service credentials stored in Scaleway Secret Manager
- Secrets automatically imported into Kubernetes secrets for runtime access
- Username/password authentication for Redis and PostgreSQL with connection pooling
- API keys for external service integration (Mistral AI) securely managed
- No credentials stored in code or configuration files
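The sketch below illustrates this pattern: credentials are read from environment variables that Kubernetes injects from Secret objects, and both connections negotiate TLS. The variable names are illustrative, not the production manifest.

```python
# Sketch of consuming Kubernetes-injected credentials with TLS-encrypted
# connections; environment variable names are illustrative.
import os

import redis
from sqlalchemy import create_engine

# Nothing is hard-coded in the image or repository; Kubernetes injects
# these values from Secret objects at pod start-up.
pg_url = (
    f"postgresql+psycopg2://{os.environ['PG_USER']}:{os.environ['PG_PASSWORD']}"
    f"@{os.environ['PG_HOST']}/eveai?sslmode=verify-full"
)
engine = create_engine(pg_url, pool_pre_ping=True)  # pooled, TLS-verified

cache = redis.Redis(
    host=os.environ["REDIS_HOST"],
    password=os.environ["REDIS_PASSWORD"],
    ssl=True,  # TLS in transit
    ssl_ca_certs="/etc/ssl/certs/ca-certificates.crt",
)
```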
Multi-Factor Authentication:
- MFA enabled on all platforms that support it (Bunny.net, Scaleway, etc.)
- Additional authentication layer for administrative access
All authentication mechanisms follow the principle of least privilege, ensuring that access is granted only to authorised users and services, with credentials securely managed and communications encrypted end-to-end.
## Are cloud assets (related to the provided system/service) protected at the network and application layers from malicious attacks?
Yes, our cloud assets are protected at both the network and application layers through multiple security controls designed to defend against malicious attacks.
Network Layer Protection:
Perimeter Security:
- Bunny.net Shield providing comprehensive protection, including:
  - Web Application Firewall (WAF) with cutting-edge threat detection
  - Advanced rate limiting to prevent abuse
  - Robust DDoS mitigation capabilities
- All external traffic routed exclusively through Bunny.net security layer
- No direct exposure of backend infrastructure to the internet
Network Segmentation:
- Kubernetes cluster deployed in private network (Scaleway VPC)
- Internal services (PostgreSQL, Redis) isolated within private network
- Scaleway firewall protections applied at infrastructure level
- Administrative access restricted via secure port-forwarding only
Encryption:
- TLS encryption for all external communications (browser → Bunny.net → Kubernetes)
- TLS encryption for internal service communications (PostgreSQL, Redis)
- End-to-end encrypted data transmission
Application Layer Protection:
Authentication & Authorisation:
- API authentication using API keys and JWT tokens
- Multi-tenant architecture with strict data isolation per tenant
- Role-based access control within the application
- Password hashing (no plain text storage)
Common Vulnerability Protection:
- Security headers implemented across the application
- SQL injection prevention through parameterised queries and ORM usage
- Cross-Site Scripting (XSS) protection via DOM inspection and input sanitisation
- Input validation and sanitisation for all user-supplied data
- Standard Flask security frameworks deployed
- OWASP Top 10 awareness with ongoing verification
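The contrast below shows why bound parameters matter: the first helper splices input into the SQL string, the second keeps data and SQL separated (psycopg2-style placeholders; the ORM does the equivalent internally). Table and helper names are illustrative.

```python
# Illustrative contrast between injectable and parameterised queries,
# using DB-API/psycopg2 conventions; table and helper names are made up.

def find_user_unsafe(cursor, email: str):
    # NEVER do this: attacker-controlled input becomes part of the SQL.
    cursor.execute(f"SELECT * FROM users WHERE email = '{email}'")

def find_user_safe(cursor, email: str):
    # Bound parameters keep data and SQL strictly separated.
    cursor.execute("SELECT * FROM users WHERE email = %s", (email,))
```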
Data Protection:
- Multi-tenant data isolation (separate database schemas and object storage per tenant)
- Secure session management
- Protection against common web vulnerabilities
Defence-in-Depth Strategy:
Our security approach implements multiple layers of protection:
- Edge protection via Bunny.net Shield (WAF, DDoS, rate limiting)
- Network isolation and segmentation
- Application-level security controls
- Data-level protections (encryption, isolation)
Planned Enhancements:
- Application-level rate limiting
- Enhanced security monitoring and alerting for attack detection
- Comprehensive OWASP Top 10 verification and remediation
- Automated security testing in CI/CD pipeline
Our layered security approach ensures that even if one layer is compromised, additional protections remain in place to safeguard our systems and customer data.
## Are the security-relevant events (related to provided system/service) in cloud environments identified and logged?
Yes, security-relevant events are identified and logged throughout our cloud environments, providing visibility into security incidents and operational issues.
Centralised Logging Infrastructure:
Scaleway Cockpit (Prometheus & Grafana):
- Centralised log aggregation for all infrastructure and application components
- Default retention: 7 days for logs, 31 days for metrics
- Structured log collection using Promtail from all application containers
- Kubernetes cluster logs (control plane, nodes, system applications)
- Infrastructure logs (PostgreSQL, Redis, Object Storage)
- Application logs from all containerised services
Application-Level Logging:
Security Events:
- Authentication attempts (successful and failed) logged
- User actions and data modifications tracked (current state stored in database)
- Errors and exceptions comprehensively logged
- AI specialist interactions fully logged (with limited retention for data protection)
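As an illustration, an authentication event could be emitted as a structured record like the sketch below, which is convenient for Promtail/Grafana queries; the actual field set and logger configuration are internal, so treat the names as assumptions.

```python
# Illustrative shape of a structured authentication-event record; the real
# field set and logger configuration are assumptions.
import json
import logging

security_log = logging.getLogger("security")

def log_auth_attempt(user_id: str, tenant: str, success: bool, source_ip: str):
    # JSON-structured lines are easy to collect with Promtail and query in Grafana.
    security_log.info(json.dumps({
        "event": "auth_attempt",
        "user_id": user_id,
        "tenant": tenant,
        "success": success,
        "source_ip": source_ip,
    }))
```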
Operational Logging:
- API request and response logging (selective, not full scope)
- Backend application logging for investigation and root cause analysis
- Container and pod logs automatically collected
Infrastructure Security Logging:
Network & Infrastructure:
- Kubernetes cluster events and system logs
- Managed service logs (PostgreSQL, Redis) captured via Scaleway Cockpit
- Infrastructure changes and configuration modifications
External Security Layer (Bunny.net):
- WAF logs accessible via API, showing blocked and allowed traffic, triggered rules, and security events
- Real-time monitoring of processed requests, triggered rules, logged events, and blocked requests
- DDoS mitigation events
- Rate limiting violations
- Access logs and traffic patterns
Monitoring Capabilities:
- Business event monitoring (Prometheus, Grafana)
- Systems monitoring for all Scaleway resources (Scaleway Cockpit)
- Real-time log viewing and analysis through Grafana dashboards
- Ability to correlate events within Scaleway infrastructure
Planned Enhancements:
- Automated alerting for security events (infrastructure ready, configuration pending)
- Extended log retention periods for compliance requirements
- Cross-provider log correlation capabilities
- Enhanced security event monitoring and automated response
- Formal security event review procedures
Whilst we have comprehensive logging in place, we recognise that as we scale, we will need to implement automated alerting, extended retention policies, and more sophisticated security event analysis to proactively detect and respond to threats.
## Is there a defined encryption protocol in place for data in transit to/from the cloud environment?
Yes, we have defined encryption protocols in place for all data in transit to and from our cloud environments.
External Communications Encryption:
Client to Application:
- All browser traffic encrypted using HTTPS/TLS
- TLS termination at Bunny.net CDN layer
- Modern TLS protocols enforced (TLS 1.2 minimum, with TLS 1.3 support)
- Let's Encrypt certificates for domain authentication
- Automatic certificate renewal through Let's Encrypt integration
- Traffic flow: Browser (HTTPS/TLS) → Bunny.net (HTTPS/TLS) → Scaleway Kubernetes
Email Communications:
- Proton Mail Business with TLS 1.2+ encryption for all email transit
- End-to-end encryption for Proton-to-Proton communications
- TLS encryption for external email providers
- Zero-access encryption for emails at rest
Internal Service Communications:
Database and Cache Connections:
- PostgreSQL: TLS encryption with certificate-based authentication
- Redis: TLS encryption for all connections
- Secure connection pooling for application-to-database communications
API Communications:
- Mistral AI: HTTPS/TLS encrypted API calls
- All external service integrations use HTTPS/TLS protocols
- API keys securely transmitted over encrypted channels
Object Storage:
- Scaleway Object Storage: HTTPS for all uploads and downloads through platform
- Bunny.net Storage: TLS encryption for static file uploads (via Forklift)
Administrative Access:
Cluster Management:
- kubectl port-forward: Encrypted TLS tunnel between local machine and Kubernetes API server
- Secure, encrypted connection for accessing internal tools (PgAdmin, RedisInsight, etc.)
- No direct SSH access to production systems
Infrastructure Management:
- Scaleway IAM API access over HTTPS/TLS
- Bunny.net dashboard access over HTTPS with MFA
- All administrative interfaces accessed via encrypted connections
Certificate Management:
- Let's Encrypt as certificate authority for all domains
- Certificates stored securely in Kubernetes secrets
- Automatic certificate validation and trust chain verification
Protocol Standards:
- Minimum TLS version: TLS 1.2 (industry best practice)
- Support for TLS 1.3 where available
- Deprecated and insecure protocols (SSL 3.0, TLS 1.0, TLS 1.1) not supported
- Strong cipher suites enforced
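Expressed in Python's `ssl` module, the same protocol floor looks like the sketch below; in practice the enforcement sits at the CDN and ingress layers, so this is purely illustrative.

```python
# Illustrative enforcement of the stated protocol floor with Python's ssl
# module; real enforcement happens at the CDN/ingress layer.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuses SSL 3.0, TLS 1.0, TLS 1.1
# TLS 1.3 is negotiated automatically where both endpoints support it.
```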
Areas for Enhancement:
- Inter-pod communication within Kubernetes cluster currently relies on network isolation rather than encryption (service mesh/mTLS not yet implemented)
- Formal TLS version enforcement policy to be documented
- Certificate rotation policy to be formalised
- Consideration of service mesh (e.g., Istio, Linkerd) for mutual TLS between microservices as we scale
All data in transit between our systems, cloud providers, and end-users is protected using industry-standard encryption protocols, ensuring confidentiality and integrity of communications.


@@ -5,7 +5,208 @@ All notable changes to EveAI will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [3.0.0-beta]
## 3.1.36-beta
Release date: 2025-12-02
### Added
- Refactoring of the chat client to use tabs for the active conversation, history and settings.
- Introduction of shells for Mobile and Desktop clients, allowing for additional shells like plugins to be added in the future.
### Fixed
- TRA-89 - Problem solved where connection could get lost in sync between client and backend
- TRA-98 - End user could continue without accepting DPA & terms
- TRA-96 - Multiple-choice questions in the mobile client not scrolling --> solved by introducing a new client layout
- TRA-101 - DPA link was not working
- TRA-102 - Wrong responses when looking for affirmative answers.
## 3.1.26-beta
Release date: 2025-11-26
### Changed
- Introduction of @vueuse/core in the chat client, to ensure abstraction of UI behaviour for different mobile devices.
### Fixed
- TRA-99, fixed creation of a new Tenant Make.
- Improvement of DynamicFormBase to better align behaviour with the standard FlaskForm.
## 3.1.24-beta
Release date: 2025-11-25
### Changed
- Functionality to edit standard attributes of a specialist (e.g. name, description, retrievers, dynamic attributes) and deep tuning of a specialist are split in the administrative client.
### Fixed
- Ensure boolean fields are properly initialised when editing in the administrative client.
- Ensure Primary and Financial contact fields are properly displayed and saved in the user model.
### Security
- In case of vulnerabilities.
## 3.1.16-beta
Release date: 2025-11-13
### Added
- human_message_inactive_text_color added to Tenant Make configuration options, to allow for customisation of inactive messages in the chat client.
## 3.1.15-beta
Release date: 2025-10-29
### Fixed
- Small bugfix where an old form was shown when no form was sent back.
## 3.1.14-beta
Release date: 2025-10-28
### Added
- Introduction of User Actions - TBC
- Additional configuration options for Agents: temperature and llm_model can now be configured (if allowed in Agent configuration)
### Changed
- Improvement of RAG Specialist, including proofreading on generated output.
- Specialist Editor - separate modal editors for Agents, Tasks and Tools to allow for more complex configuration.
### Removed
- PartnerRagRetriever model - not used.
### Fixed
- Bug fix where client appears to return no result on an interaction, due to connections without correct search path (out of the connection pool)
## 3.1.13-beta
Release date: 2025-10-17
### Added
- Introduce consent for DPA and T&C in the administrative client
- Refuse activities in the administrative client if no consent is given or consent needs to be renewed
### Changed
- Centralise content versioning and markdown rendering in the administrative client
### Fixed
- Improvement of reset password and confirm email address functionality.
## 3.1.12-beta
Release date: 2025-10-03
### Added
- TRA-90: Additional positive & negative KO Criteria Questions added to TRAICIE_KO_QUESTIONS asset and interaction and evaluation in TRAICIE_SELECTION_SPECIALIST
## 3.1.11-beta
Release date: 2025-10-03
### Changed
- Improved marked integration in the Chat Client, allowing for more complex markdown rendering
### Fixed
- TRA-86: Chat client sometimes 'hangs' waiting for a response from the server
- TRA-92: Form rendering given some extra space in AI message bubble
## 3.1.7-beta
Release date: 2025-09-30
### Added
- Pushgateway for logging business events to Prometheus
### Changed
- Prometheus deployment improvements and alignment with Pushgateway
- Metrics logging in Business events to support multiple pods and processes
- Maximum height for AI message in chat input also available in desktop client
- AI message rendering now allows markdown
- markdown rendering defined in a centralized utility
### Fixed
- Bug preventing correct loading of cache-busted CSS and JS in eveai_app solved
- Fix for a rare bug preventing the marked component from being displayed in SideBarExplanation
### Security
- DOM checks on markdown text to prevent XSS
## 3.1.3-beta
Release date: 2025-09-25
### Added
- Cache busting for static files
### Changed
- Build process optimised for cache busting
### Fixed
- TRA-83 - numerous changes to the mobile version of the chat client
## 3.1.2-beta
Release date: 2025-09-23
### Changed
- Several improvements to the mobile version of the chat client
## 3.1.1-alfa
Release date: 2025-09-22
### Fixed
- TRA-76 - Send Button color changes implemented
- TRA-72 - Translation of privacy statement and T&C
- TRA-73 - Strange characters in Tenant Make Name
- TRA-77 - Adapted Scroll behavior for Chat Client in Message History
### Security
- In case of vulnerabilities.
## 3.1.0-alfa
Release date: 2025-09-12
### Added
- Configuration of the full k8s staging environment
- k8s installation manual (cluster-install.md)
### Changed
- MinIO Bucket-based approach adapted to a Folder-based approach
- Startup scripts of all eveai services adapted to a more generic and reusable approach
- Base image creation for all eveai services (instead of building from scratch)
- Enabled secure Redis connection (TLS + un/pw) + connection pooling for Session, Caches and Pub/Sub
- Enabled secure PostgreSQL connection (TLS)
- Integration of monitoring possibilities in cluster (Prometheus, Grafana, Alertmanager, PgAdmin4, Redis Insight, Flower)
- Introduction of Bunny.net as CDN & WAF
- Isolation of initialisation tasks (in eveai_app) into a separate 'Job' container eveai_ops
- Static files served from static pull zone (storage) on bunny.net
- Improving health and readiness checks
- Migration of dev and test with changes required for k8s
- More comprehensive error communication system
## 3.0.1-beta
Release date: 2025-08-21
### Changed
- Podman now replaces Docker for building images
- Local registry now replaces Docker Hub
- Start for k8s integration
- RAG possibilities throughout usage of the TRAICIE_SELECTION_SPECIALIST
### Fixed
- TRA-67 - initial edit of Tenant Make --> 2-step process
- TRA-68 - Correction of javascript for json editor, resulting in Asset Changes not being saved
- TRA-70 - Wrong File Size display for Assets
- TRA-69 - Wrong number of questions in TRAICIE_KO_INTERVIEW_DEFINITION_SPECIALIST (required correction in TRACIE_ROLE_DEFINITION_SPECIALIST)
### Security
- In case of vulnerabilities.
## 3.0.0-beta
Release date: 2025-08-15
### Added
- Mobile Support for the chat client.
@@ -17,7 +218,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Humanisation of cached interaction messages (random choice)
- Adding specialist configuration information to be added as arguments for retrievers
## [2.3.12-alfa]
## 2.3.12-alfa
### Added
- Modal display of privacy statement and terms & conditions documents in eveai_Chat_client
@@ -34,7 +235,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Error Messages for adding documents in 'alert'
- Correction of error in Template variable replacement, resulting in missing template variable value
## [2.3.11-alfa]
## 2.3.11-alfa
### Added
- RQC (Recruitment Qualified Candidate) export to EveAIDataCapsule
@@ -46,7 +247,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Adapting TRAICIE_SELECTION_SPECIALIST to retrieve preferred contact times using a form instead of free text
- Improvement of DynamicForm and FormField to handle boolean values.
## [2.3.10-alfa]
## 2.3.10-alfa
### Added
- introduction of eveai-listview that is sortable and filterable (using tabulator), with client-side pagination
@@ -62,7 +263,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Chat-client converted to vue components and composables
## [2.3.9-alfa]
## 2.3.9-alfa
### Added
- Translation functionality for Front-End, configs (e.g. Forms) and free text
@@ -78,7 +279,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Catalogs & Retrievers now fully type-based, removing need for end-user definition of Tagging Fields
- RAG_SPECIALIST to support new possibilities
## [2.3.8-alfa]
## 2.3.8-alfa
### Added
- Translation Service
@@ -102,7 +303,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Security
- In case of vulnerabilities.
## [2.3.7-alfa]
## 2.3.7-alfa
### Added
- Basic Base Specialist additions for handling phases and transferring data between state and output
@@ -112,13 +313,13 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Logging improvement & simplification (remove Graylog)
- Traicie Selection Specialist v1.3 - full roundtrip & full process
## [2.3.6-alfa]
## 2.3.6-alfa
### Added
- Full Chat Client functionality, including Forms, ESS, theming
- First Demo version of Traicie Selection Specialist
## [2.3.5-alfa]
## 2.3.5-alfa
### Added
- Chat Client Initialisation (based on SpecialistMagicLink code)
@@ -132,7 +333,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Fixed
- Several Bugfixes to administrative app
## [2.3.4-alfa]
## 2.3.4-alfa
### Added
- Introduction of Tenant Make
@@ -143,7 +344,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Enable Specialist 'activation' / 'deactivation'
- Unique constraints introduced for Catalog Name (tenant level) and make name (public level)
## [2.3.3-alfa]
## 2.3.3-alfa
### Added
- Add Tenant Make
@@ -156,7 +357,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Ensure document version is selected in UI before trying to view it.
- Remove obsolete tab from tenant overview
## [2.3.2-alfa]
## 2.3.2-alfa
### Added
- Changelog display
@@ -169,7 +370,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- data-type dynamic field needs conversion to isoformat
- Add public tables to env.py of tenant schema
## [2.3.1-alfa]
## 2.3.1-alfa
### Added
- Introduction of ordered_list dynamic field type (using tabulator)
@@ -180,7 +381,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Role Definition Specialist creates Selection Specialist from generated competencies
- Improvements to Selection Specialist (Agent definition to be started)
## [2.3.0-alfa]
## 2.3.0-alfa
### Added
- Introduction of Push Gateway for Prometheus
@@ -227,7 +428,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Security
- In case of vulnerabilities.
## [2.2.0-alfa]
## 2.2.0-alfa
### Added
- Mistral AI as main provider for embeddings, chains and specialists
@@ -256,7 +457,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Fixed
- Set default language when registering Documents or URLs.
## [2.1.0-alfa]
## 2.1.0-alfa
### Added
- Zapier Refresh Document
@@ -279,7 +480,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Deprecated
- eveai_chat - using sockets - will be replaced with new specialist_execution_api and SSE
## [2.0.1-alfa]
## 2.0.1-alfa
### Added
- Zapier Integration (partial - only adding files).
@@ -304,7 +505,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Security
- In case of vulnerabilities.
## [2.0.0-alfa]
## 2.0.0-alfa
### Added
- Introduction of dynamic Retrievers & Specialists
@@ -323,7 +524,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Security
- Security improvements to Docker images
## [1.0.14-alfa]
## 1.0.14-alfa
### Added
- New release script added to tag images with release number
@@ -348,7 +549,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Security
- In case of vulnerabilities.
## [1.0.13-alfa]
## 1.0.13-alfa
### Added
- Finished Catalog introduction
@@ -361,7 +562,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Fixed
- Overall bugfixes as result from the Catalog introduction
## [1.0.12-alfa]
## 1.0.12-alfa
### Added
- Added Catalog functionality
@@ -381,7 +582,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Security
- In case of vulnerabilities.
## [1.0.11-alfa]
## 1.0.11-alfa
### Added
- License Usage Calculation realised
@@ -396,7 +597,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Various fixes as consequence of changing file_location / file_name ==> bucket_name / object_name
- Celery Routing / Queuing updated
## [1.0.10-alfa]
## 1.0.10-alfa
### Added
- BusinessEventLog monitoring using Langchain native code
@@ -409,7 +610,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Removed
- Portkey removed for monitoring usage
## [1.0.9-alfa] - 2024/10/01
## 1.0.9-alfa - 2024/10/01
### Added
- Business Event tracing (eveai_workers & eveai_chat_workers)
@@ -428,7 +629,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Fixed
- Set default language when registering Documents or URLs.
## [1.0.8-alfa] - 2024-09-12
## 1.0.8-alfa - 2024-09-12
### Added
- Tenant type defined to allow for active, inactive, demo ... tenants
@@ -441,7 +642,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Fixed
- Refine audio_processor and srt_processor to reduce duplicate code and support larger files
## [1.0.7-alfa] - 2024-09-12
## 1.0.7-alfa - 2024-09-12
### Added
- Full Document API allowing for creation, updating and invalidation of documents.
@@ -451,14 +652,14 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Fixed
- Maximal deduplication of code between views and api in document_utils.py
## [1.0.6-alfa] - 2024-09-03
## 1.0.6-alfa - 2024-09-03
### Fixed
- Problems with tenant scheme migrations - may have to be revisited
- Correction of default language settings when uploading docs or URLs
- Addition of a CHANGELOG.md file
## [1.0.5-alfa] - 2024-09-02
## 1.0.5-alfa - 2024-09-02
### Added
- Allow chatwidget to connect to multiple servers (e.g. development and production)
@@ -472,10 +673,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Removed
- Removed direct upload of Youtube URLs, due to continuous changes in Youtube website
## [1.0.4-alfa] - 2024-08-27
## 1.0.4-alfa - 2024-08-27
Skipped
## [1.0.3-alfa] - 2024-08-27
## 1.0.3-alfa - 2024-08-27
### Added
- Refinement of HTML processing - allow for excluded classes and elements.
@@ -485,12 +686,12 @@ Skipped
- PDF Processing extracted in new PDF Processor class.
- Allow for longer and more complex PDFs to be uploaded.
## [1.0.2-alfa] - 2024-08-22
## 1.0.2-alfa - 2024-08-22
### Fixed
- Bugfix for ResetPasswordForm in config.py
## [1.0.1-alfa] - 2024-08-21
## 1.0.1-alfa - 2024-08-21
### Added
- Full Document Version Overview
@@ -498,7 +699,7 @@ Skipped
### Changed
- Improvements to user creation and registration, renewal of passwords, ...
## [1.0.0-alfa] - 2024-08-16
## 1.0.0-alfa - 2024-08-16
### Added
- Initial release of the project.


@@ -170,13 +170,7 @@ personal information is gathered:
> contact or provide us with information to establish your identity or
> age.
>
>
> \
**Technical Data:**\
**Technical Data:**\\
When you visit, use, or interact with the Services, we receive the
following information about your visit, use, or interactions ("Technical
Information"):
@@ -253,11 +247,11 @@ and not attempt to reidentify the information, unless required by law.
As noted above, Ask Eve AI may use content the Customer provides Ask Eve
AI to improve the Services, for example to train the models that power
Ask Eve AI. Read [**our instructions**(opens in a new
window)**](https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance) on
how you can opt out of our use of your Content to train our models.\
Ask Eve AI. Read [\**our instructions*\*(opens in a new
window)\*\*](https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance) on
how you can opt out of our use of your Content to train our models.\\
1. 1. ## Instructions {#instructions-3}
1. 1. \#\# Instructions {#instructions-3}
Data Processor shall only Process Personal Data of Data Controller on
behalf of the Data Controller and in accordance with this Data
@@ -267,12 +261,12 @@ manner, as is reasonably necessary to provide the Services in accordance
with the Agreement. Data Controller shall only give instructions that
comply with the Data Protection legislation.
2. 1. ## Applicable mandatory laws {#applicable-mandatory-laws-3}
2. 1. \#\# Applicable mandatory laws {#applicable-mandatory-laws-3}
Data Processor shall only Process as required by applicable mandatory
laws and always in compliance with Data Protection Legislation.\
laws and always in compliance with Data Protection Legislation.\\
3. 1. ## Transfer to a third party {#transfer-to-a-third-party-3}
3. 1. \#\# Transfer to a third party {#transfer-to-a-third-party-3}
Data Processor uses functionality of third-party services to realise
its functionality. For the purpose of realising Ask Eve AI's
@@ -284,7 +278,7 @@ other third party and/or appoint any third party as a sub-processor of
Personal Data unless it is legally required or in case of a notification
to the Data Controller by which he gives his consent.
4. 1. ## Transfer to a Third Country {#transfer-to-a-third-country-3}
4. 1. \#\# Transfer to a Third Country {#transfer-to-a-third-country-3}
Data Processor shall not transfer Personal Data (including any transfer
via electronic media) to any Third Country without the prior written
@@ -305,9 +299,9 @@ Data Controller about the particular measures taken to guarantee the
protection of the Personal Data of the Data Subject in accordance with
the Regulation.
\
\\
5. 1. ## Data secrecy {#data-secrecy-3}
5. 1. \#\# Data secrecy {#data-secrecy-3}
The Data Processor shall maintain data secrecy in accordance with
applicable Data Protection Legislation and shall take all reasonable
@@ -324,7 +318,7 @@ steps to ensure that:
> in accordance with applicable Data Protection Legislation and at all
> times act in compliance with the Data Protection Obligations.
6. 1. ## Appropriate technical and organizational measures {#appropriate-technical-and-organizational-measures-3}
6. 1. \#\# Appropriate technical and organizational measures {#appropriate-technical-and-organizational-measures-3}
Data Processor has implemented (and shall comply with) all appropriate
technical and organizational measures to ensure the security of the
@@ -348,7 +342,7 @@ registration, de-registration and withdrawal of automation access codes
(API Keys), and is also responsible for the complete physical security
of its environment.
7. 1. ## Assistance and co-operation {#assistance-and-co-operation-3}
7. 1. \#\# Assistance and co-operation {#assistance-and-co-operation-3}
The Data Processor shall provide the Data Controller with such
assistance and co-operation as the Data Controller may reasonably
@@ -358,7 +352,7 @@ Data processed by the Data Processor, including but not limited to:
> \(1\) on request of the Data Controller, promptly providing written
> information regarding the technical and organizational measures which
> the Data Processor has implemented to safeguard Personal Data;\
> the Data Processor has implemented to safeguard Personal Data;\\
> \(2\) disclosing full and relevant details in respect of any and all
> government, law enforcement or other access protocols or controls
@@ -401,7 +395,7 @@ Data processed by the Data Processor, including but not limited to:
> Processor shall support the Data Controller in the provision of such
> information when explicitly requested by the Data Controller.
4. # Audit {#audit-1}
4. \# Audit {#audit-1}
At the Data Controller's request the Data Processor shall provide the
Data Controller with all information needed to demonstrate that it
@@ -423,7 +417,7 @@ minimum, and the Data Controller shall impose sufficient confidentiality
obligations on its auditors. Every auditor who does an inspection will
be at all times accompanied by a dedicated employee of the Processor.
4. # Liability {#liability-1}
4. \# Liability {#liability-1}
Each Party shall be liable for any suffered foreseeable, direct and
personal damages ("Direct Damages") resulting from any attributable
@@ -458,7 +452,7 @@ immediately prior to the cause of damages. In no event shall the Data
Processor be held liable if the Data Processor can prove he is not
responsible for the event or cause giving rise to the damage.
4. # Term {#term-1}
4. \# Term {#term-1}
This Data Processing Agreement shall be valid for as long as the
Customer uses the Services.
@@ -469,7 +463,7 @@ use of Personal Data and delete all Personal Data and copies thereof in
its possession unless otherwise agreed or when deletion of the Personal
Data should be technically impossible.
4. # Governing law -- jurisdiction {#governing-law-jurisdiction-1}
4. \# Governing law -- jurisdiction {#governing-law-jurisdiction-1}
This Data Processing Agreement and any non-contractual obligations
arising out of or in connection with it shall be governed by and
@@ -490,79 +484,6 @@ The Data Controller hereby agrees to the following list of
Sub-Processors, engaged by the Data Processor for the Processing of
Personal Data under the Agreement:
| **Open AI** | |
| --- | --- |
| Address | OpenAI, L.L.C., 3180 18th St, San Francisco, CA 94110, United States of America |
| Contact | OpenAI's Data Protection team, dsar@openai.com |
| Description | Ask Eve AI accesses Open AI's models through Open AI's API to realise its functionality. Services are GDPR compliant. |

| **StackHero** | |
| --- | --- |
| Address | Stackhero, 1 rue de Stockholm, 75008 Paris, France |
| Contact | support@stackhero.io |
| Description | StackHero is Ask Eve AI's cloud provider, and hosts the services for PostgreSQL, Redis, Docker, Minio and Greylog. Services are GDPR compliant. |

| **A2 Hosting** | |
| --- | --- |
| Address | A2 Hosting, Inc., PO Box 2998, Ann Arbor, MI 48106, United States |
| Contact | [+1 734-222-4678](tel:+1(734)222-4678) |
| Description | A2 Hosting hosts our main webserver and mailserver, all on European servers (Iceland). It does not handle data of our business applications. Services are GDPR compliant. |
# Annex 2
@@ -614,7 +535,7 @@ infrastructure. Ask Eve AI uses an intent-based approach where
activities are constantly monitored, analysed and benchmarked instead of
relying solely on a simple authentication/authorization trust model.
4. 1. ## General Governance & Awareness {#general-governance-awareness-3}
4. 1. \#\# General Governance & Awareness {#general-governance-awareness-3}
As a product company, Ask Eve AI is committed to maintain and preserve
an IT infrastructure that has a robust security architecture, complies
@@ -676,7 +597,7 @@ enabled.
Key management governance is implemented and handled by Facilities.
1. 1. ## Endpoint Security & User Accounts {#endpoint-security-user-accounts-3}
1. 1. \#\# Endpoint Security & User Accounts {#endpoint-security-user-accounts-3}
All endpoints and any information stored are encrypted using
enterprise-grade encryption on all operating systems supported by Ask
@@ -701,7 +622,7 @@ ensure endpoint integrity and policy compliance.
Access is managed according to role-based access control principles and
all user behavior on Ask Eve AI platforms is audited.
1. 1. ## Data Storage, Recovery & Securing Personal Data {#data-storage-recovery-securing-personal-data-3}
1. 1. \#\# Data Storage, Recovery & Securing Personal Data {#data-storage-recovery-securing-personal-data-3}
> Ask Eve AI has deployed:
@@ -720,7 +641,7 @@ all user behavior on Ask Eve AI platforms is audited.
- Records of the processing activities.
- Data Retention Policies
1. 1. ## Protection & Insurance {#protection-insurance-3}
1. 1. \#\# Protection & Insurance {#protection-insurance-3}
Ask Eve AI has a cyber-crime insurance policy. Details on the policy can
be requested through the legal department.

content/dpa/1.1/1.1.0.md (new file, 1143 lines; diff suppressed because it is too large)

(second file diff suppressed because it is too large)

@@ -18,7 +18,7 @@ To access certain features of the Service, you must register for an account. You
### 4. Privacy
Your use of the Service is also governed by our Privacy Policy, which can be found [here](/content/privacy).
Your use of the Service is also governed by our Privacy Policy, which can be found [here](/content/dpa).
### 5. Intellectual Property

content/terms/1.1/1.1.0.md (new file, 454 lines)

@@ -0,0 +1,454 @@
# Terms of Service
## Ask Eve AI
**Version 1.0.0**
**Effective Date: October 3, 2025**
---
## Introduction
These Terms of Service ("Terms") constitute a legally binding agreement between **Flow IT BV**, with registered office at Toekomststraat 62, 9800 Deinze, Belgium, with company number BE0877.273.542, operating under the trademark **Ask Eve AI** ("Ask Eve AI," "AskEveAI," "we," "us," or "our"), and the Customer (as defined below) that governs the use of the Services (as defined below).
By signing up to use the Services, accessing the Services, or clicking to accept these Terms, you ("Customer," "you," or "your") agree to be bound by these Terms. You represent that you are lawfully able to enter into contracts and, if you are entering into these Terms on behalf of an entity, that you have legal authority to bind that entity.
**For commercial customers**: Your use of the Services is also subject to our [Data Protection Agreement](link-to-dpa), which governs the processing of personal data. In the event of any conflict between these Terms and the Data Protection Agreement regarding data protection matters, the Data Protection Agreement shall prevail.
---
## 1. Services
### 1.1 Provision of Services
1. Upon payment of the applicable fees, Ask Eve AI grants to Customer a non-exclusive, non-transferable, non-sublicensable right to access and use the Ask Eve AI platform ("Platform" or "Services") during the term as stated in these Terms and as specified in the applicable subscription for Customer's business operations.
2. Ask Eve AI may subcontract to third parties any part of the Services. In particular, Ask Eve AI utilizes third-party service providers to provide, amongst others, connectivity, AI services (including large language models), data centre services, database services, content delivery, and security services. A complete list of Sub-Processors is available in Annex 1 of our Data Protection Agreement.
3. Customer must provide accurate and up-to-date account information. Customer is responsible for all activities that occur under its account, including the activities of any authorized user or Partner. Customer shall:
- Notify Ask Eve AI immediately of any unauthorized use of any password, API key, or user ID, or any other known or suspected breach of security
- Use reasonable efforts to stop any unauthorized use of the Services that is known or suspected by Customer
- Not provide false identity information to gain access to or use the Services
- Maintain proper access controls for all users and API credentials
### 1.2 Limitations on Use of Services
1. **Prohibited Actions**: Customer shall not:
- Remove any identification, proprietary, copyright, or other notices in the Services or documentation
- Represent that output was human-generated when it was not
- Reverse engineer, decompile, disassemble, or otherwise attempt to derive the source code of the Services
- Create derivative works of the Services
- Merge the Services with other software
- Sublicense, sell, lease, or otherwise encumber rights granted by Ask Eve AI (unless expressly authorized by Ask Eve AI in writing)
- Use the Services in any way that causes, or may cause, damage to the Services or impairment of the availability or accessibility of the Services
- Use the Services in any way that is unlawful, illegal, fraudulent, or harmful, or in connection with any unlawful, illegal, fraudulent, or harmful purpose or activity
- Attempt to gain unauthorized access to any portion of the Services or related systems or networks
- Overload, flood, or perform denial-of-service attacks on the Services
- Use automated means to access the Services except through approved APIs and within documented rate limits
2. **Prohibited Content**: Customer shall not use the Services to create, upload, transmit, distribute, or store content that:
- Is illegal, including content depicting or facilitating child exploitation, terrorism, illegal drugs, or other criminal activity
- Contains malware, viruses, or malicious code
- Infringes intellectual property rights, including pirated material or unauthorized use of trademarks
- Constitutes spam, phishing attempts, or fraudulent schemes
- Includes personal data without proper consent or legal basis under applicable data protection laws
- Promotes hate speech, violence, or discrimination
- Attempts to manipulate AI systems to produce harmful, misleading, or unauthorized outputs
- Creates deepfakes or other misleading content intended to deceive
- Violates any applicable laws or regulations
3. **Enforcement**: In case of infringement of these limitations, Ask Eve AI reserves all rights to prove, and obtain compensation for, the full damages incurred as a result of such infringement. This provision does not prevent Ask Eve AI from obtaining equitable relief in summary or other proceedings. Ask Eve AI may immediately suspend or terminate access to the Services upon discovery of any violation.
### 1.3 Acceptable Use and Compliance
1. **Data Protection Compliance**:
- **Customers** and **Partners** must comply with all applicable data protection laws, including the General Data Protection Regulation (GDPR) and the Belgian Data Protection Act, when using the Services.
- Customers and Partners are responsible for obtaining all necessary consents, authorizations, and legal bases required to process personal data through the Services.
- Customers and Partners must ensure their end users are properly informed about data processing activities and that appropriate privacy notices are provided.
- Although Ask Eve AI provides consent management functionality within the Platform, Customers and Partners remain solely responsible for ensuring their use of the Services complies with all applicable data protection requirements.
2. **Customer and Partner Indemnification for GDPR Violations**: Customer and Partner agree to indemnify, defend, and hold Ask Eve AI harmless from and against any claims, damages, losses, liabilities, costs, and expenses (including reasonable legal fees) arising from or related to Customer's or Partner's failure to comply with GDPR or other applicable data protection laws.
3. **Export Controls and Trade Compliance**: Customer certifies that it will comply with all applicable EU trade restrictions, export controls, and economic sanctions. Customer represents and warrants that it will not use the Services in any country or territory subject to EU or international sanctions, or in violation of any applicable trade restrictions.
---
## 2. Content
### 2.1 Input and Output
1. Customer may provide input to the Services ("Input") and receive output from the Services based on the Input ("Output"). Input and Output are collectively "Content."
2. Customer is responsible for all Content, including ensuring that it does not violate any applicable law or these Terms. Customer represents and warrants that it has all rights, licenses, and permissions needed to provide Input to the Services.
### 2.2 Ownership
1. **Customer Ownership**: Customer:
- Retains all ownership rights in Input
- Owns all Output generated by the Services based on Customer's Input
- Owns all specialist configurations, prompts, business logic, and custom implementations created by Customer on the Platform
2. **Ask Eve AI Assignment**: Ask Eve AI hereby assigns to Customer all of our right, title, and interest, if any, in and to Output generated specifically for Customer.
3. **Platform Ownership**: Ask Eve AI retains all ownership rights in and to the Platform itself, including all software, improvements, enhancements, modifications, AI models, core functionality, and intellectual property rights related thereto.
### 2.3 Non-Unique Outputs
Due to the nature of AI services and machine learning generally, Output may not be unique. Other users may receive similar output from the Services. Ask Eve AI's assignment of Output to Customer does not extend to other users' output or any third-party output.
### 2.4 Use of Content by Ask Eve AI
Ask Eve AI may use Content to:
- Provide, maintain, develop, and improve the Services
- Comply with applicable law
- Enforce our terms and policies
- Keep the Services safe and secure
- Generate aggregated or de-identified data for research, development, and model improvement, subject to the opt-out provisions in our Data Protection Agreement
### 2.5 Nature of AI and Customer Responsibilities
1. **AI Limitations**: Artificial intelligence and machine learning are rapidly evolving fields. Ask Eve AI is constantly working to improve the Services to make them more accurate, reliable, safe, and beneficial. However, given the probabilistic nature of machine learning, use of the Services may, in some situations, result in Output that does not accurately reflect real people, places, or facts.
2. **Customer Acknowledgments**: When Customer uses the Services, Customer understands and agrees that:
- **Output may not always be accurate**: Customer should not rely on Output from the Services as a sole source of truth or factual information, or as a substitute for professional advice
- **Human review required**: Customer must evaluate Output for accuracy and appropriateness for its use case, including using human review as appropriate, before using or sharing Output from the Services
- **No automated decisions affecting individuals**: Customer must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them, without appropriate human oversight and intervention
- **Potential for inappropriate content**: The Services may provide incomplete, incorrect, or offensive Output that does not represent Ask Eve AI's views
- **No endorsements**: If Output references any third-party products or services, it does not mean the third party endorses or is affiliated with Ask Eve AI
---
## 3. Intellectual Property
### 3.1 Ask Eve AI Ownership
Except as expressly set forth in these Terms, Ask Eve AI owns and retains all right, title, and interest in and to the Services, including:
- The Platform with all software, improvements, enhancements, or modifications thereto
- Any software, applications, inventions, or other technology developed as part of any maintenance or support
- All AI models, algorithms, and training methodologies
- All Intellectual Property Rights related to any of the foregoing
"Intellectual Property Rights" means current and future worldwide rights under patent, copyright, trade secret, trademark, moral rights, and other similar rights.
### 3.2 Reservation of Rights
All rights in and to the Services not expressly granted to Customer in these Terms are reserved by Ask Eve AI. No license is granted to Customer except as to use of the Services as expressly stated herein. These Terms do not grant Customer:
- Any rights to the Intellectual Property Rights in the Platform or Services
- Any rights to use the Ask Eve AI trademarks, logos, domain names, or other brand features unless otherwise agreed in writing
### 3.3 Partner Implementations
Where Partners implement functionality on the Platform involving Ask Eve AI:
- Partners retain ownership of their specific implementations, configurations, and custom code
- Partners grant Ask Eve AI a license to host, operate, and provide their implementations as part of the Services
- Ask Eve AI retains ownership of the underlying Platform infrastructure and core functionality
- Partners are responsible for ensuring their implementations comply with these Terms and all applicable laws
---
## 4. Pricing and Payment
### 4.1 Subscription Model
1. **Paid Subscriptions**: Customer can only purchase a paid subscription ("Paid Subscription") by paying Basic Fees in advance on a monthly or yearly basis, or at another recurring interval agreed upon prior to purchase, through a third-party payment platform as indicated by Ask Eve AI.
2. **Third-Party Payment Terms**: Where payment is processed through a third-party payment platform, the separate terms and conditions of that payment platform shall apply in addition to these Terms.
### 4.2 Fee Structure
1. **Basic Fees**: Prepaid fees for the base subscription tier, covering specified usage limits for the billing period. Basic Fees must be paid in advance for each billing period to maintain access to the Services.
2. **Additional Fees**: Additional Fees will be charged to Customer on a monthly basis on top of the Basic Fees when the effective usage of the Services exceeds the usage limits covered by the Basic Fee for the respective month. Additional Fees will be calculated and invoiced to Customer through the same third-party payment platform.
3. **Overage Options**:
- Customer may enable or disable overage usage for each service element (storage, embeddings, interactions) as defined in the subscription agreement
- If overage is disabled and usage limits are reached, Services will be suspended until the next billing period or until Customer enables overage
- Customer may request changes to overage settings mid-period by contacting Ask Eve AI or their managing Partner
- Usage metrics are displayed in the administrative interface
### 4.3 Payment Terms
1. **Currency and Taxes**: All prices are quoted in EUR unless otherwise agreed. Tax rates are calculated based on the information Customer provides and the applicable rate at the time of payment. Prices do not include VAT, which will be added at the applicable rate.
2. **Billing Cycle**: Unless otherwise specified between the Parties, Paid Subscriptions will continue indefinitely until cancelled. Customer will receive a recurring invoice on the first day of each billing period for Basic Fees and will authorize the applicable third-party payment platform to charge the payment method for the then-current subscription fee.
3. **Payment Deadline**: Payment of each invoiced amount for Additional Fees, taxes included, must be completed within thirty (30) days after the date of the invoice.
4. **Late Payment**: Any payment after the fixed payment date shall be subject to delay interest for late payment in accordance with the Law of 2 August 2002 on combating late payment in commercial transactions, calculated at the legal interest rate as determined by the Belgian government. This provision shall not in any event exclude the possible payment of damages.
5. **Invoice Complaints**: Complaints relating to invoices must be notified to Ask Eve AI directly and in writing within fifteen (15) days after the invoice date via registered letter or via a proven received email to finance@askeveai.com, stating the precise nature and extent of the complaints.
### 4.4 Cancellation and Refunds
1. **Customer Cancellation**: Customer may cancel a Paid Subscription at any time by following the cancellation instructions provided in the administrative interface or by contacting Ask Eve AI. Unless otherwise stated, cancellation will take effect at the end of the billing period in which Customer cancels.
2. **No Refunds**: Ask Eve AI does not offer refunds or reimbursements for partial subscription periods unless otherwise agreed between the Parties in writing.
3. **Ask Eve AI Termination**: In addition to, and without prejudice to any other rights Ask Eve AI may have under these Terms, Ask Eve AI reserves the right to terminate a Paid Subscription at any time upon at least fourteen (14) days' notice. Unless Ask Eve AI notifies Customer otherwise, Ask Eve AI will grant Customer access to the Paid Subscription for the remainder of the then-current billing period.
### 4.5 Price Changes
Ask Eve AI may from time to time change the prices for Paid Subscriptions, including recurring Basic Fees and Additional Fees, in response to circumstances such as:
- Changes to product offerings and features
- Changes in business operations or economic environment
- Changes in costs from subcontractors or service providers
- Security, legal, or regulatory reasons
Ask Eve AI will provide reasonable notice of price changes by any reasonable means, including by email or in-app notice, which will in any event not be less than fourteen (14) days. Price changes will become effective at the start of the next subscription period following the date of the price change.
Subject to applicable law, Customer will have accepted the new price by continuing to use the Services after the new price comes into effect. If Customer does not agree to a price change, Customer may reject the change by unsubscribing from the applicable Paid Subscription before the price change comes into effect.
---
## 5. Suspension and Termination
### 5.1 Suspension for Non-Payment
1. **Basic Fees**: If Basic Fees are not paid when due, Ask Eve AI reserves the right to immediately suspend Customer's access to the Services without prior notice.
2. **Additional Fees**: If Additional Fees are not paid within thirty (30) days of the invoice date, Ask Eve AI may suspend Customer's access to the Services.
3. **Reactivation**: Suspended accounts may be reactivated upon payment of all outstanding amounts. However, time elapsed during suspension still counts toward the applicable billing period, and no pro-rata refunds or credits will be provided.
### 5.2 Immediate Termination by Ask Eve AI
Ask Eve AI reserves the right to suspend or terminate Customer's access to the Services or delete Customer's account immediately without any notice, compensation, or court intervention if Ask Eve AI determines:
1. Customer has breached these Terms, including violation of Section 1.2 (Limitations on Use of Services) or Section 1.3 (Acceptable Use and Compliance)
2. Customer becomes insolvent, files a petition of bankruptcy (or any similar petition under any insolvency law of any jurisdiction), ceases its activities, or proposes any dissolution
3. Ask Eve AI must do so to comply with applicable law
4. Customer's use of the Services could cause risk or harm to Ask Eve AI, its users, or anyone else
### 5.3 Service Discontinuation
Ask Eve AI may decide to discontinue the Services. In such case, Ask Eve AI will give Customer advance notice and a refund for any prepaid, unused Services on a pro-rata basis.
### 5.4 Data Upon Termination
1. **License Suspension**: When a subscription is suspended or cancelled, Customer loses access to the Services, but tenant data is not automatically deleted. Customer may resume access by reactivating the subscription and paying applicable fees.
2. **Tenant Termination**: Customer may request full termination of its tenant account and deletion of all associated tenant data by contacting Ask Eve AI. Upon such request:
- Tenant-specific content will be isolated and marked for deletion
- Deletion will occur within ninety (90) days as specified in the Data Protection Agreement
- Financial and billing records will be retained for seven (7) years as required by Belgian law
- User accounts will be disabled to maintain audit trail integrity
3. **Data Export**: Customer may export accessible data through the API while subscription remains active and fees are current. Ask Eve AI does not provide separate data export services.
---
## 6. Warranties and Disclaimers
### 6.1 Service Availability
Ask Eve AI strives to provide high availability of the Services but does not guarantee any specific uptime or service level. Ask Eve AI reserves the right to:
- Perform scheduled maintenance between 22:00 and 05:00 CET without prior notice
- Perform scheduled maintenance outside these hours with at least seven (7) days' advance notice
- Perform emergency maintenance at any time without notice when necessary to protect the security, integrity, or availability of the Services
### 6.2 Warranty Disclaimer
THE SERVICES ARE PROVIDED "AS IS" AND "AS AVAILABLE." TO THE FULLEST EXTENT PERMITTED BY LAW, ASK EVE AI AND ITS PARTNERS MAKE NO WARRANTY OF ANY KIND, WHETHER EXPRESS, IMPLIED, STATUTORY, OR OTHERWISE, INCLUDING WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT.
Specifically, Ask Eve AI does not warrant that:
- The Services will meet Customer's performance requirements or operate in accordance with Customer's expectations
- The Services will be uninterrupted, secure, or error-free
- Any errors or defects will be corrected
- The Services will be free from viruses or other harmful components
- Results obtained from use of the Services will be accurate or reliable
Customer acknowledges that before entering into these Terms, Customer has evaluated the Services and accepts responsibility for selection of the Services, their use, and the results to be obtained therefrom.
### 6.3 AI-Specific Disclaimers
Neither Ask Eve AI nor its partners make any warranty about:
- The accuracy, completeness, or appropriateness of any Output generated by the Services
- Any content or information in or from an end user or Customer account
- The reliability of AI models or the absence of AI hallucinations, errors, or biases
- The suitability of Output for any particular purpose or decision-making process
Customer accepts and agrees that any use of Output from the Services is at Customer's sole risk and that Customer will not rely on Output as a sole source of truth or factual information, or as a substitute for professional advice.
---
## 7. Limitation of Liability
### 7.1 Liability Cap
TO THE FULLEST EXTENT PERMITTED BY LAW, THE TOTAL AGGREGATE LIABILITY OF ASK EVE AI UNDER THESE TERMS SHALL BE LIMITED TO THE TOTAL AMOUNT OF BASIC FEES PAID BY CUSTOMER TO ASK EVE AI DURING THE THREE (3) MONTHS IMMEDIATELY PRIOR TO THE EVENT GIVING RISE TO THE LIABILITY. ADDITIONAL FEES (OVERAGE) ARE EXCLUDED FROM THIS CALCULATION.
### 7.2 Exclusion of Consequential Damages
IN NO EVENT SHALL ASK EVE AI BE LIABLE FOR ANY INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL, OR PUNITIVE DAMAGES, INCLUDING BUT NOT LIMITED TO:
- Loss of profits or revenue
- Loss of business or anticipated savings
- Loss of goodwill or reputation
- Loss of data or information
- Business interruption
- Cost of procurement of substitute services
- Any other indirect or consequential loss or damage
This exclusion applies regardless of the legal theory on which the claim is based (contract, tort, negligence, strict liability, or otherwise) and whether or not Ask Eve AI has been advised of the possibility of such damages.
### 7.3 Specific Exclusions
Ask Eve AI shall have no liability whatsoever for:
- **AI Output**: Any damages or claims resulting from Customer's use of, reliance on, or decisions made based on Output generated by the Services
- **Third-Party Services**: Deficiencies in infrastructure services or third-party software provided by Ask Eve AI's Sub-Processors, beyond the liability such Sub-Processors have toward Ask Eve AI
- **Customer Content**: Any claims arising from Customer's Input, including claims of infringement, defamation, or violation of privacy rights
- **End User Claims**: Claims brought by Customer's end users arising from Customer's use of the Services
- **Unauthorized Use**: Damages resulting from unauthorized access to or use of Customer's account
- **Force Majeure**: Events beyond Ask Eve AI's reasonable control, including acts of God, natural disasters, war, terrorism, riots, labor disputes, governmental actions, internet disturbances, epidemics, pandemics, or failures of third-party infrastructure providers
### 7.4 Customer Indemnification
Customer shall, at its own expense, indemnify, defend, and hold Ask Eve AI harmless from and against any claim(s), damages, losses, liabilities, costs, and expenses (including reasonable legal fees) brought against Ask Eve AI by a third party arising out of or related to:
- Customer's use of Output obtained from the Services
- Customer's breach of these Terms
- Customer's violation of any applicable laws or regulations
- Customer's violation of any third-party rights
- Customer's failure to comply with GDPR or other data protection laws
### 7.5 Mandatory Liability
Nothing in these Terms shall limit or exclude liability to the extent such limitation or exclusion is prohibited by mandatory applicable law, including liability for:
- Death or personal injury caused by negligence
- Fraud or fraudulent misrepresentation
- Intentional misconduct or gross negligence
- Any other liability that cannot be excluded or limited under Belgian or EU law
### 7.6 Basis of the Bargain
Customer acknowledges and agrees that the limitations of liability set forth in this Section 7 are fundamental elements of the basis of the bargain between Ask Eve AI and Customer, and that Ask Eve AI would not be able to provide the Services on an economically reasonable basis without these limitations.
---
## 8. Confidential Information
### 8.1 Mutual Confidentiality Obligations
1. **Ask Eve AI's Confidential Information**: Customer acknowledges that information and data (including general business information) it receives from Ask Eve AI concerning the Services and any documentation related to the Services are confidential and proprietary and a valuable commercial asset of Ask Eve AI.
2. **Customer's Confidential Information**: Ask Eve AI acknowledges that general business information and Customer data it receives from Customer is confidential and proprietary.
3. **Confidentiality Obligations**: Both Parties agree to:
- Keep confidential information received from the other Party in confidence
- Not disclose any such information to third parties without prior written consent of the disclosing Party
- Not use confidential information for its own benefit or purposes other than fulfilling contractual obligations
- Disclose confidential information only to employees or advisors who require the information to enable that Party to fulfill its contractual obligations and who are bound by similar confidentiality obligations
### 8.2 Exclusions from Confidentiality
A Party's Confidential Information shall not be deemed to include information that:
- Is or becomes publicly known other than through any act or omission of the receiving Party
- Was in the receiving Party's lawful possession before the disclosure
- Is lawfully disclosed to the receiving Party by a third party without restriction on disclosure
- Is independently developed by the receiving Party, which independent development can be shown by written evidence
- Is required to be disclosed by law, by any court of competent jurisdiction, or by any regulatory or administrative body
---
## 9. Data Protection
### 9.1 Data Protection Agreement
For commercial customers, the processing of personal data is governed by our Data Protection Agreement, which is incorporated into these Terms by reference. The Data Protection Agreement can be found at [link to DPA].
### 9.2 Precedence
In the event of any conflict between these Terms and the Data Protection Agreement regarding data protection matters, the Data Protection Agreement shall prevail.
### 9.3 Customer Responsibilities
Customer is responsible for:
- Ensuring it has a lawful basis for processing personal data through the Services
- Providing appropriate privacy notices to data subjects
- Obtaining necessary consents where required
- Responding to data subject rights requests
- Implementing appropriate technical and organizational measures for data it controls
---
## 10. General Provisions
### 10.1 Assignment
Customer may not assign any part of these Terms without Ask Eve AI's prior written consent, except that no such consent will be required with respect to an assignment of these Terms to an Affiliate or in connection with a merger, acquisition, corporate reorganization, or sale of all or substantially all of its assets. Any other attempt to transfer or assign is void.
Ask Eve AI may assign these Terms or any rights hereunder without Customer's consent.
### 10.2 Dispute Resolution
1. **Informal Negotiation**: Before initiating any formal legal proceedings, the Parties agree to first attempt to resolve any dispute, claim, or controversy arising out of or relating to these Terms through good faith negotiations for a period of thirty (30) days.
2. **Formal Proceedings**: If the dispute cannot be resolved through informal negotiation, either Party may pursue formal legal proceedings, including through a Belgian bailiff (deurwaarder/huissier de justice) or other legal collection methods available under Belgian law.
### 10.3 Governing Law and Jurisdiction
These Terms are exclusively governed by Belgian law, without regard to its conflict of laws principles. Any litigation relating to the conclusion, validity, interpretation, and/or performance of these Terms, or any other dispute concerning or related to these Terms, shall be submitted to the exclusive jurisdiction of the courts of Ghent (Gent), Belgium.
### 10.4 Severability
If any provision of these Terms is held to be void, invalid, or unenforceable under applicable law, this shall not cause the other provisions of these Terms to be void or unenforceable. In such cases, the Parties shall replace the affected provision with a different provision that is not void or unenforceable and that represents the same intention that the Parties had with the original provision.
### 10.5 Force Majeure
Neither Ask Eve AI nor Customer will be liable for inadequate performance to the extent caused by a condition that was beyond the Party's reasonable control, including but not limited to natural disaster, act of war or terrorism, riot, labor condition, governmental action, internet disturbance, epidemic, pandemic, or failure of third-party infrastructure providers.
Any delay resulting from such causes shall extend performance accordingly or excuse performance, in whole or in part, as may be reasonable under the circumstances. In such an event, each Party shall notify the other Party of the expected duration of the force majeure event.
### 10.6 Modification of Terms
1. **Notice of Changes**: Ask Eve AI reserves the right to modify these Terms at any time. We will provide reasonable notice of any material changes to these Terms by any reasonable means, including by email, in-app notification, or by posting notice of the changes on our website, which notice will in any event be provided at least fourteen (14) days before the changes take effect.
2. **Acceptance**: Customer's continued use of the Services after such modifications will constitute acceptance of the modified Terms. If Customer does not agree to the modified Terms, Customer must discontinue use of the Services and may cancel the subscription in accordance with Section 4.4.
3. **Non-Material Changes**: Ask Eve AI may make non-material changes (such as corrections of typos, clarifications, or updates to contact information) without advance notice.
### 10.7 Entire Agreement
These Terms, together with the Data Protection Agreement and any other documents expressly incorporated by reference, constitute the entire agreement between the Parties concerning the subject matter hereof and supersede all prior agreements, understandings, and arrangements, whether written or oral, relating to such subject matter.
### 10.8 No Waiver
The failure of either Party to enforce any provision of these Terms shall not constitute a waiver of that provision or any other provision. No waiver shall be effective unless made in writing and signed by an authorized representative of the waiving Party.
### 10.9 Notices
All notices required or permitted under these Terms shall be in writing and shall be deemed given:
- When delivered personally
- When sent by confirmed email to the email address provided by the receiving Party
- Three (3) business days after being sent by registered mail to the address provided by the receiving Party
Notices to Ask Eve AI should be sent to: legal@askeveai.com
### 10.10 Language
These Terms are executed in English. In case of any discrepancy between language versions, the English version shall prevail.
### 10.11 Survival
The following provisions shall survive termination or expiration of these Terms: Sections 2.2 (Ownership), 3 (Intellectual Property), 6.2 and 6.3 (Disclaimers), 7 (Limitation of Liability), 8 (Confidential Information), and 10 (General Provisions).
---
## Contact Information
For questions about these Terms, please contact:
**Ask Eve AI (Flow IT BV)**
Toekomststraat 62
9800 Deinze
Belgium
Company Number: BE0877.273.542
Email: legal@askeveai.com
Website: https://askeveai.com
---
**By using the Services, you acknowledge that you have read, understood, and agree to be bound by these Terms of Service.**
---
*Last updated: October 3, 2025*

37
docker/Dockerfile.base Normal file
View File

@@ -0,0 +1,37 @@
ARG PYTHON_VERSION=3.12.11
FROM python:${PYTHON_VERSION}-slim as base
ENV PYTHONDONTWRITEBYTECODE=1 PYTHONUNBUFFERED=1
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
gcc \
postgresql-client \
curl \
tini \
&& rm -rf /var/lib/apt/lists/*
ARG UID=10001
ARG GID=10001
RUN groupadd -g ${GID} appuser && useradd -u ${UID} -g ${GID} -M -d /nonexistent -s /usr/sbin/nologin appuser
WORKDIR /app
RUN mkdir -p /app/logs && chown -R appuser:appuser /app
# COPY paths are relative to the build context (the repo root, since this file is
# built with "-f Dockerfile.base .." from docker/); COPY cannot reach outside it.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY common /app/common
COPY config /app/config
COPY scripts /app/scripts
COPY patched_packages /app/patched_packages
RUN chown -R appuser:appuser /app && chmod +x /app/scripts/start.sh
ENV PYTHONPATH=/app:/app/patched_packages:${PYTHONPATH}
USER appuser
EXPOSE 8080
ENTRYPOINT ["/usr/bin/tini","-g","--"]
CMD ["bash","-lc","scripts/start.sh"]
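# A minimal build sketch (values taken from the build script elsewhere in this
# diff; adjust registry/account/tag as needed):
#   cd docker
#   podman build --platform linux/amd64 \
#     -t josakola/eveai-base:latest \
#     -t registry.ask-eve-ai-local.com/josakola/eveai-base:latest \
#     -f Dockerfile.base ..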

View File

@@ -1,21 +1,73 @@
#!/bin/bash
# Exit on any error
set -e
# Safer bash: we manage errors manually (no -e) but detect pipeline failures
set -o pipefail
source ./docker_env_switch.sh dev
# Quiet mode default; enable verbose with --verbose
QUIET=${QUIET:-true}
# Parse --verbose early (we'll reparse fully later as well)
for arg in "$@"; do
if [[ "$arg" == "--verbose" ]]; then QUIET=false; fi
done
# Per-run logs directory
RUN_TS=$(date +%Y%m%d_%H%M%S)
LOG_DIR="./build_logs/$RUN_TS"
mkdir -p "$LOG_DIR"
# Error aggregation
ERRORS=()
ERROR_LINES=()
EXIT_CODE=0
# Helper: run_quiet SERVICE STEP -- CMD ARGS...
run_quiet() {
local SERVICE="$1"; shift
local STEP="$1"; shift
# Expect a literal "--" separator before the command
if [[ "$1" == "--" ]]; then shift; fi
local LOG_FILE="$LOG_DIR/${SERVICE}.${STEP}.log"
if [[ "$QUIET" == "true" ]]; then
"$@" > /dev/null 2> >(tee -a "$LOG_FILE" >&2)
else
"$@" > >(tee -a "$LOG_FILE") 2> >(tee -a "$LOG_FILE" >&2)
fi
local RC=$?
echo "$LOG_FILE" > "$LOG_DIR/.last_${SERVICE}_${STEP}.path"
return $RC
}
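# Usage sketch: run_quiet <service> <step> -- <command> [args...]
#   e.g. run_quiet nginx build -- podman build -t img .
# Quiet mode discards stdout and tees stderr into the per-step log;
# verbose mode tees both streams to the log and the console.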
record_error() {
local SERVICE="$1"; local STEP="$2"; local MESSAGE="$3"; local LOG_FILE="$4"
ERRORS+=("$SERVICE|$STEP|$LOG_FILE|$MESSAGE")
ERROR_LINES+=("$MESSAGE")
EXIT_CODE=1
}
source ./podman_env_switch.sh dev
# Load environment variables
source .env
# Docker registry
REGISTRY="josakola"
# Check if podman is available
if ! command -v podman &> /dev/null; then
echo "Error: podman not found"
exit 1
fi
echo "Using container runtime: podman"
# Local registry
REGISTRY="registry.ask-eve-ai-local.com"
# Account prefix for consistency with Docker Hub
ACCOUNT="josakola"
# Tag (you might want to use a version or git commit hash)
TAG="latest"
# Platforms to build for
PLATFORMS="linux/amd64,linux/arm64"
# Single platform - AMD64 only for simplicity
PLATFORM="linux/amd64"
# Default action
ACTION="both"
@@ -24,22 +76,32 @@ ACTION="both"
NO_CACHE=""
PROGRESS=""
DEBUG=""
BUILD_BASE=""
BASE_ONLY=""
# Function to display usage information
usage() {
echo "Usage: $0 [-b|-p] [--no-cache] [--progress=plain] [--debug] [service1 service2 ...]"
echo " -b: Build only (for current platform)"
echo " -p: Push only (multi-platform)"
echo "Usage: $0 [-b|-p|-bb|--base-only] [--no-cache] [--progress=plain] [--debug] [--verbose] [service1 service2 ...]"
echo " -b: Build only"
echo " -p: Push only"
echo " -bb: Build base image (in addition to services)"
echo " --base-only: Build only base image (skip services)"
echo " --no-cache: Perform a clean build without using cache"
echo " --progress=plain: Show detailed progress of the build"
echo " --debug: Enable debug mode for the build"
echo " --verbose: Show full output of build/push (default is quiet; logs always saved under ./build_logs/<timestamp>)"
echo " If no option is provided, both build and push will be performed."
echo " If no services are specified, all eveai_ services plus nginx, prometheus and grafana will be processed."
echo " All images are built for AMD64 platform (compatible with both x86_64 and Apple Silicon via emulation)."
}
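# Example invocations (a sketch; flags as documented above):
#   $0 --base-only                    # build just the base image
#   $0 -bb --no-cache                 # rebuild base, then build+push all services
#   $0 -b --verbose eveai_app nginx   # build only these two services, full output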
# Parse command-line options
while [[ $# -gt 0 ]]; do
case $1 in
--verbose)
QUIET=false
shift
;;
-b)
ACTION="build"
shift
@@ -48,6 +110,14 @@ while [[ $# -gt 0 ]]; do
ACTION="push"
shift
;;
-bb)
BUILD_BASE="true"
shift
;;
--base-only)
BASE_ONLY="true"
shift
;;
--no-cache)
NO_CACHE="--no-cache"
shift
@@ -71,10 +141,57 @@ while [[ $# -gt 0 ]]; do
esac
done
# Function to build base image
build_base_image() {
echo "🏗️ Building base image... =============================================================="
local BASE_IMAGE_NAME="$REGISTRY/$ACCOUNT/eveai-base:$TAG"
echo "Building base image for platform: $PLATFORM"
echo "Base image tag: $BASE_IMAGE_NAME"
run_quiet base build -- podman build \
--platform "$PLATFORM" \
$NO_CACHE \
$PROGRESS \
$DEBUG \
-t "$ACCOUNT/eveai-base:$TAG" \
-t "$BASE_IMAGE_NAME" \
-f Dockerfile.base \
..
if [ $? -ne 0 ]; then
LOG_FILE="$LOG_DIR/base.build.log"
echo "❌ Failed to build base image"
record_error base build "❌ Failed to build base image" "$LOG_FILE"
return 1
fi
if [ "$ACTION" = "push" ] || [ "$ACTION" = "both" ]; then
echo "Pushing base image to registry..."
run_quiet base push -- podman push "$BASE_IMAGE_NAME"
if [ $? -ne 0 ]; then
LOG_FILE="$LOG_DIR/base.push.log"
echo "❌ Failed to push base image"
record_error base push "❌ Failed to push base image" "$LOG_FILE"
return 1
fi
fi
echo "✅ Base image built successfully"
}
# Function to check if we should build base image
should_build_base() {
if [ "$BUILD_BASE" = "true" ] || [ "$BASE_ONLY" = "true" ]; then
return 0 # true
else
return 1 # false
fi
}
# Function to build and/or push a service
process_service() {
local SERVICE="$1"
echo "Processing $SERVICE..."
echo "Processing $SERVICE... =================================================================="
# Extract the build context and dockerfile from the compose file
CONTEXT=$(yq e ".services.$SERVICE.build.context" compose_dev.yaml)
@@ -92,47 +209,70 @@ process_service() {
return 1
fi
# Construct image names
LOCAL_IMAGE_NAME="$ACCOUNT/$SERVICE:$TAG"
REGISTRY_IMAGE_NAME="$REGISTRY/$ACCOUNT/$SERVICE:$TAG"
echo "Building for platform: $PLATFORM"
echo "Local tag: $LOCAL_IMAGE_NAME"
echo "Registry tag: $REGISTRY_IMAGE_NAME"
# Build and/or push based on ACTION
if [ "$ACTION" = "build" ]; then
echo "Building $SERVICE for current platform..."
docker build \
echo "🛠️ Building $SERVICE for $PLATFORM..."
run_quiet "$SERVICE" build -- podman build \
--platform "$PLATFORM" \
$NO_CACHE \
$PROGRESS \
$DEBUG \
-t "$REGISTRY/$SERVICE:$TAG" \
-f "$CONTEXT/$DOCKERFILE" \
"$CONTEXT"
elif [ "$ACTION" = "push" ]; then
echo "Building and pushing $SERVICE for multiple platforms..."
docker buildx build \
$NO_CACHE \
$PROGRESS \
$DEBUG \
--platform "$PLATFORMS" \
-t "$REGISTRY/$SERVICE:$TAG" \
-f "$CONTEXT/$DOCKERFILE" \
"$CONTEXT" \
--push
else
echo "Building $SERVICE for current platform..."
docker build \
$NO_CACHE \
$PROGRESS \
$DEBUG \
-t "$REGISTRY/$SERVICE:$TAG" \
-t "$LOCAL_IMAGE_NAME" \
-t "$REGISTRY_IMAGE_NAME" \
-f "$CONTEXT/$DOCKERFILE" \
"$CONTEXT"
if [ $? -ne 0 ]; then
LOG_FILE="$LOG_DIR/${SERVICE}.build.log"
echo "❌ Failed to build $SERVICE"
record_error "$SERVICE" build "❌ Failed to build $SERVICE" "$LOG_FILE"
return 1
fi
echo "Building and pushing $SERVICE for multiple platforms..."
docker buildx build \
elif [ "$ACTION" = "push" ]; then
echo "📤 Pushing $SERVICE to registry..."
run_quiet "$SERVICE" push -- podman push "$REGISTRY_IMAGE_NAME"
if [ $? -ne 0 ]; then
LOG_FILE="$LOG_DIR/${SERVICE}.push.log"
echo "❌ Failed to push $SERVICE"
record_error "$SERVICE" push "❌ Failed to push $SERVICE" "$LOG_FILE"
return 1
fi
else
# Both build and push
echo "🛠️ Building $SERVICE for $PLATFORM..."
run_quiet "$SERVICE" build -- podman build \
--platform "$PLATFORM" \
$NO_CACHE \
$PROGRESS \
$DEBUG \
--platform "$PLATFORMS" \
-t "$REGISTRY/$SERVICE:$TAG" \
-t "$LOCAL_IMAGE_NAME" \
-t "$REGISTRY_IMAGE_NAME" \
-f "$CONTEXT/$DOCKERFILE" \
"$CONTEXT" \
--push
"$CONTEXT"
if [ $? -ne 0 ]; then
LOG_FILE="$LOG_DIR/${SERVICE}.build.log"
echo "❌ Failed to build $SERVICE"
record_error "$SERVICE" build "❌ Failed to build $SERVICE" "$LOG_FILE"
return 1
fi
echo "📤 Pushing $SERVICE to registry..."
run_quiet "$SERVICE" push -- podman push "$REGISTRY_IMAGE_NAME"
if [ $? -ne 0 ]; then
LOG_FILE="$LOG_DIR/${SERVICE}.push.log"
echo "❌ Failed to push $SERVICE"
record_error "$SERVICE" push "❌ Failed to push $SERVICE" "$LOG_FILE"
return 1
fi
fi
}
@@ -141,36 +281,109 @@ if [ $# -eq 0 ]; then
SERVICES=()
while IFS= read -r line; do
SERVICES+=("$line")
done < <(yq e '.services | keys | .[]' compose_dev.yaml | grep -E '^(nginx|eveai_|flower|prometheus|grafana)')
done < <(yq e '.services | keys | .[]' compose_dev.yaml | grep -E '^(nginx|eveai_|prometheus|grafana)')
else
SERVICES=("$@")
fi
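# With no arguments, services are discovered from compose_dev.yaml and filtered
# by the grep above, e.g. (names taken from the compose file in this diff):
# eveai_api, eveai_app, eveai_chat_client, eveai_chat_workers,
# eveai_entitlements, eveai_ops, eveai_workers, nginx, prometheus.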
# Check if eveai_builder exists, if not create it
if ! docker buildx inspect eveai_builder > /dev/null 2>&1; then
echo "Creating eveai_builder..."
docker buildx create --name eveai_builder
# Handle base-only mode
if [ "$BASE_ONLY" = "true" ]; then
echo "🎯 Base-only mode: Building only base image"
build_base_image
echo -e "\033[32m✅ Base image build completed!\033[0m"
exit 0
fi
# Use eveai_builder
echo "Using eveai_builder..."
docker buildx use eveai_builder
# Build base image if requested
if should_build_base; then
build_base_image
echo "" # Empty line for readability
fi
echo "Using simplified AMD64-only approach for maximum compatibility..."
echo "Images will be tagged as: $REGISTRY/$ACCOUNT/[service]:$TAG"
# Reorder to ensure nginx builds before eveai_* if both are present
HAS_NGINX=false
HAS_APPS=false
for S in "${SERVICES[@]}"; do
if [[ "$S" == "nginx" ]]; then HAS_NGINX=true; fi
if [[ "$S" == eveai_* ]]; then HAS_APPS=true; fi
done
if $HAS_NGINX && $HAS_APPS; then
ORDERED_SERVICES=("nginx")
for S in "${SERVICES[@]}"; do
if [[ "$S" != "nginx" ]]; then ORDERED_SERVICES+=("$S"); fi
done
SERVICES=("${ORDERED_SERVICES[@]}")
fi
# Loop through services
for SERVICE in "${SERVICES[@]}"; do
if [[ "$SERVICE" == "nginx" ]]; then
./copy_specialist_svgs.sh ../config ../nginx/static/assets
run_quiet nginx copy-specialist-svgs -- ./copy_specialist_svgs.sh ../config ../nginx/static/assets
if [ $? -ne 0 ]; then
LOG_FILE="$LOG_DIR/nginx.copy-specialist-svgs.log"
echo "⚠️ copy_specialist_svgs.sh not found or failed"
record_error nginx copy-specialist-svgs "⚠️ copy_specialist_svgs.sh not found or failed" "$LOG_FILE"
fi
if [[ "$SERVICE" == "nginx" || "$SERVICE" == eveai_* || "$SERVICE" == "flower" || "$SERVICE" == "prometheus" || "$SERVICE" == "grafana" ]]; then
run_quiet nginx rebuild-chat-client -- ./rebuild_chat_client.sh
if [ $? -ne 0 ]; then
LOG_FILE="$LOG_DIR/nginx.rebuild-chat-client.log"
echo "❌ rebuild_chat_client.sh failed"
record_error nginx rebuild-chat-client "❌ rebuild_chat_client.sh failed" "$LOG_FILE"
fi
MANIFEST_SRC="../nginx/static/dist/manifest.json"
MANIFEST_DST_DIR="../config/static-manifest"
MANIFEST_DST="$MANIFEST_DST_DIR/manifest.json"
if [ ! -f "$MANIFEST_SRC" ]; then
if $HAS_NGINX; then
echo "⚠️ manifest.json not found at $MANIFEST_SRC yet. nginx should be built first in this run."
else
echo "❌ manifest.json not found at $MANIFEST_SRC. Please build nginx (assets) first."
exit 1
fi
fi
mkdir -p "$MANIFEST_DST_DIR"
if [ -f "$MANIFEST_SRC" ]; then
cp -f "$MANIFEST_SRC" "$MANIFEST_DST"
echo "📄 Staged manifest at $MANIFEST_DST"
fi
fi
if [[ "$SERVICE" == "nginx" || "$SERVICE" == eveai_* || "$SERVICE" == "prometheus" || "$SERVICE" == "grafana" ]]; then
if process_service "$SERVICE"; then
echo "Successfully processed $SERVICE"
echo "Successfully processed $SERVICE"
else
echo "Failed to process $SERVICE"
echo "Failed to process $SERVICE"
ERROR_LINES+=("❌ Failed to process $SERVICE")
EXIT_CODE=1
fi
else
echo "Skipping $SERVICE as it's not nginx, flower, prometheus, grafana or doesn't start with eveai_"
echo "⏭️ Skipping $SERVICE as it's not nginx, prometheus, grafana or doesn't start with eveai_"
fi
done
echo -e "\033[35mAll specified services processed.\033[0m"
echo -e "\033[35mFinished at $(date +"%d/%m/%Y %H:%M:%S")\033[0m"
if [ ${#ERRORS[@]} -eq 0 ]; then
echo -e "\033[32m✅ All specified services processed successfully!\033[0m"
echo -e "\033[32m📦 Images are available locally and in registry\033[0m"
else
echo -e "\033[31m❌ One or more errors occurred during build/push\033[0m"
# Reprint the short failure lines recorded above
for LINE in "${ERROR_LINES[@]}"; do
echo "$LINE"
done
echo ""
echo "Details (see logs for full output):"
for ITEM in "${ERRORS[@]}"; do
IFS='|' read -r SVC STEP LOGFILE MSG <<< "$ITEM"
echo "- Service: $SVC | Step: $STEP"
echo " ↳ Log: $LOGFILE"
done
EXIT_CODE=1
fi
# Always print finished timestamp
echo -e "\033[32m🕐 Finished at $(date +"%d/%m/%Y %H:%M:%S")\033[0m"
exit $EXIT_CODE

View File

@@ -1,13 +1,4 @@
# Comments are provided throughout this file to help you get started.
# If you need more help, visit the Docker Compose reference guide at
# https://docs.docker.com/go/compose-spec-reference/
# Here the instructions define your application as a service called "server".
# This service is built from the Dockerfile in the current directory.
# You can add other services your application may depend on here, such as a
# database or a cache. For examples, see the Awesome Compose repository:
# https://github.com/docker/awesome-compose
# Podman Compose compatible version with port schema compliance
x-common-variables: &common-variables
DB_HOST: db
DB_USER: luke
@@ -23,17 +14,13 @@ x-common-variables: &common-variables
FLOWER_USER: 'Felucia'
FLOWER_PASSWORD: 'Jungles'
OPENAI_API_KEY: 'sk-proj-8R0jWzwjL7PeoPyMhJTZT3BlbkFJLb6HfRB2Hr9cEVFWEhU7'
GROQ_API_KEY: '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'
MISTRAL_API_KEY: '0f4ZiQ1kIpgIKTHX8d0a8GOD2vAgVqEn'
ANTHROPIC_API_KEY: 'sk-ant-api03-c2TmkzbReeGhXBO5JxNH6BJNylRDonc9GmZd0eRbrvyekec2'
JWT_SECRET_KEY: 'bsdMkmQ8ObfMD52yAFg4trrvjgjMhuIqg2fjDpD/JqvgY0ccCcmlsEnVFmR79WPiLKEA3i8a5zmejwLZKl4v9Q=='
API_ENCRYPTION_KEY: 'xfF5369IsredSrlrYZqkM9ZNrfUASYYS6TCcAR9UKj4='
MINIO_ENDPOINT: minio:9000
MINIO_ACCESS_KEY: minioadmin
MINIO_SECRET_KEY: minioadmin
NGINX_SERVER_NAME: 'localhost http://macstudio.ask-eve-ai-local.com/'
LANGCHAIN_API_KEY: "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"
SERPER_API_KEY: "e4c553856d0e6b5a171ec5e6b69d874285b9badf"
CREWAI_STORAGE_DIR: "/app/crewai_storage"
PUSH_GATEWAY_HOST: "pushgateway"
PUSH_GATEWAY_PORT: "9091"
@@ -45,16 +32,12 @@ x-common-variables: &common-variables
services:
nginx:
image: josakola/nginx:latest
image: ${REGISTRY_PREFIX:-}josakola/nginx:latest
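# ${REGISTRY_PREFIX:-} defaults to empty (plain Docker Hub names); setting e.g.
# REGISTRY_PREFIX=registry.ask-eve-ai-local.com/ in the environment (assumed to
# come from .env) makes every service below pull from the local registry instead.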
build:
context: ..
dockerfile: ./docker/nginx/Dockerfile
platforms:
- linux/amd64
- linux/arm64
ports:
- 80:80
- 8080:8080
- 3080:80 # Dev nginx proxy according to port schema
environment:
<<: *common-variables
volumes:
@@ -72,23 +55,74 @@ services:
- eveai_api
- eveai_chat_client
networks:
- eveai-network
- eveai-dev-network
eveai_ops:
image: ${REGISTRY_PREFIX:-}josakola/eveai_ops:latest
build:
context: ..
dockerfile: ./docker/eveai_ops/Dockerfile
ports:
- 3002:8080 # Dev app according to port schema
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_ops
ROLE: web
PORT: 8080
WORKERS: 1 # Dev: lower concurrency
WORKER_CLASS: gevent
WORKER_CONN: 100
LOGLEVEL: debug # Lowercase for gunicorn
MAX_REQUESTS: 1000
MAX_REQUESTS_JITTER: 100
volumes:
- ../eveai_ops:/app/eveai_ops
- ../common:/app/common
- ../content:/app/content
- ../config:/app/config
- ../migrations:/app/migrations
- ../scripts:/app/scripts
- ../patched_packages:/app/patched_packages
- ./eveai_logs:/app/logs
- ../db_backups:/app/db_backups
depends_on:
db:
condition: service_healthy
redis:
condition: service_healthy
minio:
condition: service_healthy
healthcheck:
test: [ "CMD", "curl", "-f", "http://localhost:8080/healthz/ready" ]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
networks:
- eveai-dev-network
eveai_app:
image: josakola/eveai_app:latest
image: ${REGISTRY_PREFIX:-}josakola/eveai_app:latest
build:
context: ..
dockerfile: ./docker/eveai_app/Dockerfile
platforms:
- linux/amd64
- linux/arm64
ports:
- 5001:5001
- 3001:8080 # Dev app according to port schema
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_app
ROLE: web
PORT: 8080
WORKERS: 1 # Dev: lower concurrency
WORKER_CLASS: gevent
WORKER_CONN: 100
LOGLEVEL: debug # Lowercase for gunicorn
MAX_REQUESTS: 1000
MAX_REQUESTS_JITTER: 100
volumes:
- ../eveai_app:/app/eveai_app
- ../common:/app/common
@@ -106,27 +140,30 @@ services:
minio:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5001/healthz/ready"]
test: ["CMD", "curl", "-f", "http://localhost:8080/healthz/ready"]
interval: 30s
timeout: 1s
timeout: 10s
retries: 3
start_period: 30s
start_period: 60s
networks:
- eveai-network
- eveai-dev-network
eveai_workers:
image: josakola/eveai_workers:latest
image: ${REGISTRY_PREFIX:-}josakola/eveai_workers:latest
build:
context: ..
dockerfile: ./docker/eveai_workers/Dockerfile
platforms:
- linux/amd64
- linux/arm64
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_workers
ROLE: worker
CELERY_CONCURRENCY: 1 # Dev: lower concurrency
CELERY_LOGLEVEL: DEBUG # Uppercase for celery
CELERY_MAX_TASKS_PER_CHILD: 1000
CELERY_PREFETCH: 1
CELERY_QUEUE_NAME: embeddings
volumes:
- ../eveai_workers:/app/eveai_workers
- ../common:/app/common
@@ -142,23 +179,28 @@ services:
minio:
condition: service_healthy
networks:
- eveai-network
- eveai-dev-network
eveai_chat_client:
image: josakola/eveai_chat_client:latest
image: ${REGISTRY_PREFIX:-}josakola/eveai_chat_client:latest
build:
context: ..
dockerfile: ./docker/eveai_chat_client/Dockerfile
platforms:
- linux/amd64
- linux/arm64
ports:
- 5004:5004
- 3004:8080 # Dev chat client according to port schema
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_chat_client
ROLE: web
PORT: 8080
WORKERS: 1 # Dev: lower concurrency
WORKER_CLASS: gevent
WORKER_CONN: 100
LOGLEVEL: debug # Lowercase for gunicorn
MAX_REQUESTS: 1000
MAX_REQUESTS_JITTER: 100
volumes:
- ../eveai_chat_client:/app/eveai_chat_client
- ../common:/app/common
@@ -174,27 +216,30 @@ services:
minio:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5004/healthz/ready"]
test: ["CMD", "curl", "-f", "http://localhost:8080/healthz/ready"]
interval: 30s
timeout: 1s
timeout: 10s
retries: 3
start_period: 30s
start_period: 60s
networks:
- eveai-network
- eveai-dev-network
eveai_chat_workers:
image: josakola/eveai_chat_workers:latest
image: ${REGISTRY_PREFIX:-}josakola/eveai_chat_workers:latest
build:
context: ..
dockerfile: ./docker/eveai_chat_workers/Dockerfile
platforms:
- linux/amd64
- linux/arm64
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_chat_workers
ROLE: worker
CELERY_CONCURRENCY: 8 # Dev: lower concurrency
CELERY_LOGLEVEL: DEBUG # Uppercase for celery
CELERY_MAX_TASKS_PER_CHILD: 1000
CELERY_PREFETCH: 1
CELERY_QUEUE_NAME: llm_interactions
volumes:
- ../eveai_chat_workers:/app/eveai_chat_workers
- ../common:/app/common
@@ -208,26 +253,28 @@ services:
redis:
condition: service_healthy
networks:
- eveai-network
- eveai-dev-network
eveai_api:
image: josakola/eveai_api:latest
image: ${REGISTRY_PREFIX:-}josakola/eveai_api:latest
build:
context: ..
dockerfile: ./docker/eveai_api/Dockerfile
platforms:
- linux/amd64
- linux/arm64
ports:
- 5003:5003
- 3003:8080 # Dev API according to port schema
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_api
WORDPRESS_HOST: host.docker.internal
WORDPRESS_PORT: 10003
WORDPRESS_PROTOCOL: http
ROLE: web
PORT: 8080
WORKERS: 1 # Dev: lower concurrency
WORKER_CLASS: gevent
WORKER_CONN: 100
LOGLEVEL: debug # Lowercase for gunicorn
MAX_REQUESTS: 1000
MAX_REQUESTS_JITTER: 100
volumes:
- ../eveai_api:/app/eveai_api
- ../common:/app/common
@@ -243,51 +290,53 @@ services:
minio:
condition: service_healthy
healthcheck:
test: [ "CMD", "curl", "-f", "http://localhost:5003/healthz/ready" ]
test: [ "CMD", "curl", "-f", "http://localhost:8080/healthz/ready" ]
interval: 30s
timeout: 1s
timeout: 10s
retries: 3
start_period: 30s
start_period: 60s
networks:
- eveai-network
- eveai-dev-network
eveai_beat:
image: josakola/eveai_beat:latest
build:
context: ..
dockerfile: ./docker/eveai_beat/Dockerfile
platforms:
- linux/amd64
- linux/arm64
environment:
<<: *common-variables
COMPONENT_NAME: eveai_beat
volumes:
- ../eveai_beat:/app/eveai_beat
- ../common:/app/common
- ../config:/app/config
- ../scripts:/app/scripts
- ../patched_packages:/app/patched_packages
- ./eveai_logs:/app/logs
depends_on:
redis:
condition: service_healthy
networks:
- eveai-network
# eveai_beat:
# image: ${REGISTRY_PREFIX:-}josakola/eveai_beat:latest
# build:
# context: ..
# dockerfile: ./docker/eveai_beat/Dockerfile
# environment:
# <<: *common-variables
# COMPONENT_NAME: eveai_beat
# ROLE: beat
# CELERY_LOGLEVEL: INFO # Uppercase for celery
# volumes:
# - ../eveai_beat:/app/eveai_beat
# - ../common:/app/common
# - ../config:/app/config
# - ../scripts:/app/scripts
# - ../patched_packages:/app/patched_packages
# - ./eveai_logs:/app/logs
# depends_on:
# redis:
# condition: service_healthy
# networks:
# - eveai-dev-network
eveai_entitlements:
image: josakola/eveai_entitlements:latest
image: ${REGISTRY_PREFIX:-}josakola/eveai_entitlements:latest
build:
context: ..
dockerfile: ./docker/eveai_entitlements/Dockerfile
platforms:
- linux/amd64
- linux/arm64
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_entitlements
ROLE: worker
CELERY_CONCURRENCY: 1 # Dev: lower concurrency
CELERY_LOGLEVEL: DEBUG # Uppercase for celery
CELERY_MAX_TASKS_PER_CHILD: 1000
CELERY_PREFETCH: 1
CELERY_QUEUE_NAME: entitlements
volumes:
- ../eveai_entitlements:/app/eveai_entitlements
- ../common:/app/common
@@ -303,13 +352,13 @@ services:
minio:
condition: service_healthy
networks:
- eveai-network
- eveai-dev-network
db:
hostname: db
image: ankane/pgvector
ports:
- 5432:5432
- 3005:5432 # Dev database according to port schema (avoid default 5432)
restart: always
environment:
- POSTGRES_DB=eveai
@@ -324,13 +373,13 @@ services:
timeout: 5s
retries: 5
networks:
- eveai-network
- eveai-dev-network
redis:
image: redis:7.2.5
restart: always
ports:
- "6379:6379"
- "3006:6379" # Dev Redis according to port schema (avoid default 6379)
volumes:
- ./db/redis:/data
healthcheck:
@@ -339,29 +388,25 @@ services:
timeout: 5s
retries: 5
networks:
- eveai-network
- eveai-dev-network
flower:
image: josakola/flower:latest
build:
context: ..
dockerfile: ./docker/flower/Dockerfile
image: mher/flower:latest
environment:
<<: *common-variables
volumes:
- ../scripts:/app/scripts
- CELERY_BROKER_URL=redis://redis:6379/0
- FLOWER_BASIC_AUTH=Felucia:Jungles
- FLOWER_URL_PREFIX=/flower
- FLOWER_PORT=8080
ports:
- "5555:5555"
- "3007:8080"
depends_on:
- redis
networks:
- eveai-network
minio:
image: minio/minio
ports:
- "9000:9000"
- "9001:9001"
- "3008:9000" # Dev MinIO according to port schema
- "3009:9001" # Dev MinIO console
expose:
- 9000
volumes:
@@ -376,18 +421,17 @@ services:
interval: 30s
timeout: 20s
retries: 3
start_period: 30s
start_period: 60s
networks:
- eveai-network
- eveai-dev-network
prometheus:
image: prom/prometheus:latest
image: ${REGISTRY_PREFIX:-}josakola/prometheus:latest
build:
context: ./prometheus
dockerfile: Dockerfile
container_name: prometheus
ports:
- "9090:9090"
- "3010:9090" # Dev Prometheus according to port schema
volumes:
- ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
- ./prometheus/data:/prometheus
@@ -399,50 +443,40 @@ services:
- '--web.enable-lifecycle'
restart: unless-stopped
networks:
- eveai-network
- eveai-dev-network
pushgateway:
image: prom/pushgateway:latest
restart: unless-stopped
ports:
- "9091:9091"
- "3011:9091" # Dev Pushgateway according to port schema
networks:
- eveai-network
- eveai-dev-network
grafana:
image: grafana/grafana:latest
build:
context: ./grafana
dockerfile: Dockerfile
container_name: grafana
ports:
- "3000:3000"
volumes:
- ./grafana/provisioning:/etc/grafana/provisioning
- ./grafana/data:/var/lib/grafana
environment:
- GF_SECURITY_ADMIN_USER=admin
- GF_SECURITY_ADMIN_PASSWORD=admin
- GF_USERS_ALLOW_SIGN_UP=false
restart: unless-stopped
depends_on:
- prometheus
networks:
- eveai-network
# grafana:
# image: ${REGISTRY_PREFIX:-}josakola/grafana:latest
# build:
# context: ./grafana
# dockerfile: Dockerfile
# ports:
# - "3012:3000" # Dev Grafana according to port schema
# volumes:
# - ./grafana/provisioning:/etc/grafana/provisioning
# - ./grafana/data:/var/lib/grafana
# environment:
# - GF_SECURITY_ADMIN_USER=admin
# - GF_SECURITY_ADMIN_PASSWORD=admin
# - GF_USERS_ALLOW_SIGN_UP=false
# restart: unless-stopped
# depends_on:
# - prometheus
# networks:
# - eveai-dev-network
networks:
eveai-network:
eveai-dev-network:
driver: bridge
# Intended to give containers access to host IPC via the driver option below
driver_opts:
com.docker.network.bridge.host_ipc: "true"
volumes:
minio_data:
eveai_logs:
# db-data:
# redis-data:
# tenant-files:
#secrets:
# db-password:
# file: ./db/password.txt

View File

@@ -29,8 +29,6 @@ x-common-variables: &common-variables
FLOWER_USER: 'Felucia'
FLOWER_PASSWORD: 'Jungles'
OPENAI_API_KEY: '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'
GROQ_API_KEY: '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'
ANTHROPIC_API_KEY: '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'
MISTRAL_API_KEY: 'PjnUeDRPD7B144wdHlH0CzR7m0z8RHXi'
JWT_SECRET_KEY: '0d99e810e686ea567ef305d8e9b06195c4db482952e19276590a726cde60a408'
API_ENCRYPTION_KEY: 'Ly5XYWwEKiasfAwEqdEMdwR-k0vhrq6QPYd4whEROB0='
@@ -40,8 +38,6 @@ x-common-variables: &common-variables
MINIO_ACCESS_KEY: 04JKmQln8PQpyTmMiCPc
MINIO_SECRET_KEY: 2PEZAD1nlpAmOyDV0TUTuJTQw1qVuYLF3A7GMs0D
NGINX_SERVER_NAME: 'evie.askeveai.com mxz536.stackhero-network.com'
LANGCHAIN_API_KEY: "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"
SERPER_API_KEY: "e4c553856d0e6b5a171ec5e6b69d874285b9badf"
CREWAI_STORAGE_DIR: "/app/crewai_storage"
networks:

View File

@@ -12,16 +12,12 @@ x-common-variables: &common-variables
DB_HOST: minty.ask-eve-ai-local.com
DB_USER: luke
DB_PASS: 'Skywalker!'
DB_NAME: eveai
DB_NAME: eveai_test
DB_PORT: '5432'
FLASK_ENV: test
FLASK_DEBUG: true
SECRET_KEY: '31f87c24d691a5ee8e6a36eb14bf7ba6a19ff53ab1b37ecba140d0f7e577e41'
SECURITY_PASSWORD_SALT: '331694859419473264015565568764321607531'
MAIL_USERNAME: evie_test@askeveai.be
MAIL_PASSWORD: '8pF6AucbXi9Rt6R'
MAIL_SERVER: mail.flow-it.net
MAIL_PORT: 465
REDIS_URL: redis
REDIS_PORT: '6379'
FLOWER_USER: 'Felucia'
@@ -32,223 +28,296 @@ x-common-variables: &common-variables
MINIO_ENDPOINT: minio:9000
MINIO_ACCESS_KEY: minioadmin
MINIO_SECRET_KEY: minioadmin
NGINX_SERVER_NAME: 'localhost http://macstudio.ask-eve-ai-local.com/'
NGINX_SERVER_NAME: 'localhost http://test.ask-eve-ai-local.com/'
CREWAI_STORAGE_DIR: "/app/crewai_storage"
PUSH_GATEWAY_HOST: "pushgateway"
PUSH_GATEWAY_PORT: "9091"
COMPONENT_NAME: ${COMPONENT_NAME:-unknown}
SW_EMAIL_ACCESS_KEY: "SCWFMQ871RE4XGF04SW0"
SW_EMAIL_SECRET_KEY: "ec84604c-e2d4-4b0d-a120-40420693f42a"
SW_EMAIL_SENDER: "admin_test@mail.askeveai.be"
SW_EMAIL_NAME: "Evie Admin (test)"
SW_PROJECT: "f282f55a-ea52-4538-a979-5bcb890717ab"
name: eveai_test
services:
nginx:
image: josakola/nginx:${EVEAI_VERSION:-latest}
image: ${REGISTRY_PREFIX:-}josakola/nginx:latest
ports:
- 80:80
- 8080:8080
- 4080:80
environment:
<<: *common-variables
volumes:
- eveai_logs:/var/log/nginx
- test_eveai_logs:/var/log/nginx
depends_on:
- eveai_app
- eveai_api
- eveai_chat_client
networks:
- eveai-network
restart: "no"
- eveai-test-network
restart: unless-stopped
eveai_ops:
image: ${REGISTRY_PREFIX:-}josakola/eveai_ops:latest
build:
context: ..
dockerfile: ./docker/eveai_ops/Dockerfile
ports:
- 4002:8080 # Dev app per the port scheme
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_ops
ROLE: web
PORT: 8080
WORKERS: 1 # Dev: lower concurrency
WORKER_CLASS: gevent
WORKER_CONN: 100
LOGLEVEL: debug # Lowercase for gunicorn
MAX_REQUESTS: 1000
MAX_REQUESTS_JITTER: 100
volumes:
- ./eveai_logs:/app/logs
depends_on:
redis:
condition: service_healthy
minio:
condition: service_healthy
healthcheck:
test: [ "CMD", "curl", "-f", "http://localhost:8080/healthz/ready" ]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
networks:
- eveai-test-network
eveai_app:
image: josakola/eveai_app:${EVEAI_VERSION:-latest}
image: ${REGISTRY_PREFIX:-}josakola/eveai_app:latest
ports:
- 5001:5001
- 4001:8080
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_app
ROLE: web
PORT: 8080
WORKERS: 2 # Test: higher concurrency
WORKER_CLASS: gevent
WORKER_CONN: 100
LOGLEVEL: debug # Lowercase for gunicorn
MAX_REQUESTS: 1000
MAX_REQUESTS_JITTER: 100
volumes:
- eveai_logs:/app/logs
- crewai_storage:/app/crewai_storage
- test_eveai_logs:/app/logs
depends_on:
redis:
condition: service_healthy
minio:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5001/healthz/ready"]
test: ["CMD", "curl", "-f", "http://localhost:8080/healthz/ready"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
networks:
- eveai-network
restart: "no"
- eveai-test-network
restart: unless-stopped
eveai_workers:
image: josakola/eveai_workers:${EVEAI_VERSION:-latest}
image: ${REGISTRY_PREFIX:-}josakola/eveai_workers:latest
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_workers
ROLE: worker
CELERY_CONCURRENCY: 2 # Test: higher concurrency
CELERY_LOGLEVEL: DEBUG # Uppercase for celery
CELERY_MAX_TASKS_PER_CHILD: 1000
CELERY_PREFETCH: 1
CELERY_QUEUE_NAME: embeddings
volumes:
- eveai_logs:/app/logs
- crewai_storage:/app/crewai_storage
- test_eveai_logs:/app/logs
depends_on:
redis:
condition: service_healthy
minio:
condition: service_healthy
networks:
- eveai-network
restart: "no"
- eveai-test-network
restart: unless-stopped
eveai_chat_client:
image: josakola/eveai_chat_client:${EVEAI_VERSION:-latest}
image: ${REGISTRY_PREFIX:-}josakola/eveai_chat_client:latest
ports:
- 5004:5004
- 4004:8080
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_chat_client
ROLE: web
PORT: 8080
WORKERS: 2 # Test: higher concurrency
WORKER_CLASS: gevent
WORKER_CONN: 100
LOGLEVEL: debug # Lowercase for gunicorn
MAX_REQUESTS: 1000
MAX_REQUESTS_JITTER: 100
volumes:
- eveai_logs:/app/logs
- crewai_storage:/app/crewai_storage
- test_eveai_logs:/app/logs
depends_on:
redis:
condition: service_healthy
minio:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5004/healthz/ready"]
test: ["CMD", "curl", "-f", "http://localhost:8080/healthz/ready"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
networks:
- eveai-network
restart: "no"
- eveai-test-network
restart: unless-stopped
eveai_chat_workers:
image: josakola/eveai_chat_workers:${EVEAI_VERSION:-latest}
image: ${REGISTRY_PREFIX:-}josakola/eveai_chat_workers:latest
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_chat_workers
ROLE: worker
CELERY_CONCURRENCY: 2 # Test: higher concurrency
CELERY_LOGLEVEL: DEBUG # Uppercase for celery
CELERY_MAX_TASKS_PER_CHILD: 1000
CELERY_PREFETCH: 1
CELERY_QUEUE_NAME: llm_interactions
volumes:
- eveai_logs:/app/logs
- crewai_storage:/app/crewai_storage
- test_eveai_logs:/app/logs
depends_on:
redis:
condition: service_healthy
networks:
- eveai-network
restart: "no"
- eveai-test-network
restart: unless-stopped
eveai_api:
image: josakola/eveai_api:${EVEAI_VERSION:-latest}
image: ${REGISTRY_PREFIX:-}josakola/eveai_api:latest
ports:
- 5003:5003
- 4003:8080
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_api
ROLE: web
PORT: 8080
WORKERS: 2 # Test: higher concurrency
WORKER_CLASS: gevent
WORKER_CONN: 100
LOGLEVEL: debug # Lowercase for gunicorn
MAX_REQUESTS: 1000
MAX_REQUESTS_JITTER: 100
volumes:
- eveai_logs:/app/logs
- crewai_storage:/app/crewai_storage
- test_eveai_logs:/app/logs
depends_on:
redis:
condition: service_healthy
minio:
condition: service_healthy
healthcheck:
test: [ "CMD", "curl", "-f", "http://localhost:5003/healthz/ready" ]
test: [ "CMD", "curl", "-f", "http://localhost:8080/healthz/ready" ]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
networks:
- eveai-network
restart: "no"
- eveai-test-network
restart: unless-stopped
eveai_beat:
image: josakola/eveai_beat:${EVEAI_VERSION:-latest}
environment:
<<: *common-variables
COMPONENT_NAME: eveai_beat
volumes:
- eveai_logs:/app/logs
- crewai_storage:/app/crewai_storage
depends_on:
redis:
condition: service_healthy
networks:
- eveai-network
restart: "no"
# eveai_beat:
# image: ${REGISTRY_PREFIX:-}josakola/eveai_beat:latest
# environment:
# <<: *common-variables
# COMPONENT_NAME: eveai_beat
# ROLE: beat
# CELERY_LOGLEVEL: INFO # Uppercase for celery
# volumes:
# - test_eveai_logs:/app/logs
# depends_on:
# redis:
# condition: service_healthy
# networks:
# - eveai-test-network
# restart: unless-stopped
eveai_entitlements:
image: josakola/eveai_entitlements:${EVEAI_VERSION:-latest}
image: ${REGISTRY_PREFIX:-}josakola/eveai_entitlements:latest
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_entitlements
ROLE: worker
CELERY_CONCURRENCY: 2 # Test: higher concurrency
CELERY_LOGLEVEL: DEBUG # Uppercase for celery
CELERY_MAX_TASKS_PER_CHILD: 1000
CELERY_PREFETCH: 1
CELERY_QUEUE_NAME: entitlements
volumes:
- eveai_logs:/app/logs
- crewai_storage:/app/crewai_storage
- test_eveai_logs:/app/logs
depends_on:
redis:
condition: service_healthy
minio:
condition: service_healthy
networks:
- eveai-network
restart: "no"
- eveai-test-network
restart: unless-stopped
redis:
image: redis:7.2.5
restart: no
restart: unless-stopped
ports:
- "6379:6379"
- "4006:6379"
volumes:
- redisdata:/data
- test_redisdata:/data
healthcheck:
test: [ "CMD", "redis-cli", "ping" ]
interval: 10s
timeout: 5s
retries: 5
networks:
- eveai-network
- eveai-test-network
flower:
image: josakola/flower:${EVEAI_VERSION:-latest}
image: ${REGISTRY_PREFIX:-}josakola/flower:latest
environment:
<<: *common-variables
ports:
- "5555:5555"
- "4007:5555"
depends_on:
- redis
networks:
- eveai-network
restart: "no"
- eveai-test-network
restart: unless-stopped
minio:
image: minio/minio
ports:
- "9000:9000"
- "9001:9001"
- "4008:9000"
- "4009:9001"
expose:
- 9000
volumes:
- miniodata:/data
- minioconfig:/root/.minio
- test_miniodata:/data
- test_minioconfig:/root/.minio
environment:
MINIO_ROOT_USER: ${MINIO_ROOT_USER:-minioadmin}
MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD:-minioadmin}
@@ -260,64 +329,57 @@ services:
retries: 3
start_period: 30s
networks:
- eveai-network
restart: "no"
- eveai-test-network
restart: unless-stopped
prometheus:
image: josakola/prometheus:${EVEAI_VERSION:-latest}
container_name: prometheus
image: ${REGISTRY_PREFIX:-}josakola/prometheus:${EVEAI_VERSION:-latest}
ports:
- "9090:9090"
- "4010:9090"
volumes:
- prometheusdata:/prometheus
- test_prometheusdata:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/etc/prometheus/console_libraries'
- '--web.console.templates=/etc/prometheus/consoles'
- '--web.enable-lifecycle'
restart: no
restart: unless-stopped
networks:
- eveai-network
- eveai-test-network
pushgateway:
image: prom/pushgateway:latest
restart: unless-stopped
ports:
- "9091:9091"
- "4011:9091"
networks:
- eveai-network
- eveai-test-network
grafana:
image: josakola/grafana:${EVEAI_VERSION:-latest}
container_name: grafana
image: ${REGISTRY_PREFIX:-}josakola/grafana:${EVEAI_VERSION:-latest}
ports:
- "3000:3000"
- "4012:3000"
volumes:
- grafanadata:/var/lib/grafana
- test_grafanadata:/var/lib/grafana
environment:
- GF_SECURITY_ADMIN_USER=admin
- GF_SECURITY_ADMIN_PASSWORD=admin
- GF_USERS_ALLOW_SIGN_UP=false
restart: no
restart: unless-stopped
depends_on:
- prometheus
networks:
- eveai-network
- eveai-test-network
networks:
eveai-network:
eveai-test-network:
driver: bridge
# This enables the containers to access the host network
driver_opts:
com.docker.network.bridge.host_ipc: "true"
volumes:
eveai_logs:
pgdata:
redisdata:
miniodata:
minioconfig:
prometheusdata:
grafanadata:
crewai_storage:
test_eveai_logs:
test_redisdata:
test_miniodata:
test_minioconfig:
test_prometheusdata:
test_grafanadata:

View File

@@ -1,155 +0,0 @@
#!/bin/zsh
# or use #!/usr/bin/env zsh
# Function to display usage information
usage() {
echo "Usage: source $0 <environment> [version]"
echo " environment: The environment to use (dev, prod, test, integration, bugfix)"
echo " version : (Optional) Specific release version to deploy"
echo " If not specified, uses 'latest' (except for dev environment)"
}
# Replace the existing check at the beginning of docker_env_switch.sh
# Check if the script is sourced
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
# Script is being executed directly from terminal
echo "Error: This script must be sourced, not executed directly."
echo "Please run: source $0 <environment> [version]"
exit 1
fi
# If we reach here, script is being sourced (either by terminal or another script)
# Check if an environment is provided
if [ $# -eq 0 ]; then
usage
return 1
fi
ENVIRONMENT=$1
VERSION=${2:-latest} # Default to latest if not specified
# Set variables based on the environment
case $ENVIRONMENT in
dev)
DOCKER_CONTEXT="default"
COMPOSE_FILE="compose_dev.yaml"
VERSION="latest" # Always use latest for dev
;;
prod)
DOCKER_CONTEXT="mxz536.stackhero-network.com"
COMPOSE_FILE="compose_stackhero.yaml"
;;
test)
DOCKER_CONTEXT="test-environment" # Change to your actual test Docker context
COMPOSE_FILE="compose_test.yaml"
;;
integration)
DOCKER_CONTEXT="integration-environment" # Change to your actual integration Docker context
COMPOSE_FILE="compose_integration.yaml"
;;
bugfix)
DOCKER_CONTEXT="bugfix-environment" # Change to your actual bugfix Docker context
COMPOSE_FILE="compose_bugfix.yaml"
;;
*)
echo "Invalid environment: $ENVIRONMENT"
usage
return 1
;;
esac
# Set Docker account
DOCKER_ACCOUNT="josakola"
# Check if Docker context exists
if ! docker context ls --format '{{.Name}}' | grep -q "^$DOCKER_CONTEXT$"; then
echo "Warning: Docker context '$DOCKER_CONTEXT' does not exist."
# Prompt user if they want to create the context
if [[ "$DOCKER_CONTEXT" != "default" ]]; then
echo "Do you want to set up this context now? (y/n): "
read CREATE_CONTEXT
if [[ "$CREATE_CONTEXT" == "y" || "$CREATE_CONTEXT" == "Y" ]]; then
# You would add here the specific code to create each context type
# For example, for remote contexts you might need SSH settings
echo "Please specify the Docker host URL (e.g., ssh://user@remote_host or tcp://remote_host:2375):"
read DOCKER_HOST
docker context create "$DOCKER_CONTEXT" --docker "host=$DOCKER_HOST"
if [ $? -ne 0 ]; then
echo "Failed to create Docker context. Please create it manually."
return 1
fi
else
echo "Using default context instead."
DOCKER_CONTEXT="default"
fi
fi
fi
# Check if compose file exists
if [ ! -f "$COMPOSE_FILE" ]; then
echo "Warning: Compose file '$COMPOSE_FILE' does not exist."
echo "Do you want to create it based on compose_dev.yaml? (y/n): "
read CREATE_FILE
if [[ "$CREATE_FILE" == "y" || "$CREATE_FILE" == "Y" ]]; then
# Create new compose file based on compose_dev.yaml with version variables
sed 's/\(image: josakola\/[^:]*\):latest/\1:${EVEAI_VERSION:-latest}/g' compose_dev.yaml > "$COMPOSE_FILE"
echo "Created $COMPOSE_FILE with version placeholders."
else
echo "Cannot proceed without a valid compose file."
return 1
fi
fi
# Switch Docker context
echo "Switching to Docker context: $DOCKER_CONTEXT"
docker context use $DOCKER_CONTEXT
# Set environment variables
export COMPOSE_FILE=$COMPOSE_FILE
export EVEAI_VERSION=$VERSION
export DOCKER_ACCOUNT=$DOCKER_ACCOUNT
echo "Set COMPOSE_FILE to $COMPOSE_FILE"
echo "Set EVEAI_VERSION to $VERSION"
echo "Set DOCKER_ACCOUNT to $DOCKER_ACCOUNT"
docker-compose() {
docker compose -f $COMPOSE_FILE "$@"
}
dc() {
docker compose -f $COMPOSE_FILE "$@"
}
dcup() {
docker compose -f $COMPOSE_FILE up -d --remove-orphans "$@"
}
dcdown() {
docker compose -f $COMPOSE_FILE down "$@"
}
dcps() {
docker compose -f $COMPOSE_FILE ps "$@"
}
dclogs() {
docker compose -f $COMPOSE_FILE logs "$@"
}
dcpull() {
docker compose -f $COMPOSE_FILE pull "$@"
}
dcrefresh() {
docker compose -f $COMPOSE_FILE pull && docker compose -f $COMPOSE_FILE up -d --remove-orphans "$@"
}
# Export the functions so they are available in other scripts
export -f docker-compose dc dcup dcdown dcps dclogs dcpull dcrefresh
echo "Docker environment switched to $ENVIRONMENT with version $VERSION"
echo "You can now use 'docker-compose', 'dc', 'dcup', 'dcdown', 'dcps', 'dclogs', 'dcpull' or 'dcrefresh' commands"

View File

@@ -1,70 +1,5 @@
ARG PYTHON_VERSION=3.12.7
FROM python:${PYTHON_VERSION}-slim as base
FROM registry.ask-eve-ai-local.com/josakola/eveai-base:latest
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
# Create directory for patched packages and set permissions
RUN mkdir -p /app/patched_packages && \
chmod 777 /app/patched_packages
# Ensure patches are applied to the application.
ENV PYTHONPATH=/app/patched_packages:$PYTHONPATH
WORKDIR /app
# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/go/dockerfile-user-best-practices/
ARG UID=10001
RUN adduser \
--disabled-password \
--gecos "" \
--home "/nonexistent" \
--shell "/bin/bash" \
--no-create-home \
--uid "${UID}" \
appuser
# Install necessary packages and build tools
RUN apt-get update && apt-get install -y \
build-essential \
gcc \
postgresql-client \
curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Create logs directory and set permissions
RUN mkdir -p /app/logs && chown -R appuser:appuser /app/logs
# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy them into
# into this layer.
COPY requirements.txt /app/
RUN python -m pip install -r /app/requirements.txt
# Copy the source code into the container.
# Copy the service-specific source code into the container.
COPY eveai_api /app/eveai_api
COPY common /app/common
COPY config /app/config
COPY scripts /app/scripts
COPY patched_packages /app/patched_packages
# Set permissions for entrypoint script
RUN chmod 777 /app/scripts/entrypoint.sh
# Set ownership of the application directory to the non-privileged user
RUN chown -R appuser:appuser /app
# Expose the port that the application listens on.
EXPOSE 5003
# Set entrypoint and command
ENTRYPOINT ["/app/scripts/entrypoint.sh"]
CMD ["/app/scripts/start_eveai_api.sh"]

View File

@@ -1,72 +1,5 @@
ARG PYTHON_VERSION=3.12.7
FROM python:${PYTHON_VERSION}-slim as base
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
# Create directory for patched packages and set permissions
RUN mkdir -p /app/patched_packages && \
chmod 777 /app/patched_packages
# Ensure patches are applied to the application.
ENV PYTHONPATH=/app/patched_packages:$PYTHONPATH
WORKDIR /app
# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/go/dockerfile-user-best-practices/
ARG UID=10001
RUN adduser \
--disabled-password \
--gecos "" \
--home "/nonexistent" \
--shell "/bin/bash" \
--no-create-home \
--uid "${UID}" \
appuser
# Install necessary packages and build tools
RUN apt-get update && apt-get install -y \
build-essential \
gcc \
postgresql-client \
curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Create logs directory and set permissions
RUN mkdir -p /app/logs && chown -R appuser:appuser /app/logs
# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy them into
# into this layer.
COPY requirements.txt /app/
RUN python -m pip install -r /app/requirements.txt
FROM registry.ask-eve-ai-local.com/josakola/eveai-base:latest
# Copy the source code into the container.
COPY eveai_app /app/eveai_app
COPY common /app/common
COPY config /app/config
COPY migrations /app/migrations
COPY scripts /app/scripts
COPY patched_packages /app/patched_packages
COPY content /app/content
# Set permissions for entrypoint script
RUN chmod 777 /app/scripts/entrypoint.sh
# Set ownership of the application directory to the non-privileged user
RUN chown -R appuser:appuser /app
# Expose the port that the application listens on.
EXPOSE 5001
# Set entrypoint and command
ENTRYPOINT ["/app/scripts/entrypoint.sh"]
CMD ["/app/scripts/start_eveai_app.sh"]
COPY migrations /app/migrations

View File

@@ -1,65 +1,5 @@
ARG PYTHON_VERSION=3.12.7
FROM python:${PYTHON_VERSION}-slim as base
FROM registry.ask-eve-ai-local.com/josakola/eveai-base:latest
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
# Create directory for patched packages and set permissions
RUN mkdir -p /app/patched_packages && \
chmod 777 /app/patched_packages
# Ensure patches are applied to the application.
ENV PYTHONPATH=/app/patched_packages:$PYTHONPATH
WORKDIR /app
# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/go/dockerfile-user-best-practices/
ARG UID=10001
RUN adduser \
--disabled-password \
--gecos "" \
--home "/nonexistent" \
--shell "/bin/bash" \
--no-create-home \
--uid "${UID}" \
appuser
# Install necessary packages and build tools
#RUN apt-get update && apt-get install -y \
# build-essential \
# gcc \
# && apt-get clean \
# && rm -rf /var/lib/apt/lists/*
# Create logs directory and set permissions
RUN mkdir -p /app/logs && chown -R appuser:appuser /app/logs
# Install Python dependencies.
# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy them into
# into this layer.
COPY requirements.txt /app/
RUN python -m pip install -r /app/requirements.txt
# Copy the source code into the container.
# Copy the service-specific source code into the container.
COPY eveai_beat /app/eveai_beat
COPY common /app/common
COPY config /app/config
COPY scripts /app/scripts
COPY patched_packages /app/patched_packages
COPY --chown=root:root scripts/entrypoint_no_db.sh /app/scripts/
# Set ownership of the application directory to the non-privileged user
RUN chown -R appuser:appuser /app
# Set entrypoint and command
ENTRYPOINT ["/app/scripts/entrypoint_no_db.sh"]
CMD ["/app/scripts/start_eveai_beat.sh"]

View File

@@ -1,72 +1,5 @@
ARG PYTHON_VERSION=3.12.7
FROM python:${PYTHON_VERSION}-slim as base
FROM registry.ask-eve-ai-local.com/josakola/eveai-base:latest
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
# Create directory for patched packages and set permissions
RUN mkdir -p /app/patched_packages && \
chmod 777 /app/patched_packages
# Ensure patches are applied to the application.
ENV PYTHONPATH=/app/patched_packages:$PYTHONPATH
WORKDIR /app
# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/go/dockerfile-user-best-practices/
ARG UID=10001
RUN adduser \
--disabled-password \
--gecos "" \
--home "/nonexistent" \
--shell "/bin/bash" \
--no-create-home \
--uid "${UID}" \
appuser
# Install necessary packages and build tools
RUN apt-get update && apt-get install -y \
build-essential \
gcc \
postgresql-client \
curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Create logs directory and set permissions
RUN mkdir -p /app/logs && chown -R appuser:appuser /app/logs
# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy them into
# into this layer.
COPY requirements.txt /app/
RUN python -m pip install -r /app/requirements.txt
# Copy the source code into the container.
# Copy the service-specific source code into the container.
COPY eveai_chat_client /app/eveai_chat_client
COPY common /app/common
COPY config /app/config
COPY scripts /app/scripts
COPY patched_packages /app/patched_packages
COPY content /app/content
# Set permissions for scripts
RUN chmod 777 /app/scripts/entrypoint.sh && \
chmod 777 /app/scripts/start_eveai_chat_client.sh
# Set ownership of the application directory to the non-privileged user
RUN chown -R appuser:appuser /app
# Expose the port that the application listens on.
EXPOSE 5004
# Set entrypoint and command
ENTRYPOINT ["/app/scripts/entrypoint.sh"]
CMD ["/app/scripts/start_eveai_chat_client.sh"]

View File

@@ -1,68 +1,10 @@
ARG PYTHON_VERSION=3.12.7
FROM python:${PYTHON_VERSION}-slim as base
FROM registry.ask-eve-ai-local.com/josakola/eveai-base:latest
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
# Create directory for patched packages and set permissions
RUN mkdir -p /app/patched_packages && \
chmod 777 /app/patched_packages
# Ensure patches are applied to the application.
ENV PYTHONPATH=/app/patched_packages:$PYTHONPATH
WORKDIR /app
# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/go/dockerfile-user-best-practices/
ARG UID=10001
RUN adduser \
--disabled-password \
--gecos "" \
--home "/nonexistent" \
--shell "/bin/bash" \
--no-create-home \
--uid "${UID}" \
appuser
# Install necessary packages and build tools
RUN apt-get update && apt-get install -y \
build-essential \
gcc \
postgresql-client \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Create logs directory and set permissions
RUN mkdir -p /app/logs && chown -R appuser:appuser /app/logs
# Service-specific directories (preserve crewai_storage)
USER root
RUN mkdir -p /app/crewai_storage && chown -R appuser:appuser /app/crewai_storage
USER appuser
# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy them into
# into this layer.
COPY requirements.txt /app/
RUN python -m pip install -r /app/requirements.txt
# Copy the source code into the container.
# Copy the service-specific source code into the container.
COPY eveai_chat_workers /app/eveai_chat_workers
COPY common /app/common
COPY config /app/config
COPY scripts /app/scripts
COPY patched_packages /app/patched_packages
COPY --chown=root:root scripts/entrypoint.sh /app/scripts/
# Set permissions for entrypoint script
RUN chmod 777 /app/scripts/entrypoint.sh
# Set ownership of the application directory to the non-privileged user
RUN chown -R appuser:appuser /app
# Set entrypoint and command
ENTRYPOINT ["/app/scripts/entrypoint.sh"]
CMD ["/app/scripts/start_eveai_chat_workers.sh"]

View File

@@ -1,69 +1,5 @@
ARG PYTHON_VERSION=3.12.7
FROM python:${PYTHON_VERSION}-slim as base
FROM registry.ask-eve-ai-local.com/josakola/eveai-base:latest
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
# Create directory for patched packages and set permissions
RUN mkdir -p /app/patched_packages && \
chmod 777 /app/patched_packages
# Ensure patches are applied to the application.
ENV PYTHONPATH=/app/patched_packages:$PYTHONPATH
WORKDIR /app
# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/go/dockerfile-user-best-practices/
ARG UID=10001
RUN adduser \
--disabled-password \
--gecos "" \
--home "/nonexistent" \
--shell "/bin/bash" \
--no-create-home \
--uid "${UID}" \
appuser
# Install necessary packages and build tools
RUN apt-get update && apt-get install -y \
build-essential \
gcc \
postgresql-client \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Create logs directory and set permissions
RUN mkdir -p /app/logs && chown -R appuser:appuser /app/logs
# Install Python dependencies.
# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy them into
# into this layer.
COPY requirements.txt /app/
RUN python -m pip install -r /app/requirements.txt
# Copy the source code into the container.
# Copy the service-specific source code into the container.
COPY eveai_entitlements /app/eveai_entitlements
COPY common /app/common
COPY config /app/config
COPY scripts /app/scripts
COPY patched_packages /app/patched_packages
COPY --chown=root:root scripts/entrypoint.sh /app/scripts/
# Set permissions for entrypoint script
RUN chmod 777 /app/scripts/entrypoint.sh
# Set ownership of the application directory to the non-privileged user
RUN chown -R appuser:appuser /app
# Set entrypoint and command
ENTRYPOINT ["/app/scripts/entrypoint.sh"]
CMD ["/app/scripts/start_eveai_entitlements.sh"]

View File

@@ -0,0 +1,6 @@
FROM registry.ask-eve-ai-local.com/josakola/eveai-base:latest
# Copy the source code into the container.
COPY eveai_ops /app/eveai_ops
COPY migrations /app/migrations
COPY db_backups /app/db_backups

View File

@@ -1,70 +1,12 @@
ARG PYTHON_VERSION=3.12.7
FROM python:${PYTHON_VERSION}-slim as base
FROM registry.ask-eve-ai-local.com/josakola/eveai-base:latest
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Service-specific packages: ffmpeg only needed for this service - but currently unnecessary.
#USER root
#RUN apt-get update && apt-get install -y --no-install-recommends \
# ffmpeg \
# && rm -rf /var/lib/apt/lists/*
USER appuser
# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
# Create directory for patched packages and set permissions
RUN mkdir -p /app/patched_packages && \
chmod 777 /app/patched_packages
# Ensure patches are applied to the application.
ENV PYTHONPATH=/app/patched_packages:$PYTHONPATH
WORKDIR /app
# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/go/dockerfile-user-best-practices/
ARG UID=10001
RUN adduser \
--disabled-password \
--gecos "" \
--home "/nonexistent" \
--shell "/bin/bash" \
--no-create-home \
--uid "${UID}" \
appuser
# Install necessary packages and build tools
RUN apt-get update && apt-get install -y \
build-essential \
gcc \
postgresql-client \
ffmpeg \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Create logs directory and set permissions
RUN mkdir -p /app/logs && chown -R appuser:appuser /app/logs
# Install Python dependencies.
# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy them into
# into this layer.
COPY requirements.txt /app/
RUN python -m pip install -r /app/requirements.txt
# Copy the source code into the container.
# Copy the service-specific source code into the container.
COPY eveai_workers /app/eveai_workers
COPY common /app/common
COPY config /app/config
COPY scripts /app/scripts
COPY patched_packages /app/patched_packages
COPY --chown=root:root scripts/entrypoint.sh /app/scripts/
# Set permissions for entrypoint script
RUN chmod 777 /app/scripts/entrypoint.sh
# Set ownership of the application directory to the non-privileged user
RUN chown -R appuser:appuser /app
# Set entrypoint and command
ENTRYPOINT ["/app/scripts/entrypoint.sh"]
CMD ["/app/scripts/start_eveai_workers.sh"]

View File

@@ -1,6 +1,9 @@
# Use the official Nginx image as the base image
FROM nginx:latest
ARG TARGETPLATFORM
FROM --platform=$TARGETPLATFORM nginx:latest
# Ensure we use user root
USER root
# Copy the custom Nginx configuration file into the container
COPY ../../nginx/nginx.conf /etc/nginx/nginx.conf
@@ -13,7 +16,7 @@ RUN mkdir -p /etc/nginx/static /etc/nginx/public
COPY ../../nginx/static /etc/nginx/static
# Copy public files
COPY ../../nginx/public /etc/nginx/public
# COPY ../../nginx/public /etc/nginx/public
# Copy site-specific configurations
RUN mkdir -p /etc/nginx/sites-enabled

docker/podman_env_switch.sh Executable file
View File

@@ -0,0 +1,257 @@
#!/usr/bin/env zsh
# Function to display usage information
usage() {
echo "Usage: source $0 <environment> [version]"
echo " environment: The environment to use (dev, prod, test, integration, bugfix)"
echo " version : (Optional) Specific release version to deploy"
echo " If not specified, uses 'latest' (except for dev environment)"
}
# Check if the script is sourced - improved for both bash and zsh
is_sourced() {
if [[ -n "$ZSH_VERSION" ]]; then
# In zsh, check if we're in a sourced context
[[ "$ZSH_EVAL_CONTEXT" =~ "(:file|:cmdsubst)" ]] || [[ "$0" != "$ZSH_ARGZERO" ]]
else
# In bash, compare BASH_SOURCE with $0
[[ "${BASH_SOURCE[0]}" != "${0}" ]]
fi
}
if ! is_sourced; then
echo "Error: This script must be sourced, not executed directly."
echo "Please run: source $0 <environment> [version]"
if [[ -n "$ZSH_VERSION" ]]; then
return 1 2>/dev/null || exit 1
else
exit 1
fi
fi
# Check if an environment is provided
if [ $# -eq 0 ]; then
usage
return 1
fi
ENVIRONMENT=$1
VERSION=${2:-latest} # Default to latest if not specified
# Check if podman and podman-compose are available
if ! command -v podman &> /dev/null; then
echo "Error: podman is not installed or not in PATH"
echo "Please install podman first"
return 1
fi
if ! command -v podman-compose &> /dev/null; then
echo "Error: podman-compose is not installed or not in PATH"
echo "Please install podman-compose first"
return 1
fi
CONTAINER_CMD="podman"
# Store the actual path to podman-compose to avoid recursion
COMPOSE_CMD_PATH=$(command -v podman-compose)
echo "Using container runtime: $CONTAINER_CMD"
echo "Using compose command: $COMPOSE_CMD_PATH"
# Set default platform to AMD64 for consistency
export BUILDAH_PLATFORM=linux/amd64
export PODMAN_PLATFORM=linux/amd64
# Set variables based on the environment
case $ENVIRONMENT in
dev)
PODMAN_CONNECTION="default"
COMPOSE_FILE="compose_dev.yaml"
REGISTRY_PREFIX=""
COMPOSE_PROJECT_NAME="eveai_dev"
VERSION="latest" # Always use latest for dev
;;
prod)
# TO BE DEFINED
PODMAN_CONNECTION="mxz536.stackhero-network.com"
COMPOSE_FILE="compose_stackhero.yaml"
REGISTRY_PREFIX=""
COMPOSE_PROJECT_NAME="eveai_prod"
;;
test)
PODMAN_CONNECTION="test-environment"
COMPOSE_FILE="compose_test.yaml"
REGISTRY_PREFIX="registry.ask-eve-ai-local.com/"
COMPOSE_PROJECT_NAME="eveai_test"
;;
bugfix)
# TO BE DEFINED
PODMAN_CONNECTION="bugfix-environment"
COMPOSE_FILE="compose_bugfix.yaml"
COMPOSE_PROJECT_NAME="eveai_bugfix"
;;
*)
echo "Invalid environment: $ENVIRONMENT"
usage
return 1
;;
esac
# Set container registry account
CONTAINER_ACCOUNT="josakola"
# Handle remote connections for podman
if [[ "$PODMAN_CONNECTION" != "default" ]]; then
echo "Setting up remote podman connection: $PODMAN_CONNECTION"
# Check if podman connection exists
if ! podman system connection list --format '{{.Name}}' 2>/dev/null | grep -q "^$PODMAN_CONNECTION$"; then
echo "Warning: Podman connection '$PODMAN_CONNECTION' does not exist."
echo -n "Do you want to set up this connection now? (y/n): "
read -r CREATE_CONNECTION
if [[ "$CREATE_CONNECTION" == "y" || "$CREATE_CONNECTION" == "Y" ]]; then
echo -n "Please specify the SSH connection string (e.g., user@remote_host): "
read -r SSH_CONNECTION
if [[ -n "$SSH_CONNECTION" ]]; then
podman system connection add "$PODMAN_CONNECTION" --identity ~/.ssh/id_rsa "ssh://$SSH_CONNECTION/run/user/1000/podman/podman.sock"
if [[ $? -ne 0 ]]; then
echo "Failed to create podman connection. Please create it manually."
return 1
fi
else
echo "No SSH connection string provided."
return 1
fi
else
echo "Using local podman setup instead."
PODMAN_CONNECTION="default"
fi
fi
# Set the connection
if [[ "$PODMAN_CONNECTION" != "default" ]]; then
# Use podman context instead of manually setting CONTAINER_HOST
podman system connection default "$PODMAN_CONNECTION" 2>/dev/null
if [[ $? -eq 0 ]]; then
echo "Switched to remote podman connection: $PODMAN_CONNECTION"
else
echo "Warning: Failed to switch to connection $PODMAN_CONNECTION, using local setup"
PODMAN_CONNECTION="default"
fi
fi
else
echo "Using local podman setup with AMD64 platform"
# Ensure we're using the default local connection
podman system connection default "" 2>/dev/null || true
fi
# Check if compose file exists
if [[ ! -f "$COMPOSE_FILE" ]]; then
echo "Warning: Compose file '$COMPOSE_FILE' does not exist."
if [[ -f "compose_dev.yaml" ]]; then
echo -n "Do you want to create it based on compose_dev.yaml? (y/n): "
read -r CREATE_FILE
if [[ "$CREATE_FILE" == "y" || "$CREATE_FILE" == "Y" ]]; then
# Create new compose file based on compose_dev.yaml with version variables
if sed 's/\(image: josakola\/[^:]*\):latest/\1:${EVEAI_VERSION:-latest}/g' compose_dev.yaml > "$COMPOSE_FILE" 2>/dev/null; then
echo "Created $COMPOSE_FILE with version placeholders."
else
echo "Failed to create $COMPOSE_FILE"
return 1
fi
else
echo "Cannot proceed without a valid compose file."
return 1
fi
else
echo "Cannot create $COMPOSE_FILE: compose_dev.yaml not found."
return 1
fi
fi
# Set environment variables
export COMPOSE_FILE=$COMPOSE_FILE
export EVEAI_VERSION=$VERSION
export CONTAINER_ACCOUNT=$CONTAINER_ACCOUNT
export CONTAINER_CMD=$CONTAINER_CMD
export COMPOSE_CMD_PATH=$COMPOSE_CMD_PATH
export REGISTRY_PREFIX=$REGISTRY_PREFIX
export COMPOSE_PROJECT_NAME=$COMPOSE_PROJECT_NAME
echo "Set COMPOSE_FILE to $COMPOSE_FILE"
echo "Set EVEAI_VERSION to $VERSION"
echo "Set CONTAINER_ACCOUNT to $CONTAINER_ACCOUNT"
echo "Set platform to AMD64 (linux/amd64)"
echo "Set registry prefix to $REGISTRY_PREFIX"
echo "Set project name to $COMPOSE_PROJECT_NAME"
# Define compose wrapper functions using the full path to avoid recursion
pc() {
$COMPOSE_CMD_PATH -p ${COMPOSE_PROJECT_NAME} -f $COMPOSE_FILE "$@"
}
pcup() {
$COMPOSE_CMD_PATH -p ${COMPOSE_PROJECT_NAME} -f $COMPOSE_FILE up -d --remove-orphans "$@"
}
pcdown() {
$COMPOSE_CMD_PATH -p ${COMPOSE_PROJECT_NAME} -f $COMPOSE_FILE down "$@"
}
pcps() {
$COMPOSE_CMD_PATH -p ${COMPOSE_PROJECT_NAME} -f $COMPOSE_FILE ps "$@"
}
pclogs() {
$COMPOSE_CMD_PATH -p ${COMPOSE_PROJECT_NAME} -f $COMPOSE_FILE logs "$@"
}
pcpull() {
echo "Pulling AMD64 images..."
$COMPOSE_CMD_PATH -p ${COMPOSE_PROJECT_NAME} -f $COMPOSE_FILE pull "$@"
}
pcrefresh() {
$COMPOSE_CMD_PATH -p ${COMPOSE_PROJECT_NAME} -f $COMPOSE_FILE pull && $COMPOSE_CMD_PATH -p ${COMPOSE_PROJECT_NAME} -f $COMPOSE_FILE up -d --remove-orphans "$@"
}
pcbuild() {
$COMPOSE_CMD_PATH -p ${COMPOSE_PROJECT_NAME} -f $COMPOSE_FILE build "$@"
}
pcrestart() {
$COMPOSE_CMD_PATH -p ${COMPOSE_PROJECT_NAME} -f $COMPOSE_FILE restart "$@"
}
pcstop() {
$COMPOSE_CMD_PATH -p ${COMPOSE_PROJECT_NAME} -f $COMPOSE_FILE stop "$@"
}
pcstart() {
$COMPOSE_CMD_PATH -p ${COMPOSE_PROJECT_NAME} -f $COMPOSE_FILE start "$@"
}
# Export functions - handle both bash and zsh
if [[ -n "$ZSH_VERSION" ]]; then
# In zsh, functions are automatically available in subshells
# But we can make them available globally with typeset
typeset -f pc pcup pcdown pcps pclogs pcpull pcrefresh pcbuild pcrestart pcstop pcstart > /dev/null
else
# Bash style export
export -f pc pcup pcdown pcps pclogs pcpull pcrefresh pcbuild pcrestart pcstop pcstart
fi
echo "✅ Podman environment switched to $ENVIRONMENT with version $VERSION"
echo "🖥️ Platform: AMD64 (compatible with both Intel and Apple Silicon)"
echo "Available commands:"
echo " pc - podman-compose shorthand"
echo " pcup - start services in background"
echo " pcdown - stop and remove services"
echo " pcps - list running services"
echo " pclogs - view service logs"
echo " pcpull - pull latest images"
echo " pcrefresh - pull and restart services"
echo " pcbuild - build services"
echo " pcrestart - restart services"
echo " pcstop - stop services"
echo " pcstart - start stopped services"

View File

@@ -1,20 +1,12 @@
#!/bin/bash
cd /Volumes/OWC4M2_1/Development/Josako/EveAI/TBD/docker
source ./docker_env_switch.sh dev
echo "Copying client images"
cp -fv ../eveai_chat_client/static/assets/img/* ../nginx/static/assets/img
dcdown eveai_chat_client nginx
cd ../nginx
npm run clean
npm run build
cd ../docker
./build_and_push_eveai.sh -b nginx
cd ../docker/
dcup eveai_chat_client nginx

docker/tag_registry_version.sh Executable file
View File

@@ -0,0 +1,238 @@
#!/bin/bash
# Exit on any error
set -e
# Function to display usage information
usage() {
echo "Usage: $0 <version> [options]"
echo " version : Version to tag (e.g., v1.2.3, v1.2.3-alpha, v2.0.0-beta)"
echo ""
echo "Options:"
echo " --services <service1,service2,...> : Specific services to tag (default: all EveAI services)"
echo " --dry-run : Show what would be done without executing"
echo " --force : Overwrite existing version tags"
echo ""
echo "Examples:"
echo " $0 v1.2.3-alpha"
echo " $0 v2.0.0 --services eveai_api,eveai_workers"
echo " $0 v1.0.0-beta --dry-run"
}
# Check if version is provided
if [ $# -eq 0 ]; then
echo "❌ Error: Version is required"
usage
exit 1
fi
VERSION=$1
shift
# Default values
SERVICES=""
DRY_RUN=false
FORCE=false
# Parse options
while [[ $# -gt 0 ]]; do
case $1 in
--services)
SERVICES="$2"
shift 2
;;
--dry-run)
DRY_RUN=true
shift
;;
--force)
FORCE=true
shift
;;
-*)
echo "❌ Unknown option: $1"
usage
exit 1
;;
*)
echo "❌ Unexpected argument: $1"
usage
exit 1
;;
esac
done
# Validate version format (flexible semantic versioning)
if [[ ! "$VERSION" =~ ^v?[0-9]+\.[0-9]+\.[0-9]+(-[a-zA-Z0-9\-]+)?$ ]]; then
echo "❌ Error: Invalid version format. Expected format: v1.2.3 or v1.2.3-alpha"
echo " Examples: v1.0.0, v2.1.3-beta, v1.0.0-rc1"
exit 1
fi
# Ensure version starts with 'v'
if [[ ! "$VERSION" =~ ^v ]]; then
VERSION="v$VERSION"
fi
# Local registry configuration
REGISTRY="registry.ask-eve-ai-local.com"
ACCOUNT="josakola"
# Check if podman is available
if ! command -v podman &> /dev/null; then
echo "❌ Error: podman not found"
exit 1
fi
# Check if yq is available
if ! command -v yq &> /dev/null; then
echo "❌ Error: yq not found (required for parsing compose file)"
exit 1
fi
# Check if compose file exists
COMPOSE_FILE="compose_dev.yaml"
if [[ ! -f "$COMPOSE_FILE" ]]; then
echo "❌ Error: Compose file '$COMPOSE_FILE' not found"
exit 1
fi
echo "🏷️ EveAI Registry Version Tagging Script"
echo "📦 Version: $VERSION"
echo "🏪 Registry: $REGISTRY"
echo "👤 Account: $ACCOUNT"
# Get services to process
if [[ -n "$SERVICES" ]]; then
# Convert comma-separated list to array
IFS=',' read -ra SERVICE_ARRAY <<< "$SERVICES"
else
# Get all EveAI services (excluding nginx as per requirements)
SERVICE_ARRAY=()
while IFS= read -r line; do
SERVICE_ARRAY+=("$line")
done < <(yq e '.services | keys | .[]' "$COMPOSE_FILE" | grep -E '^eveai_')
fi
echo "🔍 Services to process: ${SERVICE_ARRAY[*]}"
# Function to check if image exists in registry
check_image_exists() {
local image_name="$1"
if podman image exists "$image_name" 2>/dev/null; then
return 0
else
return 1
fi
}
# Function to check if version tag already exists
check_version_exists() {
local service="$1"
local version_tag="$REGISTRY/$ACCOUNT/$service:$VERSION"
# Try to inspect the image in the registry
if podman image exists "$version_tag" 2>/dev/null; then
return 0
else
return 1
fi
}
# Process each service
PROCESSED_SERVICES=()
FAILED_SERVICES=()
for SERVICE in "${SERVICE_ARRAY[@]}"; do
echo ""
echo "🔄 Processing service: $SERVICE"
# Check if service exists in compose file
if ! yq e ".services.$SERVICE" "$COMPOSE_FILE" | grep -q "image:"; then
echo "⚠️ Warning: Service '$SERVICE' not found in $COMPOSE_FILE, skipping"
continue
fi
# Construct image names
LATEST_IMAGE="$REGISTRY/$ACCOUNT/$SERVICE:latest"
VERSION_IMAGE="$REGISTRY/$ACCOUNT/$SERVICE:$VERSION"
echo " 📥 Source: $LATEST_IMAGE"
echo " 🏷️ Target: $VERSION_IMAGE"
# Check if version already exists
if check_version_exists "$SERVICE" && [[ "$FORCE" != true ]]; then
echo " ⚠️ Version $VERSION already exists for $SERVICE"
echo " 💡 Use --force to overwrite existing tags"
continue
fi
if [[ "$DRY_RUN" == true ]]; then
echo " 🔍 [DRY RUN] Would tag $LATEST_IMAGE as $VERSION_IMAGE"
PROCESSED_SERVICES+=("$SERVICE")
continue
fi
# Check if latest image exists
if ! check_image_exists "$LATEST_IMAGE"; then
echo " ❌ Latest image not found: $LATEST_IMAGE"
echo " 💡 Run build_and_push_eveai.sh first to create latest images"
FAILED_SERVICES+=("$SERVICE")
continue
fi
# Pull latest image
echo " 📥 Pulling latest image..."
if ! podman pull "$LATEST_IMAGE"; then
echo " ❌ Failed to pull $LATEST_IMAGE"
FAILED_SERVICES+=("$SERVICE")
continue
fi
# Tag with version
echo " 🏷️ Tagging with version $VERSION..."
if ! podman tag "$LATEST_IMAGE" "$VERSION_IMAGE"; then
echo " ❌ Failed to tag $LATEST_IMAGE as $VERSION_IMAGE"
FAILED_SERVICES+=("$SERVICE")
continue
fi
# Push version tag to registry
echo " 📤 Pushing version tag to registry..."
if ! podman push "$VERSION_IMAGE"; then
echo " ❌ Failed to push $VERSION_IMAGE"
FAILED_SERVICES+=("$SERVICE")
continue
fi
echo " ✅ Successfully tagged $SERVICE with version $VERSION"
PROCESSED_SERVICES+=("$SERVICE")
done
# Summary
echo ""
echo "📊 Summary:"
echo "✅ Successfully processed: ${#PROCESSED_SERVICES[@]} services"
if [[ ${#PROCESSED_SERVICES[@]} -gt 0 ]]; then
printf " - %s\n" "${PROCESSED_SERVICES[@]}"
fi
if [[ ${#FAILED_SERVICES[@]} -gt 0 ]]; then
echo "❌ Failed: ${#FAILED_SERVICES[@]} services"
printf " - %s\n" "${FAILED_SERVICES[@]}"
fi
if [[ "$DRY_RUN" == true ]]; then
echo "🔍 This was a dry run - no actual changes were made"
fi
echo ""
if [[ ${#FAILED_SERVICES[@]} -eq 0 ]]; then
echo "🎉 All services successfully tagged with version $VERSION!"
echo "📦 Images are available in registry: $REGISTRY/$ACCOUNT/[service]:$VERSION"
else
echo "⚠️ Some services failed to process. Check the errors above."
exit 1
fi
echo "🕐 Finished at $(date +"%d/%m/%Y %H:%M:%S")"

View File

@@ -0,0 +1,79 @@
# Pushgateway Grouping Keys (instance, namespace, process)
Goal: prevent metrics pushed by different Pods or worker processes from overwriting each other, while keeping Prometheus/Grafana queries simple.
Summary of decisions
- WORKER_ID source = OS process ID (PID)
- Always include namespace in grouping labels
What this changes
- Every push to Prometheus Pushgateway now includes a grouping_key with:
- instance = POD_NAME (fallback to HOSTNAME, then "dev")
- namespace = POD_NAMESPACE (fallback to ENVIRONMENT, then "dev")
- process = WORKER_ID (fallback to current PID)
- Prometheus will expose these as exported_instance, exported_namespace, and exported_process on the scraped series.
Code changes (already implemented)
- common/utils/business_event.py
- push_to_gateway(..., grouping_key={instance, namespace, process})
- Safe fallbacks ensure dev/test (Podman) keeps working with no K8s-specific env vars.
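For reference, a minimal sketch of what such a push looks like with prometheus_client (the gateway address, job name, and metric are illustrative; the real implementation lives in common/utils/business_event.py):

```python
import os

from prometheus_client import CollectorRegistry, Counter, push_to_gateway

registry = CollectorRegistry()
calls = Counter("eveai_llm_calls_total", "Total LLM calls", registry=registry)
calls.inc()

# Grouping labels with the same fallbacks described above
grouping_key = {
    "instance": os.environ.get("POD_NAME") or os.environ.get("HOSTNAME", "dev"),
    "namespace": os.environ.get("POD_NAMESPACE") or os.environ.get("ENVIRONMENT", "dev"),
    "process": str(os.getpid()),  # WORKER_ID fallback: current PID
}

# Each (job, instance, namespace, process) combination is its own group,
# so concurrent workers no longer overwrite each other's pushes.
push_to_gateway(
    "pushgateway:9091",  # PUSH_GATEWAY_HOST:PUSH_GATEWAY_PORT
    job="eveai_app",     # illustrative; COMPONENT_NAME in the real code
    registry=registry,
    grouping_key=grouping_key,
)
```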
Kubernetes manifests (already implemented)
- All Deployments that push metrics set env vars via Downward API:
- POD_NAME from metadata.name
- POD_NAMESPACE from metadata.namespace
- Files updated:
- scaleway/manifests/base/applications/frontend/eveai-app/deployment.yaml
- scaleway/manifests/base/applications/frontend/eveai-api/deployment.yaml
- scaleway/manifests/base/applications/frontend/eveai-chat-client/deployment.yaml
- scaleway/manifests/base/applications/backend/eveai-workers/deployment.yaml
- scaleway/manifests/base/applications/backend/eveai-chat-workers/deployment.yaml
- scaleway/manifests/base/applications/backend/eveai-entitlements/deployment.yaml
No changes needed to secrets
- PUSH_GATEWAY_HOST/PORT remain provided via eveai-secrets; code composes PUSH_GATEWAY_URL internally.
How to verify
1) Pushgateway contains per-pod/process groups
- Port-forward Pushgateway (namespace monitoring):
- kubectl -n monitoring port-forward svc/monitoring-pushgateway-prometheus-pushgateway 9091:9091
- Inspect:
- curl -s http://127.0.0.1:9091/api/v1/metrics | jq '.data[].labels'
- You should see labels including job (your service), instance (pod), namespace, process (pid).
2) Prometheus shows the labels as exported_*
- Port-forward Prometheus (namespace monitoring):
- kubectl -n monitoring port-forward svc/monitoring-prometheus 9090:9090
- Queries:
- label_values(eveai_llm_calls_total, exported_instance)
- label_values(eveai_llm_calls_total, exported_namespace)
- label_values(eveai_llm_calls_total, exported_process)
PromQL query patterns
- Hide per-process detail by aggregating away exported_process (keep the listed labels):
- sum by (exported_job, exported_instance, exported_namespace) (rate(eveai_llm_calls_total[5m]))
- Service-level totals (hide instance and process):
- sum by (exported_job, exported_namespace) (rate(eveai_llm_calls_total[5m]))
- Histogram example (p95 per service):
- histogram_quantile(0.95, sum by (le, exported_job, exported_namespace) (rate(eveai_llm_duration_seconds_bucket[5m])))
Dev/Test (Podman) behavior
- No Kubernetes Downward API: POD_NAME/POD_NAMESPACE are not set.
- Fallbacks used by the code:
- instance = HOSTNAME if available, else "dev"
- namespace = ENVIRONMENT if available, else "dev"
- process = current PID
- This guarantees no crashes and still avoids process-level overwrites.
Operational notes
- Cardinality: adding process creates more series (one per worker). This is required to avoid data loss when multiple workers push concurrently. Dashboards should aggregate away exported_process unless you need per-worker detail.
- Batch jobs (future): use the same grouping and consider delete_from_gateway on successful completion to remove stale groups for that job/instance/process.
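For that future batch-job case, the cleanup call could look roughly like this (same illustrative gateway address and grouping_key dict as the sketch above):

```python
from prometheus_client import delete_from_gateway

# Remove this job's group after successful completion so no stale series linger
delete_from_gateway("pushgateway:9091", job="eveai_workers", grouping_key=grouping_key)
```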
Troubleshooting
- If you still see overwriting:
- Confirm that instance, namespace, and process all appear in Pushgateway JSON labels for each group.
- Ensure that all pods set POD_NAME and POD_NAMESPACE (kubectl -n eveai-staging exec <pod> -- env | egrep "POD_NAME|POD_NAMESPACE").
- Verify that your app processes run push_to_gateway through the shared business_event wrapper.
Change log reference
- Implemented on 2025-09-26 by adding grouping_key in business_event push and env vars in Deployments.

View File

@@ -0,0 +1,935 @@
# EveAI Cluster Installation Guide (Updated for Modular Kustomize Setup)
## Prerequisites
### Required Tools
```bash
# Verify required tools are installed
kubectl version --client
kustomize version
helm version
# Configure kubectl for Scaleway cluster
scw k8s kubeconfig install <cluster-id>
kubectl cluster-info
```
### Scaleway Prerequisites
- Kubernetes cluster running
- Managed services configured (PostgreSQL, Redis, MinIO)
- Secrets stored in Scaleway Secret Manager:
- `eveai-app-keys`, `eveai-mistral`, `eveai-object-storage`, `eveai-tem`
- `eveai-openai`, `eveai-postgresql`, `eveai-redis`, `eveai-redis-certificate`
- Flexible IP address (LoadBalancer)
- First create a load balancer with a public IP
- Then delete the load balancer again, but keep the flexible IPs
- This external IP is the address that must be entered in ingress-values.yaml!
## CDN Setup (Bunny.net - Optional)
### Configure Pull Zone
- Create Pull zone: evie-staging
- Origin: https://[LoadBalancer-IP] (note HTTPS!) -> only known later in the process
- Host header: evie-staging.askeveai.com
- Force SSL: Enabled
- In the pull zone's Caching - General settings, make sure to disable 'Strip Response Cookies'
- Define edge rules for
- Redirecting the root
- Redirecting security URLs
### Update DNS (eurodns) for CDN
- Change A-record to CNAME pointing to CDN endpoint
- Or update A-record to CDN IP
## New Modular Deployment Process
### Phase 1: Infrastructure Foundation
Deploy core infrastructure components in the correct order:
```bash
# 1. Deploy namespaces
kubectl apply -f scaleway/manifests/base/infrastructure/00-namespaces.yaml
# 2. Add NGINX Ingress Helm repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
# 3. Deploy NGINX ingress controller via Helm
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace \
--values scaleway/manifests/base/infrastructure/ingress-values.yaml
# 4. Wait for ingress controller to be ready
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=300s
# 5. Add cert-manager Helm repository
helm repo add jetstack https://charts.jetstack.io
helm repo update
# 6. Install cert-manager CRDs
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.3/cert-manager.crds.yaml
# 7. Deploy cert-manager via Helm
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--values scaleway/manifests/base/infrastructure/cert-manager-values.yaml
# 8. Wait for cert-manager to be ready
kubectl wait --namespace cert-manager \
--for=condition=ready pod \
--selector=app.kubernetes.io/name=cert-manager \
--timeout=300s
# 9. Deploy cluster issuers
kubectl apply -f scaleway/manifests/base/infrastructure/03-cluster-issuers.yaml
```
### Phase 2: Verify Infrastructure Components
```bash
# Verify ingress controller
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx
# Verify cert-manager
kubectl get pods -n cert-manager
kubectl get clusterissuers
# Check LoadBalancer external IP
kubectl get svc -n ingress-nginx ingress-nginx-controller
```
### Phase 3: Monitoring Stack (Optional but Recommended)
#### Add Prometheus Community Helm Repository
```bash
# Add Prometheus community Helm repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# Verify chart availability
helm search repo prometheus-community/kube-prometheus-stack
```
#### Create Monitoring Values File
Create `scaleway/manifests/base/monitoring/prometheus-values.yaml` (the file referenced by the install command below):
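Its full contents are deployment-specific; a minimal sketch (only the `admin123` Grafana password is taken from this guide, every other value is an assumption):

```yaml
# Sketch of kube-prometheus-stack values; all sizes/retention are assumptions
grafana:
  adminPassword: admin123
  persistence:
    enabled: true
    size: 10Gi
prometheus:
  prometheusSpec:
    retention: 15d
    storageSpec:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 20Gi
```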
#### Deploy Monitoring Stack
```bash
# Install complete monitoring stack via Helm
helm install monitoring prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--create-namespace \
--values scaleway/manifests/base/monitoring/prometheus-values.yaml
# Install pushgateway
helm install monitoring-pushgateway prometheus-community/prometheus-pushgateway \
-n monitoring --create-namespace \
--set serviceMonitor.enabled=true \
--set serviceMonitor.additionalLabels.release=monitoring
# Monitor deployment progress
kubectl get pods -n monitoring -w
# Wait until all pods show STATUS: Running
```
#### Verify Monitoring Deployment
```bash
# Check Helm release
helm list -n monitoring
# Verify all components are running
kubectl get all -n monitoring
# Check persistent volumes are created
kubectl get pvc -n monitoring
# Check ServiceMonitor CRDs are available (for application monitoring)
kubectl get crd | grep monitoring.coreos.com
```
#### Enable cert-manager Monitoring Integration
```bash
# Enable Prometheus monitoring in cert-manager now that ServiceMonitor CRDs exist
helm upgrade cert-manager jetstack/cert-manager \
--namespace cert-manager \
--set prometheus.enabled=true \
--set prometheus.servicemonitor.enabled=true \
--reuse-values
```
#### Access Monitoring Services
##### Grafana Dashboard
```bash
# Port forward to access Grafana
kubectl port-forward -n monitoring svc/monitoring-grafana 3000:80
# Access via browser: http://localhost:3000
# Username: admin
# Password: admin123 (from values file)
```
##### Prometheus UI
```bash
# Port forward to access Prometheus
kubectl port-forward -n monitoring svc/monitoring-prometheus 9090:9090 &
# Access via browser: http://localhost:9090
# Check targets: http://localhost:9090/targets
```
#### Cleanup Commands (if needed)
If you need to completely remove monitoring for a fresh start:
```bash
# Uninstall Helm release
helm uninstall monitoring -n monitoring
# Remove namespace
kubectl delete namespace monitoring
# Remove any remaining cluster-wide resources
kubectl get clusterroles | grep monitoring | awk '{print $1}' | xargs -r kubectl delete clusterrole
kubectl get clusterrolebindings | grep monitoring | awk '{print $1}' | xargs -r kubectl delete clusterrolebinding
```
#### What we installed
With monitoring successfully deployed:
- Grafana provides pre-configured Kubernetes dashboards
- Prometheus collects metrics from all cluster components
- ServiceMonitor CRDs are available for application-specific metrics
- AlertManager handles alert routing and notifications
### Phase 4: Secrets
#### Step 1: Install the External Secrets Operator
```bash
# Add Helm repository
helm repo add external-secrets https://charts.external-secrets.io
helm repo update
# Install External Secrets Operator
helm install external-secrets external-secrets/external-secrets \
--namespace external-secrets-system \
--create-namespace
# Verify installation
kubectl get pods -n external-secrets-system
# Check that the CRDs are installed
kubectl get crd | grep external-secrets
```
#### Step 2: Create Scaleway API credentials
You need Scaleway API credentials for the operator:
```bash
# Create secret with Scaleway API credentials
kubectl create secret generic scaleway-credentials \
--namespace eveai-staging \
--from-literal=access-key="JOUW_SCALEWAY_ACCESS_KEY" \
--from-literal=secret-key="JOUW_SCALEWAY_SECRET_KEY"
```
**Note:** You obtain these credentials via:
- Scaleway Console → Project settings → API Keys
- Or via `scw iam api-key list` if you use the CLI
#### Step 3: Verify the SecretStore configuration
Verify the file `scaleway/manifests/base/secrets/clustersecretstore-scaleway.yaml`; the correct project ID must be entered there.
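For orientation, such a store roughly takes this shape (region, project ID, and secret names are placeholders, not the verified file):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: scaleway-secret-store
spec:
  provider:
    scaleway:
      region: fr-par                # placeholder
      projectId: "YOUR_PROJECT_ID"  # must match your Scaleway project
      accessKey:
        secretRef:
          name: scaleway-credentials
          namespace: eveai-staging
          key: access-key
      secretKey:
        secretRef:
          name: scaleway-credentials
          namespace: eveai-staging
          key: secret-key
```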
#### Step 4: Verify the ExternalSecret resource
Verify the file `scaleway/manifests/base/secrets/eveai-external-secrets.yaml`.
**Important:**
- The Scaleway provider requires the `key: name:secret-name` syntax
- SSL/TLS certificates cannot be fetched via `dataFrom/extract`
- Certificates must be added via the `data` section (see the sketch below)
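A sketch of the overall shape, combining both routes (names are illustrative; check the actual file):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: eveai-external-secrets
  namespace: eveai-staging
spec:
  refreshInterval: 5m
  secretStoreRef:
    name: scaleway-secret-store
    kind: ClusterSecretStore
  target:
    name: eveai-secrets
  dataFrom:
    - extract:
        key: name:eveai-postgresql       # JSON secret -> one env var per key
  data:
    - secretKey: REDIS_CERT              # certificates go via data, not extract
      remoteRef:
        key: name:eveai-redis-certificate
```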
#### Step 5: Deploy secrets
```bash
# Deploy SecretStore
kubectl apply -f scaleway/manifests/base/secrets/clustersecretstore-scaleway.yaml
# Deploy ExternalSecret
kubectl apply -f scaleway/manifests/base/secrets/eveai-external-secrets.yaml
```
#### Step 6: Verification
```bash
# Check ExternalSecret status
kubectl get externalsecrets -n eveai-staging
# Check that the Kubernetes secret was created
kubectl get secret eveai-secrets -n eveai-staging
# Check all keys in the secret
kubectl get secret eveai-secrets -n eveai-staging -o jsonpath='{.data}' | jq 'keys'
# Check a specific value (base64 decoded)
kubectl get secret eveai-secrets -n eveai-staging -o jsonpath='{.data.DB_HOST}' | base64 -d
# Check ExternalSecret events for troubleshooting
kubectl describe externalsecret eveai-external-secrets -n eveai-staging
```
#### Step 7: Use in a deployment
You can now use these secrets in the deployments of the application services that need them (TODO):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eveai-app
  namespace: eveai-staging
spec:
  selector:
    matchLabels:
      app: eveai-app
  template:
    metadata:
      labels:
        app: eveai-app
    spec:
      containers:
        - name: eveai-app
          envFrom:
            - secretRef:
                name: eveai-secrets  # all environment variables from a single secret
          # Your Python code simply keeps using environ.get('DB_HOST') etc.
```
#### Step 8: Using the Redis certificate in Python
For SSL Redis connections with the certificate:
```python
# Example in your config.py
import tempfile
import ssl
import redis
from os import environ


class StagingConfig:
    def __init__(self):
        self.REDIS_CERT_DATA = environ.get('REDIS_CERT')
        self.REDIS_BASE_URI = environ.get('REDIS_BASE_URI', 'redis://localhost:6379/0')

    def create_redis_connection(self):
        if self.REDIS_CERT_DATA:
            # Write the certificate to a temporary file
            with tempfile.NamedTemporaryFile(mode='w', delete=False, suffix='.pem') as f:
                f.write(self.REDIS_CERT_DATA)
                cert_path = f.name
            # Redis connection with SSL certificate
            return redis.from_url(
                self.REDIS_BASE_URI,
                ssl_cert_reqs=ssl.CERT_REQUIRED,
                ssl_ca_certs=cert_path
            )
        else:
            return redis.from_url(self.REDIS_BASE_URI)

    # Used for session Redis
    @property
    def SESSION_REDIS(self):
        return self.create_redis_connection()
```
#### Scaleway Secret Manager Requirements
For this setup, your secrets in Scaleway Secret Manager must be structured correctly:
**JSON secrets (eveai-postgresql, eveai-redis, etc.):**
```json
{
"DB_HOST": "your-postgres-host.rdb.fr-par.scw.cloud",
"DB_USER": "eveai_user",
"DB_PASS": "your-password",
"DB_NAME": "eveai_staging",
"DB_PORT": "5432"
}
```
**SSL/TLS certificate (eveai-redis-certificate):**
```
-----BEGIN CERTIFICATE-----
MIIDGTCCAgGg...z69LXyY=
-----END CERTIFICATE-----
```
#### Advantages of this setup
- **Automatic sync**: Secrets are refreshed every 5 minutes
- **No code changes**: Your `environ.get()` calls keep working
- **Secure**: Credentials are not in the manifests, only in the cluster
- **Centralized**: All secrets live in Scaleway Secret Manager
- **Auditable**: The External Secrets Operator logs all actions
- **SSL support**: TLS certificates are handled correctly
#### File structure
```
scaleway/manifests/base/secrets/
├── clustersecretstore-scaleway.yaml
└── eveai-external-secrets.yaml
```
### Phase 5: TLS and Network Setup
#### Deploy HTTP ACME ingress
To issue the certificate, an A record must be created in the DNS zone that points directly to the load balancer IP.
Do not create the CNAME to Bunny.net yet; otherwise bunny.net may interrupt the ACME process.
To issue the certificate we must use an HTTP ACME ingress; otherwise the certificate cannot be created.
```bash
kubectl apply -f scaleway/manifests/base/networking/ingress-http-acme.yaml
```
Check whether the certificate has been issued (READY must be true):
```bash
kubectl get certificate evie-staging-tls -n eveai-staging
# or with more detail
kubectl -n eveai-staging describe certificate evie-staging-tls
```
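If READY stays false for a long time, cert-manager's intermediate resources usually show why (DNS, firewall, Bunny already in the path, ...). A quick way to inspect them; `<challenge-name>` is a placeholder:
```bash
kubectl -n eveai-staging get certificaterequests,orders,challenges
kubectl -n eveai-staging describe challenge <challenge-name>
```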
This can take a while. Once the certificate has been issued, you can set up the HTTPS-only ingress:
#### Apply per-prefix headers (must exist before the Ingress that references them)
```bash
kubectl apply -f scaleway/manifests/base/networking/headers-configmaps.yaml
```
#### Apply ingresses
```bash
kubectl apply -f scaleway/manifests/base/networking/ingress-https.yaml # only /verify
kubectl apply -f scaleway/manifests/base/networking/ingress-admin.yaml # /admin → eveai-app-service
kubectl apply -f scaleway/manifests/base/networking/ingress-api.yaml # /api → eveai-api-service
kubectl apply -f scaleway/manifests/base/networking/ingress-chat-client.yaml # /chat-client → eveai-chat-client-service
# Alternative: via the overlay (provided kustomization.yaml has been updated)
kubectl apply -k scaleway/manifests/overlays/staging/
```
To use bunny.net:
- The CNAME record pointing to the Bunny.net pull zone can now be created.
- In bunny.net, the pull zone must point to the load balancer IP over the HTTPS protocol.
### Phase 6: Verification Service
This service can also be installed as early as Phase 5 to verify that the full network stack (via Bunny, certificate, ...) works.
```bash
# Deploy verification service
kubectl apply -k scaleway/manifests/base/applications/verification/
```
### Phase 7: Complete Staging Deployment
```bash
# Deploy everything using the staging overlay
kubectl apply -k scaleway/manifests/overlays/staging/
# Verify complete deployment
kubectl get all -n eveai-staging
kubectl get ingress -n eveai-staging
kubectl get certificates -n eveai-staging
```
### Verification commands
Check ingresses and headers:
```bash
kubectl -n eveai-staging get ing
kubectl -n eveai-staging describe ing eveai-admin-ingress
kubectl -n eveai-staging describe ing eveai-api-ingress
kubectl -n eveai-staging describe ing eveai-chat-client-ingress
kubectl -n eveai-staging describe ing eveai-staging-ingress # contains /verify
kubectl -n eveai-staging get cm eveai-admin-headers eveai-api-headers eveai-chat-headers -o yaml
```
- Each prefix Ingress must show the annotations: use-regex: true, rewrite-target: /$2, proxy-set-headers: eveai-staging/eveai-<prefix>-headers (admin/api/chat).
- In the ConfigMaps, the key X-Forwarded-Prefix must hold the correct value (/admin, /api, /chat-client); see the sketch below.
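For reference, these ConfigMaps are plain key/value maps that ingress-nginx injects via the proxy-set-headers annotation. The admin variant would look roughly like this (the actual file is headers-configmaps.yaml):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: eveai-admin-headers
  namespace: eveai-staging
data:
  X-Forwarded-Prefix: /admin
```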
End-to-end tests:
- https://evie-staging.askeveai.com/admin/login → login page. In the app logs you see the PATH without /admin (due to the rewrite) but the URL with /admin.
- After login: 302 Location: /admin/user/tenant_overview.
- API: https://evie-staging.askeveai.com/api/… → the backend receives the path without /api.
- Chat client: https://evie-staging.askeveai.com/chat-client/… → the correct service.
- Verify: https://evie-staging.askeveai.com/verify → unchanged via ingress-https.yaml.
- Root: as long as the Bunny rule is not active, there is no automatic redirect on / (expected behaviour).
### Phase 7: Install PgAdmin Tool
#### Create the secret eveai-pgadmin-admin in Scaleway Secret Manager (if it does not exist)
Two keys:
- `PGADMIN_DEFAULT_EMAIL`: email address for the admin
- `PGADMIN_DEFAULT_PASSWORD`: password for the admin
#### Deploy the secrets
```bash
kubectl apply -f scaleway/manifests/base/tools/pgadmin/externalsecrets.yaml
# Check
kubectl get externalsecret -n tools
kubectl get secret -n tools | grep pgadmin
```
#### Apply the Helm chart
```bash
helm repo add runix https://helm.runix.net
helm repo update
helm install pgadmin runix/pgadmin4 \
-n tools \
--create-namespace \
-f scaleway/manifests/base/tools/pgadmin/values.yaml
# Check status
kubectl get pods,svc -n tools
kubectl logs -n tools deploy/pgadmin-pgadmin4 || true
```
#### Port Forward, Local Access
```bash
# Find the service name (often "pgadmin")
kubectl -n tools get svc
# Forward local port 8080 to service port 80
kubectl -n tools port-forward svc/pgadmin-pgadmin4 8080:80
# Browser: http://localhost:8080
# Login with PGADMIN_DEFAULT_EMAIL / PGADMIN_DEFAULT_PASSWORD (from eveai-pgadmin-admin)
```
### Phase 8: RedisInsight Tool Deployment
#### Installation via kubectl (without Helm)
Use a simple manifest with Deployment + Service + PVC in the `tools` namespace. This avoids external chart repositories and extra authentication.
```bash
# Apply the manifest (creates the tools namespace if needed)
kubectl apply -f scaleway/manifests/base/tools/redisinsight/redisinsight.yaml
# Check resources
kubectl -n tools get pods,svc,pvc
```
#### (Optional) ExternalSecrets for convenience (not strictly needed)
If you want to mirror the Redis credentials and CA cert into the `tools` namespace (handy for exporting the CA file easily and/or for later provisioning):
```bash
kubectl apply -f scaleway/manifests/base/tools/redisinsight/externalsecrets.yaml
kubectl -n tools get externalsecret
kubectl -n tools get secret | grep redisinsight
```
Save the CA file locally for UI upload (only needed if you used the ExternalSecrets):
```bash
kubectl -n tools get secret redisinsight-ca -o jsonpath='{.data.REDIS_CERT}' | base64 -d > /tmp/redis-ca.pem
```
#### Port Forward, Local Access
```bash
# RedisInsight v2 listens on port 5540
kubectl -n tools port-forward svc/redisinsight 5540:5540
# Browser: http://localhost:5540
```
#### UI: Connect to Redis
- Host: `172.16.16.2`
- Port: `6379`
- Auth: username `luke`, password from the secret (eveai-redis or redisinsight-redis)
- TLS: enable TLS and upload the CA certificate (PEM)
- Certificate verification: because you connect via IP and there is no hostname in the certificate, strict verification can fail. In that case, disable "Verify server certificate"/"Check server identity" in the UI. This is normal for private networking via IP.
#### Troubleshooting
- Check pods, service and PVC in `tools`:
```bash
kubectl -n tools get pods,svc,pvc
```
- NetworkPolicies: if active, allow egress from `tools` to `172.16.16.2:6379` (see the sketch below).
- TLS issues via IP: disable verification or use a DNS hostname matching the cert (if available).
- PVC not bound: specify a valid `storageClassName` in the manifest.
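A minimal egress NetworkPolicy sketch for that case; the policy name and the `app: redisinsight` pod label are assumptions and must match the labels in your manifest:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-redisinsight-egress   # hypothetical name
  namespace: tools
spec:
  podSelector:
    matchLabels:
      app: redisinsight             # assumption: pod label from redisinsight.yaml
  policyTypes: [Egress]
  egress:
    - to:
        - ipBlock:
            cidr: 172.16.16.2/32    # the managed Redis IP
      ports:
        - protocol: TCP
          port: 6379
```
DNS egress (port 53 to CoreDNS) may also be needed if the pod resolves hostnames.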
### Phase 9: Application Services Deployment
#### Create Scaleway Registry Secret
Create docker pull secret via External Secrets (once):
```bash
kubectl apply -f scaleway/manifests/base/secrets/scaleway-registry-secret.yaml
kubectl -n eveai-staging get secret scaleway-registry-cred -o yaml | grep "type: kubernetes.io/dockerconfigjson"
```
#### Ops Jobs Invocation (if required)
Run the DB ops scripts manually, in order. Each manifest uses generateName, so use kubectl create (the metadata header is sketched after the commands).
```bash
kubectl create -f scaleway/manifests/base/applications/ops/jobs/00-env-check-job.yaml
kubectl wait --for=condition=complete job -n eveai-staging -l job-type=env-check --timeout=600s
kubectl create -f scaleway/manifests/base/applications/ops/jobs/02-db-bootstrap-ext-job.yaml
kubectl wait --for=condition=complete job -n eveai-staging -l job-type=db-bootstrap-ext --timeout=1800s
kubectl create -f scaleway/manifests/base/applications/ops/jobs/03-db-migrate-public-job.yaml
kubectl wait --for=condition=complete job -n eveai-staging -l job-type=db-migrate-public --timeout=1800s
kubectl create -f scaleway/manifests/base/applications/ops/jobs/04-db-migrate-tenant-job.yaml
kubectl wait --for=condition=complete job -n eveai-staging -l job-type=db-migrate-tenant --timeout=3600s
kubectl create -f scaleway/manifests/base/applications/ops/jobs/05-seed-or-init-data-job.yaml
kubectl wait --for=condition=complete job -n eveai-staging -l job-type=db-seed-or-init --timeout=1800s
kubectl create -f scaleway/manifests/base/applications/ops/jobs/06-verify-minimal-job.yaml
kubectl wait --for=condition=complete job -n eveai-staging -l job-type=db-verify-minimal --timeout=900s
```
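For reference, the relevant part of such a Job manifest is the metadata header; a hypothetical excerpt (the actual manifests may differ):
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: db-migrate-public-   # kubectl create appends a random suffix
  namespace: eveai-staging
  labels:
    job-type: db-migrate-public      # the label the kubectl wait commands select on
```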
View logs (the created job name is printed by the create command):
```bash
kubectl -n eveai-staging get jobs
kubectl -n eveai-staging logs job/<created-job-name>
```
#### Creating the volume for eveai_chat_worker's CrewAI storage
```bash
kubectl apply -n eveai-staging -f scaleway/manifests/base/applications/backend/eveai-chat-workers/pvc.yaml
```
#### Application Services Deployment
Use the staging overlay to deploy apps with registry rewrite and imagePullSecrets:
```bash
kubectl apply -k scaleway/manifests/overlays/staging/
```
##### Deploy backend workers
```bash
kubectl apply -k scaleway/manifests/base/applications/backend/
kubectl -n eveai-staging get deploy | egrep 'eveai-(workers|chat-workers|entitlements)'
# Optional: quick logs
kubectl -n eveai-staging logs deploy/eveai-workers --tail=100 || true
kubectl -n eveai-staging logs deploy/eveai-chat-workers --tail=100 || true
kubectl -n eveai-staging logs deploy/eveai-entitlements --tail=100 || true
```
##### Deploy frontend services
```bash
kubectl apply -k scaleway/manifests/base/applications/frontend/
kubectl -n eveai-staging get deploy,svc | egrep 'eveai-(app|api|chat-client)'
```
##### Verify Ingress routes (Ingress managed separately)
Ingress is intentionally not managed by the staging Kustomize overlay. Apply or update it manually using your existing manifest and handle it per your cluster-install.md guide:
```bash
kubectl apply -f scaleway/manifests/base/networking/ingress-https.yaml
kubectl -n eveai-staging describe ingress eveai-staging-ingress
```
Then verify the routes:
```bash
curl -k https://evie-staging.askeveai.com/verify/health
curl -k https://evie-staging.askeveai.com/admin/healthz/ready
curl -k https://evie-staging.askeveai.com/api/healthz/ready
curl -k https://evie-staging.askeveai.com/client/healthz/ready
```
#### Updating the staging deployment
- If you pushed the images again with the same tag (e.g. :staging) and your staging pods use imagePullPolicy: Always (as in this guide), you only need to trigger a rollout so the pods restart and pull the newest image.
- Do this in the correct namespace (probably eveai-staging) with kubectl rollout restart.
##### Fastest way (all deployments at once)
```bash
# Staging namespace (adjust if you use a different one)
kubectl -n eveai-staging rollout restart deployment
# Optional: follow the status until everything is ready
kubectl -n eveai-staging rollout status deploy --all
# Check which image each pod is running
kubectl -n eveai-staging get pods -o=jsonpath='{range .items[*]}{@.metadata.name}{"\t"}{range .spec.containers[*]}{@.image}{" "}{end}{"\n"}{end}'
```
This restarts all Deployments in the namespace. Because imagePullPolicy: Always is set, Kubernetes will pull the newest image for the tag in use (e.g. :staging).
##### Restarting specific services
If you only want to restart certain services:
```bash
kubectl -n eveai-staging rollout restart deployment/eveai-app
kubectl -n eveai-staging rollout restart deployment/eveai-api
kubectl -n eveai-staging rollout restart deployment/eveai-chat-client
kubectl -n eveai-staging rollout restart deployment/eveai-workers
kubectl -n eveai-staging rollout restart deployment/eveai-chat-workers
kubectl -n eveai-staging rollout restart deployment/eveai-entitlements
kubectl -n eveai-staging rollout status deployment/eveai-app
```
##### Alternative: (re)apply the manifests
This guide keeps the manifests under scaleway/manifests and describes the use of Kustomize overlays. You can also simply apply again:
```bash
# Overlay that rewrites images to the Scaleway registry and adds imagePullSecrets
kubectl apply -k scaleway/manifests/overlays/staging/
# Backend and frontend (if you use the base separately)
kubectl apply -k scaleway/manifests/base/applications/backend/
kubectl apply -k scaleway/manifests/base/applications/frontend/
```
Note: an apply on its own does not always trigger a rollout when there is no actual spec change. Combine it with a rollout restart as above if needed.
##### If you work with version tags (production-like)
- If you use a fixed, version-bound tag (e.g. :v1.2.3) with imagePullPolicy: IfNotPresent instead of a channel tag (:staging/:production), you must either:
  - change the tag in your manifest/overlay to the new version and apply again, or
  - force a new ReplicaSet with a one-off set-image:
```bash
kubectl -n eveai-staging set image deploy/eveai-api eveai-api=rg.fr-par.scw.cloud/<namespace>/josakola/eveai-api:v1.2.4
kubectl -n eveai-staging rollout status deploy/eveai-api
```
##### Troubleshooting
- Check whether the registry pull secret is present (per this guide):
```bash
kubectl apply -f scaleway/manifests/base/secrets/scaleway-registry-secret.yaml
kubectl -n eveai-staging get secret scaleway-registry-cred
```
- Inspect events/logs if pods do not come up:
```bash
kubectl get events -n eveai-staging --sort-by=.lastTimestamp
kubectl -n eveai-staging describe pod <pod-name>
kubectl -n eveai-staging logs deploy/eveai-api --tail=200
```
### Phase 10: Cockpit Setup
#### Standard Cockpit Setup
- Create a Grafana user (Cockpit > Grafana Users > Add user)
- Open the Grafana dashboard (Cockpit > Open Dashboards)
- Quite a few dashboards are available:
  - Kubernetes cluster overview (metrics)
  - Kubernetes cluster logs (control plane logs)
### Phase 11: Flower Setup
#### Overview
Flower is the Celery monitoring UI. We deploy Flower in the `monitoring` namespace via the bjw-s/app-template Helm chart. There is no Ingress; access is local only via `kubectl port-forward`. The connection to Redis uses TLS with your private CA; hostname verification is disabled because you connect via IP.
#### Add the Helm repository
```bash
helm repo add bjw-s https://bjw-s-labs.github.io/helm-charts
helm repo update
helm search repo bjw-s/app-template
```
#### Deploy (recommended: only Flower via the Helm CLI)
Use targeted commands so that only Flower is managed by Helm and the rest of the monitoring stack is left untouched.
```bash
# 1) Create the ExternalSecrets and NetworkPolicy
kubectl apply -f scaleway/manifests/base/monitoring/flower/externalsecrets.yaml
kubectl apply -f scaleway/manifests/base/monitoring/flower/networkpolicy.yaml
# 2) Install Flower via Helm (this release only)
helm upgrade --install flower bjw-s/app-template \
-n monitoring --create-namespace \
-f scaleway/manifests/base/monitoring/flower/values.yaml
```
What this deploys:
- ExternalSecrets: `flower-redis` (REDIS_USER/PASS/URL/PORT) and `flower-ca` (REDIS_CERT) from `scaleway-cluster-secret-store`
- Flower via Helm (bjw-s/app-template):
  - Image: `mher/flower:2.0.1` (pinned)
  - Start: `/usr/local/bin/celery --broker=$(BROKER) flower --address=0.0.0.0 --port=5555`
  - TLS to Redis with the CA mounted at `/etc/ssl/redis/ca.pem` and `ssl_check_hostname=false`
  - Hardened securityContext (non-root, read-only rootfs, capabilities dropped)
  - Probes and resource requests/limits
  - Service: ClusterIP `flower` on port 5555
- NetworkPolicy: ingress default-deny; egress only to Redis (172.16.16.2:6379/TCP) and CoreDNS (53 TCP/UDP)
#### Verify
```bash
# Helm release and resources
helm list -n monitoring
kubectl -n monitoring get externalsecret
kubectl -n monitoring get secret | grep flower
kubectl -n monitoring get deploy,po,svc | grep flower
kubectl -n monitoring logs deploy/flower --tail=200 || true
```
#### Access (port-forward)
```bash
kubectl -n monitoring port-forward svc/flower 5555:5555
# Browser: http://localhost:5555
```
#### Security & TLS
- No Ingress/external traffic; port-forward only.
- TLS to Redis with the CA mounted at `/etc/ssl/redis/ca.pem`.
- Because you reach Redis via IP, `ssl_check_hostname=false` is set (see the illustrative broker URL below).
- Strict egress NetworkPolicy: update the IP if your Redis IP ever changes.
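As an illustration (not the actual values file), a broker URL for this setup could carry the TLS options as query parameters, which Kombu/redis-py understand; the password placeholder and the exact parameter set are assumptions:
```bash
# Hypothetical example; the real URL is assembled from the flower-redis secret
export BROKER="rediss://luke:<password>@172.16.16.2:6379/0?ssl_cert_reqs=required&ssl_ca_certs=/etc/ssl/redis/ca.pem&ssl_check_hostname=false"
```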
#### Troubleshooting
```bash
# Secrets and ExternalSecrets
kubectl -n monitoring describe externalsecret flower-redis
kubectl -n monitoring describe externalsecret flower-ca
# Pods & logs
kubectl -n monitoring get pods -l app=flower -w
kubectl -n monitoring logs deploy/flower --tail=200
# NetworkPolicy
kubectl -n monitoring describe networkpolicy flower-policy
```
#### Alternative: Kustomize rendering (careful!)
You can also render Flower via Kustomize together with the monitoring chart:
```bash
kubectl kustomize --enable-helm scaleway/manifests/base/monitoring | kubectl apply -f -
```
Beware: this renders and applies all resources in the monitoring Kustomization, including the kube-prometheus-stack chart. Use it only if you deliberately want to update the full monitoring stack declaratively.
#### Migration & Cleanup
If you previously used the standalone Deployment/Service:
```bash
kubectl -n monitoring delete deploy flower --ignore-not-found
kubectl -n monitoring delete svc flower --ignore-not-found
```
## Verification and Testing
### Check Infrastructure Status
```bash
# Verify ingress controller
kubectl get pods -n ingress-nginx
kubectl describe service ingress-nginx-controller -n ingress-nginx
# Verify cert-manager
kubectl get pods -n cert-manager
kubectl get clusterissuers
# Check certificate status (may take a few minutes to issue)
kubectl describe certificate evie-staging-tls -n eveai-staging
```
### Test Services
```bash
# Get external IP from LoadBalancer
kubectl get svc -n ingress-nginx ingress-nginx-controller
# Test HTTPS access (replace with your domain)
curl -k https://evie-staging.askeveai.com/verify/health
curl -k https://evie-staging.askeveai.com/verify/info
# Test monitoring (if deployed)
kubectl port-forward -n monitoring svc/monitoring-grafana 3000:80
# Access Grafana at http://localhost:3000 (admin/admin123)
```
## DNS Configuration
### Update DNS Records
- Create A-record pointing to LoadBalancer external IP
- Or set up CNAME if using CDN
### Test Domain Access
```bash
# Test domain resolution
nslookup evie-staging.askeveai.com
# Test HTTPS access via domain
curl https://evie-staging.askeveai.com/verify/
```
## EveAI Chat Workers: Persistent logs storage and Celery process behavior
This addendum describes how to enable persistent storage for CrewAI tuning runs under /app/logs for the eveai-chat-workers Deployment and clarifies Celery process behavior relevant to environment variables.
### Celery prefork behavior and env variables
- Pool: prefork (default). Each worker process (child) handles multiple tasks sequentially.
- Implication: any environment variable changed inside a child process persists for subsequent tasks handled by that same child, until it is changed again or the process is recycled.
- Our practice: set required env vars (e.g., CREWAI_STORAGE_DIR/CREWAI_STORAGE_PATH) immediately before initializing CrewAI and restore them immediately after (sketched below). This prevents leakage to the next task in the same process.
- CELERY_MAX_TASKS_PER_CHILD: the number of tasks a child will process before being recycled. Suggested starting range for heavy LLM/RAG workloads: 200-500; 1000 is acceptable if memory growth is stable. Monitor RSS and adjust.
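A minimal sketch of that set-and-restore pattern, assuming the variable names above; `run_tuning` is a hypothetical CrewAI entry point:
```python
import os
from contextlib import contextmanager


@contextmanager
def scoped_env(**overrides):
    """Temporarily set environment variables; restore the originals even on failure."""
    saved = {key: os.environ.get(key) for key in overrides}
    os.environ.update(overrides)
    try:
        yield
    finally:
        for key, old_value in saved.items():
            if old_value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = old_value


# Inside the Celery task, just before initializing CrewAI:
# with scoped_env(CREWAI_STORAGE_DIR="/app/logs/run-123"):
#     run_tuning()  # hypothetical entry point
```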
### Create and mount a PersistentVolumeClaim for /app/logs
We persist tuning outputs under /app/logs by mounting a PVC in the worker pod.
Manifests added/updated (namespace: eveai-staging):
- scaleway/manifests/base/applications/backend/eveai-chat-workers/pvc.yaml
- scaleway/manifests/base/applications/backend/eveai-chat-workers/deployment.yaml (volume mount added)
Apply with kubectl (no Kustomize required):
```bash
# Create or update the PVC for logs
kubectl apply -n eveai-staging -f scaleway/manifests/base/applications/backend/eveai-chat-workers/pvc.yaml
# Update the Deployment to mount the PVC at /app/logs
kubectl apply -n eveai-staging -f scaleway/manifests/base/applications/backend/eveai-chat-workers/deployment.yaml
```
Verify PVC is bound and the pod mounts the volume:
```bash
# Check PVC status
kubectl get pvc -n eveai-staging eveai-chat-workers-logs -o wide
# Inspect the pod to confirm the volume mount
kubectl get pods -n eveai-staging -l app=eveai-chat-workers -o name
kubectl describe pod -n eveai-staging <pod-name>
# (Optional) Exec into the pod to check permissions and path
kubectl exec -n eveai-staging -it <pod-name> -- sh -lc 'id; ls -ld /app/logs'
```
Permissions and securityContext notes:
- The container runs as a non-root user (appuser) per Dockerfile.base. Some storage classes mount volumes owned by root. If you encounter permission issues (EACCES) writing to /app/logs:
  - Option A: set a pod-level fsGroup so the mounted volume is group-writable by the container user (sketched below).
  - Option B: use an initContainer to chown/chmod /app/logs on the mounted volume.
- Keep monitoring PVC usage and set alerts to avoid running out of space.
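A sketch of Option A; the group ID is an assumption and must match the appuser GID from Dockerfile.base:
```yaml
# Fragment of the eveai-chat-workers Deployment pod spec
spec:
  template:
    spec:
      securityContext:
        fsGroup: 1000   # assumption: GID of appuser in the image
```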
Retention / cleanup recommendation:
- For a 14-day retention, create a CronJob that runs daily to remove files older than 14 days and then delete empty directories, mounting the same PVC at /app/logs (a sketch follows the command). Example command:
```bash
find /app/logs -type f -mtime +14 -print -delete; find /app/logs -type d -empty -mtime +14 -print -delete
```
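A CronJob sketch wrapping that command; the name, schedule and busybox image are assumptions, while the PVC name matches the one used above:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: chat-workers-logs-cleanup     # hypothetical name
  namespace: eveai-staging
spec:
  schedule: "0 3 * * *"               # daily at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: busybox:1.36
              command:
                - sh
                - -c
                - find /app/logs -type f -mtime +14 -print -delete; find /app/logs -type d -empty -mtime +14 -print -delete
              volumeMounts:
                - name: logs
                  mountPath: /app/logs
          volumes:
            - name: logs
              persistentVolumeClaim:
                claimName: eveai-chat-workers-logs
```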
Operational checks after deployment:
1) Trigger a CrewAI tuning run; verify files appear under /app/logs and remain after pod restarts.
2) Trigger a non-tuning run; verify temporary directories are created and cleaned up automatically.
3) Monitor memory while varying CELERY_CONCURRENCY and CELERY_MAX_TASKS_PER_CHILD.

# EveAI Cloud Architecture
## Overview
The EveAI application runs on a modern cloud-native architecture with Kubernetes on Scaleway, protected by the Bunny.net CDN and supported by several managed services.
## Architecture Diagram (Recommended Setup)
```
Internet
DNS (askeveai.com - all subdomains)
Bunny.net CDN (Multi-domain setup)
├─ askeveai.com → WordPress Hosting -> Scaleway hosting (for now only via plugin)
├─ evie-staging.askeveai.com → Scaleway LB → Staging Cluster
└─ evie.askeveai.com → Scaleway LB → Production Cluster
Scaleway Load Balancer (Static IP)
Kubernetes Cluster (Scaleway)
Ingress Controller
┌─────────────────────────────────────┐
│ Applications │
├─────────────────────────────────────┤
│ • eveai_app (staging/production) │
│ • eveai_api (staging/production) │
│ • eveai_workers (staging/production)│
│ • [other pods] │
└─────────────────────────────────────┘
┌─────────────────────────────────────┐
│ Managed Services │
├─────────────────────────────────────┤
│ • Redis (per environment) │
│ • PostgreSQL (per environment) │
│ • Object Storage (S3/Minio) │
└─────────────────────────────────────┘
```
## Components
### 1. CDN & Security Layer
**Bunny.net CDN**
- **Function**: Content delivery network and security gateway
- **Benefits**:
  - DDoS protection and attack mitigation
  - Caching of static files
  - Offloading of the backend cluster
  - Improved loading times for end users
  - Web Application Firewall functionality
### 2. DNS & Multi-Domain Routing
**DNS Provider: EuroDNS**
- **Hosting**: hosting.com (WordPress hosting only)
- **Email**: ProtonMail (via domain records)
- **Application**: Scaleway cluster
**Bunny.net Pull Zone Setup**
- **Zone 1**: `askeveai.com` → Origin: hosting.com WordPress
- **Zone 2**: `evie-staging.askeveai.com` → Origin: Scaleway LB IP
- **Zone 3**: `evie.askeveai.com` → Origin: Scaleway LB IP
**DNS Records (EuroDNS) - Extended**
```
; Web traffic via Bunny.net
A askeveai.com → Scaleway hosting IP
A evie-staging.askeveai.com → Bunny.net IP
A evie.askeveai.com → Bunny.net IP
A static.askeveai.com → Bunny.net IP (for static assets)
; Email records (ProtonMail) - stay direct
MX askeveai.com → mail.protonmail.ch (priority 10)
MX askeveai.com → mailsec.protonmail.ch (priority 20)
TXT askeveai.com → "v=spf1 include:_spf.protonmail.ch ~all"
TXT protonmail._domainkey.askeveai.com → [DKIM key from ProtonMail]
TXT _dmarc.askeveai.com → "v=DMARC1; p=quarantine; rua=..."
; Subdomains for email (if needed)
CNAME autodiscover.askeveai.com → autodiscover.protonmail.ch
CNAME autoconfig.askeveai.com → autoconfig.protonmail.ch
```
### 3. Infrastructure Layer
**Scaleway Load Balancer**
- **Type**: Static external IP address
- **Function**: Entry point to the Kubernetes cluster
- **Location**: In front of the cluster, distributes traffic to the Ingress
**Kubernetes Cluster (Scaleway)**
- **Ingress Controller**: Routes requests to the right services
- **Workloads**:
  - `eveai_app`: Frontend application
  - `eveai_api`: Backend API services
  - `eveai_workers`: Background processing
  - Additional application pods
### 4. Monitoring & Observability
**Prometheus Stack (In-cluster)**
- **Function**: Business events monitoring
- **Scope**: Application-specific metrics and events
**Scaleway Cockpit**
- **Function**: Infrastructure monitoring
- **Scope**: Performance and infrastructure components
### 5. Managed Services
**Redis (Scaleway Managed)**
- **Function**: Caching layer
- **Benefit**: Reduced latency, session storage
**PostgreSQL (Scaleway Managed)**
- **Function**: Primary database
- **Benefit**: Managed backups, high availability
**Object Storage (Scaleway)**
- **Interface**: S3-compatible via the Minio client
- **Function**: File storage, static assets, backups
## Architecture Considerations
### Current Setup Evaluation
**Strengths:**
- ✅ Good separation of concerns
- ✅ Use of managed services reduces operational overhead
- ✅ CDN for performance and security
- ✅ Container-native with Kubernetes
- ✅ Comprehensive monitoring setup
**Potential Improvements:**
- **Multi-domain setup via Bunny.net**: All traffic via the CDN
- **Environment isolation**: Separate origins for staging/production
- 🤔 **Origin Protection**: Firewall rules to prevent direct access
- 🤔 **Kubernetes Ingress**: Configure host-based routing for multi-environment
## Email & DNS Considerations
### Email via ProtonMail (Stays Direct)
**Important note**: Email records do **NOT** go via Bunny.net. CDNs are only for web traffic (HTTP/HTTPS). Email uses other protocols (SMTP, IMAP, POP3) that cannot pass through a CDN.
**What stays the same:**
- MX records keep pointing to the ProtonMail servers
- SPF, DKIM and DMARC records remain unchanged
- Email functionality is not affected by Bunny.net
**Advantage of your setup:**
- DNS at EuroDNS: flexible record management
- Hosting at hosting.com: easy to migrate later
- Email at ProtonMail: stays stable during migrations
### DNS Migration Strategy (Simplified)
**Current situation:**
```
EuroDNS → hosting.com (WordPress + email config via cPanel)
```
**New situation:**
```
EuroDNS → Bunny.net (web) + ProtonMail (email direct)
```
**Migration steps:**
1. **Preparation**: Transfer the email records from cPanel to EuroDNS
2. **Bunny.net setup**: Configure the pull zones
3. **DNS switch**: A records to Bunny.net, MX records directly to ProtonMail
4. **Later**: Cancel hosting.com
## Bunny.net Setup Guide
### Step 1: Create Pull Zones
**Pull Zone 1: WordPress Site**
```
Name: askeveai-wordpress
Hostname: askeveai.com
Origin URL: [hosting.com server IP/URL]
```
**Pull Zone 2: Staging Environment**
```
Name: evie-staging
Hostname: evie-staging.askeveai.com
Origin URL: http://[scaleway-lb-ip]
Host Header: evie-staging.askeveai.com
```
**Pull Zone 3: Production Environment**
```
Name: evie-production
Hostname: evie.askeveai.com
Origin URL: http://[scaleway-lb-ip]
Host Header: evie.askeveai.com
```
**Pull Zone 4: Static Assets - Bunny Storage (Recommended)**
```
Name: static-assets
Type: Push Zone (Bunny Storage)
Hostname: static.askeveai.com
Storage: Direct upload to Bunny Storage
API: FTP/SFTP/REST API upload
```
**Alternative: Pull Zone from Scaleway S3**
```
Name: static-assets-s3
Type: Pull Zone
Hostname: static.askeveai.com
Origin URL: https://[scaleway-s3-bucket].s3.fr-par.scw.cloud
```
### Step 2: SSL/TLS Configuration
- **Force SSL**: On for all zones
- **SSL Certificate**: Let's Encrypt (free) or Bunny.net certificates
- **Origin Shield**: Europe (for better performance towards Scaleway)
### Step 3: Security Settings
- **Origin Shield Protection**: Only Bunny.net IPs can reach the origin
- **WAF Rules**: Basic DDoS and attack protection
- **Rate Limiting**: Configure per domain/endpoint
## Static Assets Optimization
### Current Approach (Suboptimal)
```
Browser → Bunny.net → Scaleway LB → Ingress → App Pod → Static file
```
### Recommended Approach: Direct Static Delivery
```
Browser → Bunny.net Edge → Static file (cached at the edge)
```
### Implementation Strategies
**Option 1: Bunny Storage (Recommended)**
```
Build Process → Bunny Storage → Bunny CDN Edge → Browser
- Upload: directly to Bunny Storage via API/FTP
- Serve: native performance, no extra hops
- Cost: usually cheaper than S3 + CDN
- Speed: optimal, storage and CDN integrated
```
**Option 2: Scaleway Object Storage + Pull Zone**
```
Build Process → Scaleway S3 → Bunny Pull Zone → Browser
- Upload: app → Scaleway S3 bucket
- Serve: Bunny.net caches the S3 bucket
- Advantage: backup in your own cloud, data sovereignty
- Disadvantage: extra latency for the first request
```
**Option 3: Hybrid Approach**
```
- Critical assets: Bunny Storage (logo, CSS, JS)
- User uploads: Scaleway S3 → Bunny Pull Zone
- Development: Local static serving
```
### Bunny Storage vs Scaleway S3
| Aspect | Bunny Storage | Scaleway S3 + Pull Zone |
|--------|---------------|-------------------------|
| **Performance** | ⭐⭐⭐⭐⭐ Native CDN | ⭐⭐⭐⭐ Extra hop |
| **Cost** | ⭐⭐⭐⭐⭐ Integrated pricing | ⭐⭐⭐ S3 + CDN costs |
| **Simplicity** | ⭐⭐⭐⭐⭐ One provider | ⭐⭐⭐ Two systems |
| **Data Control** | ⭐⭐⭐ At Bunny | ⭐⭐⭐⭐⭐ In your cloud |
| **Backup/Sync** | ⭐⭐⭐ Bunny dependent | ⭐⭐⭐⭐⭐ Full control |
### File Types for Static Delivery
**Ideal for CDN:**
- ✅ Images (JPG, PNG, WebP, SVG)
- ✅ CSS files
- ✅ JavaScript bundles
- ✅ Fonts (WOFF2, etc.)
- ✅ Videos/audio files
- ✅ PDF documents
- ✅ Icons and favicons
**Stay via the app:**
- ❌ Dynamic API responses
- ❌ User-generated content (unless via the upload flow)
- ❌ Authentication-required files
## Kubernetes Ingress Configuration
With the multi-domain setup via Bunny.net, your Ingress must be adapted as well:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eveai-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"  # SSL handled by Bunny.net
spec:
  rules:
    - host: evie-staging.askeveai.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: eveai-staging-service
                port:
                  number: 80
    - host: evie.askeveai.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: eveai-production-service
                port:
                  number: 80
```
## Migration Strategy (Extended)
### Phase 1: Bunny.net Setup (No downtime)
1. Create the pull zones in Bunny.net
2. Test via the Bunny.net hostnames (without DNS changes)
3. Configure caching and security rules
### Phase 2: DNS Migration (Minimal downtime)
1. Copy the email records from cPanel to EuroDNS
2. Lower the TTL of the current DNS records (1 hour in advance)
3. Change the A records to Bunny.net (MX records stay with ProtonMail)
4. Monitor traffic and performance
### Phase 3: Origin Protection
1. Configure the Scaleway firewall to allow only Bunny.net IPs
2. Test all functionality
3. Monitor the security logs
### Phase 4: WordPress Migration to Scaleway (Optional)
**Planning considerations:**
- **Database**: WordPress DB to Scaleway PostgreSQL or a separate MySQL
- **Files**: wp-content to Scaleway Object Storage
- **SSL**: Stays via Bunny.net (no changes)
- **Performance**: Possibly faster due to proximity to EveAI
**Migration options:**
1. **Lift & Shift**: VM on Scaleway with a traditional LAMP stack
2. **Modernization**: WordPress in a Kubernetes container
3. **Hybrid**: Keep hosting.com until you are happy with the K8s setup
### Phase 5: Hosting.com Cancellation
1. Confirm WordPress runs 100% on Scaleway
2. Take a final backup from hosting.com
3. Cancel the hosting.com contract
4. Email and EveAI keep working undisturbed
## Future Evolution: WordPress on Scaleway
### Option 1: WordPress as a Managed Service
**Scaleway WordPress Hosting** (if available)
- Managed WordPress environment
- Automatic updates and backups
- Integrated with other Scaleway services
### Option 2: WordPress in the Kubernetes Cluster
**Advantages:**
- ✅ Everything on one platform (Scaleway)
- ✅ Shared resources and monitoring
- ✅ Consistent deployment pipeline
- ✅ Cost optimization
- ✅ Uniform backup/disaster recovery
**WordPress in K8s Setup:**
```yaml
# WordPress Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress  # required to match the selector
    spec:
      containers:
        - name: wordpress
          image: wordpress:6-apache
          env:
            - name: WORDPRESS_DB_HOST
              value: [scaleway-postgresql-endpoint]
            - name: WORDPRESS_DB_NAME
              value: wordpress_db
          volumeMounts:
            - name: wordpress-storage
              mountPath: /var/www/html/wp-content
      volumes:
        - name: wordpress-storage
          persistentVolumeClaim:
            claimName: wordpress-pvc
```
### Option 3: WordPress on Scaleway Instances
**Instance-based hosting:**
- Dedicated VM for WordPress
- More control over the environment
- Traditional hosting approach on a modern cloud
### Recommended Approach: Kubernetes
**The architecture would become:**
```
Bunny.net CDN
├─ askeveai.com → Scaleway LB → WordPress Pod
├─ evie-staging.askeveai.com → Scaleway LB → EveAI Staging
└─ evie.askeveai.com → Scaleway LB → EveAI Production
```
**Shared Resources:**
- **PostgreSQL**: Separate database for WordPress + EveAI
- **Object Storage**: WordPress media + EveAI assets
- **Redis**: WordPress caching + EveAI caching
- **Monitoring**: Unified observability for everything
## Disaster Recovery & Backup
- **Database**: Managed PostgreSQL automated backups
- **Object Storage**: Consider cross-region replication
- **Application State**: Stateless design where possible
- **Configuration**: GitOps approach for cluster configuration
## Conclusion
The proposed architecture offers an excellent balance between performance, security and operational simplicity. By routing everything via Bunny.net you get:
**Direct benefits:**
- Uniform security and performance for all domains
- Simple SSL management
- Cost-effective CDN for all content
- Flexibility for future migrations
**Strategic benefits:**
- **Scaleway consolidation**: Option to migrate WordPress to Scaleway as well
- **Operational simplicity**: One cloud provider for the application infrastructure
- **Cost optimization**: Shared resources and bundling benefits
- **Technical consistency**: Uniform tooling and monitoring
**Recommended roadmap:**
1. **Now**: Implement Bunny.net for all domains
2. **Q1 2026**: Evaluate WordPress for Scaleway migration
3. **Q2 2026**: Terminate the hosting.com contract
4. **Result**: Fully cloud-native platform on Scaleway + Bunny.net
This approach maximizes flexibility while minimizing risk through phased implementation.
---
*Architecture document generated: August 2025*

```mermaid
graph TB
%% External Users
Users[👥 Users] --> Internet[🌐 Internet]
%% DNS Layer
Internet --> EuroDNS[📡 EuroDNS<br/>askeveai.com]
%% Email Flow (Direct)
EuroDNS --> ProtonMail[📧 ProtonMail<br/>MX Records]
%% Web Traffic via Bunny.net
EuroDNS --> BunnyNet[🐰 Bunny.net CDN]
%% Bunny.net Pull Zones + Storage
BunnyNet --> WP_Zone[📝 WordPress Zone<br/>askeveai.com]
BunnyNet --> Staging_Zone[🧪 Staging Zone<br/>evie-staging.askeveai.com]
BunnyNet --> Prod_Zone[🚀 Production Zone<br/>evie.askeveai.com]
BunnyNet --> Static_Zone[📦 Static Assets Zone<br/>static.askeveai.com]
BunnyNet --> BunnyStorage[🗂️ Bunny Storage<br/>Static Files]
%% WordPress Origin
WP_Zone --> HostingCom[🏠 hosting.com<br/>WordPress Site]
%% Scaleway Infrastructure
subgraph Scaleway["☁️ Scaleway Cloud Platform"]
%% Load Balancer
ScalewayLB[⚖️ Load Balancer<br/>Static IP]
%% Kubernetes Cluster
subgraph K8sCluster["🐳 Kubernetes Cluster"]
Ingress[🚪 Ingress Controller<br/>Host-based Routing]
%% Application Pods
subgraph AppPods["📱 Application Pods"]
EveAI_App[evie_app<br/>Frontend]
EveAI_API[evie_api<br/>Backend API]
EveAI_Workers[evie_workers<br/>Background Jobs]
Other_Pods[... other pods]
end
%% Monitoring
subgraph Monitoring["📊 Monitoring"]
Prometheus[🔥 Prometheus<br/>Business Events]
Grafana[📈 Grafana<br/>Dashboards]
end
end
%% Managed Services
subgraph ManagedServices["🛠️ Managed Services"]
Redis[🔴 Redis<br/>Caching Layer]
PostgreSQL[🐘 PostgreSQL<br/>Database]
ObjectStorage[📂 Object Storage<br/>S3 Compatible]
end
%% Cockpit Monitoring
Cockpit[🚁 Scaleway Cockpit<br/>Infrastructure Monitoring]
end
%% Connections to Scaleway
Staging_Zone --> ScalewayLB
Prod_Zone --> ScalewayLB
Static_Zone --> BunnyStorage
%% Internal Scaleway Connections
ScalewayLB --> Ingress
Ingress --> EveAI_App
Ingress --> EveAI_API
Ingress --> EveAI_Workers
Ingress --> Other_Pods
EveAI_App --> Redis
EveAI_API --> PostgreSQL
EveAI_API --> Redis
EveAI_Workers --> PostgreSQL
EveAI_Workers --> Redis
EveAI_API --> ObjectStorage
%% Monitoring Connections
EveAI_App --> Prometheus
EveAI_API --> Prometheus
EveAI_Workers --> Prometheus
Prometheus --> Grafana
%% Cockpit monitors everything
ScalewayLB --> Cockpit
K8sCluster --> Cockpit
ManagedServices --> Cockpit
%% Styling
classDef bunnynet fill:#ff6b35,stroke:#333,stroke-width:2px,color:#fff
classDef scaleway fill:#4c1d95,stroke:#333,stroke-width:2px,color:#fff
classDef external fill:#10b981,stroke:#333,stroke-width:2px,color:#fff
classDef monitoring fill:#f59e0b,stroke:#333,stroke-width:2px,color:#fff
classDef managed fill:#8b5cf6,stroke:#333,stroke-width:2px,color:#fff
classDef apps fill:#06b6d4,stroke:#333,stroke-width:2px,color:#fff
class BunnyNet,WP_Zone,Staging_Zone,Prod_Zone,Static_Zone,BunnyStorage bunnynet
class EuroDNS,ProtonMail,HostingCom,Users,Internet external
class ScalewayLB,Ingress,Cockpit scaleway
class Prometheus,Grafana monitoring
class Redis,PostgreSQL,ObjectStorage managed
class EveAI_App,EveAI_API,EveAI_Workers,Other_Pods apps
```

# Phase 8: Application Services (Staging)
This guide describes how to deploy EveAI application services to the Scaleway Kubernetes cluster, building on Phases 1-7 in cluster-install.md.
## Prerequisites
- Ingress-NGINX running with external IP
- cert-manager installed and Certificate evie-staging-tls is READY (via HTTP ACME first, then HTTPS-only)
- External Secrets Operator installed; Kubernetes Secret eveai-secrets exists in namespace eveai-staging
- Verification service deployed and reachable via /verify
- Optional: Monitoring stack running, Pushgateway deployed or reachable; PUSH_GATEWAY_HOST/PORT available to apps (via eveai-secrets)
## What we deploy (structure)
- Frontend (web) services
- eveai-app → exposed at /admin
- eveai-api → exposed at /api
- eveai-chat-client → exposed at /client
- Backend worker services (internal)
- eveai-workers (queue: embeddings)
- eveai-chat-workers (queue: llm_interactions)
- eveai-entitlements (queue: entitlements)
- Ops Jobs (manual DB ops)
- 00-env-check
- 02-db-bootstrap-ext
- 03-db-migrate-public
- 04-db-migrate-tenant
- 05-seed-or-init-data
- 06-verify-minimal
Manifests are under:
- scaleway/manifests/base/applications/frontend/
- scaleway/manifests/base/applications/backend/
- scaleway/manifests/base/applications/ops/jobs/
- Aggregate kustomization (apps only): scaleway/manifests/base/applications/kustomization.yaml
Note:
- The staging Kustomize overlay deploys only frontend and backend apps.
- Ingress remains managed manually via scaleway/manifests/base/networking/ingress-https.yaml and your cluster-install.md guide.
- Ops Jobs are not part of the overlay and should be executed manually with kubectl create -f.
## Step 1: Validate secrets
```bash
kubectl get secret eveai-secrets -n eveai-staging
kubectl get secret eveai-secrets -n eveai-staging -o jsonpath='{.data}' | jq 'keys'
```
Confirm presence of DB_*, REDIS_*, OPENAI_API_KEY, MISTRAL_API_KEY, JWT_SECRET_KEY, API_ENCRYPTION_KEY, MINIO_*, PUSH_GATEWAY_HOST, PUSH_GATEWAY_PORT.
## Step 2: Deploy Ops Jobs (manual pre-deploy)
Run the DB ops scripts manually in order. Each manifest uses generateName; use kubectl create.
Notes for images:
- Ops Jobs now reference the private Scaleway registry directly and set imagePullPolicy: Always.
- Ensure the docker pull secret exists (scaleway-registry-cred) — see the Private registry section.
- After pushing a new :staging image, delete any previous Job (if present) and create a new one to force a fresh Pod pull.
```bash
kubectl create -f scaleway/manifests/base/applications/ops/jobs/00-env-check-job.yaml
kubectl wait --for=condition=complete job -n eveai-staging -l job-type=env-check --timeout=600s
kubectl create -f scaleway/manifests/base/applications/ops/jobs/02-db-bootstrap-ext-job.yaml
kubectl wait --for=condition=complete job -n eveai-staging -l job-type=db-bootstrap-ext --timeout=1800s
kubectl create -f scaleway/manifests/base/applications/ops/jobs/03-db-migrate-public-job.yaml
kubectl wait --for=condition=complete job -n eveai-staging -l job-type=db-migrate-public --timeout=1800s
kubectl create -f scaleway/manifests/base/applications/ops/jobs/04-db-migrate-tenant-job.yaml
kubectl wait --for=condition=complete job -n eveai-staging -l job-type=db-migrate-tenant --timeout=3600s
kubectl create -f scaleway/manifests/base/applications/ops/jobs/05-seed-or-init-data-job.yaml
kubectl wait --for=condition=complete job -n eveai-staging -l job-type=db-seed-or-init --timeout=1800s
kubectl create -f scaleway/manifests/base/applications/ops/jobs/06-verify-minimal-job.yaml
kubectl wait --for=condition=complete job -n eveai-staging -l job-type=db-verify-minimal --timeout=900s
```
View logs:
```bash
kubectl -n eveai-staging get jobs
kubectl -n eveai-staging logs job/<created-job-name>
```
### Runtime environment for Ops Jobs
Each Ops Job sets the same non-secret runtime variables required by the shared bootstrap (start.sh/run.py):
- FLASK_APP=/app/scripts/run.py
- COMPONENT_NAME=eveai_ops
- PYTHONUNBUFFERED=1
- LOGLEVEL=debug (for staging)
- ROLE=web
- PORT=8080
- WORKERS=1
- WORKER_CLASS=gevent
- WORKER_CONN=100
- MAX_REQUESTS=1000
- MAX_REQUESTS_JITTER=100
Secrets (DB_*, REDIS_*, etc.) still come from `envFrom: secretRef: eveai-secrets`.
Tip: After pushing a new :staging image, delete any previous Job with the same label to force a fresh Pod and pull:
```bash
kubectl -n eveai-staging delete job -l component=ops,job-type=db-migrate-public || true
kubectl create -f scaleway/manifests/base/applications/ops/jobs/03-db-migrate-public-job.yaml
```
## Step 3: Deploy backend workers
```bash
kubectl apply -k scaleway/manifests/base/applications/backend/
kubectl -n eveai-staging get deploy | egrep 'eveai-(workers|chat-workers|entitlements)'
# Optional: quick logs
kubectl -n eveai-staging logs deploy/eveai-workers --tail=100 || true
kubectl -n eveai-staging logs deploy/eveai-chat-workers --tail=100 || true
kubectl -n eveai-staging logs deploy/eveai-entitlements --tail=100 || true
```
## Step 4: Deploy frontend services
```bash
kubectl apply -k scaleway/manifests/base/applications/frontend/
kubectl -n eveai-staging get deploy,svc | egrep 'eveai-(app|api|chat-client)'
```
## Step 5: Verify Ingress routes (Ingress managed separately)
Ingress is intentionally not managed by the staging Kustomize overlay. Apply or update it manually using your existing manifest and handle it per your cluster-install.md guide:
```bash
kubectl apply -f scaleway/manifests/base/networking/ingress-https.yaml
kubectl -n eveai-staging describe ingress eveai-staging-ingress
```
Then verify the routes:
```bash
curl -k https://evie-staging.askeveai.com/verify/health
curl -k https://evie-staging.askeveai.com/admin/healthz/ready
curl -k https://evie-staging.askeveai.com/api/healthz/ready
curl -k https://evie-staging.askeveai.com/client/healthz/ready
```
## Resources and probes (staging defaults)
- Web (app, api, chat-client):
- requests: 150m CPU, 256Mi RAM; limits: 500m CPU, 512Mi RAM; replicas: 1
- readiness/liveness: GET /healthz/ready
- Workers:
- eveai-workers: req 200m/512Mi, lim 1CPU/1Gi
- eveai-chat-workers: req 500m/1Gi, lim 2CPU/3Gi
- eveai-entitlements: req 100m/256Mi, lim 500m/512Mi
## Pushgateway usage
- Ensure PUSH_GATEWAY_HOST and PUSH_GATEWAY_PORT are provided (e.g., pushgateway.monitoring.svc.cluster.local:9091), typically via eveai-secrets or a ConfigMap.
- Apps will continue to push business metrics; Prometheus scrapes the Pushgateway.
## Image tags strategy (staging/production channels)
- The push script now creates and pushes two tags per service:
  - A versioned tag: :vX.Y.Z (e.g., :v1.2.3)
  - An environment channel tag based on ENVIRONMENT: :staging or :production
- Recommendation for staging manifests:
  - Refer to the channel tag (e.g., rg.fr-par.scw.cloud/eveai-staging/...:staging) and set imagePullPolicy: Always so new pushes are picked up without manifest changes.
  - Production can later use immutable version tags or digests via a production overlay.
## Bunny.net WAF (TODO)
- Configure Pull Zone for evie-staging.askeveai.com
- Set Origin to the LoadBalancer IP with HTTPS and Host header evie-staging.askeveai.com
- Define rate limits primarily on /api, looser on /client; enable bot filtering
- Only switch DNS (CNAME) to Bunny after TLS issuance completed directly against LoadBalancer
## Troubleshooting
```bash
kubectl get all -n eveai-staging
kubectl get events -n eveai-staging --sort-by=.lastTimestamp
kubectl describe ingress eveai-staging-ingress -n eveai-staging
kubectl logs -n eveai-staging deploy/eveai-api --tail=200
```
## Rollback / Cleanup
```bash
# Remove frontend/backend (keeps verification and other base resources)
kubectl delete -k scaleway/manifests/base/applications/frontend/
kubectl delete -k scaleway/manifests/base/applications/backend/
# Jobs are kept for history due to ttlSecondsAfterFinished; to delete immediately:
kubectl -n eveai-staging delete jobs --all
```
## Private registry (Scaleway)
1) Create docker pull secret via External Secrets (once):
```bash
kubectl apply -f scaleway/manifests/base/secrets/scaleway-registry-secret.yaml
kubectl -n eveai-staging get secret scaleway-registry-cred -o yaml | grep "type: kubernetes.io/dockerconfigjson"
```
2) Use the staging overlay to deploy apps with registry rewrite and imagePullSecrets:
```bash
kubectl apply -k scaleway/manifests/overlays/staging/
```
Notes:
- Base manifests keep generic images (josakola/...). The overlay rewrites them to rg.fr-par.scw.cloud/eveai-staging/josakola/...:staging and adds imagePullSecrets to all Pods.
- Staging uses imagePullPolicy: Always, so new pushes to :staging are pulled automatically.

### Short answer
- Yes, you can run kubectl port-forward as a systemd service on your Linux machine.
- Yes, you can use any local port (not limited to 8080), via the LOCAL:REMOTE format (e.g. 18080:80).
- Yes, you can put the local forward behind Nginx Proxy Manager (NPM). Do add extra security if you make it public.
---
### 1) Port-forward as a Linux service (systemd)
This starts the forward automatically at boot and restarts it on failures.
#### Prerequisites
- A working kubeconfig for your user (so `kubectl` can reach your cluster).
- The correct service name (e.g. `pgadmin-pgadmin4`). Check with:
  - `kubectl -n tools get svc`
The examples below bind to localhost (127.0.0.1) for safety. Adjust the service name and ports to your situation.
#### Step 1: Wrapper script
Create `/usr/local/bin/pf-pgadmin.sh`:
```bash
#!/usr/bin/env bash
set -euo pipefail
# Optional, if your kubeconfig is not in ~/.kube/config:
# export KUBECONFIG=/home/<user>/.kube/config
NAMESPACE="tools"
SERVICE="pgadmin-pgadmin4" # adjust if different
LOCAL_PORT="18080" # choose your own local port
REMOTE_PORT="80" # service port in the cluster
# Bind to localhost only for safety
exec kubectl -n "${NAMESPACE}" port-forward --address=127.0.0.1 "svc/${SERVICE}" "${LOCAL_PORT}:${REMOTE_PORT}"
```
Make it executable:
```bash
sudo chmod +x /usr/local/bin/pf-pgadmin.sh
```
#### Step 2: systemd unit
Create `/etc/systemd/system/pgadmin-portforward.service`:
```ini
[Unit]
Description=Kubernetes port-forward for pgAdmin (tools namespace)
After=network-online.target
Wants=network-online.target
[Service]
User=<your-linux-user>
Group=<your-linux-user>
# Optionally: Environment=KUBECONFIG=/home/<your-linux-user>/.kube/config
ExecStart=/usr/local/bin/pf-pgadmin.sh
Restart=always
RestartSec=3
KillSignal=SIGINT
[Install]
WantedBy=multi-user.target
```
Replace `<your-linux-user>` with your username.
Enable and start:
```bash
sudo systemctl daemon-reload
sudo systemctl enable pgadmin-portforward
sudo systemctl start pgadmin-portforward
sudo systemctl status pgadmin-portforward
```
Follow the logs:
```bash
journalctl -u pgadmin-portforward -f
```
After this, pgAdmin is available locally at http://127.0.0.1:18080.
Security tip: keep `--address=127.0.0.1`. If you want LAN access, consider `--address=0.0.0.0` but protect it with a firewall/IP allowlist on the host (example below).
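If you do open it up, a host firewall rule can restrict who reaches the port; a ufw example with a hypothetical LAN address:
```bash
# Allow a single LAN host to reach the forwarded port
sudo ufw allow from 192.168.1.50 to any port 18080 proto tcp
```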
---
### 2) Using a different local port
You can pick the local port yourself with `LOCAL:REMOTE`:
- Example: 18080 locally to 80 in the cluster:
  - `kubectl -n tools port-forward svc/pgadmin-pgadmin4 18080:80`
- You can run multiple forwards at the same time as long as each local port is unique.
---
### 3) Exposing it via Nginx Proxy Manager (NPM)
I assume Nginx Proxy Manager (the reverse proxy UI). If you meant Node Package Manager, let me know.
Goal: NPM publishes a hostname and reverse-proxies to your local forward on 127.0.0.1:18080.
Prerequisites:
- The port-forward runs stably (e.g. via systemd on port 18080).
- NPM runs on the same host (recommended) or can reach the host where the forward runs.
Steps in NPM:
1) Log in to NPM.
2) Add Proxy Host:
   - Domain Names: e.g. `pgadmin.local.mijnlan` (LAN) or a public domain (only if strictly necessary).
   - Scheme: `http`
   - Forward Hostname/IP: `127.0.0.1` (if NPM runs on the same host) or the LAN IP of the host running the port-forward.
   - Forward Port: `18080`
3) SSL:
   - LAN: often no SSL needed.
   - Public: request Let's Encrypt in NPM, enable Force SSL/HTTP2/HSTS.
4) Access List / Security:
   - Strongly recommended: IP allowlist, Basic Auth or SSO for public access.
Important warning:
- A public NPM proxy to pgAdmin increases the attack surface. Use strong admin credentials (you have those), consider extra Basic Auth/SSO, and keep pgAdmin up to date.
Alternative for public access:
- A Cloudflared/Zero Trust tunnel or an inlets tunnel to a subdomain with policies.
- Or, later, a Kubernetes Ingress with a dedicated host + TLS, as discussed earlier.
---
### Alternatives and tips
- SSH tunnel: also possible, but kubectl port-forward is usually the simplest (it uses the K8s API).
- kubefwd: handy if you want to resolve multiple services locally at once.
- Team access: prefer a VPN (Tailscale/WireGuard) over a public NPM. Bind the forward to 127.0.0.1 and let team members browse via the VPN to your host.
---
### Troubleshooting
- Service not starting: check `journalctl -u pgadmin-portforward -f`. An unreachable kubeconfig is often the cause; set `Environment=KUBECONFIG=...` in the unit or export it in the script.
- Wrong service name: check with `kubectl -n tools get svc` and adjust the script.
- Port in use: pick a different local port (e.g. 18081).
- NPM not working: check whether NPM runs on the same host (then use 127.0.0.1), otherwise use the LAN IP; check the firewall.
---
### Summary
- Service: yes, via systemd; example script and unit above.
- Different port: yes, set the left-hand side of LOCAL:REMOTE (e.g. 18080:80).
- NPM: yes, reverse proxy to 127.0.0.1:18080; add extra security if you make it public.
If you want, I can fill in the script and unit exactly with your service name and desired port. Send me the output of `kubectl -n tools get svc` and your Linux username.
