69 Commits

Author SHA1 Message Date
Josako
866cc2a60d - Fixed bug where negative answers in KO Criteria resulted in a blank answer
- Fixed bug where removal of audio processor caused eveai_workers to not start up, resulting in documents not being processed.
2025-08-07 08:52:15 +02:00
Josako
ed87d73c5a - Bug fixes
- TRAICIE_KO_INTERVIEW_DEFINITION specialist updated to new version
- Edit Document Version now includes Catalog Tagging Fields
- eveai_ordered_list_editor no longer includes the Expand button, and Add Row no longer submits the form
- Active Period was not correctly returned in some cases by license_period_services.py
- Partner menu removed if not Super User
2025-08-05 18:48:12 +02:00
Josako
212ea28de8 - Specialist configuration information can now be passed as arguments to retrievers. 2025-08-03 18:31:42 +02:00
Josako
cea38e02d2 - Mobile client changes. 2025-08-03 17:56:52 +02:00
Josako
248fae500a - Fix for the ActiveChatInput container (it was displayed too large). 2025-08-02 18:09:16 +02:00
Josako
4d6466038f - Started on the mobile version of the chat client. 2025-08-02 17:27:20 +02:00
Josako
9a88582fff - Refinement of the chat client to give clearer visual cues distinguishing user and chatbot messages
- Introduction of interview_phase and a normal phase in TRAICIE_SELECTION_SPECIALIST to make interaction with the bot more human.
- More, randomised humanised messages in TRAICIE_SELECTION_SPECIALIST
2025-08-02 16:36:41 +02:00
Josako
998ddf4c03 Changelog for 2.3.12 2025-07-28 23:01:57 +02:00
Josako
dabf97c96e Changes for eveai_chat_client:
- Session Defaults Header clickable
- Document Processing View - show 'Finished Processing' instead of 'Processing' to have more logical visual indicators
- TRAICIE_SELECTION_SPECIALIST now no longer shows question to start selection procedure at initialisation.
- Error Messages for adding documents in 'alert'
- Correction of error in Template variable replacement, resulting in missing template variable value
2025-07-28 22:56:37 +02:00
Josako
5e81595622 Changes for eveai_chat_client:
- Modal display of privacy statement & Terms & Conditions
- Consent flag ==> check of acceptance of privacy statement and Terms & Conditions
- customisation option added to show or hide DynamicForm titles
2025-07-28 21:47:56 +02:00
Josako
ef138462d9 Changelog for 2.3.12 2025-07-25 22:42:00 +02:00
Josako
42ffe3795f - Fixed Error where Catalog Types other than default could not be added
- Fixed error in TRAICIE_KO_INTERVIEW_DEFINITION_SPECIALIST
- Minor improvements
2025-07-25 22:35:08 +02:00
Josako
ba523a95c5 - RQC output of TRAICIE_SELECTION_SPECIALIST to EveAIDataCapsule 2025-07-25 04:27:19 +02:00
Josako
8a85b4540f - Adapting TRAICIE_SELECTION_SPECIALIST to retrieve preferred contact times using a form instead of free text
- Improvement of DynamicForm and FormField to handle boolean values.
2025-07-24 14:43:08 +02:00
Josako
fc3cae1986 - Layout improvements for the Chat client - alignment of LanguageSelector 2025-07-23 22:23:04 +02:00
Josako
32df3d0589 - Layout improvements for the Chat client 2025-07-23 18:06:47 +02:00
Josako
ccc1a2afb8 - Layout improvements for the Chat client 2025-07-23 16:02:11 +02:00
Josako
f16ed85e82 - Latest interaction is now positioned right above the chat input / form
- It moves to the standard position in MessageHistory.vue
2025-07-23 09:43:33 +02:00
Josako
e990fe65d8 - eveai_chat_client update to have different ways of presenting ProgressTracker.vue. Based on progress_tracker_insights in Tenant Make Configuration. 2025-07-22 21:27:39 +02:00
Josako
32cf105d7b - Introduction of preferred contact time form
- Logging asset usage in TRAICIE_SELECTION_SPECIALIST
2025-07-22 18:20:01 +02:00
Josako
dc6cd9d940 - Correction in the tenant_list_view to only show 'partner tenants' in case the user is a partner admin.
- Edit Partner can only be executed by Super User
- Give a more precise error message when a 403 client error is returned trying to get a URL.
2025-07-22 15:44:39 +02:00
Josako
a0f806ba4e - Translation of ProgressTracker.vue constants OK 2025-07-22 12:27:04 +02:00
Josako
98db88b00b - Fixed bug that prevented Material Icons from showing up properly
- Changelog for 2.3.10
2025-07-22 04:24:56 +02:00
Josako
4ad621428e - Client improvements
- Only remaining issue is the translation of the ProgressTracker constants
2025-07-21 21:45:46 +02:00
Josako
0f33beddf4 - Client improvements
- Introduced a 'constants' cache at the level of useTranslation.js, to show the ProgressTracker messages in the correct language.
2025-07-21 17:39:52 +02:00
Josako
f8f941d1e1 - Client improvements
- Simplified ProgressTracker.vue by removing the single-line display
- Added a busy animation while reasoning
2025-07-21 16:01:26 +02:00
Josako
abc0a50dcc - Client improvements
- removal of eveai_chat
2025-07-20 21:19:22 +02:00
Josako
854d889413 - Client improvements 2025-07-20 19:31:55 +02:00
Josako
7bbc32e381 - Clean-up 2025-07-20 18:10:56 +02:00
Josako
e75c49d2fa - iconManager and MaterialIconManager.js are now unified into a single component and, together with the translation utilities, converted into a more modern Vue composable
- The sidebar has likewise been converted into a Vue component.
2025-07-20 18:07:17 +02:00
Josako
ccb844c15c - More or less working chat client, new style 2025-07-20 11:36:00 +02:00
Josako
b60600e9f6 - Introduction of Vue files - nearly working version of eveai_chat_client. 2025-07-18 20:32:55 +02:00
Josako
11b1d548bd - First step towards getting the chat client working again... 2025-07-18 16:07:13 +02:00
Josako
f3a243698c - Introduction of PARTNER_RAG retriever, PARTNER_RAG_SPECIALIST and linked Agent and Task, to support documentation inquiries in the management app (eveai_app)
- Addition of a tenant_partner_services view to show partner services from the viewpoint of a tenant
- Addition of domain model diagrams
- Addition of license_periods views and form
2025-07-16 21:24:08 +02:00
Josako
000636a229 - Changes to the list views - now using tabulator with filtering and sorting, client-side pagination, ...
- Adaptation of all list views in the app
2025-07-14 18:58:54 +02:00
Josako
acad28b623 - Introduction of eveai-listview (to select objects) that is sortable, filterable, ...
- npm build now also includes building css files.
- Source javascript and css are now defined in the source directories (eveai_app or eveai_chat_client), and automatically built for use with nginx
- eveai.css is now split into several more manageable files (per control type)
2025-07-11 15:25:28 +02:00
Josako
42635a583c Fix for the changed Tenant schema in the database initialisation code 2025-07-10 15:19:56 +02:00
Josako
7d7db296d3 Changelog adaptation for 2.3.9-alfa 2025-07-10 10:47:57 +02:00
Josako
51fd16bcc6 - RAG Specialist fully implemented new style
- Selection Specialist - VA version - fully implemented
- Correction of TRAICIE_ROLE_DEFINITION_SPECIALIST - adaptation to new style
- Removal of 'debug' statements
2025-07-10 10:39:42 +02:00
Josako
509ee95d81 - Revisiting RAG_SPECIALIST
- Adapt Catalogs & Retrievers to use specific types, removing tagging_fields
- Adding CrewAI Implementation Guide
2025-07-08 15:54:16 +02:00
Josako
33b5742d2f - Full implementation of Traicie Selection Specialist - VA version
- Improvements to CrewAI specialists and Specialists in general
- Addition of reusable components to check or get answers to questions from the full Human Message - HumanAnswerServices
2025-07-06 20:01:30 +02:00
Josako
50773fe602 - Adding functionality for listing and editing assets
- Started adding functionality for creating a 'full_documents' list view.
2025-07-03 11:14:10 +02:00
Josako
51d029d960 - Introduction of TRACIE_KO_INTERVIEW_DEFINITION_SPECIALIST
- Re-introduction of EveAIAsset
- Make translation services resilient to situations with and without current_event defined.
- Ensure first question is asked in eveai_chat_client
- Start of version 1.4.0 of TRAICIE_SELECTION_SPECIALIST
2025-07-02 16:58:43 +02:00
Josako
fbc9f44ac8 - Translations completed for Front-End, Configs (e.g. Forms) and free text.
- Allowed_languages and default_language now part of Tenant Make instead of Tenant
- Introduction of Translation into Traicie Selection Specialist
2025-06-30 14:20:17 +02:00
Josako
4338f09f5c Changelog update for 2.3.8-alfa 2025-06-26 16:00:51 +02:00
Josako
53e32a67bd - Remove welcome message from tenant make customisation
- Add possibility to add allowed_languages to tenant make
2025-06-26 15:52:10 +02:00
Josako
fda267b479 - Introduction of the Automatic HTML Processor
- Translation Service improvement
- Enable activation / deactivation of Processors
- Renew API-keys for Mistral (leading to workspaces)
- Align all Document views to use of a session catalog
- Allow for different processors for the same file type
2025-06-26 14:38:40 +02:00
Josako
f5c9542a49 - Introducing translation service prompts
- Ensure Traicie Role Definition Specialist complies with the latest technical requirements
- Ensure that empty historical messages do not cause a crash in eveai_client
- take into account empty customisation options
- make was not processed in the system dynamic attribute tenant_make
- ensure only relevant makes are shown when creating magic links
- refresh partner info when editing or adding Partner Services
2025-06-24 14:15:36 +02:00
Josako
043cea45f2 Changelog update for 2.3.7 2025-06-23 11:51:52 +02:00
Josako
7b87880045 - Full Traicie Selection Specialist Flow implemented
- Added Specialist basics for handling phases and automatically transferring data between state and output
- Added QR-code generation for Magic Links
2025-06-23 11:46:56 +02:00
Josako
5b2c04501c - logging improvement and simplification (no more graylog)
- Traicie Selection Specialist Round Trip
- Session improvements + debugging enabled
- Tone of Voice & Language Level definitions introduced
2025-06-20 07:58:06 +02:00
Josako
babcd6ec04 Changelog update for 2.3.6-alfa 2025-06-16 11:10:59 +02:00
Josako
71adf64668 - Improved version of the Selection Specialist - for demo (1.2) 2025-06-16 11:06:20 +02:00
Josako
dbea41451a - Changes to how the specialist history is built
- New version of the selection specialist: "Fake it till you Make it" ;-)
2025-06-15 18:31:13 +02:00
Josako
82e25b356c Chat client changes
- Form values shown correctly in MessageHistory of Chat client
- Improvements to CSS
- Move css and js to assets directory
- Introduce better Personal Contact Form & Professional Contact Form
- Start working on actual Selection Specialist
2025-06-15 05:25:00 +02:00
Josako
3c7460f741 Forms in ChatInput are displayed correctly! 2025-06-13 20:30:56 +02:00
Josako
2835486599 First properly working version of a form in the chat input. 2025-06-13 17:27:49 +02:00
Josako
f1c60f9574 Intermediate state before significant changes. Working on creating a Dynamic Form in the chat client. 2025-06-13 14:19:05 +02:00
Josako
b326c0c6f2 Chat Client changes:
- maximum width for input and message history
- ensure good display for sidebar explanation
2025-06-13 00:56:22 +02:00
Josako
5f1a5711f6 - Build of the Chat Client using Vue.js
- Accompanying css
- Views to serve the Chat Client
- first test version of the TRACIE_SELECTION_SPECIALIST
- ESS Implemented.
2025-06-12 18:21:51 +02:00
Josako
67ceb57b79 - Changelog to 2.3.5-alfa 2025-06-10 20:57:07 +02:00
Josako
23b49516cb - Create framework for chat-client, including logo, explanatory text, color settings, ...
- remove allowed_languages from tenant
- Correct bugs in Tenant, TenantMake, SpecialistMagicLink
- Change chat client customisation elements
2025-06-10 20:52:01 +02:00
Josako
9cc266b97f - Corrections to tenant, catalog, and tenant_make
- Clean-up of tenant elements
- ensure the chat_client gets its initial call right.
2025-06-10 16:10:08 +02:00
Josako
3f77871c4f - Add a default make to the tenant
- Add a make to the SpecialistMagicLink
2025-06-09 18:13:38 +02:00
Josako
199cf94cf2 - Changed label for specialist_name to chatbot name ==> more logical
- Bug in unique name for catalogs
2025-06-09 16:06:41 +02:00
Josako
c4dcd6a0d3 - Add a new 'system' type to dynamic forms, first one defined = 'tenant_make'
- Add active field to Specialist model
- Improve Specialists view
- Propagate make for Role Definition Specialist to Selection Specialist (make is defined at the role level)
- Ensure a make with a given name can only be defined once
2025-06-09 11:06:36 +02:00
Josako
43ee9139d6 Changelog for version 2.3.3-alfa 2025-06-07 11:18:05 +02:00
Josako
8f45005713 - Bug fixes:
  - Catalog Name Unique Constraint
  - Selection constraint to view processed document
  - remove tab from tenant overview
2025-06-07 11:14:23 +02:00
Josako
bc1626c4ff - Initialisation of the EveAI Chat Client.
- Introduction of Tenant Makes
2025-06-06 16:42:24 +02:00
611 changed files with 30561 additions and 58008 deletions

19
.aiignore Normal file
View File

@@ -0,0 +1,19 @@
# An .aiignore file follows the same syntax as a .gitignore file.
# .gitignore documentation: https://git-scm.com/docs/gitignore
# you can ignore files
.DS_Store
*.log
*.tmp
# or folders
dist/
build/
out/
nginx/node_modules/
nginx/static/
db_backups/
docker/eveai_logs/
docker/logs/
docker/minio/

2
.gitignore vendored
View File

@@ -53,3 +53,5 @@ scripts/__pycache__/run_eveai_app.cpython-312.pyc
/docker/grafana/data/
/temp_requirements/
/nginx/node_modules/
/nginx/.parcel-cache/
/nginx/static/

View File

@@ -44,7 +44,6 @@ class TrackedMistralAIEmbeddings(EveAIEmbeddings):
for i in range(0, len(texts), self.batch_size):
batch = texts[i:i + self.batch_size]
batch_num = i // self.batch_size + 1
current_app.logger.debug(f"Processing embedding batch {batch_num}, size: {len(batch)}")
start_time = time.time()
try:
@@ -70,9 +69,6 @@ class TrackedMistralAIEmbeddings(EveAIEmbeddings):
}
current_event.log_llm_metrics(metrics)
current_app.logger.debug(f"Batch {batch_num} processed: {len(batch)} texts, "
f"{result.usage.total_tokens} tokens, {batch_time:.2f}s")
# If processing multiple batches, add a small delay to avoid rate limits
if len(texts) > self.batch_size and i + self.batch_size < len(texts):
time.sleep(0.25) # 250ms pause between batches
@@ -82,7 +78,6 @@ class TrackedMistralAIEmbeddings(EveAIEmbeddings):
# If a batch fails, try to process each text individually
for j, text in enumerate(batch):
try:
current_app.logger.debug(f"Attempting individual embedding for item {i + j}")
single_start_time = time.time()
single_result = self.client.embeddings.create(
model=self.model,

View File

@@ -3,7 +3,6 @@ from langchain.callbacks.base import BaseCallbackHandler
from typing import Dict, Any, List
from langchain.schema import LLMResult
from common.utils.business_event_context import current_event
from flask import current_app
class LLMMetricsHandler(BaseCallbackHandler):

View File

@@ -0,0 +1,47 @@
import time
from langchain.callbacks.base import BaseCallbackHandler
from typing import Dict, Any, List
from langchain.schema import LLMResult
from common.utils.business_event_context import current_event
class PersistentLLMMetricsHandler(BaseCallbackHandler):
"""Metrics handler that allows metrics to be retrieved from within any call. In case metrics are required for other
purposes than business event logging."""
def __init__(self):
self.total_tokens: int = 0
self.prompt_tokens: int = 0
self.completion_tokens: int = 0
self.start_time: float = 0
self.end_time: float = 0
self.total_time: float = 0
def reset(self):
self.total_tokens = 0
self.prompt_tokens = 0
self.completion_tokens = 0
self.start_time = 0
self.end_time = 0
self.total_time = 0
def on_llm_start(self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) -> None:
self.start_time = time.time()
def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
self.end_time = time.time()
self.total_time = self.end_time - self.start_time
usage = response.llm_output.get('token_usage', {})
self.prompt_tokens += usage.get('prompt_tokens', 0)
self.completion_tokens += usage.get('completion_tokens', 0)
self.total_tokens = self.prompt_tokens + self.completion_tokens
def get_metrics(self) -> Dict[str, int | float]:
return {
'total_tokens': self.total_tokens,
'prompt_tokens': self.prompt_tokens,
'completion_tokens': self.completion_tokens,
'time_elapsed': self.total_time,
'interaction_type': 'LLM',
}
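As a standalone usage sketch (without the langchain dependency), the accumulation semantics of this handler can be illustrated as follows; MetricsAccumulator and record are illustrative stand-ins for the handler class and its on_llm_end callback:

```python
# Sketch of the accumulation semantics of PersistentLLMMetricsHandler:
# token counts are summed across calls until reset() is invoked.
class MetricsAccumulator:
    def __init__(self):
        self.reset()

    def reset(self):
        self.prompt_tokens = 0
        self.completion_tokens = 0
        self.total_tokens = 0

    def record(self, usage: dict):
        # mirrors on_llm_end: usage comes from response.llm_output['token_usage']
        self.prompt_tokens += usage.get('prompt_tokens', 0)
        self.completion_tokens += usage.get('completion_tokens', 0)
        self.total_tokens = self.prompt_tokens + self.completion_tokens

acc = MetricsAccumulator()
acc.record({'prompt_tokens': 10, 'completion_tokens': 5})
acc.record({'prompt_tokens': 7, 'completion_tokens': 3})
print(acc.total_tokens)  # 25
```

This is the property the docstring describes: metrics stay retrievable across calls, for purposes other than business event logging, until reset() is called.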

View File

@@ -8,9 +8,10 @@ import sqlalchemy as sa
class Catalog(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(50), nullable=False)
name = db.Column(db.String(50), nullable=False, unique=True)
description = db.Column(db.Text, nullable=True)
type = db.Column(db.String(50), nullable=False, default="STANDARD_CATALOG")
type_version = db.Column(db.String(20), nullable=True, default="1.0.0")
min_chunk_size = db.Column(db.Integer, nullable=True, default=1500)
max_chunk_size = db.Column(db.Integer, nullable=True, default=2500)
@@ -26,6 +27,20 @@ class Catalog(db.Model):
updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
updated_by = db.Column(db.Integer, db.ForeignKey(User.id))
def to_dict(self):
return {
'id': self.id,
'name': self.name,
'description': self.description,
'type': self.type,
'type_version': self.type_version,
'min_chunk_size': self.min_chunk_size,
'max_chunk_size': self.max_chunk_size,
'user_metadata': self.user_metadata,
'system_metadata': self.system_metadata,
'configuration': self.configuration,
}
class Processor(db.Model):
id = db.Column(db.Integer, primary_key=True)
@@ -34,6 +49,7 @@ class Processor(db.Model):
catalog_id = db.Column(db.Integer, db.ForeignKey('catalog.id'), nullable=True)
type = db.Column(db.String(50), nullable=False)
sub_file_type = db.Column(db.String(50), nullable=True)
active = db.Column(db.Boolean, nullable=True, default=True)
# Tuning enablers
tuning = db.Column(db.Boolean, nullable=True, default=False)
@@ -89,6 +105,12 @@ class Document(db.Model):
# Relations
versions = db.relationship('DocumentVersion', backref='document', lazy=True)
@property
def latest_version(self):
"""Returns the latest document version (the one with highest id)"""
from sqlalchemy import desc
return DocumentVersion.query.filter_by(doc_id=self.id).order_by(desc(DocumentVersion.id)).first()
def __repr__(self):
return f"<Document {self.id}: {self.name}>"

View File

@@ -1,7 +1,7 @@
from sqlalchemy.dialects.postgresql import JSONB
from ..extensions import db
from .user import User, Tenant
from .user import User, Tenant, TenantMake
from .document import Embedding, Retriever
@@ -29,6 +29,7 @@ class Specialist(db.Model):
tuning = db.Column(db.Boolean, nullable=True, default=False)
configuration = db.Column(JSONB, nullable=True)
arguments = db.Column(JSONB, nullable=True)
active = db.Column(db.Boolean, nullable=True, default=True)
# Relationship to retrievers through the association table
retrievers = db.relationship('SpecialistRetriever', backref='specialist', lazy=True,
@@ -44,6 +45,21 @@ class Specialist(db.Model):
updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
updated_by = db.Column(db.Integer, db.ForeignKey(User.id))
def __repr__(self):
return f"<Specialist {self.id}: {self.name}>"
def to_dict(self):
return {
'id': self.id,
'name': self.name,
'description': self.description,
'type': self.type,
'type_version': self.type_version,
'configuration': self.configuration,
'arguments': self.arguments,
'active': self.active,
}
class EveAIAsset(db.Model):
id = db.Column(db.Integer, primary_key=True)
@@ -51,25 +67,23 @@ class EveAIAsset(db.Model):
description = db.Column(db.Text, nullable=True)
type = db.Column(db.String(50), nullable=False, default="DOCUMENT_TEMPLATE")
type_version = db.Column(db.String(20), nullable=True, default="1.0.0")
valid_from = db.Column(db.DateTime, nullable=True)
valid_to = db.Column(db.DateTime, nullable=True)
# Versioning Information
created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
updated_by = db.Column(db.Integer, db.ForeignKey(User.id))
# Relations
versions = db.relationship('EveAIAssetVersion', backref='asset', lazy=True)
class EveAIAssetVersion(db.Model):
id = db.Column(db.Integer, primary_key=True)
asset_id = db.Column(db.Integer, db.ForeignKey(EveAIAsset.id), nullable=False)
# Storage information
bucket_name = db.Column(db.String(255), nullable=True)
object_name = db.Column(db.String(200), nullable=True)
file_type = db.Column(db.String(20), nullable=True)
file_size = db.Column(db.Float, nullable=True)
# Metadata information
user_metadata = db.Column(JSONB, nullable=True)
system_metadata = db.Column(JSONB, nullable=True)
# Configuration information
configuration = db.Column(JSONB, nullable=True)
arguments = db.Column(JSONB, nullable=True)
# Cost information
prompt_tokens = db.Column(db.Integer, nullable=True)
completion_tokens = db.Column(db.Integer, nullable=True)
# Versioning Information
created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
@@ -77,25 +91,25 @@ class EveAIAssetVersion(db.Model):
updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
updated_by = db.Column(db.Integer, db.ForeignKey(User.id))
# Relations
instructions = db.relationship('EveAIAssetInstruction', backref='asset_version', lazy=True)
last_used_at = db.Column(db.DateTime, nullable=True)
class EveAIAssetInstruction(db.Model):
class EveAIDataCapsule(db.Model):
id = db.Column(db.Integer, primary_key=True)
asset_version_id = db.Column(db.Integer, db.ForeignKey(EveAIAssetVersion.id), nullable=False)
name = db.Column(db.String(255), nullable=False)
content = db.Column(db.Text, nullable=True)
chat_session_id = db.Column(db.Integer, db.ForeignKey(ChatSession.id), nullable=False)
type = db.Column(db.String(50), nullable=False, default="STANDARD_RAG")
type_version = db.Column(db.String(20), nullable=True, default="1.0.0")
configuration = db.Column(JSONB, nullable=True)
data = db.Column(JSONB, nullable=True)
# Versioning Information
created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
updated_by = db.Column(db.Integer, db.ForeignKey(User.id))
class EveAIProcessedAsset(db.Model):
id = db.Column(db.Integer, primary_key=True)
asset_version_id = db.Column(db.Integer, db.ForeignKey(EveAIAssetVersion.id), nullable=False)
specialist_id = db.Column(db.Integer, db.ForeignKey(Specialist.id), nullable=True)
chat_session_id = db.Column(db.Integer, db.ForeignKey(ChatSession.id), nullable=True)
bucket_name = db.Column(db.String(255), nullable=True)
object_name = db.Column(db.String(255), nullable=True)
created_at = db.Column(db.DateTime, nullable=True, server_default=db.func.now())
# Unique constraint on chat_session_id, type and type_version
__table_args__ = (db.UniqueConstraint('chat_session_id', 'type', 'type_version', name='uix_data_capsule_session_type_version'),)
class EveAIAgent(db.Model):
@@ -222,6 +236,7 @@ class SpecialistMagicLink(db.Model):
name = db.Column(db.String(50), nullable=False)
description = db.Column(db.Text, nullable=True)
specialist_id = db.Column(db.Integer, db.ForeignKey(Specialist.id, ondelete='CASCADE'), nullable=False)
tenant_make_id = db.Column(db.Integer, db.ForeignKey(TenantMake.id, ondelete='CASCADE'), nullable=True)
magic_link_code = db.Column(db.String(55), nullable=False, unique=True)
valid_from = db.Column(db.DateTime, nullable=True)
@@ -236,3 +251,14 @@ class SpecialistMagicLink(db.Model):
def __repr__(self):
return f"<SpecialistMagicLink {self.specialist_id} {self.magic_link_code}>"
def to_dict(self):
return {
'id': self.id,
'name': self.name,
'description': self.description,
'magic_link_code': self.magic_link_code,
'valid_from': self.valid_from,
'valid_to': self.valid_to,
'specialist_args': self.specialist_args,
}

View File

@@ -2,7 +2,7 @@ from datetime import date
from common.extensions import db
from flask_security import UserMixin, RoleMixin
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.dialects.postgresql import ARRAY, JSONB
import sqlalchemy as sa
from common.models.entitlements import License
@@ -26,19 +26,18 @@ class Tenant(db.Model):
timezone = db.Column(db.String(50), nullable=True, default='UTC')
type = db.Column(db.String(20), nullable=True, server_default='Active')
# language information
default_language = db.Column(db.String(2), nullable=True)
allowed_languages = db.Column(ARRAY(sa.String(2)), nullable=True)
# Entitlements
currency = db.Column(db.String(20), nullable=True)
storage_dirty = db.Column(db.Boolean, nullable=True, default=False)
default_tenant_make_id = db.Column(db.Integer, db.ForeignKey('public.tenant_make.id'), nullable=True)
# Relations
users = db.relationship('User', backref='tenant')
domains = db.relationship('TenantDomain', backref='tenant')
licenses = db.relationship('License', back_populates='tenant')
license_usages = db.relationship('LicenseUsage', backref='tenant')
tenant_makes = db.relationship('TenantMake', backref='tenant', foreign_keys='TenantMake.tenant_id')
default_tenant_make = db.relationship('TenantMake', foreign_keys=[default_tenant_make_id], uselist=False)
@property
def current_license(self):
@@ -59,9 +58,8 @@ class Tenant(db.Model):
'website': self.website,
'timezone': self.timezone,
'type': self.type,
'default_language': self.default_language,
'allowed_languages': self.allowed_languages,
'currency': self.currency,
'default_tenant_make_id': self.default_tenant_make_id,
}
@@ -173,6 +171,46 @@ class TenantProject(db.Model):
return f"<TenantProject {self.id}: {self.name}>"
class TenantMake(db.Model):
__bind_key__ = 'public'
__table_args__ = {'schema': 'public'}
id = db.Column(db.Integer, primary_key=True)
tenant_id = db.Column(db.Integer, db.ForeignKey('public.tenant.id'), nullable=False)
name = db.Column(db.String(50), nullable=False, unique=True)
description = db.Column(db.Text, nullable=True)
active = db.Column(db.Boolean, nullable=False, default=True)
website = db.Column(db.String(255), nullable=True)
logo_url = db.Column(db.String(255), nullable=True)
default_language = db.Column(db.String(2), nullable=True)
allowed_languages = db.Column(ARRAY(sa.String(2)), nullable=True)
# Chat customisation options
chat_customisation_options = db.Column(JSONB, nullable=True)
# Versioning Information
created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
created_by = db.Column(db.Integer, db.ForeignKey('public.user.id'), nullable=True)
updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
updated_by = db.Column(db.Integer, db.ForeignKey('public.user.id'))
def __repr__(self):
return f"<TenantMake {self.id} for tenant {self.tenant_id}: {self.name}>"
def to_dict(self):
return {
'id': self.id,
'name': self.name,
'description': self.description,
'active': self.active,
'website': self.website,
'logo_url': self.logo_url,
'chat_customisation_options': self.chat_customisation_options,
'allowed_languages': self.allowed_languages,
'default_language': self.default_language,
}
class Partner(db.Model):
__bind_key__ = 'public'
__table_args__ = {'schema': 'public'}
@@ -281,3 +319,38 @@ class SpecialistMagicLinkTenant(db.Model):
tenant_id = db.Column(db.Integer, db.ForeignKey('public.tenant.id'), nullable=False)
class TranslationCache(db.Model):
__bind_key__ = 'public'
__table_args__ = {'schema': 'public'}
cache_key = db.Column(db.String(16), primary_key=True)
source_text = db.Column(db.Text, nullable=False)
translated_text = db.Column(db.Text, nullable=False)
source_language = db.Column(db.String(2), nullable=True)
target_language = db.Column(db.String(2), nullable=False)
context = db.Column(db.Text, nullable=True)
# Translation cost
prompt_tokens = db.Column(db.Integer, nullable=False)
completion_tokens = db.Column(db.Integer, nullable=False)
# Tracking
created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
created_by = db.Column(db.Integer, db.ForeignKey('public.user.id'), nullable=True)
updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
updated_by = db.Column(db.Integer, db.ForeignKey('public.user.id'), nullable=True)
last_used_at = db.Column(db.DateTime, nullable=True)
class PartnerRAGRetriever(db.Model):
__bind_key__ = 'public'
__table_args__ = (
db.PrimaryKeyConstraint('tenant_id', 'retriever_id'),
db.UniqueConstraint('partner_id', 'tenant_id', 'retriever_id'),
{'schema': 'public'},
)
partner_id = db.Column(db.Integer, db.ForeignKey('public.partner.id'), nullable=False)
tenant_id = db.Column(db.Integer, db.ForeignKey('public.tenant.id'), nullable=False)
retriever_id = db.Column(db.Integer, nullable=False)

View File

@@ -41,7 +41,7 @@ class LicensePeriodServices:
current_app.logger.debug(f"Found license period {license_period.id} for tenant {tenant_id} "
f"with status {license_period.status}")
match license_period.status:
case PeriodStatus.UPCOMING:
case PeriodStatus.UPCOMING | PeriodStatus.PENDING:
current_app.logger.debug(f"In upcoming state")
LicensePeriodServices._complete_last_license_period(tenant_id=tenant_id)
current_app.logger.debug(f"Completed last license period for tenant {tenant_id}")
@@ -71,9 +71,9 @@ class LicensePeriodServices:
delta = abs(current_date - license_period.period_start)
if delta > timedelta(days=current_app.config.get('ENTITLEMENTS_MAX_PENDING_DAYS', 5)):
raise EveAIPendingLicensePeriod()
case PeriodStatus.ACTIVE:
else:
return license_period
case PeriodStatus.PENDING:
case PeriodStatus.ACTIVE:
return license_period
else:
raise EveAILicensePeriodsExceeded(license_id=None)
@@ -125,7 +125,7 @@ class LicensePeriodServices:
tenant_id=tenant_id,
period_number=next_period_number,
period_start=the_license.start_date + relativedelta(months=next_period_number-1),
period_end=the_license.end_date + relativedelta(months=next_period_number, days=-1),
period_end=the_license.start_date + relativedelta(months=next_period_number, days=-1),
status=PeriodStatus.UPCOMING,
upcoming_at=dt.now(tz.utc),
)
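The corrected period_end computation above derives both period boundaries from the license start date. A minimal stdlib sketch of that boundary logic (add_months and period_bounds are illustrative helpers, not code from the repository):

```python
import calendar
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    # stdlib stand-in for relativedelta(months=...), clamping to month end
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def period_bounds(start_date: date, period_number: int) -> tuple[date, date]:
    # Period n runs from start_date + (n-1) months up to the day before
    # start_date + n months, both derived from start_date (mirroring the fix).
    period_start = add_months(start_date, period_number - 1)
    period_end = add_months(start_date, period_number) - timedelta(days=1)
    return period_start, period_end

print(period_bounds(date(2025, 1, 15), 2))  # period 2: Feb 15 to Mar 14
```

Deriving period_end from end_date, as the old code did, would shift every period's end by the full license duration instead of by one month.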

View File

@@ -0,0 +1,9 @@
from common.models.interaction import EveAIAsset
from common.extensions import minio_client
class AssetServices:
@staticmethod
def add_or_replace_asset_file(asset_id, file_data):
asset = EveAIAsset.query.get_or_404(asset_id)

View File

@@ -0,0 +1,25 @@
from datetime import datetime as dt, timezone as tz
from common.models.interaction import EveAIDataCapsule
from common.extensions import db
from common.utils.model_logging_utils import set_logging_information, update_logging_information
class CapsuleServices:
@staticmethod
def push_capsule_data(chat_session_id: str, type: str, type_version: str, configuration: dict, data: dict):
capsule = EveAIDataCapsule.query.filter_by(chat_session_id=chat_session_id, type=type, type_version=type_version).first()
if capsule:
# Update the existing capsule if it already exists
capsule.configuration = configuration
capsule.data = data
update_logging_information(capsule, dt.now(tz.utc))
else:
# Create a new capsule if it does not exist yet
capsule = EveAIDataCapsule(chat_session_id=chat_session_id, type=type, type_version=type_version,
configuration=configuration, data=data)
set_logging_information(capsule, dt.now(tz.utc))
db.session.add(capsule)
db.session.commit()
return capsule
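The upsert behaviour of push_capsule_data (one capsule per (chat_session_id, type, type_version), matching the new unique constraint uix_data_capsule_session_type_version) can be sketched in memory without Flask or SQLAlchemy; the capsules dict and the function below are illustrative, not the service code:

```python
# One capsule per (chat_session_id, type, type_version) key: a second
# push with the same key updates the record instead of creating a new one.
capsules: dict[tuple, dict] = {}

def push_capsule_data(chat_session_id, type_, type_version, configuration, data):
    key = (chat_session_id, type_, type_version)
    capsule = capsules.get(key)
    if capsule:
        # update the existing capsule if one exists for this key
        capsule.update(configuration=configuration, data=data)
    else:
        # otherwise create a new capsule
        capsule = dict(configuration=configuration, data=data)
        capsules[key] = capsule
    return capsule

push_capsule_data(1, 'STANDARD_RAG', '1.0.0', {}, {'a': 1})
push_capsule_data(1, 'STANDARD_RAG', '1.0.0', {}, {'a': 2})
print(len(capsules))  # 1: the second call updated, not duplicated
```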

View File

@@ -220,3 +220,18 @@ class SpecialistServices:
db.session.add(tool)
current_app.logger.info(f"Created tool {tool.id} of type {tool_type}")
return tool
@staticmethod
def get_specialist_system_field(specialist_id, config_name, system_name):
"""Get the value of a system field in a specialist's configuration. Returns the actual value, or None."""
specialist = Specialist.query.get(specialist_id)
if not specialist:
raise ValueError(f"Specialist with ID {specialist_id} not found")
config = cache_manager.specialists_config_cache.get_config(specialist.type, specialist.type_version)
if not config:
raise ValueError(f"No configuration found for {specialist.type} version {specialist.type_version}")
potential_field = config.get(config_name, None)
if potential_field:
if potential_field.type == 'system' and potential_field.system_name == system_name:
return specialist.configuration.get(config_name, None)
return None
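The lookup rule above returns a configured value only when the field is declared as a system field with the requested system name. A minimal sketch with plain dicts standing in for the Specialist model and the config cache (in the real code, fields are objects with `.type` and `.system_name` attributes; the dict shapes here are assumptions for illustration):

```python
# Dict-based sketch of get_specialist_system_field's decision rule.
config = {
    "company_name": {"type": "system", "system_name": "COMPANY_NAME"},
    "tone": {"type": "string"},
}
specialist_configuration = {"company_name": "Acme", "tone": "formal"}

def get_system_field(config_name, system_name):
    field = config.get(config_name)
    if field and field.get("type") == "system" and field.get("system_name") == system_name:
        return specialist_configuration.get(config_name)
    return None  # unknown field, not a system field, or a different system name

print(get_system_field("company_name", "COMPANY_NAME"))  # Acme
print(get_system_field("tone", "COMPANY_NAME"))          # None
```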


@@ -1,4 +1,4 @@
-from typing import List
+from typing import List, Dict, Any
from flask import session
from sqlalchemy.exc import SQLAlchemyError
@@ -43,5 +43,11 @@ class PartnerServices:
return license_tier_ids
@staticmethod
def get_management_service() -> Dict[str, Any]:
management_service = next((service for service in session['partner']['services']
if service.get('type') == 'MANAGEMENT_SERVICE'), None)
return management_service


@@ -69,6 +69,10 @@ class TenantServices:
cache_handler = cache_manager.tasks_types_cache
elif config_type == 'tools':
cache_handler = cache_manager.tools_types_cache
elif config_type == 'catalogs':
cache_handler = cache_manager.catalogs_types_cache
elif config_type == 'retrievers':
cache_handler = cache_manager.retrievers_types_cache
else:
raise ValueError(f"Unsupported config type: {config_type}")
@@ -78,7 +82,7 @@ class TenantServices:
# Filter to include:
# 1. Types with no partner (global)
# 2. Types with partners that have a SPECIALIST_SERVICE relationship with this tenant
-available_partners = TenantServices.get_tenant_partner_names(tenant_id)
+available_partners = TenantServices.get_tenant_partner_specialist_denominators(tenant_id)
available_types = {
type_id: info for type_id, info in all_types.items()
@@ -88,9 +92,10 @@ class TenantServices:
return available_types
@staticmethod
-def get_tenant_partner_names(tenant_id: int) -> List[str]:
+def get_tenant_partner_specialist_denominators(tenant_id: int) -> List[str]:
"""
-Get names of partners that have a SPECIALIST_SERVICE relationship with this tenant
+Get names of partners that have a SPECIALIST_SERVICE relationship with this tenant, that can be used for
+filtering configurations.
Args:
tenant_id: The tenant ID
@@ -99,7 +104,7 @@ class TenantServices:
List of partner names (tenant names)
"""
# Find all PartnerTenant relationships for this tenant
-partner_names = []
+partner_service_denominators = []
try:
# Get all partner services of type SPECIALIST_SERVICE
specialist_services = (
@@ -128,17 +133,12 @@ class TenantServices:
)
if partner_service:
partner = Partner.query.get(partner_service.partner_id)
-if partner:
-# Get the tenant associated with this partner
-partner_tenant = Tenant.query.get(partner.tenant_id)
-if partner_tenant:
-partner_names.append(partner_tenant.name)
+partner_service_denominators.append(partner_service.configuration.get("specialist_denominator", ""))
except SQLAlchemyError as e:
current_app.logger.error(f"Database error retrieving partner names: {str(e)}")
-return partner_names
+return partner_service_denominators
@staticmethod
def can_use_specialist_type(tenant_id: int, specialist_type: str) -> bool:
@@ -166,7 +166,7 @@ class TenantServices:
# If it's a partner-specific specialist, check if tenant has access
partner_name = specialist_def.get('partner')
-available_partners = TenantServices.get_tenant_partner_names(tenant_id)
+available_partners = TenantServices.get_tenant_partner_specialist_denominators(tenant_id)
return partner_name in available_partners


@@ -0,0 +1,108 @@
from flask import current_app, session
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from common.utils.business_event import BusinessEvent
from common.utils.business_event_context import current_event
from common.utils.model_utils import get_template
from eveai_chat_workers.outputs.globals.a2q_output.q_a_output_v1_0 import A2QOutput
from eveai_chat_workers.outputs.globals.q_a_output.q_a_output_v1_0 import QAOutput
class HumanAnswerServices:
@staticmethod
def check_affirmative_answer(tenant_id: int, question: str, answer: str, language_iso: str) -> bool:
return HumanAnswerServices._check_answer(tenant_id, question, answer, language_iso, "check_affirmative_answer",
"Check Affirmative Answer")
@staticmethod
def check_additional_information(tenant_id: int, question: str, answer: str, language_iso: str) -> bool:
result = HumanAnswerServices._check_answer(tenant_id, question, answer, language_iso,
"check_additional_information", "Check Additional Information")
return result
@staticmethod
def get_answer_to_question(tenant_id: int, question: str, answer: str, language_iso: str) -> str:
language = HumanAnswerServices._process_arguments(question, answer, language_iso)
span_name = "Get Answer To Question"
template_name = "get_answer_to_question"
if not current_event:
with BusinessEvent('Answer Check Service', tenant_id):
with current_event.create_span(span_name):
return HumanAnswerServices._get_answer_to_question_logic(question, answer, language, template_name)
else:
with current_event.create_span(span_name):
return HumanAnswerServices._get_answer_to_question_logic(question, answer, language, template_name)
@staticmethod
def _check_answer(tenant_id: int, question: str, answer: str, language_iso: str, template_name: str,
span_name: str) -> bool:
language = HumanAnswerServices._process_arguments(question, answer, language_iso)
if not current_event:
with BusinessEvent('Answer Check Service', tenant_id):
with current_event.create_span(span_name):
return HumanAnswerServices._check_answer_logic(question, answer, language, template_name)
else:
with current_event.create_span(span_name):
return HumanAnswerServices._check_answer_logic(question, answer, language, template_name)
@staticmethod
def _check_answer_logic(question: str, answer: str, language: str, template_name: str) -> bool:
prompt_params = {
'question': question,
'answer': answer,
'language': language,
}
template, llm = get_template(template_name)
check_answer_prompt = ChatPromptTemplate.from_template(template)
setup = RunnablePassthrough()
output_schema = QAOutput
structured_llm = llm.with_structured_output(output_schema)
chain = (setup | check_answer_prompt | structured_llm)
raw_answer = chain.invoke(prompt_params)
return raw_answer.answer
@staticmethod
def _get_answer_to_question_logic(question: str, answer: str, language: str, template_name: str) \
-> str:
prompt_params = {
'question': question,
'answer': answer,
'language': language,
}
template, llm = get_template(template_name)
check_answer_prompt = ChatPromptTemplate.from_template(template)
setup = RunnablePassthrough()
output_schema = A2QOutput
structured_llm = llm.with_structured_output(output_schema)
chain = (setup | check_answer_prompt | structured_llm)
raw_answer = chain.invoke(prompt_params)
return raw_answer.answer
@staticmethod
def _process_arguments(question: str, answer: str, language_iso: str) -> str:
if language_iso.strip() == '':
raise ValueError("Language cannot be empty")
language = current_app.config.get('SUPPORTED_LANGUAGE_ISO639_1_LOOKUP').get(language_iso)
if language is None:
raise ValueError(f"Unsupported language: {language_iso}")
if question.strip() == '':
raise ValueError("Question cannot be empty")
if answer.strip() == '':
raise ValueError("Answer cannot be empty")
return language


@@ -0,0 +1,148 @@
import json
from typing import Dict, Any, Optional
from flask import session
from common.extensions import cache_manager
from common.utils.business_event import BusinessEvent
from common.utils.business_event_context import current_event
class TranslationServices:
@staticmethod
def translate_config(tenant_id: int, config_data: Dict[str, Any], field_config: str, target_language: str,
source_language: Optional[str] = None, context: Optional[str] = None) -> Dict[str, Any]:
"""
Translates a configuration based on a field configuration.
Args:
tenant_id: Identifier of the tenant the translation is performed for.
config_data: A dictionary or JSON string (converted to a dictionary) with configuration data
field_config: The name of a field configuration (e.g. 'fields')
target_language: The language to translate into
source_language: Optional, the source language of the configuration
context: Optional, a specific context for the translation
Returns:
A dictionary with the translated configuration
"""
config_type = config_data.get('type', 'Unknown')
config_version = config_data.get('version', 'Unknown')
span_name = f"{config_type}-{config_version}-{field_config}"
if current_event:
with current_event.create_span(span_name):
translated_config = TranslationServices._translate_config(tenant_id, config_data, field_config,
target_language, source_language, context)
return translated_config
else:
with BusinessEvent('Config Translation Service', tenant_id):
with current_event.create_span(span_name):
translated_config = TranslationServices._translate_config(tenant_id, config_data, field_config,
target_language, source_language, context)
return translated_config
@staticmethod
def _translate_config(tenant_id: int, config_data: Dict[str, Any], field_config: str, target_language: str,
source_language: Optional[str] = None, context: Optional[str] = None) -> Dict[str, Any]:
# Make sure we have a dictionary
if isinstance(config_data, str):
config_data = json.loads(config_data)
# Make a copy of the original data to modify
translated_config = config_data.copy()
# Fetch type and version for the Business Event span
config_type = config_data.get('type', 'Unknown')
config_version = config_data.get('version', 'Unknown')
if field_config in config_data:
fields = config_data[field_config]
# Take the description from metadata as context when no context was supplied
description_context = ""
if not context and 'metadata' in config_data and 'description' in config_data['metadata']:
description_context = config_data['metadata']['description']
# Loop over each field in the configuration
for field_name, field_data in fields.items():
# Translate name if it exists and is not empty
if 'name' in field_data and field_data['name']:
# Use context if supplied, otherwise description_context
field_context = context if context else description_context
translated_name = cache_manager.translation_cache.get_translation(
text=field_data['name'],
target_lang=target_language,
source_lang=source_language,
context=field_context
)
if translated_name:
translated_config[field_config][field_name]['name'] = translated_name.translated_text
if 'title' in field_data and field_data['title']:
# Use context if supplied, otherwise description_context
field_context = context if context else description_context
translated_title = cache_manager.translation_cache.get_translation(
text=field_data['title'],
target_lang=target_language,
source_lang=source_language,
context=field_context
)
if translated_title:
translated_config[field_config][field_name]['title'] = translated_title.translated_text
# Translate description if it exists and is not empty
if 'description' in field_data and field_data['description']:
# Use context if supplied, otherwise description_context
field_context = context if context else description_context
translated_desc = cache_manager.translation_cache.get_translation(
text=field_data['description'],
target_lang=target_language,
source_lang=source_language,
context=field_context
)
if translated_desc:
translated_config[field_config][field_name]['description'] = translated_desc.translated_text
# Translate context if it exists and is not empty
if 'context' in field_data and field_data['context']:
translated_ctx = cache_manager.translation_cache.get_translation(
text=field_data['context'],
target_lang=target_language,
source_lang=source_language,
context=context
)
if translated_ctx:
translated_config[field_config][field_name]['context'] = translated_ctx.translated_text
# Translate allowed_values if the field exists and the values are not empty.
if 'allowed_values' in field_data and field_data['allowed_values']:
translated_allowed_values = []
for allowed_value in field_data['allowed_values']:
translated_allowed_value = cache_manager.translation_cache.get_translation(
text=allowed_value,
target_lang=target_language,
source_lang=source_language,
context=context
)
translated_allowed_values.append(translated_allowed_value.translated_text)
if translated_allowed_values:
translated_config[field_config][field_name]['allowed_values'] = translated_allowed_values
return translated_config
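The per-field context rule applied repeatedly above (an explicit context wins; otherwise the description from the config's metadata; otherwise an empty context) reduces to:

```python
# The context fallback applied to each translated field in _translate_config.
def pick_context(explicit_context, config_data):
    description_context = ""
    if not explicit_context and 'metadata' in config_data and 'description' in config_data['metadata']:
        description_context = config_data['metadata']['description']
    return explicit_context if explicit_context else description_context

cfg = {"metadata": {"description": "KO interview form"}}
print(pick_context(None, cfg))      # falls back to the metadata description
print(pick_context("Custom", cfg))  # explicit context wins
```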
@staticmethod
def translate(tenant_id: int, text: str, target_language: str, source_language: Optional[str] = None,
context: Optional[str] = None) -> str:
if current_event:
with current_event.create_span('Translation'):
translation_cache = cache_manager.translation_cache.get_translation(text, target_language,
source_language, context)
return translation_cache.translated_text
else:
with BusinessEvent('Translation Service', tenant_id):
with current_event.create_span('Translation'):
translation_cache = cache_manager.translation_cache.get_translation(text, target_language,
source_language, context)
return translation_cache.translated_text


@@ -4,59 +4,9 @@ from flask import current_app
from sqlalchemy.exc import SQLAlchemyError
from common.extensions import cache_manager, minio_client, db
-from common.models.interaction import EveAIAsset, EveAIAssetVersion
+from common.models.interaction import EveAIAsset
from common.utils.model_logging_utils import set_logging_information
def create_asset_stack(api_input, tenant_id):
type_version = cache_manager.assets_version_tree_cache.get_latest_version(api_input['type'])
api_input['type_version'] = type_version
new_asset = create_asset(api_input, tenant_id)
new_asset_version = create_version_for_asset(new_asset, tenant_id)
db.session.add(new_asset)
db.session.add(new_asset_version)
try:
db.session.commit()
except SQLAlchemyError as e:
current_app.logger.error(f"Could not add asset for tenant {tenant_id}: {str(e)}")
db.session.rollback()
raise e
return new_asset, new_asset_version
def create_asset(api_input, tenant_id):
new_asset = EveAIAsset()
new_asset.name = api_input['name']
new_asset.description = api_input['description']
new_asset.type = api_input['type']
new_asset.type_version = api_input['type_version']
if api_input['valid_from'] and api_input['valid_from'] != '':
new_asset.valid_from = api_input['valid_from']
else:
new_asset.valid_from = dt.now(tz.utc)
new_asset.valid_to = api_input['valid_to']
set_logging_information(new_asset, dt.now(tz.utc))
return new_asset
def create_version_for_asset(asset, tenant_id):
new_asset_version = EveAIAssetVersion()
new_asset_version.asset = asset
new_asset_version.bucket_name = minio_client.create_tenant_bucket(tenant_id)
set_logging_information(new_asset_version, dt.now(tz.utc))
return new_asset_version
def add_asset_version_file(asset_version, field_name, file, tenant_id):
object_name, file_size = minio_client.upload_file(asset_version.bucket_name, asset_version.id, field_name,
file.content_type)
# mark_tenant_storage_dirty(tenant_id)
# TODO - make sure the storage recalculation happens immediately!
return object_name


@@ -7,7 +7,7 @@ from flask import current_app
from common.utils.cache.base import CacheHandler, CacheKey
from config.type_defs import agent_types, task_types, tool_types, specialist_types, retriever_types, prompt_types, \
-catalog_types, partner_service_types, processor_types
+catalog_types, partner_service_types, processor_types, customisation_types, specialist_form_types, capsule_types
def is_major_minor(version: str) -> bool:
@@ -332,24 +332,22 @@ class BaseConfigTypesCacheHandler(CacheHandler[Dict[str, Any]]):
"""
return isinstance(value, dict) # Cache all dictionaries
-def _load_type_definitions(self) -> Dict[str, Dict[str, str]]:
+def _load_type_definitions(self) -> Dict[str, Dict[str, Any]]:
"""Load type definitions from the corresponding type_defs module"""
if not self._types_module:
raise ValueError("_types_module must be set by subclass")
-type_definitions = {
-type_id: {
-'name': info['name'],
-'description': info['description'],
-'partner': info.get('partner')  # Include partner info if available
-}
-for type_id, info in self._types_module.items()
-}
+type_definitions = {}
+for type_id, info in self._types_module.items():
+# Copy all fields from the type definition
+type_definitions[type_id] = {}
+for key, value in info.items():
+type_definitions[type_id][key] = value
return type_definitions
-def get_types(self) -> Dict[str, Dict[str, str]]:
-"""Get dictionary of available types with name and description"""
+def get_types(self) -> Dict[str, Dict[str, Any]]:
+"""Get dictionary of available types with all defined properties"""
result = self.get(
lambda type_name: self._load_type_definitions(),
type_name=f'{self.config_type}_types',
@@ -463,7 +461,6 @@ ProcessorConfigCacheHandler, ProcessorConfigVersionTreeCacheHandler, ProcessorCo
types_module=processor_types.PROCESSOR_TYPES
))
-# Add to common/utils/cache/config_cache.py
PartnerServiceConfigCacheHandler, PartnerServiceConfigVersionTreeCacheHandler, PartnerServiceConfigTypesCacheHandler = (
create_config_cache_handlers(
config_type='partner_services',
@@ -471,6 +468,31 @@ PartnerServiceConfigCacheHandler, PartnerServiceConfigVersionTreeCacheHandler, P
types_module=partner_service_types.PARTNER_SERVICE_TYPES
))
CustomisationConfigCacheHandler, CustomisationConfigVersionTreeCacheHandler, CustomisationConfigTypesCacheHandler = (
create_config_cache_handlers(
config_type='customisations',
config_dir='config/customisations',
types_module=customisation_types.CUSTOMISATION_TYPES
)
)
SpecialistFormConfigCacheHandler, SpecialistFormConfigVersionTreeCacheHandler, SpecialistFormConfigTypesCacheHandler = (
create_config_cache_handlers(
config_type='specialist_forms',
config_dir='config/specialist_forms',
types_module=specialist_form_types.SPECIALIST_FORM_TYPES
)
)
CapsuleConfigCacheHandler, CapsuleConfigVersionTreeCacheHandler, CapsuleConfigTypesCacheHandler = (
create_config_cache_handlers(
config_type='data_capsules',
config_dir='config/data_capsules',
types_module=capsule_types.CAPSULE_TYPES
)
)
def register_config_cache_handlers(cache_manager) -> None:
cache_manager.register_handler(AgentConfigCacheHandler, 'eveai_config')
@@ -503,6 +525,12 @@ def register_config_cache_handlers(cache_manager) -> None:
cache_manager.register_handler(PartnerServiceConfigCacheHandler, 'eveai_config')
cache_manager.register_handler(PartnerServiceConfigTypesCacheHandler, 'eveai_config')
cache_manager.register_handler(PartnerServiceConfigVersionTreeCacheHandler, 'eveai_config')
cache_manager.register_handler(CustomisationConfigCacheHandler, 'eveai_config')
cache_manager.register_handler(CustomisationConfigTypesCacheHandler, 'eveai_config')
cache_manager.register_handler(CustomisationConfigVersionTreeCacheHandler, 'eveai_config')
cache_manager.register_handler(SpecialistFormConfigCacheHandler, 'eveai_config')
cache_manager.register_handler(SpecialistFormConfigTypesCacheHandler, 'eveai_config')
cache_manager.register_handler(SpecialistFormConfigVersionTreeCacheHandler, 'eveai_config')
cache_manager.agents_config_cache.set_version_tree_cache(cache_manager.agents_version_tree_cache)
cache_manager.tasks_config_cache.set_version_tree_cache(cache_manager.tasks_version_tree_cache)
@@ -513,3 +541,5 @@ def register_config_cache_handlers(cache_manager) -> None:
cache_manager.catalogs_config_cache.set_version_tree_cache(cache_manager.catalogs_version_tree_cache)
cache_manager.processors_config_cache.set_version_tree_cache(cache_manager.processors_version_tree_cache)
cache_manager.partner_services_config_cache.set_version_tree_cache(cache_manager.partner_services_version_tree_cache)
cache_manager.customisations_config_cache.set_version_tree_cache(cache_manager.customisations_version_tree_cache)
cache_manager.specialist_forms_config_cache.set_version_tree_cache(cache_manager.specialist_forms_version_tree_cache)


@@ -42,7 +42,7 @@ def create_cache_regions(app):
# Region for model-related caching (ModelVariables etc)
model_region = make_region(name='eveai_model').configure(
'dogpile.cache.redis',
-arguments=redis_config,
+arguments={**redis_config, 'db': 6},
replace_existing_backend=True
)
regions['eveai_model'] = model_region

common/utils/cache/translation_cache.py

@@ -0,0 +1,223 @@
import json
import re
from typing import Dict, Any, Optional
from datetime import datetime as dt, timezone as tz
import xxhash
from flask import current_app
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from sqlalchemy.inspection import inspect
from common.langchain.persistent_llm_metrics_handler import PersistentLLMMetricsHandler
from common.utils.business_event_context import current_event
from common.utils.cache.base import CacheHandler, T
from common.extensions import db
from common.models.user import TranslationCache
from flask_security import current_user
from common.utils.model_utils import get_template
class TranslationCacheHandler(CacheHandler[TranslationCache]):
"""Handles caching of translations with fallback to database and external translation service"""
handler_name = 'translation_cache'
def __init__(self, region):
super().__init__(region, 'translation')
self.configure_keys('hash_key')
def _to_cache_data(self, instance: TranslationCache) -> Dict[str, Any]:
"""Convert TranslationCache instance to cache data using SQLAlchemy inspection"""
if not instance:
return {}
mapper = inspect(TranslationCache)
data = {}
for column in mapper.columns:
value = getattr(instance, column.name)
# Handle date serialization
if isinstance(value, dt):
data[column.name] = value.isoformat()
else:
data[column.name] = value
return data
def _from_cache_data(self, data: Dict[str, Any], **kwargs) -> TranslationCache:
if not data:
return None
# Create a new TranslationCache instance
translation = TranslationCache()
mapper = inspect(TranslationCache)
# Set all attributes dynamically
for column in mapper.columns:
if column.name in data:
value = data[column.name]
# Handle date deserialization
if column.name.endswith('_date') and value:
if isinstance(value, str):
value = dt.fromisoformat(value).date()
setattr(translation, column.name, value)
metrics = {
'total_tokens': translation.prompt_tokens + translation.completion_tokens,
'prompt_tokens': translation.prompt_tokens,
'completion_tokens': translation.completion_tokens,
'time_elapsed': 0,
'interaction_type': 'TRANSLATION-CACHE'
}
current_event.log_llm_metrics(metrics)
return translation
def _should_cache(self, value) -> bool:
"""Validate if the translation should be cached"""
if value is None:
return False
# Handle both TranslationCache objects and serialized data (dict)
if isinstance(value, TranslationCache):
return value.cache_key is not None
elif isinstance(value, dict):
return value.get('cache_key') is not None
return False
def get_translation(self, text: str, target_lang: str, source_lang: str = None, context: str = None) -> Optional[
TranslationCache]:
"""
Get the translation for a text in a specific language
Args:
text: The text to be translated
target_lang: The target language for the translation
source_lang: The source language of the text to be translated
context: Optional context for the translation
Returns:
TranslationCache instance if found, None otherwise
"""
if not context:
context = 'No context provided.'
def creator_func(hash_key: str) -> Optional[TranslationCache]:
# Check if translation already exists in database
existing_translation = db.session.query(TranslationCache).filter_by(cache_key=hash_key).first()
if existing_translation:
# Update last used timestamp
existing_translation.last_used_at = dt.now(tz=tz.utc)
metrics = {
'total_tokens': existing_translation.prompt_tokens + existing_translation.completion_tokens,
'prompt_tokens': existing_translation.prompt_tokens,
'completion_tokens': existing_translation.completion_tokens,
'time_elapsed': 0,
'interaction_type': 'TRANSLATION-DB'
}
current_event.log_llm_metrics(metrics)
db.session.commit()
return existing_translation
# Translation not found in DB, need to create it
# Get the translation and metrics
translated_text, metrics = self.translate_text(
text_to_translate=text,
target_lang=target_lang,
source_lang=source_lang,
context=context
)
# Create new translation cache record
new_translation = TranslationCache(
cache_key=hash_key,
source_text=text,
translated_text=translated_text,
source_language=source_lang,
target_language=target_lang,
context=context,
prompt_tokens=metrics.get('prompt_tokens', 0),
completion_tokens=metrics.get('completion_tokens', 0),
created_at=dt.now(tz=tz.utc),
created_by=getattr(current_user, 'id', None) if 'current_user' in globals() else None,
updated_at=dt.now(tz=tz.utc),
updated_by=getattr(current_user, 'id', None) if 'current_user' in globals() else None,
last_used_at=dt.now(tz=tz.utc)
)
# Save to database
db.session.add(new_translation)
db.session.commit()
return new_translation
# Generate the hash key using your existing method
hash_key = self._generate_cache_key(text, target_lang, source_lang, context)
# Pass the hash_key to the get method
return self.get(creator_func, hash_key=hash_key)
def invalidate_tenant_translations(self, tenant_id: int):
"""Invalidate cached translations for specific tenant"""
self.invalidate(tenant_id=tenant_id)
def _generate_cache_key(self, text: str, target_lang: str, source_lang: str = None, context: str = None) -> str:
"""Generate cache key for a translation"""
cache_data = {
"text": text.strip(),
"target_lang": target_lang.lower(),
"source_lang": source_lang.lower() if source_lang else None,
"context": context.strip() if context else None,
}
cache_string = json.dumps(cache_data, sort_keys=True, ensure_ascii=False)
return xxhash.xxh64(cache_string.encode('utf-8')).hexdigest()
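The key is a stable hash of the normalised inputs, so repeated requests for the same translation land on the same cache entry. The sketch below mirrors `_generate_cache_key`, with `hashlib.blake2b` standing in for xxhash (an assumption, to stay inside the standard library):

```python
import hashlib
import json

def generate_cache_key(text, target_lang, source_lang=None, context=None):
    # Normalise the inputs (strip whitespace, lowercase language codes),
    # serialise them canonically, then hash the result.
    cache_data = {
        "text": text.strip(),
        "target_lang": target_lang.lower(),
        "source_lang": source_lang.lower() if source_lang else None,
        "context": context.strip() if context else None,
    }
    cache_string = json.dumps(cache_data, sort_keys=True, ensure_ascii=False)
    return hashlib.blake2b(cache_string.encode('utf-8'), digest_size=8).hexdigest()

k1 = generate_cache_key("  Hello ", "NL", source_lang="EN")
k2 = generate_cache_key("Hello", "nl", source_lang="en")
print(k1 == k2)  # True: normalisation makes the key insensitive to whitespace and case
```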
def translate_text(self, text_to_translate: str, target_lang: str, source_lang: str = None, context: str = None) \
-> tuple[str, dict[str, int | float]]:
target_language = current_app.config['SUPPORTED_LANGUAGE_ISO639_1_LOOKUP'][target_lang]
prompt_params = {
"text_to_translate": text_to_translate,
"target_language": target_language,
}
if context:
template, llm = get_template("translation_with_context")
prompt_params["context"] = context
else:
template, llm = get_template("translation_without_context")
# Add a metrics handler to capture usage
metrics_handler = PersistentLLMMetricsHandler()
existing_callbacks = llm.callbacks or []
llm.callbacks = existing_callbacks + [metrics_handler]
translation_prompt = ChatPromptTemplate.from_template(template)
setup = RunnablePassthrough()
chain = (setup | translation_prompt | llm | StrOutputParser())
translation = chain.invoke(prompt_params)
# Remove double square brackets from translation
translation = re.sub(r'\[\[(.*?)\]\]', r'\1', translation)
metrics = metrics_handler.get_metrics()
return translation, metrics
def register_translation_cache_handlers(cache_manager) -> None:
"""Register translation cache handlers with cache manager"""
cache_manager.register_handler(
TranslationCacheHandler,
'eveai_model' # Use existing eveai_model region
)

common/utils/chat_utils.py

@@ -0,0 +1,172 @@
"""
Utility functions for chat customization.
"""
import json
import re
from flask import current_app
def get_default_chat_customisation(tenant_customisation=None):
"""
Get chat customization options with default values for missing options.
Args:
tenant_customisation (dict or str, optional): The tenant's customization options.
Defaults to None. Can be a dict or a JSON string.
Returns:
dict: A dictionary containing all customization options with default values
for any missing options.
"""
# Default customization options
default_customisation = {
'sidebar_markdown': '',
'sidebar_color': '#f8f9fa',
'sidebar_background': '#2c3e50',
'markdown_background_color': 'transparent',
'markdown_text_color': '#ffffff',
'gradient_start_color': '#f5f7fa',
'gradient_end_color': '#c3cfe2',
'progress_tracker_insights': 'No Information',
'form_title_display': 'Full Title',
'active_background_color': '#ffffff',
'history_background': 10,
'ai_message_background': '#ffffff',
'ai_message_text_color': '#212529',
'human_message_background': '#212529',
'human_message_text_color': '#ffffff',
}
# If no tenant customization is provided, return the defaults
if tenant_customisation is None:
return default_customisation
# Start with the default customization
customisation = default_customisation.copy()
# Convert JSON string to dict if needed
if isinstance(tenant_customisation, str):
try:
tenant_customisation = json.loads(tenant_customisation)
except json.JSONDecodeError as e:
current_app.logger.error(f"Error parsing JSON customisation: {e}")
return default_customisation
# Update with tenant customization
if tenant_customisation:
for key, value in tenant_customisation.items():
if key in customisation:
customisation[key] = value
return customisation
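The merge above only copies tenant keys that already exist in the defaults, so unknown options are silently dropped and malformed JSON falls back to the defaults. A reduced sketch (two options instead of the full default set):

```python
import json

# Reduced default set for illustration; the real function defines many more options.
DEFAULTS = {'sidebar_color': '#f8f9fa', 'history_background': 10}

def merge_customisation(tenant_customisation=None):
    customisation = DEFAULTS.copy()
    if isinstance(tenant_customisation, str):
        try:
            tenant_customisation = json.loads(tenant_customisation)
        except json.JSONDecodeError:
            return DEFAULTS.copy()  # malformed JSON: keep defaults
    if tenant_customisation:
        for key, value in tenant_customisation.items():
            if key in customisation:  # unknown keys are ignored
                customisation[key] = value
    return customisation

print(merge_customisation('{"sidebar_color": "#000000", "unknown_key": 1}'))
# {'sidebar_color': '#000000', 'history_background': 10}
```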
def hex_to_rgb(hex_color):
"""
Convert hex color to RGB tuple.
Args:
hex_color (str): Hex color string (e.g., '#ffffff' or 'ffffff')
Returns:
tuple: RGB values as (r, g, b)
"""
# Remove # if present
hex_color = hex_color.lstrip('#')
# Handle 3-character hex codes
if len(hex_color) == 3:
hex_color = ''.join([c*2 for c in hex_color])
# Convert to RGB
try:
return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))
except ValueError:
# Return white as fallback
return (255, 255, 255)
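A quick check of the conversion above (the function is reproduced verbatim so the example is self-contained):

```python
def hex_to_rgb(hex_color):
    hex_color = hex_color.lstrip('#')
    if len(hex_color) == 3:
        # Expand shorthand like 'fff' to 'ffffff'
        hex_color = ''.join([c * 2 for c in hex_color])
    try:
        return tuple(int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
    except ValueError:
        return (255, 255, 255)  # white fallback for invalid input

print(hex_to_rgb('#2c3e50'))   # (44, 62, 80)
print(hex_to_rgb('fff'))       # (255, 255, 255)
print(hex_to_rgb('#zzzzzz'))   # (255, 255, 255) -- fallback
```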
def adjust_color_alpha(percentage):
"""
Convert percentage to RGBA color with appropriate base color and alpha.
Args:
percentage (int): Percentage (-50 to 50)
Positive = white base (lighten)
Negative = black base (darken)
Zero = transparent
Returns:
str: RGBA color string for CSS
"""
if percentage == 0:
return 'rgba(255, 255, 255, 0)'  # Fully transparent
# Determine the base colour
if percentage > 0:
# Positive = white, for lightening
base_color = (255, 255, 255)
else:
# Negative = black, for darkening
base_color = (0, 0, 0)
# Compute alpha from the percentage (max 50 = alpha 1.0)
alpha = abs(percentage) / 50.0
alpha = max(0.0, min(1.0, alpha))  # Clamp to the 0.0-1.0 range
return f'rgba({base_color[0]}, {base_color[1]}, {base_color[2]}, {alpha})'
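Concretely, the mapping above produces a transparent overlay at 0, a white overlay for positive values and a black one for negative values, with ±50 reaching full opacity (function condensed so the example runs on its own):

```python
def adjust_color_alpha(percentage):
    # Condensed version of the function above: same mapping, same outputs.
    if percentage == 0:
        return 'rgba(255, 255, 255, 0)'  # fully transparent
    base_color = (255, 255, 255) if percentage > 0 else (0, 0, 0)
    alpha = max(0.0, min(1.0, abs(percentage) / 50.0))
    return f'rgba({base_color[0]}, {base_color[1]}, {base_color[2]}, {alpha})'

print(adjust_color_alpha(25))   # rgba(255, 255, 255, 0.5)
print(adjust_color_alpha(-50))  # rgba(0, 0, 0, 1.0)
```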
def adjust_color_brightness(hex_color, percentage):
"""
Adjust the brightness of a hex color by a percentage.
Args:
hex_color (str): Hex color string (e.g., '#ffffff')
percentage (int): Percentage to adjust (-100 to 100)
Positive = lighter, Negative = darker
Returns:
str: RGBA color string for CSS (e.g., 'rgba(255, 255, 255, 0.9)')
"""
if not hex_color or not isinstance(hex_color, str):
return 'rgba(255, 255, 255, 0.1)'
# Get RGB values
r, g, b = hex_to_rgb(hex_color)
# Calculate adjustment factor
if percentage > 0:
# Lighten: move towards white
factor = percentage / 100.0
r = int(r + (255 - r) * factor)
g = int(g + (255 - g) * factor)
b = int(b + (255 - b) * factor)
else:
# Darken: move towards black
factor = abs(percentage) / 100.0
r = int(r * (1 - factor))
g = int(g * (1 - factor))
b = int(b * (1 - factor))
# Ensure values are within 0-255 range
r = max(0, min(255, r))
g = max(0, min(255, g))
b = max(0, min(255, b))
# Return as rgba with slight transparency for better blending
return f'rgba({r}, {g}, {b}, 0.9)'
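As a worked example of the interpolation above, a mid-grey moves halfway towards white or black at ±50% (helpers reproduced so the block is self-contained):

```python
def hex_to_rgb(hex_color):
    hex_color = hex_color.lstrip('#')
    if len(hex_color) == 3:
        hex_color = ''.join([c * 2 for c in hex_color])
    return tuple(int(hex_color[i:i + 2], 16) for i in (0, 2, 4))

def adjust_color_brightness(hex_color, percentage):
    r, g, b = hex_to_rgb(hex_color)
    if percentage > 0:
        factor = percentage / 100.0  # lighten: interpolate towards white
        r, g, b = (int(c + (255 - c) * factor) for c in (r, g, b))
    else:
        factor = abs(percentage) / 100.0  # darken: interpolate towards black
        r, g, b = (int(c * (1 - factor)) for c in (r, g, b))
    return f'rgba({r}, {g}, {b}, 0.9)'

print(adjust_color_brightness('#808080', 50))   # rgba(191, 191, 191, 0.9)
print(adjust_color_brightness('#808080', -50))  # rgba(64, 64, 64, 0.9)
```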
def get_base_background_color():
"""
Get the base background color for history adjustments.
This should be the main chat background color.
Returns:
str: Hex color string
"""
# Use a neutral base color that works well with adjustments
return '#f8f9fa'


@@ -21,7 +21,7 @@ class TaggingField(BaseModel):
@field_validator('type', mode='before')
@classmethod
def validate_type(cls, v: str) -> str:
-valid_types = ['string', 'integer', 'float', 'date', 'enum']
+valid_types = ['string', 'integer', 'float', 'date', 'enum', 'color']
if v not in valid_types:
raise ValueError(f'type must be one of {valid_types}')
return v
@@ -243,7 +243,7 @@ class ArgumentDefinition(BaseModel):
@field_validator('type')
@classmethod
def validate_type(cls, v: str) -> str:
valid_types = ['string', 'integer', 'float', 'date', 'enum']
valid_types = ['string', 'integer', 'float', 'date', 'enum', 'color']
if v not in valid_types:
raise ValueError(f'type must be one of {valid_types}')
return v
@@ -256,7 +256,8 @@ class ArgumentDefinition(BaseModel):
'integer': NumericConstraint,
'float': NumericConstraint,
'date': DateConstraint,
'enum': EnumConstraint
'enum': EnumConstraint,
'color': StringConstraint
}
expected_type = expected_constraint_types.get(self.type)

View File

@@ -4,8 +4,6 @@ import logging
from packaging import version
from flask import current_app
logger = logging.getLogger(__name__)
class ContentManager:
def __init__(self, app=None):
self.app = app
@@ -16,10 +14,10 @@ class ContentManager:
self.app = app
# Check whether the path exists
if not os.path.exists(app.config['CONTENT_DIR']):
logger.warning(f"Content directory not found at: {app.config['CONTENT_DIR']}")
else:
logger.info(f"Content directory configured at: {app.config['CONTENT_DIR']}")
# if not os.path.exists(app.config['CONTENT_DIR']):
# logger.warning(f"Content directory not found at: {app.config['CONTENT_DIR']}")
# else:
# logger.info(f"Content directory configured at: {app.config['CONTENT_DIR']}")
def get_content_path(self, content_type, major_minor=None, patch=None):
"""
@@ -66,12 +64,12 @@ class ContentManager:
content_path = os.path.join(self.app.config['CONTENT_DIR'], content_type)
if not os.path.exists(content_path):
logger.error(f"Content path does not exist: {content_path}")
current_app.logger.error(f"Content path does not exist: {content_path}")
return None
# If no major_minor is given, find the highest
if not major_minor:
available_versions = os.listdir(content_path)
available_versions = [f for f in os.listdir(content_path) if not f.startswith('.')]
if not available_versions:
return None
@@ -81,16 +79,19 @@ class ContentManager:
# Now that we have major_minor, find the highest patch
major_minor_path = os.path.join(content_path, major_minor)
current_app.logger.debug(f"Major/Minor path: {major_minor_path}")
if not os.path.exists(major_minor_path):
logger.error(f"Version path does not exist: {major_minor_path}")
current_app.logger.error(f"Version path does not exist: {major_minor_path}")
return None
files = os.listdir(major_minor_path)
files = [f for f in os.listdir(major_minor_path) if not f.startswith('.')]
current_app.logger.debug(f"Files in version path: {files}")
version_files = []
for file in files:
mm, p = self._parse_version(file)
current_app.logger.debug(f"File: {file}, mm: {mm}, p: {p}")
if mm == major_minor and p:
version_files.append((mm, p, f"{mm}.{p}"))
@@ -99,10 +100,12 @@ class ContentManager:
# Sort by patch number
version_files.sort(key=lambda v: int(v[1]))
current_app.logger.debug(f"Latest version: {version_files[-1]}")
return version_files[-1]
except Exception as e:
logger.error(f"Error finding latest version for {content_type}: {str(e)}")
current_app.logger.error(f"Error finding latest version for {content_type}: {str(e)}")
return None
def read_content(self, content_type, major_minor=None, patch=None):
@@ -125,11 +128,12 @@ class ContentManager:
} or None on error
"""
try:
current_app.logger.debug(f"Reading content {content_type}")
# If no version is given, find the latest
if not major_minor:
version_info = self.get_latest_version(content_type)
if not version_info:
logger.error(f"No versions found for {content_type}")
current_app.logger.error(f"No versions found for {content_type}")
return None
major_minor, patch, full_version = version_info
@@ -138,7 +142,7 @@ class ContentManager:
elif not patch:
version_info = self.get_latest_version(content_type, major_minor)
if not version_info:
logger.error(f"No versions found for {content_type} {major_minor}")
current_app.logger.error(f"No versions found for {content_type} {major_minor}")
return None
major_minor, patch, full_version = version_info
@@ -147,14 +151,17 @@ class ContentManager:
# Now that we have major_minor and patch, read the file
file_path = self.get_content_path(content_type, major_minor, patch)
current_app.logger.debug(f"Content File path: {file_path}")
if not os.path.exists(file_path):
logger.error(f"Content file does not exist: {file_path}")
current_app.logger.error(f"Content file does not exist: {file_path}")
return None
with open(file_path, 'r', encoding='utf-8') as file:
content = file.read()
current_app.logger.debug(f"Content read: {content}")
return {
'content': content,
'version': full_version,
@@ -162,7 +169,7 @@ class ContentManager:
}
except Exception as e:
logger.error(f"Error reading content {content_type} {major_minor}.{patch}: {str(e)}")
current_app.logger.error(f"Error reading content {content_type} {major_minor}.{patch}: {str(e)}")
return None
def list_content_types(self):
@@ -171,7 +178,7 @@ class ContentManager:
return [d for d in os.listdir(self.app.config['CONTENT_DIR'])
if os.path.isdir(os.path.join(self.app.config['CONTENT_DIR'], d))]
except Exception as e:
logger.error(f"Error listing content types: {str(e)}")
current_app.logger.error(f"Error listing content types: {str(e)}")
return []
def list_versions(self, content_type):
@@ -211,5 +218,5 @@ class ContentManager:
return versions
except Exception as e:
logger.error(f"Error listing versions for {content_type}: {str(e)}")
current_app.logger.error(f"Error listing versions for {content_type}: {str(e)}")
return []
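The dotfile filtering added above matters because macOS Finder and some editors drop hidden files (e.g. `.DS_Store`) into content directories, which then get parsed as version names. A standalone sketch of the filtered listing, using a temporary directory rather than the real `CONTENT_DIR`:

```python
import os
import tempfile

def list_visible(path):
    # Mirror of the fix above: skip hidden files so they are never
    # treated as version directories or content files.
    return sorted(f for f in os.listdir(path) if not f.startswith('.'))

with tempfile.TemporaryDirectory() as d:
    for name in ('.DS_Store', '1.0', '1.1'):
        open(os.path.join(d, name), 'w').close()
    print(list_visible(d))  # ['1.0', '1.1']
```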

View File

@@ -3,7 +3,7 @@ from datetime import datetime as dt, timezone as tz
from sqlalchemy import desc
from sqlalchemy.exc import SQLAlchemyError
from werkzeug.utils import secure_filename
from common.models.document import Document, DocumentVersion, Catalog
from common.models.document import Document, DocumentVersion, Catalog, Processor
from common.extensions import db, minio_client
from common.utils.celery_utils import current_celery
from flask import current_app
@@ -11,15 +11,15 @@ import requests
from urllib.parse import urlparse, unquote, urlunparse, parse_qs
import os
from config.type_defs.processor_types import PROCESSOR_TYPES
from .config_field_types import normalize_json_field
from .eveai_exceptions import (EveAIInvalidLanguageException, EveAIDoubleURLException, EveAIUnsupportedFileType,
EveAIInvalidCatalog, EveAIInvalidDocument, EveAIInvalidDocumentVersion, EveAIException)
from .minio_utils import MIB_CONVERTOR
from ..models.user import Tenant
from common.utils.model_logging_utils import set_logging_information, update_logging_information
from common.services.entitlements import LicenseUsageServices
MB_CONVERTOR = 1_048_576
def get_file_size(file):
try:
@@ -38,7 +38,7 @@ def get_file_size(file):
def create_document_stack(api_input, file, filename, extension, tenant_id):
# Precheck if we can add a document to the stack
LicenseUsageServices.check_storage_and_embedding_quota(tenant_id, get_file_size(file)/MB_CONVERTOR)
LicenseUsageServices.check_storage_and_embedding_quota(tenant_id, get_file_size(file) / MIB_CONVERTOR)
# Create the Document
catalog_id = int(api_input.get('catalog_id'))
@@ -143,7 +143,7 @@ def upload_file_for_version(doc_vers, file, extension, tenant_id):
)
doc_vers.bucket_name = bn
doc_vers.object_name = on
doc_vers.file_size = size / MB_CONVERTOR # Convert bytes to MB
doc_vers.file_size = size / MIB_CONVERTOR  # Convert bytes to MiB
db.session.commit()
current_app.logger.info(f'Successfully saved document to MinIO for tenant {tenant_id} for '
@@ -192,9 +192,32 @@ def process_url(url, tenant_id):
existing_doc = DocumentVersion.query.filter_by(url=url).first()
if existing_doc:
raise EveAIDoubleURLException
# Prepare the headers for maximal chance of downloading url
referer = get_referer_from_url(url)
headers = {
"User-Agent": (
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
"AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/115.0.0.0 Safari/537.36"
),
"Accept": (
"text/html,application/xhtml+xml,application/xml;"
"q=0.9,image/avif,image/webp,image/apng,*/*;"
"q=0.8,application/signed-exchange;v=b3;q=0.7"
),
"Accept-Encoding": "gzip, deflate, br",
"Accept-Language": "nl-BE,nl;q=0.9,en-US;q=0.8,en;q=0.7",
"Connection": "keep-alive",
"Upgrade-Insecure-Requests": "1",
"Referer": referer,
"Sec-Fetch-Dest": "document",
"Sec-Fetch-Mode": "navigate",
"Sec-Fetch-Site": "same-origin",
"Sec-Fetch-User": "?1",
}
# Download the content
response = requests.get(url)
response = requests.get(url, headers=headers)
response.raise_for_status()
file_content = response.content
@@ -353,7 +376,7 @@ def refresh_document_with_content(doc_id: int, tenant_id: int, file_content: byt
old_doc_vers = DocumentVersion.query.filter_by(doc_id=doc_id).order_by(desc(DocumentVersion.id)).first()
# Precheck if we have enough quota for the new version
LicenseUsageServices.check_storage_and_embedding_quota(tenant_id, get_file_size(file_content) / MB_CONVERTOR)
LicenseUsageServices.check_storage_and_embedding_quota(tenant_id, get_file_size(file_content) / MIB_CONVERTOR)
# Create new version with same file type as original
extension = old_doc_vers.file_type
@@ -469,3 +492,19 @@ def lookup_document(tenant_id: int, lookup_criteria: dict, metadata_type: str) -
"Error during document lookup",
status_code=500
)
def is_file_type_supported_by_catalog(catalog_id, file_type):
processors = Processor.query.filter_by(catalog_id=catalog_id).filter_by(active=True).all()
supported_file_types = []
for processor in processors:
processor_file_types = PROCESSOR_TYPES[processor.type]['file_types']
file_types = [f.strip() for f in processor_file_types.split(",")]
supported_file_types.extend(file_types)
if file_type not in supported_file_types:
raise EveAIUnsupportedFileType()
def get_referer_from_url(url):
parsed = urlparse(url)
return f"{parsed.scheme}://{parsed.netloc}/"
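`get_referer_from_url` simply reduces the URL to its origin; combined with the browser-like header set above, the download request looks like an in-site navigation. A quick standalone sketch:

```python
from urllib.parse import urlparse

def get_referer_from_url(url):
    # Reduce a full URL to "scheme://host/" for use as a Referer header.
    parsed = urlparse(url)
    return f"{parsed.scheme}://{parsed.netloc}/"

print(get_referer_from_url('https://example.com/docs/page?lang=nl'))
# https://example.com/
```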

View File

@@ -38,6 +38,8 @@ def create_default_config_from_type_config(type_config):
default_config[field_name] = 0
elif field_type == "boolean":
default_config[field_name] = False
elif field_type == "color":
default_config[field_name] = "#000000"
else:
default_config[field_name] = ""
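The per-type default branch above can be condensed into a lookup table; a minimal sketch (the function name is illustrative, not the actual helper), showing that `"color"` now gets a valid hex value instead of an empty string:

```python
def default_for(field_type):
    # Per-type defaults, mirroring the branches above; unknown
    # types fall back to an empty string.
    return {
        'integer': 0,
        'float': 0,
        'boolean': False,
        'color': '#000000',
    }.get(field_type, '')

print(default_for('color'))   # #000000
print(default_for('string'))  # falls through to ""
```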

View File

@@ -34,7 +34,25 @@ class EveAIDoubleURLException(EveAIException):
class EveAIUnsupportedFileType(EveAIException):
"""Raised when an invalid file type is provided"""
def __init__(self, message="Filetype is not supported", status_code=400, payload=None):
def __init__(self, message="File type is not supported by the currently active processors", status_code=400, payload=None):
super().__init__(message, status_code, payload)
class EveAINoProcessorFound(EveAIException):
"""Raised when no processor is found for a given file type"""
def __init__(self, catalog_id, file_type, file_subtype, status_code=400, payload=None):
message = f"No active processor found for catalog {catalog_id} with file type {file_type} and subtype {file_subtype}"
super().__init__(message, status_code, payload)
class EveAINoContentFound(EveAIException):
"""Raised when no content is found for a given document"""
def __init__(self, document_id, document_version_id, status_code=400, payload=None):
self.document_id = document_id
self.document_version_id = document_version_id
message = f"No content found while processing Document with ID {document_id} and version {document_version_id}."
super().__init__(message, status_code, payload)
@@ -248,3 +266,14 @@ class EveAIPendingLicensePeriod(EveAIException):
message = "Basic Fee Payment has not been received yet. Please ensure payment has been made, and wait for it to be processed."
super().__init__(message, status_code, payload)
class EveAISpecialistExecutionError(EveAIException):
"""Raised when an error occurs during specialist execution"""
def __init__(self, tenant_id, specialist_id, session_id, details, status_code=400, payload=None):
message = (f"Error during specialist {specialist_id} execution \n"
f"with Session ID {session_id} \n"
f"for Tenant {tenant_id}. \n"
f"Details: {details} \n"
f"The System Administrator has been notified. Please try again later.")
super().__init__(message, status_code, payload)

View File

@@ -1,54 +0,0 @@
from flask import request, render_template, abort
from sqlalchemy import desc, asc
class FilteredListView:
def __init__(self, model, template, per_page=10):
self.model = model
self.template = template
self.per_page = per_page
def get_query(self):
return self.model.query
def apply_filters(self, query):
filters = request.args.get('filters', {})
for key, value in filters.items():
if hasattr(self.model, key):
column = getattr(self.model, key)
if value.startswith('like:'):
query = query.filter(column.like(f"%{value[5:]}%"))
else:
query = query.filter(column == value)
return query
def apply_sorting(self, query):
sort_by = request.args.get('sort_by')
if sort_by and hasattr(self.model, sort_by):
sort_order = request.args.get('sort_order', 'asc')
column = getattr(self.model, sort_by)
if sort_order == 'desc':
query = query.order_by(desc(column))
else:
query = query.order_by(asc(column))
return query
def paginate(self, query):
page = request.args.get('page', 1, type=int)
return query.paginate(page=page, per_page=self.per_page, error_out=False)
def get(self):
query = self.get_query()
query = self.apply_filters(query)
query = self.apply_sorting(query)
pagination = self.paginate(query)
context = {
'items': pagination.items,
'pagination': pagination,
'model': self.model.__name__,
'filters': request.args.get('filters', {}),
'sort_by': request.args.get('sort_by'),
'sort_order': request.args.get('sort_order', 'asc')
}
return render_template(self.template, **context)

View File

@@ -6,22 +6,17 @@ from flask import current_app
def send_email(to_email, to_name, subject, html):
current_app.logger.debug(f"Sending email to {to_email} with subject {subject}")
access_key = current_app.config['SW_EMAIL_ACCESS_KEY']
secret_key = current_app.config['SW_EMAIL_SECRET_KEY']
default_project_id = current_app.config['SW_PROJECT']
default_region = "fr-par"
current_app.logger.debug(f"Access Key: {access_key}\nSecret Key: {secret_key}\n"
f"Default Project ID: {default_project_id}\nDefault Region: {default_region}")
client = Client(
access_key=access_key,
secret_key=secret_key,
default_project_id=default_project_id,
default_region=default_region
)
current_app.logger.debug(f"Scaleway Client Initialized")
tem = TemV1Alpha1API(client)
current_app.logger.debug(f"Tem Initialized")
from_ = CreateEmailRequestAddress(email=current_app.config['SW_EMAIL_SENDER'],
name=current_app.config['SW_EMAIL_NAME'])
to_ = CreateEmailRequestAddress(email=to_email, name=to_name)
@@ -34,7 +29,6 @@ def send_email(to_email, to_name, subject, html):
html=html,
project_id=default_project_id,
)
current_app.logger.debug(f"Email sent to {to_email}")
def html_to_text(html_content):

View File

@@ -4,6 +4,9 @@ from flask import Flask
import io
from werkzeug.datastructures import FileStorage
MIB_CONVERTOR = 1_048_576
class MinioClient:
def __init__(self):
self.client = None
@@ -33,8 +36,8 @@ class MinioClient:
def generate_object_name(self, document_id, language, version_id, filename):
return f"{document_id}/{language}/{version_id}/{filename}"
def generate_asset_name(self, asset_version_id, file_name, content_type):
return f"assets/{asset_version_id}/{file_name}.{content_type}"
def generate_asset_name(self, asset_id, asset_type, content_type):
return f"assets/{asset_type}/{asset_id}.{content_type}"
def upload_document_file(self, tenant_id, document_id, language, version_id, filename, file_data):
bucket_name = self.generate_bucket_name(tenant_id)
@@ -57,8 +60,10 @@ class MinioClient:
except S3Error as err:
raise Exception(f"Error occurred while uploading file: {err}")
def upload_asset_file(self, bucket_name, asset_version_id, file_name, file_type, file_data):
object_name = self.generate_asset_name(asset_version_id, file_name, file_type)
def upload_asset_file(self, tenant_id: int, asset_id: int, asset_type: str, file_type: str,
file_data: bytes | FileStorage | io.BytesIO | str, ) -> tuple[str, str, int]:
bucket_name = self.generate_bucket_name(tenant_id)
object_name = self.generate_asset_name(asset_id, asset_type, file_type)
try:
if isinstance(file_data, FileStorage):
@@ -73,7 +78,7 @@ class MinioClient:
self.client.put_object(
bucket_name, object_name, io.BytesIO(file_data), len(file_data)
)
return object_name, len(file_data)
return bucket_name, object_name, len(file_data)
except S3Error as err:
raise Exception(f"Error occurred while uploading asset: {err}")
@@ -84,6 +89,13 @@ class MinioClient:
except S3Error as err:
raise Exception(f"Error occurred while downloading file: {err}")
def download_asset_file(self, tenant_id, bucket_name, object_name):
try:
response = self.client.get_object(bucket_name, object_name)
return response.read()
except S3Error as err:
raise Exception(f"Error occurred while downloading asset: {err}")
def list_document_files(self, tenant_id, document_id, language=None, version_id=None):
bucket_name = self.generate_bucket_name(tenant_id)
prefix = f"{document_id}/"
@@ -105,3 +117,16 @@ class MinioClient:
return True
except S3Error as err:
raise Exception(f"Error occurred while deleting file: {err}")
def delete_object(self, bucket_name, object_name):
try:
self.client.remove_object(bucket_name, object_name)
except S3Error as err:
raise Exception(f"Error occurred while deleting object: {err}")
def get_bucket_size(self, tenant_id: int) -> int:
bucket_name = self.generate_bucket_name(tenant_id)
total_size = 0
for obj in self.client.list_objects(bucket_name, recursive=True):
total_size += obj.size
return total_size
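The asset naming scheme above changed from a per-version file name to an `assets/<type>/<id>.<ext>` layout, so there is one object per asset, grouped by asset type. A pure-function sketch (the sample values are made up):

```python
def generate_asset_name(asset_id, asset_type, content_type):
    # New layout: one object per asset, grouped by asset type.
    return f"assets/{asset_type}/{asset_id}.{content_type}"

print(generate_asset_name(42, 'ko_criteria', 'yaml'))
# assets/ko_criteria/42.yaml
```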

View File

@@ -56,7 +56,9 @@ def replace_variable_in_template(template: str, variable: str, value: str) -> st
Returns:
str: Template with variable placeholder replaced
"""
return template.replace(variable, value or "")
modified_template = template.replace(f"{{{variable}}}", value or "")
return modified_template
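The bug here was that `replace` was called with the bare variable name, so the `{variable}` placeholder in the template never matched and the value went missing. A sketch of the corrected behaviour:

```python
def replace_variable_in_template(template, variable, value):
    # Wrap the variable name in braces so it matches the placeholder,
    # and fall back to "" when the value is None.
    return template.replace(f"{{{variable}}}", value or "")

print(replace_variable_in_template("Hello {name}!", "name", "Eve"))  # Hello Eve!
print(replace_variable_in_template("Hello {name}!", "name", None))   # Hello !
```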
def get_embedding_model_and_class(tenant_id, catalog_id, full_embedding_name="mistral.mistral-embed"):

View File

@@ -12,7 +12,16 @@ def prefixed_url_for(endpoint, **values):
if external:
path, query, fragment = urlsplit(generated_url)[2:5]
# Check if the prefix is already present in the path
if prefix and not path.startswith(prefix):
new_path = prefix + path
else:
new_path = path
return urlunsplit((scheme, host, new_path, query, fragment))
else:
# Check if the prefix is already present in the generated URL
if prefix and not generated_url.startswith(prefix):
return prefix + generated_url
else:
return generated_url
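The guard added above only prepends the proxy prefix when the URL does not already carry it, making the helper idempotent. A standalone sketch of that check (the function name and the `/eveai` prefix are made-up examples):

```python
from urllib.parse import urlsplit, urlunsplit

def apply_prefix(generated_url, prefix):
    # Prepend the reverse-proxy prefix unless the URL already has it.
    scheme, host, path, query, fragment = urlsplit(generated_url)
    if scheme or host:
        # External URL: prefix only the path component.
        new_path = path if not prefix or path.startswith(prefix) else prefix + path
        return urlunsplit((scheme, host, new_path, query, fragment))
    if prefix and not generated_url.startswith(prefix):
        return prefix + generated_url
    return generated_url

print(apply_prefix('/admin/users', '/eveai'))        # /eveai/admin/users
print(apply_prefix('/eveai/admin/users', '/eveai'))  # unchanged
```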

View File

@@ -12,7 +12,6 @@ from datetime import datetime as dt, timezone as tz
def set_tenant_session_data(sender, user, **kwargs):
tenant = Tenant.query.filter_by(id=user.tenant_id).first()
session['tenant'] = tenant.to_dict()
session['default_language'] = tenant.default_language
partner = Partner.query.filter_by(tenant_id=user.tenant_id).first()
if partner:
session['partner'] = partner.to_dict()

View File

@@ -1,196 +0,0 @@
from datetime import datetime as dt, timezone as tz
from typing import Optional, Dict, Any
from flask import current_app
from sqlalchemy.exc import SQLAlchemyError
from common.extensions import db, cache_manager
from common.models.interaction import (
Specialist, EveAIAgent, EveAITask, EveAITool
)
from common.utils.model_logging_utils import set_logging_information, update_logging_information
def initialize_specialist(specialist_id: int, specialist_type: str, specialist_version: str):
"""
Initialize an agentic specialist by creating all its components based on configuration.
Args:
specialist_id: ID of the specialist to initialize
specialist_type: Type of the specialist
specialist_version: Version of the specialist type to use
Raises:
ValueError: If specialist not found or invalid configuration
SQLAlchemyError: If database operations fail
"""
config = cache_manager.specialists_config_cache.get_config(specialist_type, specialist_version)
if not config:
raise ValueError(f"No configuration found for {specialist_type} version {specialist_version}")
if config['framework'] == 'langchain':
pass # Langchain does not require additional items to be initialized. All configuration is in the specialist.
specialist = Specialist.query.get(specialist_id)
if not specialist:
raise ValueError(f"Specialist with ID {specialist_id} not found")
if config['framework'] == 'crewai':
initialize_crewai_specialist(specialist, config)
def initialize_crewai_specialist(specialist: Specialist, config: Dict[str, Any]):
timestamp = dt.now(tz=tz.utc)
try:
# Initialize agents
if 'agents' in config:
for agent_config in config['agents']:
_create_agent(
specialist_id=specialist.id,
agent_type=agent_config['type'],
agent_version=agent_config['version'],
name=agent_config.get('name'),
description=agent_config.get('description'),
timestamp=timestamp
)
# Initialize tasks
if 'tasks' in config:
for task_config in config['tasks']:
_create_task(
specialist_id=specialist.id,
task_type=task_config['type'],
task_version=task_config['version'],
name=task_config.get('name'),
description=task_config.get('description'),
timestamp=timestamp
)
# Initialize tools
if 'tools' in config:
for tool_config in config['tools']:
_create_tool(
specialist_id=specialist.id,
tool_type=tool_config['type'],
tool_version=tool_config['version'],
name=tool_config.get('name'),
description=tool_config.get('description'),
timestamp=timestamp
)
db.session.commit()
current_app.logger.info(f"Successfully initialized crewai specialist {specialist.id}")
except SQLAlchemyError as e:
db.session.rollback()
current_app.logger.error(f"Database error initializing crewai specialist {specialist.id}: {str(e)}")
raise
except Exception as e:
db.session.rollback()
current_app.logger.error(f"Error initializing crewai specialist {specialist.id}: {str(e)}")
raise
def _create_agent(
specialist_id: int,
agent_type: str,
agent_version: str,
name: Optional[str] = None,
description: Optional[str] = None,
timestamp: Optional[dt] = None
) -> EveAIAgent:
"""Create an agent with the given configuration."""
if timestamp is None:
timestamp = dt.now(tz=tz.utc)
# Get agent configuration from cache
agent_config = cache_manager.agents_config_cache.get_config(agent_type, agent_version)
agent = EveAIAgent(
specialist_id=specialist_id,
name=name or agent_config.get('name', agent_type),
description=description or agent_config.get('metadata').get('description', ''),
type=agent_type,
type_version=agent_version,
role=None,
goal=None,
backstory=None,
tuning=False,
configuration=None,
arguments=None
)
set_logging_information(agent, timestamp)
db.session.add(agent)
current_app.logger.info(f"Created agent {agent.id} of type {agent_type}")
return agent
def _create_task(
specialist_id: int,
task_type: str,
task_version: str,
name: Optional[str] = None,
description: Optional[str] = None,
timestamp: Optional[dt] = None
) -> EveAITask:
"""Create a task with the given configuration."""
if timestamp is None:
timestamp = dt.now(tz=tz.utc)
# Get task configuration from cache
task_config = cache_manager.tasks_config_cache.get_config(task_type, task_version)
task = EveAITask(
specialist_id=specialist_id,
name=name or task_config.get('name', task_type),
description=description or task_config.get('metadata').get('description', ''),
type=task_type,
type_version=task_version,
task_description=None,
expected_output=None,
tuning=False,
configuration=None,
arguments=None,
context=None,
asynchronous=False,
)
set_logging_information(task, timestamp)
db.session.add(task)
current_app.logger.info(f"Created task {task.id} of type {task_type}")
return task
def _create_tool(
specialist_id: int,
tool_type: str,
tool_version: str,
name: Optional[str] = None,
description: Optional[str] = None,
timestamp: Optional[dt] = None
) -> EveAITool:
"""Create a tool with the given configuration."""
if timestamp is None:
timestamp = dt.now(tz=tz.utc)
# Get tool configuration from cache
tool_config = cache_manager.tools_config_cache.get_config(tool_type, tool_version)
tool = EveAITool(
specialist_id=specialist_id,
name=name or tool_config.get('name', tool_type),
description=description or tool_config.get('metadata').get('description', ''),
type=tool_type,
type_version=tool_version,
tuning=False,
configuration=None,
arguments=None,
)
set_logging_information(tool, timestamp)
db.session.add(tool)
current_app.logger.info(f"Created tool {tool.id} of type {tool_type}")
return tool

View File

@@ -5,6 +5,7 @@ import markdown
from markupsafe import Markup
from datetime import datetime
from common.utils.nginx_utils import prefixed_url_for as puf
from common.utils.chat_utils import adjust_color_brightness, adjust_color_alpha, get_base_background_color
from flask import current_app, url_for
@@ -98,7 +99,6 @@ def get_pagination_html(pagination, endpoint, **kwargs):
if page:
is_active = 'active' if page == pagination.page else ''
url = url_for(endpoint, page=page, **kwargs)
current_app.logger.debug(f"URL for page {page}: {url}")
html.append(f'<li class="page-item {is_active}"><a class="page-link" href="{url}">{page}</a></li>')
else:
html.append('<li class="page-item disabled"><span class="page-link">...</span></li>')
@@ -117,7 +117,10 @@ def register_filters(app):
app.jinja_env.filters['prefixed_url_for'] = prefixed_url_for
app.jinja_env.filters['markdown'] = render_markdown
app.jinja_env.filters['clean_markdown'] = clean_markdown
app.jinja_env.filters['adjust_color_brightness'] = adjust_color_brightness
app.jinja_env.filters['adjust_color_alpha'] = adjust_color_alpha
app.jinja_env.globals['prefixed_url_for'] = prefixed_url_for
app.jinja_env.globals['get_pagination_html'] = get_pagination_html
app.jinja_env.globals['get_base_background_color'] = get_base_background_color

View File

@@ -0,0 +1,26 @@
version: "1.0.0"
name: "Partner Rag Agent"
role: >
You are a virtual assistant responsible for answering user questions about the Evie platform (Ask Eve AI) and products
developed by partners on top of it. You are a reliable point of contact for end-users seeking help, clarification, or
deeper understanding of features, capabilities, integrations, or workflows related to these AI-powered solutions.
goal: >
Your primary goal is to:
• Provide clear, relevant, and accurate responses to user questions.
• Reduce friction in user onboarding and daily usage.
• Increase user confidence and adoption of both the platform and partner-developed products.
• Act as a bridge between documentation and practical application, enabling users to help themselves through intelligent guidance.
backstory: >
You have access to Evie's own documentation, partner product manuals, and real user interactions. You are designed
to replace passive documentation with active, contextual assistance.
You have evolved beyond a support bot: you combine knowledge, reasoning, and a friendly tone to act as a product
companion that grows with the ecosystem. As partner products expand, you update your knowledge and learn to
distinguish between general platform capabilities and product-specific nuances, offering a personalised experience
each time.
full_model_name: "mistral.mistral-medium-latest"
temperature: 0.3
metadata:
author: "Josako"
date_added: "2025-07-16"
description: "An Agent that does RAG based on a user's question, RAG content & history"
changes: "Initial version"

View File

@@ -0,0 +1,23 @@
version: "1.0.0"
name: "Rag Agent"
role: >
{tenant_name} Spokesperson. {custom_role}
goal: >
You get questions by a human correspondent, and give answers based on a given context, taking into account the history
of the current conversation.
{custom_goal}
backstory: >
You are the primary contact for {tenant_name}. You are known as {name}, and can be addressed by this name, or as "you". You are
a very good communicator, and adapt to the style used by the human asking for information (e.g. formal or informal).
You always stay correct and polite, whatever happens. And you ensure no discriminating language is used.
You are perfectly multilingual in all known languages, and do your best to answer questions in {language}, whatever
language the context provided to you is in. You are participating in a conversation, not writing e.g. an email. Do not
include a salutation or closing greeting in your answer.
{custom_backstory}
full_model_name: "mistral.mistral-medium-latest"
temperature: 0.5
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "An Agent that does RAG based on a user's question, RAG content & history"
changes: "Initial version"

View File

@@ -0,0 +1,25 @@
version: "1.0.0"
name: "Traicie Recruiter"
role: >
You are an Expert Recruiter working for {tenant_name}
{custom_role}
goal: >
As an expert recruiter, you identify, attract, and secure top talent by building genuine relationships, deeply
understanding business needs, and ensuring optimal alignment between candidate potential and organizational goals,
while championing diversity, culture fit, and long-term retention.
{custom_goal}
backstory: >
You started your career in a high-pressure agency setting, where you quickly learned the art of fast-paced hiring and
relationship building. Over the years, you moved in-house, partnering closely with business leaders to shape
recruitment strategies that go beyond filling roles—you focus on finding the right people to drive growth and culture.
With a strong grasp of both tech and non-tech profiles, you've adapted to changing trends, from remote work to
AI-driven sourcing. You're more than a recruiter: you're a trusted advisor, a brand ambassador, and a connector of
people and purpose.
{custom_backstory}
full_model_name: "mistral.mistral-medium-latest"
temperature: 0.3
metadata:
author: "Josako"
date_added: "2025-06-18"
description: "Traicie Recruiter Agent"
changes: "Initial version"

View File

@@ -0,0 +1,25 @@
version: "1.0.1"
name: "Traicie Recruiter"
role: >
You are an Expert Recruiter working for {tenant_name}, known as {name}. You can be addressed as {name}
{custom_role}
goal: >
As an expert recruiter, you identify, attract, and secure top talent by building genuine relationships, deeply
understanding business needs, and ensuring optimal alignment between candidate potential and organizational goals,
while championing diversity, culture fit, and long-term retention.
{custom_goal}
backstory: >
You started your career in a high-pressure agency setting, where you quickly learned the art of fast-paced hiring and
relationship building. Over the years, you moved in-house, partnering closely with business leaders to shape
recruitment strategies that go beyond filling roles—you focus on finding the right people to drive growth and culture.
With a strong grasp of both tech and non-tech profiles, you've adapted to changing trends, from remote work to
AI-driven sourcing. You're more than a recruiter: you're a trusted advisor, a brand ambassador, and a connector of
people and purpose.
{custom_backstory}
full_model_name: "mistral.mistral-medium-latest"
temperature: 0.3
metadata:
author: "Josako"
date_added: "2025-07-03"
description: "Traicie Recruiter Agent"
changes: "Ensure recruiter can be addressed by a name"

View File

@@ -0,0 +1,15 @@
version: "1.0.0"
name: "Traicie KO Criteria Questions"
file_type: "yaml"
dynamic: true
configuration:
specialist_id:
name: "Specialist ID"
type: "int"
description: "The Specialist this asset is created for"
required: True
metadata:
author: "Josako"
date_added: "2025-07-01"
description: "Asset that defines a KO Criteria Questions and Answers"
changes: "Initial version"

View File

@@ -0,0 +1,19 @@
version: "1.0.0"
name: "Role Definition Catalog"
description: "A Catalog containing information specific to a specific role"
configuration:
tagging_fields:
role_reference:
type: "string"
required: true
description: "A unique identification for the role"
document_type:
type: "enum"
required: true
description: "Type of document"
allowed_values: [ "Intake", "Vacancy Text", "Additional Information" ]
document_version_configurations: ["tagging_fields"]
metadata:
author: "Josako"
date_added: "2025-07-07"
description: "A Catalog containing information specific to a specific role"

View File

@@ -12,10 +12,7 @@ class Config(object):
DEBUG = False
DEVELOPMENT = False
SECRET_KEY = environ.get('SECRET_KEY')
SESSION_COOKIE_SECURE = False
SESSION_COOKIE_HTTPONLY = True
COMPONENT_NAME = environ.get('COMPONENT_NAME')
SESSION_KEY_PREFIX = f'{COMPONENT_NAME}_'
# Database Settings
DB_HOST = environ.get('DB_HOST')
@@ -44,8 +41,6 @@ class Config(object):
# SECURITY_POST_CHANGE_VIEW = '/admin/login'
# SECURITY_BLUEPRINT_NAME = 'security_bp'
SECURITY_PASSWORD_SALT = environ.get('SECURITY_PASSWORD_SALT')
REMEMBER_COOKIE_SAMESITE = 'strict'
SESSION_COOKIE_SAMESITE = 'Lax'
SECURITY_CONFIRMABLE = True
SECURITY_TRACKABLE = True
SECURITY_PASSWORD_COMPLEXITY_CHECKER = 'zxcvbn'
@@ -56,6 +51,10 @@ class Config(object):
SECURITY_EMAIL_SUBJECT_PASSWORD_NOTICE = 'Your Password Has Been Reset'
SECURITY_EMAIL_PLAINTEXT = False
SECURITY_EMAIL_HTML = True
SECURITY_SESSION_PROTECTION = 'basic'  # or 'basic' if 'strong' causes problems
SECURITY_REMEMBER_TOKEN_VALIDITY = timedelta(minutes=60)  # Same as session lifetime
SECURITY_AUTO_LOGIN_AFTER_CONFIRM = True
SECURITY_AUTO_LOGIN_AFTER_RESET = True
# Ensure Flask-Security-Too is handling CSRF tokens when behind a proxy
SECURITY_CSRF_PROTECT_MECHANISMS = ['session']
@@ -67,7 +66,91 @@ class Config(object):
MAX_CONTENT_LENGTH = 50 * 1024 * 1024
# supported languages
SUPPORTED_LANGUAGES = ['en', 'fr', 'nl', 'de', 'es']
SUPPORTED_LANGUAGE_DETAILS = {
"English": {
"iso 639-1": "en",
"iso 639-2": "eng",
"iso 639-3": "eng",
"flag": "🇬🇧"
},
"French": {
"iso 639-1": "fr",
"iso 639-2": "fre", # or 'fra'
"iso 639-3": "fra",
"flag": "🇫🇷"
},
"German": {
"iso 639-1": "de",
"iso 639-2": "ger", # or 'deu'
"iso 639-3": "deu",
"flag": "🇩🇪"
},
"Spanish": {
"iso 639-1": "es",
"iso 639-2": "spa",
"iso 639-3": "spa",
"flag": "🇪🇸"
},
"Italian": {
"iso 639-1": "it",
"iso 639-2": "ita",
"iso 639-3": "ita",
"flag": "🇮🇹"
},
"Portuguese": {
"iso 639-1": "pt",
"iso 639-2": "por",
"iso 639-3": "por",
"flag": "🇵🇹"
},
"Dutch": {
"iso 639-1": "nl",
"iso 639-2": "dut", # or 'nld'
"iso 639-3": "nld",
"flag": "🇳🇱"
},
"Russian": {
"iso 639-1": "ru",
"iso 639-2": "rus",
"iso 639-3": "rus",
"flag": "🇷🇺"
},
"Chinese": {
"iso 639-1": "zh",
"iso 639-2": "chi", # or 'zho'
"iso 639-3": "zho",
"flag": "🇨🇳"
},
"Japanese": {
"iso 639-1": "ja",
"iso 639-2": "jpn",
"iso 639-3": "jpn",
"flag": "🇯🇵"
},
"Korean": {
"iso 639-1": "ko",
"iso 639-2": "kor",
"iso 639-3": "kor",
"flag": "🇰🇷"
},
"Arabic": {
"iso 639-1": "ar",
"iso 639-2": "ara",
"iso 639-3": "ara",
"flag": "🇸🇦"
},
"Hindi": {
"iso 639-1": "hi",
"iso 639-2": "hin",
"iso 639-3": "hin",
"flag": "🇮🇳"
},
}
# Derived language constants
SUPPORTED_LANGUAGES = [lang_details["iso 639-1"] for lang_details in SUPPORTED_LANGUAGE_DETAILS.values()]
SUPPORTED_LANGUAGES_FULL = list(SUPPORTED_LANGUAGE_DETAILS.keys())
SUPPORTED_LANGUAGE_ISO639_1_LOOKUP = {lang_details["iso 639-1"]: lang_name for lang_name, lang_details in SUPPORTED_LANGUAGE_DETAILS.items()}
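
The derived constants can be exercised in isolation. A minimal sketch, assuming the same dictionary shape as `SUPPORTED_LANGUAGE_DETAILS` above, with the entry set abbreviated to two languages for illustration:

```python
# Abbreviated copy of the SUPPORTED_LANGUAGE_DETAILS shape used above
SUPPORTED_LANGUAGE_DETAILS = {
    "English": {"iso 639-1": "en", "iso 639-2": "eng", "iso 639-3": "eng", "flag": "🇬🇧"},
    "Dutch": {"iso 639-1": "nl", "iso 639-2": "dut", "iso 639-3": "nld", "flag": "🇳🇱"},
}

# Derived constants, built exactly as in the configuration above
SUPPORTED_LANGUAGES = [d["iso 639-1"] for d in SUPPORTED_LANGUAGE_DETAILS.values()]
SUPPORTED_LANGUAGES_FULL = list(SUPPORTED_LANGUAGE_DETAILS.keys())
SUPPORTED_LANGUAGE_ISO639_1_LOOKUP = {
    d["iso 639-1"]: name for name, d in SUPPORTED_LANGUAGE_DETAILS.items()
}

print(SUPPORTED_LANGUAGE_ISO639_1_LOOKUP["nl"])  # prints: Dutch
```

Because Python dictionaries preserve insertion order, the ISO-code list and the full-name list stay aligned with the order of the source dictionary.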
# supported currencies
SUPPORTED_CURRENCIES = ['€', '$']
@@ -75,10 +158,7 @@ class Config(object):
# supported LLMs
# SUPPORTED_EMBEDDINGS = ['openai.text-embedding-3-small', 'openai.text-embedding-3-large', 'mistral.mistral-embed']
SUPPORTED_EMBEDDINGS = ['mistral.mistral-embed']
SUPPORTED_LLMS = ['openai.gpt-4o', 'openai.gpt-4o-mini',
'mistral.mistral-large-latest', 'mistral.mistral-medium-latest', 'mistral.mistral-small-latest']
ANTHROPIC_LLM_VERSIONS = {'claude-3-5-sonnet': 'claude-3-5-sonnet-20240620', }
SUPPORTED_LLMS = ['mistral.mistral-large-latest', 'mistral.mistral-medium-latest', 'mistral.mistral-small-latest']
# Annotation text chunk length
ANNOTATION_TEXT_CHUNK_LENGTH = 10000
@@ -107,6 +187,15 @@ class Config(object):
PERMANENT_SESSION_LIFETIME = timedelta(minutes=60)
SESSION_REFRESH_EACH_REQUEST = True
SESSION_COOKIE_NAME = f'{COMPONENT_NAME}_session'
SESSION_COOKIE_DOMAIN = None  # Let Flask determine this automatically
SESSION_COOKIE_PATH = '/'
SESSION_COOKIE_HTTPONLY = True
SESSION_COOKIE_SECURE = False  # True for production with HTTPS
SESSION_COOKIE_SAMESITE = 'Lax'
REMEMBER_COOKIE_SAMESITE = 'strict'
SESSION_KEY_PREFIX = f'{COMPONENT_NAME}_'
# JWT settings
JWT_SECRET_KEY = environ.get('JWT_SECRET_KEY')
JWT_ACCESS_TOKEN_EXPIRES = timedelta(hours=1) # Set token expiry to 1 hour
@@ -185,6 +274,7 @@ class DevConfig(Config):
# Define the nginx prefix used for the specific apps
EVEAI_APP_LOCATION_PREFIX = '/admin'
EVEAI_CHAT_LOCATION_PREFIX = '/chat'
CHAT_CLIENT_PREFIX = 'chat-client/chat/'
# file upload settings
# UPLOAD_FOLDER = '/app/tenant_files'
@@ -205,6 +295,8 @@ class DevConfig(Config):
CHAT_WORKER_CACHE_URL = f'{REDIS_BASE_URI}/4'
# specialist execution pub/sub Redis Settings
SPECIALIST_EXEC_PUBSUB = f'{REDIS_BASE_URI}/5'
# eveai_model cache Redis setting
MODEL_CACHE_URL = f'{REDIS_BASE_URI}/6'
# Unstructured settings

View File

@@ -0,0 +1,89 @@
version: "1.0.0"
name: "Chat Client Customisation"
configuration:
sidebar_markdown:
name: "Sidebar Markdown"
description: "Sidebar Markdown-formatted Text"
type: "text"
required: false
sidebar_color:
name: "Sidebar Text Color"
description: "Sidebar Color"
type: "color"
required: false
sidebar_background:
name: "Sidebar Background Color"
description: "Sidebar Background Color"
type: "color"
required: false
markdown_background_color:
name: "Markdown Background Color"
description: "Markdown Background Color"
type: "color"
required: false
markdown_text_color:
name: "Markdown Text Color"
description: "Markdown Text Color"
type: "color"
required: false
gradient_start_color:
name: "Chat Gradient Background Start Color"
description: "Start Color for the gradient in the Chat Area"
type: "color"
required: false
gradient_end_color:
name: "Chat Gradient Background End Color"
description: "End Color for the gradient in the Chat Area"
type: "color"
required: false
progress_tracker_insights:
name: "Progress Tracker Insights Level"
description: "Level of information shown by the Progress Tracker"
type: "enum"
allowed_values: ["No Information", "Active Interaction Only", "All Interactions"]
default: "No Information"
required: true
form_title_display:
name: "Form Title Display"
description: "Level of information shown for the Form Title"
type: "enum"
allowed_values: ["No Title", "Full Title"]
default: "Full Title"
required: true
active_background_color:
name: "Active Interaction Background Color"
description: "Primary Color"
type: "color"
required: false
history_background:
name: "History Background"
description: "Percentage to lighten (+) / darken (-) the user message background"
type: "integer"
min_value: -50
max_value: 50
required: false
ai_message_background:
name: "AI (Bot) Message Background Color"
description: "AI (Bot) Message Background Color"
type: "color"
required: false
ai_message_text_color:
name: "AI (Bot) Message Text Color"
description: "AI (Bot) Message Text Color"
type: "color"
required: false
human_message_background:
name: "Human Message Background Color"
description: "Human Message Background Color"
type: "color"
required: false
human_message_text_color:
name: "Human Message Text Color"
description: "Human Message Text Color"
type: "color"
required: false
metadata:
author: "Josako"
date_added: "2024-06-06"
changes: "Adaptations to make color choosing more consistent and user friendly"
description: "Parameters allowing to customise the chat client"
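
The "History Background" setting above expresses a lighten (+) / darken (-) percentage in the -50..50 range. A hypothetical helper showing one way a client could apply it; the function name and hex handling are assumptions, not the actual chat client implementation:

```python
def adjust_color(hex_color: str, percent: int) -> str:
    """Lighten (positive percent) or darken (negative percent) a '#rrggbb' color."""
    # Clamp to the -50..50 range declared in the configuration above
    percent = max(-50, min(50, percent))
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    if percent >= 0:
        # Move each channel toward white
        adjust = lambda c: round(c + (255 - c) * percent / 100)
    else:
        # Move each channel toward black
        adjust = lambda c: round(c * (100 + percent) / 100)
    return '#{:02x}{:02x}{:02x}'.format(adjust(r), adjust(g), adjust(b))


print(adjust_color('#808080', 50))   # halfway toward white
print(adjust_color('#808080', -50))  # halfway toward black
```

Clamping first means out-of-range values degrade gracefully instead of producing colors outside the intended contrast band.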

View File

@@ -0,0 +1,8 @@
version: "1.0.0"
name: "RQC"
description: "Recruitment Qualified Candidate"
configuration: {}
metadata:
author: "Josako"
date_added: "2025-07-24"
description: "Capsule storing RQC information"

View File

@@ -1,15 +1,13 @@
import json
import os
import sys
from datetime import datetime as dt, timezone as tz
from flask import current_app
from graypy import GELFUDPHandler
import logging
import logging.config
# Graylog configuration
GRAYLOG_HOST = os.environ.get('GRAYLOG_HOST', 'localhost')
GRAYLOG_PORT = int(os.environ.get('GRAYLOG_PORT', 12201))
env = os.environ.get('FLASK_ENV', 'development')
@@ -144,23 +142,6 @@ class TuningFormatter(logging.Formatter):
return formatted_msg
class GraylogFormatter(logging.Formatter):
"""Maintains existing Graylog formatting while adding tuning fields"""
def format(self, record):
if getattr(record, 'is_tuning_log', False):
# Add tuning-specific fields to Graylog
record.tuning_fields = {
'is_tuning_log': True,
'tuning_type': record.tuning_type,
'tenant_id': record.tenant_id,
'catalog_id': record.catalog_id,
'specialist_id': record.specialist_id,
'retriever_id': record.retriever_id,
'processor_id': record.processor_id,
'session_id': record.session_id,
}
return super().format(record)
class TuningLogger:
"""Helper class to manage tuning logs with consistent structure"""
@@ -177,10 +158,10 @@ class TuningLogger:
specialist_id: Optional specialist ID for context
retriever_id: Optional retriever ID for context
processor_id: Optional processor ID for context
session_id: Optional session ID for context and log file naming
log_file: Optional custom log file name to use
session_id: Optional session ID for context
log_file: Optional custom log file name (ignored - all logs go to tuning.log)
"""
# Always use the standard tuning logger
self.logger = logging.getLogger(logger_name)
self.tenant_id = tenant_id
self.catalog_id = catalog_id
@@ -188,61 +169,6 @@ class TuningLogger:
self.retriever_id = retriever_id
self.processor_id = processor_id
self.session_id = session_id
self.log_file = log_file
# Determine whether to use a session-specific logger
if session_id:
# Create a unique logger name for this session
session_logger_name = f"{logger_name}_{session_id}"
self.logger = logging.getLogger(session_logger_name)
# If this logger doesn't have handlers yet, configure it
if not self.logger.handlers:
# Determine log file path
if not log_file and session_id:
log_file = f"logs/tuning_{session_id}.log"
elif not log_file:
log_file = "logs/tuning.log"
# Configure the logger
self._configure_session_logger(log_file)
else:
# Use the standard tuning logger
self.logger = logging.getLogger(logger_name)
def _configure_session_logger(self, log_file):
"""Configure a new session-specific logger with appropriate handlers"""
# Create and configure a file handler
file_handler = logging.handlers.RotatingFileHandler(
filename=log_file,
maxBytes=1024 * 1024 * 3, # 3MB
backupCount=3
)
file_handler.setFormatter(TuningFormatter())
file_handler.setLevel(logging.DEBUG)
# Add the file handler to the logger
self.logger.addHandler(file_handler)
# Add Graylog handler in production
env = os.environ.get('FLASK_ENV', 'development')
if env == 'production':
try:
graylog_handler = GELFUDPHandler(
host=GRAYLOG_HOST,
port=GRAYLOG_PORT,
debugging_fields=True
)
graylog_handler.setFormatter(GraylogFormatter())
self.logger.addHandler(graylog_handler)
except Exception as e:
# Fall back to just file logging if Graylog setup fails
fallback_logger = logging.getLogger('eveai_app')
fallback_logger.warning(f"Failed to set up Graylog handler: {str(e)}")
# Set logger level and disable propagation
self.logger.setLevel(logging.DEBUG)
self.logger.propagate = False
def log_tuning(self, tuning_type: str, message: str, data=None, level=logging.DEBUG):
"""Log a tuning event with structured data"""
@@ -275,13 +201,82 @@ def log_tuning(self, tuning_type: str, message: str, data=None, level=logging.DE
self.logger.handle(record)
except Exception as e:
fallback_logger = logging.getLogger('eveai_workers')
fallback_logger.exception(f"Failed to log tuning message: {str(e)}")
print(f"Failed to log tuning message: {str(e)}")
# Set the custom log record factory
logging.setLogRecordFactory(TuningLogRecord)
def configure_logging():
"""Configure logging based on environment
When running in Kubernetes, directs logs to stdout in JSON format
Otherwise uses file-based logging for development/testing
"""
try:
# Get the absolute path to the logs directory
base_dir = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))
logs_dir = os.path.join(base_dir, 'logs')
# Ensure the logs directory exists with the correct permissions
if not os.path.exists(logs_dir):
try:
os.makedirs(logs_dir, exist_ok=True)
print(f"Logs directory created at: {logs_dir}")
except (IOError, PermissionError) as e:
print(f"WARNING: Cannot create logs directory: {e}")
print("Logs may not be written correctly!")
# Check if running in Kubernetes
in_kubernetes = os.environ.get('KUBERNETES_SERVICE_HOST') is not None
# Check whether the pythonjsonlogger package is available when running in Kubernetes
if in_kubernetes:
try:
import pythonjsonlogger.jsonlogger
has_json_logger = True
except ImportError:
print("WARNING: the python-json-logger package is not installed.")
print("Run 'pip install python-json-logger>=2.0.7' to enable JSON logging.")
print("Falling back to the standard logging format.")
has_json_logger = False
in_kubernetes = False # Fall back to standard logging
else:
has_json_logger = False
# Apply the configuration
logging_config = dict(LOGGING)
# Modify the json_console handler to fall back to the console if pythonjsonlogger is not available
if not has_json_logger and 'json_console' in logging_config['handlers']:
# Replace the json_console handler with a console handler using the standard formatter
logging_config['handlers']['json_console']['formatter'] = 'standard'
# In Kubernetes, conditionally modify specific loggers to use JSON console output
# This preserves the same logger names but changes where/how they log
if in_kubernetes:
for logger_name in logging_config['loggers']:
if logger_name: # Skip the root logger
logging_config['loggers'][logger_name]['handlers'] = ['json_console']
# Check that the logs directory is writable before applying the configuration
logs_dir = os.path.join(os.path.abspath(os.path.dirname(os.path.dirname(__file__))), 'logs')
if os.path.exists(logs_dir) and not os.access(logs_dir, os.W_OK):
print(f"WARNING: Logs directory exists but is not writable: {logs_dir}")
print("Logs may not be written correctly!")
logging.config.dictConfig(logging_config)
logging.info(f"Logging configured. Environment: {'Kubernetes' if in_kubernetes else 'Development/Testing'}")
logging.info(f"Logs directory: {logs_dir}")
except Exception as e:
print(f"Error configuring logging: {str(e)}")
print("Detailed error information:")
import traceback
traceback.print_exc()
# Fall back to basic configuration
logging.basicConfig(level=logging.INFO)
LOGGING = {
'version': 1,
@@ -290,7 +285,7 @@ LOGGING = {
'file_app': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/eveai_app.log',
'filename': os.path.join(os.path.abspath(os.path.dirname(os.path.dirname(__file__))), 'logs', 'eveai_app.log'),
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
@@ -298,15 +293,15 @@ LOGGING = {
'file_workers': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/eveai_workers.log',
'filename': os.path.join(os.path.abspath(os.path.dirname(os.path.dirname(__file__))), 'logs', 'eveai_workers.log'),
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
},
'file_chat': {
'file_chat_client': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/eveai_chat.log',
'filename': os.path.join(os.path.abspath(os.path.dirname(os.path.dirname(__file__))), 'logs', 'eveai_chat_client.log'),
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
@@ -314,7 +309,7 @@ LOGGING = {
'file_chat_workers': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/eveai_chat_workers.log',
'filename': os.path.join(os.path.abspath(os.path.dirname(os.path.dirname(__file__))), 'logs', 'eveai_chat_workers.log'),
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
@@ -322,7 +317,7 @@ LOGGING = {
'file_api': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/eveai_api.log',
'filename': os.path.join(os.path.abspath(os.path.dirname(os.path.dirname(__file__))), 'logs', 'eveai_api.log'),
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
@@ -330,7 +325,7 @@ LOGGING = {
'file_beat': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/eveai_beat.log',
'filename': os.path.join(os.path.abspath(os.path.dirname(os.path.dirname(__file__))), 'logs', 'eveai_beat.log'),
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
@@ -338,7 +333,7 @@ LOGGING = {
'file_entitlements': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/eveai_entitlements.log',
'filename': os.path.join(os.path.abspath(os.path.dirname(os.path.dirname(__file__))), 'logs', 'eveai_entitlements.log'),
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
@@ -346,7 +341,7 @@ LOGGING = {
'file_sqlalchemy': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/sqlalchemy.log',
'filename': os.path.join(os.path.abspath(os.path.dirname(os.path.dirname(__file__))), 'logs', 'sqlalchemy.log'),
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
@@ -354,7 +349,7 @@ LOGGING = {
'file_security': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/security.log',
'filename': os.path.join(os.path.abspath(os.path.dirname(os.path.dirname(__file__))), 'logs', 'security.log'),
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
@@ -362,7 +357,7 @@ LOGGING = {
'file_rag_tuning': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/rag_tuning.log',
'filename': os.path.join(os.path.abspath(os.path.dirname(os.path.dirname(__file__))), 'logs', 'rag_tuning.log'),
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
@@ -370,7 +365,7 @@ LOGGING = {
'file_embed_tuning': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/embed_tuning.log',
'filename': os.path.join(os.path.abspath(os.path.dirname(os.path.dirname(__file__))), 'logs', 'embed_tuning.log'),
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
@@ -378,7 +373,7 @@ LOGGING = {
'file_business_events': {
'level': 'INFO',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/business_events.log',
'filename': os.path.join(os.path.abspath(os.path.dirname(os.path.dirname(__file__))), 'logs', 'business_events.log'),
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
@@ -388,98 +383,102 @@ LOGGING = {
'level': 'DEBUG',
'formatter': 'standard',
},
'json_console': {
'class': 'logging.StreamHandler',
'level': 'INFO',
'formatter': 'json',
'stream': 'ext://sys.stdout',
},
'tuning_file': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/tuning.log',
'filename': os.path.join(os.path.abspath(os.path.dirname(os.path.dirname(__file__))), 'logs', 'tuning.log'),
'maxBytes': 1024 * 1024 * 3, # 3MB
'backupCount': 3,
'formatter': 'tuning',
},
'graylog': {
'level': 'DEBUG',
'class': 'graypy.GELFUDPHandler',
'host': GRAYLOG_HOST,
'port': GRAYLOG_PORT,
'debugging_fields': True,
'formatter': 'graylog'
},
},
'formatters': {
'standard': {
'format': '%(asctime)s [%(levelname)s] %(name)s (%(component)s) [%(module)s:%(lineno)d]: %(message)s',
'datefmt': '%Y-%m-%d %H:%M:%S'
},
'graylog': {
'format': '[%(levelname)s] %(name)s (%(component)s) [%(module)s:%(lineno)d in %(funcName)s] '
'[Thread: %(threadName)s]: %(message)s',
'datefmt': '%Y-%m-%d %H:%M:%S',
'()': GraylogFormatter
},
'tuning': {
'()': TuningFormatter,
'datefmt': '%Y-%m-%d %H:%M:%S UTC'
},
'json': {
'format': '%(message)s',
'class': 'logging.Formatter' if 'pythonjsonlogger' not in sys.modules else 'pythonjsonlogger.jsonlogger.JsonFormatter',
'json_default': lambda obj: str(obj) if isinstance(obj, (dt, Exception)) else None,
'json_ensure_ascii': False,
'rename_fields': {
'asctime': 'timestamp',
'levelname': 'severity'
},
'timestamp': True,
'datefmt': '%Y-%m-%dT%H:%M:%S.%fZ'
}
},
'loggers': {
'eveai_app': { # logger for the eveai_app
'handlers': ['file_app', 'graylog', ] if env == 'production' else ['file_app', ],
'handlers': ['file_app'],
'level': 'DEBUG',
'propagate': False
},
'eveai_workers': { # logger for the eveai_workers
'handlers': ['file_workers', 'graylog', ] if env == 'production' else ['file_workers', ],
'handlers': ['file_workers'],
'level': 'DEBUG',
'propagate': False
},
'eveai_chat': { # logger for the eveai_chat
'handlers': ['file_chat', 'graylog', ] if env == 'production' else ['file_chat', ],
'eveai_chat_client': { # logger for the eveai_chat
'handlers': ['file_chat_client'],
'level': 'DEBUG',
'propagate': False
},
'eveai_chat_workers': { # logger for the eveai_chat_workers
'handlers': ['file_chat_workers', 'graylog', ] if env == 'production' else ['file_chat_workers', ],
'handlers': ['file_chat_workers'],
'level': 'DEBUG',
'propagate': False
},
'eveai_api': { # logger for the eveai_chat_workers
'handlers': ['file_api', 'graylog', ] if env == 'production' else ['file_api', ],
'eveai_api': { # logger for the eveai_api
'handlers': ['file_api'],
'level': 'DEBUG',
'propagate': False
},
'eveai_beat': { # logger for the eveai_beat
'handlers': ['file_beat', 'graylog', ] if env == 'production' else ['file_beat', ],
'handlers': ['file_beat'],
'level': 'DEBUG',
'propagate': False
},
'eveai_entitlements': { # logger for the eveai_entitlements
'handlers': ['file_entitlements', 'graylog', ] if env == 'production' else ['file_entitlements', ],
'handlers': ['file_entitlements'],
'level': 'DEBUG',
'propagate': False
},
'sqlalchemy.engine': { # logger for the sqlalchemy
'handlers': ['file_sqlalchemy', 'graylog', ] if env == 'production' else ['file_sqlalchemy', ],
'handlers': ['file_sqlalchemy'],
'level': 'DEBUG',
'propagate': False
},
'security': { # logger for the security
'handlers': ['file_security', 'graylog', ] if env == 'production' else ['file_security', ],
'handlers': ['file_security'],
'level': 'DEBUG',
'propagate': False
},
'business_events': {
'handlers': ['file_business_events', 'graylog'],
'handlers': ['file_business_events'],
'level': 'DEBUG',
'propagate': False
},
# Single tuning logger
'tuning': {
'handlers': ['tuning_file', 'graylog'] if env == 'production' else ['tuning_file'],
'handlers': ['tuning_file'],
'level': 'DEBUG',
'propagate': False,
},
'': { # root logger
'handlers': ['console'],
'handlers': ['console'] if os.environ.get('KUBERNETES_SERVICE_HOST') is None else ['json_console'],
'level': 'WARNING', # Set higher level for root to minimize noise
'propagate': False
},

View File

@@ -0,0 +1,9 @@
version: "1.0.0"
name: "Knowledge Service"
configuration: {}
permissions: {}
metadata:
author: "Josako"
date_added: "2025-04-02"
changes: "Initial version"
description: "Partner providing catalog content"

View File

@@ -0,0 +1,14 @@
version: "1.0.0"
name: "HTML Processor"
file_types: "html"
description: "A processor for HTML files, driven by AI"
configuration:
custom_instructions:
name: "Custom Instructions"
description: "Some custom instruction to guide our AI agent in parsing your HTML file"
type: "text"
required: false
metadata:
author: "Josako"
date_added: "2025-06-25"
description: "A processor for HTML files, driven by AI"

View File

@@ -42,7 +42,7 @@ configuration:
image_handling:
name: "Image Handling"
type: "enum"
description: "How to handle embedded images"
description: "How to handle embedded images"
required: false
default: "skip"
allowed_values: ["skip", "extract", "placeholder"]

View File

@@ -0,0 +1,30 @@
version: "1.0.0"
content: |
You are a top administrative assistant specialized in transforming given HTML into markdown formatted files. The
generated files will be used to generate embeddings in a RAG-system.
# Best practices are:
- Respect wordings and language(s) used in the HTML.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- Sub-headers can be treated as lists: when a header is followed by a series of sub-headers without content (paragraphs or listed items), present those sub-headers as a list.
- Be careful of encoding of the text. Everything needs to be human readable.
You only return relevant information, and filter out non-relevant information, such as:
- information found in menu bars, sidebars, footers or headers
- information in forms, buttons
Process the file or text carefully, and take a stepped approach. The resulting markdown should be the result of the
processing of the complete input html file. Answer with the pure markdown, without any other text.
{custom_instructions}
HTML to be processed is in between triple backquotes.
```{html}```
llm_model: "mistral.mistral-small-latest"
metadata:
author: "Josako"
date_added: "2025-06-25"
description: "An aid in transforming HTML-based inputs to markdown, fully automatic"
changes: "Initial version"

View File

@@ -0,0 +1,22 @@
version: "1.0.0"
content: >
Check whether the provided text (in between triple $) contains elements other than answers to the
following question (in between triple €):
€€€
{question}
€€€
Provided text:
$$$
{answer}
$$$
Answer with True or False, without additional information.
llm_model: "mistral.mistral-medium-latest"
metadata:
author: "Josako"
date_added: "2025-06-23"
description: "An assistant to check if a text contains elements other than answers to a given question."
changes: "Initial version"

View File

@@ -0,0 +1,17 @@
version: "1.0.0"
content: >
Determine if there is an affirmative answer on the following question (in between triple backquotes):
```{question}```
in the provided answer (in between triple backquotes):
```{answer}```
Answer with True or False, without additional information.
llm_model: "mistral.mistral-medium-latest"
metadata:
author: "Josako"
date_added: "2025-06-23"
description: "An assistant to check if the answer to a question is affirmative."
changes: "Initial version"
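
Callers of these True/False assets need a strict parse of the model reply. A minimal sketch; the helper name is an assumption, not part of the assets above:

```python
def parse_boolean_reply(reply: str) -> bool:
    """Interpret a strict True/False reply from an assistant.

    The assets above instruct the model to answer with 'True' or 'False'
    only, so anything else is treated as an error.
    """
    normalized = reply.strip().strip('.').lower()
    if normalized == "true":
        return True
    if normalized == "false":
        return False
    raise ValueError(f"unexpected reply: {reply!r}")


print(parse_boolean_reply(" True "))  # prints: True
```

Raising on unexpected replies surfaces prompt drift early instead of silently treating a hedged answer as a negative.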

View File

@@ -0,0 +1,16 @@
version: "1.0.0"
content: >
Provide us with the answer to the following question (in between triple backquotes) from the text provided to you:
```{question}```
Reply using the exact wording and in the same language. If no answer can be found, reply with "No answer provided"
Text provided to you:
```{answer}```
llm_model: "mistral.mistral-medium-latest"
metadata:
author: "Josako"
date_added: "2025-06-23"
description: "An assistant to extract the answer to a question from a provided text."
changes: "Initial version"

View File

@@ -4,7 +4,7 @@ content: |
question is understandable without that history. The conversation is a consequence of questions and context provided
by the HUMAN, and the AI (you) answering back, in chronological order. The most recent (i.e. last) elements are the
most important when detailing the question.
You answer by stating the detailed question in {language}.
You return only the detailed question in {language}, without any additional information.
History:
```{history}```
Question to be detailed:

View File

@@ -0,0 +1,22 @@
version: "1.0.0"
content: >
You are a top translator. We need you to translate (in between triple quotes)
'''{text_to_translate}'''
into '{target_language}', taking
into account this context:
'{context}'
Do not translate text in between double square brackets, as these are names or terms that need to remain intact.
Remove the triple quotes in your translation!
I only want you to return the translation. No explanation, no options. I need to be able to directly use your answer
without further interpretation. If more than one option is available, present me with the most probable one.
llm_model: "mistral.mistral-medium-latest"
metadata:
author: "Josako"
date_added: "2025-06-23"
description: "An assistant to translate given a context."
changes: "Initial version"

View File

@@ -0,0 +1,19 @@
version: "1.0.0"
content: >
You are a top translator. We need you to translate (in between triple quotes)
'''{text_to_translate}'''
into '{target_language}'.
Do not translate text in between double square brackets, as these are names or terms that need to remain intact.
Remove the triple quotes in your translation!
I only want you to return the translation. No explanation, no options. I need to be able to directly use your answer
without further interpretation. If more than one option is available, present me with the most probable one.
llm_model: "mistral.mistral-medium-latest"
metadata:
author: "Josako"
date_added: "2025-06-23"
description: "An assistant to translate without context."
changes: "Initial version"

View File

@@ -0,0 +1,21 @@
version: "1.0.0"
name: "Standard RAG Retriever"
configuration:
es_k:
name: "es_k"
type: "integer"
description: "K-value to retrieve embeddings (max embeddings retrieved)"
required: true
default: 8
es_similarity_threshold:
name: "es_similarity_threshold"
type: "float"
description: "Similarity threshold for retrieving embeddings"
required: true
default: 0.3
arguments: {}
metadata:
author: "Josako"
date_added: "2025-01-24"
changes: "Initial version"
description: "Retrieving all embeddings conform the query"
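
The `es_k` and `es_similarity_threshold` settings combine as a threshold-then-top-k selection. An illustrative sketch; the function and data shapes are assumptions, not the actual retriever code:

```python
def select_embeddings(scored_chunks, es_k=8, es_similarity_threshold=0.3):
    """Keep chunks scoring at or above the threshold, best-first, capped at es_k.

    scored_chunks is assumed to be a list of (chunk, similarity) pairs.
    """
    # Drop everything below the similarity threshold
    eligible = [c for c in scored_chunks if c[1] >= es_similarity_threshold]
    # Best matches first, then cap at es_k results
    eligible.sort(key=lambda c: c[1], reverse=True)
    return eligible[:es_k]


chunks = [("a", 0.9), ("b", 0.2), ("c", 0.5), ("d", 0.35)]
print(select_embeddings(chunks, es_k=2, es_similarity_threshold=0.3))
```

Applying the threshold before the cap means a low-scoring corpus returns fewer than `es_k` results rather than padding with weak matches.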

View File

@@ -1,36 +0,0 @@
version: "1.0.0"
name: "DOSSIER Retriever"
configuration:
es_k:
name: "es_k"
type: "int"
description: "K-value to retrieve embeddings (max embeddings retrieved)"
required: true
default: 8
es_similarity_threshold:
name: "es_similarity_threshold"
type: "float"
description: "Similarity threshold for retrieving embeddings"
required: true
default: 0.3
tagging_fields_filter:
name: "Tagging Fields Filter"
type: "tagging_fields_filter"
description: "Filter JSON to retrieve a subset of documents"
required: true
dynamic_arguments:
name: "Dynamic Arguments"
type: "dynamic_arguments"
description: "dynamic arguments used in the filter"
required: false
arguments:
query:
name: "query"
type: "str"
description: "Query to retrieve embeddings"
required: True
metadata:
author: "Josako"
date_added: "2025-03-11"
changes: "Initial version"
description: "Retrieving all embeddings conform the query and the tagging fields filter"

View File

@@ -3,7 +3,7 @@ name: "Standard RAG Retriever"
configuration:
es_k:
name: "es_k"
type: "int"
type: "integer"
description: "K-value to retrieve embeddings (max embeddings retrieved)"
required: true
default: 8
@@ -13,12 +13,7 @@ configuration:
description: "Similarity threshold for retrieving embeddings"
required: true
default: 0.3
arguments:
query:
name: "query"
type: "str"
description: "Query to retrieve embeddings"
required: True
arguments: {}
metadata:
author: "Josako"
date_added: "2025-01-24"

View File

@@ -0,0 +1,26 @@
version: "1.0.0"
name: "Retrieves role information for a specific role"
configuration:
es_k:
name: "es_k"
type: "integer"
description: "K-value to retrieve embeddings (max embeddings retrieved)"
required: true
default: 8
es_similarity_threshold:
name: "es_similarity_threshold"
type: "float"
description: "Similarity threshold for retrieving embeddings"
required: true
default: 0.3
arguments:
role_reference:
name: "Role Reference"
type: "string"
description: "The role for which information needs to be retrieved"
required: true
metadata:
author: "Josako"
date_added: "2025-07-07"
changes: "Initial version"
description: "Retrieves role information for a specific role"

View File

@@ -0,0 +1,36 @@
type: "CONTACT_TIME_PREFERENCES_SIMPLE"
version: "1.0.0"
name: "Contact Time Preferences"
icon: "calendar_month"
fields:
early:
name: "Early in the morning"
description: "Contact me early in the morning"
type: "boolean"
required: false
# It is possible to also add a field 'context'. It allows you to provide an elaborate piece of information.
late_morning:
name: "During the morning"
description: "Contact me during the morning"
type: "boolean"
required: false
afternoon:
name: "In the afternoon"
description: "Contact me in the afternoon"
type: "boolean"
required: false
evening:
name: "In the evening"
description: "Contact me in the evening"
type: "boolean"
required: false
other:
name: "Other"
description: "Specify your preferred contact moment"
type: "string"
required: false
metadata:
author: "Josako"
date_added: "2025-07-22"
changes: "Initial Version"
description: "Simple Contact Time Preferences Form"

View File

@@ -0,0 +1,31 @@
type: "PERSONAL_CONTACT_FORM"
version: "1.0.0"
name: "Personal Contact Form"
icon: "person"
fields:
name:
name: "Name"
description: "Your name"
type: "str"
required: true
# It is possible to also add a field 'context'. It allows you to provide an elaborate piece of information.
email:
name: "Email"
type: "str"
description: "Your Email"
required: true
phone:
name: "Phone Number"
type: "str"
description: "Your Phone Number"
required: true
consent:
name: "Consent"
type: "boolean"
description: "Consent"
required: true
metadata:
author: "Josako"
date_added: "2025-07-29"
changes: "Initial Version"
description: "Personal Contact Form"

View File

@@ -0,0 +1,51 @@
type: "PERSONAL_CONTACT_FORM"
version: "1.0.0"
name: "Personal Contact Form"
icon: "person"
fields:
name:
name: "Name"
description: "Your name"
type: "str"
required: true
# It is possible to also add a field 'context'. It allows you to provide an elaborate piece of information.
email:
name: "Email"
type: "str"
description: "Your Email"
required: true
phone:
name: "Phone Number"
type: "str"
description: "Your Phone Number"
required: true
address:
name: "Address"
type: "string"
description: "Your Address"
required: false
zip:
name: "Postal Code"
type: "string"
description: "Postal Code"
required: false
city:
name: "City"
type: "string"
description: "City"
required: false
country:
name: "Country"
type: "string"
description: "Country"
required: false
consent:
name: "Consent"
type: "boolean"
description: "Consent"
required: true
metadata:
author: "Josako"
date_added: "2025-06-18"
changes: "Initial Version"
description: "Personal Contact Form"

View File
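The form definitions above declare per-field `type` and `required` flags; a submitted payload can be checked against them generically. A minimal validation sketch, assuming a small type map (names are illustrative, not the actual implementation):

```python
# Illustrative sketch: validate a submitted payload against form field
# definitions such as those of PERSONAL_CONTACT_FORM.
TYPE_CHECKS = {
    "str": lambda v: isinstance(v, str),
    "string": lambda v: isinstance(v, str),
    "boolean": lambda v: isinstance(v, bool),
}

# A subset of the PERSONAL_CONTACT_FORM fields, flattened to Python.
FORM_FIELDS = {
    "name": {"type": "str", "required": True},
    "email": {"type": "str", "required": True},
    "consent": {"type": "boolean", "required": True},
    "city": {"type": "string", "required": False},
}

def validate_form(fields, payload):
    errors = []
    for key, spec in fields.items():
        if key not in payload:
            if spec["required"]:
                errors.append(f"{key}: required field missing")
            continue
        if not TYPE_CHECKS[spec["type"]](payload[key]):
            errors.append(f"{key}: expected {spec['type']}")
    return errors
```

Optional fields such as `city` are simply skipped when absent, while missing required fields and type mismatches are reported together so the UI can show all errors at once.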

@@ -0,0 +1,60 @@
type: "PROFESSIONAL_CONTACT_FORM"
version: "1.0.0"
name: "Professional Contact Form"
icon: "account_circle"
fields:
name:
name: "Name"
description: "Your name"
type: "str"
required: true
email:
name: "Email"
type: "str"
description: "Your Email"
required: true
phone:
name: "Phone Number"
type: "str"
description: "Your Phone Number"
required: true
company:
name: "Company Name"
type: "str"
description: "Company Name"
required: true
job_title:
name: "Job Title"
type: "str"
description: "Job Title"
required: false
address:
name: "Address"
type: "str"
description: "Your Address"
required: false
zip:
name: "Postal Code"
type: "str"
description: "Postal Code"
required: false
city:
name: "City"
type: "str"
description: "City"
required: false
country:
name: "Country"
type: "str"
description: "Country"
required: false
consent:
name: "Consent"
type: "bool"
description: "Consent"
required: true
metadata:
author: "Josako"
date_added: "2025-06-18"
changes: "Initial Version"
description: "Professional Contact Form"

View File

@@ -0,0 +1,34 @@
version: "1.0.0"
name: "Partner RAG Specialist"
framework: "crewai"
chat: true
configuration: {}
arguments: {}
results:
rag_output:
answer:
name: "answer"
type: "str"
description: "Answer to the query"
required: true
citations:
name: "citations"
type: "List[str]"
description: "List of citations"
required: false
insufficient_info:
name: "insufficient_info"
type: "bool"
description: "Whether or not there is insufficient info to answer the query"
required: true
agents:
- type: "PARTNER_RAG_AGENT"
version: "1.0"
tasks:
- type: "PARTNER_RAG_TASK"
version: "1.0"
metadata:
author: "Josako"
date_added: "2025-07-16"
changes: "Initial version"
description: "Q&A through Partner RAG Specialist (for documentation purposes)"

View File

@@ -19,11 +19,6 @@ arguments:
type: "str"
description: "Language code to be used for receiving questions and giving answers"
required: true
query:
name: "query"
type: "str"
description: "Query or response to process"
required: true
results:
rag_output:
answer:

View File

@@ -0,0 +1,49 @@
version: "1.1.0"
name: "RAG Specialist"
framework: "crewai"
chat: true
configuration:
name:
name: "name"
type: "str"
description: "The name the specialist is called upon."
required: true
welcome_message:
name: "Welcome Message"
type: "string"
description: "Welcome Message to be given to the end user"
required: false
arguments:
language:
name: "Language"
type: "str"
description: "Language code to be used for receiving questions and giving answers"
required: true
results:
rag_output:
answer:
name: "answer"
type: "str"
description: "Answer to the query"
required: true
citations:
name: "citations"
type: "List[str]"
description: "List of citations"
required: false
insufficient_info:
name: "insufficient_info"
type: "bool"
description: "Whether or not there is insufficient info to answer the query"
required: true
agents:
- type: "RAG_AGENT"
version: "1.1"
tasks:
- type: "RAG_TASK"
version: "1.1"
metadata:
author: "Josako"
date_added: "2025-01-08"
changes: "Initial version"
description: "A Specialist that performs Q&A activities"

View File
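The RAG Specialist above declares a `results` schema under `rag_output` (`answer` and `insufficient_info` required, `citations` optional). A caller can verify a specialist's output against that schema before using it; a minimal sketch with illustrative names:

```python
# Illustrative sketch: check a specialist's output dict against its
# declared "results" schema (field names from the RAG Specialist above).
RAG_RESULTS_SCHEMA = {
    "answer": {"required": True},
    "citations": {"required": False},
    "insufficient_info": {"required": True},
}

def missing_results(schema, output):
    # Return the required result fields absent from the output.
    return [key for key, spec in schema.items()
            if spec["required"] and key not in output]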

@@ -1,53 +0,0 @@
version: 1.0.0
name: "Standard RAG Specialist"
framework: "langchain"
chat: true
configuration:
specialist_context:
name: "Specialist Context"
type: "text"
description: "The context to be used by the specialist."
required: false
temperature:
name: "Temperature"
type: "number"
description: "The inference temperature to be used by the specialist."
required: false
default: 0.3
arguments:
language:
name: "Language"
type: "str"
description: "Language code to be used for receiving questions and giving answers"
required: true
query:
name: "query"
type: "str"
description: "Query to answer"
required: true
results:
detailed_query:
name: "detailed_query"
type: "str"
description: "The query detailed with the Chat Session History."
required: true
answer:
name: "answer"
type: "str"
description: "Answer to the query"
required: true
citations:
name: "citations"
type: "List[str]"
description: "List of citations"
required: false
insufficient_info:
name: "insufficient_info"
type: "bool"
description: "Whether or not there is insufficient info to answer the query"
required: true
metadata:
author: "Josako"
date_added: "2025-01-08"
changes: "Initial version"
description: "A Specialist that performs standard Q&A"

View File

@@ -0,0 +1,29 @@
version: "1.1.0"
name: "Traicie KO Criteria Interview Definition Specialist"
framework: "crewai"
partner: "traicie"
chat: false
configuration:
arguments:
specialist_id:
name: "specialist_id"
description: "ID of the specialist for which to define KO Criteria Questions and Answers"
type: "integer"
required: true
results:
asset_id:
name: "asset_id"
description: "ID of the Asset containing questions and answers for each of the defined KO Criteria"
type: "integer"
required: true
agents:
- type: "TRAICIE_RECRUITER_AGENT"
version: "1.0"
tasks:
- type: "TRAICIE_KO_CRITERIA_INTERVIEW_DEFINITION_TASK"
version: "1.0"
metadata:
author: "Josako"
date_added: "2025-07-01"
changes: "Initial Version"
description: "Specialist assisting in questions and answers definition for KO Criteria"

View File

@@ -0,0 +1,29 @@
version: "1.1.0"
name: "Traicie KO Criteria Interview Definition Specialist"
framework: "crewai"
partner: "traicie"
chat: false
configuration:
arguments:
specialist_id:
name: "specialist_id"
description: "ID of the specialist for which to define KO Criteria Questions and Answers"
type: "integer"
required: true
results:
asset_id:
name: "asset_id"
description: "ID of the Asset containing questions and answers for each of the defined KO Criteria"
type: "integer"
required: true
agents:
- type: "TRAICIE_HR_BP_AGENT"
version: "1.0"
tasks:
- type: "TRAICIE_KO_CRITERIA_INTERVIEW_DEFINITION_TASK"
version: "1.0"
metadata:
author: "Josako"
date_added: "2025-07-01"
changes: "Initial Version"
description: "Specialist assisting in questions and answers definition for KO Criteria"

View File

@@ -1,4 +1,4 @@
version: "1.1.0"
version: "1.2.0"
name: "Traicie Role Definition Specialist"
framework: "crewai"
partner: "traicie"
@@ -11,9 +11,9 @@ arguments:
type: "str"
required: true
specialist_name:
name: "Specialist Name"
description: "The name the specialist will be called upon"
type: str
name: "Chatbot Name"
description: "The name of the chatbot."
type: "str"
required: true
role_reference:
name: "Role Reference"

View File

@@ -0,0 +1,50 @@
version: "1.3.0"
name: "Traicie Role Definition Specialist"
framework: "crewai"
partner: "traicie"
chat: false
configuration: {}
arguments:
role_name:
name: "Role Name"
description: "The name of the role that is being processed. Will be used to create a selection specialist"
type: "str"
required: true
specialist_name:
name: "Chatbot Name"
description: "The name of the chatbot."
type: "str"
required: true
make:
name: "Make"
description: "The make for which the role is defined and the selection specialist is created"
type: "system"
system_name: "tenant_make"
required: true
role_reference:
name: "Role Reference"
description: "A customer reference to the role"
type: "str"
required: false
vacancy_text:
name: "vacancy_text"
type: "text"
description: "The Vacancy Text"
required: true
results:
competencies:
name: "competencies"
type: "List[str, str]"
description: "List of vacancy competencies and their descriptions"
required: false
agents:
- type: "TRAICIE_HR_BP_AGENT"
version: "1.0"
tasks:
- type: "TRAICIE_GET_COMPETENCIES_TASK"
version: "1.1"
metadata:
author: "Josako"
date_added: "2025-05-27"
changes: "Added a make to be specified (a selection specialist now is created in context of a make)"
description: "Assistant to create a new Vacancy based on Vacancy Text"

View File

@@ -2,7 +2,7 @@ version: "1.0.0"
name: "Traicie Selection Specialist"
framework: "crewai"
partner: "traicie"
chat: false
chat: true
configuration:
name:
name: "Name"
@@ -88,7 +88,13 @@ arguments:
type: "str"
description: "The language (2-letter code) used to start the conversation"
required: true
interaction_mode:
name: "Interaction Mode"
type: "enum"
description: "The interaction mode the specialist will start working in."
allowed_values: ["Job Application", "Seduction"]
default: "Job Application"
required: true
results:
competencies:
name: "competencies"
@@ -105,4 +111,4 @@ metadata:
author: "Josako"
date_added: "2025-05-27"
changes: "Updated for unified competencies and ko criteria"
description: "Assistant to create a new Vacancy based on Vacancy Text"
description: "Assistant to assist in candidate selection"

View File
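Enum-typed fields such as `interaction_mode` above carry `allowed_values` and a `default`. Resolving one is a matter of falling back to the default when no value is given and rejecting anything outside the allowed set; a minimal, purely illustrative sketch:

```python
# Illustrative sketch: resolve an "enum" configuration field, mirroring
# the interaction_mode declaration in the specialist definition above.
INTERACTION_MODE = {
    "type": "enum",
    "allowed_values": ["Job Application", "Seduction"],
    "default": "Job Application",
}

def resolve_enum(spec, value=None):
    if value is None:
        return spec["default"]
    if value not in spec["allowed_values"]:
        raise ValueError(f"{value!r} not in {spec['allowed_values']}")
    return value
```

This keeps validation data-driven: adding a new mode only means extending `allowed_values` in the definition, not changing code.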

@@ -0,0 +1,120 @@
version: "1.1.0"
name: "Traicie Selection Specialist"
framework: "crewai"
partner: "traicie"
chat: true
configuration:
name:
name: "Name"
description: "The name the specialist is called upon."
type: "str"
required: true
role_reference:
name: "Role Reference"
description: "A customer reference to the role"
type: "str"
required: false
make:
name: "Make"
description: "The make for which the role is defined and the selection specialist is created"
type: "system"
system_name: "tenant_make"
required: true
competencies:
name: "Competencies"
description: "An ordered list of competencies."
type: "ordered_list"
list_type: "competency_details"
required: true
tone_of_voice:
name: "Tone of Voice"
description: "The tone of voice the specialist uses to communicate"
type: "enum"
allowed_values: ["Professional & Neutral", "Warm & Empathetic", "Energetic & Enthusiastic", "Accessible & Informal", "Expert & Trustworthy", "No-nonsense & Goal-driven"]
default: "Professional & Neutral"
required: true
language_level:
name: "Language Level"
description: "Language level to be used when communicating, relating to CEFR levels"
type: "enum"
allowed_values: ["Basic", "Standard", "Professional"]
default: "Standard"
required: true
welcome_message:
name: "Welcome Message"
description: "Introductory text given by the specialist - but translated according to Tone of Voice, Language Level and Starting Language"
type: "text"
required: false
closing_message:
name: "Closing Message"
description: "Closing message given by the specialist - but translated according to Tone of Voice, Language Level and Starting Language"
type: "text"
required: false
competency_details:
title:
name: "Title"
description: "Competency Title"
type: "str"
required: true
description:
name: "Description"
description: "Description (in context of the role) of the competency"
type: "text"
required: true
is_knockout:
name: "KO"
description: "Defines if the competency is a knock-out criterion"
type: "boolean"
required: true
default: false
assess:
name: "Assess"
description: "Indication if this competency is to be assessed"
type: "boolean"
required: true
default: true
arguments:
region:
name: "Region"
type: "str"
description: "The region of the specific vacancy"
required: false
working_schedule:
name: "Work Schedule"
type: "str"
description: "The work schedule or employment type of the specific vacancy"
required: false
start_date:
name: "Start Date"
type: "date"
description: "The start date of the specific vacancy"
required: false
language:
name: "Language"
type: "str"
description: "The language (2-letter code) used to start the conversation"
required: true
interaction_mode:
name: "Interaction Mode"
type: "enum"
description: "The interaction mode the specialist will start working in."
allowed_values: ["Job Application", "Seduction"]
default: "Job Application"
required: true
results:
competencies:
name: "competencies"
type: "List[str, str]"
description: "List of vacancy competencies and their descriptions"
required: false
agents:
- type: "TRAICIE_HR_BP_AGENT"
version: "1.0"
tasks:
- type: "TRAICIE_GET_COMPETENCIES_TASK"
version: "1.1"
metadata:
author: "Josako"
date_added: "2025-05-27"
changes: "Add make to the selection specialist"
description: "Assistant to assist in candidate selection"

View File

@@ -0,0 +1,120 @@
version: "1.3.0"
name: "Traicie Selection Specialist"
framework: "crewai"
partner: "traicie"
chat: true
configuration:
name:
name: "Name"
description: "The name the specialist is called upon."
type: "str"
required: true
role_reference:
name: "Role Reference"
description: "A customer reference to the role"
type: "str"
required: false
make:
name: "Make"
description: "The make for which the role is defined and the selection specialist is created"
type: "system"
system_name: "tenant_make"
required: true
competencies:
name: "Competencies"
description: "An ordered list of competencies."
type: "ordered_list"
list_type: "competency_details"
required: true
tone_of_voice:
name: "Tone of Voice"
description: "The tone of voice the specialist uses to communicate"
type: "enum"
allowed_values: ["Professional & Neutral", "Warm & Empathetic", "Energetic & Enthusiastic", "Accessible & Informal", "Expert & Trustworthy", "No-nonsense & Goal-driven"]
default: "Professional & Neutral"
required: true
language_level:
name: "Language Level"
description: "Language level to be used when communicating, relating to CEFR levels"
type: "enum"
allowed_values: ["Basic", "Standard", "Professional"]
default: "Standard"
required: true
welcome_message:
name: "Welcome Message"
description: "Introductory text given by the specialist - but translated according to Tone of Voice, Language Level and Starting Language"
type: "text"
required: false
closing_message:
name: "Closing Message"
description: "Closing message given by the specialist - but translated according to Tone of Voice, Language Level and Starting Language"
type: "text"
required: false
competency_details:
title:
name: "Title"
description: "Competency Title"
type: "str"
required: true
description:
name: "Description"
description: "Description (in context of the role) of the competency"
type: "text"
required: true
is_knockout:
name: "KO"
description: "Defines if the competency is a knock-out criterion"
type: "boolean"
required: true
default: false
assess:
name: "Assess"
description: "Indication if this competency is to be assessed"
type: "boolean"
required: true
default: true
arguments:
region:
name: "Region"
type: "str"
description: "The region of the specific vacancy"
required: false
working_schedule:
name: "Work Schedule"
type: "str"
description: "The work schedule or employment type of the specific vacancy"
required: false
start_date:
name: "Start Date"
type: "date"
description: "The start date of the specific vacancy"
required: false
language:
name: "Language"
type: "str"
description: "The language (2-letter code) used to start the conversation"
required: true
interaction_mode:
name: "Interaction Mode"
type: "enum"
description: "The interaction mode the specialist will start working in."
allowed_values: ["Job Application", "Seduction"]
default: "Job Application"
required: true
results:
competencies:
name: "competencies"
type: "List[str, str]"
description: "List of vacancy competencies and their descriptions"
required: false
agents:
- type: "TRAICIE_RECRUITER"
version: "1.0"
tasks:
- type: "TRAICIE_KO_CRITERIA_INTERVIEW_DEFINITION"
version: "1.0"
metadata:
author: "Josako"
date_added: "2025-06-16"
changes: "Realising the actual interaction with the LLM"
description: "Assistant to assist in candidate selection"

View File

@@ -0,0 +1,120 @@
version: "1.3.0"
name: "Traicie Selection Specialist"
framework: "crewai"
partner: "traicie"
chat: true
configuration:
name:
name: "Name"
description: "The name the specialist is called upon."
type: "str"
required: true
role_reference:
name: "Role Reference"
description: "A customer reference to the role"
type: "str"
required: false
make:
name: "Make"
description: "The make for which the role is defined and the selection specialist is created"
type: "system"
system_name: "tenant_make"
required: true
competencies:
name: "Competencies"
description: "An ordered list of competencies."
type: "ordered_list"
list_type: "competency_details"
required: true
tone_of_voice:
name: "Tone of Voice"
description: "The tone of voice the specialist uses to communicate"
type: "enum"
allowed_values: ["Professional & Neutral", "Warm & Empathetic", "Energetic & Enthusiastic", "Accessible & Informal", "Expert & Trustworthy", "No-nonsense & Goal-driven"]
default: "Professional & Neutral"
required: true
language_level:
name: "Language Level"
description: "Language level to be used when communicating, relating to CEFR levels"
type: "enum"
allowed_values: ["Basic", "Standard", "Professional"]
default: "Standard"
required: true
welcome_message:
name: "Welcome Message"
description: "Introductory text given by the specialist - but translated according to Tone of Voice, Language Level and Starting Language"
type: "text"
required: false
closing_message:
name: "Closing Message"
description: "Closing message given by the specialist - but translated according to Tone of Voice, Language Level and Starting Language"
type: "text"
required: false
competency_details:
title:
name: "Title"
description: "Competency Title"
type: "str"
required: true
description:
name: "Description"
description: "Description (in context of the role) of the competency"
type: "text"
required: true
is_knockout:
name: "KO"
description: "Defines if the competency is a knock-out criterion"
type: "boolean"
required: true
default: false
assess:
name: "Assess"
description: "Indication if this competency is to be assessed"
type: "boolean"
required: true
default: true
arguments:
region:
name: "Region"
type: "str"
description: "The region of the specific vacancy"
required: false
working_schedule:
name: "Work Schedule"
type: "str"
description: "The work schedule or employment type of the specific vacancy"
required: false
start_date:
name: "Start Date"
type: "date"
description: "The start date of the specific vacancy"
required: false
language:
name: "Language"
type: "str"
description: "The language (2-letter code) used to start the conversation"
required: true
interaction_mode:
name: "Interaction Mode"
type: "enum"
description: "The interaction mode the specialist will start working in."
allowed_values: ["Job Application", "Seduction"]
default: "Job Application"
required: true
results:
competencies:
name: "competencies"
type: "List[str, str]"
description: "List of vacancy competencies and their descriptions"
required: false
agents:
- type: "TRAICIE_RECRUITER_AGENT"
version: "1.0"
tasks:
- type: "TRAICIE_KO_CRITERIA_INTERVIEW_DEFINITION_TASK"
version: "1.0"
metadata:
author: "Josako"
date_added: "2025-06-18"
changes: "Add make to the selection specialist"
description: "Assistant to assist in candidate selection"

View File

@@ -0,0 +1,115 @@
version: "1.4.0"
name: "Traicie Selection Specialist"
framework: "crewai"
partner: "traicie"
chat: true
configuration:
name:
name: "Name"
description: "The name the specialist is called upon."
type: "str"
required: true
role_reference:
name: "Role Reference"
description: "A customer reference to the role"
type: "str"
required: false
make:
name: "Make"
description: "The make for which the role is defined and the selection specialist is created"
type: "system"
system_name: "tenant_make"
required: true
competencies:
name: "Competencies"
description: "An ordered list of competencies."
type: "ordered_list"
list_type: "competency_details"
required: true
tone_of_voice:
name: "Tone of Voice"
description: "The tone of voice the specialist uses to communicate"
type: "enum"
allowed_values: ["Professional & Neutral", "Warm & Empathetic", "Energetic & Enthusiastic", "Accessible & Informal", "Expert & Trustworthy", "No-nonsense & Goal-driven"]
default: "Professional & Neutral"
required: true
language_level:
name: "Language Level"
description: "Language level to be used when communicating, relating to CEFR levels"
type: "enum"
allowed_values: ["Basic", "Standard", "Professional"]
default: "Standard"
required: true
welcome_message:
name: "Welcome Message"
description: "Introductory text given by the specialist - but translated according to Tone of Voice, Language Level and Starting Language"
type: "text"
required: false
competency_details:
title:
name: "Title"
description: "Competency Title"
type: "str"
required: true
description:
name: "Description"
description: "Description (in context of the role) of the competency"
type: "text"
required: true
is_knockout:
name: "KO"
description: "Defines if the competency is a knock-out criterion"
type: "boolean"
required: true
default: false
assess:
name: "Assess"
description: "Indication if this competency is to be assessed"
type: "boolean"
required: true
default: true
arguments:
region:
name: "Region"
type: "str"
description: "The region of the specific vacancy"
required: false
working_schedule:
name: "Work Schedule"
type: "str"
description: "The work schedule or employment type of the specific vacancy"
required: false
start_date:
name: "Start Date"
type: "date"
description: "The start date of the specific vacancy"
required: false
language:
name: "Language"
type: "str"
description: "The language (2-letter code) used to start the conversation"
required: true
interaction_mode:
name: "Interaction Mode"
type: "enum"
description: "The interaction mode the specialist will start working in."
allowed_values: ["orientation", "selection"]
default: "orientation"
required: true
results:
competencies:
name: "competencies"
type: "List[str, str]"
description: "List of vacancy competencies and their descriptions"
required: false
agents:
- type: "RAG_AGENT"
version: "1.1"
tasks:
- type: "RAG_TASK"
version: "1.1"
metadata:
author: "Josako"
date_added: "2025-07-03"
changes: "Update for a Full Virtual Assistant Experience"
description: "Assistant to assist in candidate selection"

View File

@@ -0,0 +1,121 @@
version: "1.4.0"
name: "Traicie Selection Specialist"
framework: "crewai"
partner: "traicie"
chat: true
configuration:
name:
name: "Name"
description: "The name the specialist is called upon."
type: "str"
required: true
role_reference:
name: "Role Reference"
description: "A customer reference to the role"
type: "str"
required: false
make:
name: "Make"
description: "The make for which the role is defined and the selection specialist is created"
type: "system"
system_name: "tenant_make"
required: true
competencies:
name: "Competencies"
description: "An ordered list of competencies."
type: "ordered_list"
list_type: "competency_details"
required: true
tone_of_voice:
name: "Tone of Voice"
description: "The tone of voice the specialist uses to communicate"
type: "enum"
allowed_values: ["Professional & Neutral", "Warm & Empathetic", "Energetic & Enthusiastic", "Accessible & Informal", "Expert & Trustworthy", "No-nonsense & Goal-driven"]
default: "Professional & Neutral"
required: true
language_level:
name: "Language Level"
description: "Language level to be used when communicating, relating to CEFR levels"
type: "enum"
allowed_values: ["Basic", "Standard", "Professional"]
default: "Standard"
required: true
welcome_message:
name: "Welcome Message"
description: "Introductory text given by the specialist - but translated according to Tone of Voice, Language Level and Starting Language"
type: "text"
required: false
competency_details:
title:
name: "Title"
description: "Competency Title"
type: "str"
required: true
description:
name: "Description"
description: "Description (in context of the role) of the competency"
type: "text"
required: true
is_knockout:
name: "KO"
description: "Defines if the competency is a knock-out criterion"
type: "boolean"
required: true
default: false
assess:
name: "Assess"
description: "Indication if this competency is to be assessed"
type: "boolean"
required: true
default: true
arguments:
region:
name: "Region"
type: "str"
description: "The region of the specific vacancy"
required: false
working_schedule:
name: "Work Schedule"
type: "str"
description: "The work schedule or employment type of the specific vacancy"
required: false
start_date:
name: "Start Date"
type: "date"
description: "The start date of the specific vacancy"
required: false
language:
name: "Language"
type: "str"
description: "The language (2-letter code) used to start the conversation"
required: true
interaction_mode:
name: "Interaction Mode"
type: "enum"
description: "The interaction mode the specialist will start working in."
allowed_values: ["orientation", "selection"]
default: "orientation"
required: true
results:
competencies:
name: "competencies"
type: "List[str, str]"
description: "List of vacancy competencies and their descriptions"
required: false
agents:
- type: "TRAICIE_RECRUITER_AGENT"
version: "1.0"
- type: "RAG_AGENT"
version: "1.1"
tasks:
- type: "TRAICIE_DETERMINE_INTERVIEW_MODE_TASK"
version: "1.0"
- type: "TRAICIE_AFFIRMATIVE_ANSWER_CHECK_TASK"
version: "1.0"
- type: "ADVANCED_RAG_TASK"
version: "1.0"
metadata:
author: "Josako"
date_added: "2025-07-30"
changes: "Update for a Full Virtual Assistant Experience"
description: "Assistant to assist in candidate selection"

View File

@@ -0,0 +1,22 @@
version: "1.0.0"
name: "RAG Task"
task_description: >
Answer the question based on the following context, and taking into account the history of the discussion. Try not to
repeat answers already given in the recent history, unless confirmation is required or repetition is essential to
give a coherent answer.
Answer the end user in the language used in his/her question.
If the question cannot be answered using the given context, answer "I have insufficient information to answer this
question."
Context (in between triple $):
$$${context}$$$
History (in between triple €):
€€€{history}€€€
Question (in between triple £):
£££{question}£££
expected_output: >
Your answer.
metadata:
author: "Josako"
date_added: "2025-07-16"
description: "A Task that gives RAG-based answers"
changes: "Initial version"

View File

@@ -0,0 +1,43 @@
version: "1.0.0"
name: "Advanced RAG Task"
task_description: >
Answer the following question (in between triple £):
£££{question}£££
Base your answer on the following context (in between triple $):
$$${context}$$$
Take into account the following history of the conversation (in between triple €):
€€€{history}€€€
The HUMAN parts indicate the interactions by the end user, the AI parts are your interactions.
Best Practices are:
- Answer the provided question as precisely and directly as you can, combining elements of the provided context.
- Always focus your answer on the actual question.
- Limit repetition in your answers to an absolute minimum, unless absolutely necessary.
- Always be friendly and helpful for the end user.
Tune your answers to the following:
- You use the following Tone of Voice for your answer: {tone_of_voice}, i.e. {tone_of_voice_context}
- You use the following Language Level for your answer: {language_level}, i.e. {language_level_context}
Use the following language in your communication: {language}
If the question cannot be answered using the given context, answer "I have insufficient information to answer this
question." and give the appropriate indication.
{custom_description}
expected_output: >
Your answer.
metadata:
author: "Josako"
date_added: "2025-07-30"
description: "A Task that performs RAG and checks for human answers"
changes: "Initial version"

View File

@@ -0,0 +1,36 @@
version: "1.0.0"
name: "RAG Task"
task_description: >
Answer the following question (in between triple £):
£££{question}£££
Base your answer on the following context (in between triple $):
$$${context}$$$
Take into account the following history of the conversation (in between triple €):
€€€{history}€€€
The HUMAN parts indicate the interactions by the end user, the AI parts are your interactions.
Best Practices are:
- Answer the provided question as precisely and directly as you can, combining elements of the provided context.
- Always focus your answer on the actual HUMAN question.
- Try not to repeat your answers (preceded by AI), unless absolutely necessary.
- Focus your answer on the question at hand.
- Always be friendly and helpful for the end user.
{custom_description}
Use the following language in your communication: {language}
If the question cannot be answered using the given context, answer "I have insufficient information to answer this
question." and give the appropriate indication.
expected_output: >
Your answer.
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "A Task that gives RAG-based answers"
changes: "Initial version"

View File

@@ -0,0 +1,29 @@
version: "1.0.0"
name: "Traicie Affirmative Answer Check"
task_description: >
You are provided with the following end user answer (in between triple £):
£££{question}£££
This is the history of the conversation (in between triple €):
€€€{history}€€€
(In this history, user interactions are preceded by 'HUMAN', and your interactions with 'AI'.)
Check if the user has given an affirmative answer or not.
Please note that this answer can be very short:
- Affirmative answers: e.g. Yes, OK, Sure, Of Course
- Negative answers: e.g. No, Not really, I'd rather not.
Please consider that the answer will be given in {language}!
{custom_description}
expected_output: >
Your determination of whether the answer was affirmative (true) or negative (false)
metadata:
author: "Josako"
date_added: "2025-07-30"
description: "A Task to check if the answer to a question is affirmative"
changes: "Initial version"

View File

@@ -0,0 +1,23 @@
version: "1.0.0"
name: "Traicie Determine Interview Mode"
task_description: >
You are provided with the following user input (in between triple backquotes):
```{question}```
If this user input contains one or more questions, your answer is simply 'RAG'. In all other cases, your answer is
'CHECK'.
Best practices to be applied:
- A question doesn't always have an ending question mark. It can be a query for more information, such as 'I'd like
to understand ...', 'I'd like to know more about...'. Or it is possible the user didn't enter a question mark. Take
into account the user might be working on a mobile device like a phone, making typing not as obvious.
- If there is a question mark, you are normally dealing with a question.
expected_output: >
Your Answer.
metadata:
author: "Josako"
date_added: "2025-07-30"
description: "A Task to determine the interview mode based on the last user input"
changes: "Initial version"
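The expected output of this task is a single verdict, 'RAG' or 'CHECK'. A hypothetical sketch of dispatching on that verdict; the specialist's real control flow is not part of this diff, and the action names are ours:

```python
def next_step(mode: str) -> str:
    """Map the task's 'RAG' / 'CHECK' verdict to a follow-up action.

    Hypothetical dispatch; the specialist's actual control flow is not
    shown in this diff.
    """
    mode = mode.strip().upper()
    if mode == "RAG":
        return "answer_question_via_rag"   # the user asked one or more questions
    if mode == "CHECK":
        return "continue_interview_check"  # treat the input as an interview answer
    raise ValueError(f"unexpected interview mode: {mode!r}")
```

Normalising the verdict with `strip().upper()` keeps the dispatch tolerant of minor LLM output variations such as extra whitespace or casing.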


@@ -0,0 +1,42 @@
version: "1.0.0"
name: "KO Criteria Interview Definition"
task_description: >
In context of a vacancy in your company {tenant_name}, you are provided with a set of knock-out criteria
(both description and title). The criteria are in between triple backquotes. You need to prepare for the interviews,
and are to provide for each of these KO criteria:
- A short question to ask the recruitment candidate describing the context of the KO criterion. Use your experience to
ask a question that enables us to verify compliance with the criterion.
- A set of 2 short answers to that question, from the candidate's perspective. One of the answers will result in a
positive evaluation of the criterion, the other one in a negative evaluation. Mark each of the answers as positive
or negative.
Describe the answers from the perspective of the candidate. Be sure to include all necessary aspects in your answers.
Apply the following tone of voice in both questions and answers: {tone_of_voice}
Use the following description to understand tone of voice:
{tone_of_voice_context}
Apply the following language level in both questions and answers: {language_level}
Use {language} as language for both questions and answers.
Use the following description to understand language_level:
{language_level_context}
```{ko_criteria}```
{custom_description}
expected_output: >
For each of the KO criteria, you provide:
- the exact title as specified in the original language
- the question in {language}
- a positive answer, resulting in a positive evaluation of the criterion. In {language}.
- a negative answer, resulting in a negative evaluation of the criterion. In {language}.
{custom_expected_output}
metadata:
author: "Josako"
date_added: "2025-06-15"
description: "A Task to define interview Q&A from given KO Criteria"
changes: "Initial Version"


@@ -0,0 +1,37 @@
version: "1.0.0"
name: "KO Criteria Interview Definition"
task_description: >
In context of a vacancy in your company {tenant_name}, you are provided with a set of competencies
(both description and title). The competencies are in between triple backquotes. The competencies provided should be
handled as knock-out criteria.
For each of the knock-out criteria, you need to define:
- A short (1 sentence), closed-ended question (Yes / No) to ask the recruitment candidate. Use your experience to ask a question that
enables us to verify compliance with the criterion.
- A set of 2 short answers (1 small sentence of about 10 words each) to that question (positive answer / negative answer), from the
candidate's perspective. Do not just repeat the words already formulated in the question.
The positive answer will result in a positive evaluation of the criterion, the negative answer in a negative evaluation
of the criterion. Try to avoid just using Yes / No as positive and negative answers.
Apply the following tone of voice in both questions and answers: {tone_of_voice}, i.e. {tone_of_voice_context}
Apply the following language level in both questions and answers: {language_level}, i.e. {language_level_context}
Use the language used in the competencies as the language for your answer / output. We call this the original language.
```{ko_criteria}```
{custom_description}
expected_output: >
For each of the KO criteria, you provide:
- the exact title as specified in the original language
- the question in the original language
- a positive answer, resulting in a positive evaluation of the criterion, in the original language.
- a negative answer, resulting in a negative evaluation of the criterion, in the original language.
{custom_expected_output}
metadata:
author: "Josako"
date_added: "2025-06-20"
description: "A Task to define interview Q&A from given KO Criteria"
changes: "Improvement to ensure closed-ended questions and short descriptions"


@@ -32,5 +32,10 @@ AGENT_TYPES = {
"name": "Traicie HR BP Agent",
"description": "An HR Business Partner Agent",
"partner": "traicie"
}
},
"TRAICIE_RECRUITER_AGENT": {
"name": "Traicie Recruiter Agent",
"description": "A Senior Recruiter Agent",
"partner": "traicie"
},
}


@@ -1,5 +1,5 @@
# Agent Types
AGENT_TYPES = {
ASSET_TYPES = {
"DOCUMENT_TEMPLATE": {
"name": "Document Template",
"description": "Asset that defines a template in markdown a specialist can process",
@@ -8,4 +8,9 @@ AGENT_TYPES = {
"name": "Specialist Configuration",
"description": "Asset that defines a specialist configuration",
},
"TRAICIE_KO_CRITERIA_QUESTIONS": {
"name": "Traicie KO Criteria Questions",
"description": "Asset that defines KO Criteria Questions and Answers",
"partner": "traicie"
},
}


@@ -0,0 +1,8 @@
# Capsule Types
CAPSULE_TYPES = {
"TRAICIE_RQC": {
"name": "Traicie Recruitment Qualified Candidate Capsule",
"description": "A capsule storing RQCs",
"partner": "traicie"
},
}


@@ -4,8 +4,9 @@ CATALOG_TYPES = {
"name": "Standard Catalog",
"description": "A Catalog with information in Evie's Library, to be considered as a whole",
},
"DOSSIER_CATALOG": {
"name": "Dossier Catalog",
"description": "A Catalog with information in Evie's Library in which several Dossiers can be stored",
"TRAICIE_ROLE_DEFINITION_CATALOG": {
"name": "Role Definition Catalog",
"description": "A Catalog with information about roles, to be considered as a whole",
"partner": "traicie"
},
}


@@ -0,0 +1,7 @@
# Customisation Types
CUSTOMISATION_TYPES = {
"CHAT_CLIENT_CUSTOMISATION": {
"name": "Chat Client Customisation",
"description": "Parameters allowing to customise the chat client",
},
}


@@ -1,9 +1,5 @@
# config/type_defs/partner_service_types.py
PARTNER_SERVICE_TYPES = {
"REFERRAL_SERVICE": {
"name": "Referral Service",
"description": "Partner referring new customers",
},
"KNOWLEDGE_SERVICE": {
"name": "Knowledge Service",
"description": "Partner providing catalog content",


@@ -10,11 +10,6 @@ PROCESSOR_TYPES = {
"description": "A Processor for PDF files",
"file_types": "pdf",
},
"AUDIO_PROCESSOR": {
"name": "AUDIO Processor",
"description": "A Processor for audio files",
"file_types": "mp3, mp4, ogg",
},
"MARKDOWN_PROCESSOR": {
"name": "Markdown Processor",
"description": "A Processor for markdown files",
@@ -24,5 +19,10 @@ PROCESSOR_TYPES = {
"name": "DOCX Processor",
"description": "A processor for DOCX files",
"file_types": "docx",
}
},
"AUTOMAGIC_HTML_PROCESSOR": {
"name": "AutoMagic HTML Processor",
"description": "A processor for HTML files, driven by AI",
"file_types": "html, htm",
},
}
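Each processor declares its file_types as a comma-separated string, and the changelog notes that different processor types may handle the same file types. A minimal sketch of resolving a file to its candidate processors; the `processors_for` helper is ours, not the project's:

```python
# Sample data in the same shape as PROCESSOR_TYPES above.
PROCESSOR_TYPES = {
    "PDF_PROCESSOR": {"name": "PDF Processor", "file_types": "pdf"},
    "DOCX_PROCESSOR": {"name": "DOCX Processor", "file_types": "docx"},
    "AUTOMAGIC_HTML_PROCESSOR": {
        "name": "AutoMagic HTML Processor",
        "file_types": "html, htm",
    },
}

def processors_for(filename: str) -> list[str]:
    """Return every processor type key whose file_types cover the file's extension."""
    ext = filename.rsplit(".", 1)[-1].lower()
    return [
        key
        for key, cfg in PROCESSOR_TYPES.items()
        if ext in (t.strip() for t in cfg["file_types"].split(","))
    ]
```

Returning a list rather than a single key leaves room for several processors claiming the same extension, with selection (or activation state) decided elsewhere.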


@@ -28,4 +28,20 @@ PROMPT_TYPES = {
"name": "transcript",
"description": "An assistant to transform a transcript to markdown.",
},
"translation_with_context": {
"name": "translation_with_context",
"description": "An assistant to translate text with context",
},
"translation_without_context": {
"name": "translation_without_context",
"description": "An assistant to translate text without context",
},
"check_affirmative_answer": {
"name": "check_affirmative_answer",
"description": "An assistant to check if the answer to a question is affirmative",
},
"check_additional_information": {
"name": "check_additional_information",
"description": "An assistant to check if the answer to a question includes additional information or questions",
},
}


@@ -4,8 +4,15 @@ RETRIEVER_TYPES = {
"name": "Standard RAG Retriever",
"description": "Retrieving all embeddings from the catalog conform the query",
},
"DOSSIER_RETRIEVER": {
"name": "Retriever for managing DOSSIER catalogs",
"description": "Retrieving filtered embeddings from the catalog conform the query",
}
"PARTNER_RAG": {
"name": "Partner RAG Retriever",
"description": "RAG intended for partner documentation",
"partner": "evie_partner"
},
"TRAICIE_ROLE_DEFINITION_BY_ROLE_IDENTIFICATION": {
"name": "Traicie Role Definition Retriever by Role Identification",
"description": "Retrieves relevant role information for a given role",
"partner": "traicie",
"valid_catalog_types": ["TRAICIE_ROLE_DEFINITION_CATALOG"]
},
}
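The new retriever entry introduces a valid_catalog_types list restricting which catalog types it may be attached to. A sketch of checking that constraint, assuming a retriever without the key accepts any catalog type; the `retriever_accepts_catalog` helper is hypothetical:

```python
# Sample data in the same shape as RETRIEVER_TYPES above.
RETRIEVER_TYPES = {
    "RAG": {
        "name": "Standard RAG Retriever",
        "description": "Retrieving all embeddings from the catalog conform the query",
    },
    "TRAICIE_ROLE_DEFINITION_BY_ROLE_IDENTIFICATION": {
        "name": "Traicie Role Definition Retriever by Role Identification",
        "partner": "traicie",
        "valid_catalog_types": ["TRAICIE_ROLE_DEFINITION_CATALOG"],
    },
}

def retriever_accepts_catalog(retriever_key: str, catalog_type: str) -> bool:
    """Without a 'valid_catalog_types' list, a retriever accepts any catalog type."""
    allowed = RETRIEVER_TYPES[retriever_key].get("valid_catalog_types")
    return allowed is None or catalog_type in allowed
```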


@@ -0,0 +1,19 @@
# Specialist Form Types
SPECIALIST_FORM_TYPES = {
"PERSONAL_CONTACT_FORM": {
"name": "Personal Contact Form",
"description": "A form for entering your personal contact details",
},
"PROFESSIONAL_CONTACT_FORM": {
"name": "Professional Contact Form",
"description": "A form for entering your professional contact details",
},
"CONTACT_TIME_PREFERENCES_SIMPLE": {
"name": "Contact Time Preferences Form",
"description": "A form for entering contact time preferences",
},
"MINIMAL_PERSONAL_CONTACT_FORM": {
"name": "Minimal Personal Contact Form",
"description": "A minimal form for entering your personal contact details",
}
}


@@ -1,13 +1,14 @@
# Specialist Types
SPECIALIST_TYPES = {
"STANDARD_RAG_SPECIALIST": {
"name": "Q&A RAG Specialist",
"description": "Standard Q&A through RAG Specialist",
},
"RAG_SPECIALIST": {
"name": "RAG Specialist",
"description": "Q&A through RAG Specialist",
},
"PARTNER_RAG_SPECIALIST": {
"name": "Partner RAG Specialist",
"description": "Q&A through Partner RAG Specialist (for documentation purposes)",
"partner": "evie_partner"
},
"SPIN_SPECIALIST": {
"name": "Spin Sales Specialist",
"description": "A specialist that allows to answer user queries, try to get SPIN-information and Identification",
@@ -20,5 +21,9 @@ SPECIALIST_TYPES = {
"TRAICIE_SELECTION_SPECIALIST": {
"name": "Traicie Selection Specialist",
"description": "Recruitment Selection Assistant",
}
},
"TRAICIE_KO_INTERVIEW_DEFINITION_SPECIALIST": {
"name": "Traicie KO Interview Definition Specialist",
"description": "Specialist assisting in questions and answers definition for KO Criteria",
},
}


@@ -37,9 +37,23 @@ TASK_TYPES = {
"description": "A Task to get Competencies from a Vacancy Text",
"partner": "traicie"
},
"TRAICIE_GET_KO_CRITERIA_TASK": {
"name": "Traicie Get KO Criteria",
"description": "A Task to get KO Criteria from a Vacancy Text",
"TRAICIE_KO_CRITERIA_INTERVIEW_DEFINITION_TASK": {
"name": "Traicie KO Criteria Interview Definition",
"description": "A Task to define KO Criteria questions to be used during the interview",
"partner": "traicie"
},
"TRAICIE_ADVANCED_RAG_TASK": {
"name": "Traicie Advanced RAG",
"description": "A Task to perform Advanced RAG taking into account previous questions, tone of voice and language level",
"partner": "traicie"
},
"TRAICIE_AFFIRMATIVE_ANSWER_CHECK_TASK": {
"name": "Traicie Affirmative Answer Check",
"description": "A Task to check if the answer to a question is affirmative",
"partner": "traicie"
},
"TRAICIE_DETERMINE_INTERVIEW_MODE_TASK": {
"name": "Traicie Determine Interview Mode",
"description": "A Task to determine the interview mode based on the last user input",
}
}


@@ -5,6 +5,157 @@ All notable changes to EveAI will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [3.0.0-beta]
### Added
- Mobile Support for the chat client.
- Additional visual clues for chatbot and human messages in the chat client
### Changed
- Adaptation (new version) of TRAICIE_SELECTION_SPECIALIST to further humanise interactions with end users (introduction of an additional interview phase to allow divergence from the interview scenario for normal questions, and convergence back to the interview scenario).
- Humanisation of cached interaction messages (random choice)
- Adding specialist configuration information to be added as arguments for retrievers
## [2.3.12-alfa]
### Added
- Modal display of privacy statement and terms & conditions documents in eveai_chat_client
- Consent flag for consent on privacy & terms...
- Customisation option added to show or hide DynamicForm Title (and icon)
- Session Header defaults clickable, opening selection views for Partner, Tenant and Catalog
### Changed
- Document Processing View - show 'Finished Processing' instead of 'Processing' to have more logical visual indicators
- TRAICIE_SELECTION_SPECIALIST now no longer shows question to start selection procedure at initialisation.
### Fixed
- Error Messages for adding documents in 'alert'
- Correction of error in Template variable replacement, resulting in missing template variable value
## [2.3.11-alfa]
### Added
- RQC (Recruitment Qualified Candidate) export to EveAIDataCapsule
### Changed
- Adapt configuration possibilities for Chat Client
- Progress Tracker (client) level of information configuration
- Definition of an Active Region in the client to ensure proper understanding
- Adapting TRAICIE_SELECTION_SPECIALIST to retrieve preferred contact times using a form instead of free text
- Improvement of DynamicForm and FormField to handle boolean values.
## [2.3.10-alfa]
### Added
- introduction of eveai-listview that is sortable and filterable (using tabulator), with client-side pagination
- Introduction of PARTNER_RAG retriever, PARTNER_RAG_SPECIALIST and linked Agents and Tasks, to support for documentation RAG
- Domain model diagrams added
- Addition of LicensePeriod views and form
### Changed
- npm build now includes building of css files
- npm build takes information from sourcefiles, defined in the correct component locations
- eveai.css is now split into more maintainable, separate css files
- adaptation of all list views in the application
- Chat-client converted to vue components and composables
## [2.3.9-alfa]
### Added
- Translation functionality for Front-End, configs (e.g. Forms) and free text
- Introduction of TRAICIE_KO_INTERVIEW_DEFINITION_SPECIALIST
- Introduction of intelligent Q&A analysis - HumanAnswerServices
- Full VA-version of TRAICIE_SELECTION_SPECIALIST
- EveAICrewAI implementation guide
### Changed
- Allowed Languages and default_language part of Tenant Make
- Refinement of EveAI Assets to define Partner Assets and allow storage of json
- Improvements of Base & EveAICrewAI Specialists
- Catalogs & Retrievers now fully type-based, removing need for end-user definition of Tagging Fields
- RAG_SPECIALIST to support new possibilities
## [2.3.8-alfa]
### Added
- Translation Service
- Automagic HTML Processor
- Allowed languages defined at level of Tenant Make
### Changed
- Allow to activate / de-activate Processors
- Align all document views with session catalog
- Allow different processor types to handle the same file types
- Remove welcome message from tenant_make customisation, add to specialist configuration
### Fixed
- Adapt TRAICIE_ROLE_DEFINITION_SPECIALIST to latest requirements
- Allow for empty historical messages
- Ensure client can cope with empty customisation options
- Ensure only tenant-defined makes are selectable throughout the application
- Refresh partner info when adding Partner Services
## [2.3.7-alfa]
### Added
- Basic Base Specialist additions for handling phases and transferring data between state and output
- Introduction of URL and QR-code for MagicLink
### Changed
- Logging improvement & simplification (remove Graylog)
- Traicie Selection Specialist v1.3 - full roundtrip & full process
## [2.3.6-alfa]
### Added
- Full Chat Client functionality, including Forms, ESS, theming
- First Demo version of Traicie Selection Specialist
## [2.3.5-alfa]
### Added
- Chat Client Initialisation (based on SpecialistMagicLink code)
- Definition of framework for the chat_client (using vue.js)
### Changed
- Remove AllowedLanguages from Tenant
- Remove Tenant URL (now in Make)
- Adapt chat client customisation options
### Fixed
- Several Bugfixes to administrative app
## [2.3.4-alfa]
### Added
- Introduction of Tenant Make
- Introduction of 'system' type for dynamic attributes
- Introduce Tenant Make to Traicie Specialists
### Changed
- Enable Specialist 'activation' / 'deactivation'
- Unique constraints introduced for Catalog Name (tenant level) and make name (public level)
## [2.3.3-alfa]
### Added
- Add Tenant Make
- Add Chat Client customisation options to Tenant Make
### Changed
- Catalog name must be unique to avoid mistakes
### Fixed
- Ensure document version is selected in UI before trying to view it.
- Remove obsolete tab from tenant overview
## [2.3.2-alfa]
### Added
@@ -29,18 +180,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Role Definition Specialist creates Selection Specialist from generated competencies
- Improvements to Selection Specialist (Agent definition to be started)
### Deprecated
- For soon-to-be removed features.
### Removed
- For now removed features.
### Fixed
- For any bug fixes.
### Security
- In case of vulnerabilities.
## [2.3.0-alfa]
### Added
@@ -60,7 +199,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Introduction of ChatSession (Specialist Execution) follow-up in administrative interface
- Introduce npm for javascript libraries usage and optimisations
- Introduction of new top bar in administrative interface to show session defaults (removing old navbar buttons)
### Changed
- Add 'Register'-button to list views, replacing register menu-items
@@ -118,9 +256,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Fixed
- Set default language when registering Documents or URLs.
### Security
- In case of vulnerabilities.
## [2.1.0-alfa]
### Added


@@ -1,37 +1,726 @@
# Privacy Policy
*Effective Date: 2025-06-03*
### 1. Introduction
This Privacy Policy describes how EveAI collects, uses, and discloses your information when you use our services.
### 2. Information We Collect
We collect information you provide directly to us, such as account information, content you process through our services, and communication data.
### 3. How We Use Your Information
We use your information to provide, maintain, and improve our services, process transactions, send communications, and comply with legal obligations.
### 4. Data Security
We implement appropriate security measures to protect your personal information against unauthorized access, alteration, disclosure, or destruction.
### 5. International Data Transfers
Your information may be transferred to and processed in countries other than the country you reside in, where data protection laws may differ.
### 6. Your Rights
Depending on your location, you may have certain rights regarding your personal information, such as access, correction, deletion, or restriction of processing.
### 7. Changes to This Policy
We may update this Privacy Policy from time to time. We will notify you of any changes by posting the new Privacy Policy on this page.
### 8. Contact Us
If you have any questions about this Privacy Policy, please contact us at privacy@askeveai.be.

# Data Protection Agreement Ask Eve AI
## Version 1.0.0
Ask Eve AI respects the privacy of their Customers, Partners, Users and End
Users, and is strongly committed to keeping secure any information
obtained from, for or about each of them. This Data Protection Agreement
describes the practices with respect to Personal Data that Ask Eve AI
collects from or about Customers, Partners, Users and End Users when
they use the applications and services of Ask Eve AI (collectively,
"Services").
## Definitions
**Data Controller and Data Processor**: have each the meanings set out in
the Data Protection Legislation;
*Data Protection Legislation:* means the European Union's General Data
Protection Regulation 2016/679 on the protection of natural persons with
regard to the processing of personal data and on the free movement of
such data ("GDPR") and all applicable laws and regulations relating to
the processing of personal data and privacy and any amendment or
re-enactment of any of them;
*Data Subject:* has the meaning set out in the Data Protection
Legislation and shall refer, in this Data Processing Agreement to the
identified or identifiable individual(s) whose Personal Data is/are
under control of the Data Controller and is/are the subject of the
Processing by the Data Processor in the context of the Services;
*Personal Data*: has the meaning set out in the Data Protection
Legislation and shall refer, in this Data Processing Agreement to any
information relating to the Data Subject that is subject to the
Processing in the context of the Services;
*Processing*: has the meaning given to that term in the Data Protection
Legislation and "process" and "processed" shall have a corresponding
meaning;
*Purposes*: shall mean the limited, specific and legitimate purposes of
the Processing as described in the Agreement;
*Regulators:* means those government departments and regulatory,
statutory and other bodies, entities and committees which, whether under
statute, rule, regulation, code of practice or otherwise, are entitled
to regulate, investigate or influence the privacy matters dealt with in
agreements and/or by the parties to the agreements (as the case may be);
*Sub-Processor:* shall mean the subcontractor(s) listed in Annex 1,
engaged by the Data Processor to Process Personal Data on behalf of the
Data Controller and in accordance with its instructions, the terms of
this Data Processing Agreement and the terms of the written subcontract
to be entered into with the Sub-Processor;
*Third Country:* means a country outside the European Economic Area that
is not considered by the European Commission as offering an adequate
level of protection in accordance with Article 44 of the European
Union's General Data Protection Regulation 679/2016.
*Tenant / Customer*: A tenant is the organisation, enterprise or company
subscribing to the services of Ask Eve AI. Same as Customer, but more in
context of a SAAS product like Ask Eve AI.
*Partner*: Any organisation, enterprise or company that offers services
or knowledge on top of the Ask Eve AI platform.
*Account / User*: A user is a natural person performing activities like
configuration or testing in Ask Eve AI, working within the context of a
Tenant. A user is explicitly registered within the system as a member of
the tenant.
*End User*: An end user is every person making use of Ask Eve AI's services,
in the context of Ask Eve AI services exposed by the tenant
(e.g. a chatbot). This user is not explicitly registered within the
system.
*Ask Eve AI Platform*: The Ask Eve AI Platform (also referred to as
"Evie" or "platform") is the combination of software components and
products, code, configuration and prompts that allow Ask Eve AI to
perform its activities.
*Ask Eve AI Services*: Is the collection of all services on top of the
Ask Eve AI Platform offered to all users of the platform (Tenants,
Partners, Users and End Users), including all services exposed by
Partners on the Ask Eve AI platform.
*Partner Services:* Is the collection of all services and applications built on top of
the Ask Eve AI Platform offered by Partners. This excludes services
connected through API's to the Ask Eve AI platform or services connected
to the platform by any other means.
## Qualification of Parties
2.1 As part of the provision of the Services, Partner and Customer may
engage Ask Eve AI to collect, process and/or use Personal Data on its
behalf and/or Ask Eve AI may be able to access Personal Data and
accordingly, in relation to the Agreement, the Parties agree that Partner
or Customer is the Data Controller and Ask Eve AI is the Data Processor.
2.2 From time to time, Partner or Customer may request Ask Eve AI to
collect, process and/or use Personal Data on behalf of a third party for
which Ask Eve AI may be able to access Personal Data and accordingly, in
relation to the Agreement, the Parties agree that Customer is the Data
Processor and Ask Eve AI is the Data Sub-Processor.
# Data Classification
Ask Eve AI classifies data as follows:
# Data Protection {#data-protection-1}
The Data Processor warrants, represents and undertakes to the Data
Controller that it shall only process the Personal Data as limited in the
following paragraphs.
**System Data:**
Ask Eve AI System Data is the data required to enable Ask Eve AI to:
- authenticate and authorise accounts / users
- authenticate and authorise automated interfaces (APIs, sockets,
integrations)
- to invoice according to subscription and effective usage of Ask Eve
AI's services
The following personal information is gathered:
1. *Account / User Information*: This information enables a user to log
into the Ask Eve AI systems, or to subscribe to the system's
services. It includes name, e-mail address, a secured password and
roles in the system.
2. *Tenant / Customer Information*: Although not personal data in the
strict sense, in order to subscribe to the services provided by Ask
Eve AI, payment information such as financial details, VAT numbers,
valid addresses and email information is required.
**Tenant Data:**
Tenant data is all information that is added to Ask Eve AI by
- one of the tenant's registered accounts
- one of the automated interfaces (APIs, sockets, integrations)
authorised by the tenant
- interaction by one of the end users that has access to Ask Eve AI's
services exposed by the tenant
This data is required to enable Ask Eve AI to perform the
tenant-specific functions requested or defined by the Tenant, such as
enabling AI chatbots or AI specialists to work on tenant specific
information.
There's no personal data collected explicitly, however, the following
personal information is gathered:
1. *End User Content*: Ask Eve AI collects Personal Data that the End
User provides in the input to our Services ("Content") as is.
2. *Communication Information*: If the Customer communicates with Ask
Eve AI, such as via email, our pages on social media sites or the
chatbots or other interfaces we provide to our services, Ask Eve AI
may collect Personal Data like name, contact information, and the
contents of the messages the Customer sends ("Communication
Information"). End User personal information may be provided by End
User in interactions with Ask Eve AI's services, and as such will be
stored in Ask Eve AI's services as is.
**User Data:**
Ask Eve AI collects information the User may provide to Ask Eve AI,
such as when you participate in our events, surveys, ask us to get in
contact or provide us with information to establish your identity or
age.
**Technical Data:**
When you visit, use, or interact with the Services, we receive the
following information about your visit, use, or interactions ("Technical
Information"):
1. *Log Data:* Ask Eve AI collects information that your browser or
device automatically sends when the Customer uses the Services. Log
data includes the Internet Protocol address, browser type and
settings, the date and time of your request, and how the Customer
interacts with the Services.
2. *Usage Data:* Ask Eve AI collects information about the use of the
Services, such as the types of content that the Customer views or
engages with, the features the Customer uses and the actions the
Customer takes, as well as the Customer's time zone, country, the
dates and times of access, user agent and version, type of computer
or mobile device, and the Customer's computer connection.
3. *Interaction Data*: Ask Eve AI collects the data you provide when
interacting with its services, such as interacting with a chatbot
or similar advanced means.
4. *Device Information:* Ask Eve AI collects information about the
device the Customer uses to access the Services, such as the name of
the device, operating system, device identifiers, and browser you
are using. Information collected may depend on the type of device
the Customer uses and its settings.
5. *Location Information:* Ask Eve AI may determine the general area
from which your device accesses our Services based on information
like its IP address for security reasons and to make your product
experience better, for example to protect the Customer's account by
detecting unusual login activity or to provide more accurate
responses. In addition, some of our Services allow the Customer to
choose to provide more precise location information from the
Customer's device, such as location information from your device's
GPS.
6. *Cookies and Similar Technologies:* Ask Eve AI uses cookies and
similar technologies to operate and administer our Services, and
improve your experience. If the Customer uses the Services without
creating an account, Ask Eve AI may store some of the information
described in this Agreement with cookies, for example to help
maintain the Customer's preferences across browsing sessions. For
details about our use of cookies, please read our Cookie Policy.
**External Data:**
Information Ask Eve AI receives from other sources:
Ask Eve AI receives information from trusted partners, such as security
partners, to protect against fraud, abuse, and other security threats to
the Services, and from marketing vendors who provide us with information
about potential customers of our business services.
Ask Eve AI also collects information from other sources, like
information that is publicly available on the internet, to develop the
models that power the Services.
Ask Eve AI may use Personal Data for the following purposes:
- To provide, analyse, and maintain the Services, for example to respond
to the Customer's questions for Ask Eve AI;
- To improve and develop the Services and conduct research, for example
to develop new product features;
- To communicate with the Customer, including to send the Customer
information about our Services and events, for example about changes
or improvements to the Services;
- To prevent fraud, illegal activity, or misuses of our Services, and to
protect the security of our systems and Services;
- To comply with legal obligations and to protect the rights, privacy,
safety, or property of our users or third parties.
Ask Eve AI may also aggregate or de-identify Personal Data so that it no
longer identifies the Customer and use this information for the purposes
described above, such as to analyse the way our Services are being used,
to improve and add features to them, and to conduct research. Ask Eve AI
will maintain and use de-identified information in de-identified form
and not attempt to reidentify the information, unless required by law.
As noted above, Ask Eve AI may use content the Customer provides Ask Eve
AI to improve the Services, for example to train the models that power
Ask Eve AI. Read [our instructions](https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance) on
how you can opt out of our use of your Content to train our models.
## Instructions {#instructions-3}
Data Processor shall only Process Personal Data of Data Controller on
behalf of the Data Controller and in accordance with this Data
Processing Agreement, solely for the Purposes and the eventual
instructions of the Data Controller, and to the extent, and in such a
manner, as is reasonably necessary to provide the Services in accordance
with the Agreement. Data Controller shall only give instructions that
comply with the Data Protection legislation.
## Applicable mandatory laws {#applicable-mandatory-laws-3}
Data Processor shall only Process as required by applicable mandatory
laws and always in compliance with Data Protection Legislation.
## Transfer to a third party {#transfer-to-a-third-party-3}
Data Processor uses functionality of third party services to realise
its functionality. For the purpose of realising Ask Eve AI's
functionality, and only for this purpose, information is sent to its
sub-processors.
Data Processor shall not transfer or disclose any Personal Data to any
other third party and/or appoint any third party as a sub-processor of
Personal Data unless it is legally required or in case of a notification
to the Data Controller by which he gives his consent.
## Transfer to a Third Country {#transfer-to-a-third-country-3}
Data Processor shall not transfer Personal Data (including any transfer
via electronic media) to any Third Country without the prior written
consent of the Data Controller, with the exception of the following.
The Parties agree that Personal Data may only be transferred to and/or
kept with a recipient outside the European Economic Area (EEA) in a
country that does not fall under an adequacy decision issued by the
European Commission by exception, and only if necessary to comply with
the obligations of this Agreement or when legally required. Such transfer
shall be governed by the terms of a data transfer agreement containing
standard contractual clauses as published in the Decision of the
European Commission of June 4, 2021 (Decision (EU) 2021/914), or by
other mechanisms foreseen by the applicable data protection law.
The Data Processor shall prior to the international transfer inform the
Data Controller about the particular measures taken to guarantee the
protection of the Personal Data of the Data Subject in accordance with
the Regulation.
5. 1. ## Data secrecy {#data-secrecy-3}
The Data Processor shall maintain data secrecy in accordance with
applicable Data Protection Legislation and shall take all reasonable
steps to ensure that:
> \(1\) only those Data Processor personnel and the Sub-Processor
> personnel that need to have access to Personal Data are given access
> and only to the extent necessary to provide the Services; and
> \(2\) the Data Processor and the Sub-Processor personnel entrusted
> with the processing of, or who may have access to, Personal Data are
> reliable, familiar with the requirements of data protection and
> subject to appropriate obligations of confidentiality and data secrecy
> in accordance with applicable Data Protection Legislation and at all
> times act in compliance with the Data Protection Obligations.
6. 1. ## Appropriate technical and organizational measures {#appropriate-technical-and-organizational-measures-3}
Data Processor has implemented (and shall comply with) all appropriate
technical and organizational measures to ensure the security of the
Personal Data, to ensure that processing of the Personal Data is
performed in compliance with the applicable Data Protection Legislation
and to ensure the protection of the Personal Data against accidental or
unauthorized access, alteration, destruction, damage, corruption or loss
as well as against any other unauthorized or unlawful processing or
disclosure ("Data Breach"). Such measures shall ensure best practice
security, be compliant with Data Protection Legislation at all times and
comply with the Data Controller's applicable IT security policies.
Data Controller has also introduced technical and organizational
measures, and will continue to introduce them to protect its Personal
Data from accidental or unlawful destruction or accidental loss,
alteration, unauthorized disclosure or access. For the sake of clarity,
the Data Controller is responsible for the access control policy,
registration, de-registration and withdrawal of the access rights of the
Users or Consultant(s) to its systems, for the access control,
registration, de-registration and withdrawal of automation access codes
(API Keys), and is also responsible for the complete physical security
of its environment.
7. 1. ## Assistance and co-operation {#assistance-and-co-operation-3}
The Data Processor shall provide the Data Controller with such
assistance and co-operation as the Data Controller may reasonably
request to enable the Data Controller to comply with any obligations
imposed on it by Data Protection Legislation in relation to Personal
Data processed by the Data Processor, including but not limited to:
> \(1\) on request of the Data Controller, promptly providing written
> information regarding the technical and organizational measures which
> the Data Processor has implemented to safeguard Personal Data;\
> \(2\) disclosing full and relevant details in respect of any and all
> government, law enforcement or other access protocols or controls
> which it has implemented, but only in so far this information is
> available to the Data Processor;
> \(3\) notifying the Data Controller as soon as possible and as far as
> it is legally permitted to do so, of any access request for disclosure
> of data which concerns Personal Data (or any part thereof) by any
> Regulator, or by a court or other authority of competent jurisdiction.
> For the avoidance of doubt and as far as it is legally permitted to do
> so, the Data Processor shall not disclose or release any Personal Data
> in response to such request served on the Data Processor without first
> consulting with and obtaining the written consent of the Data
> Controller; and
> \(4\) notifying the Data Controller as soon as possible of any legal
> or factual circumstances preventing the Data Processor from executing
> any of the instructions of the Data Controller.
> \(5\) notifying the Data Controller as soon as possible of any request
> received directly from a Data Subject regarding the Processing of
> Personal Data, without responding to such request. For the avoidance
> of doubt, the Data Controller is solely responsible for handling and
> responding to such requests.
> \(6\) notifying the Data Controller immediately in writing if it
> becomes aware of any Data Breach and provide the Data Controller, as
> soon as possible, with information relating to a Data Breach,
> including, without limitation, but only insofar this information is
> readily available to the Data Processor: the nature of the Data Breach
> and the Personal Data affected, the categories and number of Data
> Subjects concerned, the number of Personal Data records concerned,
> measures taken to address the Data Breach, and the possible
> consequences and adverse effects of the Data Breach.
> \(7\) Where the Data Controller is legally required to provide
> information regarding the Personal Data Processed by Data Processor
> and its Processing to any Data Subject or third party, the Data
> Processor shall support the Data Controller in the provision of such
> information when explicitly requested by the Data Controller.
4. # Audit {#audit-1}
At the Data Controller's request the Data Processor shall provide the
Data Controller with all information needed to demonstrate that it
complies with this Data Processing Agreement. The Data Processor shall
permit the Data Controller, or a third-party auditor acting under the
Data Controller's direction, (but only to the extent this third-party
auditor cannot be considered a competitor of the Data Processor), to
conduct, at the Data Controller's cost (for internal and external
costs), a data privacy and security audit, concerning the Data
Processor's data security and privacy procedures relating to the
processing of Personal Data, and its compliance with the Data Protection
Obligations, but not more than once per contract year. The Data
Controller shall provide the Data Processor with at least thirty (30)
days prior written notice of its intention to perform an audit. The
notification must include the name of the auditor, a description of the
purpose and the scope of the audit. The audit has to be carried out in
such a way that the inconvenience for the Data Processor is kept to a
minimum, and the Data Controller shall impose sufficient confidentiality
obligations on its auditors. Every auditor who conducts an inspection
shall at all times be accompanied by a dedicated employee of the Data
Processor.
4. # Liability {#liability-1}
Each Party shall be liable for any foreseeable, direct and personal
damages suffered ("Direct Damages") resulting from any attributable
breach of its obligations under this Data Processing Agreement. If one
Party is held liable for a violation of its obligations hereunder, it
undertakes to indemnify the non-defaulting Party for any Direct Damages
resulting from any attributable breach of the defaulting Party's
obligations under this Data Processing Agreement or any fault or
negligence to the performance of this Data Processing Agreement. Under
no circumstances shall the Data Processor be liable for indirect,
incidental or consequential damages, including but not limited to
financial and commercial losses, loss of profit, increase of general
expenses, lost savings, diminished goodwill, damages resulting from
business interruption or interruption of operation, damages resulting
from claims of customers of the Data Controller, disruptions of
planning, loss of anticipated profit, loss of capital, loss of
customers, missed opportunities, loss of advantages or corruption and/or
loss of files resulting from the performance of the Agreement.
If it appears that
both the Data Controller and the Data Processor are responsible for the
damage caused by the processing of Personal Data, both Parties shall be
liable and pay damages, in accordance with their individual share in the
responsibility for the damage caused by the processing.
In any event the total liability
of the Data Processor under this Agreement shall be limited to the cause
of damage and to the amount that equals the total amount of fees paid by
the Data Controller to the Data Processor for the delivery and
performance of the Services for a period not more than twelve months
immediately prior to the cause of damages. In no event shall the Data
Processor be held liable if it can prove that it is not responsible for
the event or cause giving rise to the damage.
4. # Term {#term-1}
This Data Processing Agreement shall be valid for as long as the
Customer uses the Services.
After the termination of the Processing of the Personal Data or earlier
upon request of the Data Controller, the Data Processor shall cease all
use of Personal Data and delete all Personal Data and copies thereof in
its possession unless otherwise agreed or when deletion of the Personal
Data should be technically impossible.
4. # Governing law -- jurisdiction {#governing-law-jurisdiction-1}
This Data Processing Agreement and any non-contractual obligations
arising out of or in connection with it shall be governed by and
construed in accordance with Belgian Law.
Any litigation relating to the conclusion, validity, interpretation
and/or performance of this Data Processing Agreement or of subsequent
contracts or operations derived therefrom, as well as any other
litigation concerning or related to this Data Processing Agreement,
without any exception, shall be submitted to the exclusive jurisdiction
of the courts of Gent, Belgium.
# Annex 1
# Sub-Processors
The Data Controller hereby agrees to the following list of
Sub-Processors, engaged by the Data Processor for the Processing of
Personal Data under the Agreement:
+-------------+--------------------------------------------------------+
| | |
+=============+========================================================+
| **Open AI** | |
+-------------+--------------------------------------------------------+
| Address | OpenAI, L.L.C., |
| | |
| | 3180 18th St, San Francisco, |
| | |
| | CA 94110, |
| | |
| | United States of America. |
+-------------+--------------------------------------------------------+
| Contact | OpenAI's Data Protection team |
| | |
| | dsar@openai.com |
+-------------+--------------------------------------------------------+
| Description | Ask Eve AI accesses Open AI's models through Open AI's |
|             | API to realise its functionality.                      |
| | |
| | Services are GDPR compliant. |
+-------------+--------------------------------------------------------+
+---------------+------------------------------------------------------+
| | |
+===============+======================================================+
| **StackHero** | |
+---------------+------------------------------------------------------+
| Address | Stackhero |
| | |
| | 1 rue de Stockholm |
| | |
| | 75008 Paris |
| | |
| | France |
+---------------+------------------------------------------------------+
| Contact | support@stackhero.io |
+---------------+------------------------------------------------------+
| Description | StackHero is Ask Eve AI's cloud provider, and hosts |
| | the services for PostgreSQL, Redis, Docker, Minio |
|               | and Graylog.                                         |
| | |
| | Services are GDPR compliant. |
+---------------+------------------------------------------------------+
+----------------+-----------------------------------------------------+
| | |
+================+=====================================================+
| **A2 Hosting** | |
+----------------+-----------------------------------------------------+
| Address | A2 Hosting, Inc. |
| | |
| | PO Box 2998 |
| | |
| | Ann Arbor, MI 48106 |
| | |
| | United States |
+----------------+-----------------------------------------------------+
| Contact        | [*+1 734-222-4678*](tel:+17342224678)               |
+----------------+-----------------------------------------------------+
| Description    | A2 Hosting hosts our main web server and mail       |
|                | server, both on European servers (Iceland). It does |
|                | not handle data of our business applications.       |
| | |
| | Services are GDPR compliant. |
+----------------+-----------------------------------------------------+
# Annex 2
# Technical and organizational measures
# 1. Purpose of this document
This document contains an overview of the technical and operational
measures which are applicable by default within Ask Eve AI. The actual
measures taken depend on the services provided and the specific customer
context. Following a Data Protection Impact Assessment (DPIA), Ask Eve
AI guarantees that it has, for all its services and sites, the necessary
and adequate technical and operational measures included in the list
below.
These measures are designed to:
1. ensure the security and confidentiality of Ask Eve AI managed data,
information, applications and infrastructure;
2. protect against any anticipated threats or hazards to the security
and integrity of Personal Data, Ask Eve AI Intellectual Property,
Infrastructure or other business-critical assets;
3. protect against any actual unauthorized processing, loss, use,
disclosure or acquisition of or access to any Personal Data or other
business-critical information or data managed by Ask Eve AI.
Ask Eve AI ensures that all its Sub-Processors have provided the
necessary and required guarantees on the protection of personal data
they process on Ask Eve AI's behalf.
Ask Eve AI continuously monitors the effectiveness of its information
safeguards and organizes a yearly compliance audit by a Third Party to
provide assurance on the measures and controls in place.
# 2. Technical & Organizational Measures
Ask Eve AI has designed, invested and implemented a dynamic
multi-layered security architecture protecting its endpoints, locations,
cloud services and custom-developed business applications against
today's variety of cyberattacks ranging from spear phishing, malware,
viruses to intrusion, ransomware and data loss / data breach incidents
by external and internal bad actors.
This architecture, internationally recognized and awarded, is a
combination of automated proactive, reactive and forensic quarantine
measures and Ask Eve AI internal awareness and training initiatives that
creates an end-to-end chain of protection to identify, classify and
stop any potential malicious action on Ask Eve AI's digital
infrastructure. Ask Eve AI uses an intent-based approach where
activities are constantly monitored, analysed and benchmarked instead of
relying solely on a simple authentication/authorization trust model.
4. 1. ## General Governance & Awareness {#general-governance-awareness-3}
As a product company, Ask Eve AI is committed to maintaining and
preserving an IT infrastructure that has a robust security architecture,
complies
with data regulation policies and provides a platform to its employees
for flexible and effective work and collaboration activities with each
other and our customers.
Ask Eve AI IT has a cloud-first and cloud-native strategy and as such
works with several third-party vendors that store and process our
company data. Ask Eve AI IT aims to work exclusively with vendors that
are compliant with the national and European Data Protection
Regulations. Transfers of Personal Data to third-countries are subject
to compliance by the third-country Processor/Sub-Processor with the
Standard Contractual Clauses as launched by virtue of the EU Commission
Decision 2010/87/EU of 5 February 2010, as updated by the EU Commission
Decision (EU) 2021/914 of 4 June 2021, unless the third country of the
Processor/Sub-Processor has been qualified as providing an adequate
level of protection for Personal Data by the European Commission, (a.o.
EU-U.S. Data Privacy Framework).
Ask Eve AI has an extensive IT policy applicable to any employee or
service provider that uses Ask Eve AI platforms or infrastructure. This
policy informs users of their rights and duties and of the existing
monitoring mechanisms used to enforce security and data compliance. The
policy is updated regularly and is an integral part of new employee
onboarding and of continuous training and development initiatives on
internal tooling and cyber security;
Ask Eve AI IT has several internal policies on minimal requirements
before an application, platform or tool can enter our application
landscape. These include encryption requirements, DLP requirements,
transparent governance & licensing requirements and certified support
contract procedures & certifications;
These policies are actively enforced through our endpoint security, CASB
and cloud firewall solutions. Any infraction on these policies is met
with appropriate action and countermeasures and may result in a complete
ban from using and accessing Ask Eve AI's infrastructure and platforms
or even additional legal action against employees, clients or other
actors;
1. 1. ## Physical Security & Infrastructure {#physical-security-infrastructure-3}
Ask Eve AI has deployed industry-standard physical access controls to
its location for employee presence and visitor management.
Restricted environments including network infrastructure, data center
and server rooms are safeguarded by additional access controls and
access to these rooms is audited. CCTV surveillance is present in all
restricted and critical areas.
Fire alarm and firefighting systems are implemented for employee and
visitor safety. Regular fire simulations and evacuation drills are
performed.
Clean desk policies are enforced; employees who are regularly in contact
with sensitive information have private offices and follow-me printing
enabled.
Key management governance is implemented and handled by Facilities.
1. 1. ## Endpoint Security & User Accounts {#endpoint-security-user-accounts-3}
All endpoints and any information stored are encrypted using
enterprise-grade encryption on all operating systems supported by Ask
Eve AI.
Ask Eve AI has implemented a centrally managed anti-virus and malware
protection system for endpoints, email and document stores.
Multifactor Authentication is enforced on all user accounts where
possible.
Conditional Access is implemented across the entire infrastructure
limiting access to specific regions and setting minimum requirements for
the OS version, network security level, endpoint protection level and
user behavior.
Only vendor supplied updates are installed.
Ask Eve AI has deployed a comprehensive device management strategy to
ensure endpoint integrity and policy compliance.
Access is managed according to role-based access control principles and
all user behavior on Ask Eve AI platforms is audited.
1. 1. ## Data Storage, Recovery & Securing Personal Data {#data-storage-recovery-securing-personal-data-3}
> Ask Eve AI has deployed:
- An automated multi-site encrypted back-up process with daily integrity
reviews.
- The possibility for the anonymization, pseudonymization and encryption
of Personal Data.
- The ability to monitor and ensure the ongoing confidentiality,
integrity, availability and resilience of processing systems and
services.
- The ability to restore the availability and access to Personal Data in
a timely manner in the event of a physical or technical incident.
- A logical separation between its own data, the data of its customers
and suppliers.
- A process to keep processed data accurate, reliable and up-to-date.
- Records of the processing activities.
- Data Retention Policies.
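The pseudonymization measure listed above can be illustrated with a keyed hash: the same identifier always maps to the same token, but the token cannot be reversed without the secret key. This is a minimal sketch of one common approach, not Ask Eve AI's actual implementation; the key value and function name are hypothetical.

```shell
#!/usr/bin/env bash
# Minimal pseudonymization sketch (assumed approach): an HMAC-SHA256
# over the identifier yields a stable, non-reversible token.
SECRET_KEY="example-secret"   # hypothetical; a real key belongs in a key vault

pseudonymize() {
    # Hash the identifier with the secret key; print only the hex digest.
    printf '%s' "$1" | openssl dgst -sha256 -hmac "$SECRET_KEY" | awk '{print $NF}'
}

pseudonymize "jane.doe@example.com"
```

Because the token is deterministic, pseudonymized records can still be joined and deduplicated, while re-identification requires the key.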
1. 1. ## Protection & Insurance {#protection-insurance-3}
Ask Eve AI has a cyber-crime insurance policy. Details on the policy can
be requested through the legal department.
@@ -24,7 +24,7 @@ x-common-variables: &common-variables
FLOWER_PASSWORD: 'Jungles'
OPENAI_API_KEY: 'sk-proj-8R0jWzwjL7PeoPyMhJTZT3BlbkFJLb6HfRB2Hr9cEVFWEhU7'
GROQ_API_KEY: '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'
MISTRAL_API_KEY: 'jGDc6fkCbt0iOC0jQsbuZhcjLWBPGc2b'
MISTRAL_API_KEY: '0f4ZiQ1kIpgIKTHX8d0a8GOD2vAgVqEn'
ANTHROPIC_API_KEY: 'sk-ant-api03-c2TmkzbReeGhXBO5JxNH6BJNylRDonc9GmZd0eRbrvyekec2'
JWT_SECRET_KEY: 'bsdMkmQ8ObfMD52yAFg4trrvjgjMhuIqg2fjDpD/JqvgY0ccCcmlsEnVFmR79WPiLKEA3i8a5zmejwLZKl4v9Q=='
API_ENCRYPTION_KEY: 'xfF5369IsredSrlrYZqkM9ZNrfUASYYS6TCcAR9UKj4='
@@ -70,6 +70,7 @@ services:
depends_on:
- eveai_app
- eveai_api
- eveai_chat_client
networks:
- eveai-network
@@ -143,39 +144,43 @@ services:
networks:
- eveai-network
# eveai_chat:
# image: josakola/eveai_chat:latest
# build:
# context: ..
# dockerfile: ./docker/eveai_chat/Dockerfile
# platforms:
# - linux/amd64
# - linux/arm64
# ports:
# - 5002:5002
# environment:
# <<: *common-variables
# COMPONENT_NAME: eveai_chat
# volumes:
# - ../eveai_chat:/app/eveai_chat
# - ../common:/app/common
# - ../config:/app/config
# - ../scripts:/app/scripts
# - ../patched_packages:/app/patched_packages
# - ./eveai_logs:/app/logs
# depends_on:
# db:
# condition: service_healthy
# redis:
# condition: service_healthy
# healthcheck:
# test: [ "CMD", "curl", "-f", "http://localhost:5002/healthz/ready" ] # Adjust based on your health endpoint
# interval: 30s
# timeout: 1s
# retries: 3
# start_period: 30s
# networks:
# - eveai-network
eveai_chat_client:
image: josakola/eveai_chat_client:latest
build:
context: ..
dockerfile: ./docker/eveai_chat_client/Dockerfile
platforms:
- linux/amd64
- linux/arm64
ports:
- 5004:5004
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_chat_client
volumes:
- ../eveai_chat_client:/app/eveai_chat_client
- ../common:/app/common
- ../config:/app/config
- ../scripts:/app/scripts
- ../patched_packages:/app/patched_packages
- ./eveai_logs:/app/logs
depends_on:
db:
condition: service_healthy
redis:
condition: service_healthy
minio:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5004/healthz/ready"]
interval: 30s
timeout: 1s
retries: 3
start_period: 30s
networks:
- eveai-network
eveai_chat_workers:
image: josakola/eveai_chat_workers:latest
@@ -441,4 +446,3 @@ volumes:
#secrets:
# db-password:
# file: ./db/password.txt
@@ -26,7 +26,7 @@ x-common-variables: &common-variables
REDIS_PORT: '6379'
FLOWER_USER: 'Felucia'
FLOWER_PASSWORD: 'Jungles'
MISTRAL_API_KEY: 'Vkwgr67vUs6ScKmcFF2QVw7uHKgq0WEN'
MISTRAL_API_KEY: 'qunKSaeOkFfLteNiUO77RCsXXSLK65Ec'
JWT_SECRET_KEY: '7e9c8b3a215f4d6e90712c5d8f3b97a60e482c15f39a7d68bcd45910ef23a784'
API_ENCRYPTION_KEY: 'kJ7N9p3IstyRGkluYTryM8ZMnfUBSXWR3TCfDG9VLc4='
MINIO_ENDPOINT: minio:9000
@@ -56,6 +56,7 @@ services:
depends_on:
- eveai_app
- eveai_api
- eveai_chat_client
networks:
- eveai-network
restart: "no"
@@ -106,6 +107,33 @@ services:
- eveai-network
restart: "no"
eveai_chat_client:
image: josakola/eveai_chat_client:${EVEAI_VERSION:-latest}
ports:
- 5004:5004
expose:
- 8000
environment:
<<: *common-variables
COMPONENT_NAME: eveai_chat_client
volumes:
- eveai_logs:/app/logs
- crewai_storage:/app/crewai_storage
depends_on:
redis:
condition: service_healthy
minio:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5004/healthz/ready"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
networks:
- eveai-network
restart: "no"
eveai_chat_workers:
image: josakola/eveai_chat_workers:${EVEAI_VERSION:-latest}
expose:
@@ -115,15 +115,41 @@ echo "Set COMPOSE_FILE to $COMPOSE_FILE"
echo "Set EVEAI_VERSION to $VERSION"
echo "Set DOCKER_ACCOUNT to $DOCKER_ACCOUNT"
# Define aliases for common Docker commands
alias docker-compose="docker compose -f $COMPOSE_FILE"
alias dc="docker compose -f $COMPOSE_FILE"
alias dcup="docker compose -f $COMPOSE_FILE up -d --remove-orphans"
alias dcdown="docker compose -f $COMPOSE_FILE down"
alias dcps="docker compose -f $COMPOSE_FILE ps"
alias dclogs="docker compose -f $COMPOSE_FILE logs"
alias dcpull="docker compose -f $COMPOSE_FILE pull"
alias dcrefresh="docker compose -f $COMPOSE_FILE pull && docker compose -f $COMPOSE_FILE up -d --remove-orphans"
docker-compose() {
docker compose -f $COMPOSE_FILE "$@"
}
dc() {
docker compose -f $COMPOSE_FILE "$@"
}
dcup() {
docker compose -f $COMPOSE_FILE up -d --remove-orphans "$@"
}
dcdown() {
docker compose -f $COMPOSE_FILE down "$@"
}
dcps() {
docker compose -f $COMPOSE_FILE ps "$@"
}
dclogs() {
docker compose -f $COMPOSE_FILE logs "$@"
}
dcpull() {
docker compose -f $COMPOSE_FILE pull "$@"
}
dcrefresh() {
docker compose -f $COMPOSE_FILE pull && docker compose -f $COMPOSE_FILE up -d --remove-orphans "$@"
}
# Export the functions so they are available in other scripts
export -f docker-compose dc dcup dcdown dcps dclogs dcpull dcrefresh
echo "Docker environment switched to $ENVIRONMENT with version $VERSION"
echo "You can now use 'docker-compose', 'dc', 'dcup', 'dcdown', 'dcps', 'dclogs', 'dcpull' or 'dcrefresh' commands"
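The change above replaces `alias` definitions with exported functions for a reason: aliases are not expanded in non-interactive shells and cannot be exported with `export -f`, so they vanish inside child scripts, whereas a function forwards its arguments via `"$@"` and survives into subshells. A minimal sketch (with `echo` standing in for the real `docker compose` invocation, and a hypothetical compose file name):

```shell
#!/usr/bin/env bash
# Sketch of the alias-to-function change: unlike an alias, a function
# forwards "$@" and can be exported so child scripts still see it.
export COMPOSE_FILE="docker-compose.prod.yml"   # hypothetical file name

dc() {
    # `echo` stands in for the real call so the sketch runs anywhere;
    # the actual function runs: docker compose -f "$COMPOSE_FILE" "$@"
    echo docker compose -f "$COMPOSE_FILE" "$@"
}
export -f dc

bash -c 'dc ps --all'   # prints: docker compose -f docker-compose.prod.yml ps --all
```

Note that `COMPOSE_FILE` must be exported too, since the function body expands it in the child shell, not at definition time.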