58 Commits

Author SHA1 Message Date
Josako
b6ee7182de - Adding Prometheus and Grafana services in development
- Adding Prometheus metrics to the business events
- Ensure asynchronous behaviour of crewai specialists
- Adapt business events to work in mixed synchronous / asynchronous contexts
- Extend business events with specialist information
- Started adding a Grafana dashboard (TBC)
2025-03-24 16:39:22 +01:00
Josako
238bdb58f4 - Upgrade to crewai v108 2025-03-18 14:46:06 +01:00
Josako
a35486b573 - Removed specialist models no longer in use from navigation (They were already removed from the rest of the code) 2025-03-18 14:45:47 +01:00
Josako
dc64bbc257 - Corrected old reference to catalog embedding model 2025-03-18 14:45:03 +01:00
Josako
09555ae8b0 - Corrected old reference to catalog embedding model 2025-03-18 14:44:43 +01:00
Josako
cf2201a1f7 - Started addition of Assets (e.g. to handle document templates).
- To be continued (models and first views are ready)
2025-03-17 17:40:42 +01:00
Josako
a6402524ce - Correct bug where URL can be too long due to tracking parameters ==> added clean_url function, to be called before adding a URL. 2025-03-17 17:39:32 +01:00
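
The commit does not show the implementation. A minimal sketch of what a clean_url helper could look like, assuming it drops common tracking parameters (the parameter set below is illustrative, not taken from the repository):

from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical parameter set; the actual list used by clean_url is not shown in this commit.
TRACKING_PARAMS = {'fbclid', 'gclid', 'msclkid'}

def clean_url(url: str) -> str:
    """Drop tracking parameters (utm_* and friends) so stored URLs stay short."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(query, keep_blank_values=True)
            if k not in TRACKING_PARAMS and not k.startswith('utm_')]
    return urlunsplit((scheme, netloc, path, urlencode(kept), fragment))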
Josako
56a00c2894 - Add DOSSIER Catalog management possibilities to eveai_app. 2025-03-12 11:25:48 +01:00
Josako
6465e4f358 - Re-introduced detail_question to crewai specialists 2025-03-10 15:49:21 +01:00
Josako
4b43f96afe - Move RAG from Langchain to crewai 2025-03-10 08:31:15 +01:00
Josako
e088ef7e4e - Remove embedding model from Catalog. We use Mistral's embedding. 2025-03-07 15:06:51 +01:00
Josako
9e03af45e1 - small improvement to RAG to not repeat historic answers 2025-03-07 15:06:20 +01:00
Josako
5bfd3445bb - adding usage to specialist execution
- Correcting implementation of usage
- Removed some obsolete debug statements
2025-03-07 11:10:28 +01:00
Josako
efff63043a - Removed 'Add Multiple URLs' for navigation (AEA-2) 2025-03-06 14:20:32 +01:00
Josako
c15cabc289 - Move to Mistral instead of OpenAI as primary choice 2025-03-06 14:19:35 +01:00
Josako
55a89c11bb - Move from OpenAI to Mistral Embeddings
- Move embedding model settings from tenant to catalog
- BUG: error processing configuration for chunking patterns in HTML_PROCESSOR
- Removed eveai_chat from Docker files and nginx configuration, as it is now obsolete
- BUG: error in Library Operations when creating a new default RAG library
- BUG: Added public type in migration scripts
- Removed SocketIO from all code and requirements.txt
2025-02-25 11:17:19 +01:00
Josako
c037d4135e - Minor changes to the SPIN_SPECIALIST 2025-02-20 11:35:14 +01:00
Josako
25213f2004 - Implementation of specialist execution api, including SSE protocol
- eveai_chat becomes deprecated and should be replaced with SSE
- Adaptation of STANDARD_RAG specialist
- Base class definition allowing specialists to be realised with the crewai framework
- Implementation of SPIN_SPECIALIST
- Implementation of test app for testing specialists (test_specialist_client). Also serves as an example for future SSE-based client
- Improvements to startup scripts to better handle and scale multiple connections
- Small improvements to the interaction forms and views
- Caching implementation improved and augmented with additional caches
2025-02-20 05:50:16 +01:00
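
The test client itself is not included on this page. A minimal sketch of an SSE consumer in the spirit of test_specialist_client, assuming a hypothetical execution endpoint and JSON payloads in the data: field (URL path and payload shape are illustrative):

import json
import requests  # assumed available; any streaming HTTP client works

def execute_specialist(base_url, specialist_id, question):
    # Hypothetical endpoint; the real route is defined by the specialist execution API.
    url = f"{base_url}/api/specialists/{specialist_id}/execute"
    with requests.post(url, json={'question': question}, stream=True) as resp:
        resp.raise_for_status()
        for raw in resp.iter_lines(decode_unicode=True):
            if raw and raw.startswith('data:'):
                yield json.loads(raw[len('data:'):].strip())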
Josako
d106520d22 - Finish editing of Specialists with overview, agent - task - tool editor
- Split different caching mechanisms (types, version tree, config) into different cachers
- Improve resource usage on starting components, and correct gevent usage
- Refine repopack usage for eveai_app (too large)
- Change nginx dockerfile to allow for specialist overviews being served statically
2025-01-23 09:43:48 +01:00
Josako
7bddeb0ebd - Add configuration of agents, tasks, tools, specialist in context of SPIN specialist
- correct startup of applications using gevent
- introduce startup scripts (eveai_app)
- caching manager for all configurations
2025-01-16 20:27:00 +01:00
Josako
f7cd58ed2a - Zapier Document Refresh action (create) added 2024-12-17 16:40:21 +01:00
Josako
53c625599a - Release documentation update 2024-12-13 11:18:07 +01:00
Josako
88ee4f482b - Try to move Add... functionality to overviews 2024-12-13 10:41:29 +01:00
Josako
3176b95323 - finished add_document on Zapier interface 2024-12-13 10:40:57 +01:00
Josako
46c60b36a0 - First 'working' version of the Zapier plugin. Needs further debugging and additional functionality (only add_document.js) 2024-12-12 16:36:41 +01:00
Josako
d35ec9f5ae - Addition of general chunking parameters chunking_heading_level and chunking patterns
- Addition of Processor types docx and markdown
2024-12-05 15:19:37 +01:00
Josako
311927d5ea Just because 2024-11-29 14:11:49 +01:00
Josako
fb798501b9 - Build and Release script for WordPress plugins 2024-11-29 14:11:36 +01:00
Josako
99135c9b02 - Updated OpenAI client due to 'proxies' error (no longer supported). 2024-11-29 14:11:07 +01:00
Josako
425b580f15 - Corrected problem where Language Dropdown was not filled 2024-11-29 11:49:50 +01:00
Josako
b658e68e65 - Minor bugfixes 2024-11-29 11:24:32 +01:00
Josako
b8e07bec77 - RAG Specialist was not using detailed_question but the original question, resulting in Evie not returning good or correct answers. 2024-11-29 11:23:54 +01:00
Josako
344ea26ecc - Security improvements to Docker images (Docker Scout advice) 2024-11-27 12:27:28 +01:00
Josako
98cb4e4f2f - Created a new eveai_chat plugin to support the new dynamic possibilities of the Specialists. Currently only supports standard Rag retrievers (i.e. no extra arguments). 2024-11-27 12:26:49 +01:00
Josako
07d89d204f - Created a new eveai_chat plugin to support the new dynamic possibilities of the Specialists. Currently only supports standard Rag retrievers (i.e. no extra arguments). 2024-11-26 13:35:29 +01:00
Josako
7702a6dfcc - Modernized authentication with the introduction of TenantProject
- Created a base mail template
- Adapt and improve document API to usage of catalogs and processors
- Adapt eveai_sync to new authentication mechanism and usage of catalogs and processors
2024-11-21 17:24:33 +01:00
Josako
4c009949b3 - Changes to support SpecialistID being passed instead of CatalogID
- Removed error that stopped sync
2024-11-15 13:13:45 +01:00
Josako
aa4ac3ec7c - Changes to support SpecialistID being passed instead of CatalogID
- Removed error that stopped sync
2024-11-15 13:13:33 +01:00
Josako
1807435339 - Introduction of dynamic Retrievers & Specialists
- Introduction of dynamic Processors
- Introduction of caching system
- Introduction of a better template manager
- Adaptation of ModelVariables to support dynamic Processors / Retrievers / Specialists
- Start adaptation of chat client
2024-11-15 10:00:53 +01:00
Josako
55a8a95f79 - Finalisation of the Specialist model, forms and views 2024-11-04 11:22:40 +01:00
Josako
503ea7965d - Temporary checkin to branch for the rest of the introduction of experts 2024-11-03 16:18:14 +01:00
Josako
88f4db1178 - Organise retrievers 2024-11-01 11:19:55 +01:00
Josako
2df291ea91 - Organise retrievers 2024-11-01 11:19:34 +01:00
Josako
5841525b4c - When no explicit path is given in the browser, we automatically get redirected to the admin interface (eveai_app)
- Tuning moved to Retriever instead of the configuration, as this is an attribute that should be available for all types of Retrievers
2024-10-31 08:32:02 +01:00
Josako
532073d38e - Add dynamic fields to DocumentVersion in case the Catalog requires it. 2024-10-30 13:52:18 +01:00
Josako
43547287b1 - Refining & Enhancing dynamic fields
- Creating a specialized Form class for handling dynamic fields
- Refinement of HTML-macros to handle dynamic fields
- Introduction of dynamic fields for Catalogs
2024-10-29 09:17:44 +01:00
Josako
aa358df28e - Allowing for multiple types of Catalogs
- Introduction of retrievers
- Ensuring processing information is collected from Catalog instead of Tenant
- Introduction of a generic Form class to enable dynamic fields based on a configuration
- Realisation of Retriever functionality to support dynamic fields
2024-10-25 14:11:47 +02:00
Josako
30fec27488 - Release script added to tag in both git and docker 2024-10-21 07:45:06 +02:00
Josako
5e77b478dd - Release script added to tag in both git and docker 2024-10-17 11:22:18 +02:00
Josako
6f71259822 - Changelog update 2024-10-17 10:35:51 +02:00
Josako
74cc7ae95e - Adapt Sync WordPress Component to Catalog introduction
- Small bug fixes
2024-10-17 10:31:13 +02:00
Josako
7f12c8b355 - Remove obsolete fields from Tenant model (Catalog introduction) 2024-10-16 13:59:57 +02:00
Josako
6069f5f7e5 - Catalog functionality integrated into document and document_version views
- small bugfixes and improvements
2024-10-16 13:09:19 +02:00
Josako
3e644f1652 - Add Catalog Functionality 2024-10-15 18:14:57 +02:00
Josako
3316a8bc47 - Small changes to show when upgrades are finished 2024-10-14 16:40:56 +02:00
Josako
270479c77d - Add Catalog Concept to Document Domain
- Create Catalog views
- Modify document stack creation
2024-10-14 13:56:23 +02:00
Josako
0f4558d775 - Small fix in interaction view, as it still referred to file_name 2024-10-11 18:14:35 +02:00
Josako
9f5f090f0c - License Usage Calculation realised
- View License Usages
- Celery Beat container added
- First schedule in Celery Beat for calculating usage (hourly)
- repopack can now split for different components
- Various fixes as consequence of changing file_location / file_name ==> bucket_name / object_name
- Celery Routing / Queuing updated
2024-10-11 16:33:36 +02:00
340 changed files with 25605 additions and 3647 deletions

.gitignore

@@ -42,3 +42,13 @@ migrations/public/.DS_Store
scripts/.DS_Store
scripts/__pycache__/run_eveai_app.cpython-312.pyc
/eveai_repo.txt
*repo.txt
/docker/eveai_logs/
/integrations/Wordpress/eveai_sync.zip
/integrations/Wordpress/eveai-chat.zip
/db_backups/
/tests/interactive_client/specialist_client.log
/.repopackignore
/patched_packages/crewai/
/docker/prometheus/data/
/docker/grafana/data/

.idea/misc.xml

@@ -3,5 +3,5 @@
<component name="Black">
<option name="sdkName" value="Python 3.12 (eveai_tbd)" />
</component>
<component name="ProjectRootManager" version="2" project-jdk-name="Python 3.12 (eveai_tbd)" project-jdk-type="Python SDK" />
<component name="ProjectRootManager" version="2" project-jdk-name="Python 3.12 (TBD)" project-jdk-type="Python SDK" />
</project>


@@ -1 +1 @@
eveai_tbd
3.12.7


@@ -2,6 +2,7 @@
# Example:
# *.log
# tmp/
db_backups/
logs/
nginx/static/assets/fonts/
nginx/static/assets/img/
@@ -12,10 +13,10 @@ migrations/
*material*
*nucleo*
*package*
*.svg
nginx/mime.types
*.gitignore*
.python-version
.repopackignore
.repopackignore*
repopack.config.json
*repo.txt


@@ -0,0 +1,12 @@
docker/
eveai_api/
eveai_app/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/
nginx/
scripts/

.repopackignore_docker

@@ -0,0 +1,12 @@
common/
config/
eveai_api/
eveai_app/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/
nginx/

.repopackignore_eveai_api

@@ -0,0 +1,10 @@
docker/
eveai_app/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
instance/
integrations/Wordpress/eveai-chat
nginx/
scripts/

.repopackignore_eveai_app

@@ -0,0 +1,12 @@
docker/
eveai_api/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/
migrations/
nginx/
scripts/


@@ -0,0 +1,28 @@
docker/
eveai_api/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/
migrations/
nginx/
scripts/
common/models/entitlements.py
common/models/interaction.py
common/models/user.py
config/agents/
config/prompts/
config/specialists/
config/tasks/
config/tools/
eveai_app/templates/administration/
eveai_app/templates/entitlements/
eveai_app/templates/interaction/
eveai_app/templates/user/
eveai_app/views/administration*
eveai_app/views/entitlements*
eveai_app/views/interaction*
eveai_app/views/user*


@@ -0,0 +1,28 @@
docker/
eveai_api/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/
migrations/
nginx/
scripts/
common/models/document.py
common/models/interaction.py
common/models/user.py
config/agents/
config/prompts/
config/specialists/
config/tasks/
config/tools/
eveai_app/templates/administration/
eveai_app/templates/document/
eveai_app/templates/interaction/
eveai_app/templates/user/
eveai_app/views/administration*
eveai_app/views/document*
eveai_app/views/interaction*
eveai_app/views/user*


@@ -0,0 +1,23 @@
docker/
eveai_api/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/
migrations/
nginx/
scripts/
common/models/entitlements.py
common/models/document.py
common/models/user.py
eveai_app/templates/administration/
eveai_app/templates/entitlements/
eveai_app/templates/document/
eveai_app/templates/user/
eveai_app/views/administration*
eveai_app/views/entitlements*
eveai_app/views/document*
eveai_app/views/user*


@@ -0,0 +1,13 @@
eveai_api/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
eveai_app/templates/
eveai_app/views/
instance/
integrations/
migrations/
nginx/
scripts/


@@ -0,0 +1,28 @@
docker/
eveai_api/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/
migrations/
nginx/
scripts/
common/models/entitlements.py
common/models/interaction.py
common/models/document.py
config/agents/
config/prompts/
config/specialists/
config/tasks/
config/tools/
eveai_app/templates/administration/
eveai_app/templates/entitlements/
eveai_app/templates/interaction/
eveai_app/templates/document/
eveai_app/views/administration*
eveai_app/views/entitlements*
eveai_app/views/interaction*
eveai_app/views/document*


@@ -0,0 +1,11 @@
docker/
eveai_api/
eveai_app/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/
nginx/
scripts/


@@ -0,0 +1,10 @@
docker/
eveai_app/
eveai_beat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/Wordpress/eveai_sync
nginx/
scripts/


@@ -0,0 +1,11 @@
docker/
eveai_api/
eveai_app/
eveai_beat/
eveai_chat/
eveai_entitlements/
eveai_workers/
instance/
integrations/
nginx/
scripts/


@@ -0,0 +1,11 @@
docker/
eveai_api/
eveai_app/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_workers/
instance/
integrations/
nginx/
scripts/


@@ -0,0 +1,11 @@
docker/
eveai_api/
eveai_app/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
instance/
integrations/
nginx/
scripts/

.repopackignore_full

@@ -0,0 +1,4 @@
docker
integrations
nginx
scripts


@@ -0,0 +1,13 @@
common/
config/
docker/
eveai_api/
eveai_app/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
nginx/
scripts/

.repopackignore_nginx

@@ -0,0 +1,11 @@
docker/
eveai_api/
eveai_app/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/
scripts/


@@ -0,0 +1,11 @@
docker/
eveai_api/
eveai_app/
eveai_beat/
eveai_chat/
eveai_entitlements/
eveai_workers/
instance/
integrations/
nginx/
scripts/


@@ -25,6 +25,146 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Security
- In case of vulnerabilities.
## [2.1.0-alfa]
### Added
- Zapier Refresh Document
- SPIN Specialist definition - from start to finish
- Introduction of startup scripts in eveai_app
- Caching for all configurations added
- Caching for processed specialist configurations
- Caching for specialist history
- Augmented Specialist Editor, including a graphical presentation of the Specialist
- Introduction of specialist_execution_api, introducing SSE
- Introduction of crewai framework for specialist implementation
- Test app for testing specialists - also serves as a sample client application for SSE
### Changed
- Improvement of startup of applications using gevent, and better handling and scaling of multiple connections
- STANDARD_RAG Specialist improvement
### Deprecated
- eveai_chat - using sockets - will be replaced with new specialist_execution_api and SSE
## [2.0.1-alfa]
### Added
- Zapier Integration (partial - only adding files).
- Addition of general chunking parameters (chunking_heading_level and chunking_patterns)
- Addition of DocX and markdown Processor Types
### Changed
- For changes in existing functionality.
### Deprecated
- For soon-to-be removed features.
### Removed
- For now removed features.
### Fixed
- Ensure the RAG Specialist is using the detailed_question
- WordPress Chat Plugin: languages dropdown filled again
- OpenAI update - proxies no longer supported
- Build & Release script for WordPress plugins (including end user download folder)
### Security
- In case of vulnerabilities.
## [2.0.0-alfa]
### Added
- Introduction of dynamic Retrievers & Specialists
- Introduction of dynamic Processors
- Introduction of caching system
- Introduction of a better template manager
- Modernisation of external API/Socket authentication using projects
- Creation of new eveai_chat WordPress plugin to support specialists
### Changed
- Update of eveai_sync WordPress plugin
### Fixed
- Set default language when registering Documents or URLs.
### Security
- Security improvements to Docker images
## [1.0.14-alfa]
### Added
- New release script added to tag images with release number
- Allow the addition of multiple types of Catalogs
- Generic functionality to enable dynamic fields
- Addition of Retrievers to allow for smart collection of information in Catalogs
- Add dynamic fields to Catalog / Retriever / DocumentVersion
### Changed
- Processing parameters defined at Catalog level instead of Tenant level
- Reroute 'blank' paths to 'admin'
### Deprecated
- For soon-to-be removed features.
### Removed
- For now removed features.
### Fixed
- Set default language when registering Documents or URLs.
### Security
- In case of vulnerabilities.
## [1.0.13-alfa]
### Added
- Finished Catalog introduction
- Reinitialization of WordPress site for syncing
### Changed
- Modification of WordPress Sync Component
- Cleanup of attributes in Tenant
### Fixed
- Overall bugfixes as result from the Catalog introduction
## [1.0.12-alfa]
### Added
- Added Catalog functionality
### Changed
- For changes in existing functionality.
### Deprecated
- For soon-to-be removed features.
### Removed
- For now removed features.
### Fixed
- Set default language when registering Documents or URLs.
### Security
- In case of vulnerabilities.
## [1.0.11-alfa]
### Added
- License Usage Calculation realised
- View License Usages
- Celery Beat container added
- First schedule in Celery Beat for calculating usage (hourly)
### Changed
- repopack can now split for different components
### Fixed
- Various fixes as consequence of changing file_location / file_name ==> bucket_name / object_name
- Celery Routing / Queuing updated
## [1.0.10-alfa]
### Added


@@ -0,0 +1,11 @@
from abc import abstractmethod
from typing import List


class EveAIEmbeddings:
    @abstractmethod
    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        raise NotImplementedError

    def embed_query(self, text: str) -> List[float]:
        return self.embed_documents([text])[0]


@@ -0,0 +1,40 @@
from flask import current_app
from langchain_mistralai import MistralAIEmbeddings
from typing import List, Any
import time

from common.eveai_model.eveai_embedding_base import EveAIEmbeddings
from common.utils.business_event_context import current_event
from mistralai import Mistral


class TrackedMistralAIEmbeddings(EveAIEmbeddings):
    def __init__(self, model: str = "mistral_embed"):
        api_key = current_app.config['MISTRAL_API_KEY']
        self.client = Mistral(
            api_key=api_key
        )
        self.model = model
        super().__init__()

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        start_time = time.time()
        result = self.client.embeddings.create(
            model=self.model,
            inputs=texts
        )
        end_time = time.time()
        metrics = {
            'total_tokens': result.usage.total_tokens,
            'prompt_tokens': result.usage.prompt_tokens,  # For embeddings, all tokens are prompt tokens
            'completion_tokens': result.usage.completion_tokens,
            'time_elapsed': end_time - start_time,
            'interaction_type': 'Embedding',
        }
        current_event.log_llm_metrics(metrics)

        embeddings = [embedding.embedding for embedding in result.data]
        return embeddings
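
A usage sketch for the class above, assuming a Flask application context and an active business event (both are required by current_app and current_event):

# Sketch only: run inside a Flask app context with a business event in scope.
embedder = TrackedMistralAIEmbeddings()  # defaults to the mistral_embed model
vectors = embedder.embed_documents(['first chunk', 'second chunk'])
query_vector = embedder.embed_query('a user question')  # delegates to embed_documents via the base class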


@@ -5,15 +5,17 @@ from flask_security import Security
from flask_mailman import Mail
from flask_login import LoginManager
from flask_cors import CORS
from flask_socketio import SocketIO
from flask_jwt_extended import JWTManager
from flask_session import Session
from flask_wtf import CSRFProtect
from flask_restx import Api
from prometheus_flask_exporter import PrometheusMetrics
from .langchain.templates.template_manager import TemplateManager
from .utils.cache.eveai_cache_manager import EveAICacheManager
from .utils.simple_encryption import SimpleEncryption
from .utils.minio_utils import MinioClient
from .utils.performance_monitoring import EveAIMetrics
# Create extensions
@@ -25,10 +27,13 @@ security = Security()
mail = Mail()
login_manager = LoginManager()
cors = CORS()
socketio = SocketIO()
jwt = JWTManager()
session = Session()
api_rest = Api()
simple_encryption = SimpleEncryption()
minio_client = MinioClient()
metrics = PrometheusMetrics.for_app_factory()
template_manager = TemplateManager()
cache_manager = EveAICacheManager()
eveai_metrics = EveAIMetrics()


@@ -1,52 +0,0 @@
from langchain_core.retrievers import BaseRetriever
from sqlalchemy import asc
from sqlalchemy.exc import SQLAlchemyError
from pydantic import Field, BaseModel, PrivateAttr
from typing import Any, Dict
from flask import current_app
from common.extensions import db
from common.models.interaction import ChatSession, Interaction
from common.utils.model_utils import ModelVariables
class EveAIHistoryRetriever(BaseRetriever, BaseModel):
_model_variables: ModelVariables = PrivateAttr()
_session_id: str = PrivateAttr()
def __init__(self, model_variables: ModelVariables, session_id: str):
super().__init__()
self._model_variables = model_variables
self._session_id = session_id
@property
def model_variables(self) -> ModelVariables:
return self._model_variables
@property
def session_id(self) -> str:
return self._session_id
def _get_relevant_documents(self, query: str):
current_app.logger.debug(f'Retrieving history of interactions for query: {query}')
try:
query_obj = (
db.session.query(Interaction)
.join(ChatSession, Interaction.chat_session_id == ChatSession.id)
.filter(ChatSession.session_id == self.session_id)
.order_by(asc(Interaction.id))
)
interactions = query_obj.all()
result = []
for interaction in interactions:
result.append(f'HUMAN:\n{interaction.detailed_question}\n\nAI: \n{interaction.answer}\n\n')
except SQLAlchemyError as e:
current_app.logger.error(f'Error retrieving history of interactions: {e}')
db.session.rollback()
return []
return result


@@ -1,138 +0,0 @@
from langchain_core.retrievers import BaseRetriever
from sqlalchemy import func, and_, or_, desc
from sqlalchemy.exc import SQLAlchemyError
from pydantic import BaseModel, Field, PrivateAttr
from typing import Any, Dict
from flask import current_app

from common.extensions import db
from common.models.document import Document, DocumentVersion
from common.utils.datetime_utils import get_date_in_timezone
from common.utils.model_utils import ModelVariables


class EveAIRetriever(BaseRetriever, BaseModel):
    _model_variables: ModelVariables = PrivateAttr()
    _tenant_info: Dict[str, Any] = PrivateAttr()

    def __init__(self, model_variables: ModelVariables, tenant_info: Dict[str, Any]):
        super().__init__()
        current_app.logger.debug(f'Model variables type: {type(model_variables)}')
        self._model_variables = model_variables
        self._tenant_info = tenant_info

    @property
    def model_variables(self) -> ModelVariables:
        return self._model_variables

    @property
    def tenant_info(self) -> Dict[str, Any]:
        return self._tenant_info

    def _get_relevant_documents(self, query: str):
        current_app.logger.debug(f'Retrieving relevant documents for query: {query}')
        query_embedding = self._get_query_embedding(query)
        current_app.logger.debug(f'Model Variables Private: {type(self._model_variables)}')
        current_app.logger.debug(f'Model Variables Property: {type(self.model_variables)}')
        db_class = self.model_variables['embedding_db_model']
        similarity_threshold = self.model_variables['similarity_threshold']
        k = self.model_variables['k']

        if self.tenant_info['rag_tuning']:
            try:
                current_date = get_date_in_timezone(self.tenant_info['timezone'])
                current_app.rag_tuning_logger.debug(f'Current date: {current_date}\n')
                # Debug query to show similarity for all valid documents (without chunk text)
                debug_query = (
                    db.session.query(
                        Document.id.label('document_id'),
                        DocumentVersion.id.label('version_id'),
                        db_class.id.label('embedding_id'),
                        (1 - db_class.embedding.cosine_distance(query_embedding)).label('similarity')
                    )
                    .join(DocumentVersion, db_class.doc_vers_id == DocumentVersion.id)
                    .join(Document, DocumentVersion.doc_id == Document.id)
                    .filter(
                        or_(Document.valid_from.is_(None), func.date(Document.valid_from) <= current_date),
                        or_(Document.valid_to.is_(None), func.date(Document.valid_to) >= current_date)
                    )
                    .order_by(desc('similarity'))
                )
                debug_results = debug_query.all()
                current_app.logger.debug("Debug: Similarity for all valid documents:")
                for row in debug_results:
                    current_app.rag_tuning_logger.debug(f"Doc ID: {row.document_id}, "
                                                        f"Version ID: {row.version_id}, "
                                                        f"Embedding ID: {row.embedding_id}, "
                                                        f"Similarity: {row.similarity}")
                current_app.rag_tuning_logger.debug(f'---------------------------------------\n')
            except SQLAlchemyError as e:
                current_app.logger.error(f'Error generating overview: {e}')
                db.session.rollback()

        if self.tenant_info['rag_tuning']:
            current_app.rag_tuning_logger.debug(f'Parameters for Retrieval of documents: \n')
            current_app.rag_tuning_logger.debug(f'Similarity Threshold: {similarity_threshold}\n')
            current_app.rag_tuning_logger.debug(f'K: {k}\n')
            current_app.rag_tuning_logger.debug(f'---------------------------------------\n')

        try:
            current_date = get_date_in_timezone(self.tenant_info['timezone'])
            # Subquery to find the latest version of each document
            subquery = (
                db.session.query(
                    DocumentVersion.doc_id,
                    func.max(DocumentVersion.id).label('latest_version_id')
                )
                .group_by(DocumentVersion.doc_id)
                .subquery()
            )
            # Main query to filter embeddings
            query_obj = (
                db.session.query(db_class,
                                 (1 - db_class.embedding.cosine_distance(query_embedding)).label('similarity'))
                .join(DocumentVersion, db_class.doc_vers_id == DocumentVersion.id)
                .join(Document, DocumentVersion.doc_id == Document.id)
                .join(subquery, DocumentVersion.id == subquery.c.latest_version_id)
                .filter(
                    or_(Document.valid_from.is_(None), func.date(Document.valid_from) <= current_date),
                    or_(Document.valid_to.is_(None), func.date(Document.valid_to) >= current_date),
                    (1 - db_class.embedding.cosine_distance(query_embedding)) > similarity_threshold
                )
                .order_by(desc('similarity'))
                .limit(k)
            )
            if self.tenant_info['rag_tuning']:
                current_app.rag_tuning_logger.debug(f'Query executed for Retrieval of documents: \n')
                current_app.rag_tuning_logger.debug(f'{query_obj.statement}\n')
                current_app.rag_tuning_logger.debug(f'---------------------------------------\n')
            res = query_obj.all()
            if self.tenant_info['rag_tuning']:
                current_app.rag_tuning_logger.debug(f'Retrieved {len(res)} relevant documents \n')
                current_app.rag_tuning_logger.debug(f'Data retrieved: \n')
                current_app.rag_tuning_logger.debug(f'{res}\n')
                current_app.rag_tuning_logger.debug(f'---------------------------------------\n')
            result = []
            for doc in res:
                if self.tenant_info['rag_tuning']:
                    current_app.rag_tuning_logger.debug(f'Document ID: {doc[0].id} - Distance: {doc[1]}\n')
                    current_app.rag_tuning_logger.debug(f'Chunk: \n {doc[0].chunk}\n\n')
                result.append(f'SOURCE: {doc[0].id}\n\n{doc[0].chunk}\n\n')
        except SQLAlchemyError as e:
            current_app.logger.error(f'Error retrieving relevant documents: {e}')
            db.session.rollback()
            return []
        return result

    def _get_query_embedding(self, query: str):
        embedding_model = self.model_variables['embedding_model']
        query_embedding = embedding_model.embed_query(query)
        return query_embedding


@@ -0,0 +1,23 @@
# Output Schema Management - common/langchain/outputs/base.py
from typing import Dict, Type, Any
from pydantic import BaseModel


class BaseSpecialistOutput(BaseModel):
    """Base class for all specialist outputs"""
    pass


class OutputRegistry:
    """Registry for specialist output schemas"""
    _schemas: Dict[str, Type[BaseSpecialistOutput]] = {}

    @classmethod
    def register(cls, specialist_type: str, schema_class: Type[BaseSpecialistOutput]):
        cls._schemas[specialist_type] = schema_class

    @classmethod
    def get_schema(cls, specialist_type: str) -> Type[BaseSpecialistOutput]:
        if specialist_type not in cls._schemas:
            raise ValueError(f"No output schema registered for {specialist_type}")
        return cls._schemas[specialist_type]


@@ -0,0 +1,22 @@
# RAG Specialist Output - common/langchain/outputs/rag.py
from typing import List
from pydantic import Field

from .base import BaseSpecialistOutput


class RAGOutput(BaseSpecialistOutput):
    """Output schema for RAG specialist"""
    """Default docstring - to be replaced with actual prompt"""
    answer: str = Field(
        ...,
        description="The answer to the user question, based on the given sources",
    )
    citations: List[int] = Field(
        ...,
        description="The integer IDs of the SPECIFIC sources that were used to generate the answer"
    )
    insufficient_info: bool = Field(
        False,  # Default value is set to False
        description="A boolean indicating whether given sources were sufficient or not to generate the answer"
    )
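
Wiring the schema into the registry above could look like this; the 'STANDARD_RAG' key mirrors the specialist type used elsewhere in this changeset, but the actual registration site is not shown:

from common.langchain.outputs.base import OutputRegistry
from common.langchain.outputs.rag import RAGOutput

# Assumed registration; the type key is illustrative.
OutputRegistry.register('STANDARD_RAG', RAGOutput)

schema = OutputRegistry.get_schema('STANDARD_RAG')
parsed = schema(answer='See source 1.', citations=[1, 3])  # insufficient_info defaults to False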


@@ -0,0 +1,153 @@
import os
import yaml
from typing import Dict, Optional, Any
from packaging import version
from dataclasses import dataclass
from flask import current_app, Flask

from common.utils.os_utils import get_project_root


@dataclass
class PromptTemplate:
    """Represents a versioned prompt template"""
    content: str
    version: str
    metadata: Dict[str, Any]


class TemplateManager:
    """Manages versioned prompt templates"""

    def __init__(self):
        self.templates_dir = None
        self._templates = None
        self.app = None

    def init_app(self, app: Flask) -> None:
        # Initialize template manager
        base_dir = "/app"
        self.templates_dir = os.path.join(base_dir, 'config', 'prompts')
        self.app = app
        self._templates = self._load_templates()

        # Log available templates for each supported model
        for llm in app.config['SUPPORTED_LLMS']:
            try:
                available_templates = self.list_templates(llm)
                app.logger.info(f"Loaded templates for {llm}: {available_templates}")
            except ValueError:
                app.logger.warning(f"No templates found for {llm}")

    def _load_templates(self) -> Dict[str, Dict[str, Dict[str, PromptTemplate]]]:
        """
        Load all template versions from the templates directory.
        Structure: {provider.model -> {template_name -> {version -> template}}}

        Directory structure:
        prompts/
        ├── provider/
        │   └── model/
        │       └── template_name/
        │           └── version.yaml
        """
        templates = {}

        # Iterate through providers (anthropic, openai)
        for provider in os.listdir(self.templates_dir):
            provider_path = os.path.join(self.templates_dir, provider)
            if not os.path.isdir(provider_path):
                continue

            # Iterate through models (claude-3, gpt-4o)
            for model in os.listdir(provider_path):
                model_path = os.path.join(provider_path, model)
                if not os.path.isdir(model_path):
                    continue

                provider_model = f"{provider}.{model}"
                templates[provider_model] = {}

                # Iterate through template types (rag, summary, etc.)
                for template_name in os.listdir(model_path):
                    template_path = os.path.join(model_path, template_name)
                    if not os.path.isdir(template_path):
                        continue

                    template_versions = {}

                    # Load all version files for this template
                    for version_file in os.listdir(template_path):
                        if not version_file.endswith('.yaml'):
                            continue

                        version_str = version_file[:-5]  # Remove .yaml
                        if not self._is_valid_version(version_str):
                            current_app.logger.warning(
                                f"Invalid version format for {template_name}: {version_str}")
                            continue

                        try:
                            with open(os.path.join(template_path, version_file)) as f:
                                template_data = yaml.safe_load(f)

                            # Verify required fields
                            if not template_data.get('content'):
                                raise ValueError("Template content is required")

                            template_versions[version_str] = PromptTemplate(
                                content=template_data['content'],
                                version=version_str,
                                metadata=template_data.get('metadata', {})
                            )
                        except Exception as e:
                            current_app.logger.error(
                                f"Error loading template {template_name} version {version_str}: {e}")
                            continue

                    if template_versions:
                        templates[provider_model][template_name] = template_versions

        return templates

    def _is_valid_version(self, version_str: str) -> bool:
        """Validate semantic versioning string"""
        try:
            version.parse(version_str)
            return True
        except version.InvalidVersion:
            return False

    def get_template(self,
                     provider_model: str,
                     template_name: str,
                     template_version: Optional[str] = None) -> PromptTemplate:
        """
        Get a specific template version. If version not specified,
        returns the latest version.
        """
        if provider_model not in self._templates:
            raise ValueError(f"Unknown provider.model: {provider_model}")

        if template_name not in self._templates[provider_model]:
            raise ValueError(f"Unknown template: {template_name}")

        versions = self._templates[provider_model][template_name]

        if template_version:
            if template_version not in versions:
                raise ValueError(f"Template version {template_version} not found")
            return versions[template_version]

        # Return latest version
        latest = max(versions.keys(), key=version.parse)
        return versions[latest]

    def list_templates(self, provider_model: str) -> Dict[str, list]:
        """
        List all available templates and their versions for a provider.model
        Returns: {template_name: [version1, version2, ...]}
        """
        if provider_model not in self._templates:
            raise ValueError(f"Unknown provider.model: {provider_model}")

        return {
            template_name: sorted(versions.keys(), key=version.parse)
            for template_name, versions in self._templates[provider_model].items()
        }
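
A usage sketch under the directory layout documented in _load_templates; provider, model, and template names below are illustrative:

# Sketch: assumes a template at /app/config/prompts/mistral/mistral-large/rag/1.0.0.yaml
template_manager.init_app(app)  # app is the Flask application
tpl = template_manager.get_template('mistral.mistral-large', 'rag')  # latest version
tpl_pinned = template_manager.get_template('mistral.mistral-large', 'rag', '1.0.0')
print(tpl.version, tpl.metadata)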


@@ -1,27 +0,0 @@
import time

from common.utils.business_event_context import current_event


def tracked_transcribe(client, *args, **kwargs):
    start_time = time.time()

    # Extract the file and model from kwargs if present, otherwise use defaults
    file = kwargs.get('file')
    model = kwargs.get('model', 'whisper-1')
    duration = kwargs.pop('duration', 600)

    result = client.audio.transcriptions.create(*args, **kwargs)

    end_time = time.time()
    # Token usage for transcriptions is actually the duration in seconds we pass,
    # as the whisper model is priced per second transcribed
    metrics = {
        'total_tokens': duration,
        'prompt_tokens': 0,  # For transcriptions, all tokens are considered "completion"
        'completion_tokens': duration,
        'time_elapsed': end_time - start_time,
        'interaction_type': 'ASR',
    }
    current_event.log_llm_metrics(metrics)
    return result


@@ -0,0 +1,77 @@
# common/langchain/tracked_transcription.py
from typing import Any, Optional, Dict
import time
from openai import OpenAI

from common.utils.business_event_context import current_event


class TrackedOpenAITranscription:
    """Wrapper for OpenAI transcription with metric tracking"""

    def __init__(self, api_key: str, **kwargs: Any):
        """Initialize with OpenAI client settings"""
        self.client = OpenAI(api_key=api_key)
        self.model = kwargs.get('model', 'whisper-1')

    def transcribe(self,
                   file: Any,
                   model: Optional[str] = None,
                   language: Optional[str] = None,
                   prompt: Optional[str] = None,
                   response_format: Optional[str] = None,
                   temperature: Optional[float] = None,
                   duration: Optional[int] = None) -> str:
        """
        Transcribe audio with metrics tracking

        Args:
            file: Audio file to transcribe
            model: Model to use (defaults to whisper-1)
            language: Optional language of the audio
            prompt: Optional prompt to guide transcription
            response_format: Response format (json, text, etc)
            temperature: Sampling temperature
            duration: Duration of audio in seconds for metrics

        Returns:
            Transcription text
        """
        start_time = time.time()

        try:
            # Create transcription options
            options = {
                "file": file,
                "model": model or self.model,
            }
            if language:
                options["language"] = language
            if prompt:
                options["prompt"] = prompt
            if response_format:
                options["response_format"] = response_format
            if temperature:
                options["temperature"] = temperature

            response = self.client.audio.transcriptions.create(**options)

            # Calculate metrics
            end_time = time.time()

            # Token usage for transcriptions is based on audio duration
            metrics = {
                'total_tokens': duration or 600,  # Default to 10 minutes if duration not provided
                'prompt_tokens': 0,  # For transcriptions, all tokens are completion
                'completion_tokens': duration or 600,
                'time_elapsed': end_time - start_time,
                'interaction_type': 'ASR',
            }
            current_event.log_llm_metrics(metrics)

            # Return text from response
            if isinstance(response, str):
                return response
            return response.text

        except Exception as e:
            raise Exception(f"Transcription failed: {str(e)}")
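
A usage sketch, assuming an active business event so current_event can record the metrics (file name and duration are illustrative):

# Sketch only: requires a business event in scope for metric logging.
transcriber = TrackedOpenAITranscription(api_key='sk-...')
with open('meeting.mp3', 'rb') as audio:
    text = transcriber.transcribe(file=audio, language='en', duration=480)  # 8 minutes of audio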


@@ -2,12 +2,81 @@ from common.extensions import db
from .user import User, Tenant
from pgvector.sqlalchemy import Vector
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.dialects.postgresql import ARRAY
import sqlalchemy as sa


class Catalog(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50), nullable=False)
    description = db.Column(db.Text, nullable=True)
    type = db.Column(db.String(50), nullable=False, default="STANDARD_CATALOG")
    min_chunk_size = db.Column(db.Integer, nullable=True, default=1500)
    max_chunk_size = db.Column(db.Integer, nullable=True, default=2500)

    # Meta Data
    user_metadata = db.Column(JSONB, nullable=True)
    system_metadata = db.Column(JSONB, nullable=True)
    configuration = db.Column(JSONB, nullable=True)

    # Versioning Information
    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
    updated_by = db.Column(db.Integer, db.ForeignKey(User.id))


class Processor(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50), nullable=False)
    description = db.Column(db.Text, nullable=True)
    catalog_id = db.Column(db.Integer, db.ForeignKey('catalog.id'), nullable=True)
    type = db.Column(db.String(50), nullable=False)
    sub_file_type = db.Column(db.String(50), nullable=True)

    # Tuning enablers
    tuning = db.Column(db.Boolean, nullable=True, default=False)

    # Meta Data
    user_metadata = db.Column(JSONB, nullable=True)
    system_metadata = db.Column(JSONB, nullable=True)
    configuration = db.Column(JSONB, nullable=True)

    # Versioning Information
    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
    updated_by = db.Column(db.Integer, db.ForeignKey(User.id))


class Retriever(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50), nullable=False)
    description = db.Column(db.Text, nullable=True)
    catalog_id = db.Column(db.Integer, db.ForeignKey('catalog.id'), nullable=True)
    type = db.Column(db.String(50), nullable=False, default="STANDARD_RAG")
    type_version = db.Column(db.String(20), nullable=True, default="STANDARD_RAG")
    tuning = db.Column(db.Boolean, nullable=True, default=False)

    # Meta Data
    user_metadata = db.Column(JSONB, nullable=True)
    system_metadata = db.Column(JSONB, nullable=True)
    configuration = db.Column(JSONB, nullable=True)
    arguments = db.Column(JSONB, nullable=True)

    # Versioning Information
    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
    updated_by = db.Column(db.Integer, db.ForeignKey(User.id))


class Document(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    # tenant_id = db.Column(db.Integer, db.ForeignKey(Tenant.id), nullable=False)
    catalog_id = db.Column(db.Integer, db.ForeignKey(Catalog.id), nullable=True)
    name = db.Column(db.String(100), nullable=False)
    tenant_id = db.Column(db.Integer, db.ForeignKey(Tenant.id), nullable=False)
    valid_from = db.Column(db.DateTime, nullable=True)
    valid_to = db.Column(db.DateTime, nullable=True)

@@ -31,12 +100,14 @@ class DocumentVersion(db.Model):
    bucket_name = db.Column(db.String(255), nullable=True)
    object_name = db.Column(db.String(200), nullable=True)
    file_type = db.Column(db.String(20), nullable=True)
    sub_file_type = db.Column(db.String(50), nullable=True)
    file_size = db.Column(db.Float, nullable=True)
    language = db.Column(db.String(2), nullable=False)
    user_context = db.Column(db.Text, nullable=True)
    system_context = db.Column(db.Text, nullable=True)
    user_metadata = db.Column(JSONB, nullable=True)
    system_metadata = db.Column(JSONB, nullable=True)
    catalog_properties = db.Column(JSONB, nullable=True)

    # Versioning Information
    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())

@@ -56,12 +127,6 @@ class DocumentVersion(db.Model):
    def __repr__(self):
        return f"<DocumentVersion {self.document_language.document_id}.{self.document_language.language}>.{self.id}>"

    def calc_file_location(self):
        return f"{self.document.tenant_id}/{self.document.id}/{self.language}"

    def calc_file_name(self):
        return f"{self.id}.{self.file_type}"


class Embedding(db.Model):
    __tablename__ = 'embeddings'


@@ -11,10 +11,13 @@ class BusinessEventLog(db.Model):
    tenant_id = db.Column(db.Integer, nullable=False)
    trace_id = db.Column(db.String(50), nullable=False)
    span_id = db.Column(db.String(50))
    span_name = db.Column(db.String(50))
    span_name = db.Column(db.String(255))
    parent_span_id = db.Column(db.String(50))
    document_version_id = db.Column(db.Integer)
    document_version_file_size = db.Column(db.Float)
    specialist_id = db.Column(db.Integer)
    specialist_type = db.Column(db.String(50))
    specialist_type_version = db.Column(db.String(20))
    chat_session_id = db.Column(db.String(50))
    interaction_id = db.Column(db.Integer)
    environment = db.Column(db.String(20))

@@ -94,8 +97,8 @@ class LicenseUsage(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    license_id = db.Column(db.Integer, db.ForeignKey('public.license.id'), nullable=False)
    tenant_id = db.Column(db.Integer, db.ForeignKey('public.tenant.id'), nullable=False)
    storage_mb_used = db.Column(db.Integer, default=0)
    embedding_mb_used = db.Column(db.Integer, default=0)
    storage_mb_used = db.Column(db.Float, default=0)
    embedding_mb_used = db.Column(db.Float, default=0)
    embedding_prompt_tokens_used = db.Column(db.Integer, default=0)
    embedding_completion_tokens_used = db.Column(db.Integer, default=0)
    embedding_total_tokens_used = db.Column(db.Integer, default=0)


@@ -1,6 +1,8 @@
from sqlalchemy.dialects.postgresql import JSONB

from ..extensions import db
from .user import User, Tenant
from .document import Embedding
from .document import Embedding, Retriever


class ChatSession(db.Model):

@@ -18,14 +20,168 @@
        return f"<ChatSession {self.id} by {self.user_id}>"


class Specialist(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50), nullable=False)
    description = db.Column(db.Text, nullable=True)
    type = db.Column(db.String(50), nullable=False, default="STANDARD_RAG")
    type_version = db.Column(db.String(20), nullable=True, default="1.0.0")
    tuning = db.Column(db.Boolean, nullable=True, default=False)
    configuration = db.Column(JSONB, nullable=True)
    arguments = db.Column(JSONB, nullable=True)

    # Relationship to retrievers through the association table
    retrievers = db.relationship('SpecialistRetriever', backref='specialist', lazy=True,
                                 cascade="all, delete-orphan")
    agents = db.relationship('EveAIAgent', backref='specialist', lazy=True)
    tasks = db.relationship('EveAITask', backref='specialist', lazy=True)
    tools = db.relationship('EveAITool', backref='specialist', lazy=True)
    dispatchers = db.relationship('SpecialistDispatcher', backref='specialist', lazy=True)

    # Versioning Information
    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
    updated_by = db.Column(db.Integer, db.ForeignKey(User.id))


class EveAIAsset(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50), nullable=False)
    description = db.Column(db.Text, nullable=True)
    type = db.Column(db.String(50), nullable=False, default="DOCUMENT_TEMPLATE")
    type_version = db.Column(db.String(20), nullable=True, default="1.0.0")
    valid_from = db.Column(db.DateTime, nullable=True)
    valid_to = db.Column(db.DateTime, nullable=True)

    # Versioning Information
    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
    updated_by = db.Column(db.Integer, db.ForeignKey(User.id))

    # Relations
    versions = db.relationship('EveAIAssetVersion', backref='asset', lazy=True)


class EveAIAssetVersion(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    asset_id = db.Column(db.Integer, db.ForeignKey(EveAIAsset.id), nullable=False)
    bucket_name = db.Column(db.String(255), nullable=True)
    configuration = db.Column(JSONB, nullable=True)
    arguments = db.Column(JSONB, nullable=True)

    # Versioning Information
    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
    updated_by = db.Column(db.Integer, db.ForeignKey(User.id))

    # Relations
    instructions = db.relationship('EveAIAssetInstruction', backref='asset_version', lazy=True)


class EveAIAssetInstruction(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    asset_version_id = db.Column(db.Integer, db.ForeignKey(EveAIAssetVersion.id), nullable=False)
    name = db.Column(db.String(255), nullable=False)
    content = db.Column(db.Text, nullable=True)


class EveAIProcessedAsset(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    asset_version_id = db.Column(db.Integer, db.ForeignKey(EveAIAssetVersion.id), nullable=False)
    specialist_id = db.Column(db.Integer, db.ForeignKey(Specialist.id), nullable=True)
    chat_session_id = db.Column(db.Integer, db.ForeignKey(ChatSession.id), nullable=True)
    bucket_name = db.Column(db.String(255), nullable=True)
    object_name = db.Column(db.String(255), nullable=True)
    created_at = db.Column(db.DateTime, nullable=True, server_default=db.func.now())


class EveAIAgent(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    specialist_id = db.Column(db.Integer, db.ForeignKey(Specialist.id), nullable=False)
    name = db.Column(db.String(50), nullable=False)
    description = db.Column(db.Text, nullable=True)
    type = db.Column(db.String(50), nullable=False, default="STANDARD_RAG")
    type_version = db.Column(db.String(20), nullable=True, default="1.0.0")
    role = db.Column(db.Text, nullable=True)
    goal = db.Column(db.Text, nullable=True)
    backstory = db.Column(db.Text, nullable=True)
    tuning = db.Column(db.Boolean, nullable=True, default=False)
    configuration = db.Column(JSONB, nullable=True)
    arguments = db.Column(JSONB, nullable=True)

    # Versioning Information
    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
    updated_by = db.Column(db.Integer, db.ForeignKey(User.id))


class EveAITask(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    specialist_id = db.Column(db.Integer, db.ForeignKey(Specialist.id), nullable=False)
    name = db.Column(db.String(50), nullable=False)
    description = db.Column(db.Text, nullable=True)
    type = db.Column(db.String(50), nullable=False, default="STANDARD_RAG")
    type_version = db.Column(db.String(20), nullable=True, default="1.0.0")
    task_description = db.Column(db.Text, nullable=True)
    expected_output = db.Column(db.Text, nullable=True)
    tuning = db.Column(db.Boolean, nullable=True, default=False)
    configuration = db.Column(JSONB, nullable=True)
    arguments = db.Column(JSONB, nullable=True)
    context = db.Column(JSONB, nullable=True)
    asynchronous = db.Column(db.Boolean, nullable=True, default=False)

    # Versioning Information
    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
    updated_by = db.Column(db.Integer, db.ForeignKey(User.id))


class EveAITool(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    specialist_id = db.Column(db.Integer, db.ForeignKey(Specialist.id), nullable=False)
    name = db.Column(db.String(50), nullable=False)
    description = db.Column(db.Text, nullable=True)
    type = db.Column(db.String(50), nullable=False, default="STANDARD_RAG")
    type_version = db.Column(db.String(20), nullable=True, default="1.0.0")
    tuning = db.Column(db.Boolean, nullable=True, default=False)
    configuration = db.Column(JSONB, nullable=True)
    arguments = db.Column(JSONB, nullable=True)

    # Versioning Information
    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
    updated_by = db.Column(db.Integer, db.ForeignKey(User.id))


class Dispatcher(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50), nullable=False)
    description = db.Column(db.Text, nullable=True)
    type = db.Column(db.String(50), nullable=False, default="STANDARD_RAG")
    type_version = db.Column(db.String(20), nullable=True, default="1.0.0")
    tuning = db.Column(db.Boolean, nullable=True, default=False)
    configuration = db.Column(JSONB, nullable=True)
    arguments = db.Column(JSONB, nullable=True)

    # Versioning Information
    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
    updated_by = db.Column(db.Integer, db.ForeignKey(User.id))


class Interaction(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    chat_session_id = db.Column(db.Integer, db.ForeignKey(ChatSession.id), nullable=False)
    question = db.Column(db.Text, nullable=False)
    detailed_question = db.Column(db.Text, nullable=True)
    answer = db.Column(db.Text, nullable=True)
    algorithm_used = db.Column(db.String(20), nullable=True)
    language = db.Column(db.String(2), nullable=False)
    specialist_id = db.Column(db.Integer, db.ForeignKey(Specialist.id), nullable=True)
    specialist_arguments = db.Column(JSONB, nullable=True)
    specialist_results = db.Column(JSONB, nullable=True)
    timezone = db.Column(db.String(30), nullable=True)
    appreciation = db.Column(db.Integer, nullable=True)

@@ -44,3 +200,17 @@
class InteractionEmbedding(db.Model):
    interaction_id = db.Column(db.Integer, db.ForeignKey(Interaction.id, ondelete='CASCADE'), primary_key=True)
    embedding_id = db.Column(db.Integer, db.ForeignKey(Embedding.id, ondelete='CASCADE'), primary_key=True)


class SpecialistRetriever(db.Model):
    specialist_id = db.Column(db.Integer, db.ForeignKey(Specialist.id, ondelete='CASCADE'), primary_key=True)
    retriever_id = db.Column(db.Integer, db.ForeignKey(Retriever.id, ondelete='CASCADE'), primary_key=True)

    retriever = db.relationship("Retriever", backref="specialist_retrievers")


class SpecialistDispatcher(db.Model):
    specialist_id = db.Column(db.Integer, db.ForeignKey(Specialist.id, ondelete='CASCADE'), primary_key=True)
    dispatcher_id = db.Column(db.Integer, db.ForeignKey(Dispatcher.id, ondelete='CASCADE'), primary_key=True)

    dispatcher = db.relationship("Dispatcher", backref="specialist_dispatchers")
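
Given these models, linking a Specialist to an existing Retriever through the association table could look like this (a sketch assuming an application context and an existing retriever row):

# Sketch: create a specialist and attach a retriever via SpecialistRetriever.
specialist = Specialist(name='Support bot', type='STANDARD_RAG', type_version='1.0.0')
link = SpecialistRetriever(specialist=specialist, retriever_id=retriever.id)  # retriever is an existing row
db.session.add_all([specialist, link])
db.session.commit()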


@@ -31,39 +31,10 @@ class Tenant(db.Model):
    allowed_languages = db.Column(ARRAY(sa.String(2)), nullable=True)

    # LLM specific choices
    embedding_model = db.Column(db.String(50), nullable=True)
    llm_model = db.Column(db.String(50), nullable=True)

    # Embedding variables
    html_tags = db.Column(ARRAY(sa.String(10)), nullable=True, default=['p', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'li'])
    html_end_tags = db.Column(ARRAY(sa.String(10)), nullable=True, default=['p', 'li'])
    html_included_elements = db.Column(ARRAY(sa.String(50)), nullable=True)
    html_excluded_elements = db.Column(ARRAY(sa.String(50)), nullable=True)
    html_excluded_classes = db.Column(ARRAY(sa.String(200)), nullable=True)
    min_chunk_size = db.Column(db.Integer, nullable=True, default=2000)
    max_chunk_size = db.Column(db.Integer, nullable=True, default=3000)

    # Embedding search variables
    es_k = db.Column(db.Integer, nullable=True, default=5)
    es_similarity_threshold = db.Column(db.Float, nullable=True, default=0.7)

    # Chat variables
    chat_RAG_temperature = db.Column(db.Float, nullable=True, default=0.3)
    chat_no_RAG_temperature = db.Column(db.Float, nullable=True, default=0.5)
    fallback_algorithms = db.Column(ARRAY(sa.String(50)), nullable=True)

    # Licensing Information
    encrypted_chat_api_key = db.Column(db.String(500), nullable=True)
    encrypted_api_key = db.Column(db.String(500), nullable=True)

    # Tuning enablers
    embed_tuning = db.Column(db.Boolean, nullable=True, default=False)
    rag_tuning = db.Column(db.Boolean, nullable=True, default=False)

    # Entitlements
    currency = db.Column(db.String(20), nullable=True)
    usage_email = db.Column(db.String(255), nullable=True)
    storage_dirty = db.Column(db.Boolean, nullable=True, default=False)

    # Relations

@@ -94,24 +65,8 @@
            'type': self.type,
            'default_language': self.default_language,
            'allowed_languages': self.allowed_languages,
            'embedding_model': self.embedding_model,
            'llm_model': self.llm_model,
            'html_tags': self.html_tags,
            'html_end_tags': self.html_end_tags,
            'html_included_elements': self.html_included_elements,
            'html_excluded_elements': self.html_excluded_elements,
            'html_excluded_classes': self.html_excluded_classes,
            'min_chunk_size': self.min_chunk_size,
            'max_chunk_size': self.max_chunk_size,
            'es_k': self.es_k,
            'es_similarity_threshold': self.es_similarity_threshold,
            'chat_RAG_temperature': self.chat_RAG_temperature,
            'chat_no_RAG_temperature': self.chat_no_RAG_temperature,
            'fallback_algorithms': self.fallback_algorithms,
            'embed_tuning': self.embed_tuning,
            'rag_tuning': self.rag_tuning,
            'currency': self.currency,
            'usage_email': self.usage_email,
        }

@@ -153,6 +108,8 @@ class User(db.Model, UserMixin):
    fs_uniquifier = db.Column(db.String(255), unique=True, nullable=False)
    confirmed_at = db.Column(db.DateTime, nullable=True)
    valid_to = db.Column(db.Date, nullable=True)
    is_primary_contact = db.Column(db.Boolean, nullable=True, default=False)
    is_financial_contact = db.Column(db.Boolean, nullable=True, default=False)

    # Security Trackable Information
    last_login_at = db.Column(db.DateTime, nullable=True)

@@ -193,3 +150,29 @@ class TenantDomain(db.Model):
    def __repr__(self):
        return f"<TenantDomain {self.id}: {self.domain}>"


class TenantProject(db.Model):
    __bind_key__ = 'public'
    __table_args__ = {'schema': 'public'}

    id = db.Column(db.Integer, primary_key=True)
    tenant_id = db.Column(db.Integer, db.ForeignKey('public.tenant.id'), nullable=False)
    name = db.Column(db.String(50), nullable=False)
    description = db.Column(db.Text, nullable=True)
    services = db.Column(ARRAY(sa.String(50)), nullable=False)
    encrypted_api_key = db.Column(db.String(500), nullable=True)
    visual_api_key = db.Column(db.String(20), nullable=True)
    active = db.Column(db.Boolean, nullable=False, default=True)
    responsible_email = db.Column(db.String(255), nullable=True)

    # Versioning Information
    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
    created_by = db.Column(db.Integer, db.ForeignKey('public.user.id'), nullable=True)
    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
    updated_by = db.Column(db.Integer, db.ForeignKey('public.user.id'))

    # Relations
    tenant = db.relationship('Tenant', backref='projects')

    def __repr__(self):
        return f"<TenantProject {self.id}: {self.name}>"

View File

@@ -0,0 +1,62 @@
from datetime import datetime as dt, timezone as tz
from flask import current_app
from sqlalchemy.exc import SQLAlchemyError
from common.extensions import cache_manager, minio_client, db
from common.models.interaction import EveAIAsset, EveAIAssetVersion
from common.utils.document_utils import mark_tenant_storage_dirty
from common.utils.model_logging_utils import set_logging_information
def create_asset_stack(api_input, tenant_id):
type_version = cache_manager.assets_version_tree_cache.get_latest_version(api_input['type'])
api_input['type_version'] = type_version
new_asset = create_asset(api_input, tenant_id)
new_asset_version = create_version_for_asset(new_asset, tenant_id)
db.session.add(new_asset)
db.session.add(new_asset_version)
try:
db.session.commit()
except SQLAlchemyError as e:
current_app.logger.error(f"Could not add asset for tenant {tenant_id}: {str(e)}")
db.session.rollback()
raise e
return new_asset, new_asset_version
def create_asset(api_input, tenant_id):
new_asset = EveAIAsset()
new_asset.name = api_input['name']
new_asset.description = api_input['description']
new_asset.type = api_input['type']
new_asset.type_version = api_input['type_version']
# Empty strings and missing keys both fall back to "now" (UTC)
if api_input.get('valid_from'):
    new_asset.valid_from = api_input['valid_from']
else:
    new_asset.valid_from = dt.now(tz.utc)
new_asset.valid_to = api_input.get('valid_to')
set_logging_information(new_asset, dt.now(tz.utc))
return new_asset
def create_version_for_asset(asset, tenant_id):
new_asset_version = EveAIAssetVersion()
new_asset_version.asset = asset
new_asset_version.bucket_name = minio_client.create_tenant_bucket(tenant_id)
set_logging_information(new_asset_version, dt.now(tz.utc))
return new_asset_version
def add_asset_version_file(asset_version, field_name, file, tenant_id):
    # The upload needs the file stream itself, not only its content type; passing
    # `file` here is an assumed fix (the original call never handed over the data).
    object_name, file_size = minio_client.upload_file(asset_version.bucket_name, asset_version.id, field_name,
                                                      file, file.content_type)
    mark_tenant_storage_dirty(tenant_id)
    return object_name
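A minimal usage sketch for the helpers above, assuming a Flask app context; the dict keys mirror the fields read by create_asset(), and the 'DOCUMENT_TEMPLATE' type id is illustrative:

api_input = {
    'name': 'Offer Template',
    'description': 'Base template for generated offers',
    'type': 'DOCUMENT_TEMPLATE',  # hypothetical type id; resolved via the assets version tree cache
    'valid_from': '',             # empty -> defaults to now (UTC)
    'valid_to': None,
}
asset, asset_version = create_asset_stack(api_input, tenant_id=42)
# Both objects are committed and have ids; a file can then be attached:
# add_asset_version_file(asset_version, 'template_file', uploaded_file, tenant_id=42)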

View File

@@ -1,15 +1,81 @@
import os
import time
import uuid
from contextlib import contextmanager
from contextlib import contextmanager, asynccontextmanager
from datetime import datetime
from typing import Dict, Any, Optional
from typing import Dict, Any, Optional, List
from datetime import datetime as dt, timezone as tz
from portkey_ai import Portkey, Config
import logging
from prometheus_client import Counter, Histogram, Gauge, Summary
from .business_event_context import BusinessEventContext
from common.models.entitlements import BusinessEventLog
from common.extensions import db
from .celery_utils import current_celery
from common.utils.performance_monitoring import EveAIMetrics
# Standard duration buckets for all histograms
DURATION_BUCKETS = EveAIMetrics.get_standard_buckets()
# Prometheus metrics for business events
TRACE_COUNTER = Counter(
'eveai_business_events_total',
'Total number of business events triggered',
['tenant_id', 'event_type', 'specialist_id', 'specialist_type', 'specialist_type_version']
)
TRACE_DURATION = Histogram(
'eveai_business_events_duration_seconds',
'Duration of business events in seconds',
['tenant_id', 'event_type', 'specialist_id', 'specialist_type', 'specialist_type_version'],
buckets=DURATION_BUCKETS
)
CONCURRENT_TRACES = Gauge(
'eveai_business_events_concurrent',
'Number of concurrent business events',
['tenant_id', 'event_type', 'specialist_id', 'specialist_type', 'specialist_type_version']
)
SPAN_COUNTER = Counter(
'eveai_business_spans_total',
'Total number of spans within business events',
['tenant_id', 'event_type', 'activity_name', 'specialist_id', 'specialist_type', 'specialist_type_version']
)
SPAN_DURATION = Histogram(
'eveai_business_spans_duration_seconds',
'Duration of spans within business events in seconds',
['tenant_id', 'event_type', 'activity_name', 'specialist_id', 'specialist_type', 'specialist_type_version'],
buckets=DURATION_BUCKETS
)
CONCURRENT_SPANS = Gauge(
'eveai_business_spans_concurrent',
'Number of concurrent spans within business events',
['tenant_id', 'event_type', 'activity_name', 'specialist_id', 'specialist_type', 'specialist_type_version']
)
# LLM Usage metrics
LLM_TOKENS_COUNTER = Counter(
'eveai_llm_tokens_total',
'Total number of tokens used in LLM calls',
['tenant_id', 'event_type', 'interaction_type', 'token_type', 'specialist_id', 'specialist_type',
'specialist_type_version']
)
LLM_DURATION = Histogram(
'eveai_llm_duration_seconds',
'Duration of LLM API calls in seconds',
['tenant_id', 'event_type', 'interaction_type', 'specialist_id', 'specialist_type', 'specialist_type_version'],
buckets=DURATION_BUCKETS
)
LLM_CALLS_COUNTER = Counter(
'eveai_llm_calls_total',
'Total number of LLM API calls',
['tenant_id', 'event_type', 'interaction_type', 'specialist_id', 'specialist_type', 'specialist_type_version']
)
class BusinessEvent:
@@ -28,6 +94,9 @@ class BusinessEvent:
self.document_version_file_size = kwargs.get('document_version_file_size')
self.chat_session_id = kwargs.get('chat_session_id')
self.interaction_id = kwargs.get('interaction_id')
self.specialist_id = kwargs.get('specialist_id')
self.specialist_type = kwargs.get('specialist_type')
self.specialist_type_version = kwargs.get('specialist_type_version')
self.environment = os.environ.get("FLASK_ENV", "development")
self.span_counter = 0
self.spans = []
@@ -39,10 +108,44 @@ class BusinessEvent:
'call_count': 0,
'interaction_type': None
}
self._log_buffer = []
# Prometheus label values must be strings
self.tenant_id_str = str(self.tenant_id)
self.specialist_id_str = str(self.specialist_id) if self.specialist_id else ""
self.specialist_type_str = str(self.specialist_type) if self.specialist_type else ""
self.specialist_type_version_str = str(self.specialist_type_version) if self.specialist_type_version else ""
# Increment concurrent events gauge when initialized
CONCURRENT_TRACES.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).inc()
# Increment trace counter
TRACE_COUNTER.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).inc()
def update_attribute(self, attribute: str, value: Any):
if hasattr(self, attribute):
setattr(self, attribute, value)
# Update string versions for Prometheus labels if needed
if attribute == 'specialist_id':
self.specialist_id_str = str(value) if value else ""
elif attribute == 'specialist_type':
self.specialist_type_str = str(value) if value else ""
elif attribute == 'specialist_type_version':
self.specialist_type_version_str = str(value) if value else ""
elif attribute == 'tenant_id':
self.tenant_id_str = str(value)
else:
raise AttributeError(f"'{self.__class__.__name__}' object has no attribute '{attribute}'")
@@ -54,6 +157,60 @@ class BusinessEvent:
self.llm_metrics['call_count'] += 1
self.llm_metrics['interaction_type'] = metrics['interaction_type']
# Track in Prometheus metrics
interaction_type = metrics['interaction_type']
# Track token usage
LLM_TOKENS_COUNTER.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
interaction_type=interaction_type,
token_type='total',
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).inc(metrics['total_tokens'])
LLM_TOKENS_COUNTER.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
interaction_type=interaction_type,
token_type='prompt',
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).inc(metrics['prompt_tokens'])
LLM_TOKENS_COUNTER.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
interaction_type=interaction_type,
token_type='completion',
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).inc(metrics['completion_tokens'])
# Track duration
LLM_DURATION.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
interaction_type=interaction_type,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).observe(metrics['time_elapsed'])
# Track call count
LLM_CALLS_COUNTER.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
interaction_type=interaction_type,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).inc()
def reset_llm_metrics(self):
self.llm_metrics['total_tokens'] = 0
self.llm_metrics['prompt_tokens'] = 0
@@ -81,15 +238,61 @@ class BusinessEvent:
self.span_name = span_name
self.parent_span_id = parent_span_id
self.log(f"Starting span {span_name}")
# Track start time for the span
span_start_time = time.time()
# Increment span metrics - using span_name as activity_name for metrics
SPAN_COUNTER.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
activity_name=span_name,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).inc()
# Increment concurrent spans gauge
CONCURRENT_SPANS.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
activity_name=span_name,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).inc()
self.log(f"Start")
try:
yield
finally:
# Calculate total time for this span
span_total_time = time.time() - span_start_time
# Observe span duration
SPAN_DURATION.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
activity_name=span_name,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).observe(span_total_time)
# Decrement concurrent spans gauge
CONCURRENT_SPANS.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
activity_name=span_name,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).dec()
if self.llm_metrics['call_count'] > 0:
self.log_final_metrics()
self.reset_llm_metrics()
self.log(f"Ending span {span_name}")
self.log(f"End", extra_fields={'span_duration': span_total_time})
# Restore the previous span info
if self.spans:
self.span_id, self.span_name, self.parent_span_id = self.spans.pop()
@@ -98,9 +301,87 @@ class BusinessEvent:
self.span_name = None
self.parent_span_id = None
def log(self, message: str, level: str = 'info'):
logger = logging.getLogger('business_events')
@asynccontextmanager
async def create_span_async(self, span_name: str):
"""Async version of create_span using async context manager"""
parent_span_id = self.span_id
self.span_counter += 1
new_span_id = str(uuid.uuid4())
# Save the current span info
self.spans.append((self.span_id, self.span_name, self.parent_span_id))
# Set the new span info
self.span_id = new_span_id
self.span_name = span_name
self.parent_span_id = parent_span_id
# Track start time for the span
span_start_time = time.time()
# Increment span metrics - using span_name as activity_name for metrics
SPAN_COUNTER.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
activity_name=span_name,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).inc()
# Increment concurrent spans gauge
CONCURRENT_SPANS.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
activity_name=span_name,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).inc()
self.log(f"Start")
try:
yield
finally:
# Calculate total time for this span
span_total_time = time.time() - span_start_time
# Observe span duration
SPAN_DURATION.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
activity_name=span_name,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).observe(span_total_time)
# Decrement concurrent spans gauge
CONCURRENT_SPANS.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
activity_name=span_name,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).dec()
if self.llm_metrics['call_count'] > 0:
self.log_final_metrics()
self.reset_llm_metrics()
self.log(f"End", extra_fields={'span_duration': span_total_time})
# Restore the previous span info
if self.spans:
self.span_id, self.span_name, self.parent_span_id = self.spans.pop()
else:
self.span_id = None
self.span_name = None
self.parent_span_id = None
def log(self, message: str, level: str = 'info', extra_fields: Dict[str, Any] = None):
log_data = {
'timestamp': dt.now(tz=tz.utc),
'event_type': self.event_type,
'tenant_id': self.tenant_id,
'trace_id': self.trace_id,
@@ -111,35 +392,29 @@ class BusinessEvent:
'document_version_file_size': self.document_version_file_size,
'chat_session_id': self.chat_session_id,
'interaction_id': self.interaction_id,
'specialist_id': self.specialist_id,
'specialist_type': self.specialist_type,
'specialist_type_version': self.specialist_type_version,
'environment': self.environment,
'message': message,
}
# log to Graylog
getattr(logger, level)(message, extra=log_data)
# Add any extra fields
if extra_fields:
for key, value in extra_fields.items():
# For span/trace duration, use the llm_metrics_total_time field
if key == 'span_duration' or key == 'trace_duration':
log_data['llm_metrics_total_time'] = value
else:
log_data[key] = value
# Log to database
event_log = BusinessEventLog(
timestamp=dt.now(tz=tz.utc),
event_type=self.event_type,
tenant_id=self.tenant_id,
trace_id=self.trace_id,
span_id=self.span_id,
span_name=self.span_name,
parent_span_id=self.parent_span_id,
document_version_id=self.document_version_id,
document_version_file_size=self.document_version_file_size,
chat_session_id=self.chat_session_id,
interaction_id=self.interaction_id,
environment=self.environment,
message=message
)
db.session.add(event_log)
db.session.commit()
self._log_buffer.append(log_data)
def log_llm_metrics(self, metrics: dict, level: str = 'info'):
self.update_llm_metrics(metrics)
message = "LLM Metrics"
logger = logging.getLogger('business_events')
log_data = {
'timestamp': dt.now(tz=tz.utc),
'event_type': self.event_type,
'tenant_id': self.tenant_id,
'trace_id': self.trace_id,
@@ -150,44 +425,24 @@ class BusinessEvent:
'document_version_file_size': self.document_version_file_size,
'chat_session_id': self.chat_session_id,
'interaction_id': self.interaction_id,
'specialist_id': self.specialist_id,
'specialist_type': self.specialist_type,
'specialist_type_version': self.specialist_type_version,
'environment': self.environment,
'llm_metrics_total_tokens': metrics['total_tokens'],
'llm_metrics_prompt_tokens': metrics['prompt_tokens'],
'llm_metrics_completion_tokens': metrics['completion_tokens'],
'llm_metrics_total_time': metrics['time_elapsed'],
'llm_interaction_type': metrics['interaction_type'],
'message': message,
}
# log to Graylog
getattr(logger, level)(message, extra=log_data)
# Log to database
event_log = BusinessEventLog(
timestamp=dt.now(tz=tz.utc),
event_type=self.event_type,
tenant_id=self.tenant_id,
trace_id=self.trace_id,
span_id=self.span_id,
span_name=self.span_name,
parent_span_id=self.parent_span_id,
document_version_id=self.document_version_id,
document_version_file_size=self.document_version_file_size,
chat_session_id=self.chat_session_id,
interaction_id=self.interaction_id,
environment=self.environment,
llm_metrics_total_tokens=metrics['total_tokens'],
llm_metrics_prompt_tokens=metrics['prompt_tokens'],
llm_metrics_completion_tokens=metrics['completion_tokens'],
llm_metrics_total_time=metrics['time_elapsed'],
llm_interaction_type=metrics['interaction_type'],
message=message
)
db.session.add(event_log)
db.session.commit()
self._log_buffer.append(log_data)
def log_final_metrics(self, level: str = 'info'):
logger = logging.getLogger('business_events')
message = "Final LLM Metrics"
log_data = {
'timestamp': dt.now(tz=tz.utc),
'event_type': self.event_type,
'tenant_id': self.tenant_id,
'trace_id': self.trace_id,
@@ -198,6 +453,9 @@ class BusinessEvent:
'document_version_file_size': self.document_version_file_size,
'chat_session_id': self.chat_session_id,
'interaction_id': self.interaction_id,
'specialist_id': self.specialist_id,
'specialist_type': self.specialist_type,
'specialist_type_version': self.specialist_type_version,
'environment': self.environment,
'llm_metrics_total_tokens': self.llm_metrics['total_tokens'],
'llm_metrics_prompt_tokens': self.llm_metrics['prompt_tokens'],
@@ -205,42 +463,133 @@ class BusinessEvent:
'llm_metrics_total_time': self.llm_metrics['total_time'],
'llm_metrics_call_count': self.llm_metrics['call_count'],
'llm_interaction_type': self.llm_metrics['interaction_type'],
'message': message,
}
# log to Graylog
getattr(logger, level)(message, extra=log_data)
self._log_buffer.append(log_data)
# Log to database
event_log = BusinessEventLog(
timestamp=dt.now(tz=tz.utc),
event_type=self.event_type,
tenant_id=self.tenant_id,
trace_id=self.trace_id,
span_id=self.span_id,
span_name=self.span_name,
parent_span_id=self.parent_span_id,
document_version_id=self.document_version_id,
document_version_file_size=self.document_version_file_size,
chat_session_id=self.chat_session_id,
interaction_id=self.interaction_id,
environment=self.environment,
llm_metrics_total_tokens=self.llm_metrics['total_tokens'],
llm_metrics_prompt_tokens=self.llm_metrics['prompt_tokens'],
llm_metrics_completion_tokens=self.llm_metrics['completion_tokens'],
llm_metrics_total_time=self.llm_metrics['total_time'],
llm_metrics_call_count=self.llm_metrics['call_count'],
llm_interaction_type=self.llm_metrics['interaction_type'],
message=message
)
db.session.add(event_log)
db.session.commit()
@staticmethod
def _direct_db_persist(log_entries: List[Dict[str, Any]]):
"""Fallback method to directly persist logs to DB if async fails"""
try:
db_entries = []
for entry in log_entries:
event_log = BusinessEventLog(
timestamp=entry.pop('timestamp'),
event_type=entry.pop('event_type'),
tenant_id=entry.pop('tenant_id'),
trace_id=entry.pop('trace_id'),
span_id=entry.pop('span_id', None),
span_name=entry.pop('span_name', None),
parent_span_id=entry.pop('parent_span_id', None),
document_version_id=entry.pop('document_version_id', None),
document_version_file_size=entry.pop('document_version_file_size', None),
chat_session_id=entry.pop('chat_session_id', None),
interaction_id=entry.pop('interaction_id', None),
specialist_id=entry.pop('specialist_id', None),
specialist_type=entry.pop('specialist_type', None),
specialist_type_version=entry.pop('specialist_type_version', None),
environment=entry.pop('environment', None),
llm_metrics_total_tokens=entry.pop('llm_metrics_total_tokens', None),
llm_metrics_prompt_tokens=entry.pop('llm_metrics_prompt_tokens', None),
llm_metrics_completion_tokens=entry.pop('llm_metrics_completion_tokens', None),
llm_metrics_total_time=entry.pop('llm_metrics_total_time', None),
llm_metrics_call_count=entry.pop('llm_metrics_call_count', None),
llm_interaction_type=entry.pop('llm_interaction_type', None),
message=entry.pop('message', None)
)
db_entries.append(event_log)
# Bulk insert
db.session.bulk_save_objects(db_entries)
db.session.commit()
except Exception as e:
logger = logging.getLogger('business_events')
logger.error(f"Failed to persist logs directly to DB: {e}")
db.session.rollback()
def _flush_log_buffer(self):
"""Flush the log buffer to the database via a Celery task"""
if self._log_buffer:
try:
# Send to Celery task
current_celery.send_task(
'persist_business_events',
args=[self._log_buffer],
queue='entitlements' # Or dedicated log queue
)
except Exception as e:
# Fallback to direct DB write in case of issues with Celery
logger = logging.getLogger('business_events')
logger.error(f"Failed to send logs to Celery. Falling back to direct DB: {e}")
self._direct_db_persist(self._log_buffer)
# Clear the buffer after sending
self._log_buffer = []
def __enter__(self):
self.trace_start_time = time.time()
self.log(f'Starting Trace for {self.event_type}')
return BusinessEventContext(self).__enter__()
def __exit__(self, exc_type, exc_val, exc_tb):
trace_total_time = time.time() - self.trace_start_time
# Record trace duration
TRACE_DURATION.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).observe(trace_total_time)
# Decrement concurrent traces gauge
CONCURRENT_TRACES.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).dec()
if self.llm_metrics['call_count'] > 0:
self.log_final_metrics()
self.reset_llm_metrics()
self.log(f'Ending Trace for {self.event_type}')
self.log(f'Ending Trace for {self.event_type}', extra_fields={'trace_duration': trace_total_time})
self._flush_log_buffer()
return BusinessEventContext(self).__exit__(exc_type, exc_val, exc_tb)
async def __aenter__(self):
self.trace_start_time = time.time()
self.log(f'Starting Trace for {self.event_type}')
return await BusinessEventContext(self).__aenter__()
async def __aexit__(self, exc_type, exc_val, exc_tb):
trace_total_time = time.time() - self.trace_start_time
# Record trace duration
TRACE_DURATION.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).observe(trace_total_time)
# Decrement concurrent traces gauge
CONCURRENT_TRACES.labels(
tenant_id=self.tenant_id_str,
event_type=self.event_type,
specialist_id=self.specialist_id_str,
specialist_type=self.specialist_type_str,
specialist_type_version=self.specialist_type_version_str
).dec()
if self.llm_metrics['call_count'] > 0:
self.log_final_metrics()
self.reset_llm_metrics()
self.log(f'Ending Trace for {self.event_type}', extra_fields={'trace_duration': trace_total_time})
self._flush_log_buffer()
return await BusinessEventContext(self).__aexit__(exc_type, exc_val, exc_tb)
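A usage sketch of the instrumented event, sync and async; the constructor keywords and metric values are assumptions inferred from the attributes and dict keys read above:

# Synchronous: spans and LLM metrics are buffered, then flushed on trace exit
with BusinessEvent(event_type='rag_interaction', tenant_id=42,
                   specialist_id=7, specialist_type='STANDARD_RAG',
                   specialist_type_version='1.0') as event:
    with event.create_span('retrieve_context'):
        pass  # retrieval work
    event.log_llm_metrics({
        'total_tokens': 1200, 'prompt_tokens': 900, 'completion_tokens': 300,
        'time_elapsed': 1.8, 'interaction_type': 'chat',
    })

# Asynchronous: the same shape via the async context managers
async def run(event: BusinessEvent):
    async with event:
        async with event.create_span_async('generate_answer'):
            pass  # awaitable work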

View File

@@ -1,9 +1,22 @@
from werkzeug.local import LocalProxy, LocalStack
import asyncio
from contextvars import ContextVar
import contextvars
# Keep existing stack for backward compatibility
_business_event_stack = LocalStack()
# Add contextvar for async support
_business_event_contextvar = ContextVar('business_event', default=None)
def _get_current_event():
# Try contextvar first (for async)
event = _business_event_contextvar.get()
if event is not None:
return event
# Fall back to the stack-based approach (for sync)
top = _business_event_stack.top
if top is None:
raise RuntimeError("No business event context found. Are you sure you're in a business event?")
@@ -16,10 +29,24 @@ current_event = LocalProxy(_get_current_event)
class BusinessEventContext:
def __init__(self, event):
self.event = event
self._token = None # For storing contextvar token
def __enter__(self):
_business_event_stack.push(self.event)
self._token = _business_event_contextvar.set(self.event)
return self.event
def __exit__(self, exc_type, exc_val, exc_tb):
_business_event_stack.pop()
if self._token is not None:
_business_event_contextvar.reset(self._token)
async def __aenter__(self):
_business_event_stack.push(self.event)
self._token = _business_event_contextvar.set(self.event)
return self.event
async def __aexit__(self, exc_type, exc_val, exc_tb):
_business_event_stack.pop()
if self._token is not None:
_business_event_contextvar.reset(self._token)
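With the contextvar in place, code nested deep inside a specialist can reach the active event through the current_event proxy on both sync and async paths; a small sketch (the module path is assumed from the relative import used above):

from common.utils.business_event_context import current_event  # assumed path

def record_retrieval(k: int):
    # Resolved via the contextvar first (async tasks), then the LocalStack (sync)
    current_event.log(f"Retrieved {k} chunks")

async def record_retrieval_async(k: int):
    current_event.log(f"Retrieved {k} chunks")  # the same proxy works under asyncio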

common/utils/cache/base.py (vendored, new file, 192 lines)
View File

@@ -0,0 +1,192 @@
from typing import Any, Dict, List, Optional, TypeVar, Generic, Type
from dataclasses import dataclass
from flask import Flask, current_app
from dogpile.cache import CacheRegion
from abc import ABC, abstractmethod
T = TypeVar('T') # Generic type parameter for cached data
@dataclass
class CacheKey:
"""
Represents a composite cache key made up of multiple components.
Enables structured and consistent key generation for cache entries.
Attributes:
components (Dict[str, Any]): Dictionary of key components and their values
Example:
key = CacheKey({'tenant_id': 123, 'user_id': 456})
str(key) -> "tenant_id=123:user_id=456"
"""
components: Dict[str, Any]
def __str__(self) -> str:
"""
Converts components into a deterministic string representation.
Components are sorted alphabetically to ensure consistent key generation.
"""
return ":".join(f"{k}={v}" for k, v in sorted(self.components.items()))
class CacheHandler(Generic[T], ABC):
"""
Base cache handler implementation providing structured caching functionality.
Uses generics to ensure type safety of cached data.
Type Parameters:
T: Type of data being cached
Attributes:
region (CacheRegion): Dogpile cache region for storage
prefix (str): Prefix for all cache keys managed by this handler
"""
def __init__(self, region: CacheRegion, prefix: str):
self.region = region
self.prefix = prefix
self._key_components = [] # List of required key components
@abstractmethod
def _to_cache_data(self, instance: T) -> Any:
"""
Convert the data to a cacheable format for internal use.
Args:
instance: The data to be cached.
Returns:
A serializable format of the instance.
"""
raise NotImplementedError
@abstractmethod
def _from_cache_data(self, data: Any, **kwargs) -> T:
"""
Convert cached data back to usable format for internal use.
Args:
data: The cached data.
**kwargs: Additional context.
Returns:
The data in its usable format.
"""
raise NotImplementedError
@abstractmethod
def _should_cache(self, value: T) -> bool:
"""
Validate if the value should be cached for internal use.
Args:
value: The value to be cached.
Returns:
True if the value should be cached, False otherwise.
"""
raise NotImplementedError
def configure_keys(self, *components: str):
"""
Configure required components for cache key generation.
Args:
*components: Required key component names
Returns:
self for method chaining
"""
self._key_components = components
return self
def generate_key(self, **identifiers) -> str:
"""
Generate a cache key from provided identifiers.
Args:
**identifiers: Key-value pairs for key components
Returns:
Formatted cache key string
Raises:
ValueError: If required components are missing
"""
missing = set(self._key_components) - set(identifiers.keys())
if missing:
raise ValueError(f"Missing key components: {missing}")
region_name = getattr(self.region, 'name', 'default_region')
key = CacheKey({k: identifiers[k] for k in self._key_components})
return f"{region_name}_{self.prefix}:{str(key)}"
def get(self, creator_func, **identifiers) -> T:
"""
Get or create a cached value.
Args:
creator_func: Function to create value if not cached
**identifiers: Key components for cache key
Returns:
Cached or newly created value
"""
cache_key = self.generate_key(**identifiers)
def creator():
instance = creator_func(**identifiers)
serialized_instance = self._to_cache_data(instance)
return serialized_instance
cached_data = self.region.get_or_create(
cache_key,
creator,
should_cache_fn=self._should_cache
)
return self._from_cache_data(cached_data, **identifiers)
def invalidate(self, **identifiers):
"""
Invalidate a specific cache entry.
Args:
**identifiers: Key components for the cache entry
"""
cache_key = self.generate_key(**identifiers)
self.region.delete(cache_key)
def invalidate_by_model(self, model: str, **identifiers):
"""
Invalidate cache entry based on model changes.
Args:
model: Changed model name
**identifiers: Model instance identifiers
"""
try:
self.invalidate(**identifiers)
except ValueError:
pass # Skip if cache key can't be generated from provided identifiers
def invalidate_region(self):
"""
Invalidate all cache entries within this region.
Deletes all keys that start with the region prefix.
"""
# Construct the pattern for all keys in this region, using the region's
# name (not the region object) so it matches keys from generate_key()
region_name = getattr(self.region, 'name', 'default_region')
pattern = f"{region_name}_{self.prefix}:*"
# Assuming Redis backend with dogpile, use `delete_multi` or direct Redis access
if hasattr(self.region.backend, 'client'):
redis_client = self.region.backend.client
keys_to_delete = redis_client.keys(pattern)
if keys_to_delete:
redis_client.delete(*keys_to_delete)
else:
# Fallback for other backends
raise NotImplementedError("Region invalidation is only supported for Redis backend.")

common/utils/cache/config_cache.py (vendored, new file, 468 lines)
View File

@@ -0,0 +1,468 @@
from typing import Dict, Any, Optional
from pathlib import Path
import yaml
from packaging import version
import os
from flask import current_app
from common.utils.cache.base import CacheHandler, CacheKey
from config.type_defs import agent_types, task_types, tool_types, specialist_types, retriever_types, prompt_types, \
catalog_types
def is_major_minor(version_str: str) -> bool:
    """True for strings of the form '<major>.<minor>', e.g. '1.0' (parameter renamed to avoid shadowing packaging.version)."""
    parts = version_str.strip('.').split('.')
    return len(parts) == 2 and all(part.isdigit() for part in parts)
class BaseConfigCacheHandler(CacheHandler[Dict[str, Any]]):
"""Base handler for configuration caching"""
def __init__(self, region, config_type: str):
"""
Args:
region: Cache region
config_type: Type of configuration (agents, tasks, etc.)
"""
super().__init__(region, f'config_{config_type}')
self.config_type = config_type
self._types_module = None # Set by subclasses
self._config_dir = None # Set by subclasses
self.version_tree_cache = None
self.configure_keys('type_name', 'version')
def _to_cache_data(self, instance: Dict[str, Any]) -> Dict[str, Any]:
"""Convert the data to a cacheable format"""
# For configuration data, we can just return the dictionary as is
# since it's already in a serializable format
return instance
def _from_cache_data(self, data: Dict[str, Any], **kwargs) -> Dict[str, Any]:
"""Convert cached data back to usable format"""
# Similarly, we can return the data directly since it's already
# in the format we need
return data
def _should_cache(self, value: Dict[str, Any]) -> bool:
"""
Validate if the value should be cached
Args:
value: The value to be cached
Returns:
bool: True if the value should be cached
"""
return isinstance(value, dict) # Cache all dictionaries
def set_version_tree_cache(self, cache):
"""Set the version tree cache dependency."""
self.version_tree_cache = cache
def _load_specific_config(self, type_name: str, version_str: str) -> Dict[str, Any]:
"""
Load a specific configuration version
Args:
type_name: Type name
version_str: Version string
Returns:
Configuration data
"""
version_tree = self.version_tree_cache.get_versions(type_name)
versions = version_tree['versions']
if version_str == 'latest':
version_str = version_tree['latest_version']
if version_str not in versions:
raise ValueError(f"Version {version_str} not found for {type_name}")
file_path = versions[version_str]['file_path']
try:
with open(file_path) as f:
config = yaml.safe_load(f)
return config
except Exception as e:
raise ValueError(f"Error loading config from {file_path}: {e}")
def get_config(self, type_name: str, version: Optional[str] = None) -> Dict[str, Any]:
"""
Get configuration for a specific type and version
If version not specified, returns latest
Args:
type_name: Configuration type name
version: Optional specific version to retrieve
Returns:
Configuration data
"""
if version is None:
version_str = self.version_tree_cache.get_latest_version(type_name)
elif is_major_minor(version):
version_str = self.version_tree_cache.get_latest_patch_version(type_name, version)
else:
version_str = version
result = self.get(
lambda type_name, version: self._load_specific_config(type_name, version),
type_name=type_name,
version=version_str
)
return result
class BaseConfigVersionTreeCacheHandler(CacheHandler[Dict[str, Any]]):
"""Base handler for configuration version tree caching"""
def __init__(self, region, config_type: str):
"""
Args:
region: Cache region
config_type: Type of configuration (agents, tasks, etc.)
"""
super().__init__(region, f'config_{config_type}_version_tree')
self.config_type = config_type
self._types_module = None # Set by subclasses
self._config_dir = None # Set by subclasses
self.configure_keys('type_name')
def _load_version_tree(self, type_name: str) -> Dict[str, Any]:
"""
Load version tree for a specific type without loading full configurations
Args:
type_name: Name of configuration type
Returns:
Dict containing available versions and their metadata
"""
type_path = Path(self._config_dir) / type_name
if not type_path.exists():
raise ValueError(f"No configuration found for type {type_name}")
version_files = list(type_path.glob('*.yaml'))
if not version_files:
raise ValueError(f"No versions found for type {type_name}")
versions = {}
latest_version = None
latest_version_obj = None
for file_path in version_files:
ver = file_path.stem # Get version from filename
try:
ver_obj = version.parse(ver)
# Only load minimal metadata for version tree
with open(file_path) as f:
yaml_data = yaml.safe_load(f)
metadata = yaml_data.get('metadata', {})
versions[ver] = {
'metadata': metadata,
'file_path': str(file_path)
}
# Track latest version
if latest_version_obj is None or ver_obj > latest_version_obj:
latest_version = ver
latest_version_obj = ver_obj
except Exception as e:
current_app.logger.error(f"Error loading version {ver}: {e}")
continue
return {
'versions': versions,
'latest_version': latest_version
}
def _to_cache_data(self, instance: Dict[str, Any]) -> Dict[str, Any]:
"""Convert the data to a cacheable format"""
# For configuration data, we can just return the dictionary as is
# since it's already in a serializable format
return instance
def _from_cache_data(self, data: Dict[str, Any], **kwargs) -> Dict[str, Any]:
"""Convert cached data back to usable format"""
# Similarly, we can return the data directly since it's already
# in the format we need
return data
def _should_cache(self, value: Dict[str, Any]) -> bool:
"""
Validate if the value should be cached
Args:
value: The value to be cached
Returns:
bool: True if the value should be cached
"""
return isinstance(value, dict) # Cache all dictionaries
def get_versions(self, type_name: str) -> Dict[str, Any]:
"""
Get version tree for a type
Args:
type_name: Type to get versions for
Returns:
Dict with version information
"""
return self.get(
lambda type_name: self._load_version_tree(type_name),
type_name=type_name
)
def get_latest_version(self, type_name: str) -> str:
"""
Get the latest version for a given type name.
Args:
type_name: Name of the configuration type
Returns:
Latest version string
Raises:
ValueError: If type not found or no versions available
"""
version_tree = self.get_versions(type_name)
if not version_tree or 'latest_version' not in version_tree:
raise ValueError(f"No versions found for {type_name}")
return version_tree['latest_version']
def get_latest_patch_version(self, type_name: str, major_minor: str) -> str:
"""
Get the latest patch version for a given major.minor version.
Args:
type_name: Name of the configuration type
major_minor: Major.minor version (e.g. "1.0")
Returns:
Latest patch version string (e.g. "1.0.3")
Raises:
ValueError: If type not found or no matching versions
"""
version_tree = self.get_versions(type_name)
if not version_tree or 'versions' not in version_tree:
raise ValueError(f"No versions found for {type_name}")
# Filter versions that match the major.minor prefix
matching_versions = [
ver for ver in version_tree['versions'].keys()
if ver.startswith(major_minor + '.')
]
if not matching_versions:
raise ValueError(f"No versions found for {type_name} with prefix {major_minor}")
# Return highest matching version
latest_patch = max(matching_versions, key=version.parse)
return latest_patch
class BaseConfigTypesCacheHandler(CacheHandler[Dict[str, Any]]):
"""Base handler for configuration types caching"""
def __init__(self, region, config_type: str):
"""
Args:
region: Cache region
config_type: Type of configuration (agents, tasks, etc.)
"""
super().__init__(region, f'config_{config_type}_types')
self.config_type = config_type
self._types_module = None # Set by subclasses
self._config_dir = None # Set by subclasses
self.configure_keys()
def _to_cache_data(self, instance: Dict[str, Any]) -> Dict[str, Any]:
"""Convert the data to a cacheable format"""
# For configuration data, we can just return the dictionary as is
# since it's already in a serializable format
return instance
def _from_cache_data(self, data: Dict[str, Any], **kwargs) -> Dict[str, Any]:
"""Convert cached data back to usable format"""
# Similarly, we can return the data directly since it's already
# in the format we need
return data
def _should_cache(self, value: Dict[str, Any]) -> bool:
"""
Validate if the value should be cached
Args:
value: The value to be cached
Returns:
bool: True if the value should be cached
"""
return isinstance(value, dict) # Cache all dictionaries
def _load_type_definitions(self) -> Dict[str, Dict[str, str]]:
"""Load type definitions from the corresponding type_defs module"""
if not self._types_module:
raise ValueError("_types_module must be set by subclass")
type_definitions = {
type_id: {
'name': info['name'],
'description': info['description']
}
for type_id, info in self._types_module.items()
}
return type_definitions
def get_types(self) -> Dict[str, Dict[str, str]]:
"""Get dictionary of available types with name and description"""
result = self.get(
lambda type_name: self._load_type_definitions(),
type_name=f'{self.config_type}_types',
)
return result
def create_config_cache_handlers(config_type: str, config_dir: str, types_module: dict) -> tuple:
"""
Factory function to dynamically create the 3 cache handler classes for a given configuration type.
The following cache names are created:
- <config_type>_config_cache
- <config_type>_version_tree_cache
- <config_type>_types_cache
Args:
config_type: The configuration type (e.g., 'agents', 'tasks').
config_dir: The directory where configuration files are stored.
types_module: The types module defining the available types for this config.
Returns:
A tuple of dynamically created classes for config, version tree, and types handlers.
"""
class ConfigCacheHandler(BaseConfigCacheHandler):
handler_name = f"{config_type}_config_cache"
def __init__(self, region):
super().__init__(region, config_type)
self._types_module = types_module
self._config_dir = config_dir
class VersionTreeCacheHandler(BaseConfigVersionTreeCacheHandler):
handler_name = f"{config_type}_version_tree_cache"
def __init__(self, region):
super().__init__(region, config_type)
self._types_module = types_module
self._config_dir = config_dir
class TypesCacheHandler(BaseConfigTypesCacheHandler):
handler_name = f"{config_type}_types_cache"
def __init__(self, region):
super().__init__(region, config_type)
self._types_module = types_module
self._config_dir = config_dir
return ConfigCacheHandler, VersionTreeCacheHandler, TypesCacheHandler
AgentConfigCacheHandler, AgentConfigVersionTreeCacheHandler, AgentConfigTypesCacheHandler = (
create_config_cache_handlers(
config_type='agents',
config_dir='config/agents',
types_module=agent_types.AGENT_TYPES
))
TaskConfigCacheHandler, TaskConfigVersionTreeCacheHandler, TaskConfigTypesCacheHandler = (
create_config_cache_handlers(
config_type='tasks',
config_dir='config/tasks',
types_module=task_types.TASK_TYPES
))
ToolConfigCacheHandler, ToolConfigVersionTreeCacheHandler, ToolConfigTypesCacheHandler = (
create_config_cache_handlers(
config_type='tools',
config_dir='config/tools',
types_module=tool_types.TOOL_TYPES
))
SpecialistConfigCacheHandler, SpecialistConfigVersionTreeCacheHandler, SpecialistConfigTypesCacheHandler = (
create_config_cache_handlers(
config_type='specialists',
config_dir='config/specialists',
types_module=specialist_types.SPECIALIST_TYPES
))
RetrieverConfigCacheHandler, RetrieverConfigVersionTreeCacheHandler, RetrieverConfigTypesCacheHandler = (
create_config_cache_handlers(
config_type='retrievers',
config_dir='config/retrievers',
types_module=retriever_types.RETRIEVER_TYPES
))
PromptConfigCacheHandler, PromptConfigVersionTreeCacheHandler, PromptConfigTypesCacheHandler = (
create_config_cache_handlers(
config_type='prompts',
config_dir='config/prompts',
types_module=prompt_types.PROMPT_TYPES
))
CatalogConfigCacheHandler, CatalogConfigVersionTreeCacheHandler, CatalogConfigTypesCacheHandler = (
create_config_cache_handlers(
config_type='catalogs',
config_dir='config/catalogs',
types_module=catalog_types.CATALOG_TYPES
))
def register_config_cache_handlers(cache_manager) -> None:
cache_manager.register_handler(AgentConfigCacheHandler, 'eveai_config')
cache_manager.register_handler(AgentConfigTypesCacheHandler, 'eveai_config')
cache_manager.register_handler(AgentConfigVersionTreeCacheHandler, 'eveai_config')
cache_manager.register_handler(TaskConfigCacheHandler, 'eveai_config')
cache_manager.register_handler(TaskConfigTypesCacheHandler, 'eveai_config')
cache_manager.register_handler(TaskConfigVersionTreeCacheHandler, 'eveai_config')
cache_manager.register_handler(ToolConfigCacheHandler, 'eveai_config')
cache_manager.register_handler(ToolConfigTypesCacheHandler, 'eveai_config')
cache_manager.register_handler(ToolConfigVersionTreeCacheHandler, 'eveai_config')
cache_manager.register_handler(SpecialistConfigCacheHandler, 'eveai_config')
cache_manager.register_handler(SpecialistConfigTypesCacheHandler, 'eveai_config')
cache_manager.register_handler(SpecialistConfigVersionTreeCacheHandler, 'eveai_config')
cache_manager.register_handler(RetrieverConfigCacheHandler, 'eveai_config')
cache_manager.register_handler(RetrieverConfigTypesCacheHandler, 'eveai_config')
cache_manager.register_handler(RetrieverConfigVersionTreeCacheHandler, 'eveai_config')
cache_manager.register_handler(PromptConfigCacheHandler, 'eveai_config')
cache_manager.register_handler(PromptConfigVersionTreeCacheHandler, 'eveai_config')
cache_manager.register_handler(PromptConfigTypesCacheHandler, 'eveai_config')
cache_manager.register_handler(CatalogConfigCacheHandler, 'eveai_config')
cache_manager.register_handler(CatalogConfigTypesCacheHandler, 'eveai_config')
cache_manager.register_handler(CatalogConfigVersionTreeCacheHandler, 'eveai_config')
cache_manager.agents_config_cache.set_version_tree_cache(cache_manager.agents_version_tree_cache)
cache_manager.tasks_config_cache.set_version_tree_cache(cache_manager.tasks_version_tree_cache)
cache_manager.tools_config_cache.set_version_tree_cache(cache_manager.tools_version_tree_cache)
cache_manager.specialists_config_cache.set_version_tree_cache(cache_manager.specialists_version_tree_cache)
cache_manager.retrievers_config_cache.set_version_tree_cache(cache_manager.retrievers_version_tree_cache)
cache_manager.prompts_config_cache.set_version_tree_cache(cache_manager.prompts_version_tree_cache)
# Catalogs are registered above as well and presumably need the same wiring;
# without it, catalogs_config_cache.get_config() would hit a None version_tree_cache
cache_manager.catalogs_config_cache.set_version_tree_cache(cache_manager.catalogs_version_tree_cache)
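Version resolution in practice (the type name is illustrative): None resolves to the latest version overall, a major.minor string to its newest patch, and a full version is used as-is:

latest = cache_manager.agents_config_cache.get_config('RAG_AGENT')          # latest overall
pinned = cache_manager.agents_config_cache.get_config('RAG_AGENT', '1.0')   # newest 1.0.x patch
exact = cache_manager.agents_config_cache.get_config('RAG_AGENT', '1.0.3')  # exactly 1.0.3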

View File

@@ -0,0 +1,218 @@
from typing import Dict, Any, Type, TypeVar, List
from abc import ABC, abstractmethod
from flask import current_app
from common.extensions import cache_manager, db
from common.models.interaction import EveAIAgent, EveAITask, EveAITool, Specialist
from common.utils.cache.crewai_configuration import (
ProcessedAgentConfig, ProcessedTaskConfig, ProcessedToolConfig,
SpecialistProcessedConfig
)
T = TypeVar('T') # For generic model types
class BaseCrewAIConfigProcessor:
"""Base processor for specialist configurations"""
# Standard mapping between model fields and template placeholders
AGENT_FIELD_MAPPING = {
'role': 'custom_role',
'goal': 'custom_goal',
'backstory': 'custom_backstory'
}
TASK_FIELD_MAPPING = {
'task_description': 'custom_description',
'expected_output': 'custom_expected_output'
}
def __init__(self, tenant_id: int, specialist_id: int):
self.tenant_id = tenant_id
self.specialist_id = specialist_id
self.specialist = self._get_specialist()
self.verbose = self._get_verbose_setting()
def _get_specialist(self) -> Specialist:
"""Get specialist and verify existence"""
specialist = Specialist.query.get(self.specialist_id)
if not specialist:
raise ValueError(f"Specialist {self.specialist_id} not found")
return specialist
def _get_verbose_setting(self) -> bool:
"""Get verbose setting from specialist"""
return bool(self.specialist.tuning)
def _get_db_items(self, model_class: Type[T], type_list: List[str]) -> Dict[str, T]:
"""Get database items of specified type"""
items = (model_class.query
.filter_by(specialist_id=self.specialist_id)
.filter(model_class.type.in_(type_list))
.all())
return {item.type: item for item in items}
def _apply_replacements(self, text: str, replacements: Dict[str, str]) -> str:
"""Apply text replacements to a string"""
result = text
for key, value in replacements.items():
if value is not None: # Only replace if value exists
placeholder = "{" + key + "}"
result = result.replace(placeholder, str(value))
return result
def _process_agent_configs(self, specialist_config: Dict[str, Any]) -> Dict[str, ProcessedAgentConfig]:
"""Process all agent configurations"""
agent_configs = {}
if 'agents' not in specialist_config:
return agent_configs
# Get all DB agents at once
agent_types = [agent_def['type'] for agent_def in specialist_config['agents']]
db_agents = self._get_db_items(EveAIAgent, agent_types)
for agent_def in specialist_config['agents']:
agent_type = agent_def['type']
agent_type_lower = agent_type.lower()
db_agent = db_agents.get(agent_type)
# Get full configuration
config = cache_manager.agents_config_cache.get_config(
agent_type,
agent_def.get('version', '1.0')
)
# Start with YAML values
role = config['role']
goal = config['goal']
backstory = config['backstory']
# Apply DB values if they exist
if db_agent:
for model_field, placeholder in self.AGENT_FIELD_MAPPING.items():
value = getattr(db_agent, model_field)
if value:
placeholder_text = "{" + placeholder + "}"
role = role.replace(placeholder_text, value)
goal = goal.replace(placeholder_text, value)
backstory = backstory.replace(placeholder_text, value)
agent_configs[agent_type_lower] = ProcessedAgentConfig(
role=role,
goal=goal,
backstory=backstory,
name=agent_def.get('name') or config.get('name', agent_type_lower),
type=agent_type,
description=agent_def.get('description') or config.get('description'),
verbose=self.verbose
)
return agent_configs
def _process_task_configs(self, specialist_config: Dict[str, Any]) -> Dict[str, ProcessedTaskConfig]:
"""Process all task configurations"""
task_configs = {}
if 'tasks' not in specialist_config:
return task_configs
# Get all DB tasks at once
task_types = [task_def['type'] for task_def in specialist_config['tasks']]
db_tasks = self._get_db_items(EveAITask, task_types)
for task_def in specialist_config['tasks']:
task_type = task_def['type']
task_type_lower = task_type.lower()
db_task = db_tasks.get(task_type)
# Get full configuration
config = cache_manager.tasks_config_cache.get_config(
task_type,
task_def.get('version', '1.0')
)
# Start with YAML values
task_description = config['task_description']
expected_output = config['expected_output']
# Apply DB values if they exist
if db_task:
for model_field, placeholder in self.TASK_FIELD_MAPPING.items():
value = getattr(db_task, model_field)
if value:
placeholder_text = "{" + placeholder + "}"
task_description = task_description.replace(placeholder_text, value)
expected_output = expected_output.replace(placeholder_text, value)
task_configs[task_type_lower] = ProcessedTaskConfig(
task_description=task_description,
expected_output=expected_output,
name=task_def.get('name') or config.get('name', task_type_lower),
type=task_type,
description=task_def.get('description') or config.get('description'),
verbose=self.verbose
)
return task_configs
def _process_tool_configs(self, specialist_config: Dict[str, Any]) -> Dict[str, ProcessedToolConfig]:
"""Process all tool configurations"""
tool_configs = {}
if 'tools' not in specialist_config:
return tool_configs
# Get all DB tools at once
tool_types = [tool_def['type'] for tool_def in specialist_config['tools']]
db_tools = self._get_db_items(EveAITool, tool_types)
for tool_def in specialist_config['tools']:
tool_type = tool_def['type']
tool_type_lower = tool_type.lower()
db_tool = db_tools.get(tool_type)
# Get full configuration
config = cache_manager.tools_config_cache.get_config(
tool_type,
tool_def.get('version', '1.0')
)
# Combine configuration; copy first so the cached config dict is never mutated in place
tool_config = dict(config.get('configuration') or {})
if db_tool and db_tool.configuration:
    tool_config.update(db_tool.configuration)
tool_configs[tool_type_lower] = ProcessedToolConfig(
name=tool_def.get('name') or config.get('name', tool_type_lower),
type=tool_type,
description=tool_def.get('description') or config.get('description'),
configuration=tool_config,
verbose=self.verbose
)
return tool_configs
def process_config(self) -> SpecialistProcessedConfig:
"""Process complete specialist configuration"""
try:
# Get full specialist configuration
specialist_config = cache_manager.specialists_config_cache.get_config(
self.specialist.type,
self.specialist.type_version
)
if not specialist_config:
raise ValueError(f"No configuration found for {self.specialist.type}")
# Process all configurations
processed_config = SpecialistProcessedConfig(
agents=self._process_agent_configs(specialist_config),
tasks=self._process_task_configs(specialist_config),
tools=self._process_tool_configs(specialist_config)
)
return processed_config
except Exception as e:
current_app.logger.error(f"Error processing specialist configuration: {e}")
raise
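A worked example of the placeholder convention the processor relies on: YAML text carries {custom_*} placeholders and per-specialist DB rows fill them in (the strings are illustrative):

yaml_role = "You are {custom_role} for this tenant."
# With db_agent.role == "a SPIN-selling coach", AGENT_FIELD_MAPPING maps
# the model field 'role' onto the '{custom_role}' placeholder:
processed = yaml_role.replace("{custom_role}", "a SPIN-selling coach")
assert processed == "You are a SPIN-selling coach for this tenant."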

View File

@@ -0,0 +1,126 @@
from dataclasses import dataclass
from typing import Dict, Any, Optional
@dataclass
class ProcessedAgentConfig:
"""Processed and ready-to-use agent configuration"""
role: str
goal: str
backstory: str
name: str
type: str
description: Optional[str] = None
verbose: bool = False
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for serialization"""
return {
'role': self.role,
'goal': self.goal,
'backstory': self.backstory,
'name': self.name,
'type': self.type,
'description': self.description,
'verbose': self.verbose
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'ProcessedAgentConfig':
"""Create from dictionary"""
return cls(**data)
@dataclass
class ProcessedTaskConfig:
"""Processed and ready-to-use task configuration"""
task_description: str
expected_output: str
name: str
type: str
description: Optional[str] = None
verbose: bool = False
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for serialization"""
return {
'task_description': self.task_description,
'expected_output': self.expected_output,
'name': self.name,
'type': self.type,
'description': self.description,
'verbose': self.verbose
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'ProcessedTaskConfig':
"""Create from dictionary"""
return cls(**data)
@dataclass
class ProcessedToolConfig:
"""Processed and ready-to-use tool configuration"""
name: str
type: str
description: Optional[str] = None
configuration: Optional[Dict[str, Any]] = None
verbose: bool = False
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for serialization"""
return {
'name': self.name,
'type': self.type,
'description': self.description,
'configuration': self.configuration,
'verbose': self.verbose
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'ProcessedToolConfig':
"""Create from dictionary"""
return cls(**data)
@dataclass
class SpecialistProcessedConfig:
"""Complete processed configuration for a specialist"""
agents: Dict[str, ProcessedAgentConfig]
tasks: Dict[str, ProcessedTaskConfig]
tools: Dict[str, ProcessedToolConfig]
def to_dict(self) -> Dict[str, Any]:
"""Convert entire configuration to dictionary"""
return {
'agents': {
agent_type: config.to_dict()
for agent_type, config in self.agents.items()
},
'tasks': {
task_type: config.to_dict()
for task_type, config in self.tasks.items()
},
'tools': {
tool_type: config.to_dict()
for tool_type, config in self.tools.items()
}
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'SpecialistProcessedConfig':
"""Create from dictionary"""
return cls(
agents={
agent_type: ProcessedAgentConfig.from_dict(config)
for agent_type, config in data['agents'].items()
},
tasks={
task_type: ProcessedTaskConfig.from_dict(config)
for task_type, config in data['tasks'].items()
},
tools={
tool_type: ProcessedToolConfig.from_dict(config)
for tool_type, config in data['tools'].items()
}
)
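The to_dict/from_dict pairs exist so a processed configuration survives the cache as plain dicts; a quick round-trip check (field values are illustrative):

cfg = SpecialistProcessedConfig(
    agents={'rag_agent': ProcessedAgentConfig(role='r', goal='g', backstory='b',
                                              name='RAG Agent', type='RAG_AGENT')},
    tasks={},
    tools={},
)
assert SpecialistProcessedConfig.from_dict(cfg.to_dict()).agents['rag_agent'].role == 'r'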

View File

@@ -0,0 +1,75 @@
from typing import Dict, Any, Type
from flask import current_app
from common.utils.cache.base import CacheHandler
from common.utils.cache.crewai_configuration import SpecialistProcessedConfig
from common.utils.cache.crewai_config_processor import BaseCrewAIConfigProcessor
class CrewAIProcessedConfigCacheHandler(CacheHandler[SpecialistProcessedConfig]):
"""Handles caching of processed specialist configurations"""
handler_name = 'crewai_processed_config_cache'
def __init__(self, region):
super().__init__(region, 'crewai_processed_config')
self.configure_keys('tenant_id', 'specialist_id')
def _to_cache_data(self, instance: SpecialistProcessedConfig) -> Dict[str, Any]:
"""Convert SpecialistProcessedConfig to cache data"""
return instance.to_dict()
def _from_cache_data(self, data: Dict[str, Any], **kwargs) -> SpecialistProcessedConfig:
"""Create SpecialistProcessedConfig from cache data"""
return SpecialistProcessedConfig.from_dict(data)
def _should_cache(self, value: Dict[str, Any]) -> bool:
    """Validate cache data"""
    required_keys = {'agents', 'tasks', 'tools'}
    missing = required_keys - set(value)
    if missing:
        current_app.logger.warning(f'CrewAI Processed Config Cache missing required keys: {missing}')
        return False
    return bool(value['agents'] or value['tasks'])
def get_specialist_config(self, tenant_id: int, specialist_id: int) -> SpecialistProcessedConfig:
"""
Get or create processed configuration for a specialist
Args:
tenant_id: Tenant ID
specialist_id: Specialist ID
Returns:
Processed specialist configuration
Raises:
ValueError: If specialist not found or processor not configured
"""
def creator_func(tenant_id: int, specialist_id: int) -> SpecialistProcessedConfig:
# Create processor instance and process config
processor = BaseCrewAIConfigProcessor(tenant_id, specialist_id)
return processor.process_config()
return self.get(
creator_func,
tenant_id=tenant_id,
specialist_id=specialist_id
)
def invalidate_tenant_specialist(self, tenant_id: int, specialist_id: int):
"""Invalidate cache for a specific tenant's specialist"""
self.invalidate(
tenant_id=tenant_id,
specialist_id=specialist_id
)
current_app.logger.info(
f"Invalidated cache for tenant {tenant_id} specialist {specialist_id}"
)
def register_specialist_cache_handlers(cache_manager) -> None:
"""Register specialist cache handlers with cache manager"""
cache_manager.register_handler(
CrewAIProcessedConfigCacheHandler,
'eveai_chat_workers'
)
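Usage sketch: the handler builds (or fetches) the processed configuration keyed on tenant and specialist, and editors invalidate it after changes; the ids and agent key are illustrative (agent keys are lower-cased type names):

processed = cache_manager.crewai_processed_config_cache.get_specialist_config(
    tenant_id=42, specialist_id=7)
rag_agent = processed.agents.get('standard_rag_agent')

# After editing the specialist's agents/tasks/tools:
cache_manager.crewai_processed_config_cache.invalidate_tenant_specialist(42, 7)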

View File

@@ -0,0 +1,51 @@
from typing import Type
from flask import Flask
from common.utils.cache.base import CacheHandler
from common.utils.cache.regions import create_cache_regions
from common.utils.cache.config_cache import AgentConfigCacheHandler
class EveAICacheManager:
"""Cache manager with registration capabilities"""
def __init__(self):
self._regions = {}
self._handlers = {}
self._handler_instances = {}
def init_app(self, app: Flask):
"""Initialize cache regions"""
self._regions = create_cache_regions(app)
# Store regions in instance
for region_name, region in self._regions.items():
setattr(self, f"{region_name}_region", region)
app.logger.info(f'Cache regions initialized: {self._regions.keys()}')
def register_handler(self, handler_class: Type[CacheHandler], region: str):
"""Register a cache handler class with its region"""
if not hasattr(handler_class, 'handler_name'):
raise ValueError("Cache handler must define handler_name class attribute")
self._handlers[handler_class] = region
# Create handler instance
region_instance = self._regions[region]
handler_instance = handler_class(region_instance)
self._handler_instances[handler_class.handler_name] = handler_instance
def invalidate_region(self, region_name: str):
"""Invalidate an entire cache region"""
if region_name in self._regions:
self._regions[region_name].invalidate()
else:
raise ValueError(f"Unknown cache region: {region_name}")
def __getattr__(self, name):
"""Handle dynamic access to registered handlers"""
instances = object.__getattribute__(self, '_handler_instances')
if name in instances:
return instances[name]
raise AttributeError(f"'EveAICacheManager' object has no attribute '{name}'")
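A sketch of how the manager might be wired in an app factory; it assumes REDIS_BASE_URI is present in the app config, and the region chosen for AgentConfigCacheHandler is illustrative:

from flask import Flask

app = Flask(__name__)
cache_manager = EveAICacheManager()
cache_manager.init_app(app)  # creates the dogpile regions from app config

# Register a handler class against one of the created regions:
cache_manager.register_handler(AgentConfigCacheHandler, 'eveai_config')

# An entire region can be flushed at once when needed:
cache_manager.invalidate_region('eveai_config')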

common/utils/cache/regions.py
View File

@@ -0,0 +1,74 @@
# common/utils/cache/regions.py
import time
from dogpile.cache import make_region
from urllib.parse import urlparse
import os
def get_redis_config(app):
"""
Create Redis configuration dict based on app config
Handles both authenticated and non-authenticated setups
"""
# Parse the REDIS_BASE_URI to get all components
redis_uri = urlparse(app.config['REDIS_BASE_URI'])
config = {
'host': redis_uri.hostname,
'port': int(redis_uri.port or 6379),
'db': 4, # Keep this for later use
'redis_expiration_time': 3600,
'distributed_lock': True,
'thread_local_lock': False,
}
# Add authentication if provided
if redis_uri.username and redis_uri.password:
config.update({
'username': redis_uri.username,
'password': redis_uri.password
})
return config
def create_cache_regions(app):
"""Initialize all cache regions with app config"""
redis_config = get_redis_config(app)
regions = {}
startup_time = int(time.time())
# Region for model-related caching (ModelVariables etc)
model_region = make_region(name='eveai_model').configure(
'dogpile.cache.redis',
arguments=redis_config,
replace_existing_backend=True
)
regions['eveai_model'] = model_region
# Region for eveai_chat_workers components (Specialists, Retrievers, ...)
eveai_chat_workers_region = make_region(name='eveai_chat_workers').configure(
'dogpile.cache.redis',
arguments=redis_config, # arguments={**redis_config, 'db': 4}, # Different DB
replace_existing_backend=True
)
regions['eveai_chat_workers'] = eveai_chat_workers_region
# Region for eveai_workers components (Processors, ...)
eveai_workers_region = make_region(name='eveai_workers').configure(
'dogpile.cache.redis',
arguments=redis_config, # Same config for now
replace_existing_backend=True
)
regions['eveai_workers'] = eveai_workers_region
eveai_config_region = make_region(name='eveai_config').configure(
'dogpile.cache.redis',
arguments=redis_config,
replace_existing_backend=True
)
regions['eveai_config'] = eveai_config_region
return regions
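For reference, a region produced here can be exercised directly with dogpile's get_or_create; a small sketch (the cache key and creator function are invented for the example):

regions = create_cache_regions(app)  # requires app.config['REDIS_BASE_URI']
model_region = regions['eveai_model']

def expensive_lookup():
    # Stand-in for a real loader; only executed on a cache miss.
    return {'model': 'mistral-embed'}

value = model_region.get_or_create('model:settings:1', expensive_lookup)
model_region.delete('model:settings:1')  # explicit invalidation of one key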

View File

@@ -1,14 +1,14 @@
from celery import Celery
from kombu import Queue
from werkzeug.local import LocalProxy
from redbeat import RedBeatScheduler
celery_app = Celery()
def init_celery(celery, app):
def init_celery(celery, app, is_beat=False):
celery_app.main = app.name
app.logger.debug(f'CELERY_BROKER_URL: {app.config["CELERY_BROKER_URL"]}')
app.logger.debug(f'CELERY_RESULT_BACKEND: {app.config["CELERY_RESULT_BACKEND"]}')
celery_config = {
'broker_url': app.config.get('CELERY_BROKER_URL', 'redis://localhost:6379/0'),
'result_backend': app.config.get('CELERY_RESULT_BACKEND', 'redis://localhost:6379/0'),
@@ -17,19 +17,40 @@ def init_celery(celery, app):
'accept_content': app.config.get('CELERY_ACCEPT_CONTENT', ['json']),
'timezone': app.config.get('CELERY_TIMEZONE', 'UTC'),
'enable_utc': app.config.get('CELERY_ENABLE_UTC', True),
'task_routes': {'eveai_worker.tasks.create_embeddings': {'queue': 'embeddings',
'routing_key': 'embeddings.create_embeddings'}},
}
if is_beat:
# Add configurations specific to Beat scheduler
celery_config['beat_scheduler'] = 'redbeat.RedBeatScheduler'
celery_config['redbeat_lock_key'] = 'redbeat::lock'
celery_config['beat_max_loop_interval'] = 10 # Adjust as needed
celery_app.conf.update(**celery_config)
# Setting up Celery task queues
celery_app.conf.task_queues = (
Queue('default', routing_key='task.#'),
Queue('embeddings', routing_key='embeddings.#', queue_arguments={'x-max-priority': 10}),
Queue('llm_interactions', routing_key='llm_interactions.#', queue_arguments={'x-max-priority': 5}),
)
# Task queues for workers only
if not is_beat:
celery_app.conf.task_queues = (
Queue('default', routing_key='task.#'),
Queue('embeddings', routing_key='embeddings.#', queue_arguments={'x-max-priority': 10}),
Queue('llm_interactions', routing_key='llm_interactions.#', queue_arguments={'x-max-priority': 5}),
Queue('entitlements', routing_key='entitlements.#', queue_arguments={'x-max-priority': 10}),
)
celery_app.conf.task_routes = {
'eveai_workers.*': { # All tasks from eveai_workers module
'queue': 'embeddings',
'routing_key': 'embeddings.#',
},
'eveai_chat_workers.*': { # All tasks from eveai_chat_workers module
'queue': 'llm_interactions',
'routing_key': 'llm_interactions.#',
},
'eveai_entitlements.*': { # All tasks from eveai_entitlements module
'queue': 'entitlements',
'routing_key': 'entitlements.#',
}
}
# Ensuring tasks execute with Flask application context
# Ensure tasks execute with Flask context
class ContextTask(celery.Task):
def __call__(self, *args, **kwargs):
with app.app_context():
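# (Hunk truncated in this view; the conventional Flask-Celery pattern
# presumably continues with `return self.run(*args, **kwargs)` and then
# assigns `celery.Task = ContextTask`.)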

View File

@@ -0,0 +1,710 @@
from typing import Optional, List, Union, Dict, Any, Pattern
from pydantic import BaseModel, field_validator, model_validator
from typing_extensions import Annotated
import re
from datetime import datetime
import json
from textwrap import dedent
import yaml
from dataclasses import dataclass
class TaggingField(BaseModel):
"""Represents a single tagging field configuration"""
type: str
required: bool = False
description: Optional[str] = None
allowed_values: Optional[List[Any]] = None # for enum type
min_value: Optional[Union[int, float]] = None # for numeric types
max_value: Optional[Union[int, float]] = None # for numeric types
@field_validator('type', mode='before')
@classmethod
def validate_type(cls, v: str) -> str:
valid_types = ['string', 'integer', 'float', 'date', 'enum']
if v not in valid_types:
raise ValueError(f'type must be one of {valid_types}')
return v
@model_validator(mode='after')
def validate_field_constraints(self) -> 'TaggingField':
# Validate enum constraints
if self.type == 'enum':
if not self.allowed_values:
raise ValueError('allowed_values must be provided for enum type')
elif self.allowed_values is not None:
raise ValueError('allowed_values only valid for enum type')
# Validate numeric constraints
if self.type not in ('integer', 'float'):
if self.min_value is not None or self.max_value is not None:
raise ValueError('min_value/max_value only valid for numeric types')
else:
if self.min_value is not None and self.max_value is not None and self.min_value >= self.max_value:
raise ValueError('min_value must be less than max_value')
return self
class TaggingFields(BaseModel):
"""Represents a collection of tagging fields, mapped by their names"""
fields: Dict[str, TaggingField]
@classmethod
def from_dict(cls, data: Dict[str, Dict[str, Any]]) -> 'TaggingFields':
return cls(fields={
field_name: TaggingField(**field_config)
for field_name, field_config in data.items()
})
def to_dict(self) -> Dict[str, Dict[str, Any]]:
return {
field_name: field.model_dump(exclude_none=True)
for field_name, field in self.fields.items()
}
class ChunkingPatternsField(BaseModel):
"""Represents a set of chunking patterns"""
patterns: List[str]
@field_validator('patterns')
def validate_patterns(cls, patterns):
for pattern in patterns:
try:
re.compile(pattern)
except re.error as e:
raise ValueError(f"Invalid regex pattern '{pattern}': {str(e)}")
return patterns
class ArgumentConstraint(BaseModel):
"""Base class for all argument constraints"""
description: Optional[str] = None
error_message: Optional[str] = None
class NumericConstraint(ArgumentConstraint):
"""Constraints for numeric values (int/float)"""
min_value: Optional[float] = None
max_value: Optional[float] = None
include_min: bool = True # True for >= min_value, False for > min_value
include_max: bool = True # True for <= max_value, False for < max_value
@model_validator(mode='after')
def validate_ranges(self) -> 'NumericConstraint':
if self.min_value is not None and self.max_value is not None:
if self.min_value > self.max_value:
raise ValueError("min_value must be less than or equal to max_value")
return self
def validate(self, value: Union[int, float]) -> bool:
if self.min_value is not None:
if self.include_min and value < self.min_value:
return False
if not self.include_min and value <= self.min_value:
return False
if self.max_value is not None:
if self.include_max and value > self.max_value:
return False
if not self.include_max and value >= self.max_value:
return False
return True
class StringConstraint(ArgumentConstraint):
"""Constraints for string values"""
min_length: Optional[int] = None
max_length: Optional[int] = None
patterns: Optional[List[str]] = None # List of regex patterns to match
pattern_match_all: bool = False # If True, string must match all patterns
forbidden_patterns: Optional[List[str]] = None # List of regex patterns that must not match
allow_empty: bool = False
@field_validator('patterns', 'forbidden_patterns')
@classmethod
def validate_patterns(cls, v: Optional[List[str]]) -> Optional[List[str]]:
if v is not None:
# Validate each pattern compiles
for pattern in v:
try:
re.compile(pattern)
except re.error as e:
raise ValueError(f"Invalid regex pattern '{pattern}': {str(e)}")
return v
def validate(self, value: str) -> bool:
if not self.allow_empty and not value:
return False
if self.min_length is not None and len(value) < self.min_length:
return False
if self.max_length is not None and len(value) > self.max_length:
return False
if self.patterns:
matches = [bool(re.search(pattern, value)) for pattern in self.patterns]
if self.pattern_match_all and not all(matches):
return False
if not self.pattern_match_all and not any(matches):
return False
if self.forbidden_patterns:
for pattern in self.forbidden_patterns:
if re.search(pattern, value):
return False
return True
class DateConstraint(ArgumentConstraint):
"""Constraints for date values"""
min_date: Optional[datetime] = None
max_date: Optional[datetime] = None
include_min: bool = True
include_max: bool = True
allowed_formats: Optional[List[str]] = None # List of allowed date formats
@model_validator(mode='after')
def validate_ranges(self) -> 'DateConstraint':
if self.min_date and self.max_date and self.min_date > self.max_date:
raise ValueError("min_date must be less than or equal to max_date")
return self
def validate(self, value: datetime) -> bool:
if self.min_date is not None:
if self.include_min and value < self.min_date:
return False
if not self.include_min and value <= self.min_date:
return False
if self.max_date is not None:
if self.include_max and value > self.max_date:
return False
if not self.include_max and value >= self.max_date:
return False
return True
class EnumConstraint(ArgumentConstraint):
"""Constraints for enum values"""
allowed_values: List[Any]
case_sensitive: bool = True # For string enums
allow_multiple: bool = False # If True, value can be a list of allowed values
min_selections: Optional[int] = None # When allow_multiple is True
max_selections: Optional[int] = None # When allow_multiple is True
@model_validator(mode='after')
def validate_selections(self) -> 'EnumConstraint':
if self.allow_multiple:
if self.min_selections is not None and self.max_selections is not None:
if self.min_selections > self.max_selections:
raise ValueError("min_selections must be less than or equal to max_selections")
if self.max_selections > len(self.allowed_values):
raise ValueError("max_selections cannot be greater than number of allowed values")
return self
def validate(self, value: Union[Any, List[Any]]) -> bool:
if self.allow_multiple:
if not isinstance(value, list):
return False
if self.min_selections is not None and len(value) < self.min_selections:
return False
if self.max_selections is not None and len(value) > self.max_selections:
return False
for v in value:
if not self._validate_single_value(v):
return False
else:
return self._validate_single_value(value)
return True
def _validate_single_value(self, value: Any) -> bool:
if isinstance(value, str) and not self.case_sensitive:
return any(str(value).lower() == str(v).lower() for v in self.allowed_values)
return value in self.allowed_values
class ArgumentDefinition(BaseModel):
"""Defines an argument with its type and constraints"""
name: str
type: str
description: Optional[str] = None
required: bool = False
default: Optional[Any] = None
constraints: Optional[Union[NumericConstraint, StringConstraint, DateConstraint, EnumConstraint]] = None
@field_validator('type')
@classmethod
def validate_type(cls, v: str) -> str:
valid_types = ['string', 'integer', 'float', 'date', 'enum']
if v not in valid_types:
raise ValueError(f'type must be one of {valid_types}')
return v
@model_validator(mode='after')
def validate_constraints(self) -> 'ArgumentDefinition':
if self.constraints:
expected_constraint_types = {
'string': StringConstraint,
'integer': NumericConstraint,
'float': NumericConstraint,
'date': DateConstraint,
'enum': EnumConstraint
}
expected_type = expected_constraint_types.get(self.type)
if not isinstance(self.constraints, expected_type):
raise ValueError(f'Constraints for type {self.type} must be of type {expected_type.__name__}')
if self.default is not None:
if not self.constraints.validate(self.default):
raise ValueError(f'Default value does not satisfy constraints for {self.name}')
return self
class ArgumentDefinitions(BaseModel):
"""Collection of argument definitions"""
arguments: Dict[str, ArgumentDefinition]
@classmethod
def from_dict(cls, data: Dict[str, Dict[str, Any]]) -> 'ArgumentDefinitions':
return cls(arguments={
arg_name: ArgumentDefinition(**arg_config)
for arg_name, arg_config in data.items()
})
def to_dict(self) -> Dict[str, Dict[str, Any]]:
return {
arg_name: arg.model_dump(exclude_none=True)
for arg_name, arg in self.arguments.items()
}
def validate_argument_values(self, values: Dict[str, Any]) -> Dict[str, str]:
"""
Validate a set of argument values against their definitions
Returns a dictionary of error messages for invalid arguments
"""
errors = {}
# Check for required arguments
for name, arg_def in self.arguments.items():
if arg_def.required and name not in values:
errors[name] = "Required argument missing"
continue
if name in values:
value = values[name]
# Validate type
try:
if arg_def.type == 'integer':
value = int(value)
elif arg_def.type == 'float':
value = float(value)
elif arg_def.type == 'date' and isinstance(value, str):
if arg_def.constraints and arg_def.constraints.allowed_formats:
for fmt in arg_def.constraints.allowed_formats:
try:
value = datetime.strptime(value, fmt)
break
except ValueError:
continue
else:
errors[name] = f"Invalid date format. Allowed formats: {arg_def.constraints.allowed_formats}"
continue
except (ValueError, TypeError):
errors[name] = f"Invalid type. Expected {arg_def.type}"
continue
# Validate constraints
if arg_def.constraints and not arg_def.constraints.validate(value):
errors[name] = arg_def.constraints.error_message or "Value does not satisfy constraints"
return errors
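A small, self-contained sketch of these validators in action, using the classes defined above (field name and bounds are invented):

defs = ArgumentDefinitions(arguments={
    'max_results': ArgumentDefinition(
        name='max_results',
        type='integer',
        required=True,
        constraints=NumericConstraint(min_value=1, max_value=50),
    )
})

errors = defs.validate_argument_values({'max_results': 200})
# errors == {'max_results': 'Value does not satisfy constraints'}

errors = defs.validate_argument_values({})
# errors == {'max_results': 'Required argument missing'}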
@dataclass
class DocumentationFormat:
"""Constants for documentation formats"""
MARKDOWN = "markdown"
JSON = "json"
YAML = "yaml"
@dataclass
class DocumentationVersion:
"""Constants for documentation versions"""
BASIC = "basic" # Original documentation without retriever info
EXTENDED = "extended" # Including retriever documentation
def _generate_argument_constraints(field_config: Dict[str, Any]) -> List[Dict[str, Any]]:
"""Generate possible argument constraints based on field type"""
constraints = []
base_constraint = {
"description": f"Constraint for {field_config.get('description', 'field')}",
"error_message": "Optional custom error message"
}
if field_config["type"] == "integer" or field_config["type"] == "float":
constraints.append({
**base_constraint,
"type": "NumericConstraint",
"possible_constraints": {
"min_value": "number",
"max_value": "number",
"include_min": "boolean",
"include_max": "boolean"
},
"example": {
"min_value": field_config.get("min_value", 0),
"max_value": field_config.get("max_value", 100),
"include_min": True,
"include_max": True
}
})
elif field_config["type"] == "string":
constraints.append({
**base_constraint,
"type": "StringConstraint",
"possible_constraints": {
"min_length": "integer",
"max_length": "integer",
"patterns": "list[str]",
"pattern_match_all": "boolean",
"forbidden_patterns": "list[str]",
"allow_empty": "boolean"
},
"example": {
"min_length": 1,
"max_length": 100,
"patterns": ["^[A-Za-z0-9]+$"],
"pattern_match_all": False,
"forbidden_patterns": ["^test_", "_temp$"],
"allow_empty": False
}
})
elif field_config["type"] == "enum":
constraints.append({
**base_constraint,
"type": "EnumConstraint",
"possible_constraints": {
"allowed_values": f"list[{field_config.get('allowed_values', ['value1', 'value2'])}]",
"case_sensitive": "boolean",
"allow_multiple": "boolean",
"min_selections": "integer",
"max_selections": "integer"
},
"example": {
"allowed_values": field_config.get("allowed_values", ["value1", "value2"]),
"case_sensitive": True,
"allow_multiple": True,
"min_selections": 1,
"max_selections": 2
}
})
elif field_config["type"] == "date":
constraints.append({
**base_constraint,
"type": "DateConstraint",
"possible_constraints": {
"min_date": "datetime",
"max_date": "datetime",
"include_min": "boolean",
"include_max": "boolean",
"allowed_formats": "list[str]"
},
"example": {
"min_date": "2024-01-01T00:00:00",
"max_date": "2024-12-31T23:59:59",
"include_min": True,
"include_max": True,
"allowed_formats": ["%Y-%m-%d", "%Y/%m/%d"]
}
})
return constraints
def generate_field_documentation(
tagging_fields: Dict[str, Any],
format: str = "markdown",
version: str = "basic"
) -> str:
"""
Generate documentation for tagging fields configuration.
Args:
tagging_fields: Dictionary containing tagging fields configuration
format: Output format ("markdown", "json", or "yaml")
version: Documentation version ("basic" or "extended")
Returns:
str: Formatted documentation
"""
if version not in [DocumentationVersion.BASIC, DocumentationVersion.EXTENDED]:
raise ValueError(f"Unsupported documentation version: {version}")
# Normalize fields configuration
normalized_fields = {}
for field_name, field_config in tagging_fields.items():
field_doc = {
"name": field_name,
"type": field_config["type"],
"required": field_config.get("required", False),
"description": field_config.get("description", "No description provided"),
"constraints": []
}
# Only include possible arguments in extended version
if version == DocumentationVersion.EXTENDED:
field_doc["possible_arguments"] = _generate_argument_constraints(field_config)
# Add type-specific constraints
if field_config["type"] == "integer" or field_config["type"] == "float":
if "min_value" in field_config:
field_doc["constraints"].append(
f"Minimum value: {field_config['min_value']}")
if "max_value" in field_config:
field_doc["constraints"].append(
f"Maximum value: {field_config['max_value']}")
elif field_config["type"] == "string":
if "min_length" in field_config:
field_doc["constraints"].append(
f"Minimum length: {field_config['min_length']}")
if "max_length" in field_config:
field_doc["constraints"].append(
f"Maximum length: {field_config['max_length']}")
if "patterns" in field_config:
field_doc["constraints"].append(
f"Must match patterns: {', '.join(field_config['patterns'])}")
elif field_config["type"] == "enum":
if "allowed_values" in field_config:
field_doc["constraints"].append(
f"Allowed values: {', '.join(str(v) for v in field_config['allowed_values'])}")
elif field_config["type"] == "date":
if "min_date" in field_config:
field_doc["constraints"].append(
f"Minimum date: {field_config['min_date']}")
if "max_date" in field_config:
field_doc["constraints"].append(
f"Maximum date: {field_config['max_date']}")
if "allowed_formats" in field_config:
field_doc["constraints"].append(
f"Allowed formats: {', '.join(field_config['allowed_formats'])}")
normalized_fields[field_name] = field_doc
# Generate documentation in requested format
if format == DocumentationFormat.MARKDOWN:
return _generate_markdown_docs(normalized_fields, version)
elif format == DocumentationFormat.JSON:
return _generate_json_docs(normalized_fields, version)
elif format == DocumentationFormat.YAML:
return _generate_yaml_docs(normalized_fields, version)
else:
raise ValueError(f"Unsupported documentation format: {format}")
def _generate_markdown_docs(fields: Dict[str, Any], version: str) -> str:
"""Generate markdown documentation"""
docs = ["# Tagging Fields Documentation\n"]
# Add overview table
docs.append("## Fields Overview\n")
docs.append("| Field Name | Type | Required | Description |")
docs.append("|------------|------|----------|-------------|")
for field_name, field in fields.items():
docs.append(
f"| {field_name} | {field['type']} | "
f"{'Yes' if field['required'] else 'No'} | {field['description']} |"
)
# Add detailed field specifications
docs.append("\n## Detailed Field Specifications\n")
for field_name, field in fields.items():
docs.append(f"### {field_name}\n")
docs.append(f"**Type:** {field['type']}")
docs.append(f"**Required:** {'Yes' if field['required'] else 'No'}")
docs.append(f"**Description:** {field['description']}\n")
if field["constraints"]:
docs.append("**Field Constraints:**")
for constraint in field["constraints"]:
docs.append(f"- {constraint}")
docs.append("")
# Add retriever argument documentation only in extended version
if version == DocumentationVersion.EXTENDED and "possible_arguments" in field:
docs.append("**Possible Retriever Arguments:**")
for arg_constraint in field["possible_arguments"]:
docs.append(f"\n*{arg_constraint['type']}*")
docs.append(f"Description: {arg_constraint['description']}")
docs.append("\nPossible constraints:")
for const_name, const_type in arg_constraint["possible_constraints"].items():
docs.append(f"- `{const_name}`: {const_type}")
docs.append("\nExample:")
docs.append("```python")
docs.append(json.dumps(arg_constraint["example"], indent=2))
docs.append("```\n")
# Add example retriever configuration only in extended version
if version == DocumentationVersion.EXTENDED:
docs.append("\n## Example Retriever Configuration\n")
docs.append("```python")
example_config = {
"metadata_filters": {
field_name: field["possible_arguments"][0]["example"]
for field_name, field in fields.items()
if "possible_arguments" in field
}
}
docs.append(json.dumps(example_config, indent=2))
docs.append("```")
return "\n".join(docs)
def _generate_json_docs(fields: Dict[str, Any], version: str) -> str:
"""Generate JSON documentation"""
doc = {
"tagging_fields_documentation": {
"version": version,
"fields": fields
}
}
if version == DocumentationVersion.EXTENDED:
doc["tagging_fields_documentation"]["example_retriever_config"] = {
"metadata_filters": {
field_name: field["possible_arguments"][0]["example"]
for field_name, field in fields.items()
if "possible_arguments" in field
}
}
return json.dumps(doc, indent=2)
def _generate_yaml_docs(fields: Dict[str, Any], version: str) -> str:
"""Generate YAML documentation"""
doc = {
"tagging_fields_documentation": {
"version": version,
"fields": fields
}
}
if version == DocumentationVersion.EXTENDED:
doc["tagging_fields_documentation"]["example_retriever_config"] = {
"metadata_filters": {
field_name: field["possible_arguments"][0]["example"]
for field_name, field in fields.items()
if "possible_arguments" in field
}
}
return yaml.dump(doc, sort_keys=False, default_flow_style=False)
def patterns_to_json(text_area_content: str) -> str:
"""Convert line-based patterns to JSON"""
text_area_content = text_area_content.strip()
if len(text_area_content) == 0:
return json.dumps([])
# Split on newlines and remove empty lines
patterns = [line.strip() for line in text_area_content.split('\n') if line.strip()]
return json.dumps(patterns)
def json_to_patterns(json_content: str) -> str:
"""Convert JSON patterns list to text area content"""
try:
patterns = json.loads(json_content)
if not isinstance(patterns, list):
raise ValueError("JSON must contain a list of patterns")
# Join with newlines
return '\n'.join(patterns)
except json.JSONDecodeError as e:
raise ValueError(f"Invalid JSON format: {e}")
def json_to_pattern_list(json_content: str) -> list:
"""Convert JSON patterns list to text area content"""
try:
if json_content:
patterns = json.loads(json_content)
if not isinstance(patterns, list):
raise ValueError("JSON must contain a list of patterns")
# Unescape if needed
patterns = [pattern.replace('\\\\', '\\') for pattern in patterns]
return patterns
else:
return []
except json.JSONDecodeError as e:
raise ValueError(f"Invalid JSON format: {e}")
def normalize_json_field(value: str | dict | None, field_name: str = "JSON field") -> dict:
"""
Normalize a JSON field value to ensure it's a valid dictionary.
Args:
value: The input value which can be:
- None (will return empty dict)
- String (will be parsed as JSON)
- Dict (will be validated and returned)
field_name: Name of the field for error messages
Returns:
dict: The normalized JSON data as a Python dictionary
Raises:
ValueError: If the input string is not valid JSON or the input dict contains invalid types
"""
# Handle None case
if value is None:
return {}
# Handle dictionary case
if isinstance(value, dict):
try:
# Validate all values are JSON serializable
json.dumps(value)
return value
except TypeError as e:
raise ValueError(f"{field_name} contains invalid types: {str(e)}")
# Handle string case
if isinstance(value, str):
if not value.strip():
return {}
try:
return json.loads(value)
except json.JSONDecodeError as e:
raise ValueError(f"{field_name} contains invalid JSON: {str(e)}")
raise ValueError(f"{field_name} must be a string, dictionary, or None (got {type(value)})")

View File

@@ -1,14 +1,14 @@
from flask import request, current_app, session
from flask_jwt_extended import decode_token, verify_jwt_in_request, get_jwt_identity
from common.models.user import Tenant, TenantDomain
def get_allowed_origins(tenant_id):
session_key = f"allowed_origins_{tenant_id}"
if session_key in session:
current_app.logger.debug(f"Fetching allowed origins for tenant {tenant_id} from session")
return session[session_key]
current_app.logger.debug(f"Fetching allowed origins for tenant {tenant_id} from database")
tenant_domains = TenantDomain.query.filter_by(tenant_id=int(tenant_id)).all()
allowed_origins = [domain.domain for domain in tenant_domains]
@@ -18,51 +18,52 @@ def get_allowed_origins(tenant_id):
def cors_after_request(response, prefix):
current_app.logger.debug(f'CORS after request: {request.path}, prefix: {prefix}')
current_app.logger.debug(f'request.headers: {request.headers}')
current_app.logger.debug(f'request.args: {request.args}')
current_app.logger.debug(f'request is json?: {request.is_json}')
# Exclude health checks from checks
if request.path.startswith('/healthz') or request.path.startswith('/_healthz'):
current_app.logger.debug('Skipping CORS headers for health checks')
response.headers.add('Access-Control-Allow-Origin', '*')
response.headers.add('Access-Control-Allow-Headers', '*')
response.headers.add('Access-Control-Allow-Methods', '*')
return response
# Handle OPTIONS preflight requests
if request.method == 'OPTIONS':
response.headers.add('Access-Control-Allow-Origin', '*')
response.headers.add('Access-Control-Allow-Headers', 'Content-Type,Authorization,X-Tenant-ID')
response.headers.add('Access-Control-Allow-Methods', 'GET,POST,PUT,DELETE,OPTIONS')
response.headers.add('Access-Control-Allow-Credentials', 'true')
return response
tenant_id = None
allowed_origins = []
# Try to get tenant_id from JSON payload
json_data = request.get_json(silent=True)
current_app.logger.debug(f'request.get_json(silent=True): {json_data}')
if json_data and 'tenant_id' in json_data:
tenant_id = json_data['tenant_id']
# Check Socket.IO connection
if 'socket.io' in request.path:
token = request.args.get('token')
if token:
try:
decoded = decode_token(token)
tenant_id = decoded['sub']
except Exception as e:
current_app.logger.error(f'Error decoding token: {e}')
return response
else:
# Fallback to get tenant_id from query parameters or headers if JSON is not available
tenant_id = request.args.get('tenant_id') or request.args.get('tenantId') or request.headers.get('X-Tenant-ID')
current_app.logger.debug(f'Identified tenant_id: {tenant_id}')
# Regular API requests
try:
if verify_jwt_in_request(optional=True):
tenant_id = get_jwt_identity()
except Exception as e:
current_app.logger.error(f'Error verifying JWT: {e}')
return response
if tenant_id:
origin = request.headers.get('Origin')
allowed_origins = get_allowed_origins(tenant_id)
current_app.logger.debug(f'Allowed origins for tenant {tenant_id}: {allowed_origins}')
else:
current_app.logger.warning('tenant_id not found in request')
origin = request.headers.get('Origin')
current_app.logger.debug(f'Origin: {origin}')
if origin in allowed_origins:
response.headers.add('Access-Control-Allow-Origin', origin)
response.headers.add('Access-Control-Allow-Headers', 'Content-Type,Authorization')
response.headers.add('Access-Control-Allow-Methods', 'GET,POST,PUT,DELETE,OPTIONS')
response.headers.add('Access-Control-Allow-Credentials', 'true')
current_app.logger.debug(f'CORS headers set for origin: {origin}')
else:
current_app.logger.warning(f'Origin {origin} not allowed')
return response
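A sketch of wiring this hook into an app; the prefix value is illustrative:

from functools import partial
from flask import Flask

app = Flask(__name__)

# Run cors_after_request on every response, tagging its logs with a prefix:
app.after_request(partial(cors_after_request, prefix='api'))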

View File

@@ -1,62 +1,104 @@
from flask import request, session
import time
from flask_security import current_user
import json
def log_request_middleware(app):
# @app.before_request
# def log_request_info():
# start_time = time.time()
# app.logger.debug(f"Request URL: {request.url}")
# app.logger.debug(f"Request Method: {request.method}")
# app.logger.debug(f"Request Headers: {request.headers}")
# app.logger.debug(f"Time taken for logging request info: {time.time() - start_time} seconds")
# try:
# app.logger.debug(f"Request Body: {request.get_data()}")
# except Exception as e:
# app.logger.error(f"Error reading request body: {e}")
# app.logger.debug(f"Time taken for logging request body: {time.time() - start_time} seconds")
# @app.before_request
# def check_csrf_token():
# start_time = time.time()
# if request.method == "POST":
# csrf_token = request.form.get("csrf_token")
# app.logger.debug(f"CSRF Token: {csrf_token}")
# app.logger.debug(f"Time taken for logging CSRF token: {time.time() - start_time} seconds")
# @app.before_request
# def log_user_info():
# if current_user and current_user.is_authenticated:
# app.logger.debug(f"Before: User ID: {current_user.id}")
# app.logger.debug(f"Before: User Email: {current_user.email}")
# app.logger.debug(f"Before: User Roles: {current_user.roles}")
# else:
# app.logger.debug("After: No user logged in")
@app.before_request
def log_session_state_before():
app.logger.debug(f'Session state before request: {session.items()}')
# @app.after_request
# def log_response_info(response):
# start_time = time.time()
# app.logger.debug(f"Response Status: {response.status}")
# app.logger.debug(f"Response Headers: {response.headers}")
#
# app.logger.debug(f"Time taken for logging response info: {time.time() - start_time} seconds")
# return response
# @app.after_request
# def log_user_after_request(response):
# if current_user and current_user.is_authenticated:
# app.logger.debug(f"After: User ID: {current_user.id}")
# app.logger.debug(f"after: User Email: {current_user.email}")
# app.logger.debug(f"After: User Roles: {current_user.roles}")
# else:
# app.logger.debug("After: No user logged in")
pass
@app.after_request
def log_session_state_after(response):
app.logger.debug(f'Session state after request: {session.items()}')
return response
def register_request_debugger(app):
@app.before_request
def debug_request_info():
"""Log consolidated request information for debugging"""
# Skip health check endpoints
if request.path.startswith('/_healthz') or request.path.startswith('/healthz'):
return
# Gather all request information in a structured way
debug_info = {
"basic_info": {
"method": request.method,
"path": request.path,
"content_type": request.content_type,
"content_length": request.content_length
},
"environment": {
"remote_addr": request.remote_addr,
"user_agent": str(request.user_agent)
}
}
# Add headers (excluding sensitive ones)
safe_headers = {k: v for k, v in request.headers.items()
if k.lower() not in ('authorization', 'cookie', 'x-api-key')}
debug_info["headers"] = safe_headers
# Add authentication info (presence only)
auth_header = request.headers.get('Authorization', '')
debug_info["auth_info"] = {
"has_auth_header": bool(auth_header),
"auth_type": auth_header.split(' ')[0] if auth_header else None,
"token_length": len(auth_header.split(' ')[1]) if auth_header and len(auth_header.split(' ')) > 1 else 0,
"header_format": 'Valid format' if auth_header.startswith('Bearer ') else 'Invalid format',
"raw_header": auth_header[:10] + '...' if auth_header else None # Show first 10 chars only
}
# Add request data based on type
if request.is_json:
try:
json_data = request.get_json()
if isinstance(json_data, dict):
# Remove sensitive fields from logging
safe_json = {k: v for k, v in json_data.items()
if not any(sensitive in k.lower()
for sensitive in ['password', 'token', 'secret', 'key'])}
debug_info["request_data"] = {
"type": "json",
"content": safe_json
}
except Exception as e:
debug_info["request_data"] = {
"type": "json",
"error": str(e)
}
elif request.form:
safe_form = {k: v for k, v in request.form.items()
if not any(sensitive in k.lower()
for sensitive in ['password', 'token', 'secret', 'key'])}
debug_info["request_data"] = {
"type": "form",
"content": safe_form
}
# Add file information if present
if request.files:
debug_info["files"] = {
name: {
"filename": f.filename,
"content_type": f.content_type,
"content_length": f.content_length if hasattr(f, 'content_length') else None
}
for name, f in request.files.items()
}
# Add CORS information if present
cors_headers = {
"origin": request.headers.get('Origin'),
"request_method": request.headers.get('Access-Control-Request-Method'),
"request_headers": request.headers.get('Access-Control-Request-Headers')
}
if any(cors_headers.values()):
debug_info["cors"] = {k: v for k, v in cors_headers.items() if v is not None}
# Format the debug info as a pretty-printed JSON string with indentation
formatted_debug_info = json.dumps(debug_info, indent=2, sort_keys=True)
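# (Hunk truncated in this view; presumably followed by something like
# app.logger.debug(f"Request debug info:\n{formatted_debug_info}").)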

View File

@@ -3,29 +3,42 @@ from datetime import datetime as dt, timezone as tz
from sqlalchemy import desc
from sqlalchemy.exc import SQLAlchemyError
from werkzeug.utils import secure_filename
from common.models.document import Document, DocumentVersion
from common.models.document import Document, DocumentVersion, Catalog
from common.extensions import db, minio_client
from common.utils.celery_utils import current_celery
from flask import current_app
from flask_security import current_user
import requests
from urllib.parse import urlparse, unquote
from urllib.parse import urlparse, unquote, urlunparse, parse_qs
import os
from .eveai_exceptions import EveAIInvalidLanguageException, EveAIDoubleURLException, EveAIUnsupportedFileType
from .config_field_types import normalize_json_field
from .eveai_exceptions import (EveAIInvalidLanguageException, EveAIDoubleURLException, EveAIUnsupportedFileType,
EveAIInvalidCatalog, EveAIInvalidDocument, EveAIInvalidDocumentVersion, EveAIException)
from ..models.user import Tenant
from common.utils.model_logging_utils import set_logging_information, update_logging_information
def create_document_stack(api_input, file, filename, extension, tenant_id):
# Create the Document
new_doc = create_document(api_input, filename, tenant_id)
catalog_id = int(api_input.get('catalog_id'))
catalog = Catalog.query.get(catalog_id)
if not catalog:
raise EveAIInvalidCatalog(tenant_id, catalog_id)
new_doc = create_document(api_input, filename, catalog_id)
db.session.add(new_doc)
url = api_input.get('url', '')
if url != '':
url = cope_with_local_url(api_input.get('url', ''))
# Create the DocumentVersion
new_doc_vers = create_version_for_document(new_doc,
api_input.get('url', ''),
new_doc_vers = create_version_for_document(new_doc, tenant_id,
url,
api_input.get('sub_file_type', ''),
api_input.get('language', 'en'),
api_input.get('user_context', ''),
api_input.get('user_metadata'),
api_input.get('catalog_properties')
)
db.session.add(new_doc_vers)
@@ -45,7 +58,7 @@ def create_document_stack(api_input, file, filename, extension, tenant_id):
return new_doc, new_doc_vers
def create_document(form, filename, tenant_id):
def create_document(form, filename, catalog_id):
new_doc = Document()
if form['name'] == '':
new_doc.name = filename.rsplit('.', 1)[0]
@@ -56,13 +69,14 @@ def create_document(form, filename, tenant_id):
new_doc.valid_from = form['valid_from']
else:
new_doc.valid_from = dt.now(tz.utc)
new_doc.tenant_id = tenant_id
new_doc.catalog_id = catalog_id
set_logging_information(new_doc, dt.now(tz.utc))
return new_doc
def create_version_for_document(document, url, language, user_context, user_metadata):
def create_version_for_document(document, tenant_id, url, sub_file_type, language, user_context, user_metadata,
catalog_properties):
new_doc_vers = DocumentVersion()
if url != '':
new_doc_vers.url = url
@@ -76,13 +90,19 @@ def create_version_for_document(document, url, language, user_context, user_meta
new_doc_vers.user_context = user_context
if user_metadata != '' and user_metadata is not None:
new_doc_vers.user_metadata = user_metadata
new_doc_vers.user_metadata = normalize_json_field(user_metadata, "user_metadata")
if catalog_properties != '' and catalog_properties is not None:
new_doc_vers.catalog_properties = normalize_json_field(catalog_properties, "catalog_properties")
if sub_file_type != '':
new_doc_vers.sub_file_type = sub_file_type
new_doc_vers.document = document
set_logging_information(new_doc_vers, dt.now(tz.utc))
mark_tenant_storage_dirty(document.tenant_id)
mark_tenant_storage_dirty(tenant_id)
return new_doc_vers
@@ -99,12 +119,12 @@ def upload_file_for_version(doc_vers, file, extension, tenant_id):
doc_vers.doc_id,
doc_vers.language,
doc_vers.id,
doc_vers.file_name,
f"{doc_vers.id}.{extension}",
file
)
doc_vers.bucket_name = bn
doc_vers.object_name = on
doc_vers.file_size_mb = size / 1048576 # Convert bytes to MB
doc_vers.file_size = size / 1048576 # Convert bytes to MB
db.session.commit()
current_app.logger.info(f'Successfully saved document to MinIO for tenant {tenant_id} for '
@@ -116,35 +136,6 @@ def upload_file_for_version(doc_vers, file, extension, tenant_id):
raise
def set_logging_information(obj, timestamp):
obj.created_at = timestamp
obj.updated_at = timestamp
user_id = get_current_user_id()
if user_id:
obj.created_by = user_id
obj.updated_by = user_id
def update_logging_information(obj, timestamp):
obj.updated_at = timestamp
user_id = get_current_user_id()
if user_id:
obj.updated_by = user_id
def get_current_user_id():
try:
if current_user and current_user.is_authenticated:
return current_user.id
else:
return None
except Exception:
# This will catch any errors if current_user is not available (e.g., in API context)
return None
def get_extension_from_content_type(content_type):
content_type_map = {
'text/html': 'html',
@@ -158,6 +149,8 @@ def get_extension_from_content_type(content_type):
def process_url(url, tenant_id):
url = cope_with_local_url(url)
response = requests.head(url, allow_redirects=True)
content_type = response.headers.get('Content-Type', '').split(';')[0]
@@ -189,57 +182,34 @@ def process_url(url, tenant_id):
return file_content, filename, extension
def process_multiple_urls(urls, tenant_id, api_input):
results = []
for url in urls:
try:
file_content, filename, extension = process_url(url, tenant_id)
url_input = api_input.copy()
url_input.update({
'url': url,
'name': f"{api_input['name']}-{filename}" if api_input['name'] else filename
})
new_doc, new_doc_vers = create_document_stack(url_input, file_content, filename, extension, tenant_id)
task_id = start_embedding_task(tenant_id, new_doc_vers.id)
results.append({
'url': url,
'document_id': new_doc.id,
'document_version_id': new_doc_vers.id,
'task_id': task_id,
'status': 'success'
})
except Exception as e:
current_app.logger.error(f"Error processing URL {url}: {str(e)}")
results.append({
'url': url,
'status': 'error',
'message': str(e)
})
return results
def clean_url(url):
tracking_params = {"utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content",
"hsa_acc", "hsa_cam", "hsa_grp", "hsa_ad", "hsa_src", "hsa_tgt", "hsa_kw",
"hsa_mt", "hsa_net", "hsa_ver", "gad_source", "gbraid"}
parsed_url = urlparse(url)
query_params = parse_qs(parsed_url.query)
# Remove tracking params
clean_params = {k: v for k, v in query_params.items() if k not in tracking_params}
# Reconstruct the URL
clean_query = "&".join(f"{k}={v[0]}" for k, v in clean_params.items()) if clean_params else ""
cleaned_url = urlunparse(parsed_url._replace(query=clean_query))
return cleaned_url
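For instance (URL invented):

url = 'https://example.com/post?utm_source=x&utm_medium=y&id=7'
clean_url(url)  # -> 'https://example.com/post?id=7'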
def start_embedding_task(tenant_id, doc_vers_id):
task = current_celery.send_task('create_embeddings', queue='embeddings', args=[
tenant_id,
doc_vers_id,
])
task = current_celery.send_task('create_embeddings',
args=[tenant_id, doc_vers_id,],
queue='embeddings')
current_app.logger.info(f'Embedding creation started for tenant {tenant_id}, '
f'Document Version {doc_vers_id}. '
f'Embedding creation task: {task.id}')
return task.id
def validate_file_type(extension):
current_app.logger.debug(f'Validating file type {extension}')
current_app.logger.debug(f'Supported file types: {current_app.config["SUPPORTED_FILE_TYPES"]}')
if extension not in current_app.config['SUPPORTED_FILE_TYPES']:
raise EveAIUnsupportedFileType(f"Filetype {extension} is currently not supported. "
f"Supported filetypes: {', '.join(current_app.config['SUPPORTED_FILE_TYPES'])}")
def get_filename_from_url(url):
parsed_url = urlparse(url)
path_parts = parsed_url.path.split('/')
@@ -257,11 +227,16 @@ def get_documents_list(page, per_page):
return pagination
def edit_document(document_id, name, valid_from, valid_to):
doc = Document.query.get_or_404(document_id)
doc.name = name
doc.valid_from = valid_from
doc.valid_to = valid_to
def edit_document(tenant_id, document_id, name, valid_from, valid_to):
doc = Document.query.get(document_id)
if not doc:
raise EveAIInvalidDocument(tenant_id, document_id)
if name:
doc.name = name
if valid_from:
doc.valid_from = valid_from
if valid_to:
doc.valid_to = valid_to
update_logging_information(doc, dt.now(tz.utc))
try:
@@ -273,9 +248,13 @@ def edit_document(document_id, name, valid_from, valid_to):
return None, str(e)
def edit_document_version(version_id, user_context):
doc_vers = DocumentVersion.query.get_or_404(version_id)
def edit_document_version(tenant_id, version_id, user_context, catalog_properties):
doc_vers = DocumentVersion.query.get(version_id)
if not doc_vers:
raise EveAIInvalidDocumentVersion(tenant_id, version_id)
doc_vers.user_context = user_context
doc_vers.catalog_properties = normalize_json_field(catalog_properties, "catalog_properties")
update_logging_information(doc_vers, dt.now(tz.utc))
try:
@@ -287,19 +266,22 @@ def edit_document_version(version_id, user_context):
return None, str(e)
def refresh_document_with_info(doc_id, api_input):
doc = Document.query.get_or_404(doc_id)
def refresh_document_with_info(doc_id, tenant_id, api_input):
doc = Document.query.get(doc_id)
if not doc:
raise EveAIInvalidDocument(tenant_id, doc_id)
old_doc_vers = DocumentVersion.query.filter_by(doc_id=doc_id).order_by(desc(DocumentVersion.id)).first()
if not old_doc_vers.url:
return None, "This document has no URL. Only documents with a URL can be refreshed."
new_doc_vers = create_version_for_document(
doc,
doc, tenant_id,
old_doc_vers.url,
old_doc_vers.sub_file_type,
api_input.get('language', old_doc_vers.language),
api_input.get('user_context', old_doc_vers.user_context),
api_input.get('user_metadata', old_doc_vers.user_metadata)
api_input.get('user_metadata', old_doc_vers.user_metadata),
api_input.get('catalog_properties', old_doc_vers.catalog_properties),
)
set_logging_information(new_doc_vers, dt.now(tz.utc))
@@ -311,40 +293,161 @@ def refresh_document_with_info(doc_id, api_input):
db.session.rollback()
return None, str(e)
response = requests.head(old_doc_vers.url, allow_redirects=True)
url = cope_with_local_url(old_doc_vers.url)
response = requests.head(url, allow_redirects=True)
content_type = response.headers.get('Content-Type', '').split(';')[0]
extension = get_extension_from_content_type(content_type)
response = requests.get(old_doc_vers.url)
response = requests.get(url)
response.raise_for_status()
file_content = response.content
upload_file_for_version(new_doc_vers, file_content, extension, doc.tenant_id)
upload_file_for_version(new_doc_vers, file_content, extension, tenant_id)
task = current_celery.send_task('create_embeddings', queue='embeddings', args=[
doc.tenant_id,
new_doc_vers.id,
])
task = current_celery.send_task('create_embeddings', args=[tenant_id, new_doc_vers.id,], queue='embeddings')
current_app.logger.info(f'Embedding creation started for document {doc_id} on version {new_doc_vers.id} '
f'with task id: {task.id}.')
return new_doc_vers, task.id
def refresh_document_with_content(doc_id: int, tenant_id: int, file_content: bytes, api_input: dict) -> tuple:
"""
Refresh document with new content
Args:
doc_id: Document ID
tenant_id: Tenant ID
file_content: New file content
api_input: Additional document information
Returns:
Tuple of (new_version, task_id)
"""
doc = Document.query.get(doc_id)
if not doc:
raise EveAIInvalidDocument(tenant_id, doc_id)
old_doc_vers = DocumentVersion.query.filter_by(doc_id=doc_id).order_by(desc(DocumentVersion.id)).first()
# Create new version with same file type as original
extension = old_doc_vers.file_type
new_doc_vers = create_version_for_document(
doc, tenant_id,
'', # No URL for content-based updates
old_doc_vers.sub_file_type,
api_input.get('language', old_doc_vers.language),
api_input.get('user_context', old_doc_vers.user_context),
api_input.get('user_metadata', old_doc_vers.user_metadata),
api_input.get('catalog_properties', old_doc_vers.catalog_properties),
)
try:
db.session.add(new_doc_vers)
db.session.commit()
except SQLAlchemyError as e:
db.session.rollback()
return None, str(e)
# Upload new content
upload_file_for_version(new_doc_vers, file_content, extension, tenant_id)
# Start embedding task
task = current_celery.send_task('create_embeddings', args=[tenant_id, new_doc_vers.id], queue='embeddings')
current_app.logger.info(f'Embedding creation started for document {doc_id} on version {new_doc_vers.id} '
f'with task id: {task.id}.')
return new_doc_vers, task.id
# Update the existing refresh_document function to use the new refresh_document_with_info
def refresh_document(doc_id):
def refresh_document(doc_id, tenant_id):
current_app.logger.info(f'Refreshing document {doc_id}')
doc = Document.query.get_or_404(doc_id)
old_doc_vers = DocumentVersion.query.filter_by(doc_id=doc_id).order_by(desc(DocumentVersion.id)).first()
api_input = {
'language': old_doc_vers.language,
'user_context': old_doc_vers.user_context,
'user_metadata': old_doc_vers.user_metadata
'user_metadata': old_doc_vers.user_metadata,
'catalog_properties': old_doc_vers.catalog_properties,
}
return refresh_document_with_info(doc_id, api_input)
return refresh_document_with_info(doc_id, tenant_id, api_input)
# Function triggered when a document_version is created or updated
def mark_tenant_storage_dirty(tenant_id):
tenant = db.session.query(Tenant).filter_by(id=tenant_id).first()
tenant = db.session.query(Tenant).filter_by(id=int(tenant_id)).first()
tenant.storage_dirty = True
db.session.commit()
def cope_with_local_url(url):
parsed_url = urlparse(url)
# Check if this is an internal WordPress URL (TESTING) and rewrite it
if parsed_url.netloc in [current_app.config['EXTERNAL_WORDPRESS_BASE_URL']]:
parsed_url = parsed_url._replace(
scheme=current_app.config['WORDPRESS_PROTOCOL'],
netloc=f"{current_app.config['WORDPRESS_HOST']}:{current_app.config['WORDPRESS_PORT']}"
)
url = urlunparse(parsed_url)
return url
def lookup_document(tenant_id: int, lookup_criteria: dict, metadata_type: str) -> tuple[Document, DocumentVersion]:
"""
Look up a document using metadata criteria
Args:
tenant_id: ID of the tenant
lookup_criteria: Dictionary of key-value pairs to match in metadata
metadata_type: Which metadata to search in ('user_metadata' or 'system_metadata')
Returns:
Tuple of (Document, DocumentVersion) if found
Raises:
ValueError: If invalid metadata_type provided
EveAIException: If lookup fails
"""
if metadata_type not in ['user_metadata', 'system_metadata']:
raise ValueError(f"Invalid metadata_type: {metadata_type}")
try:
# Query for the latest document version matching the criteria
query = (db.session.query(Document, DocumentVersion)
.join(DocumentVersion)
.filter(Document.id == DocumentVersion.doc_id)
.order_by(DocumentVersion.id.desc()))
# Add metadata filtering using PostgreSQL JSONB operators
metadata_field = getattr(DocumentVersion, metadata_type)
for key, value in lookup_criteria.items():
query = query.filter(metadata_field[key].astext == str(value))
# Get first result
result = query.first()
if not result:
raise EveAIException(
f"No document found matching criteria in {metadata_type}",
status_code=404
)
return result
except SQLAlchemyError as e:
current_app.logger.error(f'Database error during document lookup for tenant {tenant_id}: {e}')
raise EveAIException(
"Database error during document lookup",
status_code=500
)
except Exception as e:
current_app.logger.error(f'Error during document lookup for tenant {tenant_id}: {e}')
raise EveAIException(
"Error during document lookup",
status_code=500
)
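An illustrative lookup by user metadata; the criteria are invented, and as the code notes, the filter relies on PostgreSQL JSONB operators:

doc, doc_vers = lookup_document(
    tenant_id=1,
    lookup_criteria={'external_ref': 'INV-2024-001'},
    metadata_type='user_metadata',
)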

View File

@@ -10,8 +10,12 @@ class EveAIException(Exception):
def to_dict(self):
rv = dict(self.payload or ())
rv['message'] = self.message
rv['error'] = self.__class__.__name__
return rv
def __str__(self):
return self.message # Return the message when the exception is converted to a string
class EveAIInvalidLanguageException(EveAIException):
"""Raised when an invalid language is provided"""
@@ -41,3 +45,94 @@ class EveAINoLicenseForTenant(EveAIException):
super().__init__(message, status_code, payload)
class EveAITenantNotFound(EveAIException):
"""Raised when a tenant is not found"""
def __init__(self, tenant_id, status_code=400, payload=None):
self.tenant_id = tenant_id
message = f"Tenant {tenant_id} not found"
super().__init__(message, status_code, payload)
class EveAITenantInvalid(EveAIException):
"""Raised when a tenant is invalid"""
def __init__(self, tenant_id, status_code=400, payload=None):
self.tenant_id = tenant_id
# Construct the message dynamically
message = f"Tenant with ID '{tenant_id}' is not valid. Please contact the System Administrator."
super().__init__(message, status_code, payload)
class EveAINoActiveLicense(EveAIException):
"""Raised when a tenant has no active licenses"""
def __init__(self, tenant_id, status_code=400, payload=None):
self.tenant_id = tenant_id
# Construct the message dynamically
message = f"Tenant with ID '{tenant_id}' has no active licenses. Please contact the System Administrator."
super().__init__(message, status_code, payload)
class EveAIInvalidCatalog(EveAIException):
"""Raised when a catalog cannot be found"""
def __init__(self, tenant_id, catalog_id, status_code=400, payload=None):
self.tenant_id = tenant_id
self.catalog_id = catalog_id
# Construct the message dynamically
message = f"Tenant with ID '{tenant_id}' has no valid catalog with ID {catalog_id}. Please contact the System Administrator."
super().__init__(message, status_code, payload)
class EveAIInvalidProcessor(EveAIException):
"""Raised when no valid processor can be found for a given Catalog ID"""
def __init__(self, tenant_id, catalog_id, file_type, status_code=400, payload=None):
self.tenant_id = tenant_id
self.catalog_id = catalog_id
self.file_type = file_type
# Construct the message dynamically
message = (f"Tenant with ID '{tenant_id}' has no valid {file_type} processor for catalog with ID {catalog_id}. "
f"Please contact the System Administrator.")
super().__init__(message, status_code, payload)
class EveAIInvalidDocument(EveAIException):
"""Raised when a tenant has no document with given ID"""
def __init__(self, tenant_id, document_id, status_code=400, payload=None):
self.tenant_id = tenant_id
self.document_id = document_id
# Construct the message dynamically
message = f"Tenant with ID '{tenant_id}' has no document with ID {document_id}."
super().__init__(message, status_code, payload)
class EveAIInvalidDocumentVersion(EveAIException):
"""Raised when a tenant has no document version with given ID"""
def __init__(self, tenant_id, document_version_id, status_code=400, payload=None):
self.tenant_id = tenant_id
self.document_version_id = document_version_id
# Construct the message dynamically
message = f"Tenant with ID '{tenant_id}' has no document version with ID {document_version_id}."
super().__init__(message, status_code, payload)
class EveAISocketInputException(EveAIException):
"""Raised when a socket call receives an invalid payload"""
def __init__(self, message, status_code=400, payload=None):
super().__init__(message, status_code, payload)
class EveAIInvalidEmbeddingModel(EveAIException):
"""Raised when no or an invalid embedding model is provided in the catalog"""
def __init__(self, tenant_id, catalog_id, status_code=400, payload=None):
self.tenant_id = tenant_id
self.catalog_id = catalog_id
# Construct the message dynamically
message = f"Tenant with ID '{tenant_id}' has no or an invalid embedding model in Catalog {catalog_id}."
super().__init__(message, status_code, payload)

View File

@@ -0,0 +1,112 @@
# common/utils/execution_progress.py
from datetime import datetime as dt, timezone as tz
from typing import Generator
from redis import Redis, RedisError
import json
from flask import current_app
class ExecutionProgressTracker:
"""Tracks progress of specialist executions using Redis"""
def __init__(self):
try:
redis_url = current_app.config['SPECIALIST_EXEC_PUBSUB']
self.redis = Redis.from_url(redis_url, socket_timeout=5)
# Test the connection
self.redis.ping()
self.expiry = 3600 # 1 hour expiry
except RedisError as e:
current_app.logger.error(f"Failed to connect to Redis: {str(e)}")
raise
except Exception as e:
current_app.logger.error(f"Unexpected error during Redis initialization: {str(e)}")
raise
def _get_key(self, execution_id: str) -> str:
return f"specialist_execution:{execution_id}"
def send_update(self, ctask_id: str, processing_type: str, data: dict):
"""Send an update about execution progress"""
try:
key = self._get_key(ctask_id)
# First verify Redis is still connected
try:
self.redis.ping()
except RedisError:
current_app.logger.error("Lost Redis connection. Attempting to reconnect...")
self.__init__() # Reinitialize connection
update = {
'processing_type': processing_type,
'data': data,
'timestamp': dt.now(tz=tz.utc)
}
# Log initial state
try:
orig_len = self.redis.llen(key)
# Try to serialize the update and check the result
try:
serialized_update = json.dumps(update, default=str) # Add default handler for datetime
except TypeError as e:
current_app.logger.error(f"Failed to serialize update: {str(e)}")
raise
# Store update in list with pipeline for atomicity
with self.redis.pipeline() as pipe:
pipe.rpush(key, serialized_update)
pipe.publish(key, serialized_update)
pipe.expire(key, self.expiry)
results = pipe.execute()
new_len = self.redis.llen(key)
if new_len <= orig_len:
current_app.logger.error(
f"List length did not increase as expected. Original: {orig_len}, New: {new_len}")
except RedisError as e:
current_app.logger.error(f"Redis operation failed: {str(e)}")
raise
except Exception as e:
current_app.logger.error(f"Unexpected error in send_update: {str(e)}, type: {type(e)}")
raise
def get_updates(self, ctask_id: str) -> Generator[str, None, None]:
key = self._get_key(ctask_id)
pubsub = self.redis.pubsub()
pubsub.subscribe(key)
try:
# First yield any existing updates
length = self.redis.llen(key)
if length > 0:
updates = self.redis.lrange(key, 0, -1)
for update in updates:
update_data = json.loads(update.decode('utf-8'))
# Use processing_type for the event
yield f"event: {update_data['processing_type']}\n"
yield f"data: {json.dumps(update_data)}\n\n"
# Then listen for new updates
while True:
message = pubsub.get_message(timeout=30) # message['type'] is Redis pub/sub type
if message is None:
yield ": keepalive\n\n"
continue
if message['type'] == 'message': # This is Redis pub/sub type
update_data = json.loads(message['data'].decode('utf-8'))
yield f"data: {message['data'].decode('utf-8')}\n\n"
# Check processing_type for completion
if update_data['processing_type'] in ['Task Complete', 'Task Error']:
break
finally:
pubsub.unsubscribe()
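
A minimal consumer sketch, assuming a Flask blueprint and that the caller already knows the execution ID; the route and blueprint names are hypothetical:

# Hypothetical SSE endpoint streaming progress from ExecutionProgressTracker.
from flask import Blueprint, Response
from common.utils.execution_progress import ExecutionProgressTracker

sse_bp = Blueprint('sse_bp', __name__)

@sse_bp.route('/executions/<execution_id>/events')
def execution_events(execution_id):
    tracker = ExecutionProgressTracker()
    # get_updates yields pre-formatted "event:"/"data:" lines plus ": keepalive"
    # comments, so the generator can be handed to Response as-is.
    return Response(tracker.get_updates(execution_id), mimetype='text/event-stream')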

View File

@@ -24,9 +24,6 @@ def mw_before_request():
if not tenant_id:
raise Exception('Cannot switch schema for tenant: no tenant defined in session')
for role in current_user.roles:
current_app.logger.debug(f'In middleware: User {current_user.email} has role {role.name}')
# user = User.query.get(current_user.id)
if current_user.has_role('Super User') or current_user.tenant_id == tenant_id:
Database(tenant_id).switch_schema()

View File

@@ -33,6 +33,9 @@ class MinioClient:
def generate_object_name(self, document_id, language, version_id, filename):
return f"{document_id}/{language}/{version_id}/{filename}"
def generate_asset_name(self, asset_version_id, file_name, content_type):
return f"assets/{asset_version_id}/{file_name}.{content_type}"
def upload_document_file(self, tenant_id, document_id, language, version_id, filename, file_data):
bucket_name = self.generate_bucket_name(tenant_id)
object_name = self.generate_object_name(document_id, language, version_id, filename)
@@ -54,9 +57,27 @@ class MinioClient:
except S3Error as err:
raise Exception(f"Error occurred while uploading file: {err}")
def download_document_file(self, tenant_id, document_id, language, version_id, filename):
bucket_name = self.generate_bucket_name(tenant_id)
object_name = self.generate_object_name(document_id, language, version_id, filename)
def upload_asset_file(self, bucket_name, asset_version_id, file_name, file_type, file_data):
object_name = self.generate_asset_name(asset_version_id, file_name, file_type)
try:
if isinstance(file_data, FileStorage):
file_data = file_data.read()
elif isinstance(file_data, io.BytesIO):
file_data = file_data.getvalue()
elif isinstance(file_data, str):
file_data = file_data.encode('utf-8')
elif not isinstance(file_data, bytes):
raise TypeError('Unsupported file type. Expected FileStorage, BytesIO, str, or bytes.')
self.client.put_object(
bucket_name, object_name, io.BytesIO(file_data), len(file_data)
)
return object_name, len(file_data)
except S3Error as err:
raise Exception(f"Error occurred while uploading asset: {err}")
def download_document_file(self, tenant_id, bucket_name, object_name):
try:
response = self.client.get_object(bucket_name, object_name)
return response.read()

View File

@@ -0,0 +1,30 @@
from flask_security import current_user
def set_logging_information(obj, timestamp):
obj.created_at = timestamp
obj.updated_at = timestamp
user_id = get_current_user_id()
if user_id:
obj.created_by = user_id
obj.updated_by = user_id
def update_logging_information(obj, timestamp):
obj.updated_at = timestamp
user_id = get_current_user_id()
if user_id:
obj.updated_by = user_id
def get_current_user_id():
try:
if current_user and current_user.is_authenticated:
return current_user.id
else:
return None
except Exception:
# This will catch any errors if current_user is not available (e.g., in API context)
return None
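
A short sketch of the intended call pattern; 'Document' stands in for any hypothetical SQLAlchemy model carrying the audit columns set above:

from datetime import datetime as dt, timezone as tz

now = dt.now(tz=tz.utc)
doc = Document(title="Example")          # hypothetical model with created_*/updated_* columns
set_logging_information(doc, now)        # stamps created_at/by and updated_at/by
# ... later, when the record is edited ...
update_logging_information(doc, dt.now(tz=tz.utc))  # stamps updated_at/by only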

View File

@@ -1,236 +1,40 @@
import os
from typing import Dict, Any, Optional, Tuple
import langcodes
from flask import current_app
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_core.pydantic_v1 import BaseModel, Field
from typing import List, Any, Iterator
from collections.abc import MutableMapping
from openai import OpenAI
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
from portkey_ai.langchain.portkey_langchain_callback_handler import LangchainCallbackHandler
from langchain_core.language_models import BaseChatModel
from common.langchain.llm_metrics_handler import LLMMetricsHandler
from common.langchain.tracked_openai_embeddings import TrackedOpenAIEmbeddings
from common.langchain.tracked_transcribe import tracked_transcribe
from common.models.document import EmbeddingSmallOpenAI, EmbeddingLargeOpenAI
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_mistralai import ChatMistralAI
from flask import current_app
from common.eveai_model.tracked_mistral_embeddings import TrackedMistralAIEmbeddings
from common.langchain.tracked_transcription import TrackedOpenAITranscription
from common.models.user import Tenant
from config.model_config import MODEL_CONFIG
from common.utils.business_event_context import current_event
from common.extensions import template_manager
from common.models.document import EmbeddingMistral
from common.utils.eveai_exceptions import EveAITenantNotFound, EveAIInvalidEmbeddingModel
from crewai import LLM
embedding_llm_model_cache: Dict[Tuple[str, float], BaseChatModel] = {}
crewai_llm_model_cache: Dict[Tuple[str, float], LLM] = {}
llm_metrics_handler = LLMMetricsHandler()
class CitedAnswer(BaseModel):
"""Default docstring - to be replaced with actual prompt"""
answer: str = Field(
...,
description="The answer to the user question, based on the given sources",
)
citations: List[int] = Field(
...,
description="The integer IDs of the SPECIFIC sources that were used to generate the answer"
)
insufficient_info: bool = Field(
False,  # Default value is set to False
description="A boolean indicating whether given sources were sufficient or not to generate the answer"
)
@classmethod
def set_language_prompt_template(cls, language_prompt):
cls.__doc__ = language_prompt
def create_language_template(template: str, language: str) -> str:
"""
Replace language placeholder in template with specified language
Args:
template: Template string with {language} placeholder
language: Language code to insert
Returns:
str: Template with language placeholder replaced
"""
class ModelVariables(MutableMapping):
def __init__(self, tenant: Tenant):
self.tenant = tenant
self._variables = self._initialize_variables()
self._embedding_model = None
self._llm = None
self._llm_no_rag = None
self._transcription_client = None
self._prompt_templates = {}
self._embedding_db_model = None
self.llm_metrics_handler = LLMMetricsHandler()
def _initialize_variables(self):
variables = {}
# We initialize the variables that are available knowing the tenant. For the others, we apply 'lazy loading'.
variables['k'] = self.tenant.es_k or 5
variables['similarity_threshold'] = self.tenant.es_similarity_threshold or 0.7
variables['RAG_temperature'] = self.tenant.chat_RAG_temperature or 0.3
variables['no_RAG_temperature'] = self.tenant.chat_no_RAG_temperature or 0.5
variables['embed_tuning'] = self.tenant.embed_tuning or False
variables['rag_tuning'] = self.tenant.rag_tuning or False
variables['rag_context'] = self.tenant.rag_context or " "
# Set HTML Chunking Variables
variables['html_tags'] = self.tenant.html_tags
variables['html_end_tags'] = self.tenant.html_end_tags
variables['html_included_elements'] = self.tenant.html_included_elements
variables['html_excluded_elements'] = self.tenant.html_excluded_elements
variables['html_excluded_classes'] = self.tenant.html_excluded_classes
# Set Chunk Size variables
variables['min_chunk_size'] = self.tenant.min_chunk_size
variables['max_chunk_size'] = self.tenant.max_chunk_size
# Set model providers
variables['embedding_provider'], variables['embedding_model'] = self.tenant.embedding_model.rsplit('.', 1)
variables['llm_provider'], variables['llm_model'] = self.tenant.llm_model.rsplit('.', 1)
variables["templates"] = current_app.config['PROMPT_TEMPLATES'][(f"{variables['llm_provider']}."
f"{variables['llm_model']}")]
current_app.logger.info(f"Loaded prompt templates: \n")
current_app.logger.info(f"{variables['templates']}")
# Set model-specific configurations
model_config = MODEL_CONFIG.get(variables['llm_provider'], {}).get(variables['llm_model'], {})
variables.update(model_config)
variables['annotation_chunk_length'] = current_app.config['ANNOTATION_TEXT_CHUNK_LENGTH'][self.tenant.llm_model]
if variables['tool_calling_supported']:
variables['cited_answer_cls'] = CitedAnswer
variables['max_compression_duration'] = current_app.config['MAX_COMPRESSION_DURATION']
variables['max_transcription_duration'] = current_app.config['MAX_TRANSCRIPTION_DURATION']
variables['compression_cpu_limit'] = current_app.config['COMPRESSION_CPU_LIMIT']
variables['compression_process_delay'] = current_app.config['COMPRESSION_PROCESS_DELAY']
return variables
@property
def embedding_model(self):
api_key = os.getenv('OPENAI_API_KEY')
model = self._variables['embedding_model']
self._embedding_model = TrackedOpenAIEmbeddings(api_key=api_key,
model=model,
)
self._embedding_db_model = EmbeddingSmallOpenAI \
if model == 'text-embedding-3-small' \
else EmbeddingLargeOpenAI
return self._embedding_model
@property
def llm(self):
api_key = self.get_api_key_for_llm()
self._llm = ChatOpenAI(api_key=api_key,
model=self._variables['llm_model'],
temperature=self._variables['RAG_temperature'],
callbacks=[self.llm_metrics_handler])
return self._llm
@property
def llm_no_rag(self):
api_key = self.get_api_key_for_llm()
self._llm_no_rag = ChatOpenAI(api_key=api_key,
model=self._variables['llm_model'],
temperature=self._variables['no_RAG_temperature'],
callbacks=[self.llm_metrics_handler])
return self._llm_no_rag
def get_api_key_for_llm(self):
if self._variables['llm_provider'] == 'openai':
api_key = os.getenv('OPENAI_API_KEY')
else: # self._variables['llm_provider'] == 'anthropic'
api_key = os.getenv('ANTHROPIC_API_KEY')
return api_key
@property
def transcription_client(self):
api_key = os.getenv('OPENAI_API_KEY')
self._transcription_client = OpenAI(api_key=api_key, )
self._variables['transcription_model'] = 'whisper-1'
return self._transcription_client
def transcribe(self, *args, **kwargs):
return tracked_transcribe(self._transcription_client, *args, **kwargs)
@property
def embedding_db_model(self):
if self._embedding_db_model is None:
self._embedding_db_model = self.get_embedding_db_model()
return self._embedding_db_model
def get_embedding_db_model(self):
current_app.logger.debug("In get_embedding_db_model")
if self._embedding_db_model is None:
self._embedding_db_model = EmbeddingSmallOpenAI \
if self._variables['embedding_model'] == 'text-embedding-3-small' \
else EmbeddingLargeOpenAI
current_app.logger.debug(f"Embedding DB Model: {self._embedding_db_model}")
return self._embedding_db_model
def get_prompt_template(self, template_name: str) -> str:
current_app.logger.info(f"Getting prompt template for {template_name}")
if template_name not in self._prompt_templates:
self._prompt_templates[template_name] = self._load_prompt_template(template_name)
return self._prompt_templates[template_name]
def _load_prompt_template(self, template_name: str) -> str:
# In the future, this method will make an API call to Portkey
# For now, we'll simulate it with a placeholder implementation
# You can replace this with your current prompt loading logic
return self._variables['templates'][template_name]
def __getitem__(self, key: str) -> Any:
current_app.logger.debug(f"ModelVariables: Getting {key}")
# Support older template names (suffix = _template)
if key.endswith('_template'):
key = key[:-len('_template')]
current_app.logger.debug(f"ModelVariables: Getting modified {key}")
if key == 'embedding_model':
return self.embedding_model
elif key == 'embedding_db_model':
return self.embedding_db_model
elif key == 'llm':
return self.llm
elif key == 'llm_no_rag':
return self.llm_no_rag
elif key == 'transcription_client':
return self.transcription_client
elif key in self._variables.get('prompt_templates', []):
return self.get_prompt_template(key)
return self._variables.get(key)
def __setitem__(self, key: str, value: Any) -> None:
self._variables[key] = value
def __delitem__(self, key: str) -> None:
del self._variables[key]
def __iter__(self) -> Iterator[str]:
return iter(self._variables)
def __len__(self):
return len(self._variables)
def get(self, key: str, default: Any = None) -> Any:
value = self.__getitem__(key)
return value if value is not None else default  # 'or' would drop valid falsy values like 0 or False
def update(self, **kwargs) -> None:
self._variables.update(kwargs)
def items(self):
return self._variables.items()
def keys(self):
return self._variables.keys()
def values(self):
return self._variables.values()
def select_model_variables(tenant):
model_variables = ModelVariables(tenant=tenant)
return model_variables
def create_language_template(template, language):
try:
full_language = langcodes.Language.make(language=language)
language_template = template.replace('{language}', full_language.display_name())
@@ -240,5 +44,244 @@ def create_language_template(template, language):
return language_template
def replace_variable_in_template(template, variable, value):
return template.replace(variable, value)
def replace_variable_in_template(template: str, variable: str, value: str) -> str:
"""
Replace a variable placeholder in template with specified value
Args:
template: Template string with variable placeholder
variable: Variable placeholder to replace (e.g. "{tenant_context}")
value: Value to insert
Returns:
str: Template with variable placeholder replaced
"""
return template.replace(variable, value or "")
def get_embedding_model_and_class(tenant_id, catalog_id, full_embedding_name="mistral.mistral-embed"):
"""
Retrieve the embedding model and embedding model class to store Embeddings
Args:
tenant_id: ID of the tenant
catalog_id: ID of the catalog
full_embedding_name: The full name of the embedding model: <provider>.<model>
Returns:
embedding_model, embedding_model_class
"""
embedding_provider, embedding_model_name = full_embedding_name.split('.')
# Calculate the embedding model to be used
if embedding_provider == "mistral":
api_key = current_app.config['MISTRAL_API_KEY']
embedding_model = TrackedMistralAIEmbeddings(
api_key=api_key,
model=embedding_model_name
)
else:
raise EveAIInvalidEmbeddingModel(tenant_id, catalog_id)
# Calculate the Embedding Model Class to be used to store embeddings
if embedding_model_name == "mistral-embed":
embedding_model_class = EmbeddingMistral
else:
raise EveAIInvalidEmbeddingModel(tenant_id, catalog_id)
return embedding_model, embedding_model_class
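
An illustrative call with the default model name; the embed_query call assumes TrackedMistralAIEmbeddings keeps the standard LangChain embeddings interface:

embedding_model, embedding_cls = get_embedding_model_and_class(tenant_id=1, catalog_id=7)
vector = embedding_model.embed_query("What does this catalog contain?")
# embedding_cls (EmbeddingMistral) is the model class used to persist the vectors.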
def get_embedding_llm(full_model_name='mistral.mistral-small-latest', temperature=0.3):
llm = embedding_llm_model_cache.get((full_model_name, temperature))
if not llm:
llm_provider, llm_model_name = full_model_name.split('.')
if llm_provider == "openai":
llm = ChatOpenAI(
api_key=current_app.config['OPENAI_API_KEY'],
model=llm_model_name,
temperature=temperature,
callbacks=[llm_metrics_handler]
)
elif llm_provider == "mistral":
llm = ChatMistralAI(
api_key=current_app.config['MISTRAL_API_KEY'],
model=llm_model_name,
temperature=temperature,
callbacks=[llm_metrics_handler]
)
embedding_llm_model_cache[(full_model_name, temperature)] = llm
return llm
def get_crewai_llm(full_model_name='mistral.mistral-large-latest', temperature=0.3):
llm = crewai_llm_model_cache.get((full_model_name, temperature))
if not llm:
llm_provider, llm_model_name = full_model_name.split('.')
crew_full_model_name = f"{llm_provider}/{llm_model_name}"
api_key = None
if llm_provider == "openai":
api_key = current_app.config['OPENAI_API_KEY']
elif llm_provider == "mistral":
api_key = current_app.config['MISTRAL_API_KEY']
llm = LLM(
model=crew_full_model_name,
temperature=temperature,
api_key=api_key
)
crewai_llm_model_cache[(full_model_name, temperature)] = llm
return llm
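
A sketch of wiring the cached LLM into a crewai Agent; role, goal, and backstory are placeholders:

from crewai import Agent

llm = get_crewai_llm('mistral.mistral-large-latest', temperature=0.3)
agent = Agent(
    role="Support Agent",             # placeholder
    goal="Answer product questions",  # placeholder
    backstory="You help customers of the company.",
    llm=llm,
)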
class ModelVariables:
"""Manages model-related variables and configurations"""
def __init__(self, tenant_id: int, variables: Dict[str, Any] = None):
"""
Initialize ModelVariables with tenant and optional template manager
Args:
tenant_id: Tenant instance
variables: Optional variables
"""
current_app.logger.info(f'Model variables initialized with tenant {tenant_id} and variables \n{variables}')
self.tenant_id = tenant_id
self._variables = variables if variables is not None else self._initialize_variables()
current_app.logger.info(f'Model _variables initialized to {self._variables}')
self._llm_instances = {}
self.llm_metrics_handler = LLMMetricsHandler()
self._transcription_model = None
def _initialize_variables(self) -> Dict[str, Any]:
"""Initialize the variables dictionary"""
variables = {}
tenant = Tenant.query.get(self.tenant_id)
if not tenant:
raise EveAITenantNotFound(self.tenant_id)
# Set model providers
variables['llm_provider'], variables['llm_model'] = tenant.llm_model.split('.')
variables['llm_full_model'] = tenant.llm_model
# Set model-specific configurations
model_config = MODEL_CONFIG.get(variables['llm_provider'], {}).get(variables['llm_model'], {})
variables.update(model_config)
# Additional configurations
variables['annotation_chunk_length'] = current_app.config['ANNOTATION_TEXT_CHUNK_LENGTH'][tenant.llm_model]
variables['max_compression_duration'] = current_app.config['MAX_COMPRESSION_DURATION']
variables['max_transcription_duration'] = current_app.config['MAX_TRANSCRIPTION_DURATION']
variables['compression_cpu_limit'] = current_app.config['COMPRESSION_CPU_LIMIT']
variables['compression_process_delay'] = current_app.config['COMPRESSION_PROCESS_DELAY']
return variables
@property
def annotation_chunk_length(self):
return self._variables['annotation_chunk_length']
@property
def max_compression_duration(self):
return self._variables['max_compression_duration']
@property
def max_transcription_duration(self):
return self._variables['max_transcription_duration']
@property
def compression_cpu_limit(self):
return self._variables['compression_cpu_limit']
@property
def compression_process_delay(self):
return self._variables['compression_process_delay']
def get_llm(self, temperature: float = 0.3, **kwargs) -> Any:
"""
Get an LLM instance with specific configuration
Args:
temperature: The temperature for the LLM
**kwargs: Additional configuration parameters
Returns:
An instance of the configured LLM
"""
cache_key = f"{temperature}_{hash(frozenset(kwargs.items()))}"
if cache_key not in self._llm_instances:
provider = self._variables['llm_provider']
model = self._variables['llm_model']
if provider == 'openai':
self._llm_instances[cache_key] = ChatOpenAI(
api_key=os.getenv('OPENAI_API_KEY'),
model=model,
temperature=temperature,
callbacks=[self.llm_metrics_handler],
**kwargs
)
elif provider == 'anthropic':
self._llm_instances[cache_key] = ChatAnthropic(
api_key=os.getenv('ANTHROPIC_API_KEY'),
model=current_app.config['ANTHROPIC_LLM_VERSIONS'][model],
temperature=temperature,
callbacks=[self.llm_metrics_handler],
**kwargs
)
else:
raise ValueError(f"Unsupported LLM provider: {provider}")
return self._llm_instances[cache_key]
@property
def transcription_model(self) -> TrackedOpenAITranscription:
"""Get the transcription model instance"""
if self._transcription_model is None:
api_key = os.getenv('OPENAI_API_KEY')
self._transcription_model = TrackedOpenAITranscription(
api_key=api_key,
model='whisper-1'
)
return self._transcription_model
# Remove the old transcription-related methods since they're now handled by TrackedOpenAITranscription
@property
def transcription_client(self):
raise DeprecationWarning("Use transcription_model instead")
def transcribe(self, *args, **kwargs):
raise DeprecationWarning("Use transcription_model.transcribe() instead")
def get_template(self, template_name: str, version: Optional[str] = None) -> str:
"""
Get a template for the tenant's configured LLM
Args:
template_name: Name of the template to retrieve
version: Optional specific version to retrieve
Returns:
The template content
"""
try:
template = template_manager.get_template(
self._variables['llm_full_model'],
template_name,
version
)
return template.content
except Exception as e:
current_app.logger.error(f"Error getting template {template_name}: {str(e)}")
# Fall back to old template loading if template_manager fails
if template_name in self._variables.get('templates', {}):
return self._variables['templates'][template_name]
raise
# Helper function to get model variables for a tenant
def get_model_variables(tenant_id: int) -> ModelVariables:
return ModelVariables(tenant_id=tenant_id)
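
A minimal usage sketch; the template name is hypothetical:

mv = get_model_variables(tenant_id=1)
llm = mv.get_llm(temperature=0.2)        # cached per (temperature, kwargs) combination
prompt = mv.get_template('rag_prompt')   # hypothetical template name
chunk_len = mv.annotation_chunk_length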

View File

@@ -6,7 +6,6 @@ def prefixed_url_for(endpoint, **values):
prefix = request.headers.get('X-Forwarded-Prefix', '')
scheme = request.headers.get('X-Forwarded-Proto', request.scheme)
host = request.headers.get('Host', request.host)
current_app.logger.debug(f'prefix: {prefix}, scheme: {scheme}, host: {host}')
external = values.pop('_external', False)
generated_url = url_for(endpoint, **values)

View File

@@ -1,4 +1,6 @@
import os
import sys
import gevent
import time
from flask import current_app
@@ -28,3 +30,17 @@ def sync_folder(file_path):
dir_fd = os.open(file_path, os.O_RDONLY)
os.fsync(dir_fd)
os.close(dir_fd)
def get_project_root():
"""Get the root directory of the project."""
# Use the module that's actually running (not this file)
module = sys.modules['__main__']
if hasattr(module, '__file__'):
# Get the path to the main module
main_path = os.path.abspath(module.__file__)
# Get the root directory (where the main module is located)
return os.path.dirname(main_path)
else:
# Fallback: use current working directory
return os.getcwd()

View File

@@ -0,0 +1,59 @@
import time
import threading
from contextlib import contextmanager
from functools import wraps
from prometheus_client import Counter, Histogram, Summary, start_http_server, Gauge
from flask import current_app, g, request, Flask
class EveAIMetrics:
"""
Central class for Prometheus metrics infrastructure.
This class initializes the Prometheus HTTP server and provides
shared functionality for metrics across components.
Component-specific metrics should be defined in their respective modules.
"""
def __init__(self, app: Flask = None):
self.app = app
self._metrics_server_started = False
if app is not None:
self.init_app(app)
def init_app(self, app: Flask):
"""Initialize metrics with Flask app and start Prometheus server"""
self.app = app
self._start_metrics_server()
def _start_metrics_server(self):
"""Start the Prometheus metrics HTTP server if not already running"""
if not self._metrics_server_started:
try:
metrics_port = self.app.config.get('PROMETHEUS_PORT', 8000)
start_http_server(metrics_port)
self.app.logger.info(f"Prometheus metrics server started on port {metrics_port}")
self._metrics_server_started = True
except Exception as e:
self.app.logger.error(f"Failed to start metrics server: {e}")
@staticmethod
def get_standard_buckets():
"""
Return the standard duration buckets for histogram metrics.
Components should use these for consistency across the system.
"""
return [0.1, 0.5, 1, 2.5, 5, 10, 15, 30, 60, 120, 240, 360, float('inf')]
@staticmethod
def sanitize_label_values(labels_dict):
"""
Convert all label values to strings as required by Prometheus.
Args:
labels_dict: Dictionary of label name to label value
Returns:
Dictionary with all values converted to strings
"""
return {k: str(v) if v is not None else "" for k, v in labels_dict.items()}
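
How a component might define its own metric on top of this infrastructure, per the class docstring; the metric name and label values are illustrative:

from prometheus_client import Histogram

SPECIALIST_DURATION = Histogram(
    'eveai_specialist_execution_seconds',   # hypothetical metric name
    'Duration of specialist executions in seconds',
    labelnames=['tenant_id', 'specialist_type'],
    buckets=EveAIMetrics.get_standard_buckets(),
)

labels = EveAIMetrics.sanitize_label_values({'tenant_id': 1, 'specialist_type': 'STANDARD_RAG'})
SPECIALIST_DURATION.labels(**labels).observe(1.8)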

View File

@@ -0,0 +1,78 @@
from pydantic import BaseModel, Field
from typing import Any, Dict, List, Optional, Type, Union
def flatten_pydantic_model(model: BaseModel, merge_strategy: Optional[Dict[str, str]] = None) -> Dict[str, Any]:
"""
Flattens a nested Pydantic model by bringing all attributes to the highest level.
:param model: Pydantic model instance to be flattened.
:param merge_strategy: Dictionary defining how to handle duplicate attributes.
:return: Flattened dictionary representation of the model.
"""
merge_strategy = merge_strategy or {}  # avoid a mutable default argument
flat_dict = {}
def recursive_flatten(obj: BaseModel, parent_key=""):
# Iterate the model directly: model_dump() converts nested models to plain dicts,
# which would stop the BaseModel checks below from ever matching.
for field_name, value in obj:
new_key = field_name # Maintain original field names
if isinstance(value, BaseModel):
# Recursively flatten nested models
recursive_flatten(value, new_key)
elif isinstance(value, list) and all(isinstance(i, BaseModel) for i in value):
# If it's a list of Pydantic models, flatten each element
for item in value:
recursive_flatten(item, new_key)
else:
if new_key in flat_dict and new_key in merge_strategy:
# Apply merge strategy
if merge_strategy[new_key] == "add":
if isinstance(flat_dict[new_key], list) and isinstance(value, list):
flat_dict[new_key] += value # Concatenate lists
elif isinstance(flat_dict[new_key], (int, float)) and isinstance(value, (int, float)):
flat_dict[new_key] += value # Sum numbers
elif isinstance(flat_dict[new_key], str) and isinstance(value, str):
flat_dict[new_key] += "\n" + value # Concatenate strings
elif merge_strategy[new_key] == "first":
pass # Keep the first occurrence
elif merge_strategy[new_key] == "last":
flat_dict[new_key] = value
else:
flat_dict[new_key] = value
recursive_flatten(model)
return flat_dict
def merge_dicts(base_dict: Dict[str, Any], new_data: Union[Dict[str, Any], BaseModel], merge_strategy: Dict[str, str]) \
-> Dict[str, Any]:
"""
Merges a Pydantic model (or dictionary) into an existing dictionary based on a merge strategy.
:param base_dict: The base dictionary to merge into.
:param new_data: The new Pydantic model or dictionary to merge.
:param merge_strategy: Dict defining how to merge duplicate attributes.
:return: Updated dictionary after merging.
"""
if isinstance(new_data, BaseModel):
new_data = flatten_pydantic_model(new_data) # Convert Pydantic model to dict
for key, value in new_data.items():
if key in base_dict and key in merge_strategy:
strategy = merge_strategy[key]
if strategy == "add":
if isinstance(base_dict[key], list) and isinstance(value, list):
base_dict[key] += value # Concatenate lists
elif isinstance(base_dict[key], (int, float)) and isinstance(value, (int, float)):
base_dict[key] += value # Sum numbers
elif isinstance(base_dict[key], str) and isinstance(value, str):
base_dict[key] += " " + value # Concatenate strings
elif strategy == "first":
pass # Keep the first occurrence (do nothing)
elif strategy == "last":
base_dict[key] = value # Always overwrite with latest value
else:
base_dict[key] = value # Add new field
return base_dict
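
A small example of the merge strategies at work; the models are made up for illustration:

from pydantic import BaseModel

class Inner(BaseModel):
    notes: str

class Outer(BaseModel):
    notes: str
    inner: Inner

flat = flatten_pydantic_model(Outer(notes="outer", inner=Inner(notes="inner")),
                              merge_strategy={"notes": "add"})
# {'notes': 'outer\ninner'} - the duplicate key is concatenated per the strategy

merged = merge_dicts({"count": 1}, {"count": 2}, merge_strategy={"count": "add"})
# {'count': 3}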

View File

@@ -1,19 +1,43 @@
from flask import session, current_app
from sqlalchemy import and_
from common.models.user import Tenant
from common.models.entitlements import License
from common.utils.database import Database
from common.utils.eveai_exceptions import EveAITenantNotFound, EveAITenantInvalid, EveAINoActiveLicense
from datetime import datetime as dt, timezone as tz
# Definition of Trigger Handlers
def set_tenant_session_data(sender, user, **kwargs):
current_app.logger.debug(f"Setting tenant session data for user {user.id}")
tenant = Tenant.query.filter_by(id=user.tenant_id).first()
session['tenant'] = tenant.to_dict()
session['default_language'] = tenant.default_language
session['default_embedding_model'] = tenant.embedding_model
session['default_llm_model'] = tenant.llm_model
def clear_tenant_session_data(sender, user, **kwargs):
session.pop('tenant', None)
session.pop('default_language', None)
session.pop('default_embedding_model', None)
session.pop('default_llm_model', None)
def is_valid_tenant(tenant_id):
if tenant_id == 1: # The 'root' tenant, is always valid
return True
tenant = Tenant.query.get(tenant_id)
if tenant is None:
raise EveAITenantNotFound(tenant_id)
Database(tenant).switch_schema()
if tenant.type == 'Inactive':
raise EveAITenantInvalid(tenant_id)
else:
current_date = dt.now(tz=tz.utc).date()
active_license = (License.query.filter_by(tenant_id=tenant_id)
.filter(and_(License.start_date <= current_date,
License.end_date >= current_date))
.one_or_none())
if not active_license:
raise EveAINoActiveLicense(tenant_id)
return True

View File

@@ -11,7 +11,7 @@ def confirm_token(token, expiration=3600):
try:
email = serializer.loads(token, salt=current_app.config['SECURITY_PASSWORD_SALT'], max_age=expiration)
except Exception as e:
current_app.logger.debug(f'Error confirming token: {e}')
current_app.logger.error(f'Error confirming token: {e}')
raise
return email
@@ -35,14 +35,11 @@ def generate_confirmation_token(email):
def send_confirmation_email(user):
current_app.logger.debug(f'Sending confirmation email to {user.email}')
if not test_smtp_connection():
raise Exception("Failed to connect to SMTP server")
token = generate_confirmation_token(user.email)
confirm_url = prefixed_url_for('security_bp.confirm_email', token=token, _external=True)
current_app.logger.debug(f'Confirmation URL: {confirm_url}')
html = render_template('email/activate.html', confirm_url=confirm_url)
subject = "Please confirm your email"
@@ -56,10 +53,8 @@ def send_confirmation_email(user):
def send_reset_email(user):
current_app.logger.debug(f'Sending reset email to {user.email}')
token = generate_reset_token(user.email)
reset_url = prefixed_url_for('security_bp.reset_password', token=token, _external=True)
current_app.logger.debug(f'Reset URL: {reset_url}')
html = render_template('email/reset_password.html', reset_url=reset_url)
subject = "Reset Your Password"
@@ -98,4 +93,3 @@ def test_smtp_connection():
except Exception as e:
current_app.logger.error(f"Failed to connect to SMTP server: {str(e)}")
return False

View File

@@ -4,7 +4,7 @@ from flask import Flask
def generate_api_key(prefix="EveAI-Chat"):
parts = [str(random.randint(1000, 9999)) for _ in range(5)]
parts = [str(random.randint(1000, 9999)) for _ in range(8)]
return f"{prefix}-{'-'.join(parts)}"

View File

@@ -0,0 +1,196 @@
from datetime import datetime as dt, timezone as tz
from typing import Optional, Dict, Any
from flask import current_app
from sqlalchemy.exc import SQLAlchemyError
from common.extensions import db, cache_manager
from common.models.interaction import (
Specialist, EveAIAgent, EveAITask, EveAITool
)
from common.utils.model_logging_utils import set_logging_information, update_logging_information
def initialize_specialist(specialist_id: int, specialist_type: str, specialist_version: str):
"""
Initialize an agentic specialist by creating all its components based on configuration.
Args:
specialist_id: ID of the specialist to initialize
specialist_type: Type of the specialist
specialist_version: Version of the specialist type to use
Raises:
ValueError: If specialist not found or invalid configuration
SQLAlchemyError: If database operations fail
"""
config = cache_manager.specialists_config_cache.get_config(specialist_type, specialist_version)
if not config:
raise ValueError(f"No configuration found for {specialist_type} version {specialist_version}")
if config['framework'] == 'langchain':
pass # Langchain does not require additional items to be initialized. All configuration is in the specialist.
specialist = Specialist.query.get(specialist_id)
if not specialist:
raise ValueError(f"Specialist with ID {specialist_id} not found")
if config['framework'] == 'crewai':
initialize_crewai_specialist(specialist, config)
def initialize_crewai_specialist(specialist: Specialist, config: Dict[str, Any]):
timestamp = dt.now(tz=tz.utc)
try:
# Initialize agents
if 'agents' in config:
for agent_config in config['agents']:
_create_agent(
specialist_id=specialist.id,
agent_type=agent_config['type'],
agent_version=agent_config['version'],
name=agent_config.get('name'),
description=agent_config.get('description'),
timestamp=timestamp
)
# Initialize tasks
if 'tasks' in config:
for task_config in config['tasks']:
_create_task(
specialist_id=specialist.id,
task_type=task_config['type'],
task_version=task_config['version'],
name=task_config.get('name'),
description=task_config.get('description'),
timestamp=timestamp
)
# Initialize tools
if 'tools' in config:
for tool_config in config['tools']:
_create_tool(
specialist_id=specialist.id,
tool_type=tool_config['type'],
tool_version=tool_config['version'],
name=tool_config.get('name'),
description=tool_config.get('description'),
timestamp=timestamp
)
db.session.commit()
current_app.logger.info(f"Successfully initialized crewai specialist {specialist.id}")
except SQLAlchemyError as e:
db.session.rollback()
current_app.logger.error(f"Database error initializing crewai specialist {specialist.id}: {str(e)}")
raise
except Exception as e:
db.session.rollback()
current_app.logger.error(f"Error initializing crewai specialist {specialist.id}: {str(e)}")
raise
def _create_agent(
specialist_id: int,
agent_type: str,
agent_version: str,
name: Optional[str] = None,
description: Optional[str] = None,
timestamp: Optional[dt] = None
) -> EveAIAgent:
"""Create an agent with the given configuration."""
if timestamp is None:
timestamp = dt.now(tz=tz.utc)
# Get agent configuration from cache
agent_config = cache_manager.agents_config_cache.get_config(agent_type, agent_version)
agent = EveAIAgent(
specialist_id=specialist_id,
name=name or agent_config.get('name', agent_type),
description=description or agent_config.get('metadata', {}).get('description', ''),
type=agent_type,
type_version=agent_version,
role=None,
goal=None,
backstory=None,
tuning=False,
configuration=None,
arguments=None
)
set_logging_information(agent, timestamp)
db.session.add(agent)
current_app.logger.info(f"Created agent {agent.id} of type {agent_type}")
return agent
def _create_task(
specialist_id: int,
task_type: str,
task_version: str,
name: Optional[str] = None,
description: Optional[str] = None,
timestamp: Optional[dt] = None
) -> EveAITask:
"""Create a task with the given configuration."""
if timestamp is None:
timestamp = dt.now(tz=tz.utc)
# Get task configuration from cache
task_config = cache_manager.tasks_config_cache.get_config(task_type, task_version)
task = EveAITask(
specialist_id=specialist_id,
name=name or task_config.get('name', task_type),
description=description or task_config.get('metadata', {}).get('description', ''),
type=task_type,
type_version=task_version,
task_description=None,
expected_output=None,
tuning=False,
configuration=None,
arguments=None,
context=None,
asynchronous=False,
)
set_logging_information(task, timestamp)
db.session.add(task)
current_app.logger.info(f"Created task {task.id} of type {task_type}")
return task
def _create_tool(
specialist_id: int,
tool_type: str,
tool_version: str,
name: Optional[str] = None,
description: Optional[str] = None,
timestamp: Optional[dt] = None
) -> EveAITool:
"""Create a tool with the given configuration."""
if timestamp is None:
timestamp = dt.now(tz=tz.utc)
# Get tool configuration from cache
tool_config = cache_manager.tools_config_cache.get_config(tool_type, tool_version)
tool = EveAITool(
specialist_id=specialist_id,
name=name or tool_config.get('name', tool_type),
description=description or tool_config.get('metadata', {}).get('description', ''),
type=tool_type,
type_version=tool_version,
tuning=False,
configuration=None,
arguments=None,
)
set_logging_information(tool, timestamp)
db.session.add(tool)
current_app.logger.info(f"Created tool {tool.id} of type {tool_type}")
return tool
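
The expected call once a specialist row exists; the ID, type, and version below are placeholders that must match entries in the specialists config cache:

initialize_specialist(
    specialist_id=42,                     # placeholder ID
    specialist_type='SPIN_SPECIALIST',    # must be a known specialist type
    specialist_version='1.0.0',           # must be a known version of that type
)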

View File

@@ -0,0 +1,47 @@
import time
from redis import Redis
from common.extensions import cache_manager
def perform_startup_actions(app):
perform_startup_invalidation(app)
def perform_startup_invalidation(app):
"""
Perform cache invalidation only once during startup, using a persistent marker
(also called a flag or semaphore - see docs). A lock combined with the marker
ensures invalidation happens exactly once per deployment.
"""
redis_client = Redis.from_url(app.config['REDIS_BASE_URI'])
startup_time = int(time.time())
marker_key = 'startup_invalidation_completed'
lock_key = 'startup_invalidation_lock'
try:
# First try to get the lock
lock = redis_client.lock(lock_key, timeout=30)
if lock.acquire(blocking=False):
try:
# Check if invalidation was already performed
if not redis_client.get(marker_key):
# Perform invalidation
cache_manager.invalidate_region('eveai_config')
cache_manager.invalidate_region('eveai_chat_workers')
redis_client.setex(marker_key, 180, str(startup_time))
app.logger.info("Startup cache invalidation completed")
else:
app.logger.info("Startup cache invalidation already performed")
finally:
lock.release()
else:
app.logger.info("Another process is handling startup invalidation")
except Exception as e:
app.logger.error(f"Error during startup invalidation: {e}")
# In case of error, we don't want to block the application startup
pass

View File

@@ -0,0 +1,112 @@
from typing import List, Union
import re
class StringListConverter:
"""Utility class for converting between comma-separated strings and lists"""
@staticmethod
def string_to_list(input_string: Union[str, None], allow_empty: bool = True) -> List[str]:
"""
Convert a comma-separated string to a list of strings.
Args:
input_string: Comma-separated string to convert
allow_empty: If True, returns empty list for None/empty input
If False, raises ValueError for None/empty input
Returns:
List of stripped strings
Raises:
ValueError: If input is None/empty and allow_empty is False
"""
if not input_string:
if allow_empty:
return []
raise ValueError("Input string cannot be None or empty")
return [item.strip() for item in input_string.split(',') if item.strip()]
@staticmethod
def list_to_string(input_list: Union[List[str], None], allow_empty: bool = True) -> str:
"""
Convert a list of strings to a comma-separated string.
Args:
input_list: List of strings to convert
allow_empty: If True, returns empty string for None/empty input
If False, raises ValueError for None/empty input
Returns:
Comma-separated string
Raises:
ValueError: If input is None/empty and allow_empty is False
"""
if not input_list:
if allow_empty:
return ''
raise ValueError("Input list cannot be None or empty")
return ', '.join(str(item).strip() for item in input_list)
@staticmethod
def validate_format(input_string: str,
allowed_chars: str = r'a-zA-Z0-9_\-',
min_length: int = 1,
max_length: int = 50) -> bool:
"""
Validate the format of items in a comma-separated string.
Args:
input_string: String to validate
allowed_chars: String of allowed characters (for regex pattern)
min_length: Minimum length for each item
max_length: Maximum length for each item
Returns:
bool: True if format is valid, False otherwise
"""
if not input_string:
return False
# Create regex pattern for individual items
pattern = f'^[{allowed_chars}]{{{min_length},{max_length}}}$'
try:
# Convert to list and check each item
items = StringListConverter.string_to_list(input_string)
return all(bool(re.match(pattern, item)) for item in items)
except Exception:
return False
@staticmethod
def validate_and_convert(input_string: str,
allowed_chars: str = r'a-zA-Z0-9_\-',
min_length: int = 1,
max_length: int = 50) -> List[str]:
"""
Validate and convert a comma-separated string to a list.
Args:
input_string: String to validate and convert
allowed_chars: String of allowed characters (for regex pattern)
min_length: Minimum length for each item
max_length: Maximum length for each item
Returns:
List of validated and converted strings
Raises:
ValueError: If input string format is invalid
"""
if not StringListConverter.validate_format(
input_string, allowed_chars, min_length, max_length
):
raise ValueError(
f"Invalid format. Items must be {min_length}-{max_length} characters "
f"long and contain only these characters: {allowed_chars}"
)
return StringListConverter.string_to_list(input_string)
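
Typical round-trips, for illustration:

tags = StringListConverter.string_to_list("alpha, beta , gamma")
# ['alpha', 'beta', 'gamma'] - items are stripped, empties dropped
csv = StringListConverter.list_to_string(tags)
# 'alpha, beta, gamma'
StringListConverter.validate_and_convert("ok_1,ok-2")   # ['ok_1', 'ok-2']
StringListConverter.validate_and_convert("bad value!")  # raises ValueError (space and '!')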

View File

@@ -0,0 +1,60 @@
from dataclasses import dataclass
from typing import Optional
from datetime import datetime, timezone
from flask_jwt_extended import decode_token, verify_jwt_in_request
from flask import current_app
@dataclass
class TokenValidationResult:
"""Clean, simple validation result"""
is_valid: bool
tenant_id: Optional[int] = None
error_message: Optional[str] = None
class TokenValidator:
"""Simplified token validator focused on JWT validation"""
def validate_token(self, token: str) -> TokenValidationResult:
"""
Validate JWT token
Args:
token: The JWT token to validate
Returns:
TokenValidationResult with validation status and tenant_id if valid
"""
try:
# Decode and validate token
decoded_token = decode_token(token)
# Extract tenant_id from token subject
tenant_id = decoded_token.get('sub')
if not tenant_id:
return TokenValidationResult(
is_valid=False,
error_message="Missing tenant ID in token"
)
# Verify token timestamps (use an aware UTC timestamp; naive utcnow() would be
# interpreted as local time by .timestamp())
now = datetime.now(timezone.utc).timestamp()
if not (decoded_token.get('exp', 0) > now >= decoded_token.get('nbf', 0)):
return TokenValidationResult(
is_valid=False,
error_message="Token expired or not yet valid"
)
# Token is valid
return TokenValidationResult(
is_valid=True,
tenant_id=tenant_id
)
except Exception as e:
current_app.logger.error(f"Token validation error: {str(e)}")
return TokenValidationResult(
is_valid=False,
error_message=str(e)
)
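
A usage sketch in a request handler; token_from_request is a hypothetical variable, and the schema switch assumes the Database helper used elsewhere in this codebase:

validator = TokenValidator()
result = validator.validate_token(token_from_request)  # hypothetical token variable
if result.is_valid:
    Database(result.tenant_id).switch_schema()         # assumption: Database helper from common.utils.database
else:
    current_app.logger.warning(f"Rejected token: {result.error_message}")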

View File

@@ -44,7 +44,7 @@ def form_validation_failed(request, form):
for fieldName, errorMessages in form.errors.items():
for err in errorMessages:
flash(f"Error in {fieldName}: {err}", 'danger')
current_app.logger.debug(f"Error in {fieldName}: {err}")
current_app.logger.error(f"Error in {fieldName}: {err}")
def form_to_dict(form):

View File

@@ -0,0 +1,17 @@
version: "1.0.0"
name: "Email Content Agent"
role: >
Email Content Writer
goal: >
Craft a highly personalized email that resonates with the {end_user_role}'s context and identification (personal and
company if available).
{custom_goal}
backstory: >
You are an expert in writing compelling, personalized emails that capture the {end_user_role}'s attention and drive
engagement. You are perfectly multilingual, and can write the mail in the native language of the {end_user_role}.
{custom_backstory}
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "An Agent that writes engaging emails."
changes: "Initial version"

View File

@@ -0,0 +1,16 @@
version: "1.0.0"
name: "Email Engagement Agent"
role: >
Engagement Optimization Specialist {custom_role}
goal: >
You ensure that the email includes strong CTAs and strategically placed engagement hooks that encourage the
{end_user_role} to take immediate action. {custom_goal}
backstory: >
You specialize in optimizing content to ensure that it not only resonates with the recipient but also encourages them
to take the desired action.
{custom_backstory}
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "An Agent that ensures the email is engaging and lead to maximal desired action"
changes: "Initial version"

View File

@@ -0,0 +1,20 @@
version: "1.0.0"
name: "Identification Agent"
role: >
Identification Administrative force. {custom_role}
goal: >
You gather identification information about an end user and the company he or she works for by monitoring
conversations, and you advise on questions that help complete that administration. You are responsible for
completing the company's backend systems (like CRM, ERP, ...) with inputs from the end user in the conversation.
{custom_goal}
backstory: >
You are an administrative force for {company}, and very proficient in gathering information for the company's backend
systems. You do so by monitoring conversations between one of your colleagues (e.g. sales, finance, support, ...) and
an end user. You ask your colleagues to request additional information to complete your task.
{custom_backstory}
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "An Agent that gathers administrative information"
changes: "Initial version"

View File

@@ -0,0 +1,22 @@
version: "1.0.0"
name: "Rag Agent"
role: >
{company} Spokesperson. {custom_role}
goal: >
You get questions by a human correspondent, and give answers based on a given context, taking into account the history
of the current conversation. {custom_goal}
backstory: >
You are the primary contact for {company}. You are known as {name}, and can be addressed by this name, or simply as "you". You are
a very good communicator, and adapt to the style used by the human asking for information (e.g. formal or informal).
You always stay correct and polite, whatever happens. And you ensure no discriminating language is used.
You are perfectly multilingual in all known languages, and do your best to answer questions in {language}, whatever
language the context provided to you is in. You are participating in a conversation, not writing e.g. an email. Do not
include a salutation or closing greeting in your answer.
{custom_backstory}
full_model_name: "mistral.mistral-large-latest"
temperature: 0.3
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "An Agent that does RAG based on a user's question, RAG content & history"
changes: "Initial version"

View File

@@ -0,0 +1,26 @@
version: "1.0.0"
name: "Rag Communication Agent"
role: >
{company} Interaction Responsible. {custom_role}
goal: >
Your team has collected answers to a question asked. But it also created some additional questions to be asked. You
ensure the necessary answers are returned, and make an informed selection of the additional questions that can be
asked (combining them when appropriate), ensuring the human you're communicating to does not get overwhelmed.
{custom_goal}
backstory: >
You are the online communication expert for {company}. You handled a lot of online communications with both customers
and internal employees. You are a master in redacting one coherent reply in a conversation that includes all the
answers, and a selection of additional questions to be asked in a conversation. Although your backoffice team might
want to ask a myriad of questions, you understand that doesn't fit with the way humans communicate. You know how to
combine multiple related questions, and understand how to interweave the questions in the answers when related.
You are perfectly multilingual in all known languages, and do your best to answer questions in {language}, whatever
language the context provided to you is in. Also, ensure that questions asked do not contradict with the answers
given, or aren't obsolete given the answer provided.
You are participating in a conversation, not writing e.g. an email. Do not include a salutation or closing greeting
in your answer.
{custom_backstory}
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "An Agent that consolidates both answers and questions in a consistent reply"
changes: "Initial version"

View File

@@ -0,0 +1,22 @@
version: "1.0.0"
name: "SPIN Sales Assistant"
role: >
Sales Assistant for {company} on {products}. {custom_role}
goal: >
Your main job is to help your sales specialist to analyze an ongoing conversation with a customer, and detect
SPIN-related information. {custom_goal}
backstory: >
You are a sales assistant for {company} on {products}. You are known as {name}, and can be addressed by this name, or
simply as "you". You are trained to understand and analyse ongoing conversations. You are proficient in detecting
SPIN-related information in a conversation.
SPIN stands for:
- Situation information - Understanding the customer's current context
- Problem information - Uncovering challenges and pain points
- Implication information - Exploring consequences of those problems
- Need-payoff information - Helping customers realize value of solutions
{custom_backstory}
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "An Agent that detects SPIN information in an ongoing conversation"
changes: "Initial version"

View File

@@ -0,0 +1,25 @@
version: "1.0.0"
name: "SPIN Sales Specialist"
role: >
Sales Specialist for {company} on {products}. {custom_role}
goal: >
Your main job is to do sales using the SPIN selling methodology in a first conversation with a potential customer.
{custom_goal}
backstory: >
You are a sales specialist for {company} on {products}. You are known as {name}, and can be addressed by this name,
or simply as "you". You have an assistant that provides you with already detected SPIN-information in an ongoing conversation. You
decide on follow-up questions for more in-depth information to ensure we get the required information that may lead to
selling {products}.
SPIN stands for:
- Situation information - Understanding the customer's current context
- Problem information - Uncovering challenges and pain points
- Implication information - Exploring consequences of those problems
- Need-payoff information - Helping customers realize value of solutions
{custom_backstory}
You are acquainted with the following product information:
{product_information}
metadata:
author: "Josako"
date_added: "2025-01-08"
description: "An Agent that asks for Follow-up questions for SPIN-process"
changes: "Initial version"

View File

@@ -0,0 +1,18 @@
version: "1.0.0"
name: "Document Template"
configuration:
argument_definition:
name: "variable_defition"
type: "file"
description: "Yaml file defining the arguments in the Document Template."
required: True
content_markdown:
name: "content_markdown"
type: "str"
description: "Actual template file in markdown format."
required: True
metadata:
author: "Josako"
date_added: "2025-03-12"
description: "Asset that defines a template in markdown a specialist can process"
changes: "Initial version"

View File

@@ -1,3 +1,4 @@
import os
from os import environ, path
from datetime import timedelta
import redis
@@ -54,7 +55,6 @@ class Config(object):
# file upload settings
MAX_CONTENT_LENGTH = 50 * 1024 * 1024
UPLOAD_EXTENSIONS = ['.txt', '.pdf', '.png', '.jpg', '.jpeg', '.gif']
# supported languages
SUPPORTED_LANGUAGES = ['en', 'fr', 'nl', 'de', 'es']
@@ -63,14 +63,13 @@ class Config(object):
SUPPORTED_CURRENCIES = ['', '$']
# supported LLMs
SUPPORTED_EMBEDDINGS = ['openai.text-embedding-3-small', 'openai.text-embedding-3-large', 'mistral.mistral-embed']
SUPPORTED_LLMS = ['openai.gpt-4o', 'anthropic.claude-3-5-sonnet', 'openai.gpt-4o-mini']
# SUPPORTED_EMBEDDINGS = ['openai.text-embedding-3-small', 'openai.text-embedding-3-large', 'mistral.mistral-embed']
SUPPORTED_EMBEDDINGS = ['mistral.mistral-embed']
SUPPORTED_LLMS = ['openai.gpt-4o', 'anthropic.claude-3-5-sonnet', 'openai.gpt-4o-mini',
'mistral.mistral-large-latest', 'mistral.mistral-small-latest']
ANTHROPIC_LLM_VERSIONS = {'claude-3-5-sonnet': 'claude-3-5-sonnet-20240620', }
# Load prompt templates dynamically
PROMPT_TEMPLATES = {model: load_prompt_templates(model) for model in SUPPORTED_LLMS}
# Annotation text chunk length
ANNOTATION_TEXT_CHUNK_LENGTH = {
'openai.gpt-4o': 10000,
@@ -78,18 +77,12 @@ class Config(object):
'anthropic.claude-3-5-sonnet': 8000
}
# OpenAI API Keys
# Environment Loaders
OPENAI_API_KEY = environ.get('OPENAI_API_KEY')
# Groq API Keys
MISTRAL_API_KEY = environ.get('MISTRAL_API_KEY')
GROQ_API_KEY = environ.get('GROQ_API_KEY')
# Anthropic API Keys
ANTHROPIC_API_KEY = environ.get('ANTHROPIC_API_KEY')
# Portkey API Keys
PORTKEY_API_KEY = environ.get('PORTKEY_API_KEY')
# Celery settings
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
@@ -99,7 +92,7 @@ class Config(object):
# SocketIO settings
# SOCKETIO_ASYNC_MODE = 'threading'
SOCKETIO_ASYNC_MODE = 'gevent'
# SOCKETIO_ASYNC_MODE = 'gevent'
# Session Settings
SESSION_TYPE = 'redis'
@@ -111,6 +104,7 @@ class Config(object):
# JWT settings
JWT_SECRET_KEY = environ.get('JWT_SECRET_KEY')
JWT_ACCESS_TOKEN_EXPIRES = timedelta(hours=1) # Set token expiry to 1 hour
JWT_ACCESS_TOKEN_EXPIRES_DEPLOY = timedelta(hours=24) # Set long-lived token for deployment
# API Encryption
API_ENCRYPTION_KEY = environ.get('API_ENCRYPTION_KEY')
@@ -138,16 +132,16 @@ class Config(object):
MAIL_USE_SSL = True
MAIL_USERNAME = environ.get('MAIL_USERNAME')
MAIL_PASSWORD = environ.get('MAIL_PASSWORD')
MAIL_DEFAULT_SENDER = ('eveAI Admin', MAIL_USERNAME)
MAIL_DEFAULT_SENDER = ('Evie', MAIL_USERNAME)
# Email settings for API key notifications
PROMOTIONAL_IMAGE_URL = 'https://askeveai.com/wp-content/uploads/2024/07/Evie-Call-scaled.jpg' # Replace with your actual URL
# Langsmith settings
LANGCHAIN_TRACING_V2 = True
LANGCHAIN_ENDPOINT = 'https://api.smith.langchain.com'
LANGCHAIN_PROJECT = "eveai"
SUPPORTED_FILE_TYPES = ['pdf', 'html', 'md', 'txt', 'mp3', 'mp4', 'ogg', 'srt']
TENANT_TYPES = ['Active', 'Demo', 'Inactive', 'Test']
# The maximum number of seconds allowed for audio compression (to save resources)
@@ -159,6 +153,13 @@ class Config(object):
# Delay between compressing chunks in seconds
COMPRESSION_PROCESS_DELAY = 1
# WordPress Integration Settings
WORDPRESS_PROTOCOL = os.environ.get('WORDPRESS_PROTOCOL', 'http')
WORDPRESS_HOST = os.environ.get('WORDPRESS_HOST', 'host.docker.internal')
WORDPRESS_PORT = os.environ.get('WORDPRESS_PORT', '10003')
WORDPRESS_BASE_URL = f"{WORDPRESS_PROTOCOL}://{WORDPRESS_HOST}:{WORDPRESS_PORT}"
EXTERNAL_WORDPRESS_BASE_URL = 'localhost:10003'
class DevConfig(Config):
DEVELOPMENT = True
@@ -181,13 +182,23 @@ class DevConfig(Config):
# file upload settings
# UPLOAD_FOLDER = '/app/tenant_files'
# Redis Settings
REDIS_URL = 'redis'
REDIS_PORT = '6379'
REDIS_BASE_URI = f'redis://{REDIS_URL}:{REDIS_PORT}'
# Celery settings
# eveai_app Redis Settings
CELERY_BROKER_URL = 'redis://redis:6379/0'
CELERY_RESULT_BACKEND = 'redis://redis:6379/0'
CELERY_BROKER_URL = f'{REDIS_BASE_URI}/0'
CELERY_RESULT_BACKEND = f'{REDIS_BASE_URI}/0'
# eveai_chat Redis Settings
CELERY_BROKER_URL_CHAT = 'redis://redis:6379/3'
CELERY_RESULT_BACKEND_CHAT = 'redis://redis:6379/3'
CELERY_BROKER_URL_CHAT = f'{REDIS_BASE_URI}/3'
CELERY_RESULT_BACKEND_CHAT = f'{REDIS_BASE_URI}/3'
# eveai_chat_workers cache Redis Settings
CHAT_WORKER_CACHE_URL = f'{REDIS_BASE_URI}/4'
# specialist execution pub/sub Redis Settings
SPECIALIST_EXEC_PUBSUB = f'{REDIS_BASE_URI}/5'
# Unstructured settings
# UNSTRUCTURED_API_KEY = 'pDgCrXumYhM3CNvjvwV8msMldXC3uw'
@@ -195,13 +206,13 @@ class DevConfig(Config):
# UNSTRUCTURED_FULL_URL = 'https://flowitbv-16c4us0m.api.unstructuredapp.io/general/v0/general'
# SocketIO settings
SOCKETIO_MESSAGE_QUEUE = 'redis://redis:6379/1'
SOCKETIO_CORS_ALLOWED_ORIGINS = '*'
SOCKETIO_LOGGER = True
SOCKETIO_ENGINEIO_LOGGER = True
SOCKETIO_PING_TIMEOUT = 20000
SOCKETIO_PING_INTERVAL = 25000
SOCKETIO_MAX_IDLE_TIME = timedelta(minutes=60) # Changing this value ==> change maxConnectionDuration value in
# SOCKETIO_MESSAGE_QUEUE = f'{REDIS_BASE_URI}/1'
# SOCKETIO_CORS_ALLOWED_ORIGINS = '*'
# SOCKETIO_LOGGER = True
# SOCKETIO_ENGINEIO_LOGGER = True
# SOCKETIO_PING_TIMEOUT = 20000
# SOCKETIO_PING_INTERVAL = 25000
# SOCKETIO_MAX_IDLE_TIME = timedelta(minutes=60) # Changing this value ==> change maxConnectionDuration value in
# eveai-chat-widget.js
# Google Cloud settings
@@ -211,7 +222,7 @@ class DevConfig(Config):
GC_CRYPTO_KEY = 'envelope-encryption-key'
# Session settings
SESSION_REDIS = redis.from_url('redis://redis:6379/2')
SESSION_REDIS = redis.from_url(f'{REDIS_BASE_URI}/2')
# PATH settings
ffmpeg_path = '/usr/bin/ffmpeg'
@@ -278,18 +289,22 @@ class ProdConfig(Config):
# eveai_chat Redis Settings
CELERY_BROKER_URL_CHAT = f'{REDIS_BASE_URI}/3'
CELERY_RESULT_BACKEND_CHAT = f'{REDIS_BASE_URI}/3'
# eveai_chat_workers cache Redis Settings
CHAT_WORKER_CACHE_URL = f'{REDIS_BASE_URI}/4'
# specialist execution pub/sub Redis Settings
SPECIALIST_EXEC_PUBSUB = f'{REDIS_BASE_URI}/5'
# Session settings
SESSION_REDIS = redis.from_url(f'{REDIS_BASE_URI}/2')
# SocketIO settings
SOCKETIO_MESSAGE_QUEUE = f'{REDIS_BASE_URI}/1'
SOCKETIO_CORS_ALLOWED_ORIGINS = '*'
SOCKETIO_LOGGER = True
SOCKETIO_ENGINEIO_LOGGER = True
SOCKETIO_PING_TIMEOUT = 20000
SOCKETIO_PING_INTERVAL = 25000
SOCKETIO_MAX_IDLE_TIME = timedelta(minutes=60) # Changing this value ==> change maxConnectionDuration value in
# SOCKETIO_MESSAGE_QUEUE = f'{REDIS_BASE_URI}/1'
# SOCKETIO_CORS_ALLOWED_ORIGINS = '*'
# SOCKETIO_LOGGER = True
# SOCKETIO_ENGINEIO_LOGGER = True
# SOCKETIO_PING_TIMEOUT = 20000
# SOCKETIO_PING_INTERVAL = 25000
# SOCKETIO_MAX_IDLE_TIME = timedelta(minutes=60) # Changing this value ==> change maxConnectionDuration value in
# eveai-chat-widget.js
# Google Cloud settings

View File

@@ -1,4 +1,8 @@
import json
import os
from datetime import datetime as dt, timezone as tz
from flask import current_app
from graypy import GELFUDPHandler
import logging
import logging.config
@@ -9,24 +13,173 @@ GRAYLOG_PORT = int(os.environ.get('GRAYLOG_PORT', 12201))
env = os.environ.get('FLASK_ENV', 'development')
class CustomLogRecord(logging.LogRecord):
class TuningLogRecord(logging.LogRecord):
"""Extended LogRecord that handles both tuning and business event logging"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# Initialize extra fields after parent initialization
self._extra_fields = {}
self._is_tuning_log = False
self._tuning_type = None
self._tuning_tenant_id = None
self._tuning_catalog_id = None
self._tuning_specialist_id = None
self._tuning_retriever_id = None
self._tuning_processor_id = None
self.component = os.environ.get('COMPONENT_NAME', 'eveai_app')
def __setattr__(self, name, value):
if name not in {'event_type', 'tenant_id', 'trace_id', 'span_id', 'span_name', 'parent_span_id',
'document_version_id', 'chat_session_id', 'interaction_id', 'environment'}:
super().__setattr__(name, value)
def getMessage(self):
"""
Override getMessage to handle both string and dict messages
"""
msg = self.msg if isinstance(self.msg, str) else json.dumps(self.msg, default=str)
if self.args:
msg = msg % self.args
return msg
@property
def is_tuning_log(self):
return self._is_tuning_log
@is_tuning_log.setter
def is_tuning_log(self, value):
object.__setattr__(self, '_is_tuning_log', value)
@property
def tuning_type(self):
return self._tuning_type
@tuning_type.setter
def tuning_type(self, value):
object.__setattr__(self, '_tuning_type', value)
def get_tuning_data(self):
"""Get all tuning-related data if this is a tuning log"""
if not self._is_tuning_log:
return {}
return {
'is_tuning_log': self._is_tuning_log,
'tuning_type': self._tuning_type,
'tuning_tenant_id': self._tuning_tenant_id,
'tuning_catalog_id': self._tuning_catalog_id,
'tuning_specialist_id': self._tuning_specialist_id,
'tuning_retriever_id': self._tuning_retriever_id,
'tuning_processor_id': self._tuning_processor_id,
}
def set_tuning_data(self, tenant_id=None, catalog_id=None, specialist_id=None,
retriever_id=None, processor_id=None):
"""Set tuning-specific data"""
object.__setattr__(self, '_tuning_tenant_id', tenant_id)
object.__setattr__(self, '_tuning_catalog_id', catalog_id)
object.__setattr__(self, '_tuning_specialist_id', specialist_id)
object.__setattr__(self, '_tuning_retriever_id', retriever_id)
object.__setattr__(self, '_tuning_processor_id', processor_id)
def custom_log_record_factory(*args, **kwargs):
record = CustomLogRecord(*args, **kwargs)
return record
class TuningFormatter(logging.Formatter):
"""Universal formatter for all tuning logs"""
def __init__(self, fmt=None, datefmt=None):
super().__init__(fmt or '%(asctime)s [%(levelname)s] %(name)s: %(message)s',
datefmt or '%Y-%m-%d %H:%M:%S')
def format(self, record):
# First format with the default formatter to handle basic fields
formatted_msg = super().format(record)
# If this is a tuning log, add the additional context
if getattr(record, 'is_tuning_log', False):
try:
identifiers = []
if hasattr(record, 'tenant_id') and record.tenant_id:
identifiers.append(f"Tenant: {record.tenant_id}")
if hasattr(record, 'catalog_id') and record.catalog_id:
identifiers.append(f"Catalog: {record.catalog_id}")
if hasattr(record, 'processor_id') and record.processor_id:
identifiers.append(f"Processor: {record.processor_id}")
formatted_msg = (
f"{formatted_msg}\n"
f"[TUNING {record.tuning_type}] [{' | '.join(identifiers)}]"
)
if hasattr(record, 'tuning_data') and record.tuning_data:
formatted_msg += f"\nData: {json.dumps(record.tuning_data, indent=2)}"
except Exception as e:
return f"{formatted_msg} (Error formatting tuning data: {str(e)})"
return formatted_msg
class GraylogFormatter(logging.Formatter):
"""Maintains existing Graylog formatting while adding tuning fields"""
def format(self, record):
if getattr(record, 'is_tuning_log', False):
# Add tuning-specific fields to Graylog
record.tuning_fields = {
'is_tuning_log': True,
'tuning_type': record.tuning_type,
'tenant_id': record.tenant_id,
'catalog_id': record.catalog_id,
'specialist_id': record.specialist_id,
'retriever_id': record.retriever_id,
'processor_id': record.processor_id,
}
return super().format(record)
class TuningLogger:
"""Helper class to manage tuning logs with consistent structure"""
def __init__(self, logger_name, tenant_id=None, catalog_id=None, specialist_id=None, retriever_id=None, processor_id=None):
self.logger = logging.getLogger(logger_name)
self.tenant_id = tenant_id
self.catalog_id = catalog_id
self.specialist_id = specialist_id
self.retriever_id = retriever_id
self.processor_id = processor_id
def log_tuning(self, tuning_type: str, message: str, data=None, level=logging.DEBUG):
"""Log a tuning event with structured data"""
try:
# Create a standard LogRecord for tuning
record = logging.LogRecord(
name=self.logger.name,
level=level,
pathname='',
lineno=0,
msg=message,
args=(),
exc_info=None
)
# Add tuning-specific attributes
record.is_tuning_log = True
record.tuning_type = tuning_type
record.tenant_id = self.tenant_id
record.catalog_id = self.catalog_id
record.specialist_id = self.specialist_id
record.retriever_id = self.retriever_id
record.processor_id = self.processor_id
if data:
record.tuning_data = data
# Process the record
self.logger.handle(record)
except Exception as e:
fallback_logger = logging.getLogger('eveai_workers')
fallback_logger.exception(f"Failed to log tuning message: {str(e)}")
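
TuningLogger hand-builds a LogRecord and pushes it through logger.handle(), so the identifiers travel with the record no matter which handler formats it. A minimal usage sketch (the identifier values are placeholders):

```python
# Placeholder ids for illustration only.
tuning = TuningLogger('tuning', tenant_id=7, catalog_id=3, retriever_id=12)
tuning.log_tuning(
    'RAG',                                   # tuning_type shown by TuningFormatter
    'Retrieved context for question',
    data={'chunks': 4, 'top_score': 0.83},   # rendered as indented JSON
)
```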
# Set the custom log record factory
logging.setLogRecordFactory(custom_log_record_factory)
logging.setLogRecordFactory(TuningLogRecord)
LOGGING = {
@@ -37,88 +190,104 @@ LOGGING = {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/eveai_app.log',
'maxBytes': 1024 * 1024 * 5, # 5MB
'backupCount': 10,
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
},
'file_workers': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/eveai_workers.log',
'maxBytes': 1024 * 1024 * 5, # 5MB
'backupCount': 10,
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
},
'file_chat': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/eveai_chat.log',
'maxBytes': 1024 * 1024 * 5, # 5MB
'backupCount': 10,
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
},
'file_chat_workers': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/eveai_chat_workers.log',
'maxBytes': 1024 * 1024 * 5, # 5MB
'backupCount': 10,
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
},
'file_api': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/eveai_api.log',
'maxBytes': 1024 * 1024 * 5, # 5MB
'backupCount': 10,
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
},
'file_beat': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/eveai_beat.log',
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
},
'file_entitlements': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/eveai_entitlements.log',
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
},
'file_sqlalchemy': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/sqlalchemy.log',
'maxBytes': 1024 * 1024 * 5, # 5MB
'backupCount': 10,
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
},
'file_mailman': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/mailman.log',
'maxBytes': 1024 * 1024 * 5, # 5MB
'backupCount': 10,
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
},
'file_security': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/security.log',
'maxBytes': 1024 * 1024 * 5, # 5MB
'backupCount': 10,
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
},
'file_rag_tuning': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/rag_tuning.log',
'maxBytes': 1024 * 1024 * 5, # 5MB
'backupCount': 10,
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
},
'file_embed_tuning': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/embed_tuning.log',
'maxBytes': 1024 * 1024 * 5, # 5MB
'backupCount': 10,
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
},
'file_business_events': {
'level': 'INFO',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/business_events.log',
'maxBytes': 1024 * 1024 * 5, # 5MB
'backupCount': 10,
'maxBytes': 1024 * 1024 * 1, # 1MB
'backupCount': 2,
'formatter': 'standard',
},
'console': {
@@ -126,25 +295,38 @@ LOGGING = {
'level': 'DEBUG',
'formatter': 'standard',
},
'tuning_file': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'logs/tuning.log',
'maxBytes': 1024 * 1024 * 3, # 3MB
'backupCount': 3,
'formatter': 'tuning',
},
'graylog': {
'level': 'DEBUG',
'class': 'graypy.GELFUDPHandler',
'host': GRAYLOG_HOST,
'port': GRAYLOG_PORT,
'debugging_fields': True, # Set to True if you want to include debugging fields
'extra_fields': True, # Set to True if you want to include extra fields
'debugging_fields': True,
'formatter': 'graylog'
},
},
'formatters': {
'standard': {
'format': '%(asctime)s [%(levelname)s] %(name)s (%(component)s) [%(module)s:%(lineno)d in %(funcName)s] '
'[Thread: %(threadName)s]: %(message)s'
'format': '%(asctime)s [%(levelname)s] %(name)s (%(component)s) [%(module)s:%(lineno)d]: %(message)s',
'datefmt': '%Y-%m-%d %H:%M:%S'
},
'graylog': {
'format': '[%(levelname)s] %(name)s (%(component)s) [%(module)s:%(lineno)d in %(funcName)s] '
'[Thread: %(threadName)s]: %(message)s',
'datefmt': '%Y-%m-%d %H:%M:%S',
'()': GraylogFormatter
},
'tuning': {
'()': TuningFormatter,
'datefmt': '%Y-%m-%d %H:%M:%S UTC'
}
},
'loggers': {
'eveai_app': { # logger for the eveai_app
@@ -172,6 +354,16 @@ LOGGING = {
'level': 'DEBUG',
'propagate': False
},
'eveai_beat': { # logger for the eveai_beat
'handlers': ['file_beat', 'graylog', ] if env == 'production' else ['file_beat', ],
'level': 'DEBUG',
'propagate': False
},
'eveai_entitlements': { # logger for the eveai_entitlements
'handlers': ['file_entitlements', 'graylog', ] if env == 'production' else ['file_entitlements', ],
'level': 'DEBUG',
'propagate': False
},
'sqlalchemy.engine': { # logger for the sqlalchemy
'handlers': ['file_sqlalchemy', 'graylog', ] if env == 'production' else ['file_sqlalchemy', ],
'level': 'DEBUG',
@@ -187,21 +379,17 @@ LOGGING = {
'level': 'DEBUG',
'propagate': False
},
'rag_tuning': { # logger for the rag_tuning
'handlers': ['file_rag_tuning', 'graylog', ] if env == 'production' else ['file_rag_tuning', ],
'level': 'DEBUG',
'propagate': False
},
'embed_tuning': { # logger for the embed_tuning
'handlers': ['file_embed_tuning', 'graylog', ] if env == 'production' else ['file_embed_tuning', ],
'level': 'DEBUG',
'propagate': False
},
'business_events': {
'handlers': ['file_business_events', 'graylog'],
'level': 'DEBUG',
'propagate': False
},
# Single tuning logger
'tuning': {
'handlers': ['tuning_file', 'graylog'] if env == 'production' else ['tuning_file'],
'level': 'DEBUG',
'propagate': False,
},
'': { # root logger
'handlers': ['console'],
'level': 'WARNING', # Set higher level for root to minimize noise
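
Presumably each component applies this dictionary once at startup; a minimal sketch, assuming logging.config.dictConfig is the entry point:

```python
# Assumed call site; the module already imports logging.config above.
import logging.config

logging.config.dictConfig(LOGGING)
logging.getLogger('eveai_app').debug('Logging configured')
```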


@@ -1,88 +0,0 @@
html_parse: |
You are a top administrative assistant specialized in transforming given HTML into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
# Best practices are:
- Respect wordings and language(s) used in the HTML.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- Sub-headers can be used as lists. This is true when a header is followed by a series of sub-headers without content (paragraphs or listed items). Present those sub-headers as a list.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input html file. Answer with the pure markdown, without any other text.
HTML is between triple backticks.
```{html}```
pdf_parse: |
You are a top administrative aide specialized in transforming given PDF files into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
# Best practices are:
- Respect wordings and language(s) used in the PDF.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- When headings are numbered, show the numbering and define the header level.
- A new item is started when a <return> is found before a full line is reached. In order to know the number of characters in a line, please check the document and the context within the document (e.g. an image could limit the number of characters temporarily).
- Paragraphs are to be stripped of newlines so they become easily readable.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input pdf content. Answer with the pure markdown, without any other text.
PDF content is between triple backticks.
```{pdf_content}```
summary: |
Write a concise summary of the text in {language}. The text is delimited between triple backticks.
```{text}```
rag: |
Answer the question based on the following context, delimited between triple backticks.
{tenant_context}
Use the following {language} in your communication, and cite the sources used.
If the question cannot be answered using the given context, say "I have insufficient information to answer this question."
Context:
```{context}```
Question:
{question}
history: |
You are a helpful assistant that details a question based on a previous context,
in such a way that the question is understandable without the previous context.
The context is a conversation history, with the HUMAN asking questions, the AI answering questions.
The history is delimited between triple backticks.
You answer by stating the question in {language}.
History:
```{history}```
Question to be detailed:
{question}
encyclopedia: |
You have a lot of background knowledge, and as such you are some kind of
'encyclopedia' to explain general terminology. Only answer if you have a clear understanding of the question.
If not, say you do not have sufficient information to answer the question. Use the {language} in your communication.
Question:
{question}
transcript: |
"""You are a top administrative assistant specialized in transforming given transcriptions into markdown formatted files. Your task is to process and improve the given transcript, not to summarize it.
IMPORTANT INSTRUCTIONS:
1. DO NOT summarize the transcript and don't make your own interpretations. Return the FULL, COMPLETE transcript with improvements.
2. Improve any errors in the transcript based on context.
3. Respect the original wording and language(s) used in the transcription. The main language used is {language}.
4. Divide the transcript into paragraphs for better readability. Each paragraph ONLY contains ORIGINAL TEXT.
5. Group related paragraphs into logical sections.
6. Add appropriate headers (using markdown syntax) to each section in {language}.
7. We do not need an overall title. Just add logical headers.
8. Ensure that the entire transcript is included in your response, from start to finish.
REMEMBER:
- Your output should be the complete transcript in markdown format, NOT A SUMMARY OR ANALYSIS.
- Include EVERYTHING from the original transcript, just organized and formatted better.
- Just return the markdown version of the transcript, without any other text such as an introduction or a summary.
Here is the transcript to process (between triple backticks):
```{transcript}```
Process this transcript according to the instructions above and return the full, formatted markdown version.
"""


@@ -0,0 +1,12 @@
version: "1.0.0"
content: |
You have a lot of background knowledge, and as such you are some kind of
'encyclopedia' to explain general terminology. Only answer if you have a clear understanding of the question.
If not, say you do not have sufficient information to answer the question. Use the {language} in your communication.
Question:
{question}
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "A background information retriever for Evie"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,16 @@
version: "1.0.0"
content: |
You are a helpful assistant that details a question based on a conversation history, in such a way that the detailed
question is understandable without that history. The conversation is a sequence of questions and context provided
by the HUMAN, and the AI (you) answering back, in chronological order. The most recent (i.e. last) elements are the
most important when detailing the question.
You answer by stating the detailed question in {language}.
History:
```{history}```
Question to be detailed:
{question}
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "Prompt to further detail a question based on the previous conversation"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,21 @@
version: "1.0.0"
content: |
You are a top administrative assistant specialized in transforming given HTML into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
# Best practices are:
- Respect wordings and language(s) used in the HTML.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- Sub-headers can be used as lists. This is true when a header is followed by a series of sub-headers without content (paragraphs or listed items). Present those sub-headers as a list.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input html file. Answer with the pure markdown, without any other text.
HTML is between triple backquotes.
```{html}```
model: "mistral.mistral-small-latest"
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "An aid in transforming HTML-based inputs to markdown"
changes: "Initial version migrated from flat file structure"


@@ -1,79 +0,0 @@
html_parse: |
You are a top administrative assistant specialized in transforming given HTML into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
# Best practices are:
- Respect wordings and language(s) used in the HTML.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- Sub-headers can be used as lists. This is true when a header is followed by a series of sub-headers without content (paragraphs or listed items). Present those sub-headers as a list.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input html file. Answer with the pure markdown, without any other text.
HTML is between triple backquotes.
```{html}```
pdf_parse: |
You are a top administrative aide specialized in transforming given PDF files into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
# Best practices are:
- Respect wordings and language(s) used in the PDF.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- When headings are numbered, show the numbering and define the header level.
- A new item is started when a <return> is found before a full line is reached. In order to know the number of characters in a line, please check the document and the context within the document (e.g. an image could limit the number of characters temporarily).
- Paragraphs are to be stripped of newlines so they become easily readable.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input pdf content. Answer with the pure markdown, without any other text.
PDF content is between triple backquotes.
```{pdf_content}```
summary: |
Write a concise summary of the text in {language}. The text is delimited between triple backquotes.
```{text}```
rag: |
Answer the question based on the following context, delimited between triple backquotes.
{tenant_context}
Use the following {language} in your communication, and cite the sources used.
If the question cannot be answered using the given context, say "I have insufficient information to answer this question."
Context:
```{context}```
Question:
{question}
history: |
You are a helpful assistant that details a question based on a previous context,
in such a way that the question is understandable without the previous context.
The context is a conversation history, with the HUMAN asking questions, the AI answering questions.
The history is delimited between triple backquotes.
You answer by stating the question in {language}.
History:
```{history}```
Question to be detailed:
{question}
encyclopedia: |
You have a lot of background knowledge, and as such you are some kind of
'encyclopedia' to explain general terminology. Only answer if you have a clear understanding of the question.
If not, say you do not have sufficient information to answer the question. Use the {language} in your communication.
Question:
{question}
transcript: |
You are a top administrative assistant specialized in transforming given transcriptions into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system. The transcriptions originate from podcasts, videos, and similar material.
# Best practices and steps are:
- Respect wordings and language(s) used in the transcription. Main language is {language}.
- Sometimes the transcript contains speech from several people in a conversation. Although speaker changes are not obvious from reading the file, try to detect when other people are speaking.
- Divide the transcript into several logical parts. Ensure questions and their answers are in the same logical part.
- Annotate the text to identify these logical parts using headings in {language}.
- Improve errors in the transcript given the context, but do not change the meaning or intent of the transcription.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of processing the complete input transcription. Answer with the pure markdown, without any other text.
The transcript is between triple backquotes.
```{transcript}```


@@ -1,84 +0,0 @@
html_parse: |
You are a top administrative assistant specialized in transforming given HTML into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
# Best practices are:
- Respect wordings and language(s) used in the HTML.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- Sub-headers can be used as lists. This is true when a header is followed by a series of sub-headers without content (paragraphs or listed items). Present those sub-headers as a list.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input html file. Answer with the pure markdown, without any other text.
HTML is between triple backquotes.
```{html}```
pdf_parse: |
You are a top administrative aide specialized in transforming given PDF files into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
The content you get is already processed (some markdown already generated), but needs to be corrected. For large files, you may receive only portions of the full file. Consider this when processing the content.
# Best practices are:
- Respect wordings and language(s) used in the provided content.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- When headings are numbered, show the numbering and define the header level. You may have to correct current header levels, as preprocessing is known to make errors.
- A new item is started when a <return> is found before a full line is reached. In order to know the number of characters in a line, please check the document and the context within the document (e.g. an image could limit the number of characters temporarily).
- Paragraphs are to be stripped of newlines so they become easily readable.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input pdf content. Answer with the pure markdown, without any other text.
PDF content is between triple backquotes.
```{pdf_content}```
summary: |
Write a concise summary of the text in {language}. The text is delimited between triple backquotes.
```{text}```
rag: |
Answer the question based on the following context, delimited between triple backquotes.
{tenant_context}
Use the following {language} in your communication, and cite the sources used.
If the question cannot be answered using the given context, say "I have insufficient information to answer this question."
Context:
```{context}```
Question:
{question}
history: |
You are a helpful assistant that details a question based on a previous context,
in such a way that the question is understandable without the previous context.
The context is a conversation history, with the HUMAN asking questions, the AI answering questions.
The history is delimited between triple backquotes.
You answer by stating the question in {language}.
History:
```{history}```
Question to be detailed:
{question}
encyclopedia: |
You have a lot of background knowledge, and as such you are some kind of
'encyclopedia' to explain general terminology. Only answer if you have a clear understanding of the question.
If not, say you do not have sufficient information to answer the question. Use the {language} in your communication.
Question:
{question}
transcript: |
You are a top administrative assistant specialized in transforming given transcriptions into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system. The transcriptions originate from podcasts, videos, and similar material.
You may receive information in different chunks. If you're not receiving the first chunk, you'll get the last part of the previous chunk, including its title, between triple $. Consider this last part and the title as the start of the new chunk.
# Best practices and steps are:
- Respect wordings and language(s) used in the transcription. Main language is {language}.
- Sometimes the transcript contains speech from several people in a conversation. Although speaker changes are not obvious from reading the file, try to detect when other people are speaking.
- Divide the transcript into several logical parts. Ensure questions and their answers are in the same logical part. Don't make logical parts too small. They should contain at least 7 or 8 sentences.
- Annotate the text to identify these logical parts using headings in {language}.
- Improve errors in the transcript given the context, but do not change the meaning or intent of the transcription.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of processing the complete input transcription. Answer with the pure markdown, without any other text.
The transcript is between triple backquotes.
$$${previous_part}$$$
```{transcript}```


@@ -0,0 +1,12 @@
version: "1.0.0"
content: |
You have a lot of background knowledge, and as such you are some kind of
'encyclopedia' to explain general terminology. Only answer if you have a clear understanding of the question.
If not, say you do not have sufficient information to answer the question. Use the {language} in your communication.
Question:
{question}
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "A background information retriever for Evie"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,16 @@
version: "1.0.0"
content: |
You are a helpful assistant that details a question based on a previous context,
in such a way that the question is understandable without the previous context.
The context is a conversation history, with the HUMAN asking questions, the AI answering questions.
The history is delimited between triple backquotes.
You answer by stating the question in {language}.
History:
```{history}```
Question to be detailed:
{question}
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "Prompt to further detail a question based on the previous conversation"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,20 @@
version: "1.0.0"
content: |
You are a top administrative assistant specialized in transforming given HTML into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
# Best practices are:
- Respect wordings and language(s) used in the HTML.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- Sub-headers can be used as lists. This is true when a header is followed by a series of sub-headers without content (paragraphs or listed items). Present those sub-headers as a list.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input html file. Answer with the pure markdown, without any other text.
HTML is between triple backquotes.
```{html}```
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "An aid in transforming HTML-based inputs to markdown"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,23 @@
version: "1.0.0"
content: |
You are a top administrative aide specialized in transforming given PDF files into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
The content you get is already processed (some markdown already generated), but needs to be corrected. For large files, you may receive only portions of the full file. Consider this when processing the content.
# Best practices are:
- Respect wordings and language(s) used in the provided content.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- When headings are numbered, show the numbering and define the header level. You may have to correct current header levels, as preprocessing is known to make errors.
- A new item is started when a <return> is found before a full line is reached. In order to know the number of characters in a line, please check the document and the context within the document (e.g. an image could limit the number of characters temporarily).
- Paragraphs are to be stripped of newlines so they become easily readable.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input pdf content. Answer with the pure markdown, without any other text.
PDF content is between triple backquotes.
```{pdf_content}```
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "A assistant to parse PDF-content into markdown"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,15 @@
version: "1.0.0"
content: |
Answer the question based on the following context, delimited between triple backquotes.
{tenant_context}
Use the following {language} in your communication, and cite the sources used at the end of the full conversation.
If the question cannot be answered using the given context, say "I have insufficient information to answer this question."
Context:
```{context}```
Question:
{question}
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "The Main RAG retriever"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,9 @@
version: "1.0.0"
content: |
Write a concise summary of the text in {language}. The text is delimited between triple backquotes.
```{text}```
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "An assistant to create a summary when multiple chunks are required for 1 file"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,25 @@
version: "1.0.0"
content: |
You are a top administrative assistant specialized in transforming given transcriptions into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system. The transcriptions originate from podcasts, videos, and similar material.
You may receive information in different chunks. If you're not receiving the first chunk, you'll get the last part of the previous chunk, including its title, between triple $. Consider this last part and the title as the start of the new chunk.
# Best practices and steps are:
- Respect wordings and language(s) used in the transcription. Main language is {language}.
- Sometimes the transcript contains speech from several people in a conversation. Although speaker changes are not obvious from reading the file, try to detect when other people are speaking.
- Divide the transcript into several logical parts. Ensure questions and their answers are in the same logical part. Don't make logical parts too small. They should contain at least 7 or 8 sentences.
- Annotate the text to identify these logical parts using headings in {language}.
- Improve errors in the transcript given the context, but do not change the meaning or intent of the transcription.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of processing the complete input transcription. Answer with the pure markdown, without any other text.
The transcript is between triple backquotes.
$$${previous_part}$$$
```{transcript}```
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "An assistant to transform a transcript to markdown."
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,23 @@
version: "1.0.0"
content: |
You are a top administrative aide specialized in transforming given PDF files into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
The content you get is already processed (some markdown already generated), but needs to be corrected. For large files, you may receive only portions of the full file. Consider this when processing the content.
# Best practices are:
- Respect wordings and language(s) used in the provided content.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- When headings are numbered, show the numbering and define the header level. You may have to correct current header levels, as preprocessing is known to make errors.
- A new item is started when a <return> is found before a full line is reached. In order to know the number of characters in a line, please check the document and the context within the document (e.g. an image could limit the number of characters temporarily).
- Paragraphs are to be stripped of newlines so they become easily readable.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input pdf content. Answer with the pure markdown, without any other text.
PDF content is between triple backquotes.
```{pdf_content}```
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "An assistant to parse PDF-content into markdown"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,15 @@
version: "1.0.0"
content: |
Answer the question based on the following context, delimited between triple backquotes.
{tenant_context}
Use the following {language} in your communication, and cite the sources used at the end of the full conversation.
If the question cannot be answered using the given context, say "I have insufficient information to answer this question."
Context:
```{context}```
Question:
{question}
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "The Main RAG retriever"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,9 @@
version: "1.0.0"
content: |
Write a concise summary of the text in {language}. The text is delimited between triple backquotes.
```{text}```
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "An assistant to create a summary when multiple chunks are required for 1 file"
changes: "Initial version migrated from flat file structure"

Some files were not shown because too many files have changed in this diff.