53 Commits

Author SHA1 Message Date
Josako
344ea26ecc - Security improvements to Docker images (Docker Scout advice) 2024-11-27 12:27:28 +01:00
Josako
98cb4e4f2f - Created a new eveai_chat plugin to support the new dynamic possibilities of the Specialists. Currently only supports standard Rag retrievers (i.e. no extra arguments). 2024-11-27 12:26:49 +01:00
Josako
07d89d204f - Created a new eveai_chat plugin to support the new dynamic possibilities of the Specialists. Currently only supports standard Rag retrievers (i.e. no extra arguments). 2024-11-26 13:35:29 +01:00
Josako
7702a6dfcc - Modernized authentication with the introduction of TenantProject
- Created a base mail template
- Adapt and improve document API to usage of catalogs and processors
- Adapt eveai_sync to new authentication mechanism and usage of catalogs and processors
2024-11-21 17:24:33 +01:00
Josako
4c009949b3 - Changes to support SpecialistID being passed instead of CatalogID
- Removed error that stopped sync
2024-11-15 13:13:45 +01:00
Josako
aa4ac3ec7c - Changes to support SpecialistID being passed instead of CatalogID
- Removed error that stopped sync
2024-11-15 13:13:33 +01:00
Josako
1807435339 - Introduction of dynamic Retrievers & Specialists
- Introduction of dynamic Processors
- Introduction of caching system
- Introduction of a better template manager
- Adaptation of ModelVariables to support dynamic Processors / Retrievers / Specialists
- Start adaptation of chat client
2024-11-15 10:00:53 +01:00
Josako
55a8a95f79 - Finalisation of the Specialist model, forms and views 2024-11-04 11:22:40 +01:00
Josako
503ea7965d - Temporary checkin to branch for the rest of the introduction of experts 2024-11-03 16:18:14 +01:00
Josako
88f4db1178 - Organise retrievers 2024-11-01 11:19:55 +01:00
Josako
2df291ea91 - Organise retrievers 2024-11-01 11:19:34 +01:00
Josako
5841525b4c - When no explicit path is given in the browser, we automatically get redirected to the admin interface (eveai_app)
- Tuning moved to the Retriever instead of the configuration, as this is an attribute that should be available for all types of Retrievers
2024-10-31 08:32:02 +01:00
Josako
532073d38e - Add dynamic fields to DocumentVersion in case the Catalog requires it. 2024-10-30 13:52:18 +01:00
Josako
43547287b1 - Refining & Enhancing dynamic fields
- Creating a specialized Form class for handling dynamic fields
- Refinement of HTML-macros to handle dynamic fields
- Introduction of dynamic fields for Catalogs
2024-10-29 09:17:44 +01:00
Josako
aa358df28e - Allowing for multiple types of Catalogs
- Introduction of retrievers
- Ensuring processing information is collected from Catalog instead of Tenant
- Introduction of a generic Form class to enable dynamic fields based on a configuration
- Realisation of Retriever functionality to support dynamic fields
2024-10-25 14:11:47 +02:00
Josako
30fec27488 - Release script added to tag in both git and docker 2024-10-21 07:45:06 +02:00
Josako
5e77b478dd - Release script added to tag in both git and docker 2024-10-17 11:22:18 +02:00
Josako
6f71259822 - Changelog update 2024-10-17 10:35:51 +02:00
Josako
74cc7ae95e - Adapt Sync Wordpress Component to Catalog introduction
- Small bug fixes
2024-10-17 10:31:13 +02:00
Josako
7f12c8b355 - Remove obsolete fields from Tenant model (Catalog introduction) 2024-10-16 13:59:57 +02:00
Josako
6069f5f7e5 - Catalog functionality integrated into document and document_version views
- small bugfixes and improvements
2024-10-16 13:09:19 +02:00
Josako
3e644f1652 - Add Catalog Functionality 2024-10-15 18:14:57 +02:00
Josako
3316a8bc47 - Small changes to show when upgrades are finished 2024-10-14 16:40:56 +02:00
Josako
270479c77d - Add Catalog Concept to Document Domain
- Create Catalog views
- Modify document stack creation
2024-10-14 13:56:23 +02:00
Josako
0f4558d775 - Small fix in interaction view, as it still referred to file_name 2024-10-11 18:14:35 +02:00
Josako
9f5f090f0c - License Usage Calculation realised
- View License Usages
- Celery Beat container added
- First schedule in Celery Beat for calculating usage (hourly)
- repopack can now split for different components
- Various fixes as consequence of changing file_location / file_name ==> bucket_name / object_name
- Celery Routing / Queuing updated
2024-10-11 16:33:36 +02:00
Josako
5ffad160b1 - Prepared Release 1.0.10-alfa 2024-10-08 09:18:59 +02:00
Josako
d6a7743f26 - Minor corrections to entitlement changes and upgrades
- started new eveai_entitlements component (not finished)
2024-10-08 09:12:16 +02:00
Josako
9782e31ae5 - Refined entitlements to work with MiB for both embeddings and storage
- Improved DocumentVersion storage attributes to reflect Minio settings
- Added size to DocumentVersions to easily calculate usage
- License / LicenseTier forms and views added
2024-10-07 14:17:44 +02:00
Josako
f638860e90 - Improvements on audio processing to limit CPU and memory usage
- Removed Portkey from the equation, and defined explicit monitoring using Langchain native code
- Optimization of Business Event logging
2024-10-02 14:12:16 +02:00
Josako
b700cfac64 - Improvements on audio processing to limit CPU and memory usage
- Removed Portkey from the equation, and defined explicit monitoring using Langchain native code
- Optimization of Business Event logging
2024-10-02 14:11:46 +02:00
Josako
883175b8f5 - Portkey log retrieval started
- flower container added (dev and prod)
2024-10-01 08:01:59 +02:00
Josako
ae697df4c9 Session_id was not correctly stored for chat sessions, and it was defined as an integer instead of a UUID in the database 2024-09-27 11:24:43 +02:00
Josako
d9cb00fcdc Business event tracing completed for both eveai_workers tasks and eveai_chat_workers tasks 2024-09-27 10:53:42 +02:00
Josako
ee1b0f1cfa Start log tracing to log business events. Storage in both database and logging-backend. 2024-09-25 15:39:25 +02:00
Josako
a740c96630 - turned model_variables into a class with lazy loading
- some improvements to Healthchecks
2024-09-24 10:48:52 +02:00
Josako
67bdeac434 - Improvements and bugfixes to HealthChecks 2024-09-16 16:17:54 +02:00
Josako
1622591afd Adding code to backend. 2024-09-16 09:39:34 +02:00
Josako
6cf660e622 - Adding a Tenant Type
- Allow filtering on Tenant Types & searching for parts of Tenant names
- Implement health checks
- Start Prometheus monitoring (needs to be finalized)
- Refine audio_processor and srt_processor to reduce duplicate code and support for larger files
- Introduce repopack to reason in LLMs about the code
2024-09-13 15:43:40 +02:00
Josako
9e14824249 - Further refinement of the API, adding functionality for refreshing documents and returning Token expiration time when retrieving token
- Implementation of a first version of a Wordpress plugin
- Adding api service to nginx.conf
2024-09-11 16:31:13 +02:00
Josako
76cb825660 - Full API application, streamlined, de-duplication of document handling code into document_utils.py
- Added meta-data fields to DocumentVersion
- Docker container to support API
2024-09-09 16:11:42 +02:00
Josako
341ba47d1c - Bugfixing 2024-09-05 14:31:54 +02:00
Josako
1fa33c029b - Correcting mistakes in tenant schema migrations 2024-09-03 11:50:25 +02:00
Josako
bcf7d439f3 - Old migration files that were not added to GIT 2024-09-03 11:49:46 +02:00
Josako
b9acf4d2ae - Add CHANGELOG.md 2024-09-02 14:04:44 +02:00
Josako
ae7bf3dbae - Correct default language when adding Documents and URLs 2024-09-02 14:04:22 +02:00
Josako
914c265afe - Improvements on document uploads (accept other files than html-files when entering a URL)
- Introduction of API-functionality (to be continued). Deduplication of document and url uploads between views and api.
- Improvements on document processing - introduction of processor classes to streamline document inputs
- Removed pure Youtube functionality, as Youtube retrieval of documents continuously changes. But added upload of srt, mp3, ogg and mp4
2024-09-02 12:37:44 +02:00
Josako
a158655247 - Add API Key Registration to tenant 2024-08-29 10:42:39 +02:00
Josako
bc350af247 - Allow the chat-widget to connect to multiple servers (e.g. development and production)
- Created a full session overview
2024-08-28 10:11:31 +02:00
Josako
6062b7646c - Allow multiple instances of Evie on 1 website. Shortcode is now parametrized. 2024-08-27 10:31:33 +02:00
Josako
122d1a18df - Allow for more complex and longer PDFs to be uploaded to Evie. First implementation of a processor for specific file types.
- Allow URLs to contain other information than just HTML. A URL can also refer to e.g. PDF files.
2024-08-27 07:05:56 +02:00
Josako
2ca006d82c Added excluded element classes to HTML parsing to allow for more complex document parsing
Added chunking to conversion of HTML to markdown in case of large files
2024-08-22 16:41:13 +02:00
Josako
a9f9b04117 Bugfix for ResetPasswordForm in config.py 2024-08-22 07:10:30 +02:00
297 changed files with 17495 additions and 10411 deletions

33
.gitignore vendored

@@ -12,3 +12,36 @@ docker/tenant_files/
**/.DS_Store
__pycache__
**/__pycache__
/.idea
*.pyc
*.pyc
common/.DS_Store
common/__pycache__/__init__.cpython-312.pyc
common/__pycache__/extensions.cpython-312.pyc
common/models/__pycache__/__init__.cpython-312.pyc
common/models/__pycache__/document.cpython-312.pyc
common/models/__pycache__/interaction.cpython-312.pyc
common/models/__pycache__/user.cpython-312.pyc
common/utils/.DS_Store
common/utils/__pycache__/__init__.cpython-312.pyc
common/utils/__pycache__/celery_utils.cpython-312.pyc
common/utils/__pycache__/nginx_utils.cpython-312.pyc
common/utils/__pycache__/security.cpython-312.pyc
common/utils/__pycache__/simple_encryption.cpython-312.pyc
common/utils/__pycache__/template_filters.cpython-312.pyc
config/.DS_Store
config/__pycache__/__init__.cpython-312.pyc
config/__pycache__/config.cpython-312.pyc
config/__pycache__/logging_config.cpython-312.pyc
eveai_app/.DS_Store
eveai_app/__pycache__/__init__.cpython-312.pyc
eveai_app/__pycache__/errors.cpython-312.pyc
eveai_chat/.DS_Store
migrations/.DS_Store
migrations/public/.DS_Store
scripts/.DS_Store
scripts/__pycache__/run_eveai_app.cpython-312.pyc
/eveai_repo.txt
*repo.txt
/docker/eveai_logs/
/integrations/Wordpress/eveai_sync.zip

6
.idea/sqldialects.xml generated

@@ -1,6 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="SqlDialectMappings">
    <file url="PROJECT" dialect="PostgreSQL" />
  </component>
</project>

20
.repopackignore_base Normal file

@@ -0,0 +1,20 @@
# Add patterns to ignore here, one per line
# Example:
# *.log
# tmp/
logs/
nginx/static/assets/fonts/
nginx/static/assets/img/
nginx/static/assets/js/
nginx/static/scss/
patched_packages/
migrations/
*material*
*nucleo*
*package*
nginx/mime.types
*.gitignore*
.python-version
.repopackignore*
repopack.config.json
*repo.txt


@@ -0,0 +1,12 @@
docker/
eveai_api/
eveai_app/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/
nginx/
scripts/

12
.repopackignore_docker Normal file

@@ -0,0 +1,12 @@
common/
config/
eveai_api/
eveai_app/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/
nginx/

10
.repopackignore_eveai_api Normal file

@@ -0,0 +1,10 @@
docker/
eveai_app/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
instance/
integrations/Wordpress/eveai-chat
nginx/
scripts/

11
.repopackignore_eveai_app Normal file

@@ -0,0 +1,11 @@
docker/
eveai_api/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/
nginx/
scripts/


@@ -0,0 +1,11 @@
docker/
eveai_api/
eveai_app/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/
nginx/
scripts/


@@ -0,0 +1,10 @@
docker/
eveai_app/
eveai_beat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/Wordpress/eveai_sync
nginx/
scripts/


@@ -0,0 +1,11 @@
docker/
eveai_api/
eveai_app/
eveai_beat/
eveai_chat/
eveai_entitlements/
eveai_workers/
instance/
integrations/
nginx/
scripts/


@@ -0,0 +1,11 @@
docker/
eveai_api/
eveai_app/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_workers/
instance/
integrations/
nginx/
scripts/


@@ -0,0 +1,11 @@
docker/
eveai_api/
eveai_app/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
instance/
integrations/
nginx/
scripts/

4
.repopackignore_full Normal file

@@ -0,0 +1,4 @@
docker
integrations
nginx
scripts


@@ -0,0 +1,13 @@
common/
config/
docker/
eveai_api/
eveai_app/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
nginx/
scripts/

11
.repopackignore_nginx Normal file

@@ -0,0 +1,11 @@
docker/
eveai_api/
eveai_app/
eveai_beat/
eveai_chat/
eveai_chat_workers/
eveai_entitlements/
eveai_workers/
instance/
integrations/
scripts/

238
CHANGELOG.md Normal file

@@ -0,0 +1,238 @@
# Changelog
All notable changes to EveAI will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Added
- For new features.
### Changed
- For changes in existing functionality.
### Deprecated
- For soon-to-be removed features.
### Removed
- For now removed features.
### Fixed
- Set default language when registering Documents or URLs.
### Security
- In case of vulnerabilities.
## [2.0.0-alfa]
### Added
- Introduction of dynamic Retrievers & Specialists
- Introduction of dynamic Processors
- Introduction of caching system
- Introduction of a better template manager
### Changed
- For changes in existing functionality.
### Deprecated
- For soon-to-be removed features.
### Removed
- For now removed features.
### Fixed
- Set default language when registering Documents or URLs.
### Security
- In case of vulnerabilities.
## [1.0.14-alfa]
### Added
- New release script added to tag images with release number
- Allow the addition of multiple types of Catalogs
- Generic functionality to enable dynamic fields
- Addition of Retrievers to allow for smart collection of information in Catalogs
- Add dynamic fields to Catalog / Retriever / DocumentVersion
### Changed
- Processing parameters defined at Catalog level instead of Tenant level
- Reroute 'blank' paths to 'admin'
### Deprecated
- For soon-to-be removed features.
### Removed
- For now removed features.
### Fixed
- Set default language when registering Documents or URLs.
### Security
- In case of vulnerabilities.
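The generic dynamic-field functionality listed above is not shown in this excerpt. As a rough illustration of the idea, a configuration-driven form could be assembled along these lines (a minimal sketch only; the `fields` layout and the `build_dynamic_form` helper are assumptions, not the project's actual API):

# Hypothetical sketch: build WTForms fields from a JSONB `configuration` blob.
# The config layout and helper name are assumptions, not EveAI's actual schema.
from flask_wtf import FlaskForm
from wtforms import StringField, IntegerField, BooleanField
from wtforms.validators import DataRequired, Optional

FIELD_TYPES = {'string': StringField, 'integer': IntegerField, 'boolean': BooleanField}

def build_dynamic_form(configuration: dict) -> type:
    """Create a FlaskForm subclass with one field per configured entry."""
    class DynamicForm(FlaskForm):
        pass

    for name, spec in (configuration or {}).get('fields', {}).items():
        field_cls = FIELD_TYPES[spec.get('type', 'string')]
        validators = [DataRequired()] if spec.get('required') else [Optional()]
        # WTForms collects class attributes lazily, so setattr on the subclass works
        setattr(DynamicForm, name, field_cls(spec.get('label', name), validators=validators))
    return DynamicForm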
## [1.0.13-alfa]
### Added
- Finished Catalog introduction
- Reinitialization of WordPress site for syncing
### Changed
- Modification of WordPress Sync Component
- Cleanup of attributes in Tenant
### Fixed
- Overall bugfixes as result from the Catalog introduction
## [1.0.12-alfa]
### Added
- Added Catalog functionality
### Changed
- For changes in existing functionality.
### Deprecated
- For soon-to-be removed features.
### Removed
- For now removed features.
### Fixed
- Set default language when registering Documents or URLs.
### Security
- In case of vulnerabilities.
## [1.0.11-alfa]
### Added
- License Usage Calculation realised
- View License Usages
- Celery Beat container added
- First schedule in Celery Beat for calculating usage (hourly)
### Changed
- repopack can now split for different components
### Fixed
- Various fixes as consequence of changing file_location / file_name ==> bucket_name / object_name
- Celery Routing / Queuing updated
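The hourly schedule itself is not part of this excerpt; a Celery beat entry for it would typically look something like this (a sketch only; the task path and queue name are assumptions):

# Hypothetical sketch of the hourly usage schedule; the task path and
# queue name are assumptions, not taken from the EveAI codebase.
from celery.schedules import crontab

beat_schedule = {
    'calculate-license-usage-hourly': {
        'task': 'eveai_beat.tasks.calculate_license_usage',  # assumed task path
        'schedule': crontab(minute=0),  # run at the top of every hour
        'options': {'queue': 'entitlements'},  # assumed queue
    },
}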
## [1.0.10-alfa]
### Added
- BusinessEventLog monitoring using Langchain native code
### Changed
- Allow longer audio files (or video) to be uploaded and processed
- Storage and Embedding usage now expressed in MiB instead of tokens (more logical)
- Views for License / LicenseTier
### Removed
- Portkey removed for monitoring usage
## [1.0.9-alfa] - 2024-10-01
### Added
- Business Event tracing (eveai_workers & eveai_chat_workers)
- Flower Container added for monitoring
### Changed
- Healthcheck improvements
- model_utils turned into a class with lazy loading
### Deprecated
- For soon-to-be removed features.
### Removed
- For now removed features.
### Fixed
- Set default language when registering Documents or URLs.
## [1.0.8-alfa] - 2024-09-12
### Added
- Tenant type defined to allow for active, inactive, demo ... tenants
- Search and filtering functionality on Tenants
- Implementation of health checks (1st version)
- Provision for Prometheus monitoring (no implementation yet)
- Refine audio_processor and srt_processor to reduce duplicate code and support larger files
- Introduction of repopack to reason in LLMs about the code
### Fixed
- Refine audio_processor and srt_processor to reduce duplicate code and support larger files
## [1.0.7-alfa] - 2024-09-12
### Added
- Full Document API allowing for creation, updating and invalidation of documents.
- Metadata fields (JSON) added to DocumentVersion, allowing end-users to add structured information
- Wordpress plugin eveai_sync to synchronize Wordpress content with EveAI
### Fixed
- Maximal deduplication of code between views and api in document_utils.py
## [1.0.6-alfa] - 2024-09-03
### Fixed
- Problems with tenant scheme migrations - may have to be revisited
- Correction of default language settings when uploading docs or URLs
- Addition of a CHANGELOG.md file
## [1.0.5-alfa] - 2024-09-02
### Added
- Allow chatwidget to connect to multiple servers (e.g. development and production)
- Start implementation of API
- Add API-key functionality to tenants
- Deduplication of API and Document view code
- Allow URL addition to accept all types of files, not just HTML
- Allow new file types upload: srt, mp3, ogg, mp4
- Improve processing of different file types using Processor classes
### Removed
- Removed direct upload of Youtube URLs, due to continuous changes in Youtube website
## [1.0.4-alfa] - 2024-08-27
Skipped
## [1.0.3-alfa] - 2024-08-27
### Added
- Refinement of HTML processing - allow for excluded classes and elements.
- Allow for multiple instances of Evie on 1 website (pure + Wordpress plugin)
### Changed
- PDF Processing extracted in new PDF Processor class.
- Allow for longer and more complex PDFs to be uploaded.
## [1.0.2-alfa] - 2024-08-22
### Fixed
- Bugfix for ResetPasswordForm in config.py
## [1.0.1-alfa] - 2024-08-21
### Added
- Full Document Version Overview
### Changed
- Improvements to user creation and registration, renewal of passwords, ...
## [1.0.0-alfa] - 2024-08-16
### Added
- Initial release of the project.
### Changed
- None
### Fixed
- None
[Unreleased]: https://github.com/username/repo/compare/v1.0.0...HEAD
[1.0.0]: https://github.com/username/repo/releases/tag/v1.0.0


@@ -9,8 +9,11 @@ from flask_socketio import SocketIO
from flask_jwt_extended import JWTManager
from flask_session import Session
from flask_wtf import CSRFProtect
+from flask_restx import Api
+from prometheus_flask_exporter import PrometheusMetrics
-from .utils.nginx_utils import prefixed_url_for
+from .langchain.templates.template_manager import TemplateManager
+from .utils.cache.eveai_cache_manager import EveAICacheManager
from .utils.simple_encryption import SimpleEncryption
from .utils.minio_utils import MinioClient

@@ -27,8 +30,9 @@ cors = CORS()
socketio = SocketIO()
jwt = JWTManager()
session = Session()
+api_rest = Api()
-# kms_client = JosKMSClient.from_service_account_json('config/gc_sa_eveai.json')
simple_encryption = SimpleEncryption()
minio_client = MinioClient()
+metrics = PrometheusMetrics.for_app_factory()
+template_manager = TemplateManager()
+cache_manager = EveAICacheManager()


@@ -1,44 +0,0 @@
from langchain_core.retrievers import BaseRetriever
from sqlalchemy import asc
from sqlalchemy.exc import SQLAlchemyError
from pydantic import BaseModel, Field
from typing import Any, Dict

from flask import current_app

from common.extensions import db
from common.models.interaction import ChatSession, Interaction
from common.utils.datetime_utils import get_date_in_timezone


class EveAIHistoryRetriever(BaseRetriever):
    model_variables: Dict[str, Any] = Field(...)
    session_id: str = Field(...)

    def __init__(self, model_variables: Dict[str, Any], session_id: str):
        super().__init__()
        self.model_variables = model_variables
        self.session_id = session_id

    def _get_relevant_documents(self, query: str):
        current_app.logger.debug(f'Retrieving history of interactions for query: {query}')
        try:
            query_obj = (
                db.session.query(Interaction)
                .join(ChatSession, Interaction.chat_session_id == ChatSession.id)
                .filter(ChatSession.session_id == self.session_id)
                .order_by(asc(Interaction.id))
            )
            interactions = query_obj.all()
            result = []
            for interaction in interactions:
                result.append(f'HUMAN:\n{interaction.detailed_question}\n\nAI: \n{interaction.answer}\n\n')
        except SQLAlchemyError as e:
            current_app.logger.error(f'Error retrieving history of interactions: {e}')
            db.session.rollback()
            return []
        return result


@@ -1,129 +0,0 @@
from langchain_core.retrievers import BaseRetriever
from sqlalchemy import func, and_, or_, desc
from sqlalchemy.exc import SQLAlchemyError
from pydantic import BaseModel, Field
from typing import Any, Dict

from flask import current_app

from common.extensions import db
from common.models.document import Document, DocumentVersion
from common.utils.datetime_utils import get_date_in_timezone


class EveAIRetriever(BaseRetriever):
    model_variables: Dict[str, Any] = Field(...)
    tenant_info: Dict[str, Any] = Field(...)

    def __init__(self, model_variables: Dict[str, Any], tenant_info: Dict[str, Any]):
        super().__init__()
        self.model_variables = model_variables
        self.tenant_info = tenant_info

    def _get_relevant_documents(self, query: str):
        current_app.logger.debug(f'Retrieving relevant documents for query: {query}')
        query_embedding = self._get_query_embedding(query)
        db_class = self.model_variables['embedding_db_model']
        similarity_threshold = self.model_variables['similarity_threshold']
        k = self.model_variables['k']

        if self.tenant_info['rag_tuning']:
            try:
                current_date = get_date_in_timezone(self.tenant_info['timezone'])
                current_app.rag_tuning_logger.debug(f'Current date: {current_date}\n')
                # Debug query to show similarity for all valid documents (without chunk text)
                debug_query = (
                    db.session.query(
                        Document.id.label('document_id'),
                        DocumentVersion.id.label('version_id'),
                        db_class.id.label('embedding_id'),
                        (1 - db_class.embedding.cosine_distance(query_embedding)).label('similarity')
                    )
                    .join(DocumentVersion, db_class.doc_vers_id == DocumentVersion.id)
                    .join(Document, DocumentVersion.doc_id == Document.id)
                    .filter(
                        or_(Document.valid_from.is_(None), func.date(Document.valid_from) <= current_date),
                        or_(Document.valid_to.is_(None), func.date(Document.valid_to) >= current_date)
                    )
                    .order_by(desc('similarity'))
                )
                debug_results = debug_query.all()
                current_app.logger.debug("Debug: Similarity for all valid documents:")
                for row in debug_results:
                    current_app.rag_tuning_logger.debug(f"Doc ID: {row.document_id}, "
                                                        f"Version ID: {row.version_id}, "
                                                        f"Embedding ID: {row.embedding_id}, "
                                                        f"Similarity: {row.similarity}")
                current_app.rag_tuning_logger.debug(f'---------------------------------------\n')
            except SQLAlchemyError as e:
                current_app.logger.error(f'Error generating overview: {e}')
                db.session.rollback()

        if self.tenant_info['rag_tuning']:
            current_app.rag_tuning_logger.debug(f'Parameters for Retrieval of documents: \n')
            current_app.rag_tuning_logger.debug(f'Similarity Threshold: {similarity_threshold}\n')
            current_app.rag_tuning_logger.debug(f'K: {k}\n')
            current_app.rag_tuning_logger.debug(f'---------------------------------------\n')

        try:
            current_date = get_date_in_timezone(self.tenant_info['timezone'])
            # Subquery to find the latest version of each document
            subquery = (
                db.session.query(
                    DocumentVersion.doc_id,
                    func.max(DocumentVersion.id).label('latest_version_id')
                )
                .group_by(DocumentVersion.doc_id)
                .subquery()
            )
            # Main query to filter embeddings
            query_obj = (
                db.session.query(db_class,
                                 (1 - db_class.embedding.cosine_distance(query_embedding)).label('similarity'))
                .join(DocumentVersion, db_class.doc_vers_id == DocumentVersion.id)
                .join(Document, DocumentVersion.doc_id == Document.id)
                .join(subquery, DocumentVersion.id == subquery.c.latest_version_id)
                .filter(
                    or_(Document.valid_from.is_(None), func.date(Document.valid_from) <= current_date),
                    or_(Document.valid_to.is_(None), func.date(Document.valid_to) >= current_date),
                    (1 - db_class.embedding.cosine_distance(query_embedding)) > similarity_threshold
                )
                .order_by(desc('similarity'))
                .limit(k)
            )
            if self.tenant_info['rag_tuning']:
                current_app.rag_tuning_logger.debug(f'Query executed for Retrieval of documents: \n')
                current_app.rag_tuning_logger.debug(f'{query_obj.statement}\n')
                current_app.rag_tuning_logger.debug(f'---------------------------------------\n')
            res = query_obj.all()
            if self.tenant_info['rag_tuning']:
                current_app.rag_tuning_logger.debug(f'Retrieved {len(res)} relevant documents \n')
                current_app.rag_tuning_logger.debug(f'Data retrieved: \n')
                current_app.rag_tuning_logger.debug(f'{res}\n')
                current_app.rag_tuning_logger.debug(f'---------------------------------------\n')
            result = []
            for doc in res:
                if self.tenant_info['rag_tuning']:
                    current_app.rag_tuning_logger.debug(f'Document ID: {doc[0].id} - Distance: {doc[1]}\n')
                    current_app.rag_tuning_logger.debug(f'Chunk: \n {doc[0].chunk}\n\n')
                result.append(f'SOURCE: {doc[0].id}\n\n{doc[0].chunk}\n\n')
        except SQLAlchemyError as e:
            current_app.logger.error(f'Error retrieving relevant documents: {e}')
            db.session.rollback()
            return []
        return result

    def _get_query_embedding(self, query: str):
        embedding_model = self.model_variables['embedding_model']
        query_embedding = embedding_model.embed_query(query)
        return query_embedding


@@ -0,0 +1,49 @@
import time
from langchain.callbacks.base import BaseCallbackHandler
from typing import Dict, Any, List
from langchain.schema import LLMResult

from common.utils.business_event_context import current_event
from flask import current_app


class LLMMetricsHandler(BaseCallbackHandler):
    def __init__(self):
        self.total_tokens: int = 0
        self.prompt_tokens: int = 0
        self.completion_tokens: int = 0
        self.start_time: float = 0
        self.end_time: float = 0
        self.total_time: float = 0

    def reset(self):
        self.total_tokens = 0
        self.prompt_tokens = 0
        self.completion_tokens = 0
        self.start_time = 0
        self.end_time = 0
        self.total_time = 0

    def on_llm_start(self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) -> None:
        self.start_time = time.time()

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        self.end_time = time.time()
        self.total_time = self.end_time - self.start_time
        usage = response.llm_output.get('token_usage', {})
        self.prompt_tokens += usage.get('prompt_tokens', 0)
        self.completion_tokens += usage.get('completion_tokens', 0)
        self.total_tokens = self.prompt_tokens + self.completion_tokens
        metrics = self.get_metrics()
        current_event.log_llm_metrics(metrics)
        self.reset()  # Reset for the next call

    def get_metrics(self) -> Dict[str, int | float]:
        return {
            'total_tokens': self.total_tokens,
            'prompt_tokens': self.prompt_tokens,
            'completion_tokens': self.completion_tokens,
            'time_elapsed': self.total_time,
            'interaction_type': 'LLM',
        }
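Wiring the handler into a LangChain model call could look like this (a usage sketch; the model name and prompt are placeholders, and current_event assumes an active business event):

# Usage sketch for LLMMetricsHandler; model name and prompt are placeholders.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model='gpt-4o', callbacks=[LLMMetricsHandler()])
llm.invoke('Summarize the indexed document in one sentence.')
# on_llm_end fires after the call, reads token_usage from the response,
# logs the metrics via current_event, and resets the counters.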


@@ -0,0 +1,23 @@
# Output Schema Management - common/langchain/outputs/base.py
from typing import Dict, Type, Any
from pydantic import BaseModel


class BaseSpecialistOutput(BaseModel):
    """Base class for all specialist outputs"""
    pass


class OutputRegistry:
    """Registry for specialist output schemas"""
    _schemas: Dict[str, Type[BaseSpecialistOutput]] = {}

    @classmethod
    def register(cls, specialist_type: str, schema_class: Type[BaseSpecialistOutput]):
        cls._schemas[specialist_type] = schema_class

    @classmethod
    def get_schema(cls, specialist_type: str) -> Type[BaseSpecialistOutput]:
        if specialist_type not in cls._schemas:
            raise ValueError(f"No output schema registered for {specialist_type}")
        return cls._schemas[specialist_type]


@@ -0,0 +1,22 @@
# RAG Specialist Output - common/langchain/outputs/rag.py
from typing import List
from pydantic import Field

from .base import BaseSpecialistOutput


class RAGOutput(BaseSpecialistOutput):
    """Output schema for RAG specialist"""
    """Default docstring - to be replaced with actual prompt"""
    answer: str = Field(
        ...,
        description="The answer to the user question, based on the given sources",
    )
    citations: List[int] = Field(
        ...,
        description="The integer IDs of the SPECIFIC sources that were used to generate the answer"
    )
    insufficient_info: bool = Field(
        False,  # Default value is set to False
        description="A boolean indicating whether given sources were sufficient or not to generate the answer"
    )
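The registration site for these schemas is not part of this diff; presumably they are wired up roughly like this (the 'STANDARD_RAG' key is inferred from the Specialist model's default type and may not match the real code):

# Hypothetical registration and lookup; the 'STANDARD_RAG' key is inferred
# from Specialist.type's default, not taken from the actual registration site.
from common.langchain.outputs.base import OutputRegistry
from common.langchain.outputs.rag import RAGOutput

OutputRegistry.register('STANDARD_RAG', RAGOutput)

schema = OutputRegistry.get_schema('STANDARD_RAG')
result = schema(answer='42 is the answer.', citations=[3, 7])  # insufficient_info defaults to False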


@@ -0,0 +1,154 @@
import os
import yaml
from typing import Dict, Optional, Any
from packaging import version
from dataclasses import dataclass

from flask import current_app, Flask

from common.utils.os_utils import get_project_root


@dataclass
class PromptTemplate:
    """Represents a versioned prompt template"""
    content: str
    version: str
    metadata: Dict[str, Any]


class TemplateManager:
    """Manages versioned prompt templates"""

    def __init__(self):
        self.templates_dir = None
        self._templates = None
        self.app = None

    def init_app(self, app: Flask) -> None:
        # Initialize template manager
        base_dir = "/app"
        self.templates_dir = os.path.join(base_dir, 'config', 'prompts')
        app.logger.debug(f'Loading templates from {self.templates_dir}')
        self.app = app
        self._templates = self._load_templates()

        # Log available templates for each supported model
        for llm in app.config['SUPPORTED_LLMS']:
            try:
                available_templates = self.list_templates(llm)
                app.logger.info(f"Loaded templates for {llm}: {available_templates}")
            except ValueError:
                app.logger.warning(f"No templates found for {llm}")

    def _load_templates(self) -> Dict[str, Dict[str, Dict[str, PromptTemplate]]]:
        """
        Load all template versions from the templates directory.
        Structure: {provider.model -> {template_name -> {version -> template}}}

        Directory structure:
        prompts/
        ├── provider/
        │   └── model/
        │       └── template_name/
        │           └── version.yaml
        """
        templates = {}

        # Iterate through providers (anthropic, openai)
        for provider in os.listdir(self.templates_dir):
            provider_path = os.path.join(self.templates_dir, provider)
            if not os.path.isdir(provider_path):
                continue

            # Iterate through models (claude-3, gpt-4o)
            for model in os.listdir(provider_path):
                model_path = os.path.join(provider_path, model)
                if not os.path.isdir(model_path):
                    continue

                provider_model = f"{provider}.{model}"
                templates[provider_model] = {}

                # Iterate through template types (rag, summary, etc.)
                for template_name in os.listdir(model_path):
                    template_path = os.path.join(model_path, template_name)
                    if not os.path.isdir(template_path):
                        continue

                    template_versions = {}

                    # Load all version files for this template
                    for version_file in os.listdir(template_path):
                        if not version_file.endswith('.yaml'):
                            continue

                        version_str = version_file[:-5]  # Remove .yaml
                        if not self._is_valid_version(version_str):
                            current_app.logger.warning(
                                f"Invalid version format for {template_name}: {version_str}")
                            continue

                        try:
                            with open(os.path.join(template_path, version_file)) as f:
                                template_data = yaml.safe_load(f)

                            # Verify required fields
                            if not template_data.get('content'):
                                raise ValueError("Template content is required")

                            template_versions[version_str] = PromptTemplate(
                                content=template_data['content'],
                                version=version_str,
                                metadata=template_data.get('metadata', {})
                            )
                        except Exception as e:
                            current_app.logger.error(
                                f"Error loading template {template_name} version {version_str}: {e}")
                            continue

                    if template_versions:
                        templates[provider_model][template_name] = template_versions

        return templates

    def _is_valid_version(self, version_str: str) -> bool:
        """Validate semantic versioning string"""
        try:
            version.parse(version_str)
            return True
        except version.InvalidVersion:
            return False

    def get_template(self,
                     provider_model: str,
                     template_name: str,
                     template_version: Optional[str] = None) -> PromptTemplate:
        """
        Get a specific template version. If version not specified,
        returns the latest version.
        """
        if provider_model not in self._templates:
            raise ValueError(f"Unknown provider.model: {provider_model}")

        if template_name not in self._templates[provider_model]:
            raise ValueError(f"Unknown template: {template_name}")

        versions = self._templates[provider_model][template_name]
        if template_version:
            if template_version not in versions:
                raise ValueError(f"Template version {template_version} not found")
            return versions[template_version]

        # Return latest version
        latest = max(versions.keys(), key=version.parse)
        return versions[latest]

    def list_templates(self, provider_model: str) -> Dict[str, list]:
        """
        List all available templates and their versions for a provider.model
        Returns: {template_name: [version1, version2, ...]}
        """
        if provider_model not in self._templates:
            raise ValueError(f"Unknown provider.model: {provider_model}")

        return {
            template_name: sorted(versions.keys(), key=version.parse)
            for template_name, versions in self._templates[provider_model].items()
        }
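Given the directory layout in the docstring, a lookup would go roughly like this (provider, model, and template names are illustrative only):

# Illustrative lookup; provider/model/template names are examples.
from common.extensions import template_manager

tpl = template_manager.get_template('anthropic.claude-3', 'rag')  # latest version
print(tpl.version, tpl.metadata)

# Or pin an exact semantic version:
tpl_pinned = template_manager.get_template('anthropic.claude-3', 'rag', template_version='1.0.0')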


@@ -0,0 +1,51 @@
from langchain_openai import OpenAIEmbeddings
from typing import List, Any
import time

from common.utils.business_event_context import current_event


class TrackedOpenAIEmbeddings(OpenAIEmbeddings):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        start_time = time.time()
        result = super().embed_documents(texts)
        end_time = time.time()

        # Estimate token usage (OpenAI uses tiktoken for this)
        import tiktoken
        enc = tiktoken.encoding_for_model(self.model)
        total_tokens = sum(len(enc.encode(text)) for text in texts)

        metrics = {
            'total_tokens': total_tokens,
            'prompt_tokens': total_tokens,  # For embeddings, all tokens are prompt tokens
            'completion_tokens': 0,
            'time_elapsed': end_time - start_time,
            'interaction_type': 'Embedding',
        }
        current_event.log_llm_metrics(metrics)
        return result

    def embed_query(self, text: str) -> List[float]:
        start_time = time.time()
        result = super().embed_query(text)
        end_time = time.time()

        # Estimate token usage
        import tiktoken
        enc = tiktoken.encoding_for_model(self.model)
        total_tokens = len(enc.encode(text))

        metrics = {
            'total_tokens': total_tokens,
            'prompt_tokens': total_tokens,
            'completion_tokens': 0,
            'time_elapsed': end_time - start_time,
            'interaction_type': 'Embedding',
        }
        current_event.log_llm_metrics(metrics)
        return result
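A drop-in usage sketch (the model name is a placeholder, and current_event assumes an active business event):

# Usage sketch; the model name is a placeholder.
embeddings = TrackedOpenAIEmbeddings(model='text-embedding-3-small')
vector = embeddings.embed_query('What does the license include?')
# embed_query estimates tokens with tiktoken and logs an 'Embedding'
# metrics record on the current business event before returning.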


@@ -0,0 +1,77 @@
# common/langchain/tracked_transcription.py
from typing import Any, Optional, Dict
import time
from openai import OpenAI

from common.utils.business_event_context import current_event


class TrackedOpenAITranscription:
    """Wrapper for OpenAI transcription with metric tracking"""

    def __init__(self, api_key: str, **kwargs: Any):
        """Initialize with OpenAI client settings"""
        self.client = OpenAI(api_key=api_key)
        self.model = kwargs.get('model', 'whisper-1')

    def transcribe(self,
                   file: Any,
                   model: Optional[str] = None,
                   language: Optional[str] = None,
                   prompt: Optional[str] = None,
                   response_format: Optional[str] = None,
                   temperature: Optional[float] = None,
                   duration: Optional[int] = None) -> str:
        """
        Transcribe audio with metrics tracking

        Args:
            file: Audio file to transcribe
            model: Model to use (defaults to whisper-1)
            language: Optional language of the audio
            prompt: Optional prompt to guide transcription
            response_format: Response format (json, text, etc)
            temperature: Sampling temperature
            duration: Duration of audio in seconds for metrics

        Returns:
            Transcription text
        """
        start_time = time.time()

        try:
            # Create transcription options
            options = {
                "file": file,
                "model": model or self.model,
            }
            if language:
                options["language"] = language
            if prompt:
                options["prompt"] = prompt
            if response_format:
                options["response_format"] = response_format
            if temperature is not None:  # explicit check so a temperature of 0.0 is not dropped
                options["temperature"] = temperature

            response = self.client.audio.transcriptions.create(**options)

            # Calculate metrics
            end_time = time.time()

            # Token usage for transcriptions is based on audio duration
            metrics = {
                'total_tokens': duration or 600,  # Default to 10 minutes if duration not provided
                'prompt_tokens': 0,  # For transcriptions, all tokens are completion
                'completion_tokens': duration or 600,
                'time_elapsed': end_time - start_time,
                'interaction_type': 'ASR',
            }
            current_event.log_llm_metrics(metrics)

            # Return text from response
            if isinstance(response, str):
                return response
            return response.text
        except Exception as e:
            raise Exception(f"Transcription failed: {str(e)}") from e
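A usage sketch (file path and duration are placeholders):

# Usage sketch; the file path and duration are placeholders.
transcriber = TrackedOpenAITranscription(api_key='sk-...', model='whisper-1')
with open('meeting.mp3', 'rb') as audio_file:
    text = transcriber.transcribe(audio_file, language='en', duration=480)
# Passing `duration` (in seconds) keeps the logged ASR metrics accurate;
# otherwise the handler falls back to the 600-second default.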

2
common/models/README.txt Normal file

@@ -0,0 +1,2 @@
If models are added to the public schema (i.e. in the user domain), be sure to add their corresponding tables to
get_public_table_names in env.py, so tenant migrations pick them up!
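A minimal sketch of what that registration might look like (the actual shape of get_public_table_names in this repo's env.py is not shown here):

# Hypothetical shape of the registration the note refers to; the real
# env.py may differ.
def get_public_table_names():
    return [
        'tenant',
        'tenant_domain',
        'tenant_project',      # added with the TenantProject model
        'user',
        'license',
        'license_tier',
        'license_usage',
        'business_event_log',
    ]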


@@ -1,18 +1,87 @@
from common.extensions import db
from .user import User, Tenant
from pgvector.sqlalchemy import Vector
+from sqlalchemy.dialects.postgresql import JSONB
+from sqlalchemy.dialects.postgresql import ARRAY
+import sqlalchemy as sa
+
+
+class Catalog(db.Model):
+    id = db.Column(db.Integer, primary_key=True)
+    name = db.Column(db.String(50), nullable=False)
+    description = db.Column(db.Text, nullable=True)
+    type = db.Column(db.String(50), nullable=False, default="STANDARD_CATALOG")
+    min_chunk_size = db.Column(db.Integer, nullable=True, default=2000)
+    max_chunk_size = db.Column(db.Integer, nullable=True, default=3000)
+
+    # Meta Data
+    user_metadata = db.Column(JSONB, nullable=True)
+    system_metadata = db.Column(JSONB, nullable=True)
+    configuration = db.Column(JSONB, nullable=True)
+
+    # Versioning Information
+    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
+    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
+    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
+    updated_by = db.Column(db.Integer, db.ForeignKey(User.id))
+
+
+class Processor(db.Model):
+    id = db.Column(db.Integer, primary_key=True)
+    name = db.Column(db.String(50), nullable=False)
+    description = db.Column(db.Text, nullable=True)
+    catalog_id = db.Column(db.Integer, db.ForeignKey('catalog.id'), nullable=True)
+    type = db.Column(db.String(50), nullable=False)
+    sub_file_type = db.Column(db.String(50), nullable=True)
+
+    # Tuning enablers
+    tuning = db.Column(db.Boolean, nullable=True, default=False)
+
+    # Meta Data
+    user_metadata = db.Column(JSONB, nullable=True)
+    system_metadata = db.Column(JSONB, nullable=True)
+    configuration = db.Column(JSONB, nullable=True)
+
+    # Versioning Information
+    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
+    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
+    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
+    updated_by = db.Column(db.Integer, db.ForeignKey(User.id))
+
+
+class Retriever(db.Model):
+    id = db.Column(db.Integer, primary_key=True)
+    name = db.Column(db.String(50), nullable=False)
+    description = db.Column(db.Text, nullable=True)
+    catalog_id = db.Column(db.Integer, db.ForeignKey('catalog.id'), nullable=True)
+    type = db.Column(db.String(50), nullable=False, default="STANDARD_RAG")
+    tuning = db.Column(db.Boolean, nullable=True, default=False)
+
+    # Meta Data
+    user_metadata = db.Column(JSONB, nullable=True)
+    system_metadata = db.Column(JSONB, nullable=True)
+    configuration = db.Column(JSONB, nullable=True)
+    arguments = db.Column(JSONB, nullable=True)
+
+    # Versioning Information
+    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
+    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
+    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
+    updated_by = db.Column(db.Integer, db.ForeignKey(User.id))


class Document(db.Model):
    id = db.Column(db.Integer, primary_key=True)
-    tenant_id = db.Column(db.Integer, db.ForeignKey(Tenant.id), nullable=False)
+    # tenant_id = db.Column(db.Integer, db.ForeignKey(Tenant.id), nullable=False)
+    catalog_id = db.Column(db.Integer, db.ForeignKey(Catalog.id), nullable=True)
    name = db.Column(db.String(100), nullable=False)
    valid_from = db.Column(db.DateTime, nullable=True)
    valid_to = db.Column(db.DateTime, nullable=True)

    # Versioning Information
    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
-    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=False)
+    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
    updated_by = db.Column(db.Integer, db.ForeignKey(User.id))

@@ -27,12 +96,17 @@ class DocumentVersion(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    doc_id = db.Column(db.Integer, db.ForeignKey(Document.id), nullable=False)
    url = db.Column(db.String(200), nullable=True)
-    file_location = db.Column(db.String(255), nullable=True)
-    file_name = db.Column(db.String(200), nullable=True)
+    bucket_name = db.Column(db.String(255), nullable=True)
+    object_name = db.Column(db.String(200), nullable=True)
    file_type = db.Column(db.String(20), nullable=True)
+    sub_file_type = db.Column(db.String(50), nullable=True)
+    file_size = db.Column(db.Float, nullable=True)
    language = db.Column(db.String(2), nullable=False)
    user_context = db.Column(db.Text, nullable=True)
    system_context = db.Column(db.Text, nullable=True)
+    user_metadata = db.Column(JSONB, nullable=True)
+    system_metadata = db.Column(JSONB, nullable=True)
+    catalog_properties = db.Column(JSONB, nullable=True)

    # Versioning Information
    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())

@@ -52,12 +126,6 @@ class DocumentVersion(db.Model):
    def __repr__(self):
        return f"<DocumentVersion {self.document_language.document_id}.{self.document_language.language}>.{self.id}>"

-    def calc_file_location(self):
-        return f"{self.document.tenant_id}/{self.document.id}/{self.language}"
-
-    def calc_file_name(self):
-        return f"{self.id}.{self.file_type}"


class Embedding(db.Model):
    __tablename__ = 'embeddings'


@@ -0,0 +1,110 @@
from common.extensions import db


class BusinessEventLog(db.Model):
    __bind_key__ = 'public'
    __table_args__ = {'schema': 'public'}

    id = db.Column(db.Integer, primary_key=True)
    timestamp = db.Column(db.DateTime, nullable=False)
    event_type = db.Column(db.String(50), nullable=False)
    tenant_id = db.Column(db.Integer, nullable=False)
    trace_id = db.Column(db.String(50), nullable=False)
    span_id = db.Column(db.String(50))
    span_name = db.Column(db.String(50))
    parent_span_id = db.Column(db.String(50))
    document_version_id = db.Column(db.Integer)
    document_version_file_size = db.Column(db.Float)
    chat_session_id = db.Column(db.String(50))
    interaction_id = db.Column(db.Integer)
    environment = db.Column(db.String(20))
    llm_metrics_total_tokens = db.Column(db.Integer)
    llm_metrics_prompt_tokens = db.Column(db.Integer)
    llm_metrics_completion_tokens = db.Column(db.Integer)
    llm_metrics_total_time = db.Column(db.Float)
    llm_metrics_call_count = db.Column(db.Integer)
    llm_interaction_type = db.Column(db.String(20))
    message = db.Column(db.Text)
    license_usage_id = db.Column(db.Integer, db.ForeignKey('public.license_usage.id'), nullable=True)
    license_usage = db.relationship('LicenseUsage', backref='events')


class License(db.Model):
    __bind_key__ = 'public'
    __table_args__ = {'schema': 'public'}

    id = db.Column(db.Integer, primary_key=True)
    tenant_id = db.Column(db.Integer, db.ForeignKey('public.tenant.id'), nullable=False)
    tier_id = db.Column(db.Integer, db.ForeignKey('public.license_tier.id'), nullable=False)  # 'small', 'medium', 'custom'
    start_date = db.Column(db.Date, nullable=False)
    end_date = db.Column(db.Date, nullable=True)
    currency = db.Column(db.String(20), nullable=False)
    yearly_payment = db.Column(db.Boolean, nullable=False, default=False)
    basic_fee = db.Column(db.Float, nullable=False)
    max_storage_mb = db.Column(db.Integer, nullable=False)
    additional_storage_price = db.Column(db.Float, nullable=False)
    additional_storage_bucket = db.Column(db.Integer, nullable=False)
    included_embedding_mb = db.Column(db.Integer, nullable=False)
    additional_embedding_price = db.Column(db.Numeric(10, 4), nullable=False)
    additional_embedding_bucket = db.Column(db.Integer, nullable=False)
    included_interaction_tokens = db.Column(db.Integer, nullable=False)
    additional_interaction_token_price = db.Column(db.Numeric(10, 4), nullable=False)
    additional_interaction_bucket = db.Column(db.Integer, nullable=False)
    overage_embedding = db.Column(db.Float, nullable=False, default=0)
    overage_interaction = db.Column(db.Float, nullable=False, default=0)

    tenant = db.relationship('Tenant', back_populates='licenses')
    license_tier = db.relationship('LicenseTier', back_populates='licenses')
    usages = db.relationship('LicenseUsage', order_by='LicenseUsage.period_start_date', back_populates='license')


class LicenseTier(db.Model):
    __bind_key__ = 'public'
    __table_args__ = {'schema': 'public'}

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50), nullable=False)
    version = db.Column(db.String(50), nullable=False)
    start_date = db.Column(db.Date, nullable=False)
    end_date = db.Column(db.Date, nullable=True)
    basic_fee_d = db.Column(db.Float, nullable=True)
    basic_fee_e = db.Column(db.Float, nullable=True)
    max_storage_mb = db.Column(db.Integer, nullable=False)
    additional_storage_price_d = db.Column(db.Numeric(10, 4), nullable=False)
    additional_storage_price_e = db.Column(db.Numeric(10, 4), nullable=False)
    additional_storage_bucket = db.Column(db.Integer, nullable=False)
    included_embedding_mb = db.Column(db.Integer, nullable=False)
    additional_embedding_price_d = db.Column(db.Numeric(10, 4), nullable=False)
    additional_embedding_price_e = db.Column(db.Numeric(10, 4), nullable=False)
    additional_embedding_bucket = db.Column(db.Integer, nullable=False)
    included_interaction_tokens = db.Column(db.Integer, nullable=False)
    additional_interaction_token_price_d = db.Column(db.Numeric(10, 4), nullable=False)
    additional_interaction_token_price_e = db.Column(db.Numeric(10, 4), nullable=False)
    additional_interaction_bucket = db.Column(db.Integer, nullable=False)
    standard_overage_embedding = db.Column(db.Float, nullable=False, default=0)
    standard_overage_interaction = db.Column(db.Float, nullable=False, default=0)

    licenses = db.relationship('License', back_populates='license_tier')


class LicenseUsage(db.Model):
    __bind_key__ = 'public'
    __table_args__ = {'schema': 'public'}

    id = db.Column(db.Integer, primary_key=True)
    license_id = db.Column(db.Integer, db.ForeignKey('public.license.id'), nullable=False)
    tenant_id = db.Column(db.Integer, db.ForeignKey('public.tenant.id'), nullable=False)
    storage_mb_used = db.Column(db.Float, default=0)
    embedding_mb_used = db.Column(db.Float, default=0)
    embedding_prompt_tokens_used = db.Column(db.Integer, default=0)
    embedding_completion_tokens_used = db.Column(db.Integer, default=0)
    embedding_total_tokens_used = db.Column(db.Integer, default=0)
    interaction_prompt_tokens_used = db.Column(db.Integer, default=0)
    interaction_completion_tokens_used = db.Column(db.Integer, default=0)
    interaction_total_tokens_used = db.Column(db.Integer, default=0)
    period_start_date = db.Column(db.Date, nullable=False)
    period_end_date = db.Column(db.Date, nullable=False)

    license = db.relationship('License', back_populates='usages')
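The hourly usage task seen earlier in the changelog is not part of this diff; conceptually it aggregates BusinessEventLog rows into the open LicenseUsage period, for example (a sketch; the filter and column choices are assumptions based on the models above, not the actual task code):

# Conceptual sketch of usage aggregation; filters and columns are assumptions
# based on the models above, not the actual task code.
from sqlalchemy import func

def accumulate_interaction_tokens(usage):
    total = (
        db.session.query(func.coalesce(func.sum(BusinessEventLog.llm_metrics_total_tokens), 0))
        .filter(
            BusinessEventLog.tenant_id == usage.tenant_id,
            BusinessEventLog.llm_interaction_type == 'LLM',
            func.date(BusinessEventLog.timestamp) >= usage.period_start_date,
            func.date(BusinessEventLog.timestamp) <= usage.period_end_date,
        )
        .scalar()
    )
    usage.interaction_total_tokens_used = total
    db.session.commit()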


@@ -1,6 +1,8 @@
+from sqlalchemy.dialects.postgresql import JSONB
from ..extensions import db
from .user import User, Tenant
-from .document import Embedding
+from .document import Embedding, Retriever


class ChatSession(db.Model):
@@ -18,14 +20,32 @@ class ChatSession(db.Model):
        return f"<ChatSession {self.id} by {self.user_id}>"


+class Specialist(db.Model):
+    id = db.Column(db.Integer, primary_key=True)
+    name = db.Column(db.String(50), nullable=False)
+    description = db.Column(db.Text, nullable=True)
+    type = db.Column(db.String(50), nullable=False, default="STANDARD_RAG")
+    tuning = db.Column(db.Boolean, nullable=True, default=False)
+    configuration = db.Column(JSONB, nullable=True)
+    arguments = db.Column(JSONB, nullable=True)
+
+    # Relationship to retrievers through the association table
+    retrievers = db.relationship('SpecialistRetriever', backref='specialist', lazy=True,
+                                 cascade="all, delete-orphan")
+
+    # Versioning Information
+    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
+    created_by = db.Column(db.Integer, db.ForeignKey(User.id), nullable=True)
+    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
+    updated_by = db.Column(db.Integer, db.ForeignKey(User.id))


class Interaction(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    chat_session_id = db.Column(db.Integer, db.ForeignKey(ChatSession.id), nullable=False)
-    question = db.Column(db.Text, nullable=False)
-    detailed_question = db.Column(db.Text, nullable=True)
-    answer = db.Column(db.Text, nullable=True)
-    algorithm_used = db.Column(db.String(20), nullable=True)
-    language = db.Column(db.String(2), nullable=False)
+    specialist_id = db.Column(db.Integer, db.ForeignKey(Specialist.id), nullable=True)
+    specialist_arguments = db.Column(JSONB, nullable=True)
+    specialist_results = db.Column(JSONB, nullable=True)
    timezone = db.Column(db.String(30), nullable=True)
    appreciation = db.Column(db.Integer, nullable=True)

@@ -44,3 +64,10 @@ class Interaction(db.Model):
class InteractionEmbedding(db.Model):
    interaction_id = db.Column(db.Integer, db.ForeignKey(Interaction.id, ondelete='CASCADE'), primary_key=True)
    embedding_id = db.Column(db.Integer, db.ForeignKey(Embedding.id, ondelete='CASCADE'), primary_key=True)


+class SpecialistRetriever(db.Model):
+    specialist_id = db.Column(db.Integer, db.ForeignKey(Specialist.id, ondelete='CASCADE'), primary_key=True)
+    retriever_id = db.Column(db.Integer, db.ForeignKey(Retriever.id, ondelete='CASCADE'), primary_key=True)
+
+    retriever = db.relationship("Retriever", backref="specialist_retrievers")
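Creating a specialist and attaching a retriever through the association table might then look like this (illustrative values; `manuals_retriever` is a placeholder for an existing Retriever row):

# Illustrative wiring of a Specialist to an existing Retriever; names and
# configuration values are examples, not seed data from the project.
specialist = Specialist(
    name='Support RAG',
    type='STANDARD_RAG',
    configuration={'similarity_threshold': 0.7, 'k': 5},
)
specialist.retrievers.append(SpecialistRetriever(retriever_id=manuals_retriever.id))
db.session.add(specialist)
db.session.commit()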


@@ -1,8 +1,11 @@
from datetime import date
from common.extensions import db from common.extensions import db
from flask_security import UserMixin, RoleMixin from flask_security import UserMixin, RoleMixin
from sqlalchemy.dialects.postgresql import ARRAY from sqlalchemy.dialects.postgresql import ARRAY
import sqlalchemy as sa import sqlalchemy as sa
from sqlalchemy import CheckConstraint
from common.models.entitlements import License
class Tenant(db.Model): class Tenant(db.Model):
@@ -21,6 +24,7 @@ class Tenant(db.Model):
website = db.Column(db.String(255), nullable=True) website = db.Column(db.String(255), nullable=True)
timezone = db.Column(db.String(50), nullable=True, default='UTC') timezone = db.Column(db.String(50), nullable=True, default='UTC')
rag_context = db.Column(db.Text, nullable=True) rag_context = db.Column(db.Text, nullable=True)
type = db.Column(db.String(20), nullable=True, server_default='Active')
# language information # language information
default_language = db.Column(db.String(2), nullable=True) default_language = db.Column(db.String(2), nullable=True)
@@ -30,37 +34,24 @@ class Tenant(db.Model):
embedding_model = db.Column(db.String(50), nullable=True) embedding_model = db.Column(db.String(50), nullable=True)
llm_model = db.Column(db.String(50), nullable=True) llm_model = db.Column(db.String(50), nullable=True)
# Embedding variables # Entitlements
html_tags = db.Column(ARRAY(sa.String(10)), nullable=True, default=['p', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'li']) currency = db.Column(db.String(20), nullable=True)
html_end_tags = db.Column(ARRAY(sa.String(10)), nullable=True, default=['p', 'li']) storage_dirty = db.Column(db.Boolean, nullable=True, default=False)
html_included_elements = db.Column(ARRAY(sa.String(50)), nullable=True)
html_excluded_elements = db.Column(ARRAY(sa.String(50)), nullable=True)
min_chunk_size = db.Column(db.Integer, nullable=True, default=2000)
max_chunk_size = db.Column(db.Integer, nullable=True, default=3000)
# Embedding search variables
es_k = db.Column(db.Integer, nullable=True, default=5)
es_similarity_threshold = db.Column(db.Float, nullable=True, default=0.7)
# Chat variables
chat_RAG_temperature = db.Column(db.Float, nullable=True, default=0.3)
chat_no_RAG_temperature = db.Column(db.Float, nullable=True, default=0.5)
fallback_algorithms = db.Column(ARRAY(sa.String(50)), nullable=True)
# Licensing Information
license_start_date = db.Column(db.Date, nullable=True)
license_end_date = db.Column(db.Date, nullable=True)
allowed_monthly_interactions = db.Column(db.Integer, nullable=True)
encrypted_chat_api_key = db.Column(db.String(500), nullable=True)
# Tuning enablers
embed_tuning = db.Column(db.Boolean, nullable=True, default=False)
rag_tuning = db.Column(db.Boolean, nullable=True, default=False)
# Relations # Relations
users = db.relationship('User', backref='tenant') users = db.relationship('User', backref='tenant')
domains = db.relationship('TenantDomain', backref='tenant') domains = db.relationship('TenantDomain', backref='tenant')
licenses = db.relationship('License', back_populates='tenant')
license_usages = db.relationship('LicenseUsage', backref='tenant')
@property
def current_license(self):
today = date.today()
return License.query.filter(
License.tenant_id == self.id,
License.start_date <= today,
(License.end_date.is_(None) | (License.end_date >= today))
).order_by(License.start_date.desc()).first()
def __repr__(self): def __repr__(self):
return f"<Tenant {self.id}: {self.name}>" return f"<Tenant {self.id}: {self.name}>"
@@ -72,26 +63,12 @@ class Tenant(db.Model):
             'website': self.website,
             'timezone': self.timezone,
             'rag_context': self.rag_context,
-            'type': self.type,
             'default_language': self.default_language,
             'allowed_languages': self.allowed_languages,
             'embedding_model': self.embedding_model,
             'llm_model': self.llm_model,
-            'html_tags': self.html_tags,
-            'html_end_tags': self.html_end_tags,
-            'html_included_elements': self.html_included_elements,
-            'html_excluded_elements': self.html_excluded_elements,
-            'min_chunk_size': self.min_chunk_size,
-            'max_chunk_size': self.max_chunk_size,
-            'es_k': self.es_k,
-            'es_similarity_threshold': self.es_similarity_threshold,
-            'chat_RAG_temperature': self.chat_RAG_temperature,
-            'chat_no_RAG_temperature': self.chat_no_RAG_temperature,
-            'fallback_algorithms': self.fallback_algorithms,
-            'license_start_date': self.license_start_date,
-            'license_end_date': self.license_end_date,
-            'allowed_monthly_interactions': self.allowed_monthly_interactions,
-            'embed_tuning': self.embed_tuning,
-            'rag_tuning': self.rag_tuning,
+            'currency': self.currency,
         }
@@ -133,6 +110,8 @@ class User(db.Model, UserMixin):
     fs_uniquifier = db.Column(db.String(255), unique=True, nullable=False)
     confirmed_at = db.Column(db.DateTime, nullable=True)
     valid_to = db.Column(db.Date, nullable=True)
+    is_primary_contact = db.Column(db.Boolean, nullable=True, default=False)
+    is_financial_contact = db.Column(db.Boolean, nullable=True, default=False)

     # Security Trackable Information
     last_login_at = db.Column(db.DateTime, nullable=True)
@@ -173,3 +152,29 @@ class TenantDomain(db.Model):

     def __repr__(self):
         return f"<TenantDomain {self.id}: {self.domain}>"
+
+
+class TenantProject(db.Model):
+    __bind_key__ = 'public'
+    __table_args__ = {'schema': 'public'}
+
+    id = db.Column(db.Integer, primary_key=True)
+    tenant_id = db.Column(db.Integer, db.ForeignKey('public.tenant.id'), nullable=False)
+    name = db.Column(db.String(50), nullable=False)
+    description = db.Column(db.Text, nullable=True)
+    services = db.Column(ARRAY(sa.String(50)), nullable=False)
+    encrypted_api_key = db.Column(db.String(500), nullable=True)
+    visual_api_key = db.Column(db.String(20), nullable=True)
+    active = db.Column(db.Boolean, nullable=False, default=True)
+    responsible_email = db.Column(db.String(255), nullable=True)
+
+    # Versioning Information
+    created_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now())
+    created_by = db.Column(db.Integer, db.ForeignKey('public.user.id'), nullable=True)
+    updated_at = db.Column(db.DateTime, nullable=False, server_default=db.func.now(), onupdate=db.func.now())
+    updated_by = db.Column(db.Integer, db.ForeignKey('public.user.id'))
+
+    # Relations
+    tenant = db.relationship('Tenant', backref='projects')
+
+    def __repr__(self):
+        return f"<TenantProject {self.id}: {self.name}>"


@@ -0,0 +1,245 @@
import os
import uuid
from contextlib import contextmanager
from datetime import datetime
from typing import Dict, Any, Optional
from datetime import datetime as dt, timezone as tz
import logging
from .business_event_context import BusinessEventContext
from common.models.entitlements import BusinessEventLog
from common.extensions import db
class BusinessEvent:
# The BusinessEvent class itself is a context manager, but it doesn't use the @contextmanager decorator.
# Instead, it defines __enter__ and __exit__ methods explicitly. This is because we're doing something a bit more
# complex - we're interacting with the BusinessEventContext and the _business_event_stack.
def __init__(self, event_type: str, tenant_id: int, **kwargs):
self.event_type = event_type
self.tenant_id = tenant_id
self.trace_id = str(uuid.uuid4())
self.span_id = None
self.span_name = None
self.parent_span_id = None
self.document_version_id = kwargs.get('document_version_id')
self.document_version_file_size = kwargs.get('document_version_file_size')
self.chat_session_id = kwargs.get('chat_session_id')
self.interaction_id = kwargs.get('interaction_id')
self.environment = os.environ.get("FLASK_ENV", "development")
self.span_counter = 0
self.spans = []
self.llm_metrics = {
'total_tokens': 0,
'prompt_tokens': 0,
'completion_tokens': 0,
'total_time': 0,
'call_count': 0,
'interaction_type': None
}
def update_attribute(self, attribute: str, value: Any):
if hasattr(self, attribute):
setattr(self, attribute, value)
else:
raise AttributeError(f"'{self.__class__.__name__}' object has no attribute '{attribute}'")
def update_llm_metrics(self, metrics: dict):
self.llm_metrics['total_tokens'] += metrics['total_tokens']
self.llm_metrics['prompt_tokens'] += metrics['prompt_tokens']
self.llm_metrics['completion_tokens'] += metrics['completion_tokens']
self.llm_metrics['total_time'] += metrics['time_elapsed']
self.llm_metrics['call_count'] += 1
self.llm_metrics['interaction_type'] = metrics['interaction_type']
def reset_llm_metrics(self):
self.llm_metrics['total_tokens'] = 0
self.llm_metrics['prompt_tokens'] = 0
self.llm_metrics['completion_tokens'] = 0
self.llm_metrics['total_time'] = 0
self.llm_metrics['call_count'] = 0
self.llm_metrics['interaction_type'] = None
@contextmanager
def create_span(self, span_name: str):
# The create_span method is designed to be used as a context manager. We want to perform some actions when
# entering the span (like setting the span ID and name) and some actions when exiting the span (like removing
# these temporary attributes). The @contextmanager decorator allows us to write this method in a way that
# clearly separates the "entry" and "exit" logic, with the yield statement in between.
parent_span_id = self.span_id
self.span_counter += 1
new_span_id = str(uuid.uuid4())
# Save the current span info
self.spans.append((self.span_id, self.span_name, self.parent_span_id))
# Set the new span info
self.span_id = new_span_id
self.span_name = span_name
self.parent_span_id = parent_span_id
self.log(f"Starting span {span_name}")
try:
yield
finally:
if self.llm_metrics['call_count'] > 0:
self.log_final_metrics()
self.reset_llm_metrics()
self.log(f"Ending span {span_name}")
# Restore the previous span info
if self.spans:
self.span_id, self.span_name, self.parent_span_id = self.spans.pop()
else:
self.span_id = None
self.span_name = None
self.parent_span_id = None
def log(self, message: str, level: str = 'info'):
logger = logging.getLogger('business_events')
log_data = {
'event_type': self.event_type,
'tenant_id': self.tenant_id,
'trace_id': self.trace_id,
'span_id': self.span_id,
'span_name': self.span_name,
'parent_span_id': self.parent_span_id,
'document_version_id': self.document_version_id,
'document_version_file_size': self.document_version_file_size,
'chat_session_id': self.chat_session_id,
'interaction_id': self.interaction_id,
'environment': self.environment,
}
# log to Graylog
getattr(logger, level)(message, extra=log_data)
# Log to database
event_log = BusinessEventLog(
timestamp=dt.now(tz=tz.utc),
event_type=self.event_type,
tenant_id=self.tenant_id,
trace_id=self.trace_id,
span_id=self.span_id,
span_name=self.span_name,
parent_span_id=self.parent_span_id,
document_version_id=self.document_version_id,
document_version_file_size=self.document_version_file_size,
chat_session_id=self.chat_session_id,
interaction_id=self.interaction_id,
environment=self.environment,
message=message
)
db.session.add(event_log)
db.session.commit()
def log_llm_metrics(self, metrics: dict, level: str = 'info'):
self.update_llm_metrics(metrics)
message = "LLM Metrics"
logger = logging.getLogger('business_events')
log_data = {
'event_type': self.event_type,
'tenant_id': self.tenant_id,
'trace_id': self.trace_id,
'span_id': self.span_id,
'span_name': self.span_name,
'parent_span_id': self.parent_span_id,
'document_version_id': self.document_version_id,
'document_version_file_size': self.document_version_file_size,
'chat_session_id': self.chat_session_id,
'interaction_id': self.interaction_id,
'environment': self.environment,
'llm_metrics_total_tokens': metrics['total_tokens'],
'llm_metrics_prompt_tokens': metrics['prompt_tokens'],
'llm_metrics_completion_tokens': metrics['completion_tokens'],
'llm_metrics_total_time': metrics['time_elapsed'],
'llm_interaction_type': metrics['interaction_type'],
}
# log to Graylog
getattr(logger, level)(message, extra=log_data)
# Log to database
event_log = BusinessEventLog(
timestamp=dt.now(tz=tz.utc),
event_type=self.event_type,
tenant_id=self.tenant_id,
trace_id=self.trace_id,
span_id=self.span_id,
span_name=self.span_name,
parent_span_id=self.parent_span_id,
document_version_id=self.document_version_id,
document_version_file_size=self.document_version_file_size,
chat_session_id=self.chat_session_id,
interaction_id=self.interaction_id,
environment=self.environment,
llm_metrics_total_tokens=metrics['total_tokens'],
llm_metrics_prompt_tokens=metrics['prompt_tokens'],
llm_metrics_completion_tokens=metrics['completion_tokens'],
llm_metrics_total_time=metrics['time_elapsed'],
llm_interaction_type=metrics['interaction_type'],
message=message
)
db.session.add(event_log)
db.session.commit()
def log_final_metrics(self, level: str = 'info'):
logger = logging.getLogger('business_events')
message = "Final LLM Metrics"
log_data = {
'event_type': self.event_type,
'tenant_id': self.tenant_id,
'trace_id': self.trace_id,
'span_id': self.span_id,
'span_name': self.span_name,
'parent_span_id': self.parent_span_id,
'document_version_id': self.document_version_id,
'document_version_file_size': self.document_version_file_size,
'chat_session_id': self.chat_session_id,
'interaction_id': self.interaction_id,
'environment': self.environment,
'llm_metrics_total_tokens': self.llm_metrics['total_tokens'],
'llm_metrics_prompt_tokens': self.llm_metrics['prompt_tokens'],
'llm_metrics_completion_tokens': self.llm_metrics['completion_tokens'],
'llm_metrics_total_time': self.llm_metrics['total_time'],
'llm_metrics_call_count': self.llm_metrics['call_count'],
'llm_interaction_type': self.llm_metrics['interaction_type'],
}
# log to Graylog
getattr(logger, level)(message, extra=log_data)
# Log to database
event_log = BusinessEventLog(
timestamp=dt.now(tz=tz.utc),
event_type=self.event_type,
tenant_id=self.tenant_id,
trace_id=self.trace_id,
span_id=self.span_id,
span_name=self.span_name,
parent_span_id=self.parent_span_id,
document_version_id=self.document_version_id,
document_version_file_size=self.document_version_file_size,
chat_session_id=self.chat_session_id,
interaction_id=self.interaction_id,
environment=self.environment,
llm_metrics_total_tokens=self.llm_metrics['total_tokens'],
llm_metrics_prompt_tokens=self.llm_metrics['prompt_tokens'],
llm_metrics_completion_tokens=self.llm_metrics['completion_tokens'],
llm_metrics_total_time=self.llm_metrics['total_time'],
llm_metrics_call_count=self.llm_metrics['call_count'],
llm_interaction_type=self.llm_metrics['interaction_type'],
message=message
)
db.session.add(event_log)
db.session.commit()
def __enter__(self):
self.log(f'Starting Trace for {self.event_type}')
return BusinessEventContext(self).__enter__()
def __exit__(self, exc_type, exc_val, exc_tb):
if self.llm_metrics['call_count'] > 0:
self.log_final_metrics()
self.reset_llm_metrics()
self.log(f'Ending Trace for {self.event_type}')
return BusinessEventContext(self).__exit__(exc_type, exc_val, exc_tb)
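
For orientation, a minimal usage sketch of the class above (values are hypothetical; an app context and database session are assumed, since log() also writes a BusinessEventLog row):

    with BusinessEvent('embedding', tenant_id=42, document_version_id=7) as event:
        with event.create_span('chunking'):
            event.log('Chunked document into 12 parts')
        with event.create_span('llm_call'):
            event.log_llm_metrics({
                'total_tokens': 900,
                'prompt_tokens': 700,
                'completion_tokens': 200,
                'time_elapsed': 1.3,
                'interaction_type': 'rag',
            })
    # Spans flush their LLM metrics on exit; __exit__ flushes anything still accumulated.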


@@ -0,0 +1,25 @@
from werkzeug.local import LocalProxy, LocalStack
_business_event_stack = LocalStack()
def _get_current_event():
top = _business_event_stack.top
if top is None:
raise RuntimeError("No business event context found. Are you sure you're in a business event?")
return top
current_event = LocalProxy(_get_current_event)
class BusinessEventContext:
def __init__(self, event):
self.event = event
def __enter__(self):
_business_event_stack.push(self.event)
return self.event
def __exit__(self, exc_type, exc_val, exc_tb):
_business_event_stack.pop()
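
The LocalProxy gives task code ambient access to the active event, much like Flask's current_app. A hedged sketch (the import path is inferred from the relative import in business_event.py):

    from common.utils.business_event_context import current_event  # assumed module location

    def some_pipeline_step():
        # Valid anywhere inside a `with BusinessEvent(...)` block;
        # raises RuntimeError outside of one.
        current_event.log('Reached checkpoint')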

common/utils/cache/__init__old.py

common/utils/cache/base.py

@@ -0,0 +1,89 @@
# common/utils/cache/base.py
from typing import Any, Dict, List, Optional, TypeVar, Generic, Type
from dataclasses import dataclass
from flask import Flask
from dogpile.cache import CacheRegion
T = TypeVar('T')
@dataclass
class CacheKey:
"""Represents a cache key with multiple components"""
components: Dict[str, Any]
def __str__(self) -> str:
return ":".join(f"{k}={v}" for k, v in sorted(self.components.items()))
class CacheInvalidationManager:
"""Manages cache invalidation subscriptions"""
def __init__(self):
self._subscribers = {}
def subscribe(self, model: str, handler: 'CacheHandler', key_fields: List[str]):
if model not in self._subscribers:
self._subscribers[model] = []
self._subscribers[model].append((handler, key_fields))
def notify_change(self, model: str, **identifiers):
if model in self._subscribers:
for handler, key_fields in self._subscribers[model]:
if all(field in identifiers for field in key_fields):
handler.invalidate_by_model(model, **identifiers)
class CacheHandler(Generic[T]):
"""Base cache handler implementation"""
def __init__(self, region: CacheRegion, prefix: str):
self.region = region
self.prefix = prefix
self._key_components = []
def configure_keys(self, *components: str):
self._key_components = components
return self
def subscribe_to_model(self, model: str, key_fields: List[str]):
invalidation_manager.subscribe(model, self, key_fields)
return self
def generate_key(self, **identifiers) -> str:
missing = set(self._key_components) - set(identifiers.keys())
if missing:
raise ValueError(f"Missing key components: {missing}")
key = CacheKey({k: identifiers[k] for k in self._key_components})
return f"{self.prefix}:{str(key)}"
def get(self, creator_func, **identifiers) -> T:
cache_key = self.generate_key(**identifiers)
def creator():
instance = creator_func(**identifiers)
return self.to_cache_data(instance)
cached_data = self.region.get_or_create(
cache_key,
creator,
should_cache_fn=self.should_cache
)
return self.from_cache_data(cached_data, **identifiers)
def invalidate(self, **identifiers):
cache_key = self.generate_key(**identifiers)
self.region.delete(cache_key)
def invalidate_by_model(self, model: str, **identifiers):
try:
self.invalidate(**identifiers)
except ValueError:
pass
# Create global invalidation manager
invalidation_manager = CacheInvalidationManager()
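
CacheHandler.get() relies on to_cache_data(), from_cache_data() and should_cache(), which the base class leaves to subclasses. A minimal sketch of a concrete handler (the class, prefix and model names are hypothetical):

    class RetrieverConfigHandler(CacheHandler[dict]):
        handler_name = 'retriever_configs'  # required by the manager's register_handler()

        def __init__(self, region: CacheRegion):
            super().__init__(region, prefix='retriever_config')
            self.configure_keys('tenant_id', 'retriever_id')
            self.subscribe_to_model('Retriever', ['tenant_id', 'retriever_id'])

        def to_cache_data(self, instance) -> dict:
            return dict(instance)  # serialize to something Redis-safe

        def from_cache_data(self, data, **identifiers) -> dict:
            return data  # rebuild the working object on the way out

        def should_cache(self, value) -> bool:
            return value is not None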


@@ -0,0 +1,39 @@
from typing import Type
from flask import Flask
from common.utils.cache.base import CacheHandler
class EveAICacheManager:
"""Cache manager with registration capabilities"""
def __init__(self):
self._regions = {}
self._handlers = {}
def init_app(self, app: Flask):
"""Initialize cache regions"""
from common.utils.cache.regions import create_cache_regions
self._regions = create_cache_regions(app)
# Store regions in instance
for region_name, region in self._regions.items():
setattr(self, f"{region_name}_region", region)
# Initialize all registered handlers with their regions
for handler_class, region_name in self._handlers.items():
region = self._regions[region_name]
handler_instance = handler_class(region)
handler_name = getattr(handler_class, 'handler_name', None)
if handler_name:
app.logger.debug(f"{handler_name} is registered")
setattr(self, handler_name, handler_instance)
app.logger.info('Cache regions initialized: ' + ', '.join(self._regions.keys()))
def register_handler(self, handler_class: Type[CacheHandler], region: str):
"""Register a cache handler class with its region"""
if not hasattr(handler_class, 'handler_name'):
raise ValueError("Cache handler must define handler_name class attribute")
self._handlers[handler_class] = region
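
A hedged wiring sketch (handler and factory names are assumptions): handlers are registered at import time, and init_app() later binds each one to its region and exposes it as an attribute.

    cache_manager = EveAICacheManager()
    cache_manager.register_handler(RetrieverConfigHandler, region='eveai_chat_workers')

    def create_app() -> Flask:
        app = Flask(__name__)
        app.config['REDIS_BASE_URI'] = 'redis://localhost:6379'
        cache_manager.init_app(app)
        # The handler is now reachable as cache_manager.retriever_configs
        return app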

common/utils/cache/regions.py

@@ -0,0 +1,65 @@
# common/utils/cache/regions.py
from dogpile.cache import make_region
from urllib.parse import urlparse
import os
def get_redis_config(app):
"""
Create Redis configuration dict based on app config
Handles both authenticated and non-authenticated setups
"""
# Parse the REDIS_BASE_URI to get all components
redis_uri = urlparse(app.config['REDIS_BASE_URI'])
config = {
'host': redis_uri.hostname,
'port': int(redis_uri.port or 6379),
'db': 4, # Keep this for later use
'redis_expiration_time': 3600,
'distributed_lock': True,
'thread_local_lock': False,
}
# Add authentication if provided
if redis_uri.username and redis_uri.password:
config.update({
'username': redis_uri.username,
'password': redis_uri.password
})
return config
def create_cache_regions(app):
"""Initialize all cache regions with app config"""
redis_config = get_redis_config(app)
regions = {}
# Region for model-related caching (ModelVariables etc)
model_region = make_region(name='eveai_model').configure(
'dogpile.cache.redis',
arguments=redis_config,
replace_existing_backend=True
)
regions['eveai_model'] = model_region
# Region for eveai_chat_workers components (Specialists, Retrievers, ...)
eveai_chat_workers_region = make_region(name='eveai_chat_workers').configure(
'dogpile.cache.redis',
arguments=redis_config, # arguments={**redis_config, 'db': 4}, # Different DB
replace_existing_backend=True
)
regions['eveai_chat_workers'] = eveai_chat_workers_region
# Region for eveai_workers components (Processors, ...)
eveai_workers_region = make_region(name='eveai_workers').configure(
'dogpile.cache.redis',
arguments=redis_config, # Same config for now
replace_existing_backend=True
)
regions['eveai_workers'] = eveai_workers_region
return regions
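
For example, assuming REDIS_BASE_URI follows the usual redis:// URI shape:

    # REDIS_BASE_URI = 'redis://cache_user:secret@redis:6379'  (hypothetical)
    # get_redis_config(app) then yields:
    # {'host': 'redis', 'port': 6379, 'db': 4, 'redis_expiration_time': 3600,
    #  'distributed_lock': True, 'thread_local_lock': False,
    #  'username': 'cache_user', 'password': 'secret'}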


@@ -1,14 +1,14 @@
 from celery import Celery
 from kombu import Queue
 from werkzeug.local import LocalProxy
+from redbeat import RedBeatScheduler

 celery_app = Celery()


-def init_celery(celery, app):
+def init_celery(celery, app, is_beat=False):
     celery_app.main = app.name
-    app.logger.debug(f'CELERY_BROKER_URL: {app.config["CELERY_BROKER_URL"]}')
-    app.logger.debug(f'CELERY_RESULT_BACKEND: {app.config["CELERY_RESULT_BACKEND"]}')
     celery_config = {
         'broker_url': app.config.get('CELERY_BROKER_URL', 'redis://localhost:6379/0'),
         'result_backend': app.config.get('CELERY_RESULT_BACKEND', 'redis://localhost:6379/0'),
@@ -17,19 +17,40 @@ def init_celery(celery, app):
         'accept_content': app.config.get('CELERY_ACCEPT_CONTENT', ['json']),
         'timezone': app.config.get('CELERY_TIMEZONE', 'UTC'),
         'enable_utc': app.config.get('CELERY_ENABLE_UTC', True),
-        'task_routes': {'eveai_worker.tasks.create_embeddings': {'queue': 'embeddings',
-                                                                 'routing_key': 'embeddings.create_embeddings'}},
     }
+    if is_beat:
+        # Add configurations specific to Beat scheduler
+        celery_config['beat_scheduler'] = 'redbeat.RedBeatScheduler'
+        celery_config['redbeat_lock_key'] = 'redbeat::lock'
+        celery_config['beat_max_loop_interval'] = 10  # Adjust as needed
     celery_app.conf.update(**celery_config)

-    # Setting up Celery task queues
-    celery_app.conf.task_queues = (
-        Queue('default', routing_key='task.#'),
-        Queue('embeddings', routing_key='embeddings.#', queue_arguments={'x-max-priority': 10}),
-        Queue('llm_interactions', routing_key='llm_interactions.#', queue_arguments={'x-max-priority': 5}),
-    )
+    # Task queues for workers only
+    if not is_beat:
+        celery_app.conf.task_queues = (
+            Queue('default', routing_key='task.#'),
+            Queue('embeddings', routing_key='embeddings.#', queue_arguments={'x-max-priority': 10}),
+            Queue('llm_interactions', routing_key='llm_interactions.#', queue_arguments={'x-max-priority': 5}),
+            Queue('entitlements', routing_key='entitlements.#', queue_arguments={'x-max-priority': 10}),
+        )
+        celery_app.conf.task_routes = {
+            'eveai_workers.*': {  # All tasks from eveai_workers module
+                'queue': 'embeddings',
+                'routing_key': 'embeddings.#',
+            },
+            'eveai_chat_workers.*': {  # All tasks from eveai_chat_workers module
+                'queue': 'llm_interactions',
+                'routing_key': 'llm_interactions.#',
+            },
+            'eveai_entitlements.*': {  # All tasks from eveai_entitlements module
+                'queue': 'entitlements',
+                'routing_key': 'entitlements.#',
+            }
+        }

-    # Ensuring tasks execute with Flask application context
+    # Ensure tasks execute with Flask context
     class ContextTask(celery.Task):
         def __call__(self, *args, **kwargs):
             with app.app_context():
@@ -37,6 +58,39 @@ def init_celery(celery, app):

     celery.Task = ContextTask

+# Original init_celery before updating for beat
+# def init_celery(celery, app):
+#     celery_app.main = app.name
+#     app.logger.debug(f'CELERY_BROKER_URL: {app.config["CELERY_BROKER_URL"]}')
+#     app.logger.debug(f'CELERY_RESULT_BACKEND: {app.config["CELERY_RESULT_BACKEND"]}')
+#     celery_config = {
+#         'broker_url': app.config.get('CELERY_BROKER_URL', 'redis://localhost:6379/0'),
+#         'result_backend': app.config.get('CELERY_RESULT_BACKEND', 'redis://localhost:6379/0'),
+#         'task_serializer': app.config.get('CELERY_TASK_SERIALIZER', 'json'),
+#         'result_serializer': app.config.get('CELERY_RESULT_SERIALIZER', 'json'),
+#         'accept_content': app.config.get('CELERY_ACCEPT_CONTENT', ['json']),
+#         'timezone': app.config.get('CELERY_TIMEZONE', 'UTC'),
+#         'enable_utc': app.config.get('CELERY_ENABLE_UTC', True),
+#         'task_routes': {'eveai_worker.tasks.create_embeddings': {'queue': 'embeddings',
+#                                                                  'routing_key': 'embeddings.create_embeddings'}},
+#     }
+#     celery_app.conf.update(**celery_config)
+#
+#     # Setting up Celery task queues
+#     celery_app.conf.task_queues = (
+#         Queue('default', routing_key='task.#'),
+#         Queue('embeddings', routing_key='embeddings.#', queue_arguments={'x-max-priority': 10}),
+#         Queue('llm_interactions', routing_key='llm_interactions.#', queue_arguments={'x-max-priority': 5}),
+#     )
+#
+#     # Ensuring tasks execute with Flask application context
+#     class ContextTask(celery.Task):
+#         def __call__(self, *args, **kwargs):
+#             with app.app_context():
+#                 return self.run(*args, **kwargs)
+#
+#     celery.Task = ContextTask


 def make_celery(app_name, config):
     return celery_app
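
A sketch of the two call modes (the entry-point wiring is hypothetical):

    # Worker entry point: queues and task routes are configured.
    init_celery(celery_app, flask_app)

    # Beat entry point: RedBeat scheduling is configured and the queue/route
    # setup is skipped, so the scheduler process never consumes tasks itself.
    init_celery(celery_app, flask_app, is_beat=True)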


@@ -0,0 +1,613 @@
from typing import Optional, List, Union, Dict, Any, Pattern
from pydantic import BaseModel, field_validator, model_validator
from typing_extensions import Annotated
import re
from datetime import datetime
import json
from textwrap import dedent
import yaml
from dataclasses import dataclass
class TaggingField(BaseModel):
"""Represents a single tagging field configuration"""
type: str
required: bool = False
description: Optional[str] = None
allowed_values: Optional[List[Any]] = None # for enum type
min_value: Optional[Union[int, float]] = None # for numeric types
max_value: Optional[Union[int, float]] = None # for numeric types
@field_validator('type', mode='before')
@classmethod
def validate_type(cls, v: str) -> str:
valid_types = ['string', 'integer', 'float', 'date', 'enum']
if v not in valid_types:
raise ValueError(f'type must be one of {valid_types}')
return v
@model_validator(mode='after')
def validate_field_constraints(self) -> 'TaggingField':
# Validate enum constraints
if self.type == 'enum':
if not self.allowed_values:
raise ValueError('allowed_values must be provided for enum type')
elif self.allowed_values is not None:
raise ValueError('allowed_values only valid for enum type')
# Validate numeric constraints
if self.type not in ('integer', 'float'):
if self.min_value is not None or self.max_value is not None:
raise ValueError('min_value/max_value only valid for numeric types')
else:
if self.min_value is not None and self.max_value is not None and self.min_value >= self.max_value:
raise ValueError('min_value must be less than max_value')
return self
class TaggingFields(BaseModel):
"""Represents a collection of tagging fields, mapped by their names"""
fields: Dict[str, TaggingField]
@classmethod
def from_dict(cls, data: Dict[str, Dict[str, Any]]) -> 'TaggingFields':
return cls(fields={
field_name: TaggingField(**field_config)
for field_name, field_config in data.items()
})
def to_dict(self) -> Dict[str, Dict[str, Any]]:
return {
field_name: field.model_dump(exclude_none=True)
for field_name, field in self.fields.items()
}
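
A small round-trip example (the field definition is hypothetical):

    tf = TaggingFields.from_dict({
        'priority': {'type': 'integer', 'required': True, 'min_value': 1, 'max_value': 5},
    })
    assert tf.to_dict()['priority'] == {
        'type': 'integer', 'required': True, 'min_value': 1, 'max_value': 5,
    }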
class ArgumentConstraint(BaseModel):
"""Base class for all argument constraints"""
description: Optional[str] = None
error_message: Optional[str] = None
class NumericConstraint(ArgumentConstraint):
"""Constraints for numeric values (int/float)"""
min_value: Optional[float] = None
max_value: Optional[float] = None
include_min: bool = True # True for >= min_value, False for > min_value
include_max: bool = True # True for <= max_value, False for < max_value
@model_validator(mode='after')
def validate_ranges(self) -> 'NumericConstraint':
if self.min_value is not None and self.max_value is not None:
if self.min_value > self.max_value:
raise ValueError("min_value must be less than or equal to max_value")
return self
def validate(self, value: Union[int, float]) -> bool:
if self.min_value is not None:
if self.include_min and value < self.min_value:
return False
if not self.include_min and value <= self.min_value:
return False
if self.max_value is not None:
if self.include_max and value > self.max_value:
return False
if not self.include_max and value >= self.max_value:
return False
return True
class StringConstraint(ArgumentConstraint):
"""Constraints for string values"""
min_length: Optional[int] = None
max_length: Optional[int] = None
patterns: Optional[List[str]] = None # List of regex patterns to match
pattern_match_all: bool = False # If True, string must match all patterns
forbidden_patterns: Optional[List[str]] = None # List of regex patterns that must not match
allow_empty: bool = False
@field_validator('patterns', 'forbidden_patterns')
@classmethod
def validate_patterns(cls, v: Optional[List[str]]) -> Optional[List[str]]:
if v is not None:
# Validate each pattern compiles
for pattern in v:
try:
re.compile(pattern)
except re.error as e:
raise ValueError(f"Invalid regex pattern '{pattern}': {str(e)}")
return v
def validate(self, value: str) -> bool:
if not self.allow_empty and not value:
return False
if self.min_length is not None and len(value) < self.min_length:
return False
if self.max_length is not None and len(value) > self.max_length:
return False
if self.patterns:
matches = [bool(re.search(pattern, value)) for pattern in self.patterns]
if self.pattern_match_all and not all(matches):
return False
if not self.pattern_match_all and not any(matches):
return False
if self.forbidden_patterns:
for pattern in self.forbidden_patterns:
if re.search(pattern, value):
return False
return True
class DateConstraint(ArgumentConstraint):
"""Constraints for date values"""
min_date: Optional[datetime] = None
max_date: Optional[datetime] = None
include_min: bool = True
include_max: bool = True
allowed_formats: Optional[List[str]] = None # List of allowed date formats
@model_validator(mode='after')
def validate_ranges(self) -> 'DateConstraint':
if self.min_date and self.max_date and self.min_date > self.max_date:
raise ValueError("min_date must be less than or equal to max_date")
return self
def validate(self, value: datetime) -> bool:
if self.min_date is not None:
if self.include_min and value < self.min_date:
return False
if not self.include_min and value <= self.min_date:
return False
if self.max_date is not None:
if self.include_max and value > self.max_date:
return False
if not self.include_max and value >= self.max_date:
return False
return True
class EnumConstraint(ArgumentConstraint):
"""Constraints for enum values"""
allowed_values: List[Any]
case_sensitive: bool = True # For string enums
allow_multiple: bool = False # If True, value can be a list of allowed values
min_selections: Optional[int] = None # When allow_multiple is True
max_selections: Optional[int] = None # When allow_multiple is True
@model_validator(mode='after')
def validate_selections(self) -> 'EnumConstraint':
if self.allow_multiple:
if self.min_selections is not None and self.max_selections is not None:
if self.min_selections > self.max_selections:
raise ValueError("min_selections must be less than or equal to max_selections")
if self.max_selections > len(self.allowed_values):
raise ValueError("max_selections cannot be greater than number of allowed values")
return self
def validate(self, value: Union[Any, List[Any]]) -> bool:
if self.allow_multiple:
if not isinstance(value, list):
return False
if self.min_selections is not None and len(value) < self.min_selections:
return False
if self.max_selections is not None and len(value) > self.max_selections:
return False
for v in value:
if not self._validate_single_value(v):
return False
else:
return self._validate_single_value(value)
return True
def _validate_single_value(self, value: Any) -> bool:
if isinstance(value, str) and not self.case_sensitive:
return any(str(value).lower() == str(v).lower() for v in self.allowed_values)
return value in self.allowed_values
class ArgumentDefinition(BaseModel):
"""Defines an argument with its type and constraints"""
name: str
type: str
description: Optional[str] = None
required: bool = False
default: Optional[Any] = None
constraints: Optional[Union[NumericConstraint, StringConstraint, DateConstraint, EnumConstraint]] = None
@field_validator('type')
@classmethod
def validate_type(cls, v: str) -> str:
valid_types = ['string', 'integer', 'float', 'date', 'enum']
if v not in valid_types:
raise ValueError(f'type must be one of {valid_types}')
return v
@model_validator(mode='after')
def validate_constraints(self) -> 'ArgumentDefinition':
if self.constraints:
expected_constraint_types = {
'string': StringConstraint,
'integer': NumericConstraint,
'float': NumericConstraint,
'date': DateConstraint,
'enum': EnumConstraint
}
expected_type = expected_constraint_types.get(self.type)
if not isinstance(self.constraints, expected_type):
raise ValueError(f'Constraints for type {self.type} must be of type {expected_type.__name__}')
if self.default is not None:
if not self.constraints.validate(self.default):
raise ValueError(f'Default value does not satisfy constraints for {self.name}')
return self
class ArgumentDefinitions(BaseModel):
"""Collection of argument definitions"""
arguments: Dict[str, ArgumentDefinition]
@classmethod
def from_dict(cls, data: Dict[str, Dict[str, Any]]) -> 'ArgumentDefinitions':
return cls(arguments={
arg_name: ArgumentDefinition(**arg_config)
for arg_name, arg_config in data.items()
})
def to_dict(self) -> Dict[str, Dict[str, Any]]:
return {
arg_name: arg.model_dump(exclude_none=True)
for arg_name, arg in self.arguments.items()
}
def validate_argument_values(self, values: Dict[str, Any]) -> Dict[str, str]:
"""
Validate a set of argument values against their definitions
Returns a dictionary of error messages for invalid arguments
"""
errors = {}
# Check for required arguments
for name, arg_def in self.arguments.items():
if arg_def.required and name not in values:
errors[name] = "Required argument missing"
continue
if name in values:
value = values[name]
# Validate type
try:
if arg_def.type == 'integer':
value = int(value)
elif arg_def.type == 'float':
value = float(value)
elif arg_def.type == 'date' and isinstance(value, str):
if arg_def.constraints and arg_def.constraints.allowed_formats:
for fmt in arg_def.constraints.allowed_formats:
try:
value = datetime.strptime(value, fmt)
break
except ValueError:
continue
else:
errors[name] = f"Invalid date format. Allowed formats: {arg_def.constraints.allowed_formats}"
continue
except (ValueError, TypeError):
errors[name] = f"Invalid type. Expected {arg_def.type}"
continue
# Validate constraints
if arg_def.constraints and not arg_def.constraints.validate(value):
errors[name] = arg_def.constraints.error_message or "Value does not satisfy constraints"
return errors
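
A sketch of the validation flow (the argument definition is hypothetical):

    defs = ArgumentDefinitions(arguments={
        'k': ArgumentDefinition(
            name='k',
            type='integer',
            required=True,
            constraints=NumericConstraint(min_value=1, max_value=50),
        ),
    })
    assert defs.validate_argument_values({}) == {'k': 'Required argument missing'}
    assert defs.validate_argument_values({'k': 0}) == {'k': 'Value does not satisfy constraints'}
    assert defs.validate_argument_values({'k': 10}) == {}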
@dataclass
class DocumentationFormat:
"""Constants for documentation formats"""
MARKDOWN = "markdown"
JSON = "json"
YAML = "yaml"
@dataclass
class DocumentationVersion:
"""Constants for documentation versions"""
BASIC = "basic" # Original documentation without retriever info
EXTENDED = "extended" # Including retriever documentation
def _generate_argument_constraints(field_config: Dict[str, Any]) -> List[Dict[str, Any]]:
"""Generate possible argument constraints based on field type"""
constraints = []
base_constraint = {
"description": f"Constraint for {field_config.get('description', 'field')}",
"error_message": "Optional custom error message"
}
if field_config["type"] == "integer" or field_config["type"] == "float":
constraints.append({
**base_constraint,
"type": "NumericConstraint",
"possible_constraints": {
"min_value": "number",
"max_value": "number",
"include_min": "boolean",
"include_max": "boolean"
},
"example": {
"min_value": field_config.get("min_value", 0),
"max_value": field_config.get("max_value", 100),
"include_min": True,
"include_max": True
}
})
elif field_config["type"] == "string":
constraints.append({
**base_constraint,
"type": "StringConstraint",
"possible_constraints": {
"min_length": "integer",
"max_length": "integer",
"patterns": "list[str]",
"pattern_match_all": "boolean",
"forbidden_patterns": "list[str]",
"allow_empty": "boolean"
},
"example": {
"min_length": 1,
"max_length": 100,
"patterns": ["^[A-Za-z0-9]+$"],
"pattern_match_all": False,
"forbidden_patterns": ["^test_", "_temp$"],
"allow_empty": False
}
})
elif field_config["type"] == "enum":
constraints.append({
**base_constraint,
"type": "EnumConstraint",
"possible_constraints": {
"allowed_values": f"list[{field_config.get('allowed_values', ['value1', 'value2'])}]",
"case_sensitive": "boolean",
"allow_multiple": "boolean",
"min_selections": "integer",
"max_selections": "integer"
},
"example": {
"allowed_values": field_config.get("allowed_values", ["value1", "value2"]),
"case_sensitive": True,
"allow_multiple": True,
"min_selections": 1,
"max_selections": 2
}
})
elif field_config["type"] == "date":
constraints.append({
**base_constraint,
"type": "DateConstraint",
"possible_constraints": {
"min_date": "datetime",
"max_date": "datetime",
"include_min": "boolean",
"include_max": "boolean",
"allowed_formats": "list[str]"
},
"example": {
"min_date": "2024-01-01T00:00:00",
"max_date": "2024-12-31T23:59:59",
"include_min": True,
"include_max": True,
"allowed_formats": ["%Y-%m-%d", "%Y/%m/%d"]
}
})
return constraints
def generate_field_documentation(
tagging_fields: Dict[str, Any],
format: str = "markdown",
version: str = "basic"
) -> str:
"""
Generate documentation for tagging fields configuration.
Args:
tagging_fields: Dictionary containing tagging fields configuration
format: Output format ("markdown", "json", or "yaml")
version: Documentation version ("basic" or "extended")
Returns:
str: Formatted documentation
"""
if version not in [DocumentationVersion.BASIC, DocumentationVersion.EXTENDED]:
raise ValueError(f"Unsupported documentation version: {version}")
# Normalize fields configuration
normalized_fields = {}
for field_name, field_config in tagging_fields.items():
field_doc = {
"name": field_name,
"type": field_config["type"],
"required": field_config.get("required", False),
"description": field_config.get("description", "No description provided"),
"constraints": []
}
# Only include possible arguments in extended version
if version == DocumentationVersion.EXTENDED:
field_doc["possible_arguments"] = _generate_argument_constraints(field_config)
# Add type-specific constraints
if field_config["type"] == "integer" or field_config["type"] == "float":
if "min_value" in field_config:
field_doc["constraints"].append(
f"Minimum value: {field_config['min_value']}")
if "max_value" in field_config:
field_doc["constraints"].append(
f"Maximum value: {field_config['max_value']}")
elif field_config["type"] == "string":
if "min_length" in field_config:
field_doc["constraints"].append(
f"Minimum length: {field_config['min_length']}")
if "max_length" in field_config:
field_doc["constraints"].append(
f"Maximum length: {field_config['max_length']}")
if "patterns" in field_config:
field_doc["constraints"].append(
f"Must match patterns: {', '.join(field_config['patterns'])}")
elif field_config["type"] == "enum":
if "allowed_values" in field_config:
field_doc["constraints"].append(
f"Allowed values: {', '.join(str(v) for v in field_config['allowed_values'])}")
elif field_config["type"] == "date":
if "min_date" in field_config:
field_doc["constraints"].append(
f"Minimum date: {field_config['min_date']}")
if "max_date" in field_config:
field_doc["constraints"].append(
f"Maximum date: {field_config['max_date']}")
if "allowed_formats" in field_config:
field_doc["constraints"].append(
f"Allowed formats: {', '.join(field_config['allowed_formats'])}")
normalized_fields[field_name] = field_doc
# Generate documentation in requested format
if format == DocumentationFormat.MARKDOWN:
return _generate_markdown_docs(normalized_fields, version)
elif format == DocumentationFormat.JSON:
return _generate_json_docs(normalized_fields, version)
elif format == DocumentationFormat.YAML:
return _generate_yaml_docs(normalized_fields, version)
else:
raise ValueError(f"Unsupported documentation format: {format}")
def _generate_markdown_docs(fields: Dict[str, Any], version: str) -> str:
"""Generate markdown documentation"""
docs = ["# Tagging Fields Documentation\n"]
# Add overview table
docs.append("## Fields Overview\n")
docs.append("| Field Name | Type | Required | Description |")
docs.append("|------------|------|----------|-------------|")
for field_name, field in fields.items():
docs.append(
f"| {field_name} | {field['type']} | "
f"{'Yes' if field['required'] else 'No'} | {field['description']} |"
)
# Add detailed field specifications
docs.append("\n## Detailed Field Specifications\n")
for field_name, field in fields.items():
docs.append(f"### {field_name}\n")
docs.append(f"**Type:** {field['type']}")
docs.append(f"**Required:** {'Yes' if field['required'] else 'No'}")
docs.append(f"**Description:** {field['description']}\n")
if field["constraints"]:
docs.append("**Field Constraints:**")
for constraint in field["constraints"]:
docs.append(f"- {constraint}")
docs.append("")
# Add retriever argument documentation only in extended version
if version == DocumentationVersion.EXTENDED and "possible_arguments" in field:
docs.append("**Possible Retriever Arguments:**")
for arg_constraint in field["possible_arguments"]:
docs.append(f"\n*{arg_constraint['type']}*")
docs.append(f"Description: {arg_constraint['description']}")
docs.append("\nPossible constraints:")
for const_name, const_type in arg_constraint["possible_constraints"].items():
docs.append(f"- `{const_name}`: {const_type}")
docs.append("\nExample:")
docs.append("```python")
docs.append(json.dumps(arg_constraint["example"], indent=2))
docs.append("```\n")
# Add example retriever configuration only in extended version
if version == DocumentationVersion.EXTENDED:
docs.append("\n## Example Retriever Configuration\n")
docs.append("```python")
example_config = {
"metadata_filters": {
field_name: field["possible_arguments"][0]["example"]
for field_name, field in fields.items()
if "possible_arguments" in field
}
}
docs.append(json.dumps(example_config, indent=2))
docs.append("```")
return "\n".join(docs)
def _generate_json_docs(fields: Dict[str, Any], version: str) -> str:
"""Generate JSON documentation"""
doc = {
"tagging_fields_documentation": {
"version": version,
"fields": fields
}
}
if version == DocumentationVersion.EXTENDED:
doc["tagging_fields_documentation"]["example_retriever_config"] = {
"metadata_filters": {
field_name: field["possible_arguments"][0]["example"]
for field_name, field in fields.items()
if "possible_arguments" in field
}
}
return json.dumps(doc, indent=2)
def _generate_yaml_docs(fields: Dict[str, Any], version: str) -> str:
"""Generate YAML documentation"""
doc = {
"tagging_fields_documentation": {
"version": version,
"fields": fields
}
}
if version == DocumentationVersion.EXTENDED:
doc["tagging_fields_documentation"]["example_retriever_config"] = {
"metadata_filters": {
field_name: field["possible_arguments"][0]["example"]
for field_name, field in fields.items()
if "possible_arguments" in field
}
}
return yaml.dump(doc, sort_keys=False, default_flow_style=False)
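
An end-to-end example (the field definition is hypothetical):

    fields = {
        'department': {
            'type': 'enum',
            'required': True,
            'description': 'Owning department',
            'allowed_values': ['sales', 'support'],
        },
    }
    print(generate_field_documentation(fields, format='markdown', version='extended'))
    # Renders the overview table, per-field constraints, the possible
    # EnumConstraint arguments, and an example retriever configuration.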


@@ -1,14 +1,14 @@
 from flask import request, current_app, session
+from flask_jwt_extended import decode_token, verify_jwt_in_request, get_jwt_identity
 from common.models.user import Tenant, TenantDomain


 def get_allowed_origins(tenant_id):
     session_key = f"allowed_origins_{tenant_id}"
     if session_key in session:
-        current_app.logger.debug(f"Fetching allowed origins for tenant {tenant_id} from session")
         return session[session_key]

-    current_app.logger.debug(f"Fetching allowed origins for tenant {tenant_id} from database")
     tenant_domains = TenantDomain.query.filter_by(tenant_id=int(tenant_id)).all()
     allowed_origins = [domain.domain for domain in tenant_domains]
@@ -18,43 +18,52 @@ def get_allowed_origins(tenant_id):

 def cors_after_request(response, prefix):
-    current_app.logger.debug(f'CORS after request: {request.path}, prefix: {prefix}')
-    current_app.logger.debug(f'request.headers: {request.headers}')
-    current_app.logger.debug(f'request.args: {request.args}')
-    current_app.logger.debug(f'request is json?: {request.is_json}')
+    # Exclude health checks from checks
+    if request.path.startswith('/healthz') or request.path.startswith('/_healthz'):
+        response.headers.add('Access-Control-Allow-Origin', '*')
+        response.headers.add('Access-Control-Allow-Headers', '*')
+        response.headers.add('Access-Control-Allow-Methods', '*')
+        return response
+
+    # Handle OPTIONS preflight requests
+    if request.method == 'OPTIONS':
+        response.headers.add('Access-Control-Allow-Origin', '*')
+        response.headers.add('Access-Control-Allow-Headers', 'Content-Type,Authorization,X-Tenant-ID')
+        response.headers.add('Access-Control-Allow-Methods', 'GET,POST,PUT,DELETE,OPTIONS')
+        response.headers.add('Access-Control-Allow-Credentials', 'true')
+        return response

     tenant_id = None
     allowed_origins = []

-    # Try to get tenant_id from JSON payload
-    json_data = request.get_json(silent=True)
-    current_app.logger.debug(f'request.get_json(silent=True): {json_data}')
-
-    if json_data and 'tenant_id' in json_data:
-        tenant_id = json_data['tenant_id']
+    # Check Socket.IO connection
+    if 'socket.io' in request.path:
+        token = request.args.get('token')
+        if token:
+            try:
+                decoded = decode_token(token)
+                tenant_id = decoded['sub']
+            except Exception as e:
+                current_app.logger.error(f'Error decoding token: {e}')
+                return response
     else:
-        # Fallback to get tenant_id from query parameters or headers if JSON is not available
-        tenant_id = request.args.get('tenant_id') or request.args.get('tenantId') or request.headers.get('X-Tenant-ID')
-
-    current_app.logger.debug(f'Identified tenant_id: {tenant_id}')
+        # Regular API requests
+        try:
+            if verify_jwt_in_request(optional=True):
+                tenant_id = get_jwt_identity()
+        except Exception as e:
+            current_app.logger.error(f'Error verifying JWT: {e}')
+            return response

     if tenant_id:
+        origin = request.headers.get('Origin')
         allowed_origins = get_allowed_origins(tenant_id)
-        current_app.logger.debug(f'Allowed origins for tenant {tenant_id}: {allowed_origins}')
-    else:
-        current_app.logger.warning('tenant_id not found in request')
-
-    origin = request.headers.get('Origin')
-    current_app.logger.debug(f'Origin: {origin}')
-
-    if origin in allowed_origins:
-        response.headers.add('Access-Control-Allow-Origin', origin)
-        response.headers.add('Access-Control-Allow-Headers', 'Content-Type,Authorization')
-        response.headers.add('Access-Control-Allow-Methods', 'GET,POST,PUT,DELETE,OPTIONS')
-        response.headers.add('Access-Control-Allow-Credentials', 'true')
-        current_app.logger.debug(f'CORS headers set for origin: {origin}')
-    else:
-        current_app.logger.warning(f'Origin {origin} not allowed')
+        if origin in allowed_origins:
+            response.headers.add('Access-Control-Allow-Origin', origin)
+            response.headers.add('Access-Control-Allow-Headers', 'Content-Type,Authorization')
+            response.headers.add('Access-Control-Allow-Methods', 'GET,POST,PUT,DELETE,OPTIONS')
+            response.headers.add('Access-Control-Allow-Credentials', 'true')

     return response


@@ -36,7 +36,7 @@ def log_request_middleware(app):
     @app.before_request
     def log_session_state_before():
-        app.logger.debug(f'Session state before request: {session.items()}')
+        pass

     # @app.after_request
     # def log_response_info(response):
@@ -58,5 +58,4 @@ def log_request_middleware(app):
     @app.after_request
     def log_session_state_after(response):
-        app.logger.debug(f'Session state after request: {session.items()}')
         return response


@@ -0,0 +1,365 @@
from datetime import datetime as dt, timezone as tz
from sqlalchemy import desc
from sqlalchemy.exc import SQLAlchemyError
from werkzeug.utils import secure_filename
from common.models.document import Document, DocumentVersion, Catalog
from common.extensions import db, minio_client
from common.utils.celery_utils import current_celery
from flask import current_app
from flask_security import current_user
import requests
from urllib.parse import urlparse, unquote, urlunparse
import os
from .eveai_exceptions import (EveAIInvalidLanguageException, EveAIDoubleURLException, EveAIUnsupportedFileType,
EveAIInvalidCatalog, EveAIInvalidDocument, EveAIInvalidDocumentVersion)
from ..models.user import Tenant
def create_document_stack(api_input, file, filename, extension, tenant_id):
# Create the Document
catalog_id = int(api_input.get('catalog_id'))
catalog = Catalog.query.get(catalog_id)
if not catalog:
raise EveAIInvalidCatalog(tenant_id, catalog_id)
new_doc = create_document(api_input, filename, catalog_id)
db.session.add(new_doc)
url = api_input.get('url', '')
if url != '':
url = cope_with_local_url(api_input.get('url', ''))
# Create the DocumentVersion
new_doc_vers = create_version_for_document(new_doc, tenant_id,
url,
api_input.get('sub_file_type', ''),
api_input.get('language', 'en'),
api_input.get('user_context', ''),
api_input.get('user_metadata'),
api_input.get('catalog_properties')
)
db.session.add(new_doc_vers)
try:
db.session.commit()
except SQLAlchemyError as e:
current_app.logger.error(f'Error adding document for tenant {tenant_id}: {e}')
db.session.rollback()
raise
current_app.logger.info(f'Document added successfully for tenant {tenant_id}, '
f'Document {new_doc.id}, Document Version {new_doc_vers.id}')
# Upload file to storage
upload_file_for_version(new_doc_vers, file, extension, tenant_id)
return new_doc, new_doc_vers
def create_document(form, filename, catalog_id):
new_doc = Document()
if form['name'] == '':
new_doc.name = filename.rsplit('.', 1)[0]
else:
new_doc.name = form['name']
if form['valid_from'] and form['valid_from'] != '':
new_doc.valid_from = form['valid_from']
else:
new_doc.valid_from = dt.now(tz.utc)
new_doc.catalog_id = catalog_id
set_logging_information(new_doc, dt.now(tz.utc))
return new_doc
def create_version_for_document(document, tenant_id, url, sub_file_type, language, user_context, user_metadata,
catalog_properties):
new_doc_vers = DocumentVersion()
if url != '':
new_doc_vers.url = url
if language == '':
raise EveAIInvalidLanguageException('Language is required for document creation!')
else:
new_doc_vers.language = language
if user_context != '':
new_doc_vers.user_context = user_context
if user_metadata != '' and user_metadata is not None:
new_doc_vers.user_metadata = user_metadata
if catalog_properties != '' and catalog_properties is not None:
new_doc_vers.catalog_properties = catalog_properties
if sub_file_type != '':
new_doc_vers.sub_file_type = sub_file_type
new_doc_vers.document = document
set_logging_information(new_doc_vers, dt.now(tz.utc))
mark_tenant_storage_dirty(tenant_id)
return new_doc_vers
def upload_file_for_version(doc_vers, file, extension, tenant_id):
doc_vers.file_type = extension
# Normally, the tenant bucket should exist. But let's be on the safe side if a migration took place.
minio_client.create_tenant_bucket(tenant_id)
try:
bn, on, size = minio_client.upload_document_file(
tenant_id,
doc_vers.doc_id,
doc_vers.language,
doc_vers.id,
f"{doc_vers.id}.{extension}",
file
)
doc_vers.bucket_name = bn
doc_vers.object_name = on
doc_vers.file_size = size / 1048576 # Convert bytes to MB
db.session.commit()
current_app.logger.info(f'Successfully saved document to MinIO for tenant {tenant_id} for '
f'document version {doc_vers.id} while uploading file.')
except Exception as e:
db.session.rollback()
current_app.logger.error(
f'Error saving document to MinIO for tenant {tenant_id}: {e}')
raise
def set_logging_information(obj, timestamp):
obj.created_at = timestamp
obj.updated_at = timestamp
user_id = get_current_user_id()
if user_id:
obj.created_by = user_id
obj.updated_by = user_id
def update_logging_information(obj, timestamp):
obj.updated_at = timestamp
user_id = get_current_user_id()
if user_id:
obj.updated_by = user_id
def get_current_user_id():
try:
if current_user and current_user.is_authenticated:
return current_user.id
else:
return None
except Exception:
# This will catch any errors if current_user is not available (e.g., in API context)
return None
def get_extension_from_content_type(content_type):
content_type_map = {
'text/html': 'html',
'application/pdf': 'pdf',
'text/plain': 'txt',
'application/msword': 'doc',
'application/vnd.openxmlformats-officedocument.wordprocessingml.document': 'docx',
# Add more mappings as needed
}
return content_type_map.get(content_type, 'html') # Default to 'html' if unknown
def process_url(url, tenant_id):
url = cope_with_local_url(url)
response = requests.head(url, allow_redirects=True)
content_type = response.headers.get('Content-Type', '').split(';')[0]
# Determine file extension based on Content-Type
extension = get_extension_from_content_type(content_type)
# Generate filename
parsed_url = urlparse(url)
path = unquote(parsed_url.path)
filename = os.path.basename(path)
if not filename or '.' not in filename:
# Use the last part of the path or a default name
filename = path.strip('/').split('/')[-1] or 'document'
filename = secure_filename(f"{filename}.{extension}")
else:
filename = secure_filename(filename)
# Check if a document with this URL already exists
existing_doc = DocumentVersion.query.filter_by(url=url).first()
if existing_doc:
raise EveAIDoubleURLException
# Download the content
response = requests.get(url)
response.raise_for_status()
file_content = response.content
return file_content, filename, extension
def start_embedding_task(tenant_id, doc_vers_id):
task = current_celery.send_task('create_embeddings',
args=[tenant_id, doc_vers_id,],
queue='embeddings')
current_app.logger.info(f'Embedding creation started for tenant {tenant_id}, '
f'Document Version {doc_vers_id}. '
f'Embedding creation task: {task.id}')
return task.id
def validate_file_type(extension):
if extension not in current_app.config['SUPPORTED_FILE_TYPES']:
raise EveAIUnsupportedFileType(f"Filetype {extension} is currently not supported. "
f"Supported filetypes: {', '.join(current_app.config['SUPPORTED_FILE_TYPES'])}")
def get_filename_from_url(url):
parsed_url = urlparse(url)
path_parts = parsed_url.path.split('/')
filename = path_parts[-1]
if filename == '':
filename = 'index'
if not filename.endswith('.html'):
filename += '.html'
return filename
def get_documents_list(page, per_page):
query = Document.query.order_by(desc(Document.created_at))
pagination = query.paginate(page=page, per_page=per_page, error_out=False)
return pagination
def edit_document(tenant_id, document_id, name, valid_from, valid_to):
doc = Document.query.get(document_id)
if not doc:
raise EveAIInvalidDocument(tenant_id, document_id)
if name:
doc.name = name
if valid_from:
doc.valid_from = valid_from
if valid_to:
doc.valid_to = valid_to
update_logging_information(doc, dt.now(tz.utc))
try:
db.session.add(doc)
db.session.commit()
return doc, None
except SQLAlchemyError as e:
db.session.rollback()
return None, str(e)
def edit_document_version(tenant_id, version_id, user_context, catalog_properties):
doc_vers = DocumentVersion.query.get(version_id)
if not doc_vers:
raise EveAIInvalidDocumentVersion(tenant_id, version_id)
doc_vers.user_context = user_context
doc_vers.catalog_properties = catalog_properties
update_logging_information(doc_vers, dt.now(tz.utc))
try:
db.session.add(doc_vers)
db.session.commit()
return doc_vers, None
except SQLAlchemyError as e:
db.session.rollback()
return None, str(e)
def refresh_document_with_info(doc_id, tenant_id, api_input):
doc = Document.query.get(doc_id)
if not doc:
raise EveAIInvalidDocument(tenant_id, doc_id)
old_doc_vers = DocumentVersion.query.filter_by(doc_id=doc_id).order_by(desc(DocumentVersion.id)).first()
if not old_doc_vers.url:
return None, "This document has no URL. Only documents with a URL can be refreshed."
new_doc_vers = create_version_for_document(
doc, tenant_id,
old_doc_vers.url,
old_doc_vers.sub_file_type,
api_input.get('language', old_doc_vers.language),
api_input.get('user_context', old_doc_vers.user_context),
api_input.get('user_metadata', old_doc_vers.user_metadata),
api_input.get('catalog_properties', old_doc_vers.catalog_properties),
)
set_logging_information(new_doc_vers, dt.now(tz.utc))
try:
db.session.add(new_doc_vers)
db.session.commit()
except SQLAlchemyError as e:
db.session.rollback()
return None, str(e)
url = cope_with_local_url(old_doc_vers.url)
response = requests.head(url, allow_redirects=True)
content_type = response.headers.get('Content-Type', '').split(';')[0]
extension = get_extension_from_content_type(content_type)
response = requests.get(url)
response.raise_for_status()
file_content = response.content
upload_file_for_version(new_doc_vers, file_content, extension, tenant_id)
task = current_celery.send_task('create_embeddings', args=[tenant_id, new_doc_vers.id,], queue='embeddings')
current_app.logger.info(f'Embedding creation started for document {doc_id} on version {new_doc_vers.id} '
f'with task id: {task.id}.')
return new_doc_vers, task.id
# Update the existing refresh_document function to use the new refresh_document_with_info
def refresh_document(doc_id, tenant_id):
current_app.logger.info(f'Refreshing document {doc_id}')
doc = Document.query.get_or_404(doc_id)
old_doc_vers = DocumentVersion.query.filter_by(doc_id=doc_id).order_by(desc(DocumentVersion.id)).first()
api_input = {
'language': old_doc_vers.language,
'user_context': old_doc_vers.user_context,
'user_metadata': old_doc_vers.user_metadata,
'catalog_properties': old_doc_vers.catalog_properties,
}
return refresh_document_with_info(doc_id, tenant_id, api_input)
# Function triggered when a document_version is created or updated
def mark_tenant_storage_dirty(tenant_id):
tenant = db.session.query(Tenant).filter_by(id=int(tenant_id)).first()
tenant.storage_dirty = True
db.session.commit()
def cope_with_local_url(url):
current_app.logger.debug(f'Incoming URL: {url}')
parsed_url = urlparse(url)
# Check if this is an internal WordPress URL (TESTING) and rewrite it
if parsed_url.netloc in [current_app.config['EXTERNAL_WORDPRESS_BASE_URL']]:
parsed_url = parsed_url._replace(
scheme=current_app.config['WORDPRESS_PROTOCOL'],
netloc=f"{current_app.config['WORDPRESS_HOST']}:{current_app.config['WORDPRESS_PORT']}"
)
url = urlunparse(parsed_url)
current_app.logger.debug(f'Translated Wordpress URL to: {url}')
return url
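
Taken together, a typical upload flow through these helpers would look roughly like this (hypothetical caller, e.g. an API view):

    validate_file_type(extension)
    doc, version = create_document_stack(api_input, file, filename, extension, tenant_id)
    task_id = start_embedding_task(tenant_id, version.id)
    # create_version_for_document() has already marked the tenant's storage as
    # dirty, and the Celery 'embeddings' queue now owns the heavy lifting.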


@@ -0,0 +1,127 @@
class EveAIException(Exception):
"""Base exception class for EveAI API"""
def __init__(self, message, status_code=400, payload=None):
super().__init__()
self.message = message
self.status_code = status_code
self.payload = payload
def to_dict(self):
rv = dict(self.payload or ())
rv['message'] = self.message
rv['error'] = self.__class__.__name__
return rv
def __str__(self):
return self.message # Return the message when the exception is converted to a string
class EveAIInvalidLanguageException(EveAIException):
"""Raised when an invalid language is provided"""
def __init__(self, message="Langage is required", status_code=400, payload=None):
super().__init__(message, status_code, payload)
class EveAIDoubleURLException(EveAIException):
"""Raised when an existing url is provided"""
def __init__(self, message="URL already exists", status_code=400, payload=None):
super().__init__(message, status_code, payload)
class EveAIUnsupportedFileType(EveAIException):
"""Raised when an invalid file type is provided"""
def __init__(self, message="Filetype is not supported", status_code=400, payload=None):
super().__init__(message, status_code, payload)
class EveAINoLicenseForTenant(EveAIException):
"""Raised when no active license for a tenant is provided"""
def __init__(self, message="No license for tenant found", status_code=400, payload=None):
super().__init__(message, status_code, payload)
class EveAITenantNotFound(EveAIException):
"""Raised when a tenant is not found"""
def __init__(self, tenant_id, status_code=400, payload=None):
self.tenant_id = tenant_id
message = f"Tenant {tenant_id} not found"
super().__init__(message, status_code, payload)
class EveAITenantInvalid(EveAIException):
"""Raised when a tenant is invalid"""
def __init__(self, tenant_id, status_code=400, payload=None):
self.tenant_id = tenant_id
# Construct the message dynamically
message = f"Tenant with ID '{tenant_id}' is not valid. Please contact the System Administrator."
super().__init__(message, status_code, payload)
class EveAINoActiveLicense(EveAIException):
"""Raised when a tenant has no active licenses"""
def __init__(self, tenant_id, status_code=400, payload=None):
self.tenant_id = tenant_id
# Construct the message dynamically
message = f"Tenant with ID '{tenant_id}' has no active licenses. Please contact the System Administrator."
super().__init__(message, status_code, payload)
class EveAIInvalidCatalog(EveAIException):
"""Raised when a catalog cannot be found"""
def __init__(self, tenant_id, catalog_id, status_code=400, payload=None):
self.tenant_id = tenant_id
self.catalog_id = catalog_id
# Construct the message dynamically
message = f"Tenant with ID '{tenant_id}' has no valid catalog with ID {catalog_id}. Please contact the System Administrator."
super().__init__(message, status_code, payload)
class EveAIInvalidProcessor(EveAIException):
"""Raised when no valid processor can be found for a given Catalog ID"""
def __init__(self, tenant_id, catalog_id, file_type, status_code=400, payload=None):
self.tenant_id = tenant_id
self.catalog_id = catalog_id
self.file_type = file_type
# Construct the message dynamically
message = (f"Tenant with ID '{tenant_id}' has no valid {file_type} processor for catalog with ID {catalog_id}. "
f"Please contact the System Administrator.")
super().__init__(message, status_code, payload)
class EveAIInvalidDocument(EveAIException):
"""Raised when a tenant has no document with given ID"""
def __init__(self, tenant_id, document_id, status_code=400, payload=None):
self.tenant_id = tenant_id
self.document_id = document_id
# Construct the message dynamically
message = f"Tenant with ID '{tenant_id}' has no document with ID {document_id}."
super().__init__(message, status_code, payload)
class EveAIInvalidDocumentVersion(EveAIException):
"""Raised when a tenant has no document version with given ID"""
def __init__(self, tenant_id, document_version_id, status_code=400, payload=None):
self.tenant_id = tenant_id
self.document_version_id = document_version_id
# Construct the message dynamically
message = f"Tenant with ID '{tenant_id}' has no document version with ID {document_version_id}."
super().__init__(message, status_code, payload)
class EveAISocketInputException(EveAIException):
"""Raised when a socket call receives an invalid payload"""
def __init__(self, message, status_code=400, payload=None):
super().__init__(message, status_code, payload)
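A minimal sketch of how these exceptions could be surfaced as JSON responses; the error-handler registration below is illustrative, not part of this commit.
from flask import Flask, jsonify

app = Flask(__name__)  # hypothetical app; real registration would live in the app factory

@app.errorhandler(EveAIException)
def handle_eveai_exception(exc):
    # to_dict() returns {'message': ..., 'error': <class name>} plus any payload keys
    return jsonify(exc.to_dict()), exc.status_code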

View File

@@ -24,9 +24,6 @@ def mw_before_request():
     if not tenant_id:
         raise Exception('Cannot switch schema for tenant: no tenant defined in session')
-    for role in current_user.roles:
-        current_app.logger.debug(f'In middleware: User {current_user.email} has role {role.name}')
     # user = User.query.get(current_user.id)
     if current_user.has_role('Super User') or current_user.tenant_id == tenant_id:
         Database(tenant_id).switch_schema()

View File

@@ -50,13 +50,11 @@ class MinioClient:
             self.client.put_object(
                 bucket_name, object_name, io.BytesIO(file_data), len(file_data)
             )
-            return True
+            return bucket_name, object_name, len(file_data)
         except S3Error as err:
             raise Exception(f"Error occurred while uploading file: {err}")

-    def download_document_file(self, tenant_id, document_id, language, version_id, filename):
-        bucket_name = self.generate_bucket_name(tenant_id)
-        object_name = self.generate_object_name(document_id, language, version_id, filename)
+    def download_document_file(self, tenant_id, bucket_name, object_name):
         try:
             response = self.client.get_object(bucket_name, object_name)
             return response.read()

View File

@@ -1,205 +1,36 @@
 import os
+from typing import Dict, Any, Optional
 import langcodes
-from flask import current_app
-from langchain_openai import OpenAIEmbeddings, ChatOpenAI
+from common.langchain.llm_metrics_handler import LLMMetricsHandler
+from common.langchain.templates.template_manager import TemplateManager
+from langchain_openai import OpenAIEmbeddings, ChatOpenAI, OpenAI
 from langchain_anthropic import ChatAnthropic
-from langchain_core.pydantic_v1 import BaseModel, Field
-from langchain.prompts import ChatPromptTemplate
-import ast
-from typing import List
-from openai import OpenAI
-# from groq import Groq
-from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL
-from common.models.document import EmbeddingSmallOpenAI, EmbeddingLargeOpenAI
+from flask import current_app
+from datetime import datetime as dt, timezone as tz
+from common.langchain.tracked_openai_embeddings import TrackedOpenAIEmbeddings
+from common.langchain.tracked_transcription import TrackedOpenAITranscription
+from common.models.user import Tenant
+from common.utils.cache.base import CacheHandler
+from config.model_config import MODEL_CONFIG
+from common.extensions import template_manager, cache_manager
+from common.models.document import EmbeddingLargeOpenAI, EmbeddingSmallOpenAI
+from common.utils.eveai_exceptions import EveAITenantNotFound

-class CitedAnswer(BaseModel):
-    """Default docstring - to be replaced with actual prompt"""
-
-    answer: str = Field(
-        ...,
-        description="The answer to the user question, based on the given sources",
-    )
-    citations: List[int] = Field(
-        ...,
-        description="The integer IDs of the SPECIFIC sources that were used to generate the answer"
-    )
-    insufficient_info: bool = Field(
-        False,  # Default value is set to False
-        description="A boolean indicating whether given sources were sufficient or not to generate the answer"
-    )
-
-    def set_language_prompt_template(cls, language_prompt):
-        cls.__doc__ = language_prompt
+def create_language_template(template: str, language: str) -> str:
+    """
+    Replace language placeholder in template with specified language
+
+    Args:
+        template: Template string with {language} placeholder
+        language: Language code to insert
+
+    Returns:
+        str: Template with language placeholder replaced
+    """
def select_model_variables(tenant):
embedding_provider = tenant.embedding_model.rsplit('.', 1)[0]
embedding_model = tenant.embedding_model.rsplit('.', 1)[1]
llm_provider = tenant.llm_model.rsplit('.', 1)[0]
llm_model = tenant.llm_model.rsplit('.', 1)[1]
# Set model variables
model_variables = {}
if tenant.es_k:
model_variables['k'] = tenant.es_k
else:
model_variables['k'] = 5
if tenant.es_similarity_threshold:
model_variables['similarity_threshold'] = tenant.es_similarity_threshold
else:
model_variables['similarity_threshold'] = 0.7
if tenant.chat_RAG_temperature:
model_variables['RAG_temperature'] = tenant.chat_RAG_temperature
else:
model_variables['RAG_temperature'] = 0.3
if tenant.chat_no_RAG_temperature:
model_variables['no_RAG_temperature'] = tenant.chat_no_RAG_temperature
else:
model_variables['no_RAG_temperature'] = 0.5
# Set Tuning variables
if tenant.embed_tuning:
model_variables['embed_tuning'] = tenant.embed_tuning
else:
model_variables['embed_tuning'] = False
if tenant.rag_tuning:
model_variables['rag_tuning'] = tenant.rag_tuning
else:
model_variables['rag_tuning'] = False
if tenant.rag_context:
model_variables['rag_context'] = tenant.rag_context
else:
model_variables['rag_context'] = " "
# Set HTML Chunking Variables
model_variables['html_tags'] = tenant.html_tags
model_variables['html_end_tags'] = tenant.html_end_tags
model_variables['html_included_elements'] = tenant.html_included_elements
model_variables['html_excluded_elements'] = tenant.html_excluded_elements
# Set Chunk Size variables
model_variables['min_chunk_size'] = tenant.min_chunk_size
model_variables['max_chunk_size'] = tenant.max_chunk_size
environment = os.getenv('FLASK_ENV', 'development')
portkey_metadata = {'tenant_id': str(tenant.id), 'environment': environment}
# Set Embedding variables
match embedding_provider:
case 'openai':
portkey_headers = createHeaders(api_key=current_app.config.get('PORTKEY_API_KEY'),
provider='openai',
metadata=portkey_metadata)
match embedding_model:
case 'text-embedding-3-small':
api_key = current_app.config.get('OPENAI_API_KEY')
model_variables['embedding_model'] = OpenAIEmbeddings(api_key=api_key,
model='text-embedding-3-small',
base_url=PORTKEY_GATEWAY_URL,
default_headers=portkey_headers
)
model_variables['embedding_db_model'] = EmbeddingSmallOpenAI
case 'text-embedding-3-large':
api_key = current_app.config.get('OPENAI_API_KEY')
model_variables['embedding_model'] = OpenAIEmbeddings(api_key=api_key,
model='text-embedding-3-large',
base_url=PORTKEY_GATEWAY_URL,
default_headers=portkey_headers
)
model_variables['embedding_db_model'] = EmbeddingLargeOpenAI
case _:
raise Exception(f'Error setting model variables for tenant {tenant.id} '
f'error: Invalid embedding model')
case _:
raise Exception(f'Error setting model variables for tenant {tenant.id} '
f'error: Invalid embedding provider')
# Set Chat model variables
match llm_provider:
case 'openai':
portkey_headers = createHeaders(api_key=current_app.config.get('PORTKEY_API_KEY'),
metadata=portkey_metadata,
provider='openai')
tool_calling_supported = False
api_key = current_app.config.get('OPENAI_API_KEY')
model_variables['llm'] = ChatOpenAI(api_key=api_key,
model=llm_model,
temperature=model_variables['RAG_temperature'],
base_url=PORTKEY_GATEWAY_URL,
default_headers=portkey_headers)
model_variables['llm_no_rag'] = ChatOpenAI(api_key=api_key,
model=llm_model,
temperature=model_variables['no_RAG_temperature'],
base_url=PORTKEY_GATEWAY_URL,
default_headers=portkey_headers)
tool_calling_supported = False
match llm_model:
case 'gpt-4-turbo' | 'gpt-4o' | 'gpt-4o-mini':
tool_calling_supported = True
case _:
raise Exception(f'Error setting model variables for tenant {tenant.id} '
f'error: Invalid chat model')
case 'anthropic':
api_key = current_app.config.get('ANTHROPIC_API_KEY')
# Anthropic does not have the same 'generic' model names as OpenAI
llm_model_ext = current_app.config.get('ANTHROPIC_LLM_VERSIONS').get(llm_model)
model_variables['llm'] = ChatAnthropic(api_key=api_key,
model=llm_model_ext,
temperature=model_variables['RAG_temperature'])
model_variables['llm_no_rag'] = ChatAnthropic(api_key=api_key,
model=llm_model_ext,
temperature=model_variables['RAG_temperature'])
tool_calling_supported = True
case _:
raise Exception(f'Error setting model variables for tenant {tenant.id} '
f'error: Invalid chat provider')
if tool_calling_supported:
model_variables['cited_answer_cls'] = CitedAnswer
templates = current_app.config['PROMPT_TEMPLATES'][f'{llm_provider}.{llm_model}']
model_variables['summary_template'] = templates['summary']
model_variables['rag_template'] = templates['rag']
model_variables['history_template'] = templates['history']
model_variables['encyclopedia_template'] = templates['encyclopedia']
model_variables['transcript_template'] = templates['transcript']
model_variables['html_parse_template'] = templates['html_parse']
model_variables['pdf_parse_template'] = templates['pdf_parse']
model_variables['annotation_chunk_length'] = current_app.config['ANNOTATION_TEXT_CHUNK_LENGTH'][tenant.llm_model]
# Transcription Client Variables.
# Using Groq
# api_key = current_app.config.get('GROQ_API_KEY')
# model_variables['transcription_client'] = Groq(api_key=api_key)
# model_variables['transcription_model'] = 'whisper-large-v3'
# Using OpenAI for transcriptions
portkey_metadata = {'tenant_id': str(tenant.id)}
portkey_headers = createHeaders(api_key=current_app.config.get('PORTKEY_API_KEY'),
metadata=portkey_metadata,
provider='openai'
)
api_key = current_app.config.get('OPENAI_API_KEY')
model_variables['transcription_client'] = OpenAI(api_key=api_key,
base_url=PORTKEY_GATEWAY_URL,
default_headers=portkey_headers)
model_variables['transcription_model'] = 'whisper-1'
return model_variables
-def create_language_template(template, language):
     try:
         full_language = langcodes.Language.make(language=language)
         language_template = template.replace('{language}', full_language.display_name())
@@ -209,5 +40,249 @@ def create_language_template(template, language):
     return language_template

-def replace_variable_in_template(template, variable, value):
-    return template.replace(variable, value)
+def replace_variable_in_template(template: str, variable: str, value: str) -> str:
+    """
+    Replace a variable placeholder in template with specified value
+
+    Args:
+        template: Template string with variable placeholder
+        variable: Variable placeholder to replace (e.g. "{tenant_context}")
+        value: Value to insert
+
+    Returns:
+        str: Template with variable placeholder replaced
+    """
+    return template.replace(variable, value or "")
class ModelVariables:
"""Manages model-related variables and configurations"""
def __init__(self, tenant_id: int, variables: Dict[str, Any] = None):
"""
Initialize ModelVariables for a tenant, optionally from pre-loaded variables

Args:
    tenant_id: ID of the tenant to load variables for
    variables: Optional variables dict (e.g. restored from cache); built fresh when omitted
"""
current_app.logger.info(f'Model variables initialized with tenant {tenant_id} and variables \n{variables}')
self.tenant_id = tenant_id
self._variables = variables if variables is not None else self._initialize_variables()
current_app.logger.info(f'Model _variables initialized to {self._variables}')
self._embedding_model = None
self._embedding_model_class = None
self._llm_instances = {}
self.llm_metrics_handler = LLMMetricsHandler()
self._transcription_model = None
def _initialize_variables(self) -> Dict[str, Any]:
"""Initialize the variables dictionary"""
variables = {}
tenant = Tenant.query.get(self.tenant_id)
if not tenant:
raise EveAITenantNotFound(self.tenant_id)
# Set model providers
variables['embedding_provider'], variables['embedding_model'] = tenant.embedding_model.split('.')
variables['llm_provider'], variables['llm_model'] = tenant.llm_model.split('.')
variables['llm_full_model'] = tenant.llm_model
# Set model-specific configurations
model_config = MODEL_CONFIG.get(variables['llm_provider'], {}).get(variables['llm_model'], {})
variables.update(model_config)
# Additional configurations
variables['annotation_chunk_length'] = current_app.config['ANNOTATION_TEXT_CHUNK_LENGTH'][tenant.llm_model]
variables['max_compression_duration'] = current_app.config['MAX_COMPRESSION_DURATION']
variables['max_transcription_duration'] = current_app.config['MAX_TRANSCRIPTION_DURATION']
variables['compression_cpu_limit'] = current_app.config['COMPRESSION_CPU_LIMIT']
variables['compression_process_delay'] = current_app.config['COMPRESSION_PROCESS_DELAY']
return variables
@property
def embedding_model(self):
"""Get the embedding model instance"""
if self._embedding_model is None:
api_key = os.getenv('OPENAI_API_KEY')
self._embedding_model = TrackedOpenAIEmbeddings(
api_key=api_key,
model=self._variables['embedding_model']
)
return self._embedding_model
@property
def embedding_model_class(self):
"""Get the embedding model class"""
if self._embedding_model_class is None:
if self._variables['embedding_model'] == 'text-embedding-3-large':
self._embedding_model_class = EmbeddingLargeOpenAI
else: # text-embedding-3-small
self._embedding_model_class = EmbeddingSmallOpenAI
return self._embedding_model_class
@property
def annotation_chunk_length(self):
return self._variables['annotation_chunk_length']
@property
def max_compression_duration(self):
return self._variables['max_compression_duration']
@property
def max_transcription_duration(self):
return self._variables['max_transcription_duration']
@property
def compression_cpu_limit(self):
return self._variables['compression_cpu_limit']
@property
def compression_process_delay(self):
return self._variables['compression_process_delay']
def get_llm(self, temperature: float = 0.3, **kwargs) -> Any:
"""
Get an LLM instance with specific configuration
Args:
temperature: The temperature for the LLM
**kwargs: Additional configuration parameters
Returns:
An instance of the configured LLM
"""
cache_key = f"{temperature}_{hash(frozenset(kwargs.items()))}"
if cache_key not in self._llm_instances:
provider = self._variables['llm_provider']
model = self._variables['llm_model']
if provider == 'openai':
self._llm_instances[cache_key] = ChatOpenAI(
api_key=os.getenv('OPENAI_API_KEY'),
model=model,
temperature=temperature,
callbacks=[self.llm_metrics_handler],
**kwargs
)
elif provider == 'anthropic':
self._llm_instances[cache_key] = ChatAnthropic(
api_key=os.getenv('ANTHROPIC_API_KEY'),
model=current_app.config['ANTHROPIC_LLM_VERSIONS'][model],
temperature=temperature,
callbacks=[self.llm_metrics_handler],
**kwargs
)
else:
raise ValueError(f"Unsupported LLM provider: {provider}")
return self._llm_instances[cache_key]
@property
def transcription_model(self) -> TrackedOpenAITranscription:
"""Get the transcription model instance"""
if self._transcription_model is None:
api_key = os.getenv('OPENAI_API_KEY')
self._transcription_model = TrackedOpenAITranscription(
api_key=api_key,
model='whisper-1'
)
return self._transcription_model
# Remove the old transcription-related methods since they're now handled by TrackedOpenAITranscription
@property
def transcription_client(self):
raise DeprecationWarning("Use transcription_model instead")
def transcribe(self, *args, **kwargs):
raise DeprecationWarning("Use transcription_model.transcribe() instead")
def get_template(self, template_name: str, version: Optional[str] = None) -> str:
"""
Get a template for the tenant's configured LLM
Args:
template_name: Name of the template to retrieve
version: Optional specific version to retrieve
Returns:
The template content
"""
try:
template = template_manager.get_template(
self._variables['llm_full_model'],
template_name,
version
)
return template.content
except Exception as e:
current_app.logger.error(f"Error getting template {template_name}: {str(e)}")
# Fall back to old template loading if template_manager fails
if template_name in self._variables.get('templates', {}):
return self._variables['templates'][template_name]
raise
class ModelVariablesCacheHandler(CacheHandler[ModelVariables]):
handler_name = 'model_vars_cache' # Used to access handler instance from cache_manager
def __init__(self, region):
super().__init__(region, 'model_variables')
self.configure_keys('tenant_id')
self.subscribe_to_model('Tenant', ['tenant_id'])
def to_cache_data(self, instance: ModelVariables) -> Dict[str, Any]:
return {
'tenant_id': instance.tenant_id,
'variables': instance._variables,
'last_updated': dt.now(tz=tz.utc).isoformat()
}
def from_cache_data(self, data: Dict[str, Any], tenant_id: int, **kwargs) -> ModelVariables:
instance = ModelVariables(tenant_id, data.get('variables'))
return instance
def should_cache(self, value: Dict[str, Any]) -> bool:
required_fields = {'tenant_id', 'variables'}
return all(field in value for field in required_fields)
# Register the handler with the cache manager
cache_manager.register_handler(ModelVariablesCacheHandler, 'eveai_model')
# Helper function to get cached model variables
def get_model_variables(tenant_id: int) -> ModelVariables:
return cache_manager.model_vars_cache.get(
lambda tenant_id: ModelVariables(tenant_id), # function to create ModelVariables if required
tenant_id=tenant_id
)
# Written in a long format, without lambda
# def get_model_variables(tenant_id: int) -> ModelVariables:
# """
# Get ModelVariables instance, either from cache or newly created
#
# Args:
# tenant_id: The tenant's ID
#
# Returns:
# ModelVariables: Instance with either cached or fresh data
#
# Raises:
# TenantNotFoundError: If tenant doesn't exist
# CacheStateError: If cached data is invalid
# """
#
# def create_new_instance(tenant_id: int) -> ModelVariables:
# """Creator function that's called when cache miss occurs"""
# return ModelVariables(tenant_id) # This will initialize fresh variables
#
# return cache_manager.model_vars_cache.get(
# create_new_instance, # Function to create new instance if needed
# tenant_id=tenant_id # Parameters passed to both get() and create_new_instance
# )
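Putting the pieces together, a plausible call path for the new API (a sketch assuming an active Flask app context and a configured cache region; the tenant ID is made up):
model_vars = get_model_variables(tenant_id=42)   # served from cache, or built fresh on a miss
llm = model_vars.get_llm(temperature=0.1)        # ChatOpenAI/ChatAnthropic, cached per configuration
rag_template = model_vars.get_template('rag')    # resolved through template_manager
embeddings = model_vars.embedding_model          # lazily constructed TrackedOpenAIEmbeddings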

View File

@@ -6,7 +6,6 @@ def prefixed_url_for(endpoint, **values):
     prefix = request.headers.get('X-Forwarded-Prefix', '')
     scheme = request.headers.get('X-Forwarded-Proto', request.scheme)
     host = request.headers.get('Host', request.host)
-    current_app.logger.debug(f'prefix: {prefix}, scheme: {scheme}, host: {host}')
     external = values.pop('_external', False)
     generated_url = url_for(endpoint, **values)

View File

@@ -1,4 +1,6 @@
 import os
+import sys
 import gevent
 import time
 from flask import current_app
@@ -28,3 +30,17 @@ def sync_folder(file_path):
     dir_fd = os.open(file_path, os.O_RDONLY)
     os.fsync(dir_fd)
     os.close(dir_fd)
def get_project_root():
"""Get the root directory of the project."""
# Use the module that's actually running (not this file)
module = sys.modules['__main__']
if hasattr(module, '__file__'):
# Get the path to the main module
main_path = os.path.abspath(module.__file__)
# Get the root directory (where the main module is located)
return os.path.dirname(main_path)
else:
# Fallback: use current working directory
return os.getcwd()
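For illustration, both branches under hypothetical conditions:
# python /srv/eveai/eveai_app/app.py  -> '/srv/eveai/eveai_app'  (hypothetical path)
# interactive shell (no __file__)     -> os.getcwd()
print(get_project_root())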

View File

@@ -1,10 +1,15 @@
 from flask import session, current_app
+from sqlalchemy import and_
 from common.models.user import Tenant
+from common.models.entitlements import License
+from common.utils.database import Database
+from common.utils.eveai_exceptions import EveAITenantNotFound, EveAITenantInvalid, EveAINoActiveLicense
+from datetime import datetime as dt, timezone as tz

 # Definition of Trigger Handlers
 def set_tenant_session_data(sender, user, **kwargs):
-    current_app.logger.debug(f"Setting tenant session data for user {user.id}")
     tenant = Tenant.query.filter_by(id=user.tenant_id).first()
     session['tenant'] = tenant.to_dict()
     session['default_language'] = tenant.default_language
@@ -16,4 +21,25 @@ def clear_tenant_session_data(sender, user, **kwargs):
     session.pop('tenant', None)
     session.pop('default_language', None)
     session.pop('default_embedding_model', None)
     session.pop('default_llm_model', None)
def is_valid_tenant(tenant_id):
    if tenant_id == 1:  # The 'root' tenant is always valid
        return True
    tenant = Tenant.query.get(tenant_id)
    if tenant is None:  # check existence before switching schema
        raise EveAITenantNotFound(tenant_id)
    Database(tenant_id).switch_schema()
    if tenant.type == 'Inactive':
        raise EveAITenantInvalid(tenant_id)
    else:
current_date = dt.now(tz=tz.utc).date()
active_license = (License.query.filter_by(tenant_id=tenant_id)
.filter(and_(License.start_date <= current_date,
License.end_date >= current_date))
.one_or_none())
if not active_license:
raise EveAINoActiveLicense(tenant_id)
return True
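A sketch of how this guard might be invoked from a request hook; the blueprint and session layout below are assumptions, not part of this commit.
@api_bp.before_request  # hypothetical blueprint
def ensure_valid_tenant():
    tenant_id = session.get('tenant', {}).get('id')  # assumed session layout
    is_valid_tenant(tenant_id)  # raises EveAITenantNotFound / EveAITenantInvalid / EveAINoActiveLicense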

View File

@@ -11,7 +11,7 @@ def confirm_token(token, expiration=3600):
     try:
         email = serializer.loads(token, salt=current_app.config['SECURITY_PASSWORD_SALT'], max_age=expiration)
     except Exception as e:
-        current_app.logger.debug(f'Error confirming token: {e}')
+        current_app.logger.error(f'Error confirming token: {e}')
         raise
     return email

@@ -35,14 +35,11 @@ def generate_confirmation_token(email):

 def send_confirmation_email(user):
-    current_app.logger.debug(f'Sending confirmation email to {user.email}')
     if not test_smtp_connection():
         raise Exception("Failed to connect to SMTP server")
     token = generate_confirmation_token(user.email)
     confirm_url = prefixed_url_for('security_bp.confirm_email', token=token, _external=True)
-    current_app.logger.debug(f'Confirmation URL: {confirm_url}')
     html = render_template('email/activate.html', confirm_url=confirm_url)
     subject = "Please confirm your email"

@@ -56,10 +53,8 @@ def send_confirmation_email(user):

 def send_reset_email(user):
-    current_app.logger.debug(f'Sending reset email to {user.email}')
     token = generate_reset_token(user.email)
     reset_url = prefixed_url_for('security_bp.reset_password', token=token, _external=True)
-    current_app.logger.debug(f'Reset URL: {reset_url}')
     html = render_template('email/reset_password.html', reset_url=reset_url)
     subject = "Reset Your Password"

@@ -98,4 +93,3 @@ def test_smtp_connection():
     except Exception as e:
         current_app.logger.error(f"Failed to connect to SMTP server: {str(e)}")
         return False

View File

@@ -4,7 +4,7 @@ from flask import Flask
def generate_api_key(prefix="EveAI-Chat"): def generate_api_key(prefix="EveAI-Chat"):
parts = [str(random.randint(1000, 9999)) for _ in range(5)] parts = [str(random.randint(1000, 9999)) for _ in range(8)]
return f"{prefix}-{'-'.join(parts)}" return f"{prefix}-{'-'.join(parts)}"

View File

@@ -0,0 +1,112 @@
from typing import List, Union
import re
class StringListConverter:
"""Utility class for converting between comma-separated strings and lists"""
@staticmethod
def string_to_list(input_string: Union[str, None], allow_empty: bool = True) -> List[str]:
"""
Convert a comma-separated string to a list of strings.
Args:
input_string: Comma-separated string to convert
allow_empty: If True, returns empty list for None/empty input
If False, raises ValueError for None/empty input
Returns:
List of stripped strings
Raises:
ValueError: If input is None/empty and allow_empty is False
"""
if not input_string:
if allow_empty:
return []
raise ValueError("Input string cannot be None or empty")
return [item.strip() for item in input_string.split(',') if item.strip()]
@staticmethod
def list_to_string(input_list: Union[List[str], None], allow_empty: bool = True) -> str:
"""
Convert a list of strings to a comma-separated string.
Args:
input_list: List of strings to convert
allow_empty: If True, returns empty string for None/empty input
If False, raises ValueError for None/empty input
Returns:
Comma-separated string
Raises:
ValueError: If input is None/empty and allow_empty is False
"""
if not input_list:
if allow_empty:
return ''
raise ValueError("Input list cannot be None or empty")
return ', '.join(str(item).strip() for item in input_list)
@staticmethod
def validate_format(input_string: str,
allowed_chars: str = r'a-zA-Z0-9_\-',
min_length: int = 1,
max_length: int = 50) -> bool:
"""
Validate the format of items in a comma-separated string.
Args:
input_string: String to validate
allowed_chars: String of allowed characters (for regex pattern)
min_length: Minimum length for each item
max_length: Maximum length for each item
Returns:
bool: True if format is valid, False otherwise
"""
if not input_string:
return False
# Create regex pattern for individual items
pattern = f'^[{allowed_chars}]{{{min_length},{max_length}}}$'
try:
# Convert to list and check each item
items = StringListConverter.string_to_list(input_string)
return all(bool(re.match(pattern, item)) for item in items)
except Exception:
return False
@staticmethod
def validate_and_convert(input_string: str,
allowed_chars: str = r'a-zA-Z0-9_\-',
min_length: int = 1,
max_length: int = 50) -> List[str]:
"""
Validate and convert a comma-separated string to a list.
Args:
input_string: String to validate and convert
allowed_chars: String of allowed characters (for regex pattern)
min_length: Minimum length for each item
max_length: Maximum length for each item
Returns:
List of validated and converted strings
Raises:
ValueError: If input string format is invalid
"""
if not StringListConverter.validate_format(
input_string, allowed_chars, min_length, max_length
):
raise ValueError(
f"Invalid format. Items must be {min_length}-{max_length} characters "
f"long and contain only these characters: {allowed_chars}"
)
return StringListConverter.string_to_list(input_string)
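A few round-trips showing the intended behaviour of the converter (outputs verified against the code above):
StringListConverter.string_to_list('h1, h2 , p')       # -> ['h1', 'h2', 'p']
StringListConverter.list_to_string(['h1', 'h2', 'p'])  # -> 'h1, h2, p'
StringListConverter.validate_format('h1,h2')           # -> True
StringListConverter.validate_format('h1, bad tag')     # -> False (space not in allowed_chars)
StringListConverter.validate_and_convert('h1,h2')      # -> ['h1', 'h2']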

View File

@@ -0,0 +1,60 @@
from dataclasses import dataclass
from typing import Optional
from datetime import datetime
from flask_jwt_extended import decode_token, verify_jwt_in_request
from flask import current_app
@dataclass
class TokenValidationResult:
"""Clean, simple validation result"""
is_valid: bool
tenant_id: Optional[int] = None
error_message: Optional[str] = None
class TokenValidator:
"""Simplified token validator focused on JWT validation"""
def validate_token(self, token: str) -> TokenValidationResult:
"""
Validate JWT token
Args:
token: The JWT token to validate
Returns:
TokenValidationResult with validation status and tenant_id if valid
"""
try:
# Decode and validate token
decoded_token = decode_token(token)
# Extract tenant_id from token subject
tenant_id = decoded_token.get('sub')
if not tenant_id:
return TokenValidationResult(
is_valid=False,
error_message="Missing tenant ID in token"
)
# Verify token timestamps
now = datetime.utcnow().timestamp()
if not (decoded_token.get('exp', 0) > now >= decoded_token.get('nbf', 0)):
return TokenValidationResult(
is_valid=False,
error_message="Token expired or not yet valid"
)
# Token is valid
return TokenValidationResult(
is_valid=True,
tenant_id=tenant_id
)
except Exception as e:
current_app.logger.error(f"Token validation error: {str(e)}")
return TokenValidationResult(
is_valid=False,
error_message=str(e)
)
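Typical use, sketched under the assumption that flask_jwt_extended is initialised on the app and `token` comes from the incoming request:
validator = TokenValidator()
result = validator.validate_token(token)
if result.is_valid:
    current_app.logger.info(f'Token accepted for tenant {result.tenant_id}')
else:
    current_app.logger.warning(f'Token rejected: {result.error_message}')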

View File

@@ -1,4 +1,4 @@
-from flask import flash
+from flask import flash, current_app

 def prepare_table(model_objects, column_names):
@@ -44,7 +44,8 @@ def form_validation_failed(request, form):
     for fieldName, errorMessages in form.errors.items():
         for err in errorMessages:
             flash(f"Error in {fieldName}: {err}", 'danger')
+            current_app.logger.error(f"Error in {fieldName}: {err}")

 def form_to_dict(form):
     return {field.name: field.data for field in form if field.name != 'csrf_token' and hasattr(field, 'data')}

View File

@@ -1,9 +1,9 @@
+import os
 from os import environ, path
 from datetime import timedelta
 import redis
 from common.utils.prompt_loader import load_prompt_templates
-from eveai_app.views.security_forms import ResetPasswordForm

 basedir = path.abspath(path.dirname(__file__))
@@ -46,7 +46,6 @@ class Config(object):
     SECURITY_EMAIL_SUBJECT_PASSWORD_NOTICE = 'Your Password Has Been Reset'
     SECURITY_EMAIL_PLAINTEXT = False
     SECURITY_EMAIL_HTML = True
-    SECURITY_RESET_PASSWORD_FORM = ResetPasswordForm

     # Ensure Flask-Security-Too is handling CSRF tokens when behind a proxy
     SECURITY_CSRF_PROTECT_MECHANISMS = ['session']
@@ -55,21 +54,21 @@ class Config(object):
     WTF_CSRF_CHECK_DEFAULT = False

     # file upload settings
-    MAX_CONTENT_LENGTH = 16 * 1024 * 1024
+    MAX_CONTENT_LENGTH = 50 * 1024 * 1024
     UPLOAD_EXTENSIONS = ['.txt', '.pdf', '.png', '.jpg', '.jpeg', '.gif']

     # supported languages
     SUPPORTED_LANGUAGES = ['en', 'fr', 'nl', 'de', 'es']

+    # supported currencies
+    SUPPORTED_CURRENCIES = ['€', '$']

     # supported LLMs
     SUPPORTED_EMBEDDINGS = ['openai.text-embedding-3-small', 'openai.text-embedding-3-large', 'mistral.mistral-embed']
     SUPPORTED_LLMS = ['openai.gpt-4o', 'anthropic.claude-3-5-sonnet', 'openai.gpt-4o-mini']
     ANTHROPIC_LLM_VERSIONS = {'claude-3-5-sonnet': 'claude-3-5-sonnet-20240620', }

-    # Load prompt templates dynamically
-    PROMPT_TEMPLATES = {model: load_prompt_templates(model) for model in SUPPORTED_LLMS}

     # Annotation text chunk length
     ANNOTATION_TEXT_CHUNK_LENGTH = {
         'openai.gpt-4o': 10000,
@@ -86,9 +85,6 @@ class Config(object):
     # Anthropic API Keys
     ANTHROPIC_API_KEY = environ.get('ANTHROPIC_API_KEY')

-    # Portkey API Keys
-    PORTKEY_API_KEY = environ.get('PORTKEY_API_KEY')

     # Celery settings
     CELERY_TASK_SERIALIZER = 'json'
     CELERY_RESULT_SERIALIZER = 'json'
@@ -109,6 +105,7 @@ class Config(object):
     # JWT settings
     JWT_SECRET_KEY = environ.get('JWT_SECRET_KEY')
+    JWT_ACCESS_TOKEN_EXPIRES = timedelta(hours=1)  # Set token expiry to 1 hour

     # API Encryption
     API_ENCRYPTION_KEY = environ.get('API_ENCRYPTION_KEY')
@@ -136,7 +133,36 @@ class Config(object):
     MAIL_USE_SSL = True
     MAIL_USERNAME = environ.get('MAIL_USERNAME')
     MAIL_PASSWORD = environ.get('MAIL_PASSWORD')
-    MAIL_DEFAULT_SENDER = ('eveAI Admin', MAIL_USERNAME)
+    MAIL_DEFAULT_SENDER = ('Evie', MAIL_USERNAME)

+    # Email settings for API key notifications
+    PROMOTIONAL_IMAGE_URL = 'https://askeveai.com/wp-content/uploads/2024/07/Evie-Call-scaled.jpg'  # Replace with your actual URL

+    # Langsmith settings
+    LANGCHAIN_TRACING_V2 = True
+    LANGCHAIN_ENDPOINT = 'https://api.smith.langchain.com'
+    LANGCHAIN_PROJECT = "eveai"

+    SUPPORTED_FILE_TYPES = ['pdf', 'html', 'md', 'txt', 'mp3', 'mp4', 'ogg', 'srt']

+    TENANT_TYPES = ['Active', 'Demo', 'Inactive', 'Test', 'Wordpress Starter']

+    # The maximum number of seconds allowed for audio compression (to save resources)
+    MAX_COMPRESSION_DURATION = 60 * 10  # 10 minutes
+    # The maximum number of seconds allowed for transcribing audio
+    MAX_TRANSCRIPTION_DURATION = 60 * 10  # 10 minutes
+    # Maximum CPU usage for a compression task
+    COMPRESSION_CPU_LIMIT = 50
+    # Delay between compressing chunks in seconds
+    COMPRESSION_PROCESS_DELAY = 1

+    # WordPress Integration Settings
+    WORDPRESS_PROTOCOL = os.environ.get('WORDPRESS_PROTOCOL', 'http')
+    WORDPRESS_HOST = os.environ.get('WORDPRESS_HOST', 'host.docker.internal')
+    WORDPRESS_PORT = os.environ.get('WORDPRESS_PORT', '10003')
+    WORDPRESS_BASE_URL = f"{WORDPRESS_PROTOCOL}://{WORDPRESS_HOST}:{WORDPRESS_PORT}"
+    EXTERNAL_WORDPRESS_BASE_URL = 'localhost:10003'

 class DevConfig(Config):
@@ -160,13 +186,21 @@ class DevConfig(Config):
     # file upload settings
     # UPLOAD_FOLDER = '/app/tenant_files'

+    # Redis Settings
+    REDIS_URL = 'redis'
+    REDIS_PORT = '6379'
+    REDIS_BASE_URI = f'redis://{REDIS_URL}:{REDIS_PORT}'

     # Celery settings
     # eveai_app Redis Settings
-    CELERY_BROKER_URL = 'redis://redis:6379/0'
-    CELERY_RESULT_BACKEND = 'redis://redis:6379/0'
+    CELERY_BROKER_URL = f'{REDIS_BASE_URI}/0'
+    CELERY_RESULT_BACKEND = f'{REDIS_BASE_URI}/0'
     # eveai_chat Redis Settings
-    CELERY_BROKER_URL_CHAT = 'redis://redis:6379/3'
-    CELERY_RESULT_BACKEND_CHAT = 'redis://redis:6379/3'
+    CELERY_BROKER_URL_CHAT = f'{REDIS_BASE_URI}/3'
+    CELERY_RESULT_BACKEND_CHAT = f'{REDIS_BASE_URI}/3'
+    # eveai_chat_workers cache Redis Settings
+    CHAT_WORKER_CACHE_URL = f'{REDIS_BASE_URI}/4'

     # Unstructured settings
     # UNSTRUCTURED_API_KEY = 'pDgCrXumYhM3CNvjvwV8msMldXC3uw'
@@ -174,7 +208,7 @@ class DevConfig(Config):
     # UNSTRUCTURED_FULL_URL = 'https://flowitbv-16c4us0m.api.unstructuredapp.io/general/v0/general'

     # SocketIO settings
-    SOCKETIO_MESSAGE_QUEUE = 'redis://redis:6379/1'
+    SOCKETIO_MESSAGE_QUEUE = f'{REDIS_BASE_URI}/1'
     SOCKETIO_CORS_ALLOWED_ORIGINS = '*'
     SOCKETIO_LOGGER = True
     SOCKETIO_ENGINEIO_LOGGER = True
@@ -190,7 +224,7 @@ class DevConfig(Config):
     GC_CRYPTO_KEY = 'envelope-encryption-key'

     # Session settings
-    SESSION_REDIS = redis.from_url('redis://redis:6379/2')
+    SESSION_REDIS = redis.from_url(f'{REDIS_BASE_URI}/2')

     # PATH settings
     ffmpeg_path = '/usr/bin/ffmpeg'
@@ -257,6 +291,8 @@ class ProdConfig(Config):
     # eveai_chat Redis Settings
     CELERY_BROKER_URL_CHAT = f'{REDIS_BASE_URI}/3'
     CELERY_RESULT_BACKEND_CHAT = f'{REDIS_BASE_URI}/3'
+    # eveai_chat_workers cache Redis Settings
+    CHAT_WORKER_CACHE_URL = f'{REDIS_BASE_URI}/4'

     # Session settings
     SESSION_REDIS = redis.from_url(f'{REDIS_BASE_URI}/2')

View File

@@ -1,13 +0,0 @@
{
"type": "service_account",
"project_id": "eveai-420711",
"private_key_id": "e666408e75793321a6134243628346722a71b3a6",
"private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCaGTXCWpq08YD1\nOW4z+gncOlB7T/EIiEwsZgMp6pyUrNioGfiI9YN+uVR0nsUSmFf1YyerRgX7RqD5\nRc7T/OuX8iIvmloK3g7CaFezcVrjnBKcg/QsjDAt/OO3DTk4vykDlh/Kqxx73Jdv\nFH9YSV2H7ToWqIE8CTDnqe8vQS7Bq995c9fPlues31MgndRFg3CFkH0ldfZ4aGm3\n1RnBDyC+9SPQW9e7CJgNN9PWTmOT51Zyy5IRuV5OWePMQaGLVmCo5zNc/EHZEVRu\n1hxJPHL3NNmkYDY8tye8uHgjsAkv8QuwIuUSqnqjoo1/Yg+P0+9GCpePOAJRNxJS\n0YpDFWc5AgMBAAECggEACIU4/hG+bh97BD7JriFhfDDT6bg7g+pCs/hsAlxQ42jv\nOH7pyWuHJXGf5Cwx31usZAq4fcrgYnVpnyl8odIL628y9AjdI66wMuWhZnBFGJgK\nRhHcZWjW8nlXf0lBjwwFe4edzbn1AuWT5fYZ2HWDW2mthY/e8sUwqWPcWsjdifhz\nNR7V+Ia47McKXYgEKjyEObSP1NUOW24zH0DgxS52YPMwa1FoHn6+9Pr8P3TsTSO6\nh6f8tnd81DGl1UH4F5Bj/MHsQXyAMJbu44S4+rZ4Qlk+5xPp9hfCNpxWaHLIkJCg\nYXnC8UAjjyXiqyK0U0RjJf8TS1FxUI4iPepLNqp/pQKBgQDTicZnWFXmCFTnycWp\n66P3Yx0yvlKdUdfnoD/n9NdmUA3TZUlEVfb0IOm7ZFubF/zDTH87XrRiD/NVDbr8\n6bdhA1DXzraxhbfD36Hca6K74Ba4aYJsSWWwI0hL3FDSsv8c7qAIaUF2iwuHb7Y0\nRDcvZqowtQobcQC8cHLc/bI/ZwKBgQC6fMeGaU+lP6jhp9Nb/3Gz5Z1zzCu34IOo\nlgpTNZsowRKYLtjHifrEFi3XRxPKz5thMuJFniof5U4WoMYtRXy+PbgySvBpCia2\nXty05XssnLLMvLpYU5sbQvmOTe20zaIzLohRvvmqrydYIKu62NTubNeuD1L+Zr0q\nz1P5/wUgXwKBgQCW9MrRFQi3j1qHzkVwbOglsmUzwP3TpoQclw8DyIWuTZKQOMeA\nLJh+vr4NLCDzHLsT45MoGv0+vYM4PwQhV+e1I1idqLZXGMV60iv/0A/hYpjUIPch\nr38RoxwEhsRml7XWP7OUTQiaP7+Kdv3fbo6zFOB+wbLkwk90KgrOCX0aIQKBgFeK\n7esmErJjMPdFXk3om0q09nX+mWNHLOb+EDjBiGXYRM9V5oO9PQ/BzaEqh5sEXE+D\noH7H4cR5U3AB5yYnYYi41ngdf7//eO7Rl1AADhOCN9kum1eNX9mrVhU8deMTSRo3\ntNyTBwbeFF0lcRhUY5jNVW4rWW19cz3ed/B6i8CHAoGBAJ/l5rkV74Z5hg6BWNfQ\nYAg/4PLZmjnXIy5QdnWc/PYgbhn5+iVUcL9fSofFzJM1rjFnNcs3S90MGeOmfmo4\nM1WtcQFQbsCGt6+G5uEL/nf74mKUGpOqEM/XSkZ3inweWiDk3LK3iYfXCMBFouIr\n80IlzI1yMf7MVmWn3e1zPjCA\n-----END PRIVATE KEY-----\n",
"client_email": "eveai-349@eveai-420711.iam.gserviceaccount.com",
"client_id": "109927035346319712442",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/eveai-349%40eveai-420711.iam.gserviceaccount.com",
"universe_domain": "googleapis.com"
}

View File

@@ -1,4 +1,8 @@
+import json
 import os
+from datetime import datetime as dt, timezone as tz
+from flask import current_app
 from graypy import GELFUDPHandler
 import logging
 import logging.config
@@ -9,19 +13,173 @@ GRAYLOG_PORT = int(os.environ.get('GRAYLOG_PORT', 12201))
 env = os.environ.get('FLASK_ENV', 'development')

-class CustomLogRecord(logging.LogRecord):
+class TuningLogRecord(logging.LogRecord):
+    """Extended LogRecord that handles both tuning and business event logging"""

     def __init__(self, *args, **kwargs):
         super().__init__(*args, **kwargs)
-        self.component = os.environ.get('COMPONENT_NAME', 'eveai_app')  # Set default component value here
+        # Initialize extra fields after parent initialization
self._extra_fields = {}
self._is_tuning_log = False
self._tuning_type = None
self._tuning_tenant_id = None
self._tuning_catalog_id = None
self._tuning_specialist_id = None
self._tuning_retriever_id = None
self._tuning_processor_id = None
self.component = os.environ.get('COMPONENT_NAME', 'eveai_app')
def getMessage(self):
"""
Override getMessage to handle both string and dict messages
"""
msg = self.msg
if self.args:
msg = msg % self.args
return msg
@property
def is_tuning_log(self):
return self._is_tuning_log
@is_tuning_log.setter
def is_tuning_log(self, value):
object.__setattr__(self, '_is_tuning_log', value)
@property
def tuning_type(self):
return self._tuning_type
@tuning_type.setter
def tuning_type(self, value):
object.__setattr__(self, '_tuning_type', value)
def get_tuning_data(self):
"""Get all tuning-related data if this is a tuning log"""
if not self._is_tuning_log:
return {}
return {
'is_tuning_log': self._is_tuning_log,
'tuning_type': self._tuning_type,
'tuning_tenant_id': self._tuning_tenant_id,
'tuning_catalog_id': self._tuning_catalog_id,
'tuning_specialist_id': self._tuning_specialist_id,
'tuning_retriever_id': self._tuning_retriever_id,
'tuning_processor_id': self._tuning_processor_id,
}
def set_tuning_data(self, tenant_id=None, catalog_id=None, specialist_id=None,
retriever_id=None, processor_id=None):
"""Set tuning-specific data"""
object.__setattr__(self, '_tuning_tenant_id', tenant_id)
object.__setattr__(self, '_tuning_catalog_id', catalog_id)
object.__setattr__(self, '_tuning_specialist_id', specialist_id)
object.__setattr__(self, '_tuning_retriever_id', retriever_id)
object.__setattr__(self, '_tuning_processor_id', processor_id)
-def custom_log_record_factory(*args, **kwargs):
-    record = CustomLogRecord(*args, **kwargs)
-    return record
+class TuningFormatter(logging.Formatter):
+    """Universal formatter for all tuning logs"""
def __init__(self, fmt=None, datefmt=None):
super().__init__(fmt or '%(asctime)s [%(levelname)s] %(name)s: %(message)s',
datefmt or '%Y-%m-%d %H:%M:%S')
def format(self, record):
# First format with the default formatter to handle basic fields
formatted_msg = super().format(record)
# If this is a tuning log, add the additional context
if getattr(record, 'is_tuning_log', False):
try:
identifiers = []
if hasattr(record, 'tenant_id') and record.tenant_id:
identifiers.append(f"Tenant: {record.tenant_id}")
if hasattr(record, 'catalog_id') and record.catalog_id:
identifiers.append(f"Catalog: {record.catalog_id}")
if hasattr(record, 'processor_id') and record.processor_id:
identifiers.append(f"Processor: {record.processor_id}")
formatted_msg = (
f"{formatted_msg}\n"
f"[TUNING {record.tuning_type}] [{' | '.join(identifiers)}]"
)
if hasattr(record, 'tuning_data') and record.tuning_data:
formatted_msg += f"\nData: {json.dumps(record.tuning_data, indent=2)}"
except Exception as e:
return f"{formatted_msg} (Error formatting tuning data: {str(e)})"
return formatted_msg
class GraylogFormatter(logging.Formatter):
"""Maintains existing Graylog formatting while adding tuning fields"""
def format(self, record):
if getattr(record, 'is_tuning_log', False):
# Add tuning-specific fields to Graylog
record.tuning_fields = {
'is_tuning_log': True,
'tuning_type': record.tuning_type,
'tenant_id': record.tenant_id,
'catalog_id': record.catalog_id,
'specialist_id': record.specialist_id,
'retriever_id': record.retriever_id,
'processor_id': record.processor_id,
}
return super().format(record)
class TuningLogger:
"""Helper class to manage tuning logs with consistent structure"""
def __init__(self, logger_name, tenant_id=None, catalog_id=None, specialist_id=None, retriever_id=None, processor_id=None):
self.logger = logging.getLogger(logger_name)
self.tenant_id = tenant_id
self.catalog_id = catalog_id
self.specialist_id = specialist_id
self.retriever_id = retriever_id
self.processor_id = processor_id
def log_tuning(self, tuning_type: str, message: str, data=None, level=logging.DEBUG):
"""Log a tuning event with structured data"""
try:
# Create a standard LogRecord for tuning
record = logging.LogRecord(
name=self.logger.name,
level=level,
pathname='',
lineno=0,
msg=message,
args=(),
exc_info=None
)
# Add tuning-specific attributes
record.is_tuning_log = True
record.tuning_type = tuning_type
record.tenant_id = self.tenant_id
record.catalog_id = self.catalog_id
record.specialist_id = self.specialist_id
record.retriever_id = self.retriever_id
record.processor_id = self.processor_id
if data:
record.tuning_data = data
# Process the record
self.logger.handle(record)
except Exception as e:
fallback_logger = logging.getLogger('eveai_workers')
fallback_logger.exception(f"Failed to log tuning message: {str(e)}")
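The intended call pattern for TuningLogger, with made-up identifiers:
tuning_logger = TuningLogger('tuning', tenant_id=42, catalog_id=7, processor_id=3)
tuning_logger.log_tuning(
    tuning_type='EMBED',
    message='Chunked document into 18 segments',
    data={'min_chunk_size': 8000, 'max_chunk_size': 12000},
)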
 # Set the custom log record factory
-logging.setLogRecordFactory(custom_log_record_factory)
+logging.setLogRecordFactory(TuningLogRecord)
LOGGING = { LOGGING = {
@@ -32,72 +190,104 @@ LOGGING = {
             'level': 'DEBUG',
             'class': 'logging.handlers.RotatingFileHandler',
             'filename': 'logs/eveai_app.log',
-            'maxBytes': 1024 * 1024 * 5,  # 5MB
-            'backupCount': 10,
+            'maxBytes': 1024 * 1024 * 1,  # 1MB
+            'backupCount': 2,
             'formatter': 'standard',
         },
         'file_workers': {
             'level': 'DEBUG',
             'class': 'logging.handlers.RotatingFileHandler',
             'filename': 'logs/eveai_workers.log',
-            'maxBytes': 1024 * 1024 * 5,  # 5MB
-            'backupCount': 10,
+            'maxBytes': 1024 * 1024 * 1,  # 1MB
+            'backupCount': 2,
             'formatter': 'standard',
         },
         'file_chat': {
             'level': 'DEBUG',
             'class': 'logging.handlers.RotatingFileHandler',
             'filename': 'logs/eveai_chat.log',
-            'maxBytes': 1024 * 1024 * 5,  # 5MB
-            'backupCount': 10,
+            'maxBytes': 1024 * 1024 * 1,  # 1MB
+            'backupCount': 2,
             'formatter': 'standard',
         },
         'file_chat_workers': {
             'level': 'DEBUG',
             'class': 'logging.handlers.RotatingFileHandler',
             'filename': 'logs/eveai_chat_workers.log',
-            'maxBytes': 1024 * 1024 * 5,  # 5MB
-            'backupCount': 10,
+            'maxBytes': 1024 * 1024 * 1,  # 1MB
+            'backupCount': 2,
             'formatter': 'standard',
         },
+        'file_api': {
+            'level': 'DEBUG',
+            'class': 'logging.handlers.RotatingFileHandler',
+            'filename': 'logs/eveai_api.log',
+            'maxBytes': 1024 * 1024 * 1,  # 1MB
+            'backupCount': 2,
+            'formatter': 'standard',
+        },
+        'file_beat': {
+            'level': 'DEBUG',
+            'class': 'logging.handlers.RotatingFileHandler',
+            'filename': 'logs/eveai_beat.log',
+            'maxBytes': 1024 * 1024 * 1,  # 1MB
+            'backupCount': 2,
+            'formatter': 'standard',
+        },
+        'file_entitlements': {
+            'level': 'DEBUG',
+            'class': 'logging.handlers.RotatingFileHandler',
+            'filename': 'logs/eveai_entitlements.log',
+            'maxBytes': 1024 * 1024 * 1,  # 1MB
+            'backupCount': 2,
+            'formatter': 'standard',
+        },
         'file_sqlalchemy': {
             'level': 'DEBUG',
             'class': 'logging.handlers.RotatingFileHandler',
             'filename': 'logs/sqlalchemy.log',
-            'maxBytes': 1024 * 1024 * 5,  # 5MB
-            'backupCount': 10,
+            'maxBytes': 1024 * 1024 * 1,  # 1MB
+            'backupCount': 2,
             'formatter': 'standard',
         },
         'file_mailman': {
             'level': 'DEBUG',
             'class': 'logging.handlers.RotatingFileHandler',
             'filename': 'logs/mailman.log',
-            'maxBytes': 1024 * 1024 * 5,  # 5MB
-            'backupCount': 10,
+            'maxBytes': 1024 * 1024 * 1,  # 1MB
+            'backupCount': 2,
             'formatter': 'standard',
         },
         'file_security': {
             'level': 'DEBUG',
             'class': 'logging.handlers.RotatingFileHandler',
             'filename': 'logs/security.log',
-            'maxBytes': 1024 * 1024 * 5,  # 5MB
-            'backupCount': 10,
+            'maxBytes': 1024 * 1024 * 1,  # 1MB
+            'backupCount': 2,
             'formatter': 'standard',
         },
         'file_rag_tuning': {
             'level': 'DEBUG',
             'class': 'logging.handlers.RotatingFileHandler',
             'filename': 'logs/rag_tuning.log',
-            'maxBytes': 1024 * 1024 * 5,  # 5MB
-            'backupCount': 10,
+            'maxBytes': 1024 * 1024 * 1,  # 1MB
+            'backupCount': 2,
             'formatter': 'standard',
         },
         'file_embed_tuning': {
             'level': 'DEBUG',
             'class': 'logging.handlers.RotatingFileHandler',
             'filename': 'logs/embed_tuning.log',
-            'maxBytes': 1024 * 1024 * 5,  # 5MB
-            'backupCount': 10,
+            'maxBytes': 1024 * 1024 * 1,  # 1MB
+            'backupCount': 2,
             'formatter': 'standard',
         },
+        'file_business_events': {
+            'level': 'INFO',
+            'class': 'logging.handlers.RotatingFileHandler',
+            'filename': 'logs/business_events.log',
+            'maxBytes': 1024 * 1024 * 1,  # 1MB
+            'backupCount': 2,
+            'formatter': 'standard',
+        },
         'console': {
@@ -105,25 +295,38 @@ LOGGING = {
             'level': 'DEBUG',
             'formatter': 'standard',
         },
+        'tuning_file': {
+            'level': 'DEBUG',
+            'class': 'logging.handlers.RotatingFileHandler',
+            'filename': 'logs/tuning.log',
+            'maxBytes': 1024 * 1024 * 3,  # 3MB
+            'backupCount': 3,
+            'formatter': 'tuning',
+        },
         'graylog': {
             'level': 'DEBUG',
             'class': 'graypy.GELFUDPHandler',
             'host': GRAYLOG_HOST,
             'port': GRAYLOG_PORT,
-            'debugging_fields': True,  # Set to True if you want to include debugging fields
-            'extra_fields': True,  # Set to True if you want to include extra fields
+            'debugging_fields': True,
+            'formatter': 'graylog'
         },
     },
     'formatters': {
         'standard': {
-            'format': '%(asctime)s [%(levelname)s] %(name)s (%(component)s) [%(module)s:%(lineno)d in %(funcName)s] '
-                      '[Thread: %(threadName)s]: %(message)s'
+            'format': '%(asctime)s [%(levelname)s] %(name)s (%(component)s) [%(module)s:%(lineno)d]: %(message)s',
+            'datefmt': '%Y-%m-%d %H:%M:%S'
        },
        'graylog': {
            'format': '[%(levelname)s] %(name)s (%(component)s) [%(module)s:%(lineno)d in %(funcName)s] '
                      '[Thread: %(threadName)s]: %(message)s',
            'datefmt': '%Y-%m-%d %H:%M:%S',
+           '()': GraylogFormatter
        },
+       'tuning': {
+           '()': TuningFormatter,
+           'datefmt': '%Y-%m-%d %H:%M:%S UTC'
+       }
    },
    'loggers': {
        'eveai_app': {  # logger for the eveai_app
@@ -146,6 +349,21 @@ LOGGING = {
            'level': 'DEBUG',
            'propagate': False
        },
+       'eveai_api': {  # logger for the eveai_api
+           'handlers': ['file_api', 'graylog', ] if env == 'production' else ['file_api', ],
+           'level': 'DEBUG',
+           'propagate': False
+       },
+       'eveai_beat': {  # logger for the eveai_beat
+           'handlers': ['file_beat', 'graylog', ] if env == 'production' else ['file_beat', ],
+           'level': 'DEBUG',
+           'propagate': False
+       },
+       'eveai_entitlements': {  # logger for the eveai_entitlements
+           'handlers': ['file_entitlements', 'graylog', ] if env == 'production' else ['file_entitlements', ],
+           'level': 'DEBUG',
+           'propagate': False
+       },
        'sqlalchemy.engine': {  # logger for sqlalchemy
            'handlers': ['file_sqlalchemy', 'graylog', ] if env == 'production' else ['file_sqlalchemy', ],
            'level': 'DEBUG',
@@ -161,15 +379,16 @@ LOGGING = {
            'level': 'DEBUG',
            'propagate': False
        },
-       'rag_tuning': {  # logger for the rag_tuning
-           'handlers': ['file_rag_tuning', 'graylog', ] if env == 'production' else ['file_rag_tuning', ],
-           'level': 'DEBUG',
-           'propagate': False
-       },
-       'embed_tuning': {  # logger for the embed_tuning
-           'handlers': ['file_embed_tuning', 'graylog', ] if env == 'production' else ['file_embed_tuning', ],
-           'level': 'DEBUG',
-           'propagate': False
-       },
+       'business_events': {
+           'handlers': ['file_business_events', 'graylog'],
+           'level': 'DEBUG',
+           'propagate': False
+       },
+       # Single tuning logger
+       'tuning': {
+           'handlers': ['tuning_file', 'graylog'] if env == 'production' else ['tuning_file'],
+           'level': 'DEBUG',
+           'propagate': False,
+       },
        '': {  # root logger
            'handlers': ['console'],

config/model_config.py Normal file
View File

@@ -0,0 +1,41 @@
MODEL_CONFIG = {
"openai": {
"gpt-4o": {
"tool_calling_supported": True,
"processing_chunk_size": 10000,
"processing_chunk_overlap": 200,
"processing_min_chunk_size": 8000,
"processing_max_chunk_size": 12000,
"prompt_templates": [
"summary", "rag", "history", "encyclopedia",
"transcript", "html_parse", "pdf_parse"
]
},
"gpt-4o-mini": {
"tool_calling_supported": True,
"processing_chunk_size": 10000,
"processing_chunk_overlap": 200,
"processing_min_chunk_size": 8000,
"processing_max_chunk_size": 12000,
"prompt_templates": [
"summary", "rag", "history", "encyclopedia",
"transcript", "html_parse", "pdf_parse"
]
},
# Add other OpenAI models here
},
"anthropic": {
"claude-3-5-sonnet": {
"tool_calling_supported": True,
"processing_chunk_size": 10000,
"processing_chunk_overlap": 200,
"processing_min_chunk_size": 8000,
"processing_max_chunk_size": 12000,
"prompt_templates": [
"summary", "rag", "history", "encyclopedia",
"transcript", "html_parse", "pdf_parse"
]
},
# Add other Anthropic models here
},
}
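A minimal sketch of how this table might be consulted at runtime; the helper and its error handling are assumptions, not code from the repo:

from config.model_config import MODEL_CONFIG

def get_model_settings(provider: str, model: str) -> dict:
    # Raise a clear error for unknown provider/model combinations.
    try:
        return MODEL_CONFIG[provider][model]
    except KeyError:
        raise ValueError(f"No configuration for {provider}/{model}")

settings = get_model_settings("openai", "gpt-4o-mini")
chunk_size = settings["processing_chunk_size"]       # 10000
supports_tools = settings["tool_calling_supported"]  # True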


@@ -1,88 +0,0 @@
html_parse: |
You are a top administrative assistant specialized in transforming given HTML into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
# Best practices are:
- Respect wordings and language(s) used in the HTML.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- Sub-headers can be used as lists. This is true when a header is followed by a series of sub-headers without content (paragraphs or listed items). Present those sub-headers as a list.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input html file. Answer with the pure markdown, without any other text.
HTML is between triple backticks.
```{html}```
pdf_parse: |
You are a top administrative aid specialized in transforming given PDF-files into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
# Best practices are:
- Respect wordings and language(s) used in the PDF.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- When headings are numbered, show the numbering and define the header level.
- A new item is started when a <return> is found before a full line is reached. In order to know the number of characters in a line, please check the document and the context within the document (e.g. an image could limit the number of characters temporarily).
- Paragraphs are to be stripped of newlines so they become easily readable.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input pdf content. Answer with the pure markdown, without any other text.
PDF content is between triple backticks.
```{pdf_content}```
summary: |
Write a concise summary of the text in {language}. The text is delimited between triple backticks.
```{text}```
rag: |
Answer the question based on the following context, delimited between triple backticks.
{tenant_context}
Use the following {language} in your communication, and cite the sources used.
If the question cannot be answered using the given context, say "I have insufficient information to answer this question."
Context:
```{context}```
Question:
{question}
history: |
You are a helpful assistant that details a question based on a previous context,
in such a way that the question is understandable without the previous context.
The context is a conversation history, with the HUMAN asking questions, the AI answering questions.
The history is delimited between triple backticks.
You answer by stating the question in {language}.
History:
```{history}```
Question to be detailed:
{question}
encyclopedia: |
You have a lot of background knowledge, and as such you are some kind of
'encyclopedia' to explain general terminology. Only answer if you have a clear understanding of the question.
If not, say you do not have sufficient information to answer the question. Use the {language} in your communication.
Question:
{question}
transcript: |
"""You are a top administrative assistant specialized in transforming given transcriptions into markdown formatted files. Your task is to process and improve the given transcript, not to summarize it.
IMPORTANT INSTRUCTIONS:
1. DO NOT summarize the transcript and don't make your own interpretations. Return the FULL, COMPLETE transcript with improvements.
2. Improve any errors in the transcript based on context.
3. Respect the original wording and language(s) used in the transcription. Main Language used is {language}.
4. Divide the transcript into paragraphs for better readability. Each paragraph ONLY contains ORIGINAL TEXT.
5. Group related paragraphs into logical sections.
6. Add appropriate headers (using markdown syntax) to each section in {language}.
7. We do not need an overall title. Just add logical headers
8. Ensure that the entire transcript is included in your response, from start to finish.
REMEMBER:
- Your output should be the complete transcript in markdown format, NOT A SUMMARY OR ANALYSIS.
- Include EVERYTHING from the original transcript, just organized and formatted better.
- Just return the markdown version of the transcript, without any other text such as an introduction or a summary.
Here is the transcript to process (between triple backticks):
```{transcript}```
Process this transcript according to the instructions above and return the full, formatted markdown version.
"""


@@ -1,79 +0,0 @@
html_parse: |
You are a top administrative assistant specialized in transforming given HTML into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
# Best practices are:
- Respect wordings and language(s) used in the HTML.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- Sub-headers can be used as lists. This is true when a header is followed by a series of sub-headers without content (paragraphs or listed items). Present those sub-headers as a list.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input html file. Answer with the pure markdown, without any other text.
HTML is between triple backquotes.
```{html}```
pdf_parse: |
You are a top administrative aid specialized in transforming given PDF-files into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
# Best practices are:
- Respect wordings and language(s) used in the PDF.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- When headings are numbered, show the numbering and define the header level.
- A new item is started when a <return> is found before a full line is reached. In order to know the number of characters in a line, please check the document and the context within the document (e.g. an image could limit the number of characters temporarily).
- Paragraphs are to be stripped of newlines so they become easily readable.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input pdf content. Answer with the pure markdown, without any other text.
PDF content is between triple backquotes.
```{pdf_content}```
summary: |
Write a concise summary of the text in {language}. The text is delimited between triple backquotes.
```{text}```
rag: |
Answer the question based on the following context, delimited between triple backquotes.
{tenant_context}
Use the following {language} in your communication, and cite the sources used.
If the question cannot be answered using the given context, say "I have insufficient information to answer this question."
Context:
```{context}```
Question:
{question}
history: |
You are a helpful assistant that details a question based on a previous context,
in such a way that the question is understandable without the previous context.
The context is a conversation history, with the HUMAN asking questions, the AI answering questions.
The history is delimited between triple backquotes.
You answer by stating the question in {language}.
History:
```{history}```
Question to be detailed:
{question}
encyclopedia: |
You have a lot of background knowledge, and as such you are some kind of
'encyclopedia' to explain general terminology. Only answer if you have a clear understanding of the question.
If not, say you do not have sufficient information to answer the question. Use the {language} in your communication.
Question:
{question}
transcript: |
You are a top administrative assistant specialized in transforming given transcriptions into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system. The transcriptions originate from podcast, videos and similar material.
# Best practices and steps are:
- Respect wordings and language(s) used in the transcription. Main language is {language}.
- Sometimes, the transcript contains speech of several people participating in a conversation. Although these are not obvious from reading the file, try to detect when other people are speaking.
- Divide the transcript into several logical parts. Ensure questions and their answers are in the same logical part.
- annotate the text to identify these logical parts using headings in {language}.
- improve errors in the transcript given the context, but do not change the meaning and intentions of the transcription.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of processing the complete input transcription. Answer with the pure markdown, without any other text.
The transcript is between triple backquotes.
```{transcript}```


@@ -1,79 +0,0 @@
html_parse: |
You are a top administrative assistant specialized in transforming given HTML into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
# Best practices are:
- Respect wordings and language(s) used in the HTML.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- Sub-headers can be used as lists. This is true when a header is followed by a series of sub-headers without content (paragraphs or listed items). Present those sub-headers as a list.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input html file. Answer with the pure markdown, without any other text.
HTML is between triple backquotes.
```{html}```
pdf_parse: |
You are a top administrative aid specialized in transforming given PDF-files into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
# Best practices are:
- Respect wordings and language(s) used in the PDF.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- When headings are numbered, show the numbering and define the header level.
- A new item is started when a <return> is found before a full line is reached. In order to know the number of characters in a line, please check the document and the context within the document (e.g. an image could limit the number of characters temporarily).
- Paragraphs are to be stripped of newlines so they become easily readable.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input pdf content. Answer with the pure markdown, without any other text.
PDF content is between triple backquotes.
```{pdf_content}```
summary: |
Write a concise summary of the text in {language}. The text is delimited between triple backquotes.
```{text}```
rag: |
Answer the question based on the following context, delimited between triple backquotes.
{tenant_context}
Use the following {language} in your communication, and cite the sources used.
If the question cannot be answered using the given context, say "I have insufficient information to answer this question."
Context:
```{context}```
Question:
{question}
history: |
You are a helpful assistant that details a question based on a previous context,
in such a way that the question is understandable without the previous context.
The context is a conversation history, with the HUMAN asking questions, the AI answering questions.
The history is delimited between triple backquotes.
You answer by stating the question in {language}.
History:
```{history}```
Question to be detailed:
{question}
encyclopedia: |
You have a lot of background knowledge, and as such you are some kind of
'encyclopedia' to explain general terminology. Only answer if you have a clear understanding of the question.
If not, say you do not have sufficient information to answer the question. Use the {language} in your communication.
Question:
{question}
transcript: |
You are a top administrative assistant specialized in transforming given transcriptions into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system. The transcriptions originate from podcast, videos and similar material.
# Best practices and steps are:
- Respect wordings and language(s) used in the transcription. Main language is {language}.
- Sometimes, the transcript contains speech of several people participating in a conversation. Although these are not obvious from reading the file, try to detect when other people are speaking.
- Divide the transcript into several logical parts. Ensure questions and their answers are in the same logical part.
- annotate the text to identify these logical parts using headings in {language}.
- improve errors in the transcript given the context, but do not change the meaning and intentions of the transcription.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of processing the complete input transcription. Answer with the pure markdown, without any other text.
The transcript is between triple backquotes.
```{transcript}```


@@ -0,0 +1,12 @@
version: "1.0.0"
content: |
You have a lot of background knowledge, and as such you are some kind of
'encyclopedia' to explain general terminology. Only answer if you have a clear understanding of the question.
If not, say you do not have sufficient information to answer the question. Use the {language} in your communication.
Question:
{question}
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "A background information retriever for Evie"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,16 @@
version: "1.0.0"
content: |
You are a helpful assistant that details a question based on a previous context,
in such a way that the question is understandable without the previous context.
The context is a conversation history, with the HUMAN asking questions, the AI answering questions.
The history is delimited between triple backquotes.
You answer by stating the question in {language}.
History:
```{history}```
Question to be detailed:
{question}
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "Prompt to further detail a question based on the previous conversation"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,20 @@
version: "1.0.0"
content: |
You are a top administrative assistant specialized in transforming given HTML into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
# Best practices are:
- Respect wordings and language(s) used in the HTML.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- Sub-headers can be used as lists. This is true when a header is followed by a series of sub-headers without content (paragraphs or listed items). Present those sub-headers as a list.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input html file. Answer with the pure markdown, without any other text.
HTML is between triple backquotes.
```{html}```
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "An aid in transforming HTML-based inputs to markdown"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,23 @@
version: "1.0.0"
content: |
You are a top administrative aid specialized in transforming given PDF-files into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system.
The content you get is already processed (some markdown already generated), but needs to be corrected. For large files, you may receive only portions of the full file. Consider this when processing the content.
# Best practices are:
- Respect wordings and language(s) used in the provided content.
- The following items need to be considered: headings, paragraphs, listed items (numbered or not) and tables. Images can be neglected.
- When headings are numbered, show the numbering and define the header level. You may have to correct current header levels, as preprocessing is known to make errors.
- A new item is started when a <return> is found before a full line is reached. In order to know the number of characters in a line, please check the document and the context within the document (e.g. an image could limit the number of characters temporarily).
- Paragraphs are to be stripped of newlines so they become easily readable.
- Be careful of encoding of the text. Everything needs to be human readable.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of the processing of the complete input pdf content. Answer with the pure markdown, without any other text.
PDF content is between triple backquotes.
```{pdf_content}```
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "A assistant to parse PDF-content into markdown"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,15 @@
version: "1.0.0"
content: |
Answer the question based on the following context, delimited between triple backquotes.
{tenant_context}
Use the following {language} in your communication, and cite the sources used at the end of the full conversation.
If the question cannot be answered using the given context, say "I have insufficient information to answer this question."
Context:
```{context}```
Question:
{question}
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "The Main RAG retriever"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,9 @@
version: "1.0.0"
content: |
Write a concise summary of the text in {language}. The text is delimited between triple backquotes.
```{text}```
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "An assistant to create a summary when multiple chunks are required for 1 file"
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,25 @@
version: "1.0.0"
content: |
You are a top administrative assistant specialized in transforming given transcriptions into markdown formatted files. The generated files will be used to generate embeddings in a RAG-system. The transcriptions originate from podcast, videos and similar material.
You may receive information in different chunks. If you're not receiving the first chunk, you'll get the last part of the previous chunk, including its title in between triple $. Consider this last part and the title as the start of the new chunk.
# Best practices and steps are:
- Respect wordings and language(s) used in the transcription. Main language is {language}.
- Sometimes, the transcript contains speech of several people participating in a conversation. Although these are not obvious from reading the file, try to detect when other people are speaking.
- Divide the transcript into several logical parts. Ensure questions and their answers are in the same logical part. Don't make logical parts too small. They should contain at least 7 or 8 sentences.
- annotate the text to identify these logical parts using headings in {language}.
- improve errors in the transcript given the context, but do not change the meaning and intentions of the transcription.
Process the file carefully, and take a stepped approach. The resulting markdown should be the result of processing the complete input transcription. Answer with the pure markdown, without any other text.
The transcript is between triple backquotes.
$$${previous_part}$$$
```{transcript}```
metadata:
author: "Josako"
date_added: "2024-11-10"
description: "An assistant to transform a transcript to markdown."
changes: "Initial version migrated from flat file structure"


@@ -0,0 +1,29 @@
# Catalog Types
CATALOG_TYPES = {
"STANDARD_CATALOG": {
"name": "Standard Catalog",
"Description": "A Catalog with information in Evie's Library, to be considered as a whole",
"configuration": {},
"document_version_configurations": []
},
"DOSSIER": {
"name": "Dossier Catalog",
"Description": "A Catalog with information in Evie's Library in which several Dossiers can be stored",
"configuration": {
"tagging_fields": {
"name": "Tagging Fields",
"type": "tagging_fields",
"description": """Define the metadata fields that will be used for tagging documents.
Each field must have:
- type: one of 'string', 'integer', 'float', 'date', 'enum'
- required: boolean indicating if the field is mandatory
- description: field description
- allowed_values: list of values (for enum type only)
- min_value/max_value: range limits (for numeric types only)""",
"required": True,
"default": {},
}
},
"document_version_configurations": ["tagging_fields"]
},
}
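To make the tagging_fields contract concrete, here is a hypothetical value a DOSSIER catalog could store; the field names are invented, but each entry follows the schema in the description above:

# Hypothetical tagging_fields value for a DOSSIER catalog; every entry
# carries type, required and description, plus allowed_values or
# min_value/max_value where the type calls for it.
tagging_fields = {
    "dossier_id": {
        "type": "string",
        "required": True,
        "description": "Unique reference of the dossier",
    },
    "status": {
        "type": "enum",
        "required": True,
        "description": "Lifecycle state of the dossier",
        "allowed_values": ["open", "in_review", "closed"],
    },
    "priority": {
        "type": "integer",
        "required": False,
        "description": "Handling priority",
        "min_value": 1,
        "max_value": 5,
    },
}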


@@ -0,0 +1,56 @@
# Processor Types
PROCESSOR_TYPES = {
"HTML_PROCESSOR": {
"name": "HTML Processor",
"file_types": "html",
"Description": "A processor for HTML files",
"configuration": {
"html_tags": {
"name": "HTML Tags",
"type": "string",
"description": "A comma-separated list of HTML tags",
"required": True,
"default": "p, h1, h2, h3, h4, h5, h6, li, table, thead, tbody, tr, td"
},
"html_end_tags": {
"name": "HTML End Tags",
"type": "string",
"description": "A comma-separated list of HTML end tags (where can the chunk end)",
"required": True,
"default": "p, li, table"
},
"html_included_elements": {
"name": "HTML Included Elements",
"type": "string",
"description": "A comma-separated list of elements to be included",
"required": True,
"default": "article, main"
},
"html_excluded_elements": {
"name": "HTML Excluded Elements",
"type": "string",
"description": "A comma-separated list of elements to be excluded",
"required": False,
"default": "header, footer, nav, script"
},
"html_excluded_classes": {
"name": "HTML Excluded Classes",
"type": "string",
"description": "A comma-separated list of classes to be excluded",
"required": False,
},
},
},
"PDF_PROCESSOR": {
"name": "PDF Processor",
"file_types": "pdf",
"Description": "A Processor for PDF files",
"configuration": {}
},
"AUDIO_PROCESSOR": {
"name": "AUDIO Processor",
"file_types": "mp3, mp4, ogg",
"Description": "A Processor for audio files",
"configuration": {}
},
}
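Since the HTML processor stores its settings as comma-separated strings, consumers presumably split them before use. A sketch, assuming the dict above is importable as PROCESSOR_TYPES (the helper name is invented):

def csv_setting(value: str) -> list[str]:
    # Turn "p, li, table" into ['p', 'li', 'table'], dropping empties.
    return [item.strip() for item in value.split(",") if item.strip()]

html_cfg = PROCESSOR_TYPES["HTML_PROCESSOR"]["configuration"]
end_tags = csv_setting(html_cfg["html_end_tags"]["default"])
# ['p', 'li', 'table']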


@@ -0,0 +1,31 @@
# Retriever Types
RETRIEVER_TYPES = {
"STANDARD_RAG": {
"name": "Standard RAG Retriever",
"description": "Retrieving all embeddings conform the query",
"configuration": {
"es_k": {
"name": "es_k",
"type": "int",
"description": "K-value to retrieve embeddings (max embeddings retrieved)",
"required": True,
"default": 8,
},
"es_similarity_threshold": {
"name": "es_similarity_threshold",
"type": "float",
"description": "Similarity threshold for retrieving embeddings",
"required": True,
"default": 0.3,
},
},
"arguments": {
"query": {
"name": "query",
"type": "str",
"description": "Query to retrieve embeddings",
"required": True,
},
}
}
}
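A sketch of how a retriever's effective settings could be resolved from this table — declared defaults first, then per-retriever overrides, with required settings enforced (the helper is an assumption, not repo code; RETRIEVER_TYPES is the dict above):

def resolve_config(type_def: dict, overrides: dict) -> dict:
    # Merge configured defaults with instance-specific overrides.
    resolved = {}
    for key, spec in type_def["configuration"].items():
        if key in overrides:
            resolved[key] = overrides[key]
        elif "default" in spec:
            resolved[key] = spec["default"]
        elif spec.get("required"):
            raise ValueError(f"Missing required setting: {key}")
    return resolved

cfg = resolve_config(RETRIEVER_TYPES["STANDARD_RAG"], {"es_k": 12})
# {'es_k': 12, 'es_similarity_threshold': 0.3}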


@@ -0,0 +1,11 @@
# Service Types
SERVICE_TYPES = {
    "CHAT": {
        "name": "CHAT",
        "description": "Service allowing use of the CHAT functionality.",
    },
    "DOCAPI": {
        "name": "DOCAPI",
        "description": "Service allowing use of the document API functionality.",
    },
}
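A trivial illustration of how an entitlement check against this table might look; purely hypothetical, the real enforcement is not shown here:

def has_service(tenant_services: set[str], service: str) -> bool:
    # True when the service exists and the tenant is entitled to it.
    return service in SERVICE_TYPES and service in tenant_services

assert has_service({"CHAT"}, "CHAT")
assert not has_service({"CHAT"}, "DOCAPI")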


@@ -0,0 +1,62 @@
# Specialist Types
SPECIALIST_TYPES = {
"STANDARD_RAG": {
"name": "Q&A RAG Specialist",
"description": "Standard Q&A through RAG Specialist",
"configuration": {
"specialist_context": {
"name": "Specialist Context",
"type": "text",
"description": "The context to be used by the specialist.",
"required": False,
},
"temperature": {
"name": "Temperature",
"type": "number",
"description": "The inference temperature to be used by the specialist.",
"required": False,
"default": 0.3
}
},
"arguments": {
"language": {
"name": "Language",
"type": "str",
"description": "Language code to be used for receiving questions and giving answers",
"required": True,
},
"query": {
"name": "query",
"type": "str",
"description": "Query to answer",
"required": True,
}
},
"results": {
"detailed_query": {
"name": "detailed_query",
"type": "str",
"description": "The query detailed with the Chat Session History.",
"required": True,
},
"answer": {
"name": "answer",
"type": "str",
"description": "Answer to the query",
"required": True,
},
"citations": {
"name": "citations",
"type": "List[str]",
"description": "List of citations",
"required": False,
},
"insufficient_info": {
"name": "insufficient_info",
"type": "bool",
"description": "Whether or not the query is insufficient info",
"required": True,
},
}
}
}
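For concreteness, a result payload satisfying the STANDARD_RAG results contract above could look like this (all values invented):

# Invented example conforming to the 'results' schema above.
result = {
    "detailed_query": "What does the 2024 catalog say about shipping costs?",
    "answer": "Shipping is free above 50 EUR, according to the catalog.",
    "citations": ["catalog-2024.pdf, p. 12"],
    "insufficient_info": False,
}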


@@ -141,7 +141,7 @@ if [ $# -eq 0 ]; then
    SERVICES=()
    while IFS= read -r line; do
        SERVICES+=("$line")
-    done < <(yq e '.services | keys | .[]' compose_dev.yaml | grep -E '^(nginx|eveai_)')
+    done < <(yq e '.services | keys | .[]' compose_dev.yaml | grep -E '^(nginx|eveai_|flower)')
else
    SERVICES=("$@")
fi
@@ -158,7 +158,7 @@ docker buildx use eveai_builder
# Loop through services
for SERVICE in "${SERVICES[@]}"; do
-    if [[ "$SERVICE" == "nginx" || "$SERVICE" == eveai_* ]]; then
+    if [[ "$SERVICE" == "nginx" || "$SERVICE" == eveai_* || "$SERVICE" == "flower" ]]; then
        if process_service "$SERVICE"; then
            echo "Successfully processed $SERVICE"
        else
@@ -169,4 +169,5 @@ for SERVICE in "${SERVICES[@]}"; do
    fi
done
-echo "All specified services processed."
+echo -e "\033[35mAll specified services processed.\033[0m"
+echo -e "\033[35mFinished at $(date +"%d/%m/%Y %H:%M:%S")\033[0m"


@@ -18,25 +18,22 @@ x-common-variables: &common-variables
  FLASK_DEBUG: true
  SECRET_KEY: '97867c1491bea5ee6a8e8436eb11bf2ba6a69ff53ab1b17ecba450d0f2e572e1'
  SECURITY_PASSWORD_SALT: '228614859439123264035565568761433607235'
- MAIL_USERNAME: eveai_super@flow-it.net
- MAIL_PASSWORD: '$$6xsWGbNtx$$CFMQZqc*'
+ MAIL_USERNAME: evie@askeveai.com
+ MAIL_PASSWORD: 'D**0z@UGfJOI@yv3eC5'
  MAIL_SERVER: mail.flow-it.net
  MAIL_PORT: 465
+ REDIS_URL: redis
+ REDIS_PORT: '6379'
  OPENAI_API_KEY: 'sk-proj-8R0jWzwjL7PeoPyMhJTZT3BlbkFJLb6HfRB2Hr9cEVFWEhU7'
  GROQ_API_KEY: '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'
  ANTHROPIC_API_KEY: 'sk-ant-api03-c2TmkzbReeGhXBO5JxNH6BJNylRDonc9GmZd0eRbrvyekec2'
- PORTKEY_API_KEY: 'T2Dt4QTpgCvWxa1OftYCJtj7NcDZ'
  JWT_SECRET_KEY: 'bsdMkmQ8ObfMD52yAFg4trrvjgjMhuIqg2fjDpD/JqvgY0ccCcmlsEnVFmR79WPiLKEA3i8a5zmejwLZKl4v9Q=='
  API_ENCRYPTION_KEY: 'xfF5369IsredSrlrYZqkM9ZNrfUASYYS6TCcAR9UKj4='
  MINIO_ENDPOINT: minio:9000
  MINIO_ACCESS_KEY: minioadmin
  MINIO_SECRET_KEY: minioadmin
  NGINX_SERVER_NAME: 'localhost http://macstudio.ask-eve-ai-local.com/'
- LANGCHAIN_API_KEY: "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"
-
-networks:
-  eveai-network:
-    driver: bridge

services:
  nginx:
@@ -57,6 +54,10 @@ services:
      - ../nginx/sites-enabled:/etc/nginx/sites-enabled
      - ../nginx/static:/etc/nginx/static
      - ../nginx/public:/etc/nginx/public
+     - ../integrations/Wordpress/eveai-chat/assets/css/eveai-chat-style.css:/etc/nginx/static/css/eveai-chat-style.css
+     - ../integrations/Wordpress/eveai-chat/assets/js/eveai-chat-widget.js:/etc/nginx/static/js/eveai-chat-widget.js
+     - ../integrations/Wordpress/eveai-chat/assets/js/eveai-chat-widget.js:/etc/nginx/static/js/eveai-token-manager.js
+     - ../integrations/Wordpress/eveai-chat/assets/js/eveai-sdk.js:/etc/nginx/static/js/eveai-sdk.js
      - ./logs/nginx:/var/log/nginx
    depends_on:
      - eveai_app
@@ -84,7 +85,7 @@ services:
      - ../migrations:/app/migrations
      - ../scripts:/app/scripts
      - ../patched_packages:/app/patched_packages
-     - eveai_logs:/app/logs
+     - ./eveai_logs:/app/logs
    depends_on:
      db:
        condition: service_healthy
@@ -93,12 +94,11 @@ services:
      minio:
        condition: service_healthy
    healthcheck:
-     test: ["CMD", "curl", "-f", "http://localhost:5001/health"]
-     interval: 10s
-     timeout: 5s
-     retries: 5
-     # entrypoint: ["scripts/entrypoint.sh"]
-     # command: ["scripts/start_eveai_app.sh"]
+     test: ["CMD", "curl", "-f", "http://localhost:5001/healthz/ready"]
+     interval: 30s
+     timeout: 1s
+     retries: 3
+     start_period: 30s
    networks:
      - eveai-network
@@ -110,8 +110,6 @@ services:
    platforms:
      - linux/amd64
      - linux/arm64
-   # ports:
-   #   - 5001:5001
    environment:
      <<: *common-variables
      COMPONENT_NAME: eveai_workers
@@ -121,7 +119,7 @@ services:
      - ../config:/app/config
      - ../scripts:/app/scripts
      - ../patched_packages:/app/patched_packages
-     - eveai_logs:/app/logs
+     - ./eveai_logs:/app/logs
    depends_on:
      db:
        condition: service_healthy
@@ -129,13 +127,6 @@ services:
      redis:
        condition: service_healthy
      minio:
        condition: service_healthy
-   # healthcheck:
-   #   test: [ "CMD", "curl", "-f", "http://localhost:5001/health" ]
-   #   interval: 10s
-   #   timeout: 5s
-   #   retries: 5
-   # entrypoint: [ "sh", "-c", "scripts/entrypoint.sh" ]
-   # command: [ "sh", "-c", "scripts/start_eveai_workers.sh" ]
    networks:
      - eveai-network
@@ -158,19 +149,18 @@ services:
      - ../config:/app/config
      - ../scripts:/app/scripts
      - ../patched_packages:/app/patched_packages
-     - eveai_logs:/app/logs
+     - ./eveai_logs:/app/logs
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
-     test: [ "CMD", "curl", "-f", "http://localhost:5002/health" ] # Adjust based on your health endpoint
-     interval: 10s
-     timeout: 5s
-     retries: 5
-     # entrypoint: [ "sh", "-c", "scripts/entrypoint.sh" ]
-     # command: ["sh", "-c", "scripts/start_eveai_chat.sh"]
+     test: [ "CMD", "curl", "-f", "http://localhost:5002/healthz/ready" ] # Adjust based on your health endpoint
+     interval: 30s
+     timeout: 1s
+     retries: 3
+     start_period: 30s
    networks:
      - eveai-network
@@ -182,8 +172,6 @@ services:
    platforms:
      - linux/amd64
      - linux/arm64
-   # ports:
-   #   - 5001:5001
    environment:
      <<: *common-variables
      COMPONENT_NAME: eveai_chat_workers
@@ -193,19 +181,103 @@ services:
      - ../config:/app/config
      - ../scripts:/app/scripts
      - ../patched_packages:/app/patched_packages
-     - eveai_logs:/app/logs
+     - ./eveai_logs:/app/logs
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
-   # healthcheck:
-   #   test: [ "CMD", "curl", "-f", "http://localhost:5001/health" ]
-   #   interval: 10s
-   #   timeout: 5s
-   #   retries: 5
-   # entrypoint: [ "sh", "-c", "scripts/entrypoint.sh" ]
-   # command: [ "sh", "-c", "scripts/start_eveai_chat_workers.sh" ]
+   networks:
+     - eveai-network
+
+ eveai_api:
+   image: josakola/eveai_api:latest
+   build:
+     context: ..
+     dockerfile: ./docker/eveai_api/Dockerfile
+   platforms:
+     - linux/amd64
+     - linux/arm64
+   ports:
+     - 5003:5003
+   environment:
+     <<: *common-variables
+     COMPONENT_NAME: eveai_api
+     WORDPRESS_HOST: host.docker.internal
+     WORDPRESS_PORT: 10003
+     WORDPRESS_PROTOCOL: http
+   volumes:
+     - ../eveai_api:/app/eveai_api
+     - ../common:/app/common
+     - ../config:/app/config
+     - ../scripts:/app/scripts
+     - ../patched_packages:/app/patched_packages
+     - ./eveai_logs:/app/logs
+   depends_on:
+     db:
+       condition: service_healthy
+     redis:
+       condition: service_healthy
+     minio:
+       condition: service_healthy
+   healthcheck:
+     test: [ "CMD", "curl", "-f", "http://localhost:5003/healthz/ready" ]
+     interval: 30s
+     timeout: 1s
+     retries: 3
+     start_period: 30s
+   networks:
+     - eveai-network
+
+ eveai_beat:
+   image: josakola/eveai_beat:latest
+   build:
+     context: ..
+     dockerfile: ./docker/eveai_beat/Dockerfile
+   platforms:
+     - linux/amd64
+     - linux/arm64
+   environment:
+     <<: *common-variables
+     COMPONENT_NAME: eveai_beat
+   volumes:
+     - ../eveai_beat:/app/eveai_beat
+     - ../common:/app/common
+     - ../config:/app/config
+     - ../scripts:/app/scripts
+     - ../patched_packages:/app/patched_packages
+     - ./eveai_logs:/app/logs
+   depends_on:
+     redis:
+       condition: service_healthy
+   networks:
+     - eveai-network
+
+ eveai_entitlements:
+   image: josakola/eveai_entitlements:latest
+   build:
+     context: ..
+     dockerfile: ./docker/eveai_entitlements/Dockerfile
+   platforms:
+     - linux/amd64
+     - linux/arm64
+   environment:
+     <<: *common-variables
+     COMPONENT_NAME: eveai_entitlements
+   volumes:
+     - ../eveai_entitlements:/app/eveai_entitlements
+     - ../common:/app/common
+     - ../config:/app/config
+     - ../scripts:/app/scripts
+     - ../patched_packages:/app/patched_packages
+     - ./eveai_logs:/app/logs
+   depends_on:
+     db:
+       condition: service_healthy
+     redis:
+       condition: service_healthy
+     minio:
+       condition: service_healthy
    networks:
      - eveai-network
@@ -233,8 +305,8 @@ services:
  redis:
    image: redis:7.2.5
    restart: always
-   expose:
-     - 6379
+   ports:
+     - "6379:6379"
    volumes:
      - ./db/redis:/data
    healthcheck:
@@ -245,6 +317,22 @@ services:
    networks:
      - eveai-network

+ flower:
+   image: josakola/flower:latest
+   build:
+     context: ..
+     dockerfile: ./docker/flower/Dockerfile
+   environment:
+     <<: *common-variables
+   volumes:
+     - ../scripts:/app/scripts
+   ports:
+     - "5555:5555"
+   depends_on:
+     - redis
+   networks:
+     - eveai-network
+
  minio:
    image: minio/minio
    ports:
@@ -268,6 +356,13 @@ services:
    networks:
      - eveai-network

+networks:
+  eveai-network:
+    driver: bridge
+    # This enables the containers to access the host network
+    driver_opts:
+      com.docker.network.bridge.host_ipc: "true"
+
volumes:
  minio_data:
  eveai_logs:


@@ -21,15 +21,16 @@ x-common-variables: &common-variables
  MAIL_USERNAME: 'evie_admin@askeveai.com'
  MAIL_PASSWORD: 's5D%R#y^v!s&6Z^i0k&'
  MAIL_SERVER: mail.askeveai.com
- MAIL_PORT: 465
+ MAIL_PORT: '465'
  REDIS_USER: eveai
  REDIS_PASS: 'jHliZwGD36sONgbm0fc6SOpzLbknqq4RNF8K'
  REDIS_URL: 8bciqc.stackhero-network.com
  REDIS_PORT: '9961'
+ FLOWER_USER: 'Felucia'
+ FLOWER_PASSWORD: 'Jungles'
  OPENAI_API_KEY: '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'
  GROQ_API_KEY: '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'
  ANTHROPIC_API_KEY: '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'
- PORTKEY_API_KEY: 'XvmvBFIVbm76opUxA7MNP14QmdQj'
  JWT_SECRET_KEY: '0d99e810e686ea567ef305d8e9b06195c4db482952e19276590a726cde60a408'
  API_ENCRYPTION_KEY: 'Ly5XYWwEKiasfAwEqdEMdwR-k0vhrq6QPYd4whEROB0='
  GRAYLOG_HOST: de4zvu.stackhero-network.com
@@ -38,6 +39,7 @@ x-common-variables: &common-variables
  MINIO_ACCESS_KEY: 04JKmQln8PQpyTmMiCPc
  MINIO_SECRET_KEY: 2PEZAD1nlpAmOyDV0TUTuJTQw1qVuYLF3A7GMs0D
  NGINX_SERVER_NAME: 'evie.askeveai.com mxz536.stackhero-network.com'
+ LANGCHAIN_API_KEY: "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"

networks:
  eveai-network:
@@ -53,10 +55,6 @@ services:
    environment:
      <<: *common-variables
    volumes:
-     # - ../nginx:/etc/nginx
-     # - ../nginx/sites-enabled:/etc/nginx/sites-enabled
-     # - ../nginx/static:/etc/nginx/static
-     # - ../nginx/public:/etc/nginx/public
      - eveai_logs:/var/log/nginx
    labels:
      - "traefik.enable=true"
@@ -81,7 +79,7 @@ services:
    volumes:
      - eveai_logs:/app/logs
    healthcheck:
-     test: ["CMD", "curl", "-f", "http://localhost:5001/health"]
+     test: ["CMD", "curl", "-f", "http://localhost:5001/healthz/ready"]
      interval: 10s
      timeout: 5s
      retries: 5
@@ -91,18 +89,11 @@ services:
  eveai_workers:
    platform: linux/amd64
    image: josakola/eveai_workers:latest
-   # ports:
-   #   - 5001:5001
    environment:
      <<: *common-variables
      COMPONENT_NAME: eveai_workers
    volumes:
      - eveai_logs:/app/logs
-   # healthcheck:
-   #   test: [ "CMD", "curl", "-f", "http://localhost:5001/health" ]
-   #   interval: 10s
-   #   timeout: 5s
-   #   retries: 5
    networks:
      - eveai-network
@@ -117,7 +108,7 @@ services:
    volumes:
      - eveai_logs:/app/logs
    healthcheck:
-     test: [ "CMD", "curl", "-f", "http://localhost:5002/health" ] # Adjust based on your health endpoint
+     test: [ "CMD", "curl", "-f", "http://localhost:5002/healthz/ready" ] # Adjust based on your health endpoint
      interval: 10s
      timeout: 5s
      retries: 5
@@ -127,28 +118,64 @@ services:
  eveai_chat_workers:
    platform: linux/amd64
    image: josakola/eveai_chat_workers:latest
-   # ports:
-   #   - 5001:5001
    environment:
      <<: *common-variables
      COMPONENT_NAME: eveai_chat_workers
    volumes:
      - eveai_logs:/app/logs
-   # healthcheck:
-   #   test: [ "CMD", "curl", "-f", "http://localhost:5001/health" ]
-   #   interval: 10s
-   #   timeout: 5s
-   #   retries: 5
+   networks:
+     - eveai-network
+
+ eveai_api:
+   platform: linux/amd64
+   image: josakola/eveai_api:latest
+   ports:
+     - 5003:5003
+   environment:
+     <<: *common-variables
+     COMPONENT_NAME: eveai_api
+   volumes:
+     - eveai_logs:/app/logs
+   healthcheck:
+     test: [ "CMD", "curl", "-f", "http://localhost:5003/healthz/ready" ]
+     interval: 10s
+     timeout: 5s
+     retries: 5
+   networks:
+     - eveai-network
+
+ eveai_beat:
+   platform: linux/amd64
+   image: josakola/eveai_beat:latest
+   environment:
+     <<: *common-variables
+     COMPONENT_NAME: eveai_beat
+   volumes:
+     - eveai_logs:/app/logs
+   networks:
+     - eveai-network
+
+ eveai_entitlements:
+   platform: linux/amd64
+   image: josakola/eveai_entitlements:latest
+   environment:
+     <<: *common-variables
+     COMPONENT_NAME: eveai_entitlements
+   volumes:
+     - eveai_logs:/app/logs
+   networks:
+     - eveai-network
+
+ flower:
+   image: josakola/flower:latest
+   environment:
+     <<: *common-variables
+   ports:
+     - "5555:5555"
    networks:
      - eveai-network

volumes:
  eveai_logs:
-  # minio_data:
-  # db-data:
-  # redis-data:
-  # tenant-files:
-#secrets:
-#  db-password:
-#    file: ./db/password.txt


@@ -0,0 +1,70 @@
ARG PYTHON_VERSION=3.12.7
FROM python:${PYTHON_VERSION}-slim as base
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
# Create directory for patched packages and set permissions
RUN mkdir -p /app/patched_packages && \
chmod 777 /app/patched_packages
# Ensure patches are applied to the application.
ENV PYTHONPATH=/app/patched_packages:$PYTHONPATH
WORKDIR /app
# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/go/dockerfile-user-best-practices/
ARG UID=10001
RUN adduser \
--disabled-password \
--gecos "" \
--home "/nonexistent" \
--shell "/bin/bash" \
--no-create-home \
--uid "${UID}" \
appuser
# Install necessary packages and build tools
RUN apt-get update && apt-get install -y \
build-essential \
gcc \
postgresql-client \
curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Create logs directory and set permissions
RUN mkdir -p /app/logs && chown -R appuser:appuser /app/logs
# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy them into
# into this layer.
COPY requirements.txt /app/
RUN python -m pip install -r /app/requirements.txt
# Copy the source code into the container.
COPY eveai_api /app/eveai_api
COPY common /app/common
COPY config /app/config
COPY scripts /app/scripts
COPY patched_packages /app/patched_packages
# Set permissions for entrypoint script
RUN chmod 777 /app/scripts/entrypoint.sh
# Set ownership of the application directory to the non-privileged user
RUN chown -R appuser:appuser /app
# Expose the port that the application listens on.
EXPOSE 5003
# Set entrypoint and command
ENTRYPOINT ["/app/scripts/entrypoint.sh"]
CMD ["/app/scripts/start_eveai_api.sh"]


@@ -1,4 +1,4 @@
-ARG PYTHON_VERSION=3.12.3
+ARG PYTHON_VERSION=3.12.7

FROM python:${PYTHON_VERSION}-slim as base

# Prevents Python from writing pyc files.
@@ -34,6 +34,7 @@ RUN apt-get update && apt-get install -y \
    build-essential \
    gcc \
    postgresql-client \
+   curl \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*


@@ -0,0 +1,65 @@
ARG PYTHON_VERSION=3.12.7
FROM python:${PYTHON_VERSION}-slim as base
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
# Create directory for patched packages and set permissions
RUN mkdir -p /app/patched_packages && \
chmod 777 /app/patched_packages
# Ensure patches are applied to the application.
ENV PYTHONPATH=/app/patched_packages:$PYTHONPATH
WORKDIR /app
# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/go/dockerfile-user-best-practices/
ARG UID=10001
RUN adduser \
--disabled-password \
--gecos "" \
--home "/nonexistent" \
--shell "/bin/bash" \
--no-create-home \
--uid "${UID}" \
appuser
# Install necessary packages and build tools
#RUN apt-get update && apt-get install -y \
# build-essential \
# gcc \
# && apt-get clean \
# && rm -rf /var/lib/apt/lists/*
# Create logs directory and set permissions
RUN mkdir -p /app/logs && chown -R appuser:appuser /app/logs
# Install Python dependencies.
# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy them into
# into this layer.
COPY requirements.txt /app/
RUN python -m pip install -r /app/requirements.txt
# Copy the source code into the container.
COPY eveai_beat /app/eveai_beat
COPY common /app/common
COPY config /app/config
COPY scripts /app/scripts
COPY patched_packages /app/patched_packages
COPY --chown=root:root scripts/entrypoint_no_db.sh /app/scripts/
# Set ownership of the application directory to the non-privileged user
RUN chown -R appuser:appuser /app
# Set entrypoint and command
ENTRYPOINT ["/app/scripts/entrypoint_no_db.sh"]
CMD ["/app/scripts/start_eveai_beat.sh"]


@@ -1,4 +1,4 @@
-ARG PYTHON_VERSION=3.12.3
+ARG PYTHON_VERSION=3.12.7

FROM python:${PYTHON_VERSION}-slim as base

# Prevents Python from writing pyc files.
@@ -34,6 +34,7 @@ RUN apt-get update && apt-get install -y \
    build-essential \
    gcc \
    postgresql-client \
+   curl \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
@@ -45,7 +46,7 @@ RUN mkdir -p /app/logs && chown -R appuser:appuser /app/logs
# Leverage a bind mount to requirements.txt to avoid having to copy them
# into this layer.
-COPY ../../requirements.txt /app/
+COPY requirements.txt /app/
RUN python -m pip install -r requirements.txt

# Copy the source code into the container.


@@ -1,4 +1,4 @@
-ARG PYTHON_VERSION=3.12.3
+ARG PYTHON_VERSION=3.12.7

FROM python:${PYTHON_VERSION}-slim as base

# Prevents Python from writing pyc files.


@@ -0,0 +1,69 @@
ARG PYTHON_VERSION=3.12.7
FROM python:${PYTHON_VERSION}-slim as base
# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1
# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1
# Create directory for patched packages and set permissions
RUN mkdir -p /app/patched_packages && \
chmod 777 /app/patched_packages
# Ensure patches are applied to the application.
ENV PYTHONPATH=/app/patched_packages:$PYTHONPATH
WORKDIR /app
# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/go/dockerfile-user-best-practices/
ARG UID=10001
RUN adduser \
--disabled-password \
--gecos "" \
--home "/nonexistent" \
--shell "/bin/bash" \
--no-create-home \
--uid "${UID}" \
appuser
# Install necessary packages and build tools
RUN apt-get update && apt-get install -y \
build-essential \
gcc \
postgresql-client \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Create logs directory and set permissions
RUN mkdir -p /app/logs && chown -R appuser:appuser /app/logs
# Install Python dependencies.
# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy them into
# into this layer.
COPY requirements.txt /app/
RUN python -m pip install -r /app/requirements.txt
# Copy the source code into the container.
COPY eveai_entitlements /app/eveai_entitlements
COPY common /app/common
COPY config /app/config
COPY scripts /app/scripts
COPY patched_packages /app/patched_packages
COPY --chown=root:root scripts/entrypoint.sh /app/scripts/
# Set permissions for entrypoint script
RUN chmod 777 /app/scripts/entrypoint.sh
# Set ownership of the application directory to the non-privileged user
RUN chown -R appuser:appuser /app
# Set entrypoint and command
ENTRYPOINT ["/app/scripts/entrypoint.sh"]
CMD ["/app/scripts/start_eveai_entitlements.sh"]


@@ -1,4 +1,4 @@
-ARG PYTHON_VERSION=3.12.3
+ARG PYTHON_VERSION=3.12.7

FROM python:${PYTHON_VERSION}-slim as base

# Prevents Python from writing pyc files.

docker/flower/Dockerfile Normal file

@@ -0,0 +1,34 @@
ARG PYTHON_VERSION=3.12.7
FROM python:${PYTHON_VERSION}-slim as base
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /app
ARG UID=10001
RUN adduser \
--disabled-password \
--gecos "" \
--home "/nonexistent" \
--shell "/bin/bash" \
--no-create-home \
--uid "${UID}" \
appuser
RUN apt-get update && apt-get install -y \
build-essential \
gcc \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt
COPY . /app
COPY scripts/start_flower.sh /app/start_flower.sh
RUN chmod a+x /app/start_flower.sh
USER appuser
CMD ["/app/start_flower.sh"]


@@ -10,6 +10,10 @@ COPY ../../nginx/mime.types /etc/nginx/mime.types
# Copy static & public files # Copy static & public files
RUN mkdir -p /etc/nginx/static /etc/nginx/public RUN mkdir -p /etc/nginx/static /etc/nginx/public
COPY ../../nginx/static /etc/nginx/static COPY ../../nginx/static /etc/nginx/static
COPY ../../integrations/Wordpress/eveai-chat/assets/css/eveai-chat-style.css /etc/nginx/static/css/
COPY ../../integrations/Wordpress/eveai-chat/assets/js/eveai-chat-widget.js /etc/nginx/static/js/
COPY ../../integrations/Wordpress/eveai-chat/assets/js/eveai-token-manager.js /etc/nginx/static/js/
COPY ../../integrations/Wordpress/eveai-chat/assets/js/eveai-sdk.js /etc/nginx/static/js
COPY ../../nginx/public /etc/nginx/public COPY ../../nginx/public /etc/nginx/public
# Copy site-specific configurations # Copy site-specific configurations

docker/release_and_tag_eveai.sh Executable file

@@ -0,0 +1,62 @@
#!/bin/bash
# Initialize variables
RELEASE_VERSION=""
RELEASE_MESSAGE=""
DOCKER_ACCOUNT="josakola" # Your Docker account name
# Parse input arguments
while getopts r:m: flag
do
case "${flag}" in
r) RELEASE_VERSION=${OPTARG};;
m) RELEASE_MESSAGE=${OPTARG};;
*)
echo "Usage: $0 -r <release_version> -m <release_message>"
exit 1 ;;
esac
done
# Ensure both version and message are provided
if [ -z "$RELEASE_VERSION" ]; then
echo "Error: Release version not provided. Use -r <release_version>"
exit 1
fi
if [ -z "$RELEASE_MESSAGE" ]; then
echo "Error: Release message not provided. Use -m <release_message>"
exit 1
fi
# Path to your docker-compose file
DOCKER_COMPOSE_FILE="compose_dev.yaml"
# Get all the images defined in docker-compose
IMAGES=$(docker compose -f $DOCKER_COMPOSE_FILE config | grep 'image:' | awk '{ print $2 }')
# Start tagging only relevant images
for DOCKER_IMAGE in $IMAGES; do
# Check if the image belongs to your Docker account and ends with :latest
if [[ $DOCKER_IMAGE == $DOCKER_ACCOUNT* && $DOCKER_IMAGE == *:latest ]]; then
# Remove the ":latest" tag to use the base image name
BASE_IMAGE=${DOCKER_IMAGE%:latest}
echo "Tagging Docker image: $BASE_IMAGE with version: $RELEASE_VERSION"
# Tag the 'latest' image with the new release version
docker tag $DOCKER_IMAGE $BASE_IMAGE:$RELEASE_VERSION
# Push the newly tagged image to Docker Hub
docker push $BASE_IMAGE:$RELEASE_VERSION
else
echo "Skipping image: $DOCKER_IMAGE (not part of $DOCKER_ACCOUNT or not tagged as latest)"
fi
done
# Step 3: Tag the Git repository with the release version
echo "Tagging Git repository with version: $RELEASE_VERSION"
git tag -a v$RELEASE_VERSION -m "Release $RELEASE_VERSION: $RELEASE_MESSAGE"
git push origin v$RELEASE_VERSION
echo -e "\033[35mRelease process completed for version: $RELEASE_VERSION \033[0m"
echo -e "\033[35mFinished at $(date +"%d/%m/%Y %H:%M:%S")\033[0m"


@@ -1,4 +1,192 @@
-# from flask import Blueprint, request
-#
-# public_api_bp = Blueprint("public", __name__, url_prefix="/api/v1")
-# tenant_api_bp = Blueprint("tenant", __name__, url_prefix="/api/v1/tenant")
+import traceback
+
+from flask import Flask, jsonify, request
+from flask_jwt_extended import get_jwt_identity, verify_jwt_in_request
from sqlalchemy.exc import SQLAlchemyError
from werkzeug.exceptions import HTTPException
from common.extensions import db, api_rest, jwt, minio_client, simple_encryption, cors
import os
import logging.config
from common.models.user import TenantDomain
from common.utils.cors_utils import get_allowed_origins
from common.utils.database import Database
from config.logging_config import LOGGING
from .api.document_api import document_ns
from .api.auth import auth_ns
from config.config import get_config
from common.utils.celery_utils import make_celery, init_celery
from common.utils.eveai_exceptions import EveAIException
def create_app(config_file=None):
app = Flask(__name__)
environment = os.getenv('FLASK_ENV', 'development')
match environment:
case 'development':
app.config.from_object(get_config('dev'))
case 'production':
app.config.from_object(get_config('prod'))
case _:
app.config.from_object(get_config('dev'))
app.config['SESSION_KEY_PREFIX'] = 'eveai_api_'
app.celery = make_celery(app.name, app.config)
init_celery(app.celery, app)
logging.config.dictConfig(LOGGING)
logger = logging.getLogger(__name__)
logger.info("eveai_api starting up")
# Register Necessary Extensions
register_extensions(app)
# register Namespaces
register_namespaces(api_rest)
# Register Blueprints
register_blueprints(app)
# Register Error Handlers
register_error_handlers(app)
@app.before_request
def check_cors():
if request.method == 'OPTIONS':
app.logger.debug("Handling OPTIONS request")
return '', 200 # Allow OPTIONS to pass through
origin = request.headers.get('Origin')
if not origin:
return # Not a CORS request
# Get tenant ID from request
if verify_jwt_in_request():
tenant_id = get_jwt_identity()
if not tenant_id:
return
else:
return
# Check if origin is allowed for this tenant
allowed_origins = get_allowed_origins(tenant_id)
if origin not in allowed_origins:
app.logger.warning(f'Origin {origin} not allowed for tenant {tenant_id}')
return {'error': 'Origin not allowed'}, 403
@app.before_request
def set_tenant_schema():
# Check if this is a health check request
if request.path.startswith('/_healthz') or request.path.startswith('/healthz'):
pass
else:
try:
verify_jwt_in_request(optional=True)
tenant_id = get_jwt_identity()
if tenant_id:
Database(tenant_id).switch_schema()
except Exception as e:
app.logger.error(f'Error in before_request: {str(e)}')
# Don't raise the exception here, let the request continue
# The appropriate error handling will be done in the specific endpoints
@app.route('/api/v1')
def swagger():
return api_rest.render_doc()
return app
def register_extensions(app):
db.init_app(app)
api_rest.init_app(app, title='EveAI API', version='1.0', description='EveAI API')
jwt.init_app(app)
minio_client.init_app(app)
simple_encryption.init_app(app)
cors.init_app(app, resources={
r"/api/v1/*": {
"origins": "*",
"methods": ["GET", "POST", "PUT", "OPTIONS"],
"allow_headers": ["Content-Type", "Authorization", "X-Requested-With"],
"expose_headers": ["Content-Length", "Content-Range"],
"supports_credentials": True,
"max_age": 1728000, # 20 days
"allow_credentials": True
}
})
def register_namespaces(app):
api_rest.add_namespace(document_ns, path='/api/v1/documents')
api_rest.add_namespace(auth_ns, path='/api/v1/auth')
def register_blueprints(app):
from .views.healthz_views import healthz_bp
app.register_blueprint(healthz_bp)
def register_error_handlers(app):
@app.errorhandler(Exception)
def handle_exception(e):
"""Handle all unhandled exceptions with detailed error responses"""
# Get the current exception info
exc_info = traceback.format_exc()
# Log the full exception details
app.logger.error(f"Unhandled exception: {str(e)}\n{exc_info}")
# Start with a default error response
response = {
"error": "Internal Server Error",
"message": str(e),
"type": e.__class__.__name__
}
status_code = 500
# Handle specific types of exceptions
if isinstance(e, HTTPException):
status_code = e.code
response["error"] = e.name
elif isinstance(e, SQLAlchemyError):
response["error"] = "Database Error"
response["details"] = str(e.__cause__ or e)
elif isinstance(e, ValueError):
status_code = 400
response["error"] = "Invalid Input"
# In development, include additional debug information
if app.debug:
response["debug"] = {
"exception": exc_info,
"class": e.__class__.__name__,
"module": e.__class__.__module__
}
return jsonify(response), status_code
@app.errorhandler(404)
def not_found_error(e):
return jsonify({
"error": "Not Found",
"message": str(e),
"type": "NotFoundError"
}), 404
@app.errorhandler(400)
def bad_request_error(e):
return jsonify({
"error": "Bad Request",
"message": str(e),
"type": "BadRequestError"
}), 400
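A quick way to sanity-check these handlers is a test-client probe. A minimal sketch, assuming the package imports as eveai_api and create_app can start without live backing services (both assumptions, not confirmed by this diff):

# Hypothetical smoke test for the JSON error handlers registered above.
from eveai_api import create_app

def test_unknown_route_returns_json_404():
    app = create_app()
    client = app.test_client()
    resp = client.get('/definitely/not/a/route')
    # The 404 handler above serializes errors as JSON
    assert resp.status_code == 404
    assert resp.get_json()['error'] == 'Not Found'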

eveai_api/api/auth.py (new file, +148 lines)

@@ -0,0 +1,148 @@
from datetime import timedelta, datetime as dt, timezone as tz
from flask_restx import Namespace, Resource, fields
from flask_jwt_extended import create_access_token, verify_jwt_in_request, get_jwt
from common.models.user import Tenant, TenantProject
from common.extensions import simple_encryption
from flask import current_app, request
auth_ns = Namespace('auth', description='Authentication related operations')
token_model = auth_ns.model('Token', {
'tenant_id': fields.Integer(required=True, description='Tenant ID'),
'api_key': fields.String(required=True, description='API Key')
})
token_response = auth_ns.model('TokenResponse', {
'access_token': fields.String(description='JWT access token'),
'expires_in': fields.Integer(description='Token expiration time in seconds')
})
token_verification = auth_ns.model('TokenVerification', {
'is_valid': fields.Boolean(description='Token validity status'),
'expires_in': fields.Integer(description='Seconds until token expiration'),
'tenant_id': fields.Integer(description='Tenant ID from token')
})
@auth_ns.route('/token')
class Token(Resource):
@auth_ns.expect(token_model)
@auth_ns.response(200, 'Success', token_response)
@auth_ns.response(400, 'Validation Error')
@auth_ns.response(401, 'Unauthorized')
@auth_ns.response(403, 'Service Not Authorized')
@auth_ns.response(404, 'Tenant Not Found')
def post(self):
"""
Get JWT token
"""
current_app.logger.debug(f'Token Requested {auth_ns.payload}')
try:
tenant_id = int(auth_ns.payload['tenant_id'])
api_key = auth_ns.payload['api_key']
except KeyError as e:
current_app.logger.error(f"Missing required field: {e}")
return {'message': f"Missing required field: {e}"}, 400
tenant = Tenant.query.get(tenant_id)
if not tenant:
current_app.logger.error(f"Tenant not found: {tenant_id}")
return {'message': f"Authentication invalid for tenant {tenant_id}"}, 404
projects = TenantProject.query.filter_by(
tenant_id=tenant_id,
active=True
).all()
# Find project with matching API key
matching_project = None
for project in projects:
try:
decrypted_key = simple_encryption.decrypt_api_key(project.encrypted_api_key)
if decrypted_key == api_key:
matching_project = project
break
except Exception as e:
current_app.logger.error(f"Error decrypting API key for project {project.id}: {e}")
continue
if not matching_project:
current_app.logger.error(f"Project for given API key not found for Tenant: {tenant_id}")
return {'message': "Invalid API key"}, 401
if "DOCAPI" not in matching_project.services:
current_app.logger.error(f"Service DOCAPI not authorized for Project {matching_project.name} "
f"for Tenant: {tenant_id}")
return {'message': f"Service DOCAPI not authorized for Project {matching_project.name}"}, 403
# Get the JWT_ACCESS_TOKEN_EXPIRES setting from the app config
expires_delta = current_app.config.get('JWT_ACCESS_TOKEN_EXPIRES', timedelta(minutes=15))
try:
access_token = create_access_token(identity=tenant_id, expires_delta=expires_delta)
return {
'access_token': access_token,
'expires_in': int(expires_delta.total_seconds())
}, 200
except Exception as e:
current_app.logger.error(f"Error creating access token: {e}")
return {'message': "Internal server error"}, 500
@auth_ns.route('/verify')
class TokenVerification(Resource):
@auth_ns.doc('verify_token')
@auth_ns.response(200, 'Token verification result', token_verification)
@auth_ns.response(401, 'Invalid token')
def get(self):
"""Verify a token's validity and get expiration information"""
try:
verify_jwt_in_request()
jwt_data = get_jwt()
# Get expiration timestamp from token
exp_timestamp = jwt_data['exp']
current_timestamp = dt.now(tz.utc).timestamp()
return {
'is_valid': True,
'expires_in': int(exp_timestamp - current_timestamp),
'tenant_id': jwt_data['sub'] # tenant_id is stored in 'sub' claim
}, 200
except Exception as e:
current_app.logger.error(f"Token verification failed: {str(e)}")
return {
'is_valid': False,
'message': 'Invalid token'
}, 401
@auth_ns.route('/refresh')
class TokenRefresh(Resource):
@auth_ns.doc('refresh_token')
@auth_ns.response(200, 'New token', token_response)
@auth_ns.response(401, 'Invalid token')
def post(self):
"""Get a new token before the current one expires"""
try:
verify_jwt_in_request()
jwt_data = get_jwt()
tenant_id = jwt_data['sub']
# Optional: Add additional verification here if needed
# Create new token
expires_delta = current_app.config.get('JWT_ACCESS_TOKEN_EXPIRES', timedelta(minutes=15))
new_token = create_access_token(
identity=tenant_id,
expires_delta=expires_delta
)
return {
'access_token': new_token,
'expires_in': int(expires_delta.total_seconds())
}, 200
except Exception as e:
current_app.logger.error(f"Token refresh failed: {str(e)}")
return {'message': 'Token refresh failed'}, 401
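For reference, a hedged client-side sketch of the token flow these three endpoints implement; the host, tenant ID, and API key are placeholders:

import requests

BASE = "https://eveai.example.com/api/v1"  # placeholder host

# Exchange a tenant ID + project API key for a JWT
resp = requests.post(f"{BASE}/auth/token",
                     json={"tenant_id": 1, "api_key": "<project-api-key>"})
resp.raise_for_status()
token = resp.json()["access_token"]
auth_header = {"Authorization": f"Bearer {token}"}

# Check remaining lifetime, then refresh shortly before expiry
info = requests.get(f"{BASE}/auth/verify", headers=auth_header).json()
if info["is_valid"] and info["expires_in"] < 60:
    token = requests.post(f"{BASE}/auth/refresh",
                          headers=auth_header).json()["access_token"]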


@@ -0,0 +1,340 @@
import json
from datetime import datetime
import pytz
from flask import current_app, request
from flask_restx import Namespace, Resource, fields, reqparse
from flask_jwt_extended import jwt_required, get_jwt_identity
from werkzeug.datastructures import FileStorage
from werkzeug.utils import secure_filename
from common.utils.document_utils import (
create_document_stack, process_url, start_embedding_task,
validate_file_type, EveAIInvalidLanguageException, EveAIDoubleURLException, EveAIUnsupportedFileType,
get_documents_list, edit_document, refresh_document, edit_document_version,
refresh_document_with_info
)
from common.utils.eveai_exceptions import EveAIException
def validate_date(date_str):
try:
return datetime.fromisoformat(date_str).replace(tzinfo=pytz.UTC)
except ValueError:
raise ValueError("Invalid date format. Use ISO format (YYYY-MM-DDTHH:MM:SS).")
def validate_json(json_str):
try:
return json.loads(json_str)
except json.JSONDecodeError:
raise ValueError("Invalid JSON format for user_metadata.")
document_ns = Namespace('documents', description='Document related operations')
# Define models for request parsing and response serialization
upload_parser = reqparse.RequestParser()
upload_parser.add_argument('catalog_id', location='form', type=int, required=True, help='The catalog to add the file to')
upload_parser.add_argument('file', location='files', type=FileStorage, required=True, help='The file to upload')
upload_parser.add_argument('name', location='form', type=str, required=False, help='Name of the document')
upload_parser.add_argument('language', location='form', type=str, required=True, help='Language of the document')
upload_parser.add_argument('user_context', location='form', type=str, required=False,
help='User context for the document')
upload_parser.add_argument('valid_from', location='form', type=validate_date, required=False,
help='Valid from date for the document (ISO format)')
upload_parser.add_argument('user_metadata', location='form', type=validate_json, required=False,
help='User metadata for the document (JSON format)')
upload_parser.add_argument('catalog_properties', location='form', type=validate_json, required=False,
                           help='The catalog configuration to be passed along (JSON format). Validity against catalog '
                                'requirements is not checked and is the responsibility of the calling client.')
add_document_response = document_ns.model('AddDocumentResponse', {
'message': fields.String(description='Status message'),
'document_id': fields.Integer(description='ID of the created document'),
'document_version_id': fields.Integer(description='ID of the created document version'),
'task_id': fields.String(description='ID of the embedding task')
})
@document_ns.route('/add_document')
class AddDocument(Resource):
@jwt_required()
@document_ns.expect(upload_parser)
@document_ns.response(201, 'Document added successfully', add_document_response)
@document_ns.response(400, 'Validation Error')
@document_ns.response(500, 'Internal Server Error')
def post(self):
"""
Add a new document
"""
tenant_id = get_jwt_identity()
current_app.logger.info(f'Adding document for tenant {tenant_id}')
try:
args = upload_parser.parse_args()
file = args['file']
filename = secure_filename(file.filename)
# Guard against filenames without an extension before validating the file type
extension = filename.rsplit('.', 1)[-1].lower() if '.' in filename else ''
validate_file_type(extension)
api_input = {
'catalog_id': args.get('catalog_id'),
'name': args.get('name') or filename,
'language': args.get('language'),
'user_context': args.get('user_context'),
'valid_from': args.get('valid_from'),
'user_metadata': args.get('user_metadata'),
'catalog_properties': args.get('catalog_properties'),
}
new_doc, new_doc_vers = create_document_stack(api_input, file, filename, extension, tenant_id)
task_id = start_embedding_task(tenant_id, new_doc_vers.id)
return {
'message': f'Processing on document {new_doc.name}, version {new_doc_vers.id} started. Task ID: {task_id}.',
'document_id': new_doc.id,
'document_version_id': new_doc_vers.id,
'task_id': task_id
}, 201
except (EveAIInvalidLanguageException, EveAIUnsupportedFileType) as e:
current_app.logger.error(f'Error adding document: {str(e)}')
document_ns.abort(400, str(e))
except Exception as e:
current_app.logger.error(f'Error adding document: {str(e)}')
document_ns.abort(500, 'Error adding document')
# Models for AddURL
add_url_model = document_ns.model('AddURL', {
'catalog_id': fields.Integer(required=True, description='ID of the catalog the URL needs to be added to'),
'url': fields.String(required=True, description='URL of the document to add'),
'name': fields.String(required=False, description='Name of the document'),
'language': fields.String(required=True, description='Language of the document'),
'user_context': fields.String(required=False, description='User context for the document'),
'valid_from': fields.String(required=False, description='Valid from date for the document'),
'user_metadata': fields.String(required=False, description='User metadata for the document'),
'system_metadata': fields.String(required=False, description='System metadata for the document'),
'catalog_properties': fields.String(required=False, description='The catalog configuration to be passed along (JSON '
                                                                'format). Validity against catalog requirements is '
                                                                'not checked and is the responsibility of the '
                                                                'calling client.'),
})
add_url_response = document_ns.model('AddURLResponse', {
'message': fields.String(description='Status message'),
'document_id': fields.Integer(description='ID of the created document'),
'document_version_id': fields.Integer(description='ID of the created document version'),
'task_id': fields.String(description='ID of the embedding task')
})
@document_ns.route('/add_url')
class AddURL(Resource):
@jwt_required()
@document_ns.expect(add_url_model)
@document_ns.response(201, 'Document added successfully', add_url_response)
@document_ns.response(400, 'Validation Error')
@document_ns.response(500, 'Internal Server Error')
def post(self):
"""
Add a new document from URL
"""
tenant_id = get_jwt_identity()
current_app.logger.info(f'Adding document from URL for tenant {tenant_id}')
try:
args = document_ns.payload
file_content, filename, extension = process_url(args['url'], tenant_id)
api_input = {
'catalog_id': args['catalog_id'],
'url': args['url'],
'name': args.get('name') or filename,
'language': args['language'],
'user_context': args.get('user_context'),
'valid_from': args.get('valid_from'),
'user_metadata': args.get('user_metadata'),
'catalog_properties': args.get('catalog_properties'),
}
new_doc, new_doc_vers = create_document_stack(api_input, file_content, filename, extension, tenant_id)
task_id = start_embedding_task(tenant_id, new_doc_vers.id)
return {
'message': f'Processing on document {new_doc.name}, version {new_doc_vers.id} started. Task ID: {task_id}.',
'document_id': new_doc.id,
'document_version_id': new_doc_vers.id,
'task_id': task_id
}, 201
except EveAIDoubleURLException:
document_ns.abort(400, f'A document with URL {args["url"]} already exists.')
except (EveAIInvalidLanguageException, EveAIUnsupportedFileType) as e:
document_ns.abort(400, str(e))
except Exception as e:
current_app.logger.error(f'Error adding document from URL: {str(e)}')
document_ns.abort(500, 'Error adding document from URL')
document_list_model = document_ns.model('DocumentList', {
'id': fields.Integer(description='Document ID'),
'name': fields.String(description='Document name'),
'valid_from': fields.DateTime(description='Valid from date'),
'valid_to': fields.DateTime(description='Valid to date'),
})
@document_ns.route('/list')
class DocumentList(Resource):
@jwt_required()
@document_ns.doc('list_documents')
@document_ns.marshal_list_with(document_list_model, envelope='documents')
def get(self):
"""List all documents"""
page = request.args.get('page', 1, type=int)
per_page = request.args.get('per_page', 10, type=int)
pagination = get_documents_list(page, per_page)
return pagination.items, 200
edit_document_model = document_ns.model('EditDocument', {
'name': fields.String(required=True, description='New name for the document'),
'valid_from': fields.DateTime(required=False, description='New valid from date'),
'valid_to': fields.DateTime(required=False, description='New valid to date'),
})
@document_ns.route('/<int:document_id>')
class DocumentResource(Resource):
@jwt_required()
@document_ns.doc('edit_document')
@document_ns.expect(edit_document_model)
@document_ns.response(200, 'Document updated successfully')
@document_ns.response(400, 'Validation Error')
@document_ns.response(404, 'Document not found')
@document_ns.response(500, 'Internal Server Error')
def put(self, document_id):
"""Edit a document"""
try:
current_app.logger.debug(f'Editing document {document_id}')
data = request.json
tenant_id = get_jwt_identity()
updated_doc, error = edit_document(tenant_id, document_id, data.get('name', None),
data.get('valid_from', None), data.get('valid_to', None))
if updated_doc:
return {'message': f'Document {updated_doc.id} updated successfully'}, 200
else:
return {'message': f'Error updating document: {error}'}, 400
except EveAIException as e:
return e.to_dict(), e.status_code
@jwt_required()
@document_ns.doc('refresh_document')
@document_ns.response(200, 'Document refreshed successfully')
def post(self, document_id):
"""Refresh a document"""
tenant_id = get_jwt_identity()
new_version, result = refresh_document(document_id, tenant_id)
if new_version:
return {'message': f'Document refreshed. New version: {new_version.id}. Task ID: {result}'}, 200
else:
return {'message': f'Error refreshing document: {result}'}, 400
edit_document_version_model = document_ns.model('EditDocumentVersion', {
'user_context': fields.String(required=True, description='New user context for the document version'),
'catalog_properties': fields.String(required=True, description='New catalog properties for the document version'),
})
@document_ns.route('/version/<int:version_id>')
class DocumentVersionResource(Resource):
@jwt_required()
@document_ns.doc('edit_document_version')
@document_ns.expect(edit_document_version_model)
@document_ns.response(200, 'Document version updated successfully')
def put(self, version_id):
"""Edit a document version"""
data = request.json
tenant_id = get_jwt_identity()
updated_version, error = edit_document_version(tenant_id, version_id, data['user_context'], data.get('catalog_properties'))
if updated_version:
return {'message': f'Document Version {updated_version.id} updated successfully'}, 200
else:
return {'message': f'Error updating document version: {error}'}, 400
# Define the model for the request body of refresh_with_info
refresh_document_model = document_ns.model('RefreshDocument', {
'name': fields.String(required=False, description='New name for the document'),
'language': fields.String(required=False, description='Language of the document'),
'user_context': fields.String(required=False, description='User context for the document'),
'user_metadata': fields.Raw(required=False, description='User metadata for the document'),
'catalog_properties': fields.Raw(required=False, description='Catalog properties for the document'),
})
@document_ns.route('/<int:document_id>/refresh')
class RefreshDocument(Resource):
@jwt_required()
@document_ns.response(200, 'Document refreshed successfully')
@document_ns.response(404, 'Document not found')
def post(self, document_id):
"""
Refresh a document without additional information
"""
tenant_id = get_jwt_identity()
current_app.logger.info(f'Refreshing document {document_id} for tenant {tenant_id}')
try:
new_version, result = refresh_document(document_id, tenant_id)
if new_version:
return {
'message': f'Document refreshed successfully. New version: {new_version.id}. Task ID: {result}',
'document_id': document_id,
'document_version_id': new_version.id,
'task_id': result
}, 200
else:
return {'message': f'Error refreshing document: {result}'}, 400
except Exception as e:
current_app.logger.error(f'Error refreshing document: {str(e)}')
return {'message': 'Internal server error'}, 500
@document_ns.route('/<int:document_id>/refresh_with_info')
class RefreshDocumentWithInfo(Resource):
@jwt_required()
@document_ns.expect(refresh_document_model)
@document_ns.response(200, 'Document refreshed successfully')
@document_ns.response(400, 'Validation Error')
@document_ns.response(404, 'Document not found')
def post(self, document_id):
"""
Refresh a document with new information
"""
tenant_id = get_jwt_identity()
current_app.logger.info(f'Refreshing document {document_id} with info for tenant {tenant_id}')
try:
api_input = request.json
new_version, result = refresh_document_with_info(document_id, tenant_id, api_input)
if new_version:
return {
'message': f'Document refreshed successfully with new info. New version: {new_version.id}. Task ID: {result}',
'document_id': document_id,
'document_version_id': new_version.id,
'task_id': result
}, 200
else:
return {'message': f'Error refreshing document with info: {result}'}, 400
except Exception as e:
current_app.logger.error(f'Error refreshing document with info: {str(e)}')
return {'message': 'Internal server error'}, 500
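A sketch of calling the upload endpoint from a client, under the same placeholder host/token assumptions as the auth example above; the field names follow the upload_parser definition:

import json
import requests

with open("manual.pdf", "rb") as fh:  # placeholder document
    resp = requests.post(
        f"{BASE}/documents/add_document",
        headers=auth_header,
        files={"file": ("manual.pdf", fh, "application/pdf")},
        data={
            "catalog_id": 1,          # placeholder catalog
            "language": "en",
            "user_metadata": json.dumps({"source": "upload-example"}),
        },
    )
print(resp.status_code, resp.json())  # 201 with document/version/task IDs on success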


@@ -1,7 +0,0 @@
from flask import request
from flask.views import MethodView
class RegisterAPI(MethodView):
def post(self):
username = request.json['username']


@@ -0,0 +1,82 @@
from flask import Blueprint, current_app, request
from flask_healthz import HealthError
from sqlalchemy import text
from sqlalchemy.exc import SQLAlchemyError
from celery.exceptions import TimeoutError as CeleryTimeoutError
from prometheus_client import Counter, Histogram, generate_latest, CONTENT_TYPE_LATEST
from common.extensions import db, metrics, minio_client
from common.utils.celery_utils import current_celery
healthz_bp = Blueprint('healthz', __name__, url_prefix='/_healthz')
# Define Prometheus metrics
api_request_counter = Counter('api_request_count', 'API Request Count', ['method', 'endpoint'])
api_request_latency = Histogram('api_request_latency_seconds', 'API Request latency')
def liveness():
try:
# Basic check to see if the app is running
return True
except Exception:
raise HealthError("Liveness check failed")
def readiness():
checks = {
"database": check_database(),
# "celery": check_celery(),
"minio": check_minio(),
# Add more checks as needed
}
if not all(checks.values()):
raise HealthError("Readiness check failed")
def check_database():
try:
# Perform a simple database query
db.session.execute("SELECT 1")
return True
except SQLAlchemyError:
current_app.logger.error("Database check failed", exc_info=True)
return False
def check_celery():
try:
# Send a simple task to Celery
result = current_celery.send_task('ping', queue='eveai_workers.ping')
response = result.get(timeout=10) # Wait for up to 10 seconds for a response
return response == 'pong'
except CeleryTimeoutError:
current_app.logger.error("Celery check timed out", exc_info=True)
return False
except Exception as e:
current_app.logger.error(f"Celery check failed: {str(e)}", exc_info=True)
return False
def check_minio():
try:
# List buckets to check if MinIO is accessible
minio_client.list_buckets()
return True
except Exception as e:
current_app.logger.error(f"MinIO check failed: {str(e)}", exc_info=True)
return False
@healthz_bp.route('/metrics')
@metrics.do_not_track()
def prometheus_metrics():
return generate_latest(), 200, {'Content-Type': CONTENT_TYPE_LATEST}
def init_healtz(app):
app.config.update(
HEALTHZ={
"live": "healthz_views.liveness",
"ready": "healthz_views.readiness",
}
)


@@ -7,9 +7,11 @@ from werkzeug.middleware.proxy_fix import ProxyFix
 import logging.config
-from common.extensions import (db, migrate, bootstrap, security, mail, login_manager, cors, csrf, session,
-                               minio_client, simple_encryption)
+from common.extensions import (db, migrate, bootstrap, security, mail, login_manager, cors, csrf, session,
+                               minio_client, simple_encryption, metrics, cache_manager)
 from common.models.user import User, Role, Tenant, TenantDomain
 import common.models.interaction
+import common.models.entitlements
+import common.models.document
 from common.utils.nginx_utils import prefixed_url_for
 from config.logging_config import LOGGING
 from common.utils.security import set_tenant_session_data
@@ -17,6 +19,7 @@ from .errors import register_error_handlers
 from common.utils.celery_utils import make_celery, init_celery
 from common.utils.template_filters import register_filters
 from config.config import get_config
+from eveai_app.views.security_forms import ResetPasswordForm
 def create_app(config_file=None):
@@ -26,7 +29,6 @@ def create_app(config_file=None):
     app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_port=1)
     environment = os.getenv('FLASK_ENV', 'development')
-    print(environment)
     match environment:
         case 'development':
@@ -37,6 +39,7 @@ def create_app(config_file=None):
         app.config.from_object(get_config('dev'))
     app.config['SESSION_KEY_PREFIX'] = 'eveai_app_'
+    app.config['SECURITY_RESET_PASSWORD_FORM'] = ResetPasswordForm
     try:
         os.makedirs(app.instance_path)
@@ -47,8 +50,6 @@ def create_app(config_file=None):
     logger = logging.getLogger(__name__)
     logger.info("eveai_app starting up")
-    logger.debug("start config")
-    logger.debug(app.config)
     # Register extensions
@@ -93,14 +94,11 @@ def create_app(config_file=None):
         }
         return jsonify(response), 500
-    @app.before_request
-    def before_request():
-        # app.logger.debug(f"Before request - Session ID: {session.sid}")
-        app.logger.debug(f"Before request - Session data: {session}")
-        app.logger.debug(f"Before request - Request headers: {request.headers}")
+    # @app.before_request
+    # def before_request():
+    #     # app.logger.debug(f"Before request - Session ID: {session.sid}")
+    #     app.logger.debug(f"Before request - Session data: {session}")
+    #     app.logger.debug(f"Before request - Request headers: {request.headers}")
-    # Register API
-    register_api(app)
     # Register template filters
     register_filters(app)
@@ -118,10 +116,11 @@ def register_extensions(app):
     csrf.init_app(app)
     login_manager.init_app(app)
     cors.init_app(app)
-    # kms_client.init_app(app)
     simple_encryption.init_app(app)
     session.init_app(app)
     minio_client.init_app(app)
+    cache_manager.init_app(app)
+    metrics.init_app(app)
 # Register Blueprints
@@ -136,9 +135,11 @@ def register_blueprints(app):
     app.register_blueprint(security_bp)
     from .views.interaction_views import interaction_bp
     app.register_blueprint(interaction_bp)
+    from .views.entitlements_views import entitlements_bp
+    app.register_blueprint(entitlements_bp)
+    from .views.administration_views import administration_bp
+    app.register_blueprint(administration_bp)
+    from .views.healthz_views import healthz_bp, init_healtz
+    app.register_blueprint(healthz_bp)
+    init_healtz(app)
-def register_api(app):
-    pass
-    # from . import api
-    # app.register_blueprint(api.bp, url_prefix='/api')


@@ -1,4 +1,4 @@
-from flask import render_template, request, jsonify, redirect
+from flask import render_template, request, jsonify, redirect, current_app
 from flask_login import current_user
 from common.utils.nginx_utils import prefixed_url_for
@@ -6,24 +6,28 @@ from common.utils.nginx_utils import prefixed_url_for
 def not_found_error(error):
     if not current_user.is_authenticated:
         return redirect(prefixed_url_for('security.login'))
+    current_app.logger.error(f"Not Found Error: {error}")
     return render_template('error/404.html'), 404
 def internal_server_error(error):
     if not current_user.is_authenticated:
         return redirect(prefixed_url_for('security.login'))
+    current_app.logger.error(f"Internal Server Error: {error}")
     return render_template('error/500.html'), 500
 def not_authorised_error(error):
     if not current_user.is_authenticated:
         return redirect(prefixed_url_for('security.login'))
+    current_app.logger.error(f"Not Authorised Error: {error}")
     return render_template('error/401.html')
 def access_forbidden(error):
     if not current_user.is_authenticated:
         return redirect(prefixed_url_for('security.login'))
+    current_app.logger.error(f"Access Forbidden: {error}")
     return render_template('error/403.html')
@@ -32,6 +36,7 @@ def key_error_handler(error):
     if str(error) == "'tenant'":
         return redirect(prefixed_url_for('security.login'))
     # For other KeyErrors, you might want to log the error and return a generic error page
+    current_app.logger.error(f"Key Error: {error}")
     return render_template('error/generic.html', error_message="An unexpected error occurred"), 500

File diff suppressed because one or more lines are too long


@@ -0,0 +1,22 @@
{% extends 'base.html' %}
{% from "macros.html" import render_selectable_table, render_pagination, render_field %}
{% block title %}Trigger Actions{% endblock %}
{% block content_title %}Trigger Actions{% endblock %}
{% block content_description %}Manually trigger batch actions{% endblock %}
{% block content %}
<!-- Trigger action Form -->
<form method="POST" action="{{ url_for('administration_bp.handle_trigger_action') }}">
<div class="form-group mt-3">
<button type="submit" name="action" value="update_usages" class="btn btn-secondary">Update Usages</button>
</div>
</form>
{% endblock %}
{% block content_footer %}
{% endblock %}
{% block scripts %}
{% endblock %}


@@ -11,9 +11,17 @@
     {{ form.hidden_tag() }}
     {% set disabled_fields = [] %}
     {% set exclude_fields = [] %}
-    {% for field in form %}
+    {% for field in form.get_static_fields() %}
     {{ render_field(field, disabled_fields, exclude_fields) }}
     {% endfor %}
+    {% for collection_name, fields in form.get_dynamic_fields().items() %}
+    {% if fields|length > 0 %}
+    <h4 class="mt-4">{{ collection_name }}</h4>
+    {% endif %}
+    {% for field in fields %}
+    {{ render_field(field, disabled_fields, exclude_fields) }}
+    {% endfor %}
+    {% endfor %}
     <button type="submit" class="btn btn-primary">Add Document</button>
 </form>
 {% endblock %}
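The template now expects the form to expose get_static_fields() and get_dynamic_fields(). A rough sketch of what that form API might look like; the method names come from the template, everything else is an assumption:

# Hypothetical shape of the dynamic-fields form the template relies on.
from flask_wtf import FlaskForm

class DynamicFieldsFormMixin(FlaskForm):
    # names of dynamically injected fields, grouped per collection,
    # e.g. {"Catalog Properties": ["chunk_size", "overlap"]}
    _dynamic = {}

    def get_static_fields(self):
        dynamic_names = {n for names in self._dynamic.values() for n in names}
        return [f for f in self if f.name not in dynamic_names]

    def get_dynamic_fields(self):
        return {coll: [getattr(self, n) for n in names]
                for coll, names in self._dynamic.items()}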


@@ -1,24 +0,0 @@
{% extends 'base.html' %}
{% from "macros.html" import render_field %}
{% block title %}Add Youtube Document{% endblock %}
{% block content_title %}Add Youtube Document{% endblock %}
{% block content_description %}Add a youtube url and the corresponding document to EveAI. In some cases, url's cannot be loaded directly. Download the html and add it as a document in that case.{% endblock %}
{% block content %}
<form method="post">
{{ form.hidden_tag() }}
{% set disabled_fields = [] %}
{% set exclude_fields = [] %}
{% for field in form %}
{{ render_field(field, disabled_fields, exclude_fields) }}
{% endfor %}
<button type="submit" class="btn btn-primary">Add Youtube Document</button>
</form>
{% endblock %}
{% block content_footer %}
{% endblock %}


@@ -0,0 +1,23 @@
{% extends 'base.html' %}
{% from "macros.html" import render_field %}
{% block title %}Catalog Registration{% endblock %}
{% block content_title %}Register Catalog{% endblock %}
{% block content_description %}Define a new catalog of documents in Evie's Library{% endblock %}
{% block content %}
<form method="post">
{{ form.hidden_tag() }}
{% set disabled_fields = [] %}
{% set exclude_fields = [] %}
{% for field in form %}
{{ render_field(field, disabled_fields, exclude_fields) }}
{% endfor %}
<button type="submit" class="btn btn-primary">Register Catalog</button>
</form>
{% endblock %}
{% block content_footer %}
{% endblock %}


@@ -0,0 +1,24 @@
{% extends 'base.html' %}
{% from 'macros.html' import render_selectable_table, render_pagination %}
{% block title %}Documents{% endblock %}
{% block content_title %}Catalogs{% endblock %}
{% block content_description %}View Catalogs for Tenant{% endblock %}
{% block content_class %}<div class="col-xl-12 col-lg-5 col-md-7 mx-auto"></div>{% endblock %}
{% block content %}
<div class="container">
<form method="POST" action="{{ url_for('document_bp.handle_catalog_selection') }}">
{{ render_selectable_table(headers=["Catalog ID", "Name", "Type"], rows=rows, selectable=True, id="catalogsTable") }}
<div class="form-group mt-3">
<button type="submit" name="action" value="set_session_catalog" class="btn btn-primary">Set Session Catalog</button>
<button type="submit" name="action" value="edit_catalog" class="btn btn-primary">Edit Catalog</button>
</div>
</form>
</div>
{% endblock %}
{% block content_footer %}
{{ render_pagination(pagination, 'document_bp.catalogs') }}
{% endblock %}

Some files were not shown because too many files have changed in this diff.