Compare commits


3 Commits

Author SHA1 Message Date
ARCHITECT
d35f11e2f7 Add apps modules and improve captain_claude logging
- Add apps/ directory with modular components:
  - captain.py: Main orchestrator
  - corp/, deck/, devops/, docker/, hst/: Domain-specific apps
- Fix duplicate logger handlers in long sessions
- Add flush=True to print statements for real-time output

Note: flow-ui, mindlink, tzzr-cli are separate repos (not included)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-02 01:13:30 +00:00
ARCHITECT
acffd2a2a7 Remove context-manager (moved to dedicated repo) 2025-12-31 20:07:26 +00:00
ARCHITECT
224d2f522c Add comprehensive README for CAPTAIN CLAUDE system
- Document full infrastructure overview (Central, DECK, CORP, HST)
- Add SSH access instructions and R2 storage documentation
- Detail all available services and microservices
- Include context-manager CLI commands and examples
- Document operation rules and automatic cleanup procedures
- Provide troubleshooting and contact information
- Add examples for common use cases

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 18:20:01 +00:00
20 changed files with 2148 additions and 2791 deletions

README.md (new file)
@@ -0,0 +1,352 @@
# CAPTAIN CLAUDE - TZZR Multi-Agent System

Central coordinator of the TZZR system (The Zero-Trust Resilient Resource Network). CAPTAIN CLAUDE manages the distributed infrastructure and centralized services, and coordinates specialized agents across multiple servers.

## Overview

CAPTAIN CLAUDE is a multi-agent system that coordinates the TZZR infrastructure:

- **Central server**: 69.62.126.110 (Gitea, PostgreSQL)
- **Remote servers**: DECK, CORP, HST
- **Storage**: Cloudflare R2 (s3://architect/)
- **Coordination**: specialized agents for specific tasks

## Infrastructure

### Servers

| Server | IP | Role |
|--------|-----|------|
| **Central** | 69.62.126.110 | Central control, Gitea, PostgreSQL |
| **DECK** | 72.62.1.113 | Services, agents (Clara, Alfred, Mason, Feldman) |
| **CORP** | 92.112.181.188 | ERP (Odoo), CMS (Directus), agents (Margaret, Jared) |
| **HST** | 72.62.2.84 | Directus, image management |

### SSH Access

All remote servers are reachable over SSH using the `~/.ssh/tzzr` key:

```bash
ssh -i ~/.ssh/tzzr root@72.62.1.113     # DECK
ssh -i ~/.ssh/tzzr root@92.112.181.188  # CORP
ssh -i ~/.ssh/tzzr root@72.62.2.84      # HST
```
## R2 Storage

Cloudflare R2 stores documents, configuration and backups:

```bash
# Endpoint
https://7dedae6030f5554d99d37e98a5232996.r2.cloudflarestorage.com

# Layout
s3://architect/
├── documentos adjuntos/            # Documents to share
├── documentos adjuntos/architect/  # Generated reports
├── system/                         # Internal configs and backups
├── gpu-services/                   # GRACE/PENNY/FACTORY services
├── backups/                        # Gitea and system backups
└── auditorias/                     # Audit logs
```

### R2 Commands

```bash
# List contents
aws s3 ls s3://architect/ --endpoint-url https://7dedae6030f5554d99d37e98a5232996.r2.cloudflarestorage.com

# Upload a file for the user
aws s3 cp archivo.md "s3://architect/documentos adjuntos/archivo.md" \
  --endpoint-url https://7dedae6030f5554d99d37e98a5232996.r2.cloudflarestorage.com

# Upload an internal file
aws s3 cp archivo "s3://architect/system/archivo" \
  --endpoint-url https://7dedae6030f5554d99d37e98a5232996.r2.cloudflarestorage.com
```
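Since every R2 call must carry the `--endpoint-url` flag, a small helper can build the invocation once. This is a hypothetical sketch (not part of the repo) showing how the upload command above could be assembled in Python:

```python
# Hypothetical helper: build the `aws s3 cp` argv for an R2 upload so the
# endpoint flag is never forgotten. The function name is an assumption.
R2_ENDPOINT = "https://7dedae6030f5554d99d37e98a5232996.r2.cloudflarestorage.com"

def r2_upload_cmd(local_path: str, r2_key: str) -> list:
    """Return the argv list for uploading local_path to s3://architect/<r2_key>."""
    return [
        "aws", "s3", "cp", local_path,
        f"s3://architect/{r2_key}",
        "--endpoint-url", R2_ENDPOINT,
    ]

cmd = r2_upload_cmd("report.md", "documentos adjuntos/architect/report.md")
```

The resulting list can be passed straight to `subprocess.run(cmd)`; using an argv list (rather than a shell string) sidesteps quoting issues with the space in `documentos adjuntos/`.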
## Specialized Agents

CAPTAIN CLAUDE coordinates multiple agents for different tasks:

### Available Agents

- **Captain**: main coordinator; task analysis and delegation
- **Coder**: code implementation, feature development
- **Reviewer**: code review, quality, standards
- **Researcher**: research, analysis, documentation
- **Architect**: system design, architecture, optimization

### Agent Execution

Agents can run:

- **In parallel**: for independent tasks
- **Sequentially**: for dependent tasks
- **Interactively**: with user feedback
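The parallel and sequential modes above can be sketched with the standard library. This is a minimal illustration with stand-in agent functions, not the system's actual scheduler:

```python
# Stand-in for a real agent invocation (the actual agents are not shown here).
from concurrent.futures import ThreadPoolExecutor

def run_agent(name: str) -> str:
    return f"{name}: done"

agents = ["coder", "reviewer"]

# In parallel: independent tasks run concurrently
with ThreadPoolExecutor() as pool:
    parallel_results = list(pool.map(run_agent, agents))

# Sequentially: each task runs after the previous one finishes
sequential_results = [run_agent(a) for a in agents]
```

`pool.map` preserves input order, so both modes yield results in the same order; only the scheduling differs.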
## Services on Each Server

### DECK (72.62.1.113)

**Microservices:**

- Clara (5051) - Immutable log and auditing
- Alfred (5052) - Workflow automation
- Mason (5053) - Data enrichment
- Feldman (5054) - Merkle validator

**Applications:**

- Nextcloud (8084) - Cloud storage
- Odoo (8069) - ERP
- Vaultwarden (8085) - Password manager
- Directus (8055) - CMS
- Mailcow (8180) - Mail server

**Infrastructure:**

- PostgreSQL (5432) - Database with pgvector
- Redis (6379) - In-memory cache

### CORP (92.112.181.188)

**Applications:**

- Odoo 17 (8069) - Business ERP system
- Directus 11 (8055) - CMS and content manager
- Nextcloud (8080) - Shared storage
- Vaultwarden (8081) - Password manager

**Microservices:**

- Margaret (5051) - Orchestration and coordination
- Jared (5052) - Data processing
- Mason (5053) - Report generation
- Feldman (5054) - Auditing and logging

**Infrastructure:**

- PostgreSQL (5432) - Database

### HST (72.62.2.84)

- Directus
- Image management
- Storage services
## Context-Manager

Central system for persistent context management. Available on DECK.

### Installation

```bash
ssh -i ~/.ssh/tzzr root@72.62.1.113 "context-manager --help"
```

### Main Commands

```bash
# Show help
context-manager --help

# List context blocks
context-manager block list

# View a block's contents
context-manager block view <ID>

# Create a block
context-manager block add "nombre_bloque" \
  --tipo "project" \
  --contenido '{"estado": "en_progreso"}'

# Delete a block
context-manager block remove <ID>

# List shared memory
context-manager memory list

# Add to memory
context-manager memory add "clave" "contenido"

# Interactive chat
context-manager chat
```
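Scripts that create blocks programmatically have to quote the JSON payload correctly. A hypothetical wrapper (not part of context-manager itself) that builds the `block add` argv shown above:

```python
# Hypothetical convenience wrapper: serialize the content dict to JSON and
# return the argv list, so shell quoting never corrupts the payload.
import json

def block_add_cmd(name: str, tipo: str, contenido: dict) -> list:
    return [
        "context-manager", "block", "add", name,
        "--tipo", tipo,
        "--contenido", json.dumps(contenido),
    ]

cmd = block_add_cmd("nombre_bloque", "project", {"estado": "en_progreso"})
```

Passing this list to `subprocess.run(cmd)` would invoke the CLI exactly as in the example above, with `json.dumps` guaranteeing valid JSON in `--contenido`.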
## Documentation

Manuals are available for each server:

- **MANUAL_USUARIO_DECK.md**: complete guide to the DECK server
  - Services, configuration, troubleshooting
  - PostgreSQL and administration
  - Quick-start guides
- **MANUAL_USUARIO_CORP.md**: complete guide to the CORP server
  - Odoo 17 and Directus 11
  - Administration and troubleshooting
  - Common procedures
- **MANUAL_USUARIO_HST.md**: documentation for the HST server

All manuals live in:

- Repository: `/home/architect/captain-claude/`
- R2: `s3://architect/system/skynet v8/`
## Operating Rules

### Core Principle

**Do not store documents on the local server.**

- Generated documents and reports go to R2, NOT to the local filesystem
- The server keeps only code, configuration and active applications
- Clean up automatically after generating files

### Automatic Cleanup

After any task that generates files:

1. Upload ALL generated files to R2
   ```bash
   aws s3 cp archivo "s3://architect/destino/archivo" \
     --endpoint-url https://7dedae6030f5554d99d37e98a5232996.r2.cloudflarestorage.com
   ```
2. Verify that they are in R2
   ```bash
   aws s3 ls s3://architect/destino/ \
     --endpoint-url https://7dedae6030f5554d99d37e98a5232996.r2.cloudflarestorage.com
   ```
3. Delete the local files
   ```bash
   rm -rf carpeta_local/
   ```
4. Do not wait for the user to ask - run the cleanup automatically

### R2 Destinations by Type

| Type | R2 destination |
|------|----------------|
| Audits | `s3://architect/auditorias/` |
| Reports for the user | `s3://architect/documentos adjuntos/architect/` |
| Internal configs/backups | `s3://architect/system/` |
| User documents | `s3://architect/documentos adjuntos/` |
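The destination table above can be encoded as a lookup so cleanup scripts resolve the right prefix mechanically. A small sketch (the `kind` names are assumptions, not a repo API):

```python
# Destination prefixes, mirroring the table above. Key names are hypothetical.
R2_DESTINATIONS = {
    "audit": "s3://architect/auditorias/",
    "user_report": "s3://architect/documentos adjuntos/architect/",
    "internal": "s3://architect/system/",
    "user_document": "s3://architect/documentos adjuntos/",
}

def r2_destination(kind: str, filename: str) -> str:
    """Return the full R2 key for a generated file of the given kind."""
    return R2_DESTINATIONS[kind] + filename

path = r2_destination("audit", "2025-12-30.log")
```

An unknown `kind` raises `KeyError`, which is preferable to silently uploading an artifact to the wrong bucket prefix.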
## Running

### Startup

```bash
# Run CAPTAIN CLAUDE
python captain_claude.py

# Or via the script
./run.sh
```

### Environment

Required:

- SSH access to the remote servers (`~/.ssh/tzzr` key)
- R2 credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
- Configured APIs (Anthropic, OpenAI, etc.)

### Logs and Monitoring

Logs are stored in:

- Local: `captain_output/`
- R2: `s3://architect/auditorias/`
## Use Cases

### 1. Monitor Infrastructure Health

```bash
python captain_claude.py --action health-check --all-servers
```

### 2. Generate Reports

```bash
python captain_claude.py --action report --type performance --output r2
```

### 3. Manage Services

```bash
# Check a service's state on DECK
ssh -i ~/.ssh/tzzr root@72.62.1.113 "docker ps"

# Restart a service
ssh -i ~/.ssh/tzzr root@72.62.1.113 "docker restart clara-service"
```

### 4. Manage Context

```bash
# Create a context block for coordination
context-manager block add "tarea_importante" \
  --tipo "coordination" \
  --contenido '{"agentes": ["coder", "reviewer"], "estado": "en_progreso"}'
```
## Contact and Support

### Coordination

- **Central server**: Git at http://localhost:3000
- **Logbook**: logs in R2 at `s3://architect/auditorias/`
- **Documentation**: manuals in R2 at `s3://architect/system/skynet v8/`

### Escalation

1. Check the relevant logs
2. Consult the documentation
3. Open a ticket in Gitea
4. Contact the system administrator
## Technical Information

### Dependencies

- Python 3.8+
- SSH (connection to remote servers)
- AWS CLI (R2 access)
- Docker (for services)
- PostgreSQL (database)

### Project Layout

```
captain-claude/
├── README.md            # This file
├── CAPTAIN_CLAUDE.md    # Operating instructions
├── captain_claude.py    # Main coordinator
├── captain              # Launch script
├── apps/                # Integrated applications
├── context-manager/     # Context management system
├── venv/                # Python virtual environment
└── captain_output/      # Output and logs
```

### Permissions and Security

- Protected SSH key: `~/.ssh/tzzr`
- R2 credentials in environment variables
- Logs audited and stored in R2
- Role-restricted access
## Version and Updates

**Version**: 1.0
**Last updated**: 2025-12-30
**System**: TZZR - Skynet v8

## License

Internal project of the TZZR system.

---

For more information, see:

- Gitea: http://69.62.126.110:3000
- R2 system docs: s3://architect/system/
- Manuals: s3://architect/system/skynet v8/

apps/captain.py (new file)
@@ -0,0 +1,165 @@
#!/usr/bin/env python3
"""
CAPTAIN CLAUDE - Unified CLI for the TZZR System
Main entry point for all apps
"""
import sys
import os
import subprocess

APPS_DIR = os.path.dirname(os.path.abspath(__file__))

APPS = {
    "devops": {
        "path": f"{APPS_DIR}/devops/app.py",
        "desc": "Deployment and build management",
        "alias": ["deploy", "build"]
    },
    "deck": {
        "path": f"{APPS_DIR}/deck/app.py",
        "desc": "DECK server (72.62.1.113) - Agents, Mail, Apps",
        "alias": ["d"]
    },
    "corp": {
        "path": f"{APPS_DIR}/corp/app.py",
        "desc": "CORP server (92.112.181.188) - Margaret, Jared, Mason, Feldman",
        "alias": ["c"]
    },
    "hst": {
        "path": f"{APPS_DIR}/hst/app.py",
        "desc": "HST server (72.62.2.84) - Directus, Images",
        "alias": ["h"]
    },
    "docker": {
        "path": f"{APPS_DIR}/docker/app.py",
        "desc": "Multi-server Docker management",
        "alias": ["dk"]
    }
}
def print_banner():
    print("""
╔═══════════════════════════════════════════════════════════════╗
║                        CAPTAIN CLAUDE                         ║
║                   TZZR Multi-Agent System                     ║
╠═══════════════════════════════════════════════════════════════╣
║  Central server: 69.62.126.110 (Gitea, PostgreSQL)            ║
║  DECK: 72.62.1.113   │   CORP: 92.112.181.188                 ║
║  HST:  72.62.2.84    │   R2: Cloudflare Storage               ║
╚═══════════════════════════════════════════════════════════════╝
""")

def print_help():
    print_banner()
    print("Usage: python captain.py <app> [command] [arguments]\n")
    print("Available apps:")
    print("-" * 60)
    for name, info in APPS.items():
        aliases = ", ".join(info["alias"]) if info["alias"] else ""
        alias_str = f" (alias: {aliases})" if aliases else ""
        print(f"  {name}{alias_str}")
        print(f"    └─ {info['desc']}")
    print()
    print("Examples:")
    print("  python captain.py devops deploy clara deck")
    print("  python captain.py deck agents")
    print("  python captain.py corp pending")
    print("  python captain.py docker ps all")
    print("  python captain.py hst directus")
    print()
    print("Quick shortcuts:")
    print("  python captain.py d agents       # DECK agents")
    print("  python captain.py c flows        # CORP flows")
    print("  python captain.py dk dashboard   # Docker dashboard")
from typing import Optional

def resolve_app(name: str) -> Optional[str]:
    """Resolve a name or alias to an app name (None if unknown)"""
    if name in APPS:
        return name
    for app_name, info in APPS.items():
        if name in info.get("alias", []):
            return app_name
    return None

def run_app(app: str, args: list):
    """Run an app with the given arguments"""
    app_name = resolve_app(app)
    if not app_name:
        print(f"❌ App not found: {app}")
        print(f"   Available apps: {', '.join(APPS.keys())}")
        return 1
    app_path = APPS[app_name]["path"]
    if not os.path.exists(app_path):
        print(f"❌ File not found: {app_path}")
        return 1
    cmd = ["python3", app_path] + args
    try:
        result = subprocess.run(cmd)
        return result.returncode
    except KeyboardInterrupt:
        print("\n⚠️ Interrupted")
        return 130
    except Exception as e:
        print(f"❌ Error: {e}")
        return 1
def quick_status():
    """Print a quick status of all servers"""
    print_banner()
    print("📊 Quick system status:\n")
    servers = [
        ("DECK", "root@72.62.1.113"),
        ("CORP", "root@92.112.181.188"),
        ("HST", "root@72.62.2.84")
    ]
    for name, host in servers:
        cmd = f"ssh -i ~/.ssh/tzzr -o ConnectTimeout=3 {host} 'docker ps -q | wc -l' 2>/dev/null"
        try:
            result = subprocess.run(cmd, shell=True, capture_output=True, text=True, timeout=5)
            if result.returncode == 0:
                count = result.stdout.strip()
                print(f"{name} ({host.split('@')[1]}): {count} containers")
            else:
                print(f"{name} ({host.split('@')[1]}): not responding")
        except subprocess.TimeoutExpired:
            print(f"{name} ({host.split('@')[1]}): timeout")
    print()
def main():
    if len(sys.argv) < 2:
        print_help()
        return 0
    cmd = sys.argv[1]
    # Special commands
    if cmd in ["-h", "--help", "help"]:
        print_help()
        return 0
    if cmd in ["status", "s"]:
        quick_status()
        return 0
    if cmd in ["version", "-v", "--version"]:
        print("Captain Claude v1.0.0")
        return 0
    # Run an app
    args = sys.argv[2:] if len(sys.argv) > 2 else []
    return run_app(cmd, args)

if __name__ == "__main__":
    sys.exit(main())

apps/corp/app.py (new file)
@@ -0,0 +1,343 @@
#!/usr/bin/env python3
"""
TZZR CORP App - Management of the CORP server (92.112.181.188)
Services: Margaret/Jared/Mason/Feldman agents, Context Manager, apps
"""
import subprocess
import json
import sys
from dataclasses import dataclass
from typing import List, Dict

SERVER = "root@92.112.181.188"
SSH_KEY = "~/.ssh/tzzr"

# Known services on CORP
SERVICES = {
    # TZZR microservices
    "margaret": {"port": 5051, "type": "service", "desc": "Data intake", "path": "/opt/margaret"},
    "jared": {"port": 5052, "type": "service", "desc": "Flow automation", "path": "/opt/jared"},
    "mason": {"port": 5053, "type": "service", "desc": "Enrichment space", "path": "/opt/mason"},
    "feldman": {"port": 5054, "type": "service", "desc": "Final ledger + Merkle", "path": "/opt/feldman"},
    # Applications
    "nextcloud": {"port": 8080, "type": "app", "desc": "Cloud storage", "url": "nextcloud.tzzrcorp.me"},
    "directus": {"port": 8055, "type": "app", "desc": "CMS", "url": "tzzrcorp.me"},
    "vaultwarden": {"port": 8081, "type": "app", "desc": "Passwords", "url": "vault.tzzrcorp.me"},
    "shlink": {"port": 8082, "type": "app", "desc": "URL shortener", "url": "shlink.tzzrcorp.me"},
    "addy": {"port": 8083, "type": "app", "desc": "Email aliases", "url": "addy.tzzrcorp.me"},
    "odoo": {"port": 8069, "type": "app", "desc": "ERP", "url": "erp.tzzrcorp.me"},
    "ntfy": {"port": 8880, "type": "app", "desc": "Notifications", "url": "ntfy.tzzrcorp.me"},
    # APIs and automation
    "hsu": {"port": 5002, "type": "api", "desc": "HSU User Library", "url": "hsu.tzzrcorp.me"},
    "windmill": {"port": 8000, "type": "automation", "desc": "Flow automation", "url": "windmill.tzzrcorp.me"},
    "context-manager": {"port": None, "type": "system", "desc": "AI Context Manager", "path": "/opt/context-manager"},
    # Infrastructure
    "postgres": {"port": 5432, "type": "db", "desc": "PostgreSQL 16"},
    "redis": {"port": 6379, "type": "db", "desc": "Redis cache"},
}
from typing import Any

@dataclass
class Result:
    success: bool
    data: Any
    error: str = ""

def ssh(cmd: str, timeout: int = 60) -> Result:
    """Run a command on CORP"""
    full_cmd = f'ssh -i {SSH_KEY} {SERVER} "{cmd}"'
    try:
        result = subprocess.run(full_cmd, shell=True, capture_output=True, text=True, timeout=timeout)
        return Result(success=result.returncode == 0, data=result.stdout.strip(), error=result.stderr.strip())
    except Exception as e:
        return Result(success=False, data="", error=str(e))
def get_all_containers() -> List[Dict]:
    """List all Docker containers"""
    result = ssh("docker ps -a --format '{{.Names}}|{{.Status}}|{{.Ports}}|{{.Image}}'")
    if not result.success:
        return []
    containers = []
    for line in result.data.split('\n'):
        if line:
            parts = line.split('|')
            containers.append({
                "name": parts[0],
                "status": parts[1] if len(parts) > 1 else "",
                "ports": parts[2] if len(parts) > 2 else "",
                "image": parts[3] if len(parts) > 3 else ""
            })
    return containers
def get_service_status(service: str) -> Dict:
    """Detailed status of a service"""
    info = SERVICES.get(service, {})
    # TZZR microservices run in containers named "<name>-service"
    container = f"{service}-service" if info.get("type") == "service" else service
    status_result = ssh(f"docker ps --filter name={container} --format '{{{{.Status}}}}'")
    health = "unknown"
    if info.get("port"):
        health_result = ssh(f"curl -s http://localhost:{info.get('port')}/health 2>/dev/null | head -c 100")
        health = "healthy" if health_result.success and health_result.data else "unhealthy"
    return {
        "service": service,
        "container": container,
        "status": status_result.data if status_result.success else "not found",
        "health": health,
        "port": info.get("port"),
        "url": info.get("url"),
        "desc": info.get("desc")
    }

def restart_service(service: str) -> Result:
    """Restart a service"""
    info = SERVICES.get(service, {})
    container = f"{service}-service" if info.get("type") == "service" else service
    return ssh(f"docker restart {container}")

def get_logs(service: str, lines: int = 100) -> str:
    """Fetch logs for a service"""
    info = SERVICES.get(service, {})
    container = f"{service}-service" if info.get("type") == "service" else service
    result = ssh(f"docker logs {container} --tail {lines} 2>&1")
    return result.data if result.success else result.error
def query_agent(agent: str, endpoint: str, method: str = "GET", data: dict = None) -> Result:
    """Send a request to a TZZR agent"""
    info = SERVICES.get(agent)
    if not info or info.get("type") != "service":
        return Result(success=False, data="", error="Agent not found")
    port = info["port"]
    if method == "GET":
        cmd = f"curl -s http://localhost:{port}{endpoint}"
    else:
        json_data = json.dumps(data) if data else "{}"
        cmd = f"curl -s -X {method} -H 'Content-Type: application/json' -d '{json_data}' http://localhost:{port}{endpoint}"
    result = ssh(cmd)
    try:
        return Result(success=result.success, data=json.loads(result.data) if result.data else {})
    except json.JSONDecodeError:
        return Result(success=result.success, data=result.data)
def get_system_stats() -> Dict:
    """System statistics"""
    stats = {}
    mem_result = ssh("free -h | grep Mem | awk '{print $2,$3,$4}'")
    if mem_result.success:
        parts = mem_result.data.split()
        stats["memory"] = {"total": parts[0], "used": parts[1], "available": parts[2]}
    disk_result = ssh("df -h / | tail -1 | awk '{print $2,$3,$4,$5}'")
    if disk_result.success:
        parts = disk_result.data.split()
        stats["disk"] = {"total": parts[0], "used": parts[1], "available": parts[2], "percent": parts[3]}
    containers_result = ssh("docker ps -q | wc -l")
    stats["containers_running"] = int(containers_result.data) if containers_result.success else 0
    return stats

def get_postgres_stats() -> Dict:
    """PostgreSQL statistics"""
    result = ssh("sudo -u postgres psql -c \"SELECT datname, pg_size_pretty(pg_database_size(datname)) as size FROM pg_database WHERE datistemplate = false;\" -t")
    databases = {}
    if result.success:
        for line in result.data.split('\n'):
            if '|' in line:
                parts = line.split('|')
                databases[parts[0].strip()] = parts[1].strip()
    return databases
def ingest_to_margaret(contenedor: dict, auth_key: str) -> Result:
    """Send data to Margaret for ingestion"""
    json_data = json.dumps(contenedor)
    cmd = f"curl -s -X POST -H 'Content-Type: application/json' -H 'X-Auth-Key: {auth_key}' -d '{json_data}' http://localhost:5051/ingest"
    result = ssh(cmd)
    try:
        return Result(success=result.success, data=json.loads(result.data) if result.data else {})
    except json.JSONDecodeError:
        return Result(success=result.success, data=result.data)

def execute_flow(flow_id: int, data: dict, auth_key: str) -> Result:
    """Run a flow on Jared"""
    json_data = json.dumps(data)
    cmd = f"curl -s -X POST -H 'Content-Type: application/json' -H 'X-Auth-Key: {auth_key}' -d '{json_data}' http://localhost:5052/ejecutar/{flow_id}"
    result = ssh(cmd)
    try:
        return Result(success=result.success, data=json.loads(result.data) if result.data else {})
    except json.JSONDecodeError:
        return Result(success=result.success, data=result.data)

def get_pending_issues() -> Result:
    """Fetch pending issues from Mason"""
    result = ssh("curl -s http://localhost:5053/pendientes")
    try:
        return Result(success=result.success, data=json.loads(result.data) if result.data else [])
    except json.JSONDecodeError:
        return Result(success=result.success, data=result.data)

def verify_merkle(merkle_hash: str) -> Result:
    """Verify a record on Feldman"""
    result = ssh(f"curl -s http://localhost:5054/verify/{merkle_hash}")
    try:
        return Result(success=result.success, data=json.loads(result.data) if result.data else {})
    except json.JSONDecodeError:
        return Result(success=result.success, data=result.data)
# CLI
def main():
    if len(sys.argv) < 2:
        print("""
TZZR CORP App - Server 92.112.181.188
=====================================
Usage: python app.py <command> [arguments]

General commands:
  status                   Overall system status
  containers               List all containers
  stats                    System statistics

Service management:
  service <name>           Detailed service status
  restart <service>        Restart a service
  logs <service> [lines]   Show logs

TZZR agents (data flow):
  agents                   Status of all agents
  agent <name> <endpoint>  Query an agent (GET)
  margaret                 Margaret status (Secretary)
  jared                    Jared status (Flows)
  mason                    Mason status (Editor)
  feldman                  Feldman status (Accountant)
  flows                    List predefined flows on Jared
  pending                  Pending issues on Mason
  verify <hash>            Verify a hash on Feldman

Databases:
  postgres                 PostgreSQL statistics

Available services:
  Agents: margaret, jared, mason, feldman
  Apps: nextcloud, directus, vaultwarden, shlink, addy, odoo, ntfy
  APIs: hsu, context-manager
  DB: postgres, redis
""")
        return
    cmd = sys.argv[1]
    if cmd == "status":
        print("\n📊 CORP Status (92.112.181.188)")
        print("=" * 40)
        stats = get_system_stats()
        print(f"💾 Memory: {stats.get('memory', {}).get('used', '?')}/{stats.get('memory', {}).get('total', '?')}")
        print(f"💿 Disk: {stats.get('disk', {}).get('used', '?')}/{stats.get('disk', {}).get('total', '?')} ({stats.get('disk', {}).get('percent', '?')})")
        print(f"📦 Containers: {stats.get('containers_running', 0)}")
    elif cmd == "containers":
        containers = get_all_containers()
        print(f"\n📦 Containers on CORP ({len(containers)} total):")
        for c in containers:
            icon = "✅" if "Up" in c["status"] else "❌"
            print(f"  {icon} {c['name']}: {c['status'][:30]}")
    elif cmd == "service" and len(sys.argv) >= 3:
        status = get_service_status(sys.argv[2])
        print(f"\n🔧 {status['service']}")
        print(f"   Container: {status['container']}")
        print(f"   Status: {status['status']}")
        print(f"   Health: {status['health']}")
        print(f"   Desc: {status.get('desc', '')}")
        if status.get('url'):
            print(f"   URL: https://{status['url']}")
    elif cmd == "restart" and len(sys.argv) >= 3:
        result = restart_service(sys.argv[2])
        print(f"{'✅' if result.success else '❌'} {sys.argv[2]}: {'restarted' if result.success else result.error}")
    elif cmd == "logs" and len(sys.argv) >= 3:
        lines = int(sys.argv[3]) if len(sys.argv) > 3 else 50
        print(get_logs(sys.argv[2], lines))
    elif cmd == "agents":
        print("\n🤖 TZZR agents on CORP:")
        print("   PACKET → MARGARET → JARED → MASON/FELDMAN")
        print()
        for agent in ["margaret", "jared", "mason", "feldman"]:
            status = get_service_status(agent)
            icon = "✅" if "Up" in status["status"] else "❌"
            health = f"({status['health']})" if status['health'] != "unknown" else ""
            print(f"  {icon} {agent}: {status['status'][:20]} {health}")
            print(f"     └─ {SERVICES[agent]['desc']}")
    elif cmd == "agent" and len(sys.argv) >= 4:
        result = query_agent(sys.argv[2], sys.argv[3])
        if result.success:
            print(json.dumps(result.data, indent=2) if isinstance(result.data, dict) else result.data)
        else:
            print(f"❌ Error: {result.error}")
    elif cmd in ["margaret", "jared", "mason", "feldman"]:
        status = get_service_status(cmd)
        print(f"\n🤖 {cmd.upper()} - {SERVICES[cmd]['desc']}")
        print(f"   Port: {SERVICES[cmd]['port']}")
        print(f"   Status: {status['status']}")
        print(f"   Health: {status['health']}")
        # Agent-specific endpoints
        contracts = query_agent(cmd, "/s-contract")
        if contracts.success and isinstance(contracts.data, dict):
            print(f"   Version: {contracts.data.get('version', 'N/A')}")
    elif cmd == "flows":
        result = query_agent("jared", "/flujos")
        if result.success:
            flows = result.data if isinstance(result.data, list) else []
            print("\n📋 Predefined flows on Jared:")
            for f in flows:
                print(f"  • [{f.get('id')}] {f.get('nombre')}")
    elif cmd == "pending":
        result = get_pending_issues()
        if result.success:
            issues = result.data if isinstance(result.data, list) else []
            print(f"\n⚠️ Pending issues on Mason: {len(issues)}")
            for i in issues[:10]:
                print(f"  • {i.get('id')}: {i.get('tipo', 'N/A')} - {i.get('descripcion', '')[:40]}")
    elif cmd == "verify" and len(sys.argv) >= 3:
        result = verify_merkle(sys.argv[2])
        if result.success:
            print(json.dumps(result.data, indent=2))
        else:
            print(f"❌ Error: {result.error}")
    elif cmd == "postgres":
        dbs = get_postgres_stats()
        print("\n🐘 PostgreSQL on CORP:")
        for db, size in dbs.items():
            print(f"  • {db}: {size}")
    else:
        print("❌ Unknown command. Run 'python app.py' for help.")

if __name__ == "__main__":
    main()

apps/deck/app.py (new file)
@@ -0,0 +1,317 @@
#!/usr/bin/env python3
"""
TZZR DECK App - Management of the DECK server (72.62.1.113)
Services: TZZR agents, Mailcow, Nextcloud, Odoo, Vaultwarden, etc.
"""
import subprocess
import json
import sys
from dataclasses import dataclass
from typing import List, Dict

SERVER = "root@72.62.1.113"
SSH_KEY = "~/.ssh/tzzr"

# Known services on DECK
SERVICES = {
    # TZZR microservices
    "clara": {"port": 5051, "type": "service", "desc": "Immutable log"},
    "alfred": {"port": 5052, "type": "service", "desc": "Flow automation"},
    "mason": {"port": 5053, "type": "service", "desc": "Enrichment space"},
    "feldman": {"port": 5054, "type": "service", "desc": "Merkle validator"},
    # Applications
    "nextcloud": {"port": 8084, "type": "app", "desc": "Cloud storage", "url": "cloud.tzzrdeck.me"},
    "odoo": {"port": 8069, "type": "app", "desc": "ERP", "url": "odoo.tzzrdeck.me", "container": "deck-odoo"},
    "vaultwarden": {"port": 8085, "type": "app", "desc": "Passwords", "url": "vault.tzzrdeck.me"},
    "directus": {"port": 8055, "type": "app", "desc": "CMS", "url": "directus.tzzrdeck.me"},
    "shlink": {"port": 8083, "type": "app", "desc": "URL shortener", "url": "shlink.tzzrdeck.me"},
    "ntfy": {"port": 8080, "type": "app", "desc": "Notifications", "url": "ntfy.tzzrdeck.me"},
    "filebrowser": {"port": 8082, "type": "app", "desc": "File manager", "url": "files.tzzrdeck.me"},
    # Infrastructure
    "postgres": {"port": 5432, "type": "db", "desc": "PostgreSQL with pgvector"},
    "redis": {"port": 6379, "type": "db", "desc": "Redis cache"},
    "mailcow": {"port": 8180, "type": "mail", "desc": "Mail server", "url": "mail.tzzr.net"},
}
from typing import Any

@dataclass
class Result:
    success: bool
    data: Any
    error: str = ""

def ssh(cmd: str, timeout: int = 60) -> Result:
    """Run a command on DECK"""
    full_cmd = f'ssh -i {SSH_KEY} {SERVER} "{cmd}"'
    try:
        result = subprocess.run(full_cmd, shell=True, capture_output=True, text=True, timeout=timeout)
        return Result(success=result.returncode == 0, data=result.stdout.strip(), error=result.stderr.strip())
    except Exception as e:
        return Result(success=False, data="", error=str(e))
def get_all_containers() -> List[Dict]:
    """List all Docker containers"""
    result = ssh("docker ps -a --format '{{.Names}}|{{.Status}}|{{.Ports}}|{{.Image}}'")
    if not result.success:
        return []
    containers = []
    for line in result.data.split('\n'):
        if line:
            parts = line.split('|')
            containers.append({
                "name": parts[0],
                "status": parts[1] if len(parts) > 1 else "",
                "ports": parts[2] if len(parts) > 2 else "",
                "image": parts[3] if len(parts) > 3 else ""
            })
    return containers
def get_service_status(service: str) -> Dict:
    """Detailed status of a service"""
    info = SERVICES.get(service, {})
    container = info.get("container", f"{service}-service" if info.get("type") == "service" else service)
    # Container state
    status_result = ssh(f"docker ps --filter name={container} --format '{{{{.Status}}}}'")
    # Recent logs
    logs_result = ssh(f"docker logs {container} --tail 5 2>&1")
    # Health check if the service exposes an endpoint
    health = "unknown"
    if info.get("type") == "service":
        health_result = ssh(f"curl -s http://localhost:{info.get('port')}/health 2>/dev/null")
        health = "healthy" if health_result.success and health_result.data else "unhealthy"
    return {
        "service": service,
        "container": container,
        "status": status_result.data if status_result.success else "not found",
        "health": health,
        "port": info.get("port"),
        "url": info.get("url"),
        "logs": logs_result.data[:500] if logs_result.success else ""
    }

def restart_service(service: str) -> Result:
    """Restart a service"""
    info = SERVICES.get(service, {})
    container = info.get("container", f"{service}-service" if info.get("type") == "service" else service)
    return ssh(f"docker restart {container}")

def stop_service(service: str) -> Result:
    """Stop a service"""
    info = SERVICES.get(service, {})
    container = info.get("container", f"{service}-service" if info.get("type") == "service" else service)
    return ssh(f"docker stop {container}")

def start_service(service: str) -> Result:
    """Start a service"""
    info = SERVICES.get(service, {})
    container = info.get("container", f"{service}-service" if info.get("type") == "service" else service)
    return ssh(f"docker start {container}")

def get_logs(service: str, lines: int = 100) -> str:
    """Fetch logs for a service"""
    info = SERVICES.get(service, {})
    container = info.get("container", f"{service}-service" if info.get("type") == "service" else service)
    result = ssh(f"docker logs {container} --tail {lines} 2>&1")
    return result.data if result.success else result.error
def get_system_stats() -> Dict:
    """System statistics"""
    stats = {}
    # Memory
    mem_result = ssh("free -h | grep Mem | awk '{print $2,$3,$4}'")
    if mem_result.success:
        parts = mem_result.data.split()
        stats["memory"] = {"total": parts[0], "used": parts[1], "available": parts[2]}
    # Disk
    disk_result = ssh("df -h / | tail -1 | awk '{print $2,$3,$4,$5}'")
    if disk_result.success:
        parts = disk_result.data.split()
        stats["disk"] = {"total": parts[0], "used": parts[1], "available": parts[2], "percent": parts[3]}
    # Containers
    containers_result = ssh("docker ps -q | wc -l")
    stats["containers_running"] = int(containers_result.data) if containers_result.success else 0
    # Load average
    load_result = ssh("cat /proc/loadavg | awk '{print $1,$2,$3}'")
    if load_result.success:
        parts = load_result.data.split()
        stats["load"] = {"1m": parts[0], "5m": parts[1], "15m": parts[2]}
    return stats
def query_service(service: str, endpoint: str, method: str = "GET", data: dict = None) -> Result:
"""Hace petición a un servicio TZZR"""
info = SERVICES.get(service)
if not info or info.get("type") != "service":
return Result(success=False, data="", error="Servicio no encontrado")
port = info["port"]
if method == "GET":
cmd = f"curl -s http://localhost:{port}{endpoint}"
else:
json_data = json.dumps(data) if data else "{}"
cmd = f"curl -s -X {method} -H 'Content-Type: application/json' -d '{json_data}' http://localhost:{port}{endpoint}"
result = ssh(cmd)
try:
return Result(success=result.success, data=json.loads(result.data) if result.data else {})
except json.JSONDecodeError:
return Result(success=result.success, data=result.data)
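`query_service` tries to hand back parsed JSON and falls back to the raw string when the body isn't JSON. That pattern in isolation, as a small helper (the helper name is mine, not the app's):

```python
import json

def parse_maybe_json(text: str):
    """Return parsed JSON when possible, the raw string otherwise."""
    try:
        return json.loads(text) if text else {}
    except json.JSONDecodeError:
        return text

print(parse_maybe_json('{"status": "ok"}'))  # {'status': 'ok'}
print(parse_maybe_json('plain text'))        # plain text
print(parse_maybe_json(''))                  # {}
```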
def get_postgres_stats() -> Dict:
"""Estadísticas de PostgreSQL"""
result = ssh("sudo -u postgres psql -c \"SELECT datname, pg_size_pretty(pg_database_size(datname)) as size FROM pg_database WHERE datistemplate = false;\" -t")
databases = {}
if result.success:
for line in result.data.split('\n'):
if '|' in line:
parts = line.split('|')
databases[parts[0].strip()] = parts[1].strip()
return databases
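The psql `-t` output parsed above is a pipe-separated name/size pair per row. The same loop run locally on canned output (database names and sizes are made up):

```python
# Typical `psql -t` rows: " name | size " with surrounding whitespace
raw = " gitea    | 42 MB\n mailcow  | 120 MB\n"
databases = {}
for line in raw.split('\n'):
    if '|' in line:
        name, size = (p.strip() for p in line.split('|'))
        databases[name] = size
print(databases)  # {'gitea': '42 MB', 'mailcow': '120 MB'}
```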
def get_mailcow_status() -> Dict:
"""Estado de Mailcow"""
containers = ["postfix-mailcow", "dovecot-mailcow", "nginx-mailcow", "mysql-mailcow", "redis-mailcow"]
status = {}
for c in containers:
result = ssh(f"docker ps --filter name={c} --format '{{{{.Status}}}}'")
status[c.replace("-mailcow", "")] = result.data if result.success and result.data else "stopped"
return status
# CLI
def main():
if len(sys.argv) < 2:
print("""
TZZR DECK App - Servidor 72.62.1.113
====================================
Uso: python app.py <comando> [argumentos]
Comandos Generales:
status Estado general del sistema
containers Lista todos los contenedores
stats Estadísticas del sistema
Gestión de Servicios:
service <name> Estado detallado de servicio
restart <service> Reiniciar servicio
stop <service> Detener servicio
start <service> Iniciar servicio
logs <service> [lines] Ver logs
Servicios TZZR:
services Estado de todos los servicios
query <name> <endpoint> Query a servicio (GET)
Bases de Datos:
postgres Estadísticas PostgreSQL
Mail:
mailcow Estado de Mailcow
Servicios disponibles:
TZZR: clara, alfred, mason, feldman
Apps: nextcloud, odoo, vaultwarden, directus, shlink, ntfy, filebrowser
DB: postgres, redis
Mail: mailcow
""")
return
cmd = sys.argv[1]
if cmd == "status":
print("\n📊 DECK Status (72.62.1.113)")
print("=" * 40)
stats = get_system_stats()
print(f"💾 Memoria: {stats.get('memory', {}).get('used', '?')}/{stats.get('memory', {}).get('total', '?')}")
print(f"💿 Disco: {stats.get('disk', {}).get('used', '?')}/{stats.get('disk', {}).get('total', '?')} ({stats.get('disk', {}).get('percent', '?')})")
print(f"📦 Contenedores: {stats.get('containers_running', 0)}")
print(f"⚡ Load: {stats.get('load', {}).get('1m', '?')}")
elif cmd == "containers":
containers = get_all_containers()
print(f"\n📦 Contenedores en DECK ({len(containers)} total):")
for c in containers:
icon = "" if "Up" in c["status"] else ""
print(f" {icon} {c['name']}: {c['status'][:30]}")
elif cmd == "stats":
stats = get_system_stats()
print(json.dumps(stats, indent=2))
elif cmd == "service" and len(sys.argv) >= 3:
status = get_service_status(sys.argv[2])
print(f"\n🔧 {status['service']}")
print(f" Container: {status['container']}")
print(f" Status: {status['status']}")
print(f" Health: {status['health']}")
if status.get('url'):
print(f" URL: https://{status['url']}")
if status.get('port'):
print(f" Puerto: {status['port']}")
elif cmd == "restart" and len(sys.argv) >= 3:
result = restart_service(sys.argv[2])
print(f"{'✅' if result.success else '❌'} {sys.argv[2]}: {'restarted' if result.success else result.error}")
elif cmd == "stop" and len(sys.argv) >= 3:
result = stop_service(sys.argv[2])
print(f"{'✅' if result.success else '❌'} {sys.argv[2]}: {'stopped' if result.success else result.error}")
elif cmd == "start" and len(sys.argv) >= 3:
result = start_service(sys.argv[2])
print(f"{'✅' if result.success else '❌'} {sys.argv[2]}: {'started' if result.success else result.error}")
elif cmd == "logs" and len(sys.argv) >= 3:
lines = int(sys.argv[3]) if len(sys.argv) > 3 else 50
print(get_logs(sys.argv[2], lines))
elif cmd == "services":
print("\n⚙️ Servicios TZZR en DECK:")
for svc in ["clara", "alfred", "mason", "feldman"]:
status = get_service_status(svc)
icon = "" if "Up" in status["status"] else ""
health = f"({status['health']})" if status['health'] != "unknown" else ""
print(f" {icon} {svc}: {status['status'][:25]} {health} - {SERVICES[svc]['desc']}")
elif cmd == "query" and len(sys.argv) >= 4:
result = query_service(sys.argv[2], sys.argv[3])
if result.success:
print(json.dumps(result.data, indent=2) if isinstance(result.data, dict) else result.data)
else:
print(f"❌ Error: {result.error}")
elif cmd == "postgres":
dbs = get_postgres_stats()
print("\n🐘 PostgreSQL en DECK:")
for db, size in dbs.items():
print(f"{db}: {size}")
elif cmd == "mailcow":
status = get_mailcow_status()
print("\n📧 Mailcow Status:")
for service, st in status.items():
icon = "" if "Up" in st else ""
print(f" {icon} {service}: {st[:30]}")
else:
print("❌ Comando no reconocido. Usa 'python app.py' para ver ayuda.")
if __name__ == "__main__":
main()

apps/devops/app.py Normal file

@@ -0,0 +1,230 @@
#!/usr/bin/env python3
"""
TZZR DevOps App - Deployment and build management
"""
import subprocess
import sys
from dataclasses import dataclass
from typing import Optional
import json
# Server configuration
SERVERS = {
"deck": {"host": "root@72.62.1.113", "name": "DECK"},
"corp": {"host": "root@92.112.181.188", "name": "CORP"},
"hst": {"host": "root@72.62.2.84", "name": "HST"},
"local": {"host": None, "name": "LOCAL (69.62.126.110)"}
}
SSH_KEY = "~/.ssh/tzzr"
# Known agents
AGENTS = {
"deck": ["clara", "alfred", "mason", "feldman"],
"corp": ["margaret", "jared", "mason", "feldman"],
"hst": ["hst-api", "directus_hst", "directus_lumalia", "directus_personal"]
}
@dataclass
class CommandResult:
success: bool
output: str
error: str = ""
def ssh_cmd(server: str, command: str) -> CommandResult:
"""Ejecuta comando SSH en servidor remoto"""
if server == "local":
full_cmd = command
else:
host = SERVERS[server]["host"]
full_cmd = f'ssh -i {SSH_KEY} {host} "{command}"'
try:
result = subprocess.run(full_cmd, shell=True, capture_output=True, text=True, timeout=60)
return CommandResult(
success=result.returncode == 0,
output=result.stdout.strip(),
error=result.stderr.strip()
)
except subprocess.TimeoutExpired:
return CommandResult(success=False, output="", error="Timeout")
except Exception as e:
return CommandResult(success=False, output="", error=str(e))
def deploy_agent(agent: str, server: str) -> CommandResult:
"""Despliega un agente en el servidor especificado"""
print(f"🚀 Desplegando {agent} en {server}...")
# Usar el script de deploy en DECK
cmd = f"/opt/scripts/deploy-agent.sh {agent} {server}"
result = ssh_cmd("deck", cmd)
if result.success:
print(f"✅ {agent} deployed successfully to {server}")
else:
print(f"❌ Error deploying {agent}: {result.error}")
return result
def backup_postgres(server: str = "deck") -> CommandResult:
"""Ejecuta backup de PostgreSQL"""
print(f"💾 Ejecutando backup en {server}...")
result = ssh_cmd(server, "/opt/scripts/backup_postgres.sh")
if result.success:
print("✅ Backup completado")
else:
print(f"❌ Error: {result.error}")
return result
def sync_r2() -> CommandResult:
"""Sincroniza backups a R2"""
print("☁️ Sincronizando con R2...")
result = ssh_cmd("deck", "/opt/scripts/sync_backups_r2.sh")
if result.success:
print("✅ Sincronización completada")
else:
print(f"❌ Error: {result.error}")
return result
def onboard_user(email: str, username: str) -> CommandResult:
"""Da de alta un nuevo usuario"""
print(f"👤 Creando usuario {username} ({email})...")
result = ssh_cmd("deck", f"/opt/scripts/onboard-user.sh {email} {username}")
if result.success:
print(f"✅ Usuario {username} creado")
else:
print(f"❌ Error: {result.error}")
return result
def get_agent_status(server: str) -> dict:
"""Obtiene estado de agentes en un servidor"""
agents = AGENTS.get(server, [])
status = {}
for agent in agents:
result = ssh_cmd(server, f"docker ps --filter name={agent} --format '{{{{.Status}}}}'")
status[agent] = result.output if result.success else "unknown"
return status
def restart_agent(agent: str, server: str) -> CommandResult:
"""Reinicia un agente específico"""
print(f"🔄 Reiniciando {agent} en {server}...")
result = ssh_cmd(server, f"docker restart {agent}-service 2>/dev/null || docker restart {agent}")
if result.success:
print(f"{agent} reiniciado")
else:
print(f"❌ Error: {result.error}")
return result
def get_logs(agent: str, server: str, lines: int = 50) -> CommandResult:
"""Obtiene logs de un agente"""
container = f"{agent}-service" if agent in ["clara", "alfred", "mason", "feldman", "margaret", "jared"] else agent
return ssh_cmd(server, f"docker logs {container} --tail {lines} 2>&1")
def list_deployments() -> dict:
"""Lista todos los deployments activos"""
deployments = {}
for server in ["deck", "corp", "hst"]:
result = ssh_cmd(server, "docker ps --format '{{.Names}}|{{.Status}}|{{.Ports}}'")
if result.success:
containers = []
for line in result.output.split('\n'):
if line:
parts = line.split('|')
containers.append({
"name": parts[0],
"status": parts[1] if len(parts) > 1 else "",
"ports": parts[2] if len(parts) > 2 else ""
})
deployments[server] = containers
return deployments
def git_pull_all(server: str) -> dict:
"""Hace git pull en todos los proyectos de /opt"""
result = ssh_cmd(server, "for d in /opt/*/; do echo \"=== $d ===\"; cd $d && git pull 2>/dev/null || echo 'no git'; done")
return {"output": result.output, "success": result.success}
# CLI
def main():
if len(sys.argv) < 2:
print("""
TZZR DevOps App
===============
Uso: python app.py <comando> [argumentos]
Comandos:
deploy <agent> <server> Despliega un agente
backup [server] Ejecuta backup PostgreSQL
sync Sincroniza backups a R2
onboard <email> <username> Alta de usuario
status <server> Estado de agentes
restart <agent> <server> Reinicia agente
logs <agent> <server> Ver logs de agente
list Lista todos los deployments
pull <server> Git pull en todos los proyectos
Servidores: deck, corp, hst, local
Agentes DECK: clara, alfred, mason, feldman
Agentes CORP: margaret, jared, mason, feldman
""")
return
cmd = sys.argv[1]
if cmd == "deploy" and len(sys.argv) >= 4:
deploy_agent(sys.argv[2], sys.argv[3])
elif cmd == "backup":
server = sys.argv[2] if len(sys.argv) > 2 else "deck"
backup_postgres(server)
elif cmd == "sync":
sync_r2()
elif cmd == "onboard" and len(sys.argv) >= 4:
onboard_user(sys.argv[2], sys.argv[3])
elif cmd == "status" and len(sys.argv) >= 3:
status = get_agent_status(sys.argv[2])
print(f"\n📊 Estado de agentes en {sys.argv[2]}:")
for agent, st in status.items():
icon = "" if "Up" in st else ""
print(f" {icon} {agent}: {st}")
elif cmd == "restart" and len(sys.argv) >= 4:
restart_agent(sys.argv[2], sys.argv[3])
elif cmd == "logs" and len(sys.argv) >= 4:
result = get_logs(sys.argv[2], sys.argv[3])
print(result.output)
elif cmd == "list":
deployments = list_deployments()
for server, containers in deployments.items():
print(f"\n📦 {server.upper()}:")
for c in containers[:10]:
print(f"{c['name']}: {c['status']}")
if len(containers) > 10:
print(f" ... y {len(containers)-10} más")
elif cmd == "pull" and len(sys.argv) >= 3:
result = git_pull_all(sys.argv[2])
print(result["output"])
else:
print("❌ Comando no reconocido. Usa 'python app.py' para ver ayuda.")
if __name__ == "__main__":
main()

apps/docker/app.py Normal file

@@ -0,0 +1,425 @@
#!/usr/bin/env python3
"""
TZZR Docker App - Unified Docker management across all servers
Servers: DECK (72.62.1.113), CORP (92.112.181.188), HST (72.62.2.84)
"""
import subprocess
import json
import sys
from dataclasses import dataclass, field
from typing import List, Dict, Optional
from datetime import datetime
# Server configuration
SERVERS = {
"deck": {
"host": "root@72.62.1.113",
"name": "DECK",
"desc": "Producción - Agentes TZZR, Mail, Apps"
},
"corp": {
"host": "root@92.112.181.188",
"name": "CORP",
"desc": "Aplicaciones - Margaret, Jared, Mason, Feldman"
},
"hst": {
"host": "root@72.62.2.84",
"name": "HST",
"desc": "Contenido - Directus, Imágenes"
}
}
SSH_KEY = "~/.ssh/tzzr"
@dataclass
class Container:
name: str
status: str
image: str
ports: str = ""
created: str = ""
server: str = ""
@property
def is_running(self) -> bool:
return "Up" in self.status
@property
def icon(self) -> str:
return "" if self.is_running else ""
@dataclass
class Result:
success: bool
data: any
error: str = ""
def ssh(server: str, cmd: str, timeout: int = 60) -> Result:
"""Ejecuta comando SSH"""
host = SERVERS[server]["host"]
full_cmd = f'ssh -i {SSH_KEY} {host} "{cmd}"'
try:
result = subprocess.run(full_cmd, shell=True, capture_output=True, text=True, timeout=timeout)
return Result(success=result.returncode == 0, data=result.stdout.strip(), error=result.stderr.strip())
except subprocess.TimeoutExpired:
return Result(success=False, data="", error="Timeout")
except Exception as e:
return Result(success=False, data="", error=str(e))
def get_containers(server: str, all: bool = False) -> List[Container]:
"""Lista contenedores de un servidor"""
flag = "-a" if all else ""
result = ssh(server, f"docker ps {flag} --format '{{{{.Names}}}}|{{{{.Status}}}}|{{{{.Image}}}}|{{{{.Ports}}}}'")
if not result.success:
return []
containers = []
for line in result.data.split('\n'):
if line:
parts = line.split('|')
containers.append(Container(
name=parts[0],
status=parts[1] if len(parts) > 1 else "",
image=parts[2] if len(parts) > 2 else "",
ports=parts[3] if len(parts) > 3 else "",
server=server
))
return containers
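Every listing helper in these apps parses the same shape: `docker ps --format '{{.Names}}|{{.Status}}|...'` output split on `|`, with trailing fields optional. The loop above run locally on canned output (the container data is invented):

```python
raw = ("clara-service|Up 3 days|python:3.11|0.0.0.0:8001->8001/tcp\n"
       "redis|Exited (0)||")
containers = []
for line in raw.split('\n'):
    if line:
        parts = line.split('|')
        containers.append({
            "name": parts[0],
            "status": parts[1] if len(parts) > 1 else "",
            "image": parts[2] if len(parts) > 2 else "",
            # Missing trailing fields degrade to empty strings
            "ports": parts[3] if len(parts) > 3 else "",
        })
print([c["name"] for c in containers])  # ['clara-service', 'redis']
```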
def get_all_containers(all: bool = False) -> Dict[str, List[Container]]:
"""Lista contenedores de todos los servidores"""
result = {}
for server in SERVERS:
result[server] = get_containers(server, all)
return result
def container_action(server: str, container: str, action: str) -> Result:
"""Ejecuta acción en contenedor (start, stop, restart, rm)"""
return ssh(server, f"docker {action} {container}")
def get_logs(server: str, container: str, lines: int = 100, follow: bool = False) -> str:
"""Obtiene logs de contenedor"""
follow_flag = "-f" if follow else ""
result = ssh(server, f"docker logs {container} --tail {lines} {follow_flag} 2>&1", timeout=120 if follow else 60)
return result.data if result.success else result.error
def inspect_container(server: str, container: str) -> Dict:
"""Inspecciona contenedor"""
result = ssh(server, f"docker inspect {container}")
if result.success:
try:
return json.loads(result.data)[0]
except (ValueError, IndexError):
return {}
return {}
def get_images(server: str) -> List[Dict]:
"""Lista imágenes Docker"""
result = ssh(server, "docker images --format '{{.Repository}}|{{.Tag}}|{{.Size}}|{{.ID}}'")
if not result.success:
return []
images = []
for line in result.data.split('\n'):
if line:
parts = line.split('|')
images.append({
"repository": parts[0],
"tag": parts[1] if len(parts) > 1 else "",
"size": parts[2] if len(parts) > 2 else "",
"id": parts[3] if len(parts) > 3 else ""
})
return images
def get_networks(server: str) -> List[Dict]:
"""Lista redes Docker"""
result = ssh(server, "docker network ls --format '{{.Name}}|{{.Driver}}|{{.Scope}}'")
if not result.success:
return []
networks = []
for line in result.data.split('\n'):
if line:
parts = line.split('|')
networks.append({
"name": parts[0],
"driver": parts[1] if len(parts) > 1 else "",
"scope": parts[2] if len(parts) > 2 else ""
})
return networks
def get_volumes(server: str) -> List[Dict]:
"""Lista volúmenes Docker"""
result = ssh(server, "docker volume ls --format '{{.Name}}|{{.Driver}}'")
if not result.success:
return []
volumes = []
for line in result.data.split('\n'):
if line:
parts = line.split('|')
volumes.append({
"name": parts[0],
"driver": parts[1] if len(parts) > 1 else ""
})
return volumes
def docker_stats(server: str) -> List[Dict]:
"""Estadísticas de contenedores"""
result = ssh(server, "docker stats --no-stream --format '{{.Name}}|{{.CPUPerc}}|{{.MemUsage}}|{{.NetIO}}'")
if not result.success:
return []
stats = []
for line in result.data.split('\n'):
if line:
parts = line.split('|')
stats.append({
"name": parts[0],
"cpu": parts[1] if len(parts) > 1 else "",
"memory": parts[2] if len(parts) > 2 else "",
"network": parts[3] if len(parts) > 3 else ""
})
return stats
def prune(server: str, what: str = "all") -> Result:
"""Limpia recursos Docker no usados"""
if what == "all":
return ssh(server, "docker system prune -f")
elif what == "images":
return ssh(server, "docker image prune -f")
elif what == "volumes":
return ssh(server, "docker volume prune -f")
elif what == "networks":
return ssh(server, "docker network prune -f")
return Result(success=False, data="", error="Tipo no válido")
def compose_action(server: str, path: str, action: str) -> Result:
"""Ejecuta docker compose en un directorio"""
return ssh(server, f"cd {path} && docker compose {action}")
def exec_container(server: str, container: str, command: str) -> Result:
"""Ejecuta comando dentro de contenedor"""
return ssh(server, f"docker exec {container} {command}")
def find_container(name: str) -> List[tuple]:
"""Busca contenedor por nombre en todos los servidores"""
results = []
for server in SERVERS:
containers = get_containers(server, all=True)
for c in containers:
if name.lower() in c.name.lower():
results.append((server, c))
return results
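`find_container` matches by case-insensitive substring, so a partial query hits every server-side container whose name contains it. The matching logic on its own, with hypothetical names:

```python
names = ["directus_hst", "Directus_lumalia", "postgres_hst"]
query = "directus"
# Case-insensitive substring match, same test as find_container
matches = [n for n in names if query.lower() in n.lower()]
print(matches)  # ['directus_hst', 'Directus_lumalia']
```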
def get_system_df(server: str) -> Dict:
"""Uso de disco de Docker"""
result = ssh(server, "docker system df --format '{{.Type}}|{{.Size}}|{{.Reclaimable}}'")
if not result.success:
return {}
df = {}
for line in result.data.split('\n'):
if line:
parts = line.split('|')
df[parts[0]] = {
"size": parts[1] if len(parts) > 1 else "",
"reclaimable": parts[2] if len(parts) > 2 else ""
}
return df
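`get_system_df` keys the result by resource type (Images, Containers, Local Volumes). The same parse on canned `docker system df --format` output (the sizes are invented):

```python
raw = ("Images|4.5GB|1.2GB (26%)\n"
       "Containers|300MB|0B (0%)\n"
       "Local Volumes|2GB|500MB (25%)")
df = {}
for line in raw.split('\n'):
    if line:
        parts = line.split('|')
        # Keyed by resource type, as in get_system_df
        df[parts[0]] = {"size": parts[1], "reclaimable": parts[2]}
print(df["Images"])  # {'size': '4.5GB', 'reclaimable': '1.2GB (26%)'}
```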
# CLI
def main():
if len(sys.argv) < 2:
print("""
TZZR Docker App - Gestión Multi-servidor
=========================================
Uso: python app.py <comando> [argumentos]
Servidores: deck, corp, hst (o 'all' para todos)
Contenedores:
ps [server] Lista contenedores activos
ps -a [server] Lista todos los contenedores
start <server> <name> Iniciar contenedor
stop <server> <name> Detener contenedor
restart <server> <name> Reiniciar contenedor
rm <server> <name> Eliminar contenedor
logs <server> <name> Ver logs
inspect <server> <name> Inspeccionar contenedor
exec <server> <name> <cmd> Ejecutar comando en contenedor
find <name> Buscar contenedor en todos los servidores
Recursos:
images <server> Lista imágenes
networks <server> Lista redes
volumes <server> Lista volúmenes
stats <server> Estadísticas de recursos
df <server> Uso de disco Docker
Mantenimiento:
prune <server> [type] Limpiar recursos (all/images/volumes/networks)
Compose:
up <server> <path> docker compose up -d
down <server> <path> docker compose down
build <server> <path> docker compose build
Dashboard:
dashboard Vista general de todos los servidores
""")
return
cmd = sys.argv[1]
# PS - list containers
if cmd == "ps":
show_all = "-a" in sys.argv
server = None
for arg in sys.argv[2:]:
if arg != "-a" and arg in SERVERS:
server = arg
if server:
containers = get_containers(server, show_all)
print(f"\n📦 {SERVERS[server]['name']} ({len(containers)} contenedores):")
for c in containers:
print(f" {c.icon} {c.name}: {c.status[:30]}")
else:
# All servers
all_containers = get_all_containers(show_all)
for srv, containers in all_containers.items():
print(f"\n📦 {SERVERS[srv]['name']} ({len(containers)}):")
for c in containers[:15]:
print(f" {c.icon} {c.name}: {c.status[:25]}")
if len(containers) > 15:
print(f" ... y {len(containers)-15} más")
# Container actions
elif cmd in ["start", "stop", "restart", "rm"] and len(sys.argv) >= 4:
server, container = sys.argv[2], sys.argv[3]
result = container_action(server, container, cmd)
icon = "" if result.success else ""
action_name = {"start": "iniciado", "stop": "detenido", "restart": "reiniciado", "rm": "eliminado"}
print(f"{icon} {container} {action_name.get(cmd, cmd)}" if result.success else f"❌ Error: {result.error}")
# Logs
elif cmd == "logs" and len(sys.argv) >= 4:
server, container = sys.argv[2], sys.argv[3]
lines = int(sys.argv[4]) if len(sys.argv) > 4 else 50
print(get_logs(server, container, lines))
# Inspect
elif cmd == "inspect" and len(sys.argv) >= 4:
server, container = sys.argv[2], sys.argv[3]
info = inspect_container(server, container)
print(json.dumps(info, indent=2)[:3000])
# Exec
elif cmd == "exec" and len(sys.argv) >= 5:
server, container = sys.argv[2], sys.argv[3]
command = " ".join(sys.argv[4:])
result = exec_container(server, container, command)
print(result.data if result.success else f"Error: {result.error}")
# Find
elif cmd == "find" and len(sys.argv) >= 3:
name = sys.argv[2]
results = find_container(name)
if results:
print(f"\n🔍 Encontrados {len(results)} contenedores con '{name}':")
for server, c in results:
print(f" {c.icon} [{server}] {c.name}: {c.status[:25]}")
else:
print(f"❌ No se encontró '{name}'")
# Images
elif cmd == "images" and len(sys.argv) >= 3:
server = sys.argv[2]
images = get_images(server)
print(f"\n🖼️ Imágenes en {server}:")
for img in images[:20]:
print(f"{img['repository']}:{img['tag']} ({img['size']})")
# Networks
elif cmd == "networks" and len(sys.argv) >= 3:
server = sys.argv[2]
networks = get_networks(server)
print(f"\n🌐 Redes en {server}:")
for net in networks:
print(f"{net['name']} ({net['driver']})")
# Volumes
elif cmd == "volumes" and len(sys.argv) >= 3:
server = sys.argv[2]
volumes = get_volumes(server)
print(f"\n💾 Volúmenes en {server}:")
for vol in volumes:
print(f"{vol['name']}")
# Stats
elif cmd == "stats" and len(sys.argv) >= 3:
server = sys.argv[2]
stats = docker_stats(server)
print(f"\n📊 Estadísticas en {server}:")
for s in stats[:15]:
print(f" {s['name'][:20]}: CPU {s['cpu']} | MEM {s['memory']}")
# DF
elif cmd == "df" and len(sys.argv) >= 3:
server = sys.argv[2]
df = get_system_df(server)
print(f"\n💿 Uso de Docker en {server}:")
for type_, info in df.items():
print(f" {type_}: {info['size']} (recuperable: {info['reclaimable']})")
# Prune
elif cmd == "prune" and len(sys.argv) >= 3:
server = sys.argv[2]
what = sys.argv[3] if len(sys.argv) > 3 else "all"
result = prune(server, what)
print("✅ Cleanup complete" if result.success else f"❌ Error: {result.error}")
# Compose
elif cmd in ["up", "down", "build"] and len(sys.argv) >= 4:
server, path = sys.argv[2], sys.argv[3]
action = f"{cmd} -d" if cmd == "up" else cmd
result = compose_action(server, path, action)
print(f"✅ compose {cmd}" if result.success else f"❌ Error: {result.error}")
# Dashboard
elif cmd == "dashboard":
print("\n" + "=" * 60)
print("🐳 TZZR Docker Dashboard")
print("=" * 60)
for server, info in SERVERS.items():
containers = get_containers(server)
running = sum(1 for c in containers if c.is_running)
print(f"\n📦 {info['name']} ({info['host'].split('@')[1]})")
print(f" {info['desc']}")
print(f" Contenedores: {running}/{len(containers)} activos")
# Top 5 contenedores
for c in containers[:5]:
print(f" {c.icon} {c.name}")
else:
print("❌ Comando no reconocido. Usa 'python app.py' para ver ayuda.")
if __name__ == "__main__":
main()

apps/hst/app.py Normal file

@@ -0,0 +1,273 @@
#!/usr/bin/env python3
"""
TZZR HST App - Management of the HST server (72.62.2.84)
Services: Directus (3 instances), image server, APIs
"""
import subprocess
import json
import sys
from dataclasses import dataclass
from typing import List, Dict
SERVER = "root@72.62.2.84"
SSH_KEY = "~/.ssh/tzzr"
# Known services on HST
SERVICES = {
# Directus instances
"directus_hst": {"port": 8055, "type": "cms", "desc": "Directus HST principal", "url": "hst.tzrtech.org"},
"directus_lumalia": {"port": 8056, "type": "cms", "desc": "Directus Lumalia", "url": "lumalia.tzrtech.org"},
"directus_personal": {"port": 8057, "type": "cms", "desc": "Directus Personal", "url": "personal.tzrtech.org"},
# APIs
"hst-api": {"port": 5001, "type": "api", "desc": "HST Flask API"},
"hst-images": {"port": 80, "type": "web", "desc": "Servidor NGINX imágenes", "url": "tzrtech.org"},
# Infrastructure
"postgres_hst": {"port": 5432, "type": "db", "desc": "PostgreSQL 15"},
"filebrowser": {"port": 8081, "type": "app", "desc": "File Browser"},
}
@dataclass
class Result:
success: bool
data: any
error: str = ""
def ssh(cmd: str, timeout: int = 60) -> Result:
"""Ejecuta comando en HST"""
full_cmd = f'ssh -i {SSH_KEY} {SERVER} "{cmd}"'
try:
result = subprocess.run(full_cmd, shell=True, capture_output=True, text=True, timeout=timeout)
return Result(success=result.returncode == 0, data=result.stdout.strip(), error=result.stderr.strip())
except Exception as e:
return Result(success=False, data="", error=str(e))
def get_all_containers() -> List[Dict]:
"""Lista todos los contenedores Docker"""
result = ssh("docker ps -a --format '{{.Names}}|{{.Status}}|{{.Ports}}|{{.Image}}'")
if not result.success:
return []
containers = []
for line in result.data.split('\n'):
if line:
parts = line.split('|')
containers.append({
"name": parts[0],
"status": parts[1] if len(parts) > 1 else "",
"ports": parts[2] if len(parts) > 2 else "",
"image": parts[3] if len(parts) > 3 else ""
})
return containers
def get_service_status(service: str) -> Dict:
"""Estado detallado de un servicio"""
info = SERVICES.get(service, {})
status_result = ssh(f"docker ps --filter name={service} --format '{{{{.Status}}}}'")
health = "unknown"
if info.get("port") and info.get("type") in ["cms", "api"]:
if info.get("type") == "cms":
health_result = ssh(f"curl -s http://localhost:{info.get('port')}/server/health 2>/dev/null | head -c 50")
else:
health_result = ssh(f"curl -s http://localhost:{info.get('port')}/health 2>/dev/null | head -c 50")
health = "healthy" if health_result.success and health_result.data else "unhealthy"
return {
"service": service,
"status": status_result.data if status_result.success else "not found",
"health": health,
"port": info.get("port"),
"url": info.get("url"),
"desc": info.get("desc")
}
def restart_service(service: str) -> Result:
"""Reinicia un servicio"""
return ssh(f"docker restart {service}")
def get_logs(service: str, lines: int = 100) -> str:
"""Obtiene logs de un servicio"""
result = ssh(f"docker logs {service} --tail {lines} 2>&1")
return result.data if result.success else result.error
def get_system_stats() -> Dict:
"""Estadísticas del sistema"""
stats = {}
mem_result = ssh("free -h | grep Mem | awk '{print $2,$3,$4}'")
if mem_result.success:
parts = mem_result.data.split()
stats["memory"] = {"total": parts[0], "used": parts[1], "available": parts[2]}
disk_result = ssh("df -h / | tail -1 | awk '{print $2,$3,$4,$5}'")
if disk_result.success:
parts = disk_result.data.split()
stats["disk"] = {"total": parts[0], "used": parts[1], "available": parts[2], "percent": parts[3]}
containers_result = ssh("docker ps -q | wc -l")
stats["containers_running"] = int(containers_result.data) if containers_result.success else 0
return stats
def query_directus(instance: str, endpoint: str, token: str = None) -> Result:
"""Hace petición a una instancia de Directus"""
info = SERVICES.get(instance)
if not info or info.get("type") != "cms":
return Result(success=False, data="", error="Instancia Directus no encontrada")
port = info["port"]
auth = f"-H 'Authorization: Bearer {token}'" if token else ""
cmd = f"curl -s {auth} http://localhost:{port}{endpoint}"
result = ssh(cmd)
try:
return Result(success=result.success, data=json.loads(result.data) if result.data else {})
except json.JSONDecodeError:
return Result(success=result.success, data=result.data)
def get_directus_collections(instance: str) -> List[str]:
"""Lista colecciones de una instancia Directus"""
result = query_directus(instance, "/collections")
if result.success and isinstance(result.data, dict):
collections = result.data.get("data", [])
return [c.get("collection") for c in collections if not c.get("collection", "").startswith("directus_")]
return []
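The filter above hides Directus system collections, which all carry a `directus_` prefix. The same list comprehension run on an invented `/collections` payload:

```python
payload = {"data": [{"collection": "articles"},
                    {"collection": "directus_users"},
                    {"collection": "images"}]}
# Keep only user collections; system ones start with "directus_"
user_collections = [c.get("collection") for c in payload.get("data", [])
                    if not c.get("collection", "").startswith("directus_")]
print(user_collections)  # ['articles', 'images']
```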
def list_images(path: str = "/var/www/images") -> List[str]:
"""Lista imágenes en el servidor"""
result = ssh(f"ls -la {path} 2>/dev/null | head -20")
return result.data.split('\n') if result.success else []
def get_postgres_databases() -> List[str]:
"""Lista bases de datos PostgreSQL"""
result = ssh("docker exec postgres_hst psql -U postgres -c '\\l' -t 2>/dev/null")
if not result.success:
return []
databases = []
for line in result.data.split('\n'):
if '|' in line:
db = line.split('|')[0].strip()
if db and db not in ['template0', 'template1']:
databases.append(db)
return databases
# CLI
def main():
if len(sys.argv) < 2:
print("""
TZZR HST App - Servidor 72.62.2.84
==================================
Uso: python app.py <comando> [argumentos]
Comandos Generales:
status Estado general del sistema
containers Lista todos los contenedores
stats Estadísticas del sistema
Gestión de Servicios:
service <name> Estado detallado de servicio
restart <service> Reiniciar servicio
logs <service> [lines] Ver logs
Directus (CMS):
directus Estado de las 3 instancias
collections <instance> Lista colecciones de instancia
query <instance> <path> Query a Directus API
Imágenes:
images [path] Lista imágenes
Bases de Datos:
postgres Lista bases de datos
Servicios disponibles:
CMS: directus_hst, directus_lumalia, directus_personal
API: hst-api
Web: hst-images
DB: postgres_hst
App: filebrowser
""")
return
cmd = sys.argv[1]
if cmd == "status":
print("\n📊 HST Status (72.62.2.84)")
print("=" * 40)
stats = get_system_stats()
print(f"💾 Memoria: {stats.get('memory', {}).get('used', '?')}/{stats.get('memory', {}).get('total', '?')}")
print(f"💿 Disco: {stats.get('disk', {}).get('used', '?')}/{stats.get('disk', {}).get('total', '?')} ({stats.get('disk', {}).get('percent', '?')})")
print(f"📦 Contenedores: {stats.get('containers_running', 0)}")
elif cmd == "containers":
containers = get_all_containers()
print(f"\n📦 Contenedores en HST ({len(containers)} total):")
for c in containers:
icon = "" if "Up" in c["status"] else ""
print(f" {icon} {c['name']}: {c['status'][:30]}")
elif cmd == "service" and len(sys.argv) >= 3:
status = get_service_status(sys.argv[2])
print(f"\n🔧 {status['service']}")
print(f" Status: {status['status']}")
print(f" Health: {status['health']}")
print(f" Desc: {status.get('desc', '')}")
if status.get('url'):
print(f" URL: https://{status['url']}")
elif cmd == "restart" and len(sys.argv) >= 3:
result = restart_service(sys.argv[2])
print(f"{'✅' if result.success else '❌'} {sys.argv[2]}: {'restarted' if result.success else result.error}")
elif cmd == "logs" and len(sys.argv) >= 3:
lines = int(sys.argv[3]) if len(sys.argv) > 3 else 50
print(get_logs(sys.argv[2], lines))
elif cmd == "directus":
print("\n📚 Instancias Directus en HST:")
for instance in ["directus_hst", "directus_lumalia", "directus_personal"]:
status = get_service_status(instance)
icon = "" if "Up" in status["status"] else ""
health = f"({status['health']})" if status['health'] != "unknown" else ""
url = f"https://{status['url']}" if status.get('url') else ""
print(f" {icon} {instance}: {status['status'][:20]} {health}")
print(f" └─ {url}")
elif cmd == "collections" and len(sys.argv) >= 3:
collections = get_directus_collections(sys.argv[2])
print(f"\n📋 Colecciones en {sys.argv[2]}:")
for c in collections:
print(f"{c}")
elif cmd == "query" and len(sys.argv) >= 4:
result = query_directus(sys.argv[2], sys.argv[3])
if result.success:
print(json.dumps(result.data, indent=2)[:2000])
else:
print(f"❌ Error: {result.error}")
elif cmd == "images":
path = sys.argv[2] if len(sys.argv) > 2 else "/var/www/images"
images = list_images(path)
print(f"\n🖼️ Imágenes en {path}:")
for img in images[:20]:
print(f" {img}")
elif cmd == "postgres":
dbs = get_postgres_databases()
print("\n🐘 Bases de datos PostgreSQL:")
for db in dbs:
print(f"{db}")
else:
print("❌ Comando no reconocido. Usa 'python app.py' para ver ayuda.")
if __name__ == "__main__":
main()


@@ -98,6 +98,11 @@ class CaptainClaude:
 self.cost_tracker = CostTracker()
 logger = logging.getLogger("captain-claude")
 logger.setLevel(logging.INFO)
+# Avoid duplicate handlers in long sessions
+if not logger.handlers:
+    handler = logging.StreamHandler()
+    handler.setFormatter(logging.Formatter('%(message)s'))
+    logger.addHandler(handler)
 self.logger = StructuredLogger(logger)
 self.output_dir = Path(output_dir) if output_dir else Path.cwd() / "captain_output"
 self.output_dir.mkdir(exist_ok=True)
@@ -244,33 +249,33 @@ Continue and complete this work."""
"""Execute a task using intelligent agent orchestration.""" """Execute a task using intelligent agent orchestration."""
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
print(f"\n{'='*60}") print(f"\n{'='*60}", flush=True)
print("CAPTAIN CLAUDE by dadiaar") print("CAPTAIN CLAUDE by dadiaar", flush=True)
print(f"{'='*60}") print(f"{'='*60}", flush=True)
print(f"Task: {task[:100]}...") print(f"Task: {task[:100]}...", flush=True)
print(f"{'='*60}\n") print(f"{'='*60}\n", flush=True)
# Phase 1: Analyze task # Phase 1: Analyze task
print("[Captain] Analyzing task...") print("[Captain] Analyzing task...", flush=True)
plan = await self.analyze_task(task) plan = await self.analyze_task(task)
print(f"[Captain] Plan created: {len(plan.get('steps', []))} steps") print(f"[Captain] Plan created: {len(plan.get('steps', []))} steps", flush=True)
# Phase 2: Execute plan # Phase 2: Execute plan
results = [] results = []
if plan.get("parallel_possible") and len(plan.get("agents_needed", [])) > 1: if plan.get("parallel_possible") and len(plan.get("agents_needed", [])) > 1:
print("[Captain] Executing agents in parallel...") print("[Captain] Executing agents in parallel...", flush=True)
parallel_result = await self.run_parallel( parallel_result = await self.run_parallel(
task, task,
plan.get("agents_needed", ["coder"]) plan.get("agents_needed", ["coder"])
) )
results.append(parallel_result) results.append(parallel_result)
else: else:
print("[Captain] Executing agents sequentially...") print("[Captain] Executing agents sequentially...", flush=True)
sequential_results = await self.run_sequential(plan.get("steps", [])) sequential_results = await self.run_sequential(plan.get("steps", []))
results.extend(sequential_results) results.extend(sequential_results)
# Phase 3: Synthesize results # Phase 3: Synthesize results
print("[Captain] Synthesizing results...") print("[Captain] Synthesizing results...", flush=True)
synthesis_prompt = f"""Synthesize these results into a coherent final output: synthesis_prompt = f"""Synthesize these results into a coherent final output:
Original task: {task} Original task: {task}
@@ -298,12 +303,12 @@ Provide a clear, actionable final result."""
with open(output_file, "w") as f: with open(output_file, "w") as f:
json.dump(final_result, f, indent=2, default=str) json.dump(final_result, f, indent=2, default=str)
print(f"\n{'='*60}") print(f"\n{'='*60}", flush=True)
print("EXECUTION COMPLETE") print("EXECUTION COMPLETE", flush=True)
print(f"{'='*60}") print(f"{'='*60}", flush=True)
print(f"Output saved: {output_file}") print(f"Output saved: {output_file}", flush=True)
print(f"Cost: {self.cost_tracker.summary()}") print(f"Cost: {self.cost_tracker.summary()}", flush=True)
print(f"{'='*60}\n") print(f"{'='*60}\n", flush=True)
return final_result return final_result
@@ -322,18 +327,18 @@ async def main():
"""Interactive Captain Claude session.""" """Interactive Captain Claude session."""
captain = CaptainClaude() captain = CaptainClaude()
print("\n" + "="*60) print("\n" + "="*60, flush=True)
print("CAPTAIN CLAUDE by dadiaar") print("CAPTAIN CLAUDE by dadiaar", flush=True)
print("Multi-Agent Orchestration System") print("Multi-Agent Orchestration System", flush=True)
print("="*60) print("="*60, flush=True)
print("\nCommands:") print("\nCommands:", flush=True)
print(" /execute <task> - Full multi-agent execution") print(" /execute <task> - Full multi-agent execution", flush=True)
print(" /chat <message> - Chat with Captain") print(" /chat <message> - Chat with Captain", flush=True)
print(" /agent <name> <message> - Chat with specific agent") print(" /agent <name> <message> - Chat with specific agent", flush=True)
print(" /parallel <task> - Run all agents in parallel") print(" /parallel <task> - Run all agents in parallel", flush=True)
print(" /cost - Show cost summary") print(" /cost - Show cost summary", flush=True)
print(" /quit - Exit") print(" /quit - Exit", flush=True)
print("="*60 + "\n") print("="*60 + "\n", flush=True)
while True: while True:
try: try:
@@ -343,24 +348,24 @@ async def main():
continue continue
if user_input.lower() == "/quit": if user_input.lower() == "/quit":
print("Goodbye!") print("Goodbye!", flush=True)
break break
if user_input.lower() == "/cost": if user_input.lower() == "/cost":
print(f"Cost: {captain.cost_tracker.summary()}") print(f"Cost: {captain.cost_tracker.summary()}", flush=True)
continue continue
if user_input.startswith("/execute "): if user_input.startswith("/execute "):
task = user_input[9:] task = user_input[9:]
result = await captain.execute(task) result = await captain.execute(task)
print(f"\nFinal Output:\n{result['final_output']}\n") print(f"\nFinal Output:\n{result['final_output']}\n", flush=True)
continue continue
if user_input.startswith("/parallel "): if user_input.startswith("/parallel "):
task = user_input[10:] task = user_input[10:]
agents = ["coder", "reviewer", "researcher"] agents = ["coder", "reviewer", "researcher"]
result = await captain.run_parallel(task, agents) result = await captain.run_parallel(task, agents)
print(f"\nParallel Results:\n{result}\n") print(f"\nParallel Results:\n{result}\n", flush=True)
continue continue
if user_input.startswith("/agent "): if user_input.startswith("/agent "):
@@ -368,24 +373,24 @@ async def main():
if len(parts) == 2: if len(parts) == 2:
agent, message = parts agent, message = parts
response = await captain.chat(message, agent) response = await captain.chat(message, agent)
print(f"\n[{agent}]: {response}\n") print(f"\n[{agent}]: {response}\n", flush=True)
else: else:
print("Usage: /agent <name> <message>") print("Usage: /agent <name> <message>", flush=True)
continue continue
if user_input.startswith("/chat ") or not user_input.startswith("/"): if user_input.startswith("/chat ") or not user_input.startswith("/"):
message = user_input[6:] if user_input.startswith("/chat ") else user_input message = user_input[6:] if user_input.startswith("/chat ") else user_input
response = await captain.chat(message) response = await captain.chat(message)
print(f"\n[Captain]: {response}\n") print(f"\n[Captain]: {response}\n", flush=True)
continue continue
print("Unknown command. Use /quit to exit.") print("Unknown command. Use /quit to exit.", flush=True)
except KeyboardInterrupt: except KeyboardInterrupt:
print("\nGoodbye!") print("\nGoodbye!", flush=True)
break break
except Exception as e: except Exception as e:
print(f"Error: {e}") print(f"Error: {e}", flush=True)
if __name__ == "__main__": if __name__ == "__main__":


@@ -1,39 +0,0 @@
-- ============================================
-- CONTEXT MANAGER - BASE TYPES
-- Local context-management system for AI
-- ============================================
-- Extension for UUIDs
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
-- ============================================
-- ENUMERATED TYPES
-- ============================================
CREATE TYPE mensaje_role AS ENUM ('user', 'assistant', 'system', 'tool');
CREATE TYPE context_source AS ENUM ('memory', 'knowledge', 'history', 'ambient', 'dataset');
CREATE TYPE algorithm_status AS ENUM ('draft', 'testing', 'active', 'deprecated');
CREATE TYPE metric_type AS ENUM ('relevance', 'token_efficiency', 'response_quality', 'latency');
-- ============================================
-- FUNCTION: Updated-at timestamp
-- ============================================
CREATE OR REPLACE FUNCTION update_updated_at_column()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = CURRENT_TIMESTAMP;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- ============================================
-- FUNCTION: SHA-256 hash
-- ============================================
CREATE OR REPLACE FUNCTION sha256_hash(content TEXT)
RETURNS VARCHAR(64) AS $$
BEGIN
RETURN encode(digest(content, 'sha256'), 'hex');
END;
$$ LANGUAGE plpgsql IMMUTABLE;


@@ -1,276 +0,0 @@
-- ============================================
-- IMMUTABLE LOG - REFERENCE TABLE
-- NOT EDITABLE - only INSERT allowed
-- ============================================
-- This table is the system's source of truth.
-- It is never updated or deleted; rows are only inserted.
-- ============================================
-- TABLE: immutable_log
-- Permanent record of every interaction
-- ============================================
CREATE TABLE IF NOT EXISTS immutable_log (
-- Identification
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
hash VARCHAR(64) NOT NULL UNIQUE, -- SHA-256 of the content
hash_anterior VARCHAR(64), -- Chaining (blockchain-style)
-- Session
session_id UUID NOT NULL,
sequence_num BIGINT NOT NULL, -- Sequential number within the session
-- Message
role mensaje_role NOT NULL,
content TEXT NOT NULL,
-- AI model (provider-agnostic)
model_provider VARCHAR(50), -- anthropic, openai, ollama, local, etc.
model_name VARCHAR(100), -- claude-3-opus, gpt-4, llama-3, etc.
model_params JSONB DEFAULT '{}', -- temperature, max_tokens, etc.
-- Context sent (snapshot)
context_snapshot JSONB, -- Copy of the context that was used
context_algorithm_id UUID, -- Which algorithm selected the context
context_tokens_used INT,
-- Response (only for role=assistant)
tokens_input INT,
tokens_output INT,
latency_ms INT,
-- Immutable metadata
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,
source_ip VARCHAR(45),
user_agent TEXT,
-- Integrity
CONSTRAINT chain_integrity CHECK (
(sequence_num = 1 AND hash_anterior IS NULL) OR
(sequence_num > 1 AND hash_anterior IS NOT NULL)
)
);
-- Indexes for querying (not for modification)
CREATE INDEX IF NOT EXISTS idx_log_session ON immutable_log(session_id, sequence_num);
CREATE INDEX IF NOT EXISTS idx_log_created ON immutable_log(created_at DESC);
CREATE INDEX IF NOT EXISTS idx_log_model ON immutable_log(model_provider, model_name);
CREATE INDEX IF NOT EXISTS idx_log_hash ON immutable_log(hash);
CREATE INDEX IF NOT EXISTS idx_log_chain ON immutable_log(hash_anterior);
-- ============================================
-- PROTECTION: Trigger that blocks UPDATE and DELETE
-- ============================================
CREATE OR REPLACE FUNCTION prevent_log_modification()
RETURNS TRIGGER AS $$
BEGIN
RAISE EXCEPTION 'immutable_log does not allow modifications. Only INSERT is permitted.';
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
DROP TRIGGER IF EXISTS protect_immutable_log_update ON immutable_log;
CREATE TRIGGER protect_immutable_log_update
BEFORE UPDATE ON immutable_log
FOR EACH ROW EXECUTE FUNCTION prevent_log_modification();
DROP TRIGGER IF EXISTS protect_immutable_log_delete ON immutable_log;
CREATE TRIGGER protect_immutable_log_delete
BEFORE DELETE ON immutable_log
FOR EACH ROW EXECUTE FUNCTION prevent_log_modification();
-- ============================================
-- FUNCTION: Insert a log entry with automatic hashing
-- ============================================
CREATE OR REPLACE FUNCTION insert_log_entry(
p_session_id UUID,
p_role mensaje_role,
p_content TEXT,
p_model_provider VARCHAR DEFAULT NULL,
p_model_name VARCHAR DEFAULT NULL,
p_model_params JSONB DEFAULT '{}',
p_context_snapshot JSONB DEFAULT NULL,
p_context_algorithm_id UUID DEFAULT NULL,
p_context_tokens_used INT DEFAULT NULL,
p_tokens_input INT DEFAULT NULL,
p_tokens_output INT DEFAULT NULL,
p_latency_ms INT DEFAULT NULL,
p_source_ip VARCHAR DEFAULT NULL,
p_user_agent TEXT DEFAULT NULL
)
RETURNS UUID AS $$
DECLARE
v_sequence_num BIGINT;
v_hash_anterior VARCHAR(64);
v_content_hash VARCHAR(64);
v_new_id UUID;
BEGIN
-- Get the session's last hash and sequence number
SELECT sequence_num, hash
INTO v_sequence_num, v_hash_anterior
FROM immutable_log
WHERE session_id = p_session_id
ORDER BY sequence_num DESC
LIMIT 1;
IF v_sequence_num IS NULL THEN
v_sequence_num := 1;
v_hash_anterior := NULL;
ELSE
v_sequence_num := v_sequence_num + 1;
END IF;
-- Compute the content hash (includes the previous hash for chaining)
v_content_hash := sha256_hash(
COALESCE(v_hash_anterior, '') ||
p_session_id::TEXT ||
v_sequence_num::TEXT ||
p_role::TEXT ||
p_content
);
-- Insert
INSERT INTO immutable_log (
session_id, sequence_num, hash, hash_anterior,
role, content,
model_provider, model_name, model_params,
context_snapshot, context_algorithm_id, context_tokens_used,
tokens_input, tokens_output, latency_ms,
source_ip, user_agent
) VALUES (
p_session_id, v_sequence_num, v_content_hash, v_hash_anterior,
p_role, p_content,
p_model_provider, p_model_name, p_model_params,
p_context_snapshot, p_context_algorithm_id, p_context_tokens_used,
p_tokens_input, p_tokens_output, p_latency_ms,
p_source_ip, p_user_agent
) RETURNING id INTO v_new_id;
RETURN v_new_id;
END;
$$ LANGUAGE plpgsql;
-- ============================================
-- TABLE: sessions
-- Session registry (also immutable)
-- ============================================
CREATE TABLE IF NOT EXISTS sessions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
hash VARCHAR(64) NOT NULL UNIQUE,
-- Identification
user_id VARCHAR(100),
instance_id VARCHAR(100),
-- Initial configuration
initial_model_provider VARCHAR(50),
initial_model_name VARCHAR(100),
initial_context_algorithm_id UUID,
-- Metadata
metadata JSONB DEFAULT '{}',
-- Immutable timestamps
started_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,
ended_at TIMESTAMP,
-- Final statistics (updated only when the session closes)
total_messages INT DEFAULT 0,
total_tokens_input INT DEFAULT 0,
total_tokens_output INT DEFAULT 0,
total_latency_ms INT DEFAULT 0
);
CREATE INDEX IF NOT EXISTS idx_sessions_user ON sessions(user_id);
CREATE INDEX IF NOT EXISTS idx_sessions_instance ON sessions(instance_id);
CREATE INDEX IF NOT EXISTS idx_sessions_started ON sessions(started_at DESC);
-- ============================================
-- FUNCTION: Create a new session
-- ============================================
CREATE OR REPLACE FUNCTION create_session(
p_user_id VARCHAR DEFAULT NULL,
p_instance_id VARCHAR DEFAULT NULL,
p_model_provider VARCHAR DEFAULT NULL,
p_model_name VARCHAR DEFAULT NULL,
p_algorithm_id UUID DEFAULT NULL,
p_metadata JSONB DEFAULT '{}'
)
RETURNS UUID AS $$
DECLARE
v_session_id UUID;
v_hash VARCHAR(64);
BEGIN
v_session_id := gen_random_uuid();
v_hash := sha256_hash(v_session_id::TEXT || CURRENT_TIMESTAMP::TEXT);
INSERT INTO sessions (
id, hash, user_id, instance_id,
initial_model_provider, initial_model_name,
initial_context_algorithm_id, metadata
) VALUES (
v_session_id, v_hash, p_user_id, p_instance_id,
p_model_provider, p_model_name,
p_algorithm_id, p_metadata
);
RETURN v_session_id;
END;
$$ LANGUAGE plpgsql;
-- ============================================
-- FUNCTION: Verify chain integrity
-- ============================================
CREATE OR REPLACE FUNCTION verify_chain_integrity(p_session_id UUID)
RETURNS TABLE (
is_valid BOOLEAN,
broken_at_sequence BIGINT,
expected_hash VARCHAR(64),
actual_hash VARCHAR(64)
) AS $$
DECLARE
rec RECORD;
prev_hash VARCHAR(64) := NULL;
computed_hash VARCHAR(64);
BEGIN
FOR rec IN
SELECT * FROM immutable_log
WHERE session_id = p_session_id
ORDER BY sequence_num
LOOP
-- Verify chaining
IF rec.sequence_num = 1 AND rec.hash_anterior IS NOT NULL THEN
RETURN QUERY SELECT FALSE, rec.sequence_num, NULL::VARCHAR(64), rec.hash_anterior;
RETURN;
END IF;
IF rec.sequence_num > 1 AND rec.hash_anterior != prev_hash THEN
RETURN QUERY SELECT FALSE, rec.sequence_num, prev_hash, rec.hash_anterior;
RETURN;
END IF;
-- Verify the content hash
computed_hash := sha256_hash(
COALESCE(prev_hash, '') ||
rec.session_id::TEXT ||
rec.sequence_num::TEXT ||
rec.role::TEXT ||
rec.content
);
IF computed_hash != rec.hash THEN
RETURN QUERY SELECT FALSE, rec.sequence_num, computed_hash, rec.hash;
RETURN;
END IF;
prev_hash := rec.hash;
END LOOP;
RETURN QUERY SELECT TRUE, NULL::BIGINT, NULL::VARCHAR(64), NULL::VARCHAR(64);
END;
$$ LANGUAGE plpgsql;
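The `insert_log_entry` / `verify_chain_integrity` pair implements hash chaining: each entry's SHA-256 covers the previous hash, the session id, the sequence number, the role, and the content. A minimal Python sketch of the same scheme, for reference — the dict keys mirror the table columns, but `chain_hash`/`verify_chain` are illustrative helpers, not part of the repo:

```python
import hashlib

def chain_hash(prev_hash, session_id, sequence_num, role, content):
    # Same composition as the SQL: COALESCE(prev, '') || session || seq || role || content
    payload = (prev_hash or "") + session_id + str(sequence_num) + role + content
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(entries):
    """entries: ordered dicts with hash, hash_anterior, session_id, sequence_num, role, content."""
    prev = None
    for e in entries:
        # Linkage check: each entry must point at the previous entry's hash
        if e["hash_anterior"] != prev:
            return False, e["sequence_num"]
        # Content check: recompute the hash and compare
        if chain_hash(prev, e["session_id"], e["sequence_num"], e["role"], e["content"]) != e["hash"]:
            return False, e["sequence_num"]
        prev = e["hash"]
    return True, None
```

Tampering with any stored `content` breaks every verification from that sequence number onward, which is what makes the log auditable.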


@@ -1,243 +0,0 @@
-- ============================================
-- CONTEXT MANAGER - EDITABLE TABLES
-- These tables CAN be modified
-- ============================================
-- ============================================
-- TABLE: context_blocks
-- Reusable context blocks
-- ============================================
CREATE TABLE IF NOT EXISTS context_blocks (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Identification
code VARCHAR(100) UNIQUE NOT NULL,
name VARCHAR(255) NOT NULL,
description TEXT,
-- Content
content TEXT NOT NULL,
content_hash VARCHAR(64), -- To detect changes
-- Classification
category VARCHAR(50) NOT NULL, -- system, persona, knowledge, rules, examples
priority INT DEFAULT 50, -- 0-100, higher = more important
tokens_estimated INT,
-- Scope
scope VARCHAR(50) DEFAULT 'global', -- global, project, session
project_id UUID,
-- Activation conditions
activation_rules JSONB DEFAULT '{}',
/*
Example activation_rules:
{
"always": false,
"keywords": ["database", "sql"],
"model_providers": ["anthropic"],
"min_session_messages": 0,
"time_of_day": null
}
*/
-- State
active BOOLEAN DEFAULT true,
version INT DEFAULT 1,
-- Timestamps
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_ctx_blocks_code ON context_blocks(code);
CREATE INDEX IF NOT EXISTS idx_ctx_blocks_category ON context_blocks(category);
CREATE INDEX IF NOT EXISTS idx_ctx_blocks_priority ON context_blocks(priority DESC);
CREATE INDEX IF NOT EXISTS idx_ctx_blocks_active ON context_blocks(active);
CREATE INDEX IF NOT EXISTS idx_ctx_blocks_scope ON context_blocks(scope);
DROP TRIGGER IF EXISTS update_ctx_blocks_updated_at ON context_blocks;
CREATE TRIGGER update_ctx_blocks_updated_at
BEFORE UPDATE ON context_blocks
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
-- Trigger to compute hash and token estimate on insert/update
CREATE OR REPLACE FUNCTION update_block_metadata()
RETURNS TRIGGER AS $$
BEGIN
NEW.content_hash := sha256_hash(NEW.content);
-- Simple estimate: ~4 characters per token
NEW.tokens_estimated := CEIL(LENGTH(NEW.content) / 4.0);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
DROP TRIGGER IF EXISTS calc_block_metadata ON context_blocks;
CREATE TRIGGER calc_block_metadata
BEFORE INSERT OR UPDATE OF content ON context_blocks
FOR EACH ROW EXECUTE FUNCTION update_block_metadata();
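`update_block_metadata()` hashes the content and estimates tokens at roughly four characters per token. The same heuristic in Python, for reference (the `block_metadata` helper is illustrative, and the 4-chars-per-token ratio is only an approximation; real tokenizers vary):

```python
import hashlib
import math

def block_metadata(content: str) -> dict:
    # Mirrors the trigger: SHA-256 hash plus CEIL(LENGTH(content) / 4.0)
    return {
        "content_hash": hashlib.sha256(content.encode()).hexdigest(),
        "tokens_estimated": math.ceil(len(content) / 4.0),
    }
```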
-- ============================================
-- TABLE: knowledge_base
-- Knowledge base (simple RAG)
-- ============================================
CREATE TABLE IF NOT EXISTS knowledge_base (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Identification
title VARCHAR(255) NOT NULL,
category VARCHAR(100) NOT NULL,
tags TEXT[] DEFAULT '{}',
-- Content
content TEXT NOT NULL,
content_hash VARCHAR(64),
tokens_estimated INT,
-- Embeddings (for future semantic search)
embedding_model VARCHAR(100),
embedding VECTOR(1536), -- Requires pgvector if used
-- Source
source_type VARCHAR(50), -- file, url, manual, extracted
source_ref TEXT,
-- Relevance
priority INT DEFAULT 50,
access_count INT DEFAULT 0,
last_accessed_at TIMESTAMP,
-- State
active BOOLEAN DEFAULT true,
-- Timestamps
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_kb_category ON knowledge_base(category);
CREATE INDEX IF NOT EXISTS idx_kb_tags ON knowledge_base USING GIN(tags);
CREATE INDEX IF NOT EXISTS idx_kb_priority ON knowledge_base(priority DESC);
CREATE INDEX IF NOT EXISTS idx_kb_active ON knowledge_base(active);
DROP TRIGGER IF EXISTS update_kb_updated_at ON knowledge_base;
CREATE TRIGGER update_kb_updated_at
BEFORE UPDATE ON knowledge_base
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
DROP TRIGGER IF EXISTS calc_kb_metadata ON knowledge_base;
CREATE TRIGGER calc_kb_metadata
BEFORE INSERT OR UPDATE OF content ON knowledge_base
FOR EACH ROW EXECUTE FUNCTION update_block_metadata();
-- ============================================
-- TABLE: memory
-- Long-term memory extracted from conversations
-- ============================================
CREATE TABLE IF NOT EXISTS memory (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Classification
type VARCHAR(50) NOT NULL, -- fact, preference, decision, learning, procedure
category VARCHAR(100),
-- Content
content TEXT NOT NULL,
summary VARCHAR(500),
content_hash VARCHAR(64),
-- Origin
extracted_from_session UUID REFERENCES sessions(id),
extracted_from_log UUID, -- No FK so inserts are never blocked
-- Relevance
importance INT DEFAULT 50, -- 0-100
confidence DECIMAL(3,2) DEFAULT 1.0, -- 0.00-1.00
uses INT DEFAULT 0,
last_used_at TIMESTAMP,
-- Expiration
expires_at TIMESTAMP,
-- State
active BOOLEAN DEFAULT true,
verified BOOLEAN DEFAULT false, -- Confirmed by the user
-- Timestamps
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_memory_type ON memory(type);
CREATE INDEX IF NOT EXISTS idx_memory_importance ON memory(importance DESC);
CREATE INDEX IF NOT EXISTS idx_memory_active ON memory(active);
CREATE INDEX IF NOT EXISTS idx_memory_expires ON memory(expires_at);
DROP TRIGGER IF EXISTS update_memory_updated_at ON memory;
CREATE TRIGGER update_memory_updated_at
BEFORE UPDATE ON memory
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
-- ============================================
-- TABLE: ambient_context
-- Ambient context (current system state)
-- ============================================
CREATE TABLE IF NOT EXISTS ambient_context (
id SERIAL PRIMARY KEY,
-- Snapshot
captured_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
expires_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP + INTERVAL '1 hour',
-- Ambient data
environment JSONB DEFAULT '{}',
/*
{
"timezone": "Europe/Madrid",
"locale": "es-ES",
"working_directory": "/home/user/project",
"git_branch": "main",
"active_project": "my-app"
}
*/
-- System state
system_state JSONB DEFAULT '{}',
/*
{
"servers": {"architect": "online"},
"services": {"gitea": "running"},
"pending_tasks": [],
"alerts": []
}
*/
-- Active files/resources
active_resources JSONB DEFAULT '[]'
/*
[
{"type": "file", "path": "/path/to/file.py", "modified": true},
{"type": "url", "href": "https://docs.example.com"}
]
*/
);
CREATE INDEX IF NOT EXISTS idx_ambient_captured ON ambient_context(captured_at DESC);
CREATE INDEX IF NOT EXISTS idx_ambient_expires ON ambient_context(expires_at);
-- Clean up expired contexts
CREATE OR REPLACE FUNCTION cleanup_expired_ambient()
RETURNS INTEGER AS $$
DECLARE
deleted_count INTEGER;
BEGIN
DELETE FROM ambient_context
WHERE expires_at < CURRENT_TIMESTAMP;
GET DIAGNOSTICS deleted_count = ROW_COUNT;
RETURN deleted_count;
END;
$$ LANGUAGE plpgsql;


@@ -1,399 +0,0 @@
-- ============================================
-- ALGORITHM ENGINE - Evolutionary system
-- Allows versioning and improving the context-selection algorithm
-- ============================================
-- ============================================
-- TABLE: context_algorithms
-- Definitions of context-selection algorithms
-- ============================================
CREATE TABLE IF NOT EXISTS context_algorithms (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Identification
code VARCHAR(100) UNIQUE NOT NULL,
name VARCHAR(255) NOT NULL,
description TEXT,
version VARCHAR(20) NOT NULL DEFAULT '1.0.0',
-- State
status algorithm_status DEFAULT 'draft',
-- Algorithm configuration
config JSONB NOT NULL DEFAULT '{
"max_tokens": 4000,
"sources": {
"system_prompts": true,
"context_blocks": true,
"memory": true,
"knowledge": true,
"history": true,
"ambient": true
},
"weights": {
"priority": 0.4,
"relevance": 0.3,
"recency": 0.2,
"frequency": 0.1
},
"history_config": {
"max_messages": 20,
"summarize_after": 10,
"include_system": false
},
"memory_config": {
"max_items": 15,
"min_importance": 30
},
"knowledge_config": {
"max_items": 5,
"require_keyword_match": true
}
}'::jsonb,
-- Algorithm code (embedded Python)
selector_code TEXT,
/*
Example:
def select_context(session, message, config):
context = []
# ... selection logic
return context
*/
-- Statistics
times_used INT DEFAULT 0,
avg_tokens_used DECIMAL(10,2),
avg_relevance_score DECIMAL(3,2),
avg_response_quality DECIMAL(3,2),
-- Lineage
parent_algorithm_id UUID REFERENCES context_algorithms(id),
fork_reason TEXT,
-- Timestamps
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
activated_at TIMESTAMP,
deprecated_at TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_algo_code ON context_algorithms(code);
CREATE INDEX IF NOT EXISTS idx_algo_status ON context_algorithms(status);
CREATE INDEX IF NOT EXISTS idx_algo_version ON context_algorithms(version);
CREATE INDEX IF NOT EXISTS idx_algo_parent ON context_algorithms(parent_algorithm_id);
DROP TRIGGER IF EXISTS update_algo_updated_at ON context_algorithms;
CREATE TRIGGER update_algo_updated_at
BEFORE UPDATE ON context_algorithms
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
-- ============================================
-- TABLE: algorithm_metrics
-- Performance metrics per algorithm
-- ============================================
CREATE TABLE IF NOT EXISTS algorithm_metrics (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- References
algorithm_id UUID NOT NULL REFERENCES context_algorithms(id),
session_id UUID REFERENCES sessions(id),
log_entry_id UUID, -- Reference to the immutable log
-- Context metrics
tokens_budget INT,
tokens_used INT,
token_efficiency DECIMAL(5,4), -- tokens_used / tokens_budget
-- Context composition
context_composition JSONB,
/*
{
"system_prompts": {"count": 1, "tokens": 500},
"context_blocks": {"count": 3, "tokens": 800},
"memory": {"count": 5, "tokens": 300},
"knowledge": {"count": 2, "tokens": 400},
"history": {"count": 10, "tokens": 1500},
"ambient": {"count": 1, "tokens": 100}
}
*/
-- Response metrics
latency_ms INT,
model_tokens_input INT,
model_tokens_output INT,
-- Evaluation (can be automatic or manual)
relevance_score DECIMAL(3,2), -- 0.00-1.00: was the context relevant?
response_quality DECIMAL(3,2), -- 0.00-1.00: was the response good?
user_satisfaction DECIMAL(3,2), -- 0.00-1.00: user feedback
-- Automatic evaluation
auto_evaluated BOOLEAN DEFAULT false,
evaluation_method VARCHAR(50), -- llm_judge, heuristic, user_feedback
-- Timestamp
recorded_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_metrics_algorithm ON algorithm_metrics(algorithm_id);
CREATE INDEX IF NOT EXISTS idx_metrics_session ON algorithm_metrics(session_id);
CREATE INDEX IF NOT EXISTS idx_metrics_recorded ON algorithm_metrics(recorded_at DESC);
CREATE INDEX IF NOT EXISTS idx_metrics_quality ON algorithm_metrics(response_quality DESC);
-- ============================================
-- TABLE: algorithm_experiments
-- A/B testing of algorithms
-- ============================================
CREATE TABLE IF NOT EXISTS algorithm_experiments (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Identification
name VARCHAR(255) NOT NULL,
description TEXT,
-- Competing algorithms
control_algorithm_id UUID NOT NULL REFERENCES context_algorithms(id),
treatment_algorithm_id UUID NOT NULL REFERENCES context_algorithms(id),
-- Configuration
traffic_split DECIMAL(3,2) DEFAULT 0.50, -- share of traffic for treatment
min_samples INT DEFAULT 100,
max_samples INT DEFAULT 1000,
-- State
status VARCHAR(50) DEFAULT 'pending', -- pending, running, completed, cancelled
-- Results
control_samples INT DEFAULT 0,
treatment_samples INT DEFAULT 0,
control_avg_quality DECIMAL(3,2),
treatment_avg_quality DECIMAL(3,2),
winner_algorithm_id UUID REFERENCES context_algorithms(id),
statistical_significance DECIMAL(5,4),
-- Timestamps
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
started_at TIMESTAMP,
completed_at TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_exp_status ON algorithm_experiments(status);
CREATE INDEX IF NOT EXISTS idx_exp_control ON algorithm_experiments(control_algorithm_id);
CREATE INDEX IF NOT EXISTS idx_exp_treatment ON algorithm_experiments(treatment_algorithm_id);
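`traffic_split` is the fraction of requests routed to the treatment algorithm. A hedged sketch of how a caller might pick an algorithm per request — the `assign_algorithm` helper and its dict-based input are hypothetical; only the field names come from the table columns:

```python
import random

def assign_algorithm(experiment: dict, rng=random.random) -> str:
    """Route a request: treatment gets `traffic_split` of traffic, control the rest."""
    if rng() < experiment["traffic_split"]:
        return experiment["treatment_algorithm_id"]
    return experiment["control_algorithm_id"]
```

Passing `rng` as a parameter keeps the split deterministic in tests while defaulting to uniform random assignment in production.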
-- ============================================
-- VIEW: Performance summary per algorithm
-- ============================================
CREATE OR REPLACE VIEW algorithm_performance AS
SELECT
a.id,
a.code,
a.name,
a.version,
a.status,
a.times_used,
COUNT(m.id) as total_metrics,
AVG(m.token_efficiency) as avg_token_efficiency,
AVG(m.relevance_score) as avg_relevance,
AVG(m.response_quality) as avg_quality,
AVG(m.user_satisfaction) as avg_satisfaction,
AVG(m.latency_ms) as avg_latency,
PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY m.response_quality) as median_quality,
STDDEV(m.response_quality) as quality_stddev
FROM context_algorithms a
LEFT JOIN algorithm_metrics m ON a.id = m.algorithm_id
GROUP BY a.id, a.code, a.name, a.version, a.status, a.times_used;
-- ============================================
-- FUNCTION: Activate an algorithm (deprecates the previous one)
-- ============================================
CREATE OR REPLACE FUNCTION activate_algorithm(p_algorithm_id UUID)
RETURNS BOOLEAN AS $$
BEGIN
-- Deprecate the currently active algorithm
UPDATE context_algorithms
SET status = 'deprecated', deprecated_at = CURRENT_TIMESTAMP
WHERE status = 'active';
-- Activate the new algorithm
UPDATE context_algorithms
SET status = 'active', activated_at = CURRENT_TIMESTAMP
WHERE id = p_algorithm_id;
RETURN FOUND;
END;
$$ LANGUAGE plpgsql;
-- ============================================
-- FUNCTION: Clone an algorithm for experimentation
-- ============================================
CREATE OR REPLACE FUNCTION fork_algorithm(
p_source_id UUID,
p_new_code VARCHAR,
p_new_name VARCHAR,
p_reason TEXT DEFAULT NULL
)
RETURNS UUID AS $$
DECLARE
v_new_id UUID;
v_source RECORD;
BEGIN
SELECT * INTO v_source FROM context_algorithms WHERE id = p_source_id;
IF NOT FOUND THEN
RAISE EXCEPTION 'Source algorithm not found: %', p_source_id;
END IF;
INSERT INTO context_algorithms (
code, name, description, version,
status, config, selector_code,
parent_algorithm_id, fork_reason
) VALUES (
p_new_code,
p_new_name,
v_source.description,
'1.0.0',
'draft',
v_source.config,
v_source.selector_code,
p_source_id,
p_reason
) RETURNING id INTO v_new_id;
RETURN v_new_id;
END;
$$ LANGUAGE plpgsql;
-- ============================================
-- FUNCTION: Get the active algorithm
-- ============================================
CREATE OR REPLACE FUNCTION get_active_algorithm()
RETURNS UUID AS $$
SELECT id FROM context_algorithms
WHERE status = 'active'
ORDER BY activated_at DESC
LIMIT 1;
$$ LANGUAGE SQL STABLE;
-- ============================================
-- FUNCTION: Record a usage metric
-- ============================================
CREATE OR REPLACE FUNCTION record_algorithm_metric(
p_algorithm_id UUID,
p_session_id UUID,
p_log_entry_id UUID,
p_tokens_budget INT,
p_tokens_used INT,
p_context_composition JSONB,
p_latency_ms INT DEFAULT NULL,
p_model_tokens_input INT DEFAULT NULL,
p_model_tokens_output INT DEFAULT NULL
)
RETURNS UUID AS $$
DECLARE
v_metric_id UUID;
v_efficiency DECIMAL(5,4);
BEGIN
v_efficiency := CASE
WHEN p_tokens_budget > 0 THEN p_tokens_used::DECIMAL / p_tokens_budget
ELSE 0
END;
INSERT INTO algorithm_metrics (
algorithm_id, session_id, log_entry_id,
tokens_budget, tokens_used, token_efficiency,
context_composition, latency_ms,
model_tokens_input, model_tokens_output
) VALUES (
p_algorithm_id, p_session_id, p_log_entry_id,
p_tokens_budget, p_tokens_used, v_efficiency,
p_context_composition, p_latency_ms,
p_model_tokens_input, p_model_tokens_output
) RETURNING id INTO v_metric_id;
-- Update the algorithm's usage counter
UPDATE context_algorithms
SET times_used = times_used + 1
WHERE id = p_algorithm_id;
RETURN v_metric_id;
END;
$$ LANGUAGE plpgsql;
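The `token_efficiency` column stores tokens used over budget, with a guard for a zero budget and the `DECIMAL(5,4)` precision of the schema. A minimal standalone Python sketch of the same calculation (illustrative only, not part of the module):

```python
from decimal import Decimal, ROUND_HALF_UP

def token_efficiency(tokens_used: int, tokens_budget: int) -> Decimal:
    """Mirror of the SQL CASE: used / budget, or 0 when the budget is 0."""
    if tokens_budget <= 0:
        return Decimal("0")
    # DECIMAL(5,4) in the schema: keep four fractional digits
    ratio = Decimal(tokens_used) / Decimal(tokens_budget)
    return ratio.quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP)

print(token_efficiency(3200, 4000))  # 0.8000
print(token_efficiency(100, 0))      # 0
```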
-- ============================================
-- FUNCTION: Update a metric's evaluation
-- ============================================
CREATE OR REPLACE FUNCTION update_metric_evaluation(
p_metric_id UUID,
p_relevance DECIMAL DEFAULT NULL,
p_quality DECIMAL DEFAULT NULL,
p_satisfaction DECIMAL DEFAULT NULL,
p_method VARCHAR DEFAULT 'manual'
)
RETURNS BOOLEAN AS $$
BEGIN
UPDATE algorithm_metrics
SET
relevance_score = COALESCE(p_relevance, relevance_score),
response_quality = COALESCE(p_quality, response_quality),
user_satisfaction = COALESCE(p_satisfaction, user_satisfaction),
auto_evaluated = (p_method != 'user_feedback'),
evaluation_method = p_method
WHERE id = p_metric_id;
RETURN FOUND;
END;
$$ LANGUAGE plpgsql;
-- ============================================
-- SEED DATA: Default algorithm
-- ============================================
INSERT INTO context_algorithms (code, name, description, version, status, config) VALUES
(
'ALG_DEFAULT_V1',
'Default algorithm v1',
'Context selection based on priority and available tokens',
'1.0.0',
'active',
'{
"max_tokens": 4000,
"sources": {
"system_prompts": true,
"context_blocks": true,
"memory": true,
"knowledge": true,
"history": true,
"ambient": true
},
"weights": {
"priority": 0.4,
"relevance": 0.3,
"recency": 0.2,
"frequency": 0.1
},
"history_config": {
"max_messages": 20,
"summarize_after": 10,
"include_system": false
},
"memory_config": {
"max_items": 15,
"min_importance": 30
},
"knowledge_config": {
"max_items": 5,
"require_keyword_match": true
}
}'::jsonb
) ON CONFLICT (code) DO NOTHING;
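The `weights` block suggests how candidate items could be ranked before filling the token budget. The standard selector below orders mainly by priority, so the exact scoring formula here is an assumption for illustration, using normalized signals in [0, 1]:

```python
def score_item(item: dict, weights: dict) -> float:
    """Weighted sum of normalized signals (all assumed to be in [0, 1])."""
    return sum(weights.get(k, 0.0) * item.get(k, 0.0)
               for k in ("priority", "relevance", "recency", "frequency"))

weights = {"priority": 0.4, "relevance": 0.3, "recency": 0.2, "frequency": 0.1}
item = {"priority": 1.0, "relevance": 0.5, "recency": 0.0, "frequency": 0.0}
print(round(score_item(item, weights), 4))  # 0.55
```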


@@ -1,25 +0,0 @@
"""
Context Manager - Sistema local de gestión de contexto para IA
Características:
- Log inmutable (tabla de referencia no editable)
- Gestor de contexto mejorable
- Agnóstico al modelo de IA
- Sistema de métricas para mejora continua
"""
__version__ = "1.0.0"
__author__ = "TZZR System"
from .database import Database
from .context_selector import ContextSelector
from .models import Session, Message, ContextBlock, Algorithm
__all__ = [
"Database",
"ContextSelector",
"Session",
"Message",
"ContextBlock",
"Algorithm",
]


@@ -1,508 +0,0 @@
"""
Selector de Contexto - Motor principal
Selecciona el contexto óptimo para enviar al modelo de IA
basándose en el algoritmo activo y las fuentes disponibles.
"""
import re
import uuid
from datetime import datetime
from typing import Optional, List, Dict, Any, Callable
from .models import (
Session, Message, MessageRole, ContextBlock, Memory,
Knowledge, Algorithm, AmbientContext, ContextItem,
SelectedContext, ContextSource
)
from .database import Database
class ContextSelector:
"""
Motor de selección de contexto.
Características:
- Agnóstico al modelo de IA
- Basado en algoritmo configurable
- Métricas de rendimiento
- Soporte para múltiples fuentes
"""
def __init__(self, db: Database):
self.db = db
self._custom_selectors: Dict[str, Callable] = {}
def register_selector(self, name: str, selector_fn: Callable):
"""Registra un selector de contexto personalizado"""
self._custom_selectors[name] = selector_fn
def select_context(
self,
session: Session,
user_message: str,
algorithm: Algorithm = None,
max_tokens: int = None
) -> SelectedContext:
"""
Selecciona el contexto óptimo para la sesión actual.
Args:
session: Sesión activa
user_message: Mensaje del usuario
algorithm: Algoritmo a usar (o el activo por defecto)
max_tokens: Límite de tokens (sobreescribe config del algoritmo)
Returns:
SelectedContext con los items seleccionados
"""
# Resolve the algorithm
if algorithm is None:
algorithm = self.db.get_active_algorithm()
if algorithm is None:
# Fall back to a default algorithm if none is active
algorithm = Algorithm(
code="DEFAULT",
name="Default",
config={
"max_tokens": 4000,
"sources": {
"system_prompts": True,
"context_blocks": True,
"memory": True,
"knowledge": True,
"history": True,
"ambient": True
},
"weights": {"priority": 0.4, "relevance": 0.3, "recency": 0.2, "frequency": 0.1},
"history_config": {"max_messages": 20, "summarize_after": 10, "include_system": False},
"memory_config": {"max_items": 15, "min_importance": 30},
"knowledge_config": {"max_items": 5, "require_keyword_match": True}
}
)
config = algorithm.config
token_budget = max_tokens or config.get("max_tokens", 4000)
sources = config.get("sources", {})
# Check for a registered custom selector
if algorithm.selector_code and algorithm.code in self._custom_selectors:
return self._custom_selectors[algorithm.code](
session, user_message, config, self.db
)
# Standard selection
context = SelectedContext(algorithm_id=algorithm.id)
composition = {}
# 1. System prompts and context blocks
if sources.get("context_blocks", True):
blocks = self._select_context_blocks(user_message, config)
for block in blocks:
if context.total_tokens + block.tokens_estimated <= token_budget:
context.items.append(ContextItem(
source=ContextSource.DATASET,
content=block.content,
tokens=block.tokens_estimated,
priority=block.priority,
metadata={"block_code": block.code, "category": block.category}
))
context.total_tokens += block.tokens_estimated
composition["context_blocks"] = {
"count": len([i for i in context.items if i.source == ContextSource.DATASET]),
"tokens": sum(i.tokens for i in context.items if i.source == ContextSource.DATASET)
}
# 2. Long-term memory
if sources.get("memory", True):
memory_config = config.get("memory_config", {})
memories = self._select_memories(
user_message,
min_importance=memory_config.get("min_importance", 30),
max_items=memory_config.get("max_items", 15)
)
memory_tokens = 0
memory_count = 0
for mem in memories:
tokens = len(mem.content) // 4
if context.total_tokens + tokens <= token_budget:
context.items.append(ContextItem(
source=ContextSource.MEMORY,
content=f"[Memoria - {mem.type}]: {mem.content}",
tokens=tokens,
priority=mem.importance,
metadata={"memory_type": mem.type, "importance": mem.importance}
))
context.total_tokens += tokens
memory_tokens += tokens
memory_count += 1
composition["memory"] = {"count": memory_count, "tokens": memory_tokens}
# 3. Knowledge base
if sources.get("knowledge", True):
knowledge_config = config.get("knowledge_config", {})
keywords = self._extract_keywords(user_message)
if keywords or not knowledge_config.get("require_keyword_match", True):
knowledge_items = self.db.search_knowledge(
keywords=keywords if knowledge_config.get("require_keyword_match", True) else None,
limit=knowledge_config.get("max_items", 5)
)
knowledge_tokens = 0
knowledge_count = 0
for item in knowledge_items:
if context.total_tokens + item.tokens_estimated <= token_budget:
context.items.append(ContextItem(
source=ContextSource.KNOWLEDGE,
content=f"[Conocimiento - {item.title}]: {item.content}",
tokens=item.tokens_estimated,
priority=item.priority,
metadata={"title": item.title, "category": item.category}
))
context.total_tokens += item.tokens_estimated
knowledge_tokens += item.tokens_estimated
knowledge_count += 1
composition["knowledge"] = {"count": knowledge_count, "tokens": knowledge_tokens}
# 4. Ambient context
if sources.get("ambient", True):
ambient = self.db.get_latest_ambient_context()
if ambient:
ambient_content = self._format_ambient_context(ambient)
tokens = len(ambient_content) // 4
if context.total_tokens + tokens <= token_budget:
context.items.append(ContextItem(
source=ContextSource.AMBIENT,
content=ambient_content,
tokens=tokens,
priority=30,
metadata={"captured_at": ambient.captured_at.isoformat()}
))
context.total_tokens += tokens
composition["ambient"] = {"count": 1, "tokens": tokens}
# 5. Conversation history (last, so it fills the remaining budget)
if sources.get("history", True):
history_config = config.get("history_config", {})
history = self.db.get_session_history(
session.id,
limit=history_config.get("max_messages", 20),
include_system=history_config.get("include_system", False)
)
history_tokens = 0
history_count = 0
for msg in history:
tokens = len(msg.content) // 4
if context.total_tokens + tokens <= token_budget:
context.items.append(ContextItem(
source=ContextSource.HISTORY,
content=msg.content,
tokens=tokens,
priority=10,
metadata={"role": msg.role.value, "sequence": msg.sequence_num}
))
context.total_tokens += tokens
history_tokens += tokens
history_count += 1
composition["history"] = {"count": history_count, "tokens": history_tokens}
context.composition = composition
return context
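Each source in `select_context` follows the same greedy pattern: walk candidates in order and add each one that still fits the remaining token budget (an item that does not fit is skipped, but smaller later items may still be added). A stripped-down sketch of that loop; the names here are illustrative, not the module's API:

```python
def fill_budget(candidates, token_budget):
    """Greedily pack (tokens, content) pairs into the budget, in the given order."""
    selected, used = [], 0
    for tokens, content in candidates:
        if used + tokens <= token_budget:
            selected.append(content)
            used += tokens
    return selected, used

items = [(1200, "system prompt"), (900, "memory"), (2500, "history"), (300, "ambient")]
# history (2500) does not fit after 2100 tokens, but ambient (300) still does
print(fill_budget(items, 4000))  # (['system prompt', 'memory', 'ambient'], 2400)
```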
def _select_context_blocks(
self,
user_message: str,
config: Dict[str, Any]
) -> List[ContextBlock]:
"""Selecciona bloques de contexto relevantes"""
blocks = self.db.get_active_context_blocks()
relevant_blocks = []
keywords = self._extract_keywords(user_message)
for block in blocks:
rules = block.activation_rules
# Always include
if rules.get("always", False):
relevant_blocks.append(block)
continue
# Check keyword matches
block_keywords = rules.get("keywords", [])
if block_keywords:
if any(kw.lower() in user_message.lower() for kw in block_keywords):
relevant_blocks.append(block)
continue
# "system" category blocks are always included
if block.category == "system":
relevant_blocks.append(block)
# Sort by priority
relevant_blocks.sort(key=lambda b: b.priority, reverse=True)
return relevant_blocks
def _select_memories(
self,
user_message: str,
min_importance: int = 30,
max_items: int = 15
) -> List[Memory]:
"""Selecciona memorias relevantes"""
memories = self.db.get_memories(
min_importance=min_importance,
limit=max_items * 2  # Fetch extra so we can filter by relevance
)
# Filter by relevance to the message
keywords = self._extract_keywords(user_message)
if keywords:
scored_memories = []
for mem in memories:
score = sum(1 for kw in keywords if kw.lower() in mem.content.lower())
scored_memories.append((mem, score + mem.importance / 100))
scored_memories.sort(key=lambda x: x[1], reverse=True)
return [m[0] for m in scored_memories[:max_items]]
return memories[:max_items]
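Memory ranking combines keyword overlap with stored importance: each keyword hit counts 1 point, plus `importance / 100`. A self-contained sketch of that scoring (tuples stand in for the `Memory` objects):

```python
def rank_memories(memories, keywords, max_items):
    """memories: (content, importance) pairs; score = keyword hits + importance/100."""
    scored = []
    for content, importance in memories:
        hits = sum(1 for kw in keywords if kw.lower() in content.lower())
        scored.append(((content, importance), hits + importance / 100))
    scored.sort(key=lambda x: x[1], reverse=True)
    return [m for m, _ in scored[:max_items]]

mems = [("prefers dark mode", 40), ("server DECK runs docker", 80), ("likes docker compose", 55)]
print(rank_memories(mems, ["docker"], 2))
# [('server DECK runs docker', 80), ('likes docker compose', 55)]
```

Because a single keyword hit outweighs any importance difference, a highly important but off-topic memory ranks below a weakly important on-topic one.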
def _extract_keywords(self, text: str) -> List[str]:
"""Extrae keywords de un texto"""
# Palabras comunes a ignorar
stopwords = {
"el", "la", "los", "las", "un", "una", "unos", "unas",
"de", "del", "al", "a", "en", "con", "por", "para",
"que", "qué", "como", "cómo", "donde", "dónde", "cuando", "cuándo",
"es", "son", "está", "están", "ser", "estar", "tener", "hacer",
"y", "o", "pero", "si", "no", "me", "te", "se", "nos",
"the", "a", "an", "is", "are", "was", "were", "be", "been",
"have", "has", "had", "do", "does", "did", "will", "would",
"can", "could", "should", "may", "might", "must",
"and", "or", "but", "if", "then", "else", "when", "where",
"what", "which", "who", "whom", "this", "that", "these", "those",
"i", "you", "he", "she", "it", "we", "they", "my", "your", "his", "her"
}
# Extract words
words = re.findall(r'\b\w+\b', text.lower())
# Filter out short words and stopwords
keywords = [
w for w in words
if len(w) > 2 and w not in stopwords
]
return list(set(keywords))
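For reference, here is the same extraction logic run on a sample query, as a standalone rewrite with a reduced illustrative stopword set (the module's set is much larger). Note that the `set` dedupe makes the output order nondeterministic, which is why it is sorted for display:

```python
import re

STOPWORDS = {"the", "is", "a", "an", "and", "or", "in", "on", "for", "how", "do", "i"}

def extract_keywords(text: str) -> list:
    """Lowercase, tokenize, drop short words and stopwords, dedupe."""
    words = re.findall(r"\b\w+\b", text.lower())
    return list({w for w in words if len(w) > 2 and w not in STOPWORDS})

kws = extract_keywords("How do I restart the Gitea service on the central server?")
print(sorted(kws))  # ['central', 'gitea', 'restart', 'server', 'service']
```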
def _format_ambient_context(self, ambient: AmbientContext) -> str:
"""Formatea el contexto ambiental como texto"""
lines = ["[Contexto del sistema]"]
env = ambient.environment
if env:
if env.get("timezone"):
lines.append(f"- Zona horaria: {env['timezone']}")
if env.get("working_directory"):
lines.append(f"- Directorio: {env['working_directory']}")
if env.get("git_branch"):
lines.append(f"- Git branch: {env['git_branch']}")
if env.get("active_project"):
lines.append(f"- Proyecto activo: {env['active_project']}")
state = ambient.system_state
if state:
if state.get("servers"):
servers = state["servers"]
online = [k for k, v in servers.items() if v == "online"]
if online:
lines.append(f"- Servidores online: {', '.join(online)}")
if state.get("alerts"):
alerts = state["alerts"]
if alerts:
lines.append(f"- Alertas activas: {len(alerts)}")
return "\n".join(lines)
class ContextManager:
"""
Gestor completo de contexto.
Combina el selector con el logging y métricas.
"""
def __init__(
self,
db: Database = None,
host: str = None,
port: int = None,
database: str = None,
user: str = None,
password: str = None
):
if db:
self.db = db
else:
self.db = Database(
host=host,
port=port,
database=database,
user=user,
password=password
)
self.selector = ContextSelector(self.db)
self._current_session: Optional[Session] = None
def start_session(
self,
user_id: str = None,
instance_id: str = None,
model_provider: str = None,
model_name: str = None,
metadata: Dict[str, Any] = None
) -> Session:
"""Inicia una nueva sesión"""
algorithm = self.db.get_active_algorithm()
self._current_session = self.db.create_session(
user_id=user_id,
instance_id=instance_id,
model_provider=model_provider,
model_name=model_name,
algorithm_id=algorithm.id if algorithm else None,
metadata=metadata
)
return self._current_session
def get_context_for_message(
self,
message: str,
max_tokens: int = None,
session: Session = None
) -> SelectedContext:
"""Obtiene el contexto para un mensaje"""
session = session or self._current_session
if not session:
raise ValueError("No hay sesión activa. Llama a start_session() primero.")
return self.selector.select_context(
session=session,
user_message=message,
max_tokens=max_tokens
)
def log_user_message(
self,
content: str,
context: SelectedContext = None,
session: Session = None
) -> uuid.UUID:
"""Registra un mensaje del usuario en el log inmutable"""
session = session or self._current_session
if not session:
raise ValueError("No hay sesión activa.")
return self.db.insert_log_entry(
session_id=session.id,
role=MessageRole.USER,
content=content,
model_provider=session.model_provider,
model_name=session.model_name,
context_snapshot=context.to_dict() if context else None,
context_algorithm_id=context.algorithm_id if context else None,
context_tokens_used=context.total_tokens if context else None
)
def log_assistant_message(
self,
content: str,
tokens_input: int = None,
tokens_output: int = None,
latency_ms: int = None,
model_provider: str = None,
model_name: str = None,
model_params: Dict[str, Any] = None,
session: Session = None
) -> uuid.UUID:
"""Registra una respuesta del asistente en el log inmutable"""
session = session or self._current_session
if not session:
raise ValueError("No hay sesión activa.")
return self.db.insert_log_entry(
session_id=session.id,
role=MessageRole.ASSISTANT,
content=content,
model_provider=model_provider or session.model_provider,
model_name=model_name or session.model_name,
model_params=model_params,
tokens_input=tokens_input,
tokens_output=tokens_output,
latency_ms=latency_ms
)
def record_metric(
self,
context: SelectedContext,
log_entry_id: uuid.UUID,
tokens_budget: int,
latency_ms: int = None,
model_tokens_input: int = None,
model_tokens_output: int = None,
session: Session = None
) -> uuid.UUID:
"""Registra una métrica de uso del algoritmo"""
session = session or self._current_session
if not session or not context.algorithm_id:
return None
return self.db.record_metric(
algorithm_id=context.algorithm_id,
session_id=session.id,
log_entry_id=log_entry_id,
tokens_budget=tokens_budget,
tokens_used=context.total_tokens,
context_composition=context.composition,
latency_ms=latency_ms,
model_tokens_input=model_tokens_input,
model_tokens_output=model_tokens_output
)
def rate_response(
self,
metric_id: uuid.UUID,
relevance: float = None,
quality: float = None,
satisfaction: float = None
):
"""Evalúa una respuesta (feedback manual)"""
self.db.update_metric_evaluation(
metric_id=metric_id,
relevance=relevance,
quality=quality,
satisfaction=satisfaction,
method="user_feedback"
)
def verify_session_integrity(self, session: Session = None) -> Dict[str, Any]:
"""Verifica la integridad de la sesión"""
session = session or self._current_session
if not session:
raise ValueError("No hay sesión activa.")
return self.db.verify_chain_integrity(session.id)
def close(self):
"""Cierra las conexiones"""
self.db.close()


@@ -1,621 +0,0 @@
"""
Conexión a base de datos PostgreSQL
"""
import os
import uuid
import json
from datetime import datetime
from typing import Optional, List, Dict, Any
from contextlib import contextmanager
try:
import psycopg2
from psycopg2.extras import RealDictCursor, Json
from psycopg2 import pool
HAS_PSYCOPG2 = True
except ImportError:
HAS_PSYCOPG2 = False
from .models import (
Session, Message, MessageRole, ContextBlock, Memory,
Knowledge, Algorithm, AlgorithmStatus, AlgorithmMetric,
AmbientContext
)
class Database:
"""Gestión de conexión a PostgreSQL"""
def __init__(
self,
host: str = None,
port: int = None,
database: str = None,
user: str = None,
password: str = None,
min_connections: int = 1,
max_connections: int = 10
):
if not HAS_PSYCOPG2:
raise ImportError("psycopg2 no está instalado. Ejecuta: pip install psycopg2-binary")
self.host = host or os.getenv("PGHOST", "localhost")
self.port = port or int(os.getenv("PGPORT", "5432"))
self.database = database or os.getenv("PGDATABASE", "context_manager")
self.user = user or os.getenv("PGUSER", "postgres")
self.password = password or os.getenv("PGPASSWORD", "")
self._pool = pool.ThreadedConnectionPool(
min_connections,
max_connections,
host=self.host,
port=self.port,
database=self.database,
user=self.user,
password=self.password
)
@contextmanager
def get_connection(self):
"""Obtiene una conexión del pool"""
conn = self._pool.getconn()
try:
yield conn
conn.commit()
except Exception:
conn.rollback()
raise
finally:
self._pool.putconn(conn)
@contextmanager
def get_cursor(self, dict_cursor: bool = True):
"""Obtiene un cursor"""
with self.get_connection() as conn:
cursor_factory = RealDictCursor if dict_cursor else None
with conn.cursor(cursor_factory=cursor_factory) as cur:
yield cur
def close(self):
"""Cierra el pool de conexiones"""
self._pool.closeall()
# ==========================================
# SESSIONS
# ==========================================
def create_session(
self,
user_id: str = None,
instance_id: str = None,
model_provider: str = None,
model_name: str = None,
algorithm_id: uuid.UUID = None,
metadata: Dict[str, Any] = None
) -> Session:
"""Crea una nueva sesión"""
with self.get_cursor() as cur:
cur.execute(
"""
SELECT create_session(%s, %s, %s, %s, %s, %s) as id
""",
(user_id, instance_id, model_provider, model_name,
str(algorithm_id) if algorithm_id else None,
Json(metadata or {}))
)
session_id = cur.fetchone()["id"]
return Session(
id=session_id,
user_id=user_id,
instance_id=instance_id,
model_provider=model_provider,
model_name=model_name,
algorithm_id=algorithm_id,
metadata=metadata or {}
)
def get_session(self, session_id: uuid.UUID) -> Optional[Session]:
"""Obtiene una sesión por ID"""
with self.get_cursor() as cur:
cur.execute(
"SELECT * FROM sessions WHERE id = %s",
(str(session_id),)
)
row = cur.fetchone()
if row:
return Session(
id=row["id"],
user_id=row["user_id"],
instance_id=row["instance_id"],
model_provider=row["initial_model_provider"],
model_name=row["initial_model_name"],
algorithm_id=row["initial_context_algorithm_id"],
metadata=row["metadata"] or {},
started_at=row["started_at"],
ended_at=row["ended_at"],
total_messages=row["total_messages"],
total_tokens_input=row["total_tokens_input"],
total_tokens_output=row["total_tokens_output"]
)
return None
# ==========================================
# IMMUTABLE LOG
# ==========================================
def insert_log_entry(
self,
session_id: uuid.UUID,
role: MessageRole,
content: str,
model_provider: str = None,
model_name: str = None,
model_params: Dict[str, Any] = None,
context_snapshot: Dict[str, Any] = None,
context_algorithm_id: uuid.UUID = None,
context_tokens_used: int = None,
tokens_input: int = None,
tokens_output: int = None,
latency_ms: int = None,
source_ip: str = None,
user_agent: str = None
) -> uuid.UUID:
"""Inserta una entrada en el log inmutable"""
with self.get_cursor() as cur:
cur.execute(
"""
SELECT insert_log_entry(
%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s
) as id
""",
(
str(session_id),
role.value,
content,
model_provider,
model_name,
Json(model_params or {}),
Json(context_snapshot) if context_snapshot else None,
str(context_algorithm_id) if context_algorithm_id else None,
context_tokens_used,
tokens_input,
tokens_output,
latency_ms,
source_ip,
user_agent
)
)
return cur.fetchone()["id"]
def get_session_history(
self,
session_id: uuid.UUID,
limit: int = None,
include_system: bool = False
) -> List[Message]:
"""Obtiene el historial de una sesión"""
with self.get_cursor() as cur:
query = """
SELECT * FROM immutable_log
WHERE session_id = %s
"""
params = [str(session_id)]
if not include_system:
query += " AND role != 'system'"
query += " ORDER BY sequence_num DESC"
if limit:
query += " LIMIT %s"
params.append(limit)
cur.execute(query, params)
rows = cur.fetchall()
messages = []
for row in reversed(rows):
messages.append(Message(
id=row["id"],
session_id=row["session_id"],
sequence_num=row["sequence_num"],
role=MessageRole(row["role"]),
content=row["content"],
hash=row["hash"],
hash_anterior=row["hash_anterior"],
model_provider=row["model_provider"],
model_name=row["model_name"],
model_params=row["model_params"] or {},
context_snapshot=row["context_snapshot"],
context_algorithm_id=row["context_algorithm_id"],
context_tokens_used=row["context_tokens_used"],
tokens_input=row["tokens_input"],
tokens_output=row["tokens_output"],
latency_ms=row["latency_ms"],
created_at=row["created_at"]
))
return messages
def verify_chain_integrity(self, session_id: uuid.UUID) -> Dict[str, Any]:
"""Verifica la integridad de la cadena de hashes"""
with self.get_cursor() as cur:
cur.execute(
"SELECT * FROM verify_chain_integrity(%s)",
(str(session_id),)
)
row = cur.fetchone()
return {
"is_valid": row["is_valid"],
"broken_at_sequence": row["broken_at_sequence"],
"expected_hash": row["expected_hash"],
"actual_hash": row["actual_hash"]
}
# ==========================================
# CONTEXT BLOCKS
# ==========================================
def create_context_block(self, block: ContextBlock) -> uuid.UUID:
"""Crea un bloque de contexto"""
with self.get_cursor() as cur:
cur.execute(
"""
INSERT INTO context_blocks
(code, name, description, content, category, priority, scope, project_id, activation_rules, active)
VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
RETURNING id
""",
(
block.code, block.name, block.description, block.content,
block.category, block.priority, block.scope,
str(block.project_id) if block.project_id else None,
Json(block.activation_rules), block.active
)
)
return cur.fetchone()["id"]
def get_active_context_blocks(
self,
category: str = None,
scope: str = None,
project_id: uuid.UUID = None
) -> List[ContextBlock]:
"""Obtiene bloques de contexto activos"""
with self.get_cursor() as cur:
query = "SELECT * FROM context_blocks WHERE active = true"
params = []
if category:
query += " AND category = %s"
params.append(category)
if scope:
query += " AND scope = %s"
params.append(scope)
if project_id:
query += " AND (project_id = %s OR project_id IS NULL)"
params.append(str(project_id))
query += " ORDER BY priority DESC"
cur.execute(query, params)
return [
ContextBlock(
id=row["id"],
code=row["code"],
name=row["name"],
description=row["description"],
content=row["content"],
content_hash=row["content_hash"],
category=row["category"],
priority=row["priority"],
tokens_estimated=row["tokens_estimated"],
scope=row["scope"],
project_id=row["project_id"],
activation_rules=row["activation_rules"] or {},
active=row["active"],
version=row["version"]
)
for row in cur.fetchall()
]
# ==========================================
# MEMORY
# ==========================================
def get_memories(
self,
type: str = None,
min_importance: int = 0,
limit: int = 20
) -> List[Memory]:
"""Obtiene memorias activas"""
with self.get_cursor() as cur:
query = """
SELECT * FROM memory
WHERE active = true
AND importance >= %s
AND (expires_at IS NULL OR expires_at > NOW())
"""
params = [min_importance]
if type:
query += " AND type = %s"
params.append(type)
query += " ORDER BY importance DESC, last_used_at DESC NULLS LAST LIMIT %s"
params.append(limit)
cur.execute(query, params)
return [
Memory(
id=row["id"],
type=row["type"],
category=row["category"],
content=row["content"],
summary=row["summary"],
importance=row["importance"],
confidence=float(row["confidence"]) if row["confidence"] else 1.0,
uses=row["uses"],
last_used_at=row["last_used_at"],
verified=row["verified"]
)
for row in cur.fetchall()
]
def save_memory(self, memory: Memory) -> uuid.UUID:
"""Guarda una memoria"""
with self.get_cursor() as cur:
cur.execute(
"""
INSERT INTO memory
(type, category, content, summary, extracted_from_session, importance, confidence, expires_at)
VALUES (%s, %s, %s, %s, %s, %s, %s, %s)
RETURNING id
""",
(
memory.type, memory.category, memory.content, memory.summary,
str(memory.extracted_from_session) if memory.extracted_from_session else None,
memory.importance, memory.confidence, memory.expires_at
)
)
return cur.fetchone()["id"]
# ==========================================
# KNOWLEDGE
# ==========================================
def search_knowledge(
self,
keywords: List[str] = None,
category: str = None,
tags: List[str] = None,
limit: int = 5
) -> List[Knowledge]:
"""Busca en la base de conocimiento"""
with self.get_cursor() as cur:
query = "SELECT * FROM knowledge_base WHERE active = true"
params = []
if category:
query += " AND category = %s"
params.append(category)
if tags:
query += " AND tags && %s"
params.append(tags)
if keywords:
# Simple ILIKE search on content
keyword_conditions = []
for kw in keywords:
keyword_conditions.append("content ILIKE %s")
params.append(f"%{kw}%")
query += f" AND ({' OR '.join(keyword_conditions)})"
query += " ORDER BY priority DESC, access_count DESC LIMIT %s"
params.append(limit)
cur.execute(query, params)
return [
Knowledge(
id=row["id"],
title=row["title"],
category=row["category"],
tags=row["tags"] or [],
content=row["content"],
tokens_estimated=row["tokens_estimated"],
priority=row["priority"],
access_count=row["access_count"]
)
for row in cur.fetchall()
]
# ==========================================
# AMBIENT CONTEXT
# ==========================================
def get_latest_ambient_context(self) -> Optional[AmbientContext]:
"""Obtiene el contexto ambiental más reciente"""
with self.get_cursor() as cur:
cur.execute(
"""
SELECT * FROM ambient_context
WHERE expires_at > NOW()
ORDER BY captured_at DESC
LIMIT 1
"""
)
row = cur.fetchone()
if row:
return AmbientContext(
id=row["id"],
captured_at=row["captured_at"],
expires_at=row["expires_at"],
environment=row["environment"] or {},
system_state=row["system_state"] or {},
active_resources=row["active_resources"] or []
)
return None
def save_ambient_context(self, context: AmbientContext) -> int:
"""Guarda un snapshot de contexto ambiental"""
with self.get_cursor() as cur:
cur.execute(
"""
INSERT INTO ambient_context
(environment, system_state, active_resources, expires_at)
VALUES (%s, %s, %s, %s)
RETURNING id
""",
(
Json(context.environment),
Json(context.system_state),
Json(context.active_resources),
context.expires_at
)
)
return cur.fetchone()["id"]
# ==========================================
# ALGORITHMS
# ==========================================
def get_active_algorithm(self) -> Optional[Algorithm]:
"""Obtiene el algoritmo activo"""
with self.get_cursor() as cur:
cur.execute(
"""
SELECT * FROM context_algorithms
WHERE status = 'active'
ORDER BY activated_at DESC
LIMIT 1
"""
)
row = cur.fetchone()
if row:
return Algorithm(
id=row["id"],
code=row["code"],
name=row["name"],
description=row["description"],
version=row["version"],
status=AlgorithmStatus(row["status"]),
config=row["config"] or {},
selector_code=row["selector_code"],
times_used=row["times_used"],
avg_tokens_used=float(row["avg_tokens_used"]) if row["avg_tokens_used"] else None,
avg_relevance_score=float(row["avg_relevance_score"]) if row["avg_relevance_score"] else None,
avg_response_quality=float(row["avg_response_quality"]) if row["avg_response_quality"] else None,
parent_algorithm_id=row["parent_algorithm_id"],
activated_at=row["activated_at"]
)
return None
def get_algorithm(self, algorithm_id: uuid.UUID) -> Optional[Algorithm]:
"""Obtiene un algoritmo por ID"""
with self.get_cursor() as cur:
cur.execute(
"SELECT * FROM context_algorithms WHERE id = %s",
(str(algorithm_id),)
)
row = cur.fetchone()
if row:
return Algorithm(
id=row["id"],
code=row["code"],
name=row["name"],
description=row["description"],
version=row["version"],
status=AlgorithmStatus(row["status"]),
config=row["config"] or {},
selector_code=row["selector_code"],
times_used=row["times_used"]
)
return None
def fork_algorithm(
self,
source_id: uuid.UUID,
new_code: str,
new_name: str,
reason: str = None
) -> uuid.UUID:
"""Clona un algoritmo para experimentación"""
with self.get_cursor() as cur:
cur.execute(
"SELECT fork_algorithm(%s, %s, %s, %s) as id",
(str(source_id), new_code, new_name, reason)
)
return cur.fetchone()["id"]
def activate_algorithm(self, algorithm_id: uuid.UUID) -> bool:
"""Activa un algoritmo"""
with self.get_cursor() as cur:
cur.execute(
"SELECT activate_algorithm(%s) as success",
(str(algorithm_id),)
)
return cur.fetchone()["success"]
# ==========================================
# METRICS
# ==========================================
def record_metric(
self,
algorithm_id: uuid.UUID,
session_id: uuid.UUID,
log_entry_id: uuid.UUID,
tokens_budget: int,
tokens_used: int,
context_composition: Dict[str, Any],
latency_ms: int = None,
model_tokens_input: int = None,
model_tokens_output: int = None
) -> uuid.UUID:
"""Registra una métrica de uso"""
with self.get_cursor() as cur:
cur.execute(
"""
SELECT record_algorithm_metric(
%s, %s, %s, %s, %s, %s, %s, %s, %s
) as id
""",
(
str(algorithm_id), str(session_id), str(log_entry_id),
tokens_budget, tokens_used, Json(context_composition),
latency_ms, model_tokens_input, model_tokens_output
)
)
return cur.fetchone()["id"]
def update_metric_evaluation(
self,
metric_id: uuid.UUID,
relevance: float = None,
quality: float = None,
satisfaction: float = None,
method: str = "manual"
) -> bool:
"""Actualiza la evaluación de una métrica"""
with self.get_cursor() as cur:
cur.execute(
"SELECT update_metric_evaluation(%s, %s, %s, %s, %s) as success",
(str(metric_id), relevance, quality, satisfaction, method)
)
return cur.fetchone()["success"]
def get_algorithm_performance(self, algorithm_id: uuid.UUID = None) -> List[Dict[str, Any]]:
"""Obtiene estadísticas de rendimiento de algoritmos"""
with self.get_cursor() as cur:
query = "SELECT * FROM algorithm_performance"
params = []
if algorithm_id:
query += " WHERE id = %s"
params.append(str(algorithm_id))
cur.execute(query, params)
return [dict(row) for row in cur.fetchall()]


@@ -1,309 +0,0 @@
"""
Modelos de datos para Context Manager
"""
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional, List, Dict, Any
from enum import Enum
import uuid
import hashlib
import json
class MessageRole(Enum):
USER = "user"
ASSISTANT = "assistant"
SYSTEM = "system"
TOOL = "tool"
class ContextSource(Enum):
MEMORY = "memory"
KNOWLEDGE = "knowledge"
HISTORY = "history"
AMBIENT = "ambient"
DATASET = "dataset"
class AlgorithmStatus(Enum):
DRAFT = "draft"
TESTING = "testing"
ACTIVE = "active"
DEPRECATED = "deprecated"
@dataclass
class Session:
"""Sesión de conversación"""
id: uuid.UUID = field(default_factory=uuid.uuid4)
user_id: Optional[str] = None
instance_id: Optional[str] = None
model_provider: Optional[str] = None
model_name: Optional[str] = None
algorithm_id: Optional[uuid.UUID] = None
metadata: Dict[str, Any] = field(default_factory=dict)
started_at: datetime = field(default_factory=datetime.now)
ended_at: Optional[datetime] = None
total_messages: int = 0
total_tokens_input: int = 0
total_tokens_output: int = 0
@property
def hash(self) -> str:
content = f"{self.id}{self.started_at.isoformat()}"
return hashlib.sha256(content.encode()).hexdigest()
@dataclass
class Message:
"""Mensaje en el log inmutable"""
id: uuid.UUID = field(default_factory=uuid.uuid4)
session_id: Optional[uuid.UUID] = None
sequence_num: int = 0
role: MessageRole = MessageRole.USER
content: str = ""
hash: str = ""
hash_anterior: Optional[str] = None
# Model
model_provider: Optional[str] = None
model_name: Optional[str] = None
model_params: Dict[str, Any] = field(default_factory=dict)
# Context
context_snapshot: Optional[Dict[str, Any]] = None
context_algorithm_id: Optional[uuid.UUID] = None
context_tokens_used: Optional[int] = None
# Response
tokens_input: Optional[int] = None
tokens_output: Optional[int] = None
latency_ms: Optional[int] = None
# Metadata
created_at: datetime = field(default_factory=datetime.now)
source_ip: Optional[str] = None
user_agent: Optional[str] = None
def compute_hash(self) -> str:
"""Calcula el hash del mensaje (blockchain-style)"""
content = (
(self.hash_anterior or "") +
str(self.session_id) +
str(self.sequence_num) +
self.role.value +
self.content
)
return hashlib.sha256(content.encode()).hexdigest()
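The chaining scheme in `compute_hash` can be sketched standalone; the helper below mirrors its logic (it is illustrative, not part of the module) and shows why editing an earlier message breaks every later hash:

```python
import hashlib

def compute_hash(prev_hash, session_id, sequence_num, role, content):
    # Mirrors Message.compute_hash: each message folds in its predecessor's hash
    data = (prev_hash or "") + str(session_id) + str(sequence_num) + role + content
    return hashlib.sha256(data.encode()).hexdigest()

sid = "s-1"
h1 = compute_hash(None, sid, 0, "user", "hello")
h2 = compute_hash(h1, sid, 1, "assistant", "hi there")

# Rewriting message 0 changes h1, so the recomputed chain no longer matches h2
h1_bad = compute_hash(None, sid, 0, "user", "HELLO")
assert compute_hash(h1_bad, sid, 1, "assistant", "hi there") != h2
```

Verifying a log is then a single pass: recompute each hash from the stored `hash_anterior` and compare.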
@dataclass
class ContextBlock:
"""Bloque de contexto reutilizable"""
id: uuid.UUID = field(default_factory=uuid.uuid4)
code: str = ""
name: str = ""
description: Optional[str] = None
content: str = ""
content_hash: Optional[str] = None
category: str = "general"
priority: int = 50
tokens_estimated: int = 0
scope: str = "global"
project_id: Optional[uuid.UUID] = None
activation_rules: Dict[str, Any] = field(default_factory=dict)
active: bool = True
version: int = 1
created_at: datetime = field(default_factory=datetime.now)
updated_at: datetime = field(default_factory=datetime.now)
def __post_init__(self):
self.content_hash = hashlib.sha256(self.content.encode()).hexdigest()
self.tokens_estimated = len(self.content) // 4
@dataclass
class Memory:
"""Memoria a largo plazo"""
id: uuid.UUID = field(default_factory=uuid.uuid4)
type: str = "fact"
category: Optional[str] = None
content: str = ""
summary: Optional[str] = None
content_hash: Optional[str] = None
extracted_from_session: Optional[uuid.UUID] = None
importance: int = 50
confidence: float = 1.0
uses: int = 0
last_used_at: Optional[datetime] = None
expires_at: Optional[datetime] = None
active: bool = True
verified: bool = False
created_at: datetime = field(default_factory=datetime.now)
@dataclass
class Knowledge:
"""Base de conocimiento"""
id: uuid.UUID = field(default_factory=uuid.uuid4)
title: str = ""
category: str = ""
tags: List[str] = field(default_factory=list)
content: str = ""
content_hash: Optional[str] = None
tokens_estimated: int = 0
source_type: Optional[str] = None
source_ref: Optional[str] = None
priority: int = 50
access_count: int = 0
active: bool = True
created_at: datetime = field(default_factory=datetime.now)
@dataclass
class Algorithm:
"""Algoritmo de selección de contexto"""
id: uuid.UUID = field(default_factory=uuid.uuid4)
code: str = ""
name: str = ""
description: Optional[str] = None
version: str = "1.0.0"
status: AlgorithmStatus = AlgorithmStatus.DRAFT
config: Dict[str, Any] = field(default_factory=lambda: {
"max_tokens": 4000,
"sources": {
"system_prompts": True,
"context_blocks": True,
"memory": True,
"knowledge": True,
"history": True,
"ambient": True
},
"weights": {
"priority": 0.4,
"relevance": 0.3,
"recency": 0.2,
"frequency": 0.1
},
"history_config": {
"max_messages": 20,
"summarize_after": 10,
"include_system": False
},
"memory_config": {
"max_items": 15,
"min_importance": 30
},
"knowledge_config": {
"max_items": 5,
"require_keyword_match": True
}
})
selector_code: Optional[str] = None
times_used: int = 0
avg_tokens_used: Optional[float] = None
avg_relevance_score: Optional[float] = None
avg_response_quality: Optional[float] = None
parent_algorithm_id: Optional[uuid.UUID] = None
fork_reason: Optional[str] = None
created_at: datetime = field(default_factory=datetime.now)
activated_at: Optional[datetime] = None
deprecated_at: Optional[datetime] = None
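The default `config` weights sum to 1.0, suggesting a linear combination over normalized signals. The actual `selector_code` is not shown; the scorer below is a hypothetical sketch of how those weights might be applied:

```python
# Default weights from Algorithm.config; the combination rule is illustrative
weights = {"priority": 0.4, "relevance": 0.3, "recency": 0.2, "frequency": 0.1}
assert abs(sum(weights.values()) - 1.0) < 1e-9

def score(item: dict) -> float:
    # Each signal is assumed normalized to [0, 1]; missing signals count as 0
    return sum(weights[k] * item.get(k, 0.0) for k in weights)

high = score({"priority": 0.9, "relevance": 0.8, "recency": 0.5, "frequency": 0.2})
low = score({"priority": 0.1, "relevance": 0.2, "recency": 0.5, "frequency": 0.2})
```

Candidates would be ranked by this score and packed greedily until `max_tokens` is reached.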
@dataclass
class AlgorithmMetric:
"""Métrica de rendimiento de algoritmo"""
id: uuid.UUID = field(default_factory=uuid.uuid4)
algorithm_id: Optional[uuid.UUID] = None
session_id: Optional[uuid.UUID] = None
log_entry_id: Optional[uuid.UUID] = None
tokens_budget: int = 0
tokens_used: int = 0
token_efficiency: float = 0.0
context_composition: Dict[str, Any] = field(default_factory=dict)
latency_ms: Optional[int] = None
model_tokens_input: Optional[int] = None
model_tokens_output: Optional[int] = None
relevance_score: Optional[float] = None
response_quality: Optional[float] = None
user_satisfaction: Optional[float] = None
auto_evaluated: bool = False
evaluation_method: Optional[str] = None
recorded_at: datetime = field(default_factory=datetime.now)
@dataclass
class AmbientContext:
"""Contexto ambiental del sistema"""
id: int = 0
captured_at: datetime = field(default_factory=datetime.now)
expires_at: Optional[datetime] = None
environment: Dict[str, Any] = field(default_factory=dict)
system_state: Dict[str, Any] = field(default_factory=dict)
active_resources: List[Dict[str, Any]] = field(default_factory=list)
@dataclass
class ContextItem:
"""Item individual de contexto seleccionado"""
source: ContextSource
content: str
tokens: int
priority: int
metadata: Dict[str, Any] = field(default_factory=dict)
@dataclass
class SelectedContext:
"""Contexto seleccionado para enviar al modelo"""
items: List[ContextItem] = field(default_factory=list)
total_tokens: int = 0
algorithm_id: Optional[uuid.UUID] = None
composition: Dict[str, Any] = field(default_factory=dict)
def to_messages(self) -> List[Dict[str, str]]:
"""Convierte el contexto a formato de mensajes para la API"""
messages = []
# System context first
system_content = []
for item in self.items:
if item.source in [ContextSource.MEMORY, ContextSource.KNOWLEDGE, ContextSource.AMBIENT]:
system_content.append(item.content)
if system_content:
messages.append({
"role": "system",
"content": "\n\n".join(system_content)
})
# History messages
for item in self.items:
if item.source == ContextSource.HISTORY:
role = item.metadata.get("role", "user")
messages.append({
"role": role,
"content": item.content
})
return messages
def to_dict(self) -> Dict[str, Any]:
"""Serializa el contexto para snapshot"""
return {
"items": [
{
"source": item.source.value,
"content": item.content[:500] + "..." if len(item.content) > 500 else item.content,
"tokens": item.tokens,
"priority": item.priority,
"metadata": item.metadata
}
for item in self.items
],
"total_tokens": self.total_tokens,
"algorithm_id": str(self.algorithm_id) if self.algorithm_id else None,
"composition": self.composition
}
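The ordering `to_messages` produces (one merged system message, then history in sequence) can be shown with a standalone re-implementation; `Source` below is a stand-in for `ContextSource` so the sketch runs without the module:

```python
from enum import Enum

class Source(Enum):
    MEMORY = "memory"
    HISTORY = "history"

# Mirrors SelectedContext.to_messages: memory/knowledge/ambient items are merged
# into a single leading system message, then history items follow in order
items = [
    (Source.MEMORY, "User prefers concise answers.", {}),
    (Source.HISTORY, "What is TZZR?", {"role": "user"}),
    (Source.HISTORY, "A distributed infrastructure network.", {"role": "assistant"}),
]

messages = []
system_parts = [content for src, content, _ in items if src is Source.MEMORY]
if system_parts:
    messages.append({"role": "system", "content": "\n\n".join(system_parts)})
for src, content, meta in items:
    if src is Source.HISTORY:
        messages.append({"role": meta.get("role", "user"), "content": content})
```

Note that history items keep their original roles from `metadata`, defaulting to `"user"` when absent.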


@@ -1,18 +0,0 @@
"""
Adapters for AI providers
Lets the Context Manager be used with any AI model.
"""
from .base import BaseProvider, ProviderResponse
from .anthropic import AnthropicProvider
from .openai import OpenAIProvider
from .ollama import OllamaProvider
__all__ = [
"BaseProvider",
"ProviderResponse",
"AnthropicProvider",
"OpenAIProvider",
"OllamaProvider",
]


@@ -1,110 +0,0 @@
"""
Adapter for Anthropic (Claude)
"""
import os
from typing import List, Dict, Any, Optional
from .base import BaseProvider, ProviderResponse
from ..models import SelectedContext, ContextSource
try:
import anthropic
HAS_ANTHROPIC = True
except ImportError:
HAS_ANTHROPIC = False
class AnthropicProvider(BaseProvider):
"""Proveedor para modelos de Anthropic (Claude)"""
provider_name = "anthropic"
def __init__(
self,
api_key: Optional[str] = None,
model: str = "claude-sonnet-4-20250514",
max_tokens: int = 4096,
**kwargs
):
super().__init__(api_key=api_key, model=model, **kwargs)
if not HAS_ANTHROPIC:
raise ImportError("anthropic no está instalado. Ejecuta: pip install anthropic")
self.api_key = api_key or os.getenv("ANTHROPIC_API_KEY")
self.model = model
self.max_tokens = max_tokens
self.client = anthropic.Anthropic(api_key=self.api_key)
def format_context(self, context: SelectedContext) -> tuple:
"""
Formatea el contexto para la API de Anthropic.
Returns:
Tuple de (system_prompt, messages)
"""
system_parts = []
messages = []
for item in context.items:
if item.source in [ContextSource.MEMORY, ContextSource.KNOWLEDGE,
ContextSource.AMBIENT, ContextSource.DATASET]:
system_parts.append(item.content)
elif item.source == ContextSource.HISTORY:
role = item.metadata.get("role", "user")
messages.append({
"role": role,
"content": item.content
})
system_prompt = "\n\n".join(system_parts) if system_parts else None
return system_prompt, messages
def send_message(
self,
message: str,
context: Optional[SelectedContext] = None,
system_prompt: Optional[str] = None,
temperature: float = 1.0,
**kwargs
) -> ProviderResponse:
"""Envía mensaje a Claude"""
# Formatear contexto
context_system, context_messages = self.format_context(context) if context else (None, [])
# Combine the system prompts
final_system = ""
if system_prompt:
final_system = system_prompt
if context_system:
final_system = f"{final_system}\n\n{context_system}" if final_system else context_system
# Build the message list
messages = context_messages.copy()
messages.append({"role": "user", "content": message})
# Call the API
response, latency_ms = self._measure_latency(
self.client.messages.create,
model=self.model,
max_tokens=kwargs.get("max_tokens", self.max_tokens),
system=final_system if final_system else anthropic.NOT_GIVEN,
messages=messages,
temperature=temperature
)
return ProviderResponse(
content=response.content[0].text,
model=response.model,
tokens_input=response.usage.input_tokens,
tokens_output=response.usage.output_tokens,
latency_ms=latency_ms,
finish_reason=response.stop_reason,
raw_response={
"id": response.id,
"type": response.type,
"role": response.role
}
)


@@ -1,85 +0,0 @@
"""
Base class for AI providers
"""
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import List, Dict, Any, Optional
import time
from ..models import SelectedContext
@dataclass
class ProviderResponse:
"""Respuesta de un proveedor de IA"""
content: str
model: str
tokens_input: int = 0
tokens_output: int = 0
latency_ms: int = 0
finish_reason: str = "stop"
raw_response: Dict[str, Any] = field(default_factory=dict)
class BaseProvider(ABC):
"""
Clase base para adaptadores de proveedores de IA.
Cada proveedor debe implementar:
- send_message(): Enviar mensaje y recibir respuesta
- format_context(): Formatear contexto al formato del proveedor
"""
provider_name: str = "base"
def __init__(self, api_key: Optional[str] = None, model: Optional[str] = None, **kwargs):
self.api_key = api_key
self.model = model
self.extra_config = kwargs
@abstractmethod
def send_message(
self,
message: str,
context: Optional[SelectedContext] = None,
system_prompt: Optional[str] = None,
**kwargs
) -> ProviderResponse:
"""
Envía un mensaje al modelo y retorna la respuesta.
Args:
message: Mensaje del usuario
context: Contexto seleccionado
system_prompt: Prompt de sistema adicional
**kwargs: Parámetros adicionales del modelo
Returns:
ProviderResponse con la respuesta
"""
pass
@abstractmethod
def format_context(self, context: SelectedContext) -> List[Dict[str, str]]:
"""
Formatea el contexto al formato de mensajes del proveedor.
Args:
context: Contexto seleccionado
Returns:
Lista de mensajes en el formato del proveedor
"""
pass
def estimate_tokens(self, text: str) -> int:
"""Estimación simple de tokens (4 caracteres por token)"""
return len(text) // 4
def _measure_latency(self, func, *args, **kwargs):
"""Mide la latencia de una función"""
start = time.time()
result = func(*args, **kwargs)
latency_ms = int((time.time() - start) * 1000)
return result, latency_ms
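The two concrete helpers on the base class are self-contained; a standalone version (module-level functions rather than methods, for illustration) shows their behavior:

```python
import time

def estimate_tokens(text: str) -> int:
    # Same heuristic as BaseProvider.estimate_tokens: roughly 4 characters per token
    return len(text) // 4

def measure_latency(func, *args, **kwargs):
    # Same shape as BaseProvider._measure_latency: (result, elapsed milliseconds)
    start = time.time()
    result = func(*args, **kwargs)
    return result, int((time.time() - start) * 1000)

result, ms = measure_latency(sum, [1, 2, 3])
```

The 4-chars-per-token heuristic is a coarse budget estimate, not a real tokenizer; providers report exact counts in their usage fields.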


@@ -1,120 +0,0 @@
"""
Adapter for OpenAI (GPT)
"""
import os
from typing import List, Dict, Any, Optional
from .base import BaseProvider, ProviderResponse
from ..models import SelectedContext, ContextSource
try:
import openai
HAS_OPENAI = True
except ImportError:
HAS_OPENAI = False
class OpenAIProvider(BaseProvider):
"""Proveedor para modelos de OpenAI (GPT)"""
provider_name = "openai"
def __init__(
self,
api_key: Optional[str] = None,
model: str = "gpt-4",
max_tokens: int = 4096,
base_url: Optional[str] = None,
**kwargs
):
super().__init__(api_key=api_key, model=model, **kwargs)
if not HAS_OPENAI:
raise ImportError("openai no está instalado. Ejecuta: pip install openai")
self.api_key = api_key or os.getenv("OPENAI_API_KEY")
self.model = model
self.max_tokens = max_tokens
self.client = openai.OpenAI(
api_key=self.api_key,
base_url=base_url
)
def format_context(self, context: SelectedContext) -> List[Dict[str, str]]:
"""
Formatea el contexto para la API de OpenAI.
Returns:
Lista de mensajes en formato OpenAI
"""
messages = []
system_parts = []
for item in context.items:
if item.source in [ContextSource.MEMORY, ContextSource.KNOWLEDGE,
ContextSource.AMBIENT, ContextSource.DATASET]:
system_parts.append(item.content)
elif item.source == ContextSource.HISTORY:
role = item.metadata.get("role", "user")
messages.append({
"role": role,
"content": item.content
})
# Insert the system message at the beginning
if system_parts:
messages.insert(0, {
"role": "system",
"content": "\n\n".join(system_parts)
})
return messages
def send_message(
self,
message: str,
context: Optional[SelectedContext] = None,
system_prompt: Optional[str] = None,
temperature: float = 1.0,
**kwargs
) -> ProviderResponse:
"""Envía mensaje a GPT"""
# Formatear contexto
messages = self.format_context(context) if context else []
# Prepend the additional system prompt
if system_prompt:
if messages and messages[0]["role"] == "system":
messages[0]["content"] = f"{system_prompt}\n\n{messages[0]['content']}"
else:
messages.insert(0, {"role": "system", "content": system_prompt})
# Append the user message
messages.append({"role": "user", "content": message})
# Call the API
response, latency_ms = self._measure_latency(
self.client.chat.completions.create,
model=self.model,
messages=messages,
max_tokens=kwargs.get("max_tokens", self.max_tokens),
temperature=temperature
)
choice = response.choices[0]
return ProviderResponse(
content=choice.message.content,
model=response.model,
tokens_input=response.usage.prompt_tokens,
tokens_output=response.usage.completion_tokens,
latency_ms=latency_ms,
finish_reason=choice.finish_reason,
raw_response={
"id": response.id,
"created": response.created,
"system_fingerprint": response.system_fingerprint
}
)
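The system-prompt merge in `send_message` (an extra prompt is prepended to an existing system message, otherwise inserted as a new one) can be isolated in a standalone sketch:

```python
def merge_system(messages, system_prompt):
    # Mirrors the merge logic in OpenAIProvider.send_message
    messages = list(messages)
    if system_prompt:
        if messages and messages[0]["role"] == "system":
            messages[0] = {
                "role": "system",
                "content": f"{system_prompt}\n\n{messages[0]['content']}",
            }
        else:
            messages.insert(0, {"role": "system", "content": system_prompt})
    return messages

merged = merge_system([{"role": "system", "content": "ctx"}], "base prompt")
```

This keeps a single system message at index 0, which is what the chat completions format expects.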