Kaloyan Danchev
872ed24f0c
Add performance features: caching, cost tracking, retry, compaction, classification, scrubbing
Inspired by zeroclaw's lightweight patterns for slow hardware:
- Response cache (SQLite + SHA-256 keyed) to skip redundant LLM calls
- History compaction: LLM-summarize old messages once the history exceeds 50 messages
- Query classifier routes simple/research queries to cheaper models
- Credential scrubbing removes secrets from tool output before sending to LLM
- Cost tracker with daily/monthly budget enforcement (SQLite)
- Resilient provider wrapper with retry, exponential backoff, and a fallback provider
- Approval engine gains a session-scoped "always allow" option and an audit log
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 09:20:52 +02:00
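The retry-with-backoff-plus-fallback pattern from the commit above can be sketched as follows. Function names and the delay schedule are assumptions for illustration, not the project's implementation:

```python
import time
from typing import Callable

def call_with_retry(
    primary: Callable[[str], str],
    fallback: Callable[[str], str],
    prompt: str,
    attempts: int = 3,
    base_delay: float = 1.0,
) -> str:
    """Try the primary provider with exponential backoff, then fall back."""
    for i in range(attempts):
        try:
            return primary(prompt)
        except Exception:
            if i < attempts - 1:
                # Delays double each round: base, 2*base, 4*base, ...
                time.sleep(base_delay * (2 ** i))
    # Primary exhausted all attempts; hand the request to the fallback provider.
    return fallback(prompt)
```

A production version would catch only transient errors (timeouts, rate limits) rather than bare `Exception`, so that bad requests fail fast instead of being retried.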
Kaloyan Danchev
b3608b35fa
Connect to Discord, switch to NVIDIA NIM providers
- Enable Discord channel, respond to all messages from allowed users
- Switch all agents to NVIDIA NIM (Kimi K2.5, DeepSeek V3.1)
- Auto-approve all tools for non-interactive Docker deployment
- Fix tool call arguments serialization (dict → JSON string)
- Fix Dockerfile to copy source before uv sync
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 15:34:15 +02:00
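The serialization fix mentioned above stems from a common gotcha: OpenAI-compatible chat APIs carry tool-call arguments as a JSON *string* inside `function.arguments`, not as a nested object. A minimal sketch of the conversion (the helper name is invented for illustration):

```python
import json

def serialize_tool_call(call_id: str, name: str, arguments) -> dict:
    """Build an OpenAI-style tool call, coercing dict arguments to a JSON string."""
    if isinstance(arguments, dict):
        # APIs reject a raw dict here; the spec requires a JSON-encoded string.
        arguments = json.dumps(arguments)
    return {
        "id": call_id,
        "type": "function",
        "function": {"name": name, "arguments": arguments},
    }
```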
Kaloyan Danchev
378d599125
Initial implementation of xtrm-agent multi-agent system
...
Multi-agent AI automation system with shared message bus, specialized
roles (coder/researcher/reviewer), and deny-by-default security.
- Config system with Pydantic validation and YAML loading
- Async message bus with inter-agent delegation
- LLM providers: Anthropic (Claude) and LiteLLM (DeepSeek/Kimi/MiniMax)
- Tool system: registry, builtins (file/bash/web), approval engine, MCP client
- Agent engine with tool-calling loop and orchestrator for multi-agent management
- CLI channel (REPL) and Discord channel
- Docker + Dockge deployment config
- Typer CLI: chat, serve, status, agents commands
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 10:21:42 +02:00
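The shared async message bus with inter-agent delegation described in this initial commit can be sketched with per-agent asyncio queues. This is a simplified model under assumed names, not the repository's actual bus:

```python
import asyncio

class MessageBus:
    """Minimal async bus: each registered agent gets its own queue, addressed by name."""

    def __init__(self):
        self.queues: dict[str, asyncio.Queue] = {}

    def register(self, agent: str) -> asyncio.Queue:
        self.queues[agent] = asyncio.Queue()
        return self.queues[agent]

    async def send(self, to: str, message: dict) -> None:
        # Delegation is just a message addressed to another agent's queue.
        await self.queues[to].put(message)

    async def receive(self, agent: str) -> dict:
        return await self.queues[agent].get()

async def demo() -> dict:
    bus = MessageBus()
    bus.register("coder")
    bus.register("reviewer")
    # The coder delegates a review task to the reviewer.
    await bus.send("reviewer", {"from": "coder", "task": "review diff"})
    return await bus.receive("reviewer")
```

Keeping delegation as plain addressed messages means the orchestrator only needs to know agent names, not agent internals.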