Initial implementation of xtrm-agent multi-agent system
Multi-agent AI automation system with shared message bus, specialized roles (coder/researcher/reviewer), and deny-by-default security.

- Config system with Pydantic validation and YAML loading
- Async message bus with inter-agent delegation
- LLM providers: Anthropic (Claude) and LiteLLM (DeepSeek/Kimi/MiniMax)
- Tool system: registry, builtins (file/bash/web), approval engine, MCP client
- Agent engine with tool-calling loop and orchestrator for multi-agent management
- CLI channel (REPL) and Discord channel
- Docker + Dockge deployment config
- Typer CLI: chat, serve, status, agents commands

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
.gitignore (vendored, new file, 17 lines)
@@ -0,0 +1,17 @@
__pycache__/
*.py[cod]
*$py.class
*.egg-info/
dist/
build/
.eggs/
*.egg
.venv/
venv/
.env
*.log
.mypy_cache/
.ruff_cache/
.pytest_cache/
data/
playground.html
CLAUDE.md (new file, 34 lines)
@@ -0,0 +1,34 @@
# xtrm-agent — Claude Code Instructions

## Project
Multi-agent AI automation system with shared message bus, specialized roles, and deny-by-default security.

## Stack
- Python 3.12+, uv for dependency management
- Anthropic SDK for Claude, LiteLLM for DeepSeek/Kimi/MiniMax
- discord.py for Discord, prompt-toolkit + rich for CLI
- MCP Python SDK for tool server connections
- Pydantic for config validation, typer for CLI

## Structure
- `xtrm_agent/` — Main package
- `xtrm_agent/llm/` — LLM providers (Anthropic, LiteLLM)
- `xtrm_agent/tools/` — Tool registry, builtins, approval, MCP, delegate
- `xtrm_agent/channels/` — CLI and Discord channels
- `agents/` — Agent definitions (markdown + YAML frontmatter)
- `config.yaml` — Main configuration

## Commands
- `uv run xtrm-agent chat` — Interactive REPL
- `uv run xtrm-agent chat -m "msg"` — Single-shot
- `uv run xtrm-agent serve` — Production (Discord + all agents)
- `uv run xtrm-agent status` — Show config
- `uv run xtrm-agent agents` — List agents

## Key Patterns
- Message bus (asyncio queues) decouples channels from agents
- Router resolves @mentions, channel defaults, delegation targets
- Per-agent tool filtering via registry.filtered()
- Deny-by-default approval engine
- Agent definitions in markdown with YAML frontmatter
- Inter-agent delegation via DelegateTool + AgentMessage bus
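The per-agent tool filtering named above (`registry.filtered()`) lives in `xtrm_agent/tools/registry.py`, which is not part of this diff excerpt. As an illustration only, here is a minimal sketch of what such a filtered registry view could look like; all names besides `filtered()` are hypothetical:

```python
from typing import Any, Callable, Dict


class ToolRegistry:
    """Hypothetical sketch: filtered() returns a view restricted to the
    tool names an agent declares in its markdown frontmatter."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = fn

    def names(self) -> list[str]:
        return list(self._tools)

    def filtered(self, allowed: list[str]) -> "ToolRegistry":
        # Build a new registry containing only the allowed tools;
        # names not present in the parent registry are silently dropped.
        sub = ToolRegistry()
        for name in allowed:
            if name in self._tools:
                sub.register(name, self._tools[name])
        return sub


registry = ToolRegistry()
registry.register("read_file", lambda path: f"contents of {path}")
registry.register("bash", lambda cmd: f"ran {cmd}")

# The coder agent lists delegate, but it was never registered here,
# so the filtered view holds only the two known tools.
coder_view = registry.filtered(["read_file", "bash", "delegate"])
print(sorted(coder_view.names()))  # ['bash', 'read_file']
```

This keeps the global registry as the single source of truth while each agent's engine only ever sees its own whitelisted subset.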
Dockerfile (new file, 12 lines)
@@ -0,0 +1,12 @@
FROM python:3.12-slim

RUN pip install uv

WORKDIR /app

COPY pyproject.toml .
RUN uv sync --no-dev

COPY . .

CMD ["uv", "run", "xtrm-agent", "serve"]
README.md (new file, 42 lines)
@@ -0,0 +1,42 @@
# xtrm-agent

Multi-agent AI automation system with shared message bus, specialized roles, and deny-by-default security.

## Architecture

Multiple specialized agents share a message bus and can delegate to each other:

- **Coder Agent** — Claude, file+bash tools, coding-focused
- **Researcher Agent** — DeepSeek/Kimi, web tools, research-focused
- **Reviewer Agent** — Claude, read-only tools, code review

## Quick Start

```bash
# Install
uv sync

# Interactive chat (default: coder agent)
uv run xtrm-agent chat

# Target a specific agent
uv run xtrm-agent chat --agent researcher

# Single-shot message
uv run xtrm-agent chat -m "write a hello world script"

# Run all agents + Discord bot
uv run xtrm-agent serve

# Show status
uv run xtrm-agent status
```

## Configuration

Edit `config.yaml` to configure providers, agents, tools, and channels.
Agent definitions live in `agents/*.md` with YAML frontmatter.

## Deployment

Deploy via Dockge on Unraid using the included `compose.yaml`.
agents/coder.md (new file, 31 lines)
@@ -0,0 +1,31 @@
---
name: coder
provider: anthropic
model: claude-sonnet-4-5-20250929
temperature: 0.3
max_iterations: 30
tools:
  - read_file
  - write_file
  - edit_file
  - list_dir
  - bash
  - delegate
---

# Coder Agent

You are a coding specialist. You write, edit, and debug code.

## Capabilities
- Read, write, and edit files in the workspace
- Execute shell commands
- Delegate research tasks to @researcher
- Delegate code review to @reviewer

## Guidelines
- Write clean, minimal code
- Test changes when possible
- Delegate web research to @researcher instead of doing it yourself
- Ask @reviewer to check complex changes before finalizing
- Keep responses concise and focused on the code
agents/researcher.md (new file, 27 lines)
@@ -0,0 +1,27 @@
---
name: researcher
provider: deepseek
model: deepseek/deepseek-chat-v3.1
temperature: 0.5
max_iterations: 20
tools:
  - web_fetch
  - read_file
  - list_dir
  - delegate
---

# Researcher Agent

You are a research specialist. You find information and summarize it.

## Capabilities
- Fetch and analyze web content
- Read files for context
- Delegate coding tasks to @coder

## Guidelines
- Be thorough in research — check multiple sources when possible
- Summarize findings clearly with key points
- Include source URLs when relevant
- Delegate any coding or file editing to @coder
agents/reviewer.md (new file, 27 lines)
@@ -0,0 +1,27 @@
---
name: reviewer
provider: anthropic
model: claude-sonnet-4-5-20250929
temperature: 0.2
max_iterations: 15
tools:
  - read_file
  - list_dir
  - delegate
---

# Reviewer Agent

You are a code review specialist. You analyze code for quality, bugs, and security issues.

## Capabilities
- Read files to review code
- List directory structures
- Delegate fixes to @coder

## Guidelines
- Focus on correctness, security, and maintainability
- Point out specific issues with file paths and line references
- Suggest concrete improvements
- Delegate any code changes to @coder — never modify files yourself
- Be direct and constructive in feedback
compose.yaml (new file, 24 lines)
@@ -0,0 +1,24 @@
services:
  xtrm-agent:
    build: .
    container_name: xtrm-agent
    restart: unless-stopped
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - DEEPSEEK_API_KEY=${DEEPSEEK_API_KEY}
      - OPENROUTER_API_KEY=${OPENROUTER_API_KEY}
      - MINIMAX_API_KEY=${MINIMAX_API_KEY}
      - DISCORD_BOT_TOKEN=${DISCORD_BOT_TOKEN}
    volumes:
      - ./config.yaml:/app/config.yaml:ro
      - ./agents:/app/agents:ro
      - /mnt/user/appdata/xtrm-agent/data:/app/data
    networks:
      - dockerproxy
    labels:
      net.unraid.docker.managed: dockerman
      net.unraid.docker.icon: https://raw.githubusercontent.com/walkxcode/dashboard-icons/main/png/robot.png

networks:
  dockerproxy:
    external: true
config.yaml (new file, 47 lines)
@@ -0,0 +1,47 @@
llm:
  providers:
    anthropic:
      model: claude-sonnet-4-5-20250929
      max_tokens: 8192
    deepseek:
      provider: litellm
      model: deepseek/deepseek-chat-v3.1
    kimi:
      provider: litellm
      model: openrouter/moonshotai/kimi-k2.5
    minimax:
      provider: litellm
      model: minimax/MiniMax-M2.1

channels:
  cli:
    enabled: true
    default_agent: coder
  discord:
    enabled: false
    token_env: DISCORD_BOT_TOKEN
    default_agent: coder
    allowed_users: []

tools:
  workspace: ./data
  auto_approve:
    - read_file
    - list_dir
    - web_fetch
    - delegate
  require_approval:
    - bash
    - write_file
    - edit_file

mcp_servers: {}

agents:
  coder: agents/coder.md
  researcher: agents/researcher.md
  reviewer: agents/reviewer.md

orchestrator:
  max_concurrent: 5
  delegation_timeout: 120
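The `auto_approve` / `require_approval` lists in this config drive the deny-by-default policy. The `ApprovalEngine` itself (`xtrm_agent/tools/approval.py`) is not part of this excerpt, so the following is only a sketch of the decision logic under the assumption that anything not explicitly listed is denied; the `_ask_user` helper is hypothetical:

```python
import asyncio


class ApprovalEngine:
    """Hypothetical sketch of a deny-by-default approval check."""

    def __init__(self, auto_approve: list[str], require_approval: list[str]) -> None:
        self.auto_approve = set(auto_approve)
        self.require_approval = set(require_approval)

    async def check(self, name: str, arguments: dict) -> bool:
        if name in self.auto_approve:
            return True  # safe, read-only style tools run without prompting
        if name in self.require_approval:
            # A real implementation would prompt the user via the channel.
            return await self._ask_user(name, arguments)
        return False  # deny-by-default: tools in neither list never run

    async def _ask_user(self, name: str, arguments: dict) -> bool:
        # Placeholder: no interactive channel here, so decline.
        return False


engine = ApprovalEngine(
    auto_approve=["read_file", "list_dir", "web_fetch", "delegate"],
    require_approval=["bash", "write_file", "edit_file"],
)
print(asyncio.run(engine.check("read_file", {})))  # True
print(asyncio.run(engine.check("rm_rf", {})))      # False
```

The key property is the final `return False`: a tool absent from both lists is rejected without ever reaching the user, which is what makes the policy deny-by-default rather than prompt-by-default.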
pyproject.toml (new file, 27 lines)
@@ -0,0 +1,27 @@
[project]
name = "xtrm-agent"
version = "0.1.0"
description = "Multi-agent AI automation system with shared bus and specialized roles"
requires-python = ">=3.12"
dependencies = [
    "anthropic>=0.79.0",
    "litellm>=1.60.0",
    "discord.py>=2.6.0",
    "mcp>=1.0.0",
    "typer>=0.15.0",
    "rich>=13.0.0",
    "prompt-toolkit>=3.0.0",
    "pydantic>=2.0.0",
    "pydantic-settings>=2.0.0",
    "pyyaml>=6.0",
    "httpx>=0.28.0",
    "loguru>=0.7.0",
    "json-repair>=0.30.0",
]

[project.scripts]
xtrm-agent = "xtrm_agent.main:app"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
tests/__init__.py (new empty file)
xtrm_agent/__init__.py (new empty file)
xtrm_agent/__main__.py (new file, 3 lines)
@@ -0,0 +1,3 @@
from xtrm_agent.main import app

app()
xtrm_agent/bus.py (new file, 92 lines)
@@ -0,0 +1,92 @@
"""Shared message bus — async queues for inter-component communication."""

from __future__ import annotations

import asyncio
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any


@dataclass
class InboundMessage:
    """Message from a channel (user) heading to an agent."""

    channel: str
    sender_id: str
    chat_id: str
    content: str
    target_agent: str | None = None
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    metadata: dict[str, Any] = field(default_factory=dict)


@dataclass
class OutboundMessage:
    """Message from an agent heading back to a channel."""

    channel: str
    chat_id: str
    content: str
    reply_to: str | None = None
    metadata: dict[str, Any] = field(default_factory=dict)


@dataclass
class AgentMessage:
    """Inter-agent delegation message."""

    from_agent: str
    to_agent: str
    task: str
    request_id: str = ""
    response: str | None = None


class MessageBus:
    """Async queue-based message bus."""

    def __init__(self) -> None:
        self.inbound: asyncio.Queue[InboundMessage] = asyncio.Queue()
        self.outbound: asyncio.Queue[OutboundMessage] = asyncio.Queue()
        self.agent_messages: asyncio.Queue[AgentMessage] = asyncio.Queue()
        self._outbound_subscribers: dict[str, list[asyncio.Queue[OutboundMessage]]] = {}

    async def publish_inbound(self, msg: InboundMessage) -> None:
        await self.inbound.put(msg)

    async def consume_inbound(self, timeout: float = 1.0) -> InboundMessage | None:
        try:
            return await asyncio.wait_for(self.inbound.get(), timeout=timeout)
        except asyncio.TimeoutError:
            return None

    async def publish_outbound(self, msg: OutboundMessage) -> None:
        await self.outbound.put(msg)
        # Also dispatch to channel-specific subscribers
        channel_queues = self._outbound_subscribers.get(msg.channel, [])
        for q in channel_queues:
            await q.put(msg)

    async def consume_outbound(self, timeout: float = 1.0) -> OutboundMessage | None:
        try:
            return await asyncio.wait_for(self.outbound.get(), timeout=timeout)
        except asyncio.TimeoutError:
            return None

    def subscribe_outbound(self, channel: str) -> asyncio.Queue[OutboundMessage]:
        """Subscribe to outbound messages for a specific channel."""
        q: asyncio.Queue[OutboundMessage] = asyncio.Queue()
        self._outbound_subscribers.setdefault(channel, []).append(q)
        return q

    async def publish_agent_message(self, msg: AgentMessage) -> None:
        await self.agent_messages.put(msg)

    async def consume_agent_message(self, timeout: float = 1.0) -> AgentMessage | None:
        try:
            return await asyncio.wait_for(self.agent_messages.get(), timeout=timeout)
        except asyncio.TimeoutError:
            return None
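As a self-contained usage sketch of the fan-out pattern in `publish_outbound` above (publish to the global queue plus every per-channel subscriber queue), here is a condensed stand-in for `MessageBus` that can run outside the package; `MiniBus` and `demo` are illustrative names, not part of the repo:

```python
import asyncio


class MiniBus:
    """Condensed stand-in for MessageBus, reduced to the outbound fan-out:
    publish puts on a global queue plus each per-channel subscriber queue."""

    def __init__(self) -> None:
        self.outbound: asyncio.Queue[tuple[str, str]] = asyncio.Queue()
        self._subs: dict[str, list[asyncio.Queue[tuple[str, str]]]] = {}

    def subscribe_outbound(self, channel: str) -> asyncio.Queue[tuple[str, str]]:
        q: asyncio.Queue[tuple[str, str]] = asyncio.Queue()
        self._subs.setdefault(channel, []).append(q)
        return q

    async def publish_outbound(self, channel: str, content: str) -> None:
        await self.outbound.put((channel, content))
        for q in self._subs.get(channel, []):
            await q.put((channel, content))


async def demo() -> str:
    bus = MiniBus()
    cli_q = bus.subscribe_outbound("cli")
    # Only messages for "cli" reach the CLI subscriber queue.
    await bus.publish_outbound("cli", "hello from agent")
    await bus.publish_outbound("discord", "not for us")
    _channel, content = await cli_q.get()
    return content


print(asyncio.run(demo()))  # hello from agent
```

This is why `CLIChannel` and `DiscordChannel` can each call `subscribe_outbound` once in their constructors and then await only their own queue: the bus routes by channel name at publish time.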
xtrm_agent/channels/__init__.py (new empty file)
xtrm_agent/channels/base.py (new file, 22 lines)
@@ -0,0 +1,22 @@
"""Base channel interface."""

from __future__ import annotations

from abc import ABC, abstractmethod

from xtrm_agent.bus import MessageBus


class BaseChannel(ABC):
    """Abstract base for all input/output channels."""

    def __init__(self, bus: MessageBus) -> None:
        self.bus = bus

    @abstractmethod
    async def start(self) -> None:
        """Start listening for messages."""

    @abstractmethod
    async def stop(self) -> None:
        """Clean up and stop."""
xtrm_agent/channels/cli.py (new file, 110 lines)
@@ -0,0 +1,110 @@
"""Interactive CLI channel — REPL with prompt_toolkit + rich."""

from __future__ import annotations

import asyncio

from loguru import logger
from prompt_toolkit import PromptSession
from prompt_toolkit.patch_stdout import patch_stdout
from rich.console import Console
from rich.markdown import Markdown

from xtrm_agent.bus import InboundMessage, MessageBus, OutboundMessage
from xtrm_agent.channels.base import BaseChannel


class CLIChannel(BaseChannel):
    """Interactive REPL channel."""

    def __init__(
        self,
        bus: MessageBus,
        default_agent: str = "coder",
    ) -> None:
        super().__init__(bus)
        self.default_agent = default_agent
        self.console = Console()
        self._running = False
        self._outbound_queue = bus.subscribe_outbound("cli")

    async def start(self) -> None:
        """Run the interactive REPL."""
        self._running = True
        session: PromptSession[str] = PromptSession()

        self.console.print("[bold]xtrm-agent[/bold] — type a message or @agent_name to target an agent")
        self.console.print("Type [bold]/quit[/bold] to exit\n")

        # Start output listener
        output_task = asyncio.create_task(self._output_loop())

        try:
            while self._running:
                try:
                    with patch_stdout():
                        user_input = await session.prompt_async("you> ")
                except (EOFError, KeyboardInterrupt):
                    break

                text = user_input.strip()
                if not text:
                    continue
                if text.lower() in ("/quit", "/exit"):
                    break

                msg = InboundMessage(
                    channel="cli",
                    sender_id="user",
                    chat_id="cli",
                    content=text,
                )
                await self.bus.publish_inbound(msg)

                # Wait for the response
                try:
                    out_msg = await asyncio.wait_for(self._outbound_queue.get(), timeout=300)
                    self._render_response(out_msg)
                except asyncio.TimeoutError:
                    self.console.print("[red]Timed out waiting for response[/red]")
        finally:
            self._running = False
            output_task.cancel()

    async def _output_loop(self) -> None:
        """Background task to handle unsolicited outbound messages."""
        # This handles messages that arrive outside the normal request/response flow
        # (e.g., delegation results, notifications)
        pass

    def _render_response(self, msg: OutboundMessage) -> None:
        """Render agent response with rich markdown."""
        self.console.print()
        self.console.print(Markdown(msg.content))
        self.console.print()

    async def stop(self) -> None:
        self._running = False


async def run_single_message(
    bus: MessageBus,
    message: str,
    agent: str | None = None,
    outbound_queue: asyncio.Queue[OutboundMessage] | None = None,
) -> str:
    """Send a single message and wait for the response."""
    if outbound_queue is None:
        outbound_queue = bus.subscribe_outbound("cli")

    msg = InboundMessage(
        channel="cli",
        sender_id="user",
        chat_id="cli",
        content=message,
        target_agent=agent,
    )
    await bus.publish_inbound(msg)

    out = await asyncio.wait_for(outbound_queue.get(), timeout=300)
    return out.content
xtrm_agent/channels/discord.py (new file, 98 lines)
@@ -0,0 +1,98 @@
"""Discord channel — bot integration via discord.py."""

from __future__ import annotations

import asyncio
import os

import discord
from loguru import logger

from xtrm_agent.bus import InboundMessage, MessageBus, OutboundMessage
from xtrm_agent.channels.base import BaseChannel


class DiscordChannel(BaseChannel):
    """Discord bot channel."""

    def __init__(
        self,
        bus: MessageBus,
        token_env: str = "DISCORD_BOT_TOKEN",
        default_agent: str = "coder",
        allowed_users: list[str] | None = None,
    ) -> None:
        super().__init__(bus)
        self.token_env = token_env
        self.default_agent = default_agent
        self.allowed_users = set(allowed_users or [])
        self._outbound_queue = bus.subscribe_outbound("discord")

        intents = discord.Intents.default()
        intents.message_content = True
        self.client = discord.Client(intents=intents)
        self._setup_events()

    def _setup_events(self) -> None:
        @self.client.event
        async def on_ready() -> None:
            logger.info(f"Discord bot connected as {self.client.user}")

        @self.client.event
        async def on_message(message: discord.Message) -> None:
            if message.author == self.client.user:
                return
            if message.author.bot:
                return

            # Check allowlist
            if self.allowed_users and str(message.author.id) not in self.allowed_users:
                return

            # Only respond to mentions or DMs
            is_dm = isinstance(message.channel, discord.DMChannel)
            is_mentioned = self.client.user in message.mentions if self.client.user else False
            if not is_dm and not is_mentioned:
                return

            content = message.content
            # Strip bot mention from content
            if self.client.user:
                content = content.replace(f"<@{self.client.user.id}>", "").strip()

            msg = InboundMessage(
                channel="discord",
                sender_id=str(message.author.id),
                chat_id=str(message.channel.id),
                content=content,
                metadata={"guild_id": str(message.guild.id) if message.guild else ""},
            )
            await self.bus.publish_inbound(msg)

            # Wait for response and send it
            try:
                async with message.channel.typing():
                    out = await asyncio.wait_for(self._outbound_queue.get(), timeout=300)
                    await self._send_chunked(message.channel, out.content)
            except asyncio.TimeoutError:
                await message.channel.send("Sorry, I timed out processing your request.")

    async def _send_chunked(
        self, channel: discord.abc.Messageable, content: str
    ) -> None:
        """Send a message, splitting into 2000-char chunks if needed."""
        while content:
            chunk = content[:2000]
            content = content[2000:]
            await channel.send(chunk)

    async def start(self) -> None:
        token = os.environ.get(self.token_env)
        if not token:
            logger.error(f"Discord token not found in env var '{self.token_env}'")
            return
        logger.info("Starting Discord bot...")
        await self.client.start(token)

    async def stop(self) -> None:
        await self.client.close()
xtrm_agent/config.py (new file, 114 lines)
@@ -0,0 +1,114 @@
"""Configuration system — YAML config + Pydantic validation."""

from __future__ import annotations

from pathlib import Path
from typing import Any

import yaml
from pydantic import BaseModel, Field


class ProviderConfig(BaseModel):
    """Single LLM provider configuration."""

    provider: str = "anthropic"
    model: str = "claude-sonnet-4-5-20250929"
    max_tokens: int = 8192
    temperature: float = 0.3
    api_key_env: str = ""


class LLMConfig(BaseModel):
    """LLM providers section."""

    providers: dict[str, ProviderConfig] = Field(default_factory=dict)


class CLIChannelConfig(BaseModel):
    enabled: bool = True
    default_agent: str = "coder"


class DiscordChannelConfig(BaseModel):
    enabled: bool = False
    token_env: str = "DISCORD_BOT_TOKEN"
    default_agent: str = "coder"
    allowed_users: list[str] = Field(default_factory=list)


class ChannelsConfig(BaseModel):
    cli: CLIChannelConfig = Field(default_factory=CLIChannelConfig)
    discord: DiscordChannelConfig = Field(default_factory=DiscordChannelConfig)


class ToolsConfig(BaseModel):
    workspace: str = "./data"
    auto_approve: list[str] = Field(
        default_factory=lambda: ["read_file", "list_dir", "web_fetch", "delegate"]
    )
    require_approval: list[str] = Field(
        default_factory=lambda: ["bash", "write_file", "edit_file"]
    )


class MCPServerConfig(BaseModel):
    """Single MCP server configuration."""

    command: str = ""
    args: list[str] = Field(default_factory=list)
    env: dict[str, str] = Field(default_factory=dict)
    url: str = ""


class OrchestratorConfig(BaseModel):
    max_concurrent: int = 5
    delegation_timeout: int = 120


class AgentFileConfig(BaseModel):
    """Parsed from agent markdown frontmatter."""

    name: str = ""
    provider: str = "anthropic"
    model: str = ""
    temperature: float = 0.3
    max_iterations: int = 30
    tools: list[str] = Field(default_factory=list)
    instructions: str = ""


class Config(BaseModel):
    """Top-level application config."""

    llm: LLMConfig = Field(default_factory=LLMConfig)
    channels: ChannelsConfig = Field(default_factory=ChannelsConfig)
    tools: ToolsConfig = Field(default_factory=ToolsConfig)
    mcp_servers: dict[str, MCPServerConfig] = Field(default_factory=dict)
    agents: dict[str, str] = Field(default_factory=dict)
    orchestrator: OrchestratorConfig = Field(default_factory=OrchestratorConfig)


def load_config(path: str | Path = "config.yaml") -> Config:
    """Load and validate config from YAML file."""
    p = Path(path)
    if not p.exists():
        return Config()
    raw = yaml.safe_load(p.read_text()) or {}
    return Config.model_validate(raw)


def parse_agent_file(path: str | Path) -> AgentFileConfig:
    """Parse a markdown agent definition with YAML frontmatter."""
    text = Path(path).read_text()
    if not text.startswith("---"):
        return AgentFileConfig(instructions=text)

    parts = text.split("---", 2)
    if len(parts) < 3:
        return AgentFileConfig(instructions=text)

    frontmatter = yaml.safe_load(parts[1]) or {}
    body = parts[2].strip()
    frontmatter["instructions"] = body
    return AgentFileConfig.model_validate(frontmatter)
xtrm_agent/engine.py (new file, 107 lines)
@@ -0,0 +1,107 @@
"""Single agent engine — one LLM loop per agent."""

from __future__ import annotations

from typing import Any

from loguru import logger

from xtrm_agent.config import AgentFileConfig
from xtrm_agent.llm.provider import LLMProvider, LLMResponse
from xtrm_agent.tools.approval import ApprovalEngine
from xtrm_agent.tools.registry import ToolRegistry


class Engine:
    """Runs one agent's LLM loop: messages → LLM → tool calls → loop → response."""

    def __init__(
        self,
        agent_config: AgentFileConfig,
        provider: LLMProvider,
        tools: ToolRegistry,
        approval: ApprovalEngine,
    ) -> None:
        self.config = agent_config
        self.provider = provider
        self.tools = tools
        self.approval = approval

    async def run(self, user_message: str) -> str:
        """Process a single user message through the agent loop."""
        messages = self._build_initial_messages(user_message)
        return await self._agent_loop(messages)

    async def run_delegation(self, task: str) -> str:
        """Process a delegation task (no system prompt changes)."""
        messages = self._build_initial_messages(task)
        return await self._agent_loop(messages)

    def _build_initial_messages(self, user_message: str) -> list[dict[str, Any]]:
        messages: list[dict[str, Any]] = []
        if self.config.instructions:
            messages.append({"role": "system", "content": self.config.instructions})
        messages.append({"role": "user", "content": user_message})
        return messages

    async def _agent_loop(self, messages: list[dict[str, Any]]) -> str:
        """Core agent iteration loop."""
        for iteration in range(self.config.max_iterations):
            model = self.config.model or self.provider.get_default_model()
            tool_defs = self.tools.get_definitions() if self.tools.names() else None

            response = await self.provider.complete(
                messages=messages,
                tools=tool_defs,
                model=model,
                max_tokens=8192,
                temperature=self.config.temperature,
            )

            if not response.has_tool_calls:
                return response.content or "(no response)"

            # Add assistant message with tool calls
            messages.append(self._assistant_message(response))

            # Execute each tool call
            for tc in response.tool_calls:
                result = await self._execute_tool(tc.name, tc.arguments)
                messages.append(
                    {
                        "role": "tool",
                        "tool_call_id": tc.id,
                        "name": tc.name,
                        "content": result,
                    }
                )

            logger.debug(
                f"[{self.config.name}] Iteration {iteration + 1}: "
                f"{len(response.tool_calls)} tool call(s)"
            )

        return "(max iterations reached)"

    async def _execute_tool(self, name: str, arguments: dict[str, Any]) -> str:
        """Execute a tool with approval check."""
        approved = await self.approval.check(name, arguments)
        if not approved:
            return f"Tool '{name}' was denied by approval policy."
        return await self.tools.execute(name, arguments)

    def _assistant_message(self, response: LLMResponse) -> dict[str, Any]:
        """Build assistant message dict from LLMResponse."""
        msg: dict[str, Any] = {"role": "assistant"}
        if response.content:
            msg["content"] = response.content
        if response.tool_calls:
            msg["tool_calls"] = [
                {
                    "id": tc.id,
                    "type": "function",
                    "function": {"name": tc.name, "arguments": tc.arguments},
                }
                for tc in response.tool_calls
            ]
        return msg
xtrm_agent/llm/__init__.py (new empty file)
131
xtrm_agent/llm/anthropic.py
Normal file
131
xtrm_agent/llm/anthropic.py
Normal file
@@ -0,0 +1,131 @@
|
||||
"""Anthropic/Claude LLM provider."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from typing import Any
|
||||
|
||||
import anthropic
|
||||
|
||||
from xtrm_agent.llm.provider import LLMProvider, LLMResponse, ToolCallRequest
|
||||
|
||||
|
||||
def _openai_tools_to_anthropic(tools: list[dict[str, Any]]) -> list[dict[str, Any]]:
|
||||
"""Convert OpenAI function-calling tool schema to Anthropic format."""
|
||||
result = []
|
||||
for tool in tools:
|
||||
func = tool.get("function", tool)
|
||||
result.append(
|
||||
{
|
||||
"name": func["name"],
|
||||
"description": func.get("description", ""),
|
||||
"input_schema": func.get("parameters", {"type": "object", "properties": {}}),
|
||||
}
|
||||
)
|
||||
    return result


class AnthropicProvider(LLMProvider):
    """Claude via the Anthropic SDK."""

    def __init__(self, model: str = "claude-sonnet-4-5-20250929") -> None:
        self.client = anthropic.AsyncAnthropic()
        self.model = model

    async def complete(
        self,
        messages: list[dict[str, Any]],
        tools: list[dict[str, Any]] | None = None,
        model: str | None = None,
        max_tokens: int = 8192,
        temperature: float = 0.3,
    ) -> LLMResponse:
        model = model or self.model

        # Extract the system message — Anthropic takes it as a top-level param
        system_text = ""
        api_messages = []
        for msg in messages:
            if msg["role"] == "system":
                system_text = msg["content"] if isinstance(msg["content"], str) else str(msg["content"])
            else:
                api_messages.append(self._convert_message(msg))

        kwargs: dict[str, Any] = {
            "model": model,
            "max_tokens": max_tokens,
            "temperature": temperature,
            "messages": api_messages,
        }
        if system_text:
            kwargs["system"] = system_text
        if tools:
            kwargs["tools"] = _openai_tools_to_anthropic(tools)

        response = await self.client.messages.create(**kwargs)
        return self._parse_response(response)

    def get_default_model(self) -> str:
        return self.model

    def _convert_message(self, msg: dict[str, Any]) -> dict[str, Any]:
        """Convert a message to Anthropic format."""
        role = msg["role"]

        # Tool results → user message with tool_result blocks
        if role == "tool":
            return {
                "role": "user",
                "content": [
                    {
                        "type": "tool_result",
                        "tool_use_id": msg["tool_call_id"],
                        "content": msg.get("content", ""),
                    }
                ],
            }

        # Assistant messages with tool_calls → content blocks
        if role == "assistant" and "tool_calls" in msg:
            blocks: list[dict[str, Any]] = []
            if msg.get("content"):
                blocks.append({"type": "text", "text": msg["content"]})
            for tc in msg["tool_calls"]:
                func = tc.get("function", tc)
                blocks.append(
                    {
                        "type": "tool_use",
                        "id": tc.get("id", func.get("id", "")),
                        "name": func["name"],
                        "input": func.get("arguments", {}),
                    }
                )
            return {"role": "assistant", "content": blocks}

        return {"role": role, "content": msg.get("content", "")}

    def _parse_response(self, response: anthropic.types.Message) -> LLMResponse:
        """Parse Anthropic response into standardized LLMResponse."""
        text_parts: list[str] = []
        tool_calls: list[ToolCallRequest] = []

        for block in response.content:
            if block.type == "text":
                text_parts.append(block.text)
            elif block.type == "tool_use":
                tool_calls.append(
                    ToolCallRequest(
                        id=block.id,
                        name=block.name,
                        arguments=block.input if isinstance(block.input, dict) else {},
                    )
                )

        return LLMResponse(
            content="\n".join(text_parts),
            tool_calls=tool_calls,
            finish_reason=response.stop_reason or "",
            usage={
                "input_tokens": response.usage.input_tokens,
                "output_tokens": response.usage.output_tokens,
            },
        )
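The Anthropic Messages API has no `tool` role, so the branch above folds tool results into a `user` turn carrying `tool_result` blocks. That mapping can be exercised standalone (no SDK required; this mirrors the `role == "tool"` branch only):

```python
def convert_tool_result(msg: dict) -> dict:
    # Mirror of the role == "tool" branch in _convert_message above
    return {
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": msg["tool_call_id"],
                "content": msg.get("content", ""),
            }
        ],
    }

converted = convert_tool_result({"role": "tool", "tool_call_id": "toolu_01", "content": "42"})
```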
92
xtrm_agent/llm/litellm.py
Normal file
@@ -0,0 +1,92 @@
"""LiteLLM provider — DeepSeek, Kimi, MiniMax, and more."""

from __future__ import annotations

import json
from typing import Any

import litellm
from json_repair import repair_json

from xtrm_agent.llm.provider import LLMProvider, LLMResponse, ToolCallRequest


class LiteLLMProvider(LLMProvider):
    """Multi-provider via LiteLLM."""

    def __init__(self, model: str = "deepseek/deepseek-chat-v3.1") -> None:
        self.model = model
        # Silently drop params a given backend does not support
        litellm.drop_params = True

    async def complete(
        self,
        messages: list[dict[str, Any]],
        tools: list[dict[str, Any]] | None = None,
        model: str | None = None,
        max_tokens: int = 8192,
        temperature: float = 0.3,
    ) -> LLMResponse:
        model = model or self.model

        kwargs: dict[str, Any] = {
            "model": model,
            "messages": messages,
            "max_tokens": max_tokens,
            "temperature": temperature,
        }
        if tools:
            kwargs["tools"] = tools
            kwargs["tool_choice"] = "auto"

        response = await litellm.acompletion(**kwargs)
        return self._parse_response(response)

    def get_default_model(self) -> str:
        return self.model

    def _parse_response(self, response: Any) -> LLMResponse:
        """Parse LiteLLM (OpenAI-format) response."""
        choice = response.choices[0]
        message = choice.message

        content = message.content or ""
        tool_calls: list[ToolCallRequest] = []

        if message.tool_calls:
            for tc in message.tool_calls:
                args = self._parse_arguments(tc.function.arguments)
                tool_calls.append(
                    ToolCallRequest(
                        id=tc.id,
                        name=tc.function.name,
                        arguments=args,
                    )
                )

        usage_data = {}
        if hasattr(response, "usage") and response.usage:
            usage_data = {
                "input_tokens": getattr(response.usage, "prompt_tokens", 0),
                "output_tokens": getattr(response.usage, "completion_tokens", 0),
            }

        return LLMResponse(
            content=content,
            tool_calls=tool_calls,
            finish_reason=choice.finish_reason or "",
            usage=usage_data,
        )

    def _parse_arguments(self, raw: str | dict) -> dict[str, Any]:
        """Parse tool call arguments, using json-repair for malformed JSON."""
        if isinstance(raw, dict):
            return raw
        try:
            return json.loads(raw)
        except (json.JSONDecodeError, TypeError):
            try:
                repaired = repair_json(raw)
                result = json.loads(repaired)
                return result if isinstance(result, dict) else {}
            except Exception:
                return {}
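Some of these backends occasionally emit tool-call arguments as a dict, as well-formed JSON, or as malformed JSON, which is why `_parse_arguments` layers `json.loads` over a `repair_json` fallback. A dependency-free sketch of the first two layers (the repair step is stubbed out here so the snippet runs without `json_repair`):

```python
import json

def parse_arguments(raw):
    # Mirror of _parse_arguments above, minus the json-repair fallback
    if isinstance(raw, dict):
        return raw
    try:
        return json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return {}  # the real class tries repair_json(raw) before giving up

well_formed = parse_arguments('{"path": "notes.txt"}')
passthrough = parse_arguments({"cmd": "ls"})
malformed = parse_arguments("{'path': notes.txt")  # would go to repair_json in the real class
```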
49
xtrm_agent/llm/provider.py
Normal file
@@ -0,0 +1,49 @@
"""LLM provider abstract base class."""

from __future__ import annotations

from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ToolCallRequest:
    """A single tool call from the LLM."""

    id: str
    name: str
    arguments: dict[str, Any]


@dataclass
class LLMResponse:
    """Standardized response from any LLM provider."""

    content: str = ""
    tool_calls: list[ToolCallRequest] = field(default_factory=list)
    finish_reason: str = ""
    usage: dict[str, int] = field(default_factory=dict)

    @property
    def has_tool_calls(self) -> bool:
        return len(self.tool_calls) > 0


class LLMProvider(ABC):
    """Abstract base for LLM providers."""

    @abstractmethod
    async def complete(
        self,
        messages: list[dict[str, Any]],
        tools: list[dict[str, Any]] | None = None,
        model: str | None = None,
        max_tokens: int = 8192,
        temperature: float = 0.3,
    ) -> LLMResponse:
        """Send messages to the LLM and get a response."""

    @abstractmethod
    def get_default_model(self) -> str:
        """Return the default model string for this provider."""
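Any new backend only has to satisfy this two-method contract. A toy sketch of what a conforming provider looks like — `EchoProvider` and `MiniResponse` are illustrative stand-ins, not part of the package:

```python
import asyncio
from dataclasses import dataclass, field
from typing import Any

@dataclass
class MiniResponse:
    # Trimmed stand-in for LLMResponse above
    content: str = ""
    tool_calls: list = field(default_factory=list)

    @property
    def has_tool_calls(self) -> bool:
        return len(self.tool_calls) > 0

class EchoProvider:
    """Toy provider: echoes the last user message, never calls tools."""

    async def complete(self, messages: list[dict[str, Any]], tools=None, **kwargs) -> MiniResponse:
        last = next(m["content"] for m in reversed(messages) if m["role"] == "user")
        return MiniResponse(content=f"echo: {last}")

    def get_default_model(self) -> str:
        return "echo-1"

resp = asyncio.run(EchoProvider().complete([{"role": "user", "content": "hi"}]))
```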
187
xtrm_agent/main.py
Normal file
@@ -0,0 +1,187 @@
"""Entry point — typer CLI."""

from __future__ import annotations

import asyncio
from pathlib import Path
from typing import Optional

import typer
from loguru import logger
from rich.console import Console
from rich.table import Table

app = typer.Typer(name="xtrm-agent", help="Multi-agent AI automation system")
console = Console()


@app.command()
def chat(
    message: Optional[str] = typer.Option(None, "-m", "--message", help="Single-shot message"),
    agent: Optional[str] = typer.Option(None, "--agent", help="Target agent name"),
    config_path: str = typer.Option("config.yaml", "--config", "-c", help="Config file path"),
) -> None:
    """Interactive chat REPL or single-shot message."""
    asyncio.run(_chat(message, agent, config_path))


async def _chat(message: str | None, agent: str | None, config_path: str) -> None:
    from xtrm_agent.config import load_config
    from xtrm_agent.orchestrator import Orchestrator

    config = load_config(config_path)
    orch = Orchestrator(config, interactive=True)
    await orch.setup()

    # Start orchestrator loop in background
    loop_task = asyncio.create_task(orch.run_loop())

    try:
        if message:
            # Single-shot mode
            from xtrm_agent.channels.cli import run_single_message

            outbound_queue = orch.bus.subscribe_outbound("cli")
            result = await run_single_message(orch.bus, message, agent, outbound_queue)
            console.print(result)
        else:
            # Interactive REPL
            from xtrm_agent.channels.cli import CLIChannel

            cli = CLIChannel(
                bus=orch.bus,
                default_agent=config.channels.cli.default_agent,
            )
            await cli.start()
    finally:
        loop_task.cancel()
        await orch.stop()


@app.command()
def serve(
    config_path: str = typer.Option("config.yaml", "--config", "-c", help="Config file path"),
) -> None:
    """Run all agents + Discord bot (production mode)."""
    asyncio.run(_serve(config_path))


async def _serve(config_path: str) -> None:
    from xtrm_agent.config import load_config
    from xtrm_agent.orchestrator import Orchestrator

    config = load_config(config_path)
    orch = Orchestrator(config, interactive=False)
    await orch.setup()

    tasks = [asyncio.create_task(orch.run_loop())]

    # Start Discord if enabled
    if config.channels.discord.enabled:
        from xtrm_agent.channels.discord import DiscordChannel

        discord_channel = DiscordChannel(
            bus=orch.bus,
            token_env=config.channels.discord.token_env,
            default_agent=config.channels.discord.default_agent,
            allowed_users=config.channels.discord.allowed_users,
        )
        tasks.append(asyncio.create_task(discord_channel.start()))

    logger.info("xtrm-agent serving — press Ctrl+C to stop")

    try:
        await asyncio.gather(*tasks)
    except (KeyboardInterrupt, asyncio.CancelledError):
        pass
    finally:
        await orch.stop()


@app.command()
def status(
    config_path: str = typer.Option("config.yaml", "--config", "-c", help="Config file path"),
) -> None:
    """Show configuration, agents, tools, and MCP servers."""
    from xtrm_agent.config import load_config

    config = load_config(config_path)

    console.print("[bold]xtrm-agent status[/bold]\n")

    # Providers
    table = Table(title="LLM Providers")
    table.add_column("Name")
    table.add_column("Model")
    table.add_column("Provider")
    for name, prov in config.llm.providers.items():
        table.add_row(name, prov.model, prov.provider)
    console.print(table)
    console.print()

    # Agents
    table = Table(title="Agents")
    table.add_column("Name")
    table.add_column("Path")
    for name, path in config.agents.items():
        table.add_row(name, path)
    console.print(table)
    console.print()

    # Channels
    table = Table(title="Channels")
    table.add_column("Channel")
    table.add_column("Enabled")
    table.add_column("Default Agent")
    table.add_row("CLI", str(config.channels.cli.enabled), config.channels.cli.default_agent)
    table.add_row("Discord", str(config.channels.discord.enabled), config.channels.discord.default_agent)
    console.print(table)
    console.print()

    # MCP Servers
    if config.mcp_servers:
        table = Table(title="MCP Servers")
        table.add_column("Name")
        table.add_column("Type")
        for name, srv in config.mcp_servers.items():
            srv_type = "stdio" if srv.command else "http" if srv.url else "unknown"
            table.add_row(name, srv_type)
        console.print(table)
    else:
        console.print("[dim]No MCP servers configured[/dim]")

    # Tool policies
    console.print()
    console.print(f"[bold]Tool Workspace:[/bold] {config.tools.workspace}")
    console.print(f"[bold]Auto-approve:[/bold] {', '.join(config.tools.auto_approve)}")
    console.print(f"[bold]Require approval:[/bold] {', '.join(config.tools.require_approval)}")


@app.command()
def agents(
    config_path: str = typer.Option("config.yaml", "--config", "-c", help="Config file path"),
) -> None:
    """List all agent definitions and their configuration."""
    from xtrm_agent.config import load_config, parse_agent_file

    config = load_config(config_path)

    for name, agent_path in config.agents.items():
        p = Path(agent_path)
        if not p.is_absolute():
            p = Path.cwd() / p

        console.print(f"\n[bold]{name}[/bold]")
        if p.exists():
            cfg = parse_agent_file(p)
            console.print(f"  Provider: {cfg.provider}")
            console.print(f"  Model: {cfg.model or '(default)'}")
            console.print(f"  Temperature: {cfg.temperature}")
            console.print(f"  Max iterations: {cfg.max_iterations}")
            console.print(f"  Tools: {', '.join(cfg.tools) if cfg.tools else '(all)'}")
        else:
            console.print(f"  [red]File not found: {p}[/red]")


if __name__ == "__main__":
    app()
204
xtrm_agent/orchestrator.py
Normal file
@@ -0,0 +1,204 @@
"""Orchestrator — manages multiple agent engines and delegation."""

from __future__ import annotations

import asyncio
from contextlib import AsyncExitStack
from pathlib import Path
from typing import Any

from loguru import logger

from xtrm_agent.bus import AgentMessage, InboundMessage, MessageBus, OutboundMessage
from xtrm_agent.config import Config, AgentFileConfig, parse_agent_file
from xtrm_agent.engine import Engine
from xtrm_agent.llm.anthropic import AnthropicProvider
from xtrm_agent.llm.litellm import LiteLLMProvider
from xtrm_agent.llm.provider import LLMProvider
from xtrm_agent.router import Router
from xtrm_agent.tools.approval import ApprovalEngine
from xtrm_agent.tools.builtin import register_builtin_tools
from xtrm_agent.tools.delegate import DelegateTool
from xtrm_agent.tools.mcp_client import connect_mcp_servers
from xtrm_agent.tools.registry import ToolRegistry


class Orchestrator:
    """Creates and manages multiple agent engines."""

    def __init__(self, config: Config, interactive: bool = True) -> None:
        self.config = config
        self.bus = MessageBus()
        self.interactive = interactive
        self._engines: dict[str, Engine] = {}
        self._delegate_tools: dict[str, DelegateTool] = {}
        self._agent_configs: dict[str, AgentFileConfig] = {}
        self._mcp_stack = AsyncExitStack()
        self._running = False

        # Channel defaults for routing
        channel_defaults = {}
        if config.channels.cli.default_agent:
            channel_defaults["cli"] = config.channels.cli.default_agent
        if config.channels.discord.default_agent:
            channel_defaults["discord"] = config.channels.discord.default_agent

        self.router = Router(
            agent_names=list(config.agents.keys()),
            channel_defaults=channel_defaults,
        )

    async def setup(self) -> None:
        """Load agent definitions and create engines."""
        workspace = Path(self.config.tools.workspace).resolve()
        workspace.mkdir(parents=True, exist_ok=True)

        # Parse all agent definitions
        for agent_name, agent_path in self.config.agents.items():
            p = Path(agent_path)
            if not p.is_absolute():
                p = Path.cwd() / p
            if p.exists():
                agent_cfg = parse_agent_file(p)
            else:
                logger.warning(f"Agent file not found: {p} — using defaults")
                agent_cfg = AgentFileConfig()
            agent_cfg.name = agent_cfg.name or agent_name
            self._agent_configs[agent_name] = agent_cfg

        # Build shared tool registry, then create per-agent registries
        global_registry = ToolRegistry()
        register_builtin_tools(global_registry, workspace)

        # Connect MCP servers
        await self._mcp_stack.__aenter__()
        await connect_mcp_servers(self.config.mcp_servers, global_registry, self._mcp_stack)

        # Create one engine per agent
        agent_names = list(self._agent_configs.keys())
        for agent_name, agent_cfg in self._agent_configs.items():
            provider = self._create_provider(agent_cfg)
            approval = ApprovalEngine(
                auto_approve=self.config.tools.auto_approve,
                require_approval=self.config.tools.require_approval,
                interactive=self.interactive,
            )

            # Filter tools for this agent
            if agent_cfg.tools:
                agent_registry = global_registry.filtered(agent_cfg.tools)
            else:
                agent_registry = global_registry

            # Add delegate tool if agent has "delegate" in its tool list
            other_agents = [n for n in agent_names if n != agent_name]
            if not agent_cfg.tools or "delegate" in agent_cfg.tools:
                delegate_tool = DelegateTool(
                    bus=self.bus,
                    from_agent=agent_name,
                    available_agents=other_agents,
                    timeout=self.config.orchestrator.delegation_timeout,
                )
                agent_registry.register(delegate_tool)
                self._delegate_tools[agent_name] = delegate_tool

            engine = Engine(
                agent_config=agent_cfg,
                provider=provider,
                tools=agent_registry,
                approval=approval,
            )
            self._engines[agent_name] = engine

        logger.info(f"Orchestrator ready: {len(self._engines)} agent(s)")

    def _create_provider(self, agent_cfg: AgentFileConfig) -> LLMProvider:
        """Create the appropriate LLM provider for an agent."""
        provider_name = agent_cfg.provider

        if provider_name == "anthropic":
            model = agent_cfg.model or "claude-sonnet-4-5-20250929"
            return AnthropicProvider(model=model)

        # LiteLLM for everything else
        model = agent_cfg.model
        if not model:
            # Look up from config
            prov_cfg = self.config.llm.providers.get(provider_name)
            model = prov_cfg.model if prov_cfg else "deepseek/deepseek-chat-v3.1"
        return LiteLLMProvider(model=model)

    async def handle_message(self, msg: InboundMessage) -> str:
        """Route and process an inbound message."""
        agent_name = self.router.resolve(msg)
        engine = self._engines.get(agent_name)
        if not engine:
            return f"Error: Agent '{agent_name}' not found"

        content = self.router.strip_mention(msg.content) if msg.content.startswith("@") else msg.content
        logger.info(f"[{agent_name}] Processing: {content[:80]}")
        return await engine.run(content)

    async def handle_delegation(self, agent_msg: AgentMessage) -> None:
        """Handle an inter-agent delegation request."""
        engine = self._engines.get(agent_msg.to_agent)
        if not engine:
            response = f"Error: Agent '{agent_msg.to_agent}' not found"
        else:
            logger.info(
                f"[{agent_msg.to_agent}] Delegation from {agent_msg.from_agent}: "
                f"{agent_msg.task[:80]}"
            )
            response = await engine.run_delegation(agent_msg.task)

        # Resolve the delegation future in the delegate tool
        delegate_tool = self._delegate_tools.get(agent_msg.from_agent)
        if delegate_tool:
            delegate_tool.resolve(agent_msg.request_id, response)

    async def run_loop(self) -> None:
        """Main orchestrator loop — process inbound and agent messages."""
        self._running = True
        logger.info("Orchestrator loop started")

        while self._running:
            # Check for inbound messages
            msg = await self.bus.consume_inbound(timeout=0.1)
            if msg:
                response = await self.handle_message(msg)
                await self.bus.publish_outbound(
                    OutboundMessage(
                        channel=msg.channel,
                        chat_id=msg.chat_id,
                        content=response,
                    )
                )

            # Check for agent-to-agent messages
            agent_msg = await self.bus.consume_agent_message(timeout=0.1)
            if agent_msg:
                asyncio.create_task(self.handle_delegation(agent_msg))

    async def stop(self) -> None:
        self._running = False
        await self._mcp_stack.aclose()
        logger.info("Orchestrator stopped")

    def get_agent_names(self) -> list[str]:
        return list(self._engines.keys())

    def get_agent_info(self) -> list[dict[str, Any]]:
        """Get info about all registered agents."""
        info = []
        for name, cfg in self._agent_configs.items():
            engine = self._engines.get(name)
            info.append(
                {
                    "name": name,
                    "provider": cfg.provider,
                    "model": cfg.model or "(default)",
                    "tools": engine.tools.names() if engine else [],
                    "max_iterations": cfg.max_iterations,
                }
            )
        return info
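`run_loop` polls both queues with a 0.1 s timeout so neither stream starves the other. The bus's `consume_*` helpers are not shown in this diff; their shape is presumably a timeout wrapper around an `asyncio.Queue`. A minimal sketch of that timeout-poll pattern (names and API shape assumed, not the project's actual bus):

```python
import asyncio

async def consume(queue: "asyncio.Queue[str]", timeout: float = 0.05):
    # Return the next item, or None if nothing arrives within the timeout
    try:
        return await asyncio.wait_for(queue.get(), timeout)
    except asyncio.TimeoutError:
        return None

async def demo() -> tuple:
    q: "asyncio.Queue[str]" = asyncio.Queue()
    await q.put("task")
    first = await consume(q)
    second = await consume(q)  # queue now empty -> times out -> None
    return first, second

first, second = asyncio.run(demo())
```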
48
xtrm_agent/router.py
Normal file
@@ -0,0 +1,48 @@
"""Message router — routes inbound messages to the correct agent."""

from __future__ import annotations

import re

from xtrm_agent.bus import InboundMessage


class Router:
    """Routes messages to agents based on mentions, channel defaults, or delegation."""

    def __init__(
        self,
        agent_names: list[str],
        channel_defaults: dict[str, str] | None = None,
    ) -> None:
        self.agent_names = set(agent_names)
        self.channel_defaults = channel_defaults or {}

    def resolve(self, msg: InboundMessage) -> str:
        """Determine which agent should handle this message."""
        # 1. Explicit target set by delegation or system
        if msg.target_agent and msg.target_agent in self.agent_names:
            return msg.target_agent

        # 2. @agent_name mention in content
        mentioned = self._extract_mention(msg.content)
        if mentioned and mentioned in self.agent_names:
            return mentioned

        # 3. Channel default
        default = self.channel_defaults.get(msg.channel)
        if default and default in self.agent_names:
            return default

        # 4. First available agent
        return next(iter(self.agent_names)) if self.agent_names else "coder"

    def strip_mention(self, content: str) -> str:
        """Remove @agent_name from content."""
        return re.sub(r"@(\w+)\s*", "", content, count=1).strip()

    def _extract_mention(self, content: str) -> str | None:
        match = re.match(r"@(\w+)", content.strip())
        if match:
            return match.group(1).lower()
        return None
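Resolution falls through four tiers: explicit target, leading @mention, channel default, first registered agent. The mention helpers are pure regex and can be exercised standalone — note that `re.match` anchors at the start, so only a *leading* mention routes:

```python
import re

def extract_mention(content: str):
    # Mirror of Router._extract_mention: only a leading @name counts
    m = re.match(r"@(\w+)", content.strip())
    return m.group(1).lower() if m else None

def strip_mention(content: str) -> str:
    # Mirror of Router.strip_mention
    return re.sub(r"@(\w+)\s*", "", content, count=1).strip()

lead = extract_mention("@Coder fix the login bug")
inline = extract_mention("ping @coder please")  # mid-message mention is ignored
stripped = strip_mention("@coder fix the login bug")
```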
0
xtrm_agent/tools/__init__.py
Normal file
69
xtrm_agent/tools/approval.py
Normal file
@@ -0,0 +1,69 @@
"""Tool approval engine — deny-by-default, per-agent policies."""

from __future__ import annotations

from enum import Enum
from typing import Any

from loguru import logger


class ApprovalPolicy(Enum):
    AUTO_APPROVE = "auto_approve"
    REQUIRE_APPROVAL = "require_approval"
    DENY = "deny"


class ApprovalEngine:
    """Deny-by-default tool approval."""

    def __init__(
        self,
        auto_approve: list[str] | None = None,
        require_approval: list[str] | None = None,
        interactive: bool = True,
    ) -> None:
        self._auto_approve = set(auto_approve or [])
        self._require_approval = set(require_approval or [])
        self._interactive = interactive

    def get_policy(self, tool_name: str) -> ApprovalPolicy:
        """Get the approval policy for a tool."""
        # MCP tools inherit from the mcp_* prefix pattern
        base_name = tool_name.split("_", 1)[0] if tool_name.startswith("mcp_") else tool_name

        if tool_name in self._auto_approve or base_name in self._auto_approve:
            return ApprovalPolicy.AUTO_APPROVE
        if tool_name in self._require_approval or base_name in self._require_approval:
            return ApprovalPolicy.REQUIRE_APPROVAL
        return ApprovalPolicy.DENY

    async def check(self, tool_name: str, arguments: dict[str, Any]) -> bool:
        """Check if a tool call is approved. Returns True if approved."""
        policy = self.get_policy(tool_name)

        if policy == ApprovalPolicy.AUTO_APPROVE:
            return True

        if policy == ApprovalPolicy.DENY:
            logger.warning(f"Tool '{tool_name}' denied by policy")
            return False

        # REQUIRE_APPROVAL
        if not self._interactive:
            logger.warning(f"Tool '{tool_name}' requires approval but running non-interactively — denied")
            return False

        # In interactive mode, prompt the user
        logger.info(f"Tool '{tool_name}' requires approval. Args: {arguments}")
        return await self._prompt_user(tool_name, arguments)

    async def _prompt_user(self, tool_name: str, arguments: dict[str, Any]) -> bool:
        """Prompt user for tool approval (interactive mode)."""
        print(f"\n[APPROVAL REQUIRED] Tool: {tool_name}")
        print(f"  Arguments: {arguments}")
        try:
            answer = input("  Allow? [y/N]: ").strip().lower()
            return answer in ("y", "yes")
        except (EOFError, KeyboardInterrupt):
            return False
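The policy lookup is the heart of the deny-by-default design: anything not explicitly listed is refused. A standalone mirror of `get_policy` (the tool names below are example config values, not the project's shipped defaults). Note that for `mcp_*` names the `split("_", 1)[0]` base is always `"mcp"`, so listing `"mcp"` in a set covers every MCP tool at once:

```python
def get_policy(tool_name: str, auto_approve: set, require_approval: set) -> str:
    # Mirror of ApprovalEngine.get_policy above
    base = tool_name.split("_", 1)[0] if tool_name.startswith("mcp_") else tool_name
    if tool_name in auto_approve or base in auto_approve:
        return "auto_approve"
    if tool_name in require_approval or base in require_approval:
        return "require_approval"
    return "deny"

auto = {"read_file", "list_dir", "mcp"}
req = {"bash", "write_file"}
read_policy = get_policy("read_file", auto, req)
bash_policy = get_policy("bash", auto, req)
mcp_policy = get_policy("mcp_fetch_page", auto, req)  # base name "mcp" matches
unknown_policy = get_policy("web_fetch", auto, req)   # unlisted -> denied
```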
257
xtrm_agent/tools/builtin.py
Normal file
@@ -0,0 +1,257 @@
"""Built-in tools — file operations, shell, web fetch."""

from __future__ import annotations

import asyncio
import os
import re
from pathlib import Path
from typing import Any

import httpx

from xtrm_agent.tools.registry import Tool

# Patterns blocked in bash commands
BASH_DENY_PATTERNS = [
    r"rm\s+-rf\s+/",
    r"mkfs\.",
    r"\bdd\b.*of=/dev/",
    r":\(\)\{.*\|.*&\s*\};:",  # fork bomb
    r"chmod\s+-R\s+777\s+/",
    r">\s*/dev/sd[a-z]",
]


def _resolve_path(workspace: Path, requested: str) -> Path:
    """Resolve and sandbox a path to the workspace."""
    p = (workspace / requested).resolve()
    ws = workspace.resolve()
    if not str(p).startswith(str(ws)):
        raise ValueError(f"Path '{requested}' escapes workspace")
    return p


class ReadFileTool(Tool):
    def __init__(self, workspace: Path) -> None:
        self._workspace = workspace

    @property
    def name(self) -> str:
        return "read_file"

    @property
    def description(self) -> str:
        return "Read the contents of a file."

    @property
    def parameters(self) -> dict[str, Any]:
        return {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "File path relative to workspace"},
            },
            "required": ["path"],
        }

    async def execute(self, path: str, **_: Any) -> str:
        p = _resolve_path(self._workspace, path)
        if not p.exists():
            return f"Error: File not found: {path}"
        return p.read_text(errors="replace")


class WriteFileTool(Tool):
    def __init__(self, workspace: Path) -> None:
        self._workspace = workspace

    @property
    def name(self) -> str:
        return "write_file"

    @property
    def description(self) -> str:
        return "Create or overwrite a file with the given content."

    @property
    def parameters(self) -> dict[str, Any]:
        return {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "File path relative to workspace"},
                "content": {"type": "string", "description": "Content to write"},
            },
            "required": ["path", "content"],
        }

    async def execute(self, path: str, content: str, **_: Any) -> str:
        p = _resolve_path(self._workspace, path)
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_text(content)
        return f"Wrote {len(content)} bytes to {path}"


class EditFileTool(Tool):
    def __init__(self, workspace: Path) -> None:
        self._workspace = workspace

    @property
    def name(self) -> str:
        return "edit_file"

    @property
    def description(self) -> str:
        return "Replace an exact string in a file with new content."

    @property
    def parameters(self) -> dict[str, Any]:
        return {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "File path relative to workspace"},
                "old_string": {"type": "string", "description": "Exact text to find"},
                "new_string": {"type": "string", "description": "Replacement text"},
            },
            "required": ["path", "old_string", "new_string"],
        }

    async def execute(self, path: str, old_string: str, new_string: str, **_: Any) -> str:
        p = _resolve_path(self._workspace, path)
        if not p.exists():
            return f"Error: File not found: {path}"
        text = p.read_text()
        if old_string not in text:
            return "Error: old_string not found in file"
        count = text.count(old_string)
        if count > 1:
            return f"Error: old_string found {count} times — must be unique"
        p.write_text(text.replace(old_string, new_string, 1))
        return f"Edited {path}"


class ListDirTool(Tool):
    def __init__(self, workspace: Path) -> None:
        self._workspace = workspace

    @property
    def name(self) -> str:
        return "list_dir"

    @property
    def description(self) -> str:
        return "List files and directories at the given path."

    @property
    def parameters(self) -> dict[str, Any]:
        return {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "Directory path relative to workspace (default: root)",
                    "default": ".",
                },
            },
        }

    async def execute(self, path: str = ".", **_: Any) -> str:
        p = _resolve_path(self._workspace, path)
        if not p.exists():
            return f"Error: Directory not found: {path}"
        if not p.is_dir():
            return f"Error: Not a directory: {path}"
        entries = sorted(p.iterdir())
        lines = []
        for entry in entries:
            suffix = "/" if entry.is_dir() else ""
            lines.append(f"{entry.name}{suffix}")
        return "\n".join(lines) if lines else "(empty directory)"

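All of the file tools funnel through `_resolve_path`, which is the workspace sandbox. A standalone mirror showing both the happy path and a blocked traversal:

```python
import tempfile
from pathlib import Path

def resolve_path(workspace: Path, requested: str) -> Path:
    # Same logic as _resolve_path above
    p = (workspace / requested).resolve()
    ws = workspace.resolve()
    if not str(p).startswith(str(ws)):
        raise ValueError(f"Path '{requested}' escapes workspace")
    return p

ws = Path(tempfile.mkdtemp()).resolve()
inside = resolve_path(ws, "notes/todo.txt")
try:
    resolve_path(ws, "../outside.txt")
    escape_blocked = False
except ValueError:
    escape_blocked = True
```

One caveat worth noting: a plain string-prefix check also accepts a sibling directory such as `<workspace>2/`; `Path.is_relative_to` (Python 3.9+) is the stricter comparison.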
class BashTool(Tool):
|
||||
def __init__(self, workspace: Path, timeout: int = 60) -> None:
|
||||
self._workspace = workspace
|
||||
self._timeout = timeout
|
||||
|
||||
@property
|
||||
def name(self) -> str:
|
||||
return "bash"
|
||||
|
||||
@property
|
||||
def description(self) -> str:
|
||||
return "Execute a shell command in the workspace directory."
|
||||
|
||||
@property
|
||||
def parameters(self) -> dict[str, Any]:
|
||||
return {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"command": {"type": "string", "description": "Shell command to execute"},
|
||||
},
|
||||
"required": ["command"],
|
||||
}
|
||||
|
||||
async def execute(self, command: str, **_: Any) -> str:
|
||||
# Check deny patterns
|
||||
for pattern in BASH_DENY_PATTERNS:
|
||||
if re.search(pattern, command):
|
||||
return f"Error: Command blocked by security policy"
|
||||
|
||||
try:
|
||||
proc = await asyncio.create_subprocess_shell(
|
||||
command,
|
||||
stdout=asyncio.subprocess.PIPE,
|
||||
stderr=asyncio.subprocess.STDOUT,
|
||||
cwd=str(self._workspace),
|
||||
env={**os.environ},
|
||||
)
|
||||
stdout, _ = await asyncio.wait_for(proc.communicate(), timeout=self._timeout)
|
||||
output = stdout.decode(errors="replace")
|
||||
# Truncate large output
|
||||
if len(output) > 10_000:
|
||||
output = output[:10_000] + "\n... (truncated)"
|
||||
exit_info = f"\n[exit code: {proc.returncode}]"
|
||||
return output + exit_info
|
||||
except asyncio.TimeoutError:
|
||||
return f"Error: Command timed out after {self._timeout}s"
|
||||
|
||||
|
||||
class WebFetchTool(Tool):
    @property
    def name(self) -> str:
        return "web_fetch"

    @property
    def description(self) -> str:
        return "Fetch the content of a URL."

    @property
    def parameters(self) -> dict[str, Any]:
        return {
            "type": "object",
            "properties": {
                "url": {"type": "string", "description": "URL to fetch"},
            },
            "required": ["url"],
        }

    async def execute(self, url: str, **_: Any) -> str:
        try:
            async with httpx.AsyncClient(timeout=30, follow_redirects=True) as client:
                resp = await client.get(url)
                text = resp.text
                if len(text) > 20_000:
                    text = text[:20_000] + "\n... (truncated)"
                return text
        except Exception as e:
            return f"Error fetching URL: {e}"
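Both the bash and web_fetch tools inline the same output cap; shown here as a hypothetical shared helper (not present in the source) to make the contract explicit: tool output is hard-limited so a single large page or log dump cannot flood the LLM context window.

```python
def truncate(text: str, limit: int = 20_000) -> str:
    """Cap tool output, appending a marker so the model knows it was cut."""
    if len(text) > limit:
        return text[:limit] + "\n... (truncated)"
    return text

short = truncate("hello")
capped = truncate("a" * 25_000)
print(capped.endswith("(truncated)"))  # True
```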
def register_builtin_tools(registry: Any, workspace: Path) -> None:
    """Register all built-in tools into a ToolRegistry."""
    registry.register(ReadFileTool(workspace))
    registry.register(WriteFileTool(workspace))
    registry.register(EditFileTool(workspace))
    registry.register(ListDirTool(workspace))
    registry.register(BashTool(workspace))
    registry.register(WebFetchTool())
88
xtrm_agent/tools/delegate.py
Normal file
@@ -0,0 +1,88 @@
"""Delegate tool — allows agents to invoke each other."""

from __future__ import annotations

import asyncio
import uuid
from typing import Any

from loguru import logger

from xtrm_agent.bus import AgentMessage, MessageBus
from xtrm_agent.tools.registry import Tool


class DelegateTool(Tool):
    """Built-in tool for inter-agent delegation."""

    def __init__(
        self,
        bus: MessageBus,
        from_agent: str,
        available_agents: list[str],
        timeout: int = 120,
    ) -> None:
        self._bus = bus
        self._from_agent = from_agent
        self._available_agents = available_agents
        self._timeout = timeout
        self._pending: dict[str, asyncio.Future[str]] = {}

    @property
    def name(self) -> str:
        return "delegate"

    @property
    def description(self) -> str:
        agents = ", ".join(self._available_agents)
        return f"Delegate a task to another agent. Available agents: {agents}"

    @property
    def parameters(self) -> dict[str, Any]:
        return {
            "type": "object",
            "properties": {
                "agent_name": {
                    "type": "string",
                    "description": "Name of the agent to delegate to",
                },
                "task": {
                    "type": "string",
                    "description": "Description of the task to delegate",
                },
            },
            "required": ["agent_name", "task"],
        }

    async def execute(self, agent_name: str, task: str, **_: Any) -> str:
        if agent_name not in self._available_agents:
            return f"Error: Unknown agent '{agent_name}'. Available: {', '.join(self._available_agents)}"

        if agent_name == self._from_agent:
            return "Error: Cannot delegate to self"

        request_id = uuid.uuid4().hex[:12]
        future: asyncio.Future[str] = asyncio.get_running_loop().create_future()
        self._pending[request_id] = future

        msg = AgentMessage(
            from_agent=self._from_agent,
            to_agent=agent_name,
            task=task,
            request_id=request_id,
        )
        await self._bus.publish_agent_message(msg)
        logger.info(f"[{self._from_agent}] Delegated to {agent_name}: {task[:80]}")

        try:
            return await asyncio.wait_for(future, timeout=self._timeout)
        except asyncio.TimeoutError:
            self._pending.pop(request_id, None)
            return f"Error: Delegation to '{agent_name}' timed out after {self._timeout}s"

    def resolve(self, request_id: str, response: str) -> None:
        """Resolve a pending delegation with the response."""
        future = self._pending.pop(request_id, None)
        if future and not future.done():
            future.set_result(response)
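The delegation round trip can be sketched standalone: the caller parks on a `Future` keyed by `request_id`, and `resolve()` completes it when the target agent's reply comes back over the bus (simulated inline here, since this sketch has no real bus).

```python
import asyncio
import uuid

# Minimal sketch of DelegateTool's request/response pattern.
pending: dict[str, asyncio.Future[str]] = {}

def resolve(request_id: str, response: str) -> None:
    future = pending.pop(request_id, None)
    if future and not future.done():
        future.set_result(response)

async def delegate(task: str) -> str:
    request_id = uuid.uuid4().hex[:12]
    future: asyncio.Future[str] = asyncio.get_running_loop().create_future()
    pending[request_id] = future
    # A real bus publish would happen here; simulate the responder instead.
    resolve(request_id, f"done: {task}")
    return await asyncio.wait_for(future, timeout=5)

result = asyncio.run(delegate("summarize"))
print(result)  # done: summarize
```

Popping the future in both the timeout path and `resolve()` keeps `pending` from leaking entries when a target agent never answers.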
99
xtrm_agent/tools/mcp_client.py
Normal file
@@ -0,0 +1,99 @@
"""MCP client — connect to MCP servers and wrap their tools."""

from __future__ import annotations

from contextlib import AsyncExitStack
from typing import Any

from loguru import logger

from xtrm_agent.config import MCPServerConfig
from xtrm_agent.tools.registry import Tool, ToolRegistry


class MCPToolWrapper(Tool):
    """Wraps an MCP server tool as a local Tool."""

    def __init__(self, session: Any, server_name: str, tool_def: Any) -> None:
        self._session = session
        self._server_name = server_name
        self._tool_def = tool_def
        self._tool_name = f"mcp_{server_name}_{tool_def.name}"
        self._original_name = tool_def.name

    @property
    def name(self) -> str:
        return self._tool_name

    @property
    def description(self) -> str:
        return getattr(self._tool_def, "description", "") or ""

    @property
    def parameters(self) -> dict[str, Any]:
        schema = getattr(self._tool_def, "inputSchema", None)
        if schema:
            return dict(schema)
        return {"type": "object", "properties": {}}

    async def execute(self, **kwargs: Any) -> str:
        try:
            result = await self._session.call_tool(self._original_name, arguments=kwargs)
            parts = []
            for block in result.content:
                if hasattr(block, "text"):
                    parts.append(block.text)
            return "\n".join(parts) if parts else "(empty result)"
        except Exception as e:
            return f"Error calling MCP tool '{self._original_name}': {e}"


async def connect_mcp_servers(
    mcp_servers: dict[str, MCPServerConfig],
    registry: ToolRegistry,
    stack: AsyncExitStack,
) -> None:
    """Connect to configured MCP servers and register their tools."""
    if not mcp_servers:
        return

    try:
        from mcp import ClientSession, StdioServerParameters
        from mcp.client.stdio import stdio_client
    except ImportError:
        logger.warning("MCP SDK not available — skipping MCP server connections")
        return

    for name, cfg in mcp_servers.items():
        try:
            if cfg.command:
                params = StdioServerParameters(
                    command=cfg.command,
                    args=cfg.args,
                    env={**cfg.env} if cfg.env else None,
                )
                read, write = await stack.enter_async_context(stdio_client(params))
            elif cfg.url:
                try:
                    from mcp.client.streamable_http import streamablehttp_client

                    read, write, _ = await stack.enter_async_context(
                        streamablehttp_client(cfg.url)
                    )
                except ImportError:
                    logger.warning(f"MCP HTTP client not available — skipping {name}")
                    continue
            else:
                logger.warning(f"MCP server '{name}' has no command or URL — skipping")
                continue

            session = await stack.enter_async_context(ClientSession(read, write))
            await session.initialize()

            tools_result = await session.list_tools()
            for tool_def in tools_result.tools:
                wrapper = MCPToolWrapper(session, name, tool_def)
                registry.register(wrapper)
                logger.info(f"Registered MCP tool: {wrapper.name}")

        except Exception as e:
            logger.error(f"Failed to connect MCP server '{name}': {e}")
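Passing in a caller-owned `AsyncExitStack` is what keeps the MCP sessions alive: everything entered on the stack stays open until the caller exits it, then closes in reverse order. A standalone sketch with fake sessions (no MCP SDK needed) shows the lifetime pattern:

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

events: list[str] = []

@asynccontextmanager
async def fake_session(name: str):
    # Stand-in for stdio_client / ClientSession contexts.
    events.append(f"open:{name}")
    try:
        yield name
    finally:
        events.append(f"close:{name}")

async def main() -> None:
    async with AsyncExitStack() as stack:
        await stack.enter_async_context(fake_session("a"))
        await stack.enter_async_context(fake_session("b"))
        events.append("work")  # sessions usable for the whole block

asyncio.run(main())
print(events)  # ['open:a', 'open:b', 'work', 'close:b', 'close:a']
```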
80
xtrm_agent/tools/registry.py
Normal file
@@ -0,0 +1,80 @@
"""Tool registry — ABC and dynamic registration."""

from __future__ import annotations

from abc import ABC, abstractmethod
from typing import Any

from loguru import logger


class Tool(ABC):
    """Abstract base for all tools."""

    @property
    @abstractmethod
    def name(self) -> str: ...

    @property
    @abstractmethod
    def description(self) -> str: ...

    @property
    @abstractmethod
    def parameters(self) -> dict[str, Any]:
        """JSON Schema for tool parameters."""

    @abstractmethod
    async def execute(self, **kwargs: Any) -> str:
        """Execute the tool and return a string result."""

    def to_openai_schema(self) -> dict[str, Any]:
        """Convert to OpenAI function-calling format."""
        return {
            "type": "function",
            "function": {
                "name": self.name,
                "description": self.description,
                "parameters": self.parameters,
            },
        }
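The function-calling payload the ABC emits can be shown standalone; this sketch does not import the real `Tool` ABC and uses plain class attributes for brevity, but produces the same shape `to_openai_schema()` returns.

```python
from typing import Any

class EchoTool:
    # Hypothetical tool, for illustration only.
    name = "echo"
    description = "Echo the input back."
    parameters = {
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    }

    def to_openai_schema(self) -> dict[str, Any]:
        # Same shape as Tool.to_openai_schema() above.
        return {
            "type": "function",
            "function": {
                "name": self.name,
                "description": self.description,
                "parameters": self.parameters,
            },
        }

schema = EchoTool().to_openai_schema()
print(schema["function"]["name"])  # echo
```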
class ToolRegistry:
    """Manages registered tools and dispatches execution."""

    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def get(self, name: str) -> Tool | None:
        return self._tools.get(name)

    def names(self) -> list[str]:
        return list(self._tools.keys())

    def get_definitions(self) -> list[dict[str, Any]]:
        """Get all tool schemas for the LLM."""
        return [t.to_openai_schema() for t in self._tools.values()]

    def filtered(self, allowed: list[str]) -> ToolRegistry:
        """Return a new registry containing only the specified tools."""
        filtered_reg = ToolRegistry()
        for name in allowed:
            tool = self._tools.get(name)
            if tool:
                filtered_reg.register(tool)
        return filtered_reg

    async def execute(self, name: str, arguments: dict[str, Any]) -> str:
        """Execute a tool by name."""
        tool = self._tools.get(name)
        if not tool:
            return f"Error: Unknown tool '{name}'"
        try:
            return await tool.execute(**arguments)
        except Exception as e:
            logger.error(f"Tool '{name}' failed: {e}")
            return f"Error executing '{name}': {e}"
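The registry's dispatch contract is worth making explicit: unknown tools and raising tools both come back as error strings, never as exceptions that could crash the agent's tool-calling loop. A standalone sketch of that contract (using a plain dict of async callables rather than the real registry):

```python
import asyncio

async def dispatch(tools: dict, name: str, arguments: dict) -> str:
    # Mirrors ToolRegistry.execute: all failure modes become strings.
    fn = tools.get(name)
    if fn is None:
        return f"Error: Unknown tool '{name}'"
    try:
        return await fn(**arguments)
    except Exception as e:
        return f"Error executing '{name}': {e}"

async def boom(**_) -> str:
    raise ValueError("nope")

missing = asyncio.run(dispatch({}, "missing", {}))
failed = asyncio.run(dispatch({"boom": boom}, "boom", {}))
print(missing)  # Error: Unknown tool 'missing'
print(failed)   # Error executing 'boom': nope
```

Returning the error text to the model, instead of raising, lets the agent read the failure and retry or choose another tool.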