docs: stabilize website diagrams

parent d5b64ebdb3
commit 259208bfe4
18 changed files with 1504 additions and 112 deletions
@@ -26,7 +26,7 @@ Make it a **Tool** when:
 
 Bundled skills live in `skills/` organized by category. Official optional skills use the same structure in `optional-skills/`:
 
-```
+```text
 skills/
 ├── research/
 │   └── arxiv/
@@ -28,34 +28,48 @@ The Python environment framework documented here lives under the repo's `environ
 
 The environment system is built on a three-layer inheritance chain:
 
-```
-Atropos Framework
-┌───────────────────────┐
-│        BaseEnv        │  (atroposlib)
-│ - Server management   │
-│ - Worker scheduling   │
-│ - Wandb logging       │
-│ - CLI (serve/process/ │
-│   evaluate)           │
-└───────────┬───────────┘
-            │ inherits
-┌───────────┴───────────┐
-│  HermesAgentBaseEnv   │  environments/hermes_base_env.py
-│ - Terminal backend    │
-│ - Tool resolution     │
-│ - Agent loop engine   │
-│ - ToolContext         │
-└───────────┬───────────┘
-            │ inherits
-   ┌─────────────────────┼─────────────────────┐
-   │                     │                     │
-TerminalTestEnv     HermesSweEnv     TerminalBench2EvalEnv
-(stack testing)    (SWE training)      (benchmark eval)
-                                             │
-                                    ┌────────┼────────┐
-                                    │                 │
-                             TBLiteEvalEnv     YCBenchEvalEnv
-                            (fast benchmark)   (long-horizon)
+```mermaid
+classDiagram
+    class BaseEnv {
+        Server management
+        Worker scheduling
+        Wandb logging
+        CLI: serve / process / evaluate
+    }
+
+    class HermesAgentBaseEnv {
+        Terminal backend configuration
+        Tool resolution
+        Agent loop engine
+        ToolContext access
+    }
+
+    class TerminalTestEnv {
+        Stack testing
+    }
+
+    class HermesSweEnv {
+        SWE training
+    }
+
+    class TerminalBench2EvalEnv {
+        Benchmark evaluation
+    }
+
+    class TBLiteEvalEnv {
+        Fast benchmark
+    }
+
+    class YCBenchEvalEnv {
+        Long-horizon benchmark
+    }
+
+    BaseEnv <|-- HermesAgentBaseEnv
+    HermesAgentBaseEnv <|-- TerminalTestEnv
+    HermesAgentBaseEnv <|-- HermesSweEnv
+    HermesAgentBaseEnv <|-- TerminalBench2EvalEnv
+    TerminalBench2EvalEnv <|-- TBLiteEvalEnv
+    TerminalBench2EvalEnv <|-- YCBenchEvalEnv
 ```
 
 ### BaseEnv (Atropos)
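The class hierarchy in the hunk above can be reproduced with stand-in classes — a toy sketch only; the real classes live in atroposlib and environments/hermes_base_env.py and carry the responsibilities listed in the diagram:

```python
# Toy stand-ins for the three-layer chain; real implementations live in
# atroposlib (BaseEnv) and environments/hermes_base_env.py (HermesAgentBaseEnv).
class BaseEnv: ...                                    # server mgmt, scheduling, CLI
class HermesAgentBaseEnv(BaseEnv): ...                # agent loop, tool resolution
class TerminalTestEnv(HermesAgentBaseEnv): ...        # stack testing
class HermesSweEnv(HermesAgentBaseEnv): ...           # SWE training
class TerminalBench2EvalEnv(HermesAgentBaseEnv): ...  # benchmark eval
class TBLiteEvalEnv(TerminalBench2EvalEnv): ...       # fast benchmark
class YCBenchEvalEnv(TerminalBench2EvalEnv): ...      # long-horizon benchmark

# Every concrete environment is ultimately a BaseEnv.
assert issubclass(YCBenchEvalEnv, BaseEnv)
```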
@@ -45,27 +45,8 @@ hermes -w -q "Fix issue #123"   # Single query in worktree
 
 ## Interface Layout
 
-```text
-┌─────────────────────────────────────────────────┐
-│            HERMES-AGENT ASCII Logo              │
-│ ┌─────────────┐ ┌────────────────────────────┐  │
-│ │  Caduceus   │ │ Model: claude-sonnet-4     │  │
-│ │  ASCII Art  │ │ Terminal: local            │  │
-│ │             │ │ Working Dir: /home/user    │  │
-│ │             │ │ Available Tools: 19        │  │
-│ │             │ │ Available Skills: 12       │  │
-│ └─────────────┘ └────────────────────────────┘  │
-├─────────────────────────────────────────────────┤
-│ Conversation output scrolls here...             │
-│                                                 │
-│ (◕‿◕✿) 🧠 pondering... (2.3s)                   │
-│ ✧٩(ˊᗜˋ*)و✧ got it! (2.3s)                       │
-│                                                 │
-│ Assistant: Hello! How can I help you today?     │
-├─────────────────────────────────────────────────┤
-│ ❯ [Fixed input area at bottom]                  │
-└─────────────────────────────────────────────────┘
-```
+<img className="docs-terminal-figure" src="/img/docs/cli-layout.svg" alt="Stylized preview of the Hermes CLI layout showing the banner, conversation area, and fixed input prompt." />
+<p className="docs-figure-caption">The Hermes CLI banner, conversation stream, and fixed input prompt rendered as a stable docs figure instead of fragile text art.</p>
 
 The welcome banner shows your model, terminal backend, working directory, available tools, and installed skills at a glance.
@@ -100,7 +100,7 @@ In the current implementation, distributions assign a probability to **each indi
 
 All output goes to `data/<run_name>/`:
 
-```
+```text
 data/my_run/
 ├── trajectories.jsonl      # Combined final output (all batches merged)
 ├── batch_0.jsonl           # Individual batch results
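Since these files are JSON Lines (one JSON record per line), the merged output can be consumed with a few lines of Python. A minimal sketch — the record fields here are invented for illustration, not the real trajectory schema:

```python
import json
import pathlib
import tempfile

def read_jsonl(path):
    """Parse a .jsonl file into a list of dicts, skipping blank lines."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

# Self-contained demo: a temp file stands in for data/my_run/trajectories.jsonl.
demo = pathlib.Path(tempfile.mkdtemp()) / "trajectories.jsonl"
demo.write_text('{"batch": 0, "score": 1.0}\n{"batch": 1, "score": 0.5}\n')
records = read_jsonl(demo)
```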
@@ -103,7 +103,7 @@ Context files are loaded by `build_context_files_prompt()` in `agent/prompt_buil
 
 The final prompt section looks roughly like:
 
-```
+```text
 # Project Context
 
 The following project context files have been loaded and should be followed:
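That section could be assembled along these lines — a guess at the shape of `build_context_files_prompt()`, not the actual Hermes implementation, with the per-file delimiter being an assumption:

```python
# Hedged sketch of assembling the "# Project Context" section shown above.
def build_context_files_prompt(context_files):
    lines = [
        "# Project Context",
        "",
        "The following project context files have been loaded and should be followed:",
        "",
    ]
    for name, body in context_files.items():
        lines += [f"--- {name} ---", body, ""]  # delimiter format is an assumption
    return "\n".join(lines)

prompt = build_context_files_prompt({"AGENTS.md": "Prefer small, focused commits."})
```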
@@ -207,16 +207,17 @@ honcho: {}
 
 Honcho context is fetched asynchronously to avoid blocking the response path:
 
-```
-Turn N:
-  user message
-    → consume cached context (from previous turn's background fetch)
-    → inject into system prompt (user representation, AI representation, dialectic)
-    → LLM call
-    → response
-    → fire background fetch for next turn
-         → fetch context   ─┐
-         → fetch dialectic ─┴→ cache for Turn N+1
+```mermaid
+flowchart TD
+    user["User message"] --> cache["Consume cached Honcho context<br/>from the previous turn"]
+    cache --> prompt["Inject user, AI, and dialectic context<br/>into the system prompt"]
+    prompt --> llm["LLM call"]
+    llm --> response["Assistant response"]
+    response --> fetch["Start background fetch for Turn N+1"]
+    fetch --> ctx["Fetch context"]
+    fetch --> dia["Fetch dialectic"]
+    ctx --> next["Cache for the next turn"]
+    dia --> next
 ```
 
 Turn 1 is a cold start (no cache). All subsequent turns consume cached results with zero HTTP latency on the response path. The system prompt on turn 1 uses only static context to preserve prefix cache hits at the LLM provider.
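The fetch-ahead pattern in that hunk can be sketched with asyncio. This is a minimal illustration, where `FetchAheadCache`, `fetch_context`, and `fetch_dialectic` are stand-ins, not the real Honcho client API:

```python
import asyncio

class FetchAheadCache:
    """Minimal sketch of the fetch-ahead pattern; not the real Honcho integration."""

    def __init__(self):
        self._pending = None  # task started at the end of the previous turn

    async def consume(self):
        """Return last turn's background fetch, or None on the cold-start turn."""
        if self._pending is None:
            return None
        return await self._pending  # usually already resolved: no hot-path latency

    def prefetch(self, fetch_context, fetch_dialectic):
        """Called after the response is sent: fetch both pieces for the next turn."""
        async def _gather():
            return await asyncio.gather(fetch_context(), fetch_dialectic())
        self._pending = asyncio.create_task(_gather())

async def demo():
    cache = FetchAheadCache()
    assert await cache.consume() is None            # turn 1: cold start, static context only

    async def fetch_context():   return "user/AI representations"
    async def fetch_dialectic(): return "dialectic insight"

    cache.prefetch(fetch_context, fetch_dialectic)  # fired after turn 1's response
    return await cache.consume()                    # turn 2: served from cache

context, dialectic = asyncio.run(demo())
```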
@@ -12,7 +12,7 @@ The hooks system lets you run custom code at key points in the agent lifecycle
 
 Each hook is a directory under `~/.hermes/hooks/` containing two files:
 
-```
+```text
 ~/.hermes/hooks/
 └── my-hook/
     ├── HOOK.yaml        # Declares which events to listen for
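A loader for this layout only needs to scan for subdirectories containing a `HOOK.yaml` — a sketch of the idea, not the actual Hermes loader, with placeholder file contents:

```python
import pathlib
import tempfile

def discover_hooks(root):
    """Return hook names: subdirectories of `root` that contain a HOOK.yaml."""
    root = pathlib.Path(root)
    if not root.is_dir():
        return []
    return sorted(d.name for d in root.iterdir() if (d / "HOOK.yaml").is_file())

# Self-contained demo: a temp dir stands in for ~/.hermes/hooks/,
# and the YAML contents below are a placeholder, not the real schema.
hooks_dir = pathlib.Path(tempfile.mkdtemp())
(hooks_dir / "my-hook").mkdir()
(hooks_dir / "my-hook" / "HOOK.yaml").write_text("events: []\n")
found = discover_hooks(hooks_dir)
```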
@@ -174,21 +174,17 @@ The training loop:
 
 ## Architecture Diagram
 
-```
-┌─────────────────┐      ┌──────────────────┐      ┌─────────────────┐
-│   Atropos API   │◄─────│   Environment    │─────►│  OpenAI/sglang  │
-│    (run-api)    │      │  (BaseEnv impl)  │      │  Inference API  │
-│    Port 8000    │      │                  │      │    Port 8001    │
-└────────┬────────┘      └──────────────────┘      └────────┬────────┘
-         │                                                  │
-         │  Batches (tokens + scores + logprobs)            │
-         │                                                  │
-         ▼                                                  │
-┌─────────────────┐                                         │
-│  Tinker Trainer │◄────────────────────────────────────────┘
-│ (LoRA training) │   Serves inference via FastAPI
-│    + FastAPI    │   Trains via Tinker ServiceClient
-└─────────────────┘
+```mermaid
+flowchart LR
+    api["Atropos API<br/>run-api<br/>port 8000"]
+    env["Environment<br/>BaseEnv implementation"]
+    infer["OpenAI / sglang<br/>inference API<br/>port 8001"]
+    trainer["Tinker Trainer<br/>LoRA training + FastAPI"]
+
+    env <--> api
+    env --> infer
+    api -->|"batches: tokens, scores, logprobs"| trainer
+    trainer -->|"serves inference"| infer
 ```
 
 ## Creating Custom Environments
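The batch flow in that diagram amounts to a poll-train loop. The sketch below uses stub callables, since the real run-api endpoints and batch schema are not shown in this excerpt; the `tokens`/`scores`/`logprobs` field names are assumptions taken from the diagram's edge label:

```python
import time

def train_loop(fetch_batch, train_step, max_batches, poll_interval=1.0):
    """Poll for scored batches and run a training step on each (sketch only)."""
    trained = 0
    while trained < max_batches:
        batch = fetch_batch()  # stands in for an HTTP GET against the run-api
        if batch is None:      # no batch ready yet: back off and retry
            time.sleep(poll_interval)
            continue
        train_step(batch["tokens"], batch["scores"], batch["logprobs"])
        trained += 1
    return trained

# Demo with in-memory stubs in place of the API and the trainer.
queue = [{"tokens": [[1, 2]], "scores": [0.5], "logprobs": [[-0.1, -0.2]]}]
seen = []
done = train_loop(
    fetch_batch=lambda: queue.pop() if queue else None,
    train_step=lambda tokens, scores, logprobs: seen.append(scores),
    max_batches=1,
)
```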
@@ -140,7 +140,7 @@ When a missing value is encountered, Hermes asks for it securely only when the s
 
 ## Skill Directory Structure
 
-```
+```text
 ~/.hermes/skills/                  # Single source of truth
 ├── mlops/                         # Category directory
 │   ├── axolotl/
@@ -12,29 +12,33 @@ For the full voice feature set — including CLI microphone mode, spoken replies
 
 ## Architecture
 
-```text
-┌───────────────────────────────────────────────────────────────────────────────────────┐
-│                                    Hermes Gateway                                     │
-├───────────────────────────────────────────────────────────────────────────────────────┤
-│                                                                                       │
-│  ┌──────────┐ ┌─────────┐ ┌──────────┐ ┌───────┐ ┌───────┐ ┌───────┐ ┌────┐           │
-│  │ Telegram │ │ Discord │ │ WhatsApp │ │ Slack │ │Signal │ │ Email │ │ HA │           │
-│  │ Adapter  │ │ Adapter │ │ Adapter  │ │Adapter│ │Adapter│ │Adapter│ │Adpt│           │
-│  └────┬─────┘ └────┬────┘ └────┬─────┘ └──┬────┘ └──┬────┘ └──┬────┘ └─┬──┘           │
-│       │            │           │          │         │         │        │              │
-│       └────────────┴───────────┴──────────┴─────────┴─────────┴────────┘              │
-│                                     │                                                 │
-│                            ┌────────▼────────┐                                        │
-│                            │  Session Store  │                                        │
-│                            │   (per-chat)    │                                        │
-│                            └────────┬────────┘                                        │
-│                                     │                                                 │
-│                            ┌────────▼────────┐                                        │
-│                            │     AIAgent     │                                        │
-│                            │   (run_agent)   │                                        │
-│                            └─────────────────┘                                        │
-│                                                                                       │
-└───────────────────────────────────────────────────────────────────────────────────────┘
+```mermaid
+flowchart TB
+    subgraph Gateway["Hermes Gateway"]
+        subgraph Adapters["Platform adapters"]
+            tg[Telegram]
+            dc[Discord]
+            wa[WhatsApp]
+            sl[Slack]
+            sig[Signal]
+            em[Email]
+            ha[Home Assistant]
+        end
+
+        store["Session store<br/>per chat"]
+        agent["AIAgent<br/>run_agent.py"]
+        cron["Cron scheduler<br/>ticks every 60s"]
+    end
+
+    tg --> store
+    dc --> store
+    wa --> store
+    sl --> store
+    sig --> store
+    em --> store
+    ha --> store
+    store --> agent
+    cron --> store
 ```
 
 Each platform adapter receives messages, routes them through a per-chat session store, and dispatches them to the AIAgent for processing. The gateway also runs the cron scheduler, ticking every 60 seconds to execute any due jobs.
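That routing reduces to a store keyed by (platform, chat id) with every adapter calling the same dispatch path — a toy sketch; the real session store and AIAgent interfaces are not shown in this excerpt:

```python
class SessionStore:
    """Toy per-chat session store keyed by (platform, chat_id)."""

    def __init__(self):
        self._sessions = {}

    def get(self, platform, chat_id):
        return self._sessions.setdefault((platform, chat_id), {"history": []})

def handle_message(store, platform, chat_id, text, run_agent):
    """What every adapter does: resolve the chat's session, then dispatch to the agent."""
    session = store.get(platform, chat_id)
    session["history"].append(("user", text))
    reply = run_agent(session, text)
    session["history"].append(("assistant", reply))
    return reply

# Demo: two platforms share one store but keep separate per-chat sessions.
store = SessionStore()
reply = handle_message(store, "telegram", 42, "hello", lambda s, t: f"echo: {t}")
```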
@@ -88,15 +88,8 @@ Session IDs are shown when you exit a CLI session, and can be found with `hermes
 
 When you resume a session, Hermes displays a compact recap of the previous conversation in a styled panel before the input prompt:
 
-```text
-╭─────────────────────────── Previous Conversation ────────────────────────────╮
-│ ● You: What is Python?                                                       │
-│ ◆ Hermes: Python is a high-level programming language.                       │
-│ ● You: How do I install it?                                                  │
-│ ◆ Hermes: [3 tool calls: web_search, web_extract, terminal]                  │
-│ ◆ Hermes: You can download Python from python.org...                         │
-╰──────────────────────────────────────────────────────────────────────────────╯
-```
+<img className="docs-terminal-figure" src="/img/docs/session-recap.svg" alt="Stylized preview of the Previous Conversation recap panel shown when resuming a Hermes session." />
+<p className="docs-figure-caption">Resume mode shows a compact recap panel with recent user and assistant turns before returning you to the live prompt.</p>
 
 The recap:
 - Shows **user messages** (gold `●`) and **assistant responses** (green `◆`)