
Reference Examples for Bot Development

Sanitized code examples from the agent-core project for building Telegram and Matrix bots that integrate with LLM backends.

Files

Telegram Bot with Forum Topics

telegram_bot_topics.py — Complete Telegram bot using python-telegram-bot 22+.

Key patterns:

  • Forum topics: Create/rename topics, route messages by message_thread_id
  • Message types: Text, photos, voice/audio, documents — each with its own handler
  • Streaming responses: Progressive message editing as LLM generates text
  • Outbox pattern: LLM writes to outbox.jsonl, bot sends files after response
  • Topic naming: LLM generates topic labels, bot auto-renames forum topics
  • Voice transcription: Download voice → external STT → send text to LLM
  • Proxy support: SOCKS5 proxy with retry logic for unreliable connections

Dependencies: python-telegram-bot>=22.0, httpx, pyyaml
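The outbox pattern can be sketched with the standard library alone. The entry schema shown here (a `type` and a `path` field) is an assumption for illustration, not necessarily the project's actual format:

```python
import json
from pathlib import Path


def drain_outbox(path: str) -> list[dict]:
    """Read and clear outbox.jsonl, returning parsed entries.

    Each line is assumed to be a JSON object such as
    {"type": "file", "path": "report.pdf"} -- hypothetical schema.
    """
    p = Path(path)
    if not p.exists():
        return []
    entries = []
    for line in p.read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            entries.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # skip malformed lines rather than failing the whole send
    p.unlink()  # clear the outbox so each entry is sent at most once
    return entries
```

After each LLM turn, the bot would call `drain_outbox(...)` and send every listed file through the platform API.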

Matrix Bot with Room Management

matrix_bot_rooms.py — Matrix bot using matrix-nio with E2E encryption.

Key patterns:

  • Room creation: Create private encrypted rooms, invite users, set avatars
  • Room modes: Per-room behavior (quiet/context/full) stored in config.json
  • Multi-user: Users map with per-user profiles loaded from YAML
  • E2E encryption: Crypto store, key upload, cross-signing, device verification
  • Media handling: Download + decrypt encrypted media (images, voice, files)
  • Message queuing: Persistent queue (queue.jsonl) for messages arriving while busy
  • Status threads: Post tool progress as thread replies under the user's message
  • Session management: Per-room Claude sessions with idle timeout, cancel support
  • Room naming: Auto-generate room names from conversation content via local LLM
  • Bot commands: !new, !mode, !status, !security, !help
  • Security modes: strict/guarded/open for E2E device verification policy
  • Typing indicators: Show typing while LLM processes

Dependencies: matrix-nio[e2e]>=0.24, httpx, markdown, pyyaml
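The persistent message queue is essentially an append-only JSONL file that survives restarts. A minimal sketch, assuming an illustrative record shape (the real queue.jsonl schema may differ):

```python
import json
from pathlib import Path


class PersistentQueue:
    """Append-only JSONL queue so messages received while the bot is
    busy survive a restart. Record shape ({"room": ..., "body": ...})
    is an illustrative assumption, not the project's actual schema.
    """

    def __init__(self, path: str = "queue.jsonl"):
        self.path = Path(path)

    def push(self, record: dict) -> None:
        # Append one JSON object per line; flushed on close
        with self.path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    def drain(self) -> list[dict]:
        """Return all queued records and remove the backing file."""
        if not self.path.exists():
            return []
        records = [
            json.loads(line)
            for line in self.path.read_text(encoding="utf-8").splitlines()
            if line.strip()
        ]
        self.path.unlink()
        return records
```

When the current LLM turn finishes, the bot drains the queue and processes the backlog in order.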

Shared: LLM Session Manager

llm_session.py — Process manager for Claude Code CLI (adaptable to any LLM).

Key patterns:

  • Session persistence: Save/restore session IDs for conversation continuity
  • Stream parsing: Parse stream-json output for real-time tool/status tracking
  • Idle timeout: Watchdog task resets on output, kills on silence
  • Cancel support: External event to kill LLM process mid-turn
  • Fallback chain: Primary LLM fails → try secondary provider
  • Sandbox: bubblewrap (bwrap) wrapper for filesystem isolation
  • Status callbacks: Emit events for tool_start, tool_end, thinking text
  • Environment isolation: Strip sensitive env vars before spawning subprocess
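The idle-timeout watchdog can be expressed as a timer that resets on every line of subprocess output. A minimal asyncio sketch (function name and behavior are illustrative, not the actual llm_session.py API):

```python
import asyncio


async def read_with_idle_timeout(stream: asyncio.StreamReader, idle_seconds: float):
    """Yield lines from the CLI's stdout, failing fast on silence.

    The timer resets on every line; a gap longer than idle_seconds
    raises TimeoutError so the caller can kill the process.
    """
    while True:
        try:
            line = await asyncio.wait_for(stream.readline(), timeout=idle_seconds)
        except asyncio.TimeoutError:
            raise TimeoutError(f"no output for {idle_seconds}s")
        if not line:  # EOF: the process closed its stdout
            return
        yield line
```

The caller wraps the loop in a try/except and terminates the subprocess on timeout, which is equivalent to a separate watchdog task but keeps the cancellation logic in one place.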

Shared: Config

config_example.py — Simple dataclass config loaded from environment variables.
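The pattern looks roughly like this; the field names and environment variables below are illustrative assumptions, not the actual contents of config_example.py:

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Config:
    """Hypothetical fields; the real config may differ."""
    bot_token: str
    llm_command: str = "claude"
    idle_timeout: float = 120.0

    @classmethod
    def from_env(cls) -> "Config":
        return cls(
            bot_token=os.environ["BOT_TOKEN"],  # required: fail loudly if missing
            llm_command=os.environ.get("LLM_COMMAND", "claude"),
            idle_timeout=float(os.environ.get("IDLE_TIMEOUT", "120")),
        )
```

A frozen dataclass keeps the config immutable after startup, and the required/optional split is visible at a glance.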

Architecture

User ──► Bot (Telegram/Matrix) ──► LLM Session Manager ──► Claude CLI (sandboxed)
                │                          │
                ├── media download         ├── session persistence
                ├── typing indicators      ├── stream parsing
                ├── outbox file sending    ├── timeout watchdog
                └── topic/room management  └── fallback provider

The bot and LLM session are decoupled — the session manager doesn't know about Telegram or Matrix. It takes a message string, runs the CLI process, and returns text + status callbacks. The bot handles all platform-specific concerns (formatting, media, rooms/topics).
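One way to express that decoupling is a minimal interface the bots program against. The names here are illustrative, not the actual llm_session.py API:

```python
from typing import Callable, Protocol

# Status callback: (event, detail), e.g. ("tool_start", "Read")
StatusCallback = Callable[[str, str], None]


class LLMSession(Protocol):
    """Platform-agnostic contract: text in, text out, plus status events."""

    async def send(self, message: str, on_status: StatusCallback) -> str:
        ...


class EchoSession:
    """Trivial stand-in for tests; a real session spawns the CLI process."""

    async def send(self, message: str, on_status: StatusCallback) -> str:
        on_status("thinking", message)
        return f"echo: {message}"
```

Because the session only sees strings and callbacks, the same manager can back a Telegram bot, a Matrix bot, or a test harness without modification.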