# OSA — the Optimal System Agent
> Signal Theory-optimized proactive AI agent. Local-first. Open source. BEAM-powered.
## Quick Start
```bash
curl -fsSL https://raw.githubusercontent.com/Miosa-osa/OSA/main/install.sh | bash
osa
```
One command installs. One command runs. First run walks you through setup.
---
## Overview
OSA is the intelligence layer of [MIOSA](https://miosa.ai) — a local-first, open-source AI agent built on Elixir/OTP. It runs on your machine, owns your data, and connects to any LLM provider you choose.
Every agent framework processes every message the same way. OSA does not. Before any message reaches the reasoning engine, a **Signal Classifier** decodes its intent, domain, and complexity. Simple tasks go to fast, cheap models. Complex multi-step tasks get decomposed into parallel sub-agents with the right models for each step. The agent learns from every session.
The theoretical foundation is [Signal Theory](https://zenodo.org/records/18774174) — a framework for maximizing signal-to-noise ratio in AI communication, grounded in Shannon, Ashby, Beer, and Wiener.
---
## Architecture
### Execution Flow
```
User Input
│
├─ Message Queue (300ms debounce batching)
│
├─ UserPromptSubmit Hook (can modify/block)
│
├─ Budget + Turn Limit Check
│
├─ Prompt Injection Guard (3-tier detection)
│
├─ Context Compaction Pipeline
│ ├─ Micro-compact (no LLM — truncate old tool results)
│ ├─ Strip tool args → Merge consecutive → Summarize warm zone
│ ├─ Structured 8-section compression (iterative, preserves details)
│ ├─ Context collapse (413 recovery — withhold large results)
│ └─ Post-compact restore (re-inject files, tasks, workspace)
│
├─ Pre-Directives (explore, delegation, task creation nudges)
│
├─ Genre Routing (low-signal → short-circuit, skip full loop)
│
├─ Context Build (cached static base + dynamic per-request)
│ ├─ Async memory prefetch (fires parallel while context builds)
│ ├─ Effort-aware thinking config (low/medium/high/max)
│ ├─ Agent message injection (inter-agent communication)
│ └─ Iteration budget tracking
│
├─ LLM Streaming Call
│ ├─ Streaming tool execution (tools fire MID-STREAM)
│ ├─ Fallback model chain (auto-switch on rate limit/failure)
│ └─ Max output token recovery (bump + retry on truncation)
│
├─ Tool Execution
│ ├─ Concurrency-aware dispatch (parallel safe, sequential unsafe)
│ ├─ Permission check (tiers + pattern rules + interactive prompt)
│ ├─ Pre-hooks (security, spend guard, MCP cache)
│ ├─ Tool result persistence (large → disk with reference)
│ ├─ Diff generation (unified diff for file operations)
│ ├─ Post-hooks (cost, telemetry, learning, episodic)
│ └─ Doom loop detection (halt on repeated failures)
│
├─ Behavioral Nudges (read-before-write, code-in-text, verification)
│
├─ Stop Hooks (can override response or force continuation)
│
└─ Post-Response
├─ Output guardrail (scrub system prompt leaks)
├─ Post-response hooks (transcript, auto-memory, session save)
├─ Telemetry recording
└─ SSE broadcast to all connected clients
```
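The first stage above, 300ms debounce batching, can be sketched as a small GenServer that buffers messages and flushes only after the input goes quiet. This is an illustrative sketch, not OSA's actual module; the `DebounceQueue` name and its API are assumptions:

```elixir
defmodule DebounceQueue do
  # Illustrative sketch of 300ms debounce batching (not OSA's real module).
  use GenServer

  @debounce_ms 300

  def start_link(flush_fun), do: GenServer.start_link(__MODULE__, flush_fun)
  def push(pid, msg), do: GenServer.cast(pid, {:push, msg})

  @impl true
  def init(flush_fun), do: {:ok, %{buffer: [], timer: nil, flush: flush_fun}}

  @impl true
  def handle_cast({:push, msg}, state) do
    # Restart the timer on every message: the flush fires only after
    # 300ms of silence, so rapid-fire inputs batch into one agent turn.
    if state.timer, do: Process.cancel_timer(state.timer)
    timer = Process.send_after(self(), :flush, @debounce_ms)
    {:noreply, %{state | buffer: [msg | state.buffer], timer: timer}}
  end

  @impl true
  def handle_info(:flush, state) do
    state.flush.(Enum.reverse(state.buffer))
    {:noreply, %{state | buffer: [], timer: nil}}
  end
end
```

Debouncing at the queue rather than per-message keeps one LLM call per burst of user input instead of one per keystroke-sized fragment.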
### System Layers
```
┌─────────────────────────────────────────────────────────────────────┐
│ Channels: Rust TUI │ Desktop (Tauri) │ HTTP/SSE │ Telegram │ ... │
├─────────────────────────────────────────────────────────────────────┤
│ Signal Classifier: S = (Mode, Genre, Type, Format, Weight) │
├─────────────────────────────────────────────────────────────────────┤
│ Events.Bus (Goldrush compiled BEAM bytecode dispatch) │
├──────────┬──────────┬───────────┬──────────┬────────────────────────┤
│ Agent │ Orchest- │ Swarm │ Scheduler│ Healing Orchestrator │
│ Loop │ rator │ (4 modes)│ (cron) │ (self-repair) │
│ (ReAct) │ (14 roles│ │ │ │
│ │ bg/fork/│ Teams + │ │ Speculative Executor │
│ │ worktree│ NervSys │ │ │
├──────────┴──────────┴───────────┴──────────┴────────────────────────┤
│ Context │ Compactor │ Memory │ Settings │ Hooks │ Permissions │
│ Builder │ (6-step) │ (SQLite │ Cascade │ (25 │ (pattern │
│ │ │ +ETS │ (4-layer)│ events,│ rules, │
│ │ │ +FTS5) │ │ 4 types│ interactive) │
├──────────┴───────────┴─────────┴──────────┴─────────┴───────────────┤
│ 7 Providers │ 47 Tools │ Telemetry │ Credential Pool │ Soul│
│ + Fallback │ (deferred)│ (per-tool) │ (key rotation) │ │
└───────────────┴────────────┴─────────────┴───────────────────┴─────┘
```
**Runtime:** Elixir 1.17+ / Erlang OTP 27+ | **HTTP:** Bandit | **DB:** SQLite + ETS + persistent_term | **Events:** Goldrush | **HTTP Client:** Req
---
## Features
### Signal Classification
Every input is classified into a 5-tuple before it reaches the reasoning engine:
```
S = (Mode, Genre, Type, Format, Weight)
Mode — What to do: BUILD, EXECUTE, ANALYZE, MAINTAIN, ASSIST
Genre — Speech act: DIRECT, INFORM, COMMIT, DECIDE, EXPRESS
Type — Domain category: question, request, issue, scheduling, summary
Format — Container: message, command, document, notification
Weight — Complexity: 0.0 (trivial) → 1.0 (critical, multi-step)
```
The classifier is LLM-primary with a deterministic regex fallback. Results are cached in ETS (SHA256 key, 10-minute TTL). This is what makes tier routing possible.
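The deterministic fallback path might look like the sketch below: a regex/keyword pass that produces the same 5-tuple shape the LLM-primary path does. Module names, keyword lists, and weight values here are illustrative assumptions, not OSA's internals:

```elixir
defmodule Signal do
  # Mirrors the 5-tuple S = (Mode, Genre, Type, Format, Weight) above.
  defstruct [:mode, :genre, :type, :format, :weight]
end

defmodule FallbackClassifier do
  # Hypothetical deterministic fallback: cheap heuristics stand in
  # when the LLM classifier is unavailable. Thresholds are examples.
  @greeting ~r/^\s*(hi|hey|hello|thanks?|yo)\b/i

  def classify(text) do
    lower = String.downcase(text)

    cond do
      Regex.match?(@greeting, lower) ->
        %Signal{mode: :assist, genre: :express, type: :question,
                format: :message, weight: 0.1}

      String.contains?(lower, ["build", "implement", "refactor", "deploy"]) ->
        %Signal{mode: :build, genre: :direct, type: :request,
                format: :message, weight: 0.7}

      true ->
        %Signal{mode: :analyze, genre: :inform, type: :question,
                format: :message, weight: 0.4}
    end
  end
end
```

Because both paths emit the same struct, downstream tier routing never needs to know which classifier produced the signal.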
### Multi-Provider LLM Routing
7 providers, 3 tiers, weight-based dispatch:
| Weight Range | Tier | Use Case |
|---|---|---|
| 0.00–0.35 | Utility | Fast, cheap — greetings, lookups, summaries |
| 0.35–0.65 | Specialist | Balanced — code tasks, analysis, writing |
| 0.65–1.00 | Elite | Full reasoning — architecture, orchestration, novel problems |
| Provider | Notes |
|---|---|
| **Ollama Local** | Runs on your machine — fully private, no API cost |
| **Ollama Cloud** | Fast cloud inference, no GPU required |
| **Anthropic** | Claude Opus, Sonnet, Haiku |
| **OpenAI** | GPT-4o, GPT-4o-mini, o-series |
| **OpenRouter** | 200+ models behind a single API key |
| **MIOSA** | Fully managed Optimal agent endpoint |
| **Custom** | Any OpenAI-compatible endpoint |
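Weight-based dispatch over the tier table reduces to a few guard clauses. A minimal sketch, assuming boundary weights round up to the heavier tier (the function name is illustrative, not OSA's internal API):

```elixir
defmodule TierRouter do
  # Maps a classifier weight (0.0–1.0) to the tier table above.
  # Boundary values (0.35, 0.65) fall into the heavier tier here;
  # that choice is an assumption, not documented OSA behavior.
  def tier(w) when w < 0.35, do: :utility
  def tier(w) when w < 0.65, do: :specialist
  def tier(w) when w <= 1.0, do: :elite
end
```

Usage: `TierRouter.tier(0.2)` returns `:utility`, so a greeting never burns Elite-tier tokens.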
### Autonomous Task Orchestration
14 specialized agent roles. Explore → Plan → Execute protocol:
```
User: "Build a REST API with auth, tests, and docs"
OSA:
├── Explorer agent — scans codebase (read-only, fast)
├── Planner agent — designs architecture + implementation plan
├── Backend agent — writes API + auth middleware
├── Tester agent — writes test suite
└── Doc-writer agent — writes documentation
```
Sub-agents share a task list and communicate via ETS-backed mailboxes.
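An ETS-backed mailbox can be sketched in a few lines: each agent's pending messages live under its key in a `duplicate_bag` table, so any process in the swarm can post without going through the recipient. The table layout and function names below are assumptions, not OSA's actual schema:

```elixir
defmodule Mailbox do
  # Illustrative ETS-backed inter-agent mailbox. :duplicate_bag keeps
  # every message, even duplicates; :public lets any agent process post.
  def new, do: :ets.new(:mailboxes, [:duplicate_bag, :public])

  def post(table, agent, msg), do: :ets.insert(table, {agent, msg})

  # Fetch and clear all messages addressed to one agent.
  def drain(table, agent) do
    msgs = for {^agent, msg} <- :ets.lookup(table, agent), do: msg
    :ets.delete(table, agent)
    msgs
  end
end
```

Using ETS rather than plain process mailboxes means messages survive an individual agent crash and can be inspected by a supervisor.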
### Multi-Agent Swarm Patterns
```elixir
:parallel # All agents work simultaneously, results merged
:pipeline # Each agent's output feeds the next
:debate # Agents argue positions, consensus emerges
:review_loop # Build → review → fix → re-review (iteration budget enforced)
```
Swarms use ETS-backed team coordination: shared task lists, per-agent mailboxes, scratchpads, and configurable iteration limits.
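The `:parallel` mode above can be sketched with `Task.async_stream`, treating each agent as a function of the shared prompt. Real OSA swarms coordinate through the ETS-backed teams described here; this reduction of an agent to a plain function is an assumption for illustration:

```elixir
defmodule Swarm do
  # Sketch of the :parallel pattern: run every agent concurrently and
  # merge results. Task.async_stream returns results in agent order.
  def parallel(agents, prompt) do
    agents
    |> Task.async_stream(fn agent -> agent.(prompt) end, timeout: 30_000)
    |> Enum.map(fn {:ok, result} -> result end)
  end
end
```

The same shape extends to `:pipeline` by replacing the concurrent map with `Enum.reduce/3`, threading each agent's output into the next.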
### 47 Built-in Tools
| Category | Tools |
|---|---|
| **File** | `file_read`, `file_write`, `file_edit`, `file_glob`, `file_grep`, `dir_list`, `multi_file_edit` |
| **System** | `shell_execute`, `git`,
[truncated…]