# InitRunner
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="assets/logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="assets/logo-light.svg">
<img src="assets/logo-light.svg" alt="InitRunner" width="500">
</picture>
</p>
<p align="center">
<a href="https://pypi.org/project/initrunner/"><img src="https://img.shields.io/pypi/v/initrunner?color=%2334D058&v=1" alt="PyPI version"></a>
<a href="https://pypi.org/project/initrunner/"><img src="https://img.shields.io/pypi/dm/initrunner?color=%2334D058" alt="PyPI downloads"></a>
<a href="https://hub.docker.com/r/vladkesler/initrunner"><img src="https://img.shields.io/docker/pulls/vladkesler/initrunner?color=%2334D058" alt="Docker pulls"></a>
<a href="LICENSE-MIT"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache--2.0-%2334D058" alt="MIT OR Apache-2.0"></a>
<a href="https://ai.pydantic.dev/"><img src="https://img.shields.io/badge/PydanticAI-6e56cf?logo=pydantic&logoColor=white" alt="PydanticAI"></a>
<a href="https://discord.gg/GRTZmVcW"><img src="https://img.shields.io/badge/Discord-InitRunner%20Hub-5865F2?logo=discord&logoColor=white" alt="Discord"></a>
</p>
<p align="center">
<a href="https://initrunner.ai/">Website</a> · <a href="https://initrunner.ai/docs">Docs</a> · <a href="https://hub.initrunner.ai/">InitHub</a> · <a href="https://discord.gg/GRTZmVcW">Discord</a> · <a href="https://github.com/vladkesler/initrunner/issues">Issues</a>
</p>
YAML-first AI agent platform. Define an agent's role, tools, knowledge base, and memory in one file. Run it as an interactive chat, a one-shot command, an autonomous daemon with cron/webhook/file-watch triggers, a Telegram/Discord bot, or an OpenAI-compatible API. RAG and persistent memory work out of the box. Manage everything from a web dashboard or native desktop app. Install with `curl` or `pip`, no containers required.
```bash
initrunner run helpdesk -i # docs Q&A with RAG + memory
initrunner run deep-researcher -p "Compare vector databases" # 3-agent research team
initrunner run code-review-team -p "Review the latest commit" # multi-perspective code review
```
15 curated starters, 60+ examples, or define your own.
> **v2026.4.2**: PydanticAI + LangChain agent import. Convert existing agents with `initrunner new --pydantic-ai my_agent.py` or `--langchain`. See the [Changelog](CHANGELOG.md).
## Quickstart
```bash
curl -fsSL https://initrunner.ai/install.sh | sh
initrunner setup # wizard: pick provider, model, API key
```
Or: `uv pip install "initrunner[recommended]"` / `pipx install "initrunner[recommended]"`. See [Installation](docs/getting-started/installation.md).
### Try a starter
Run `initrunner run --list` for the full catalog. The model is auto-detected from your API key.
| Starter | What it does | Kind |
|---------|-------------|------|
| `helpdesk` | Drop your docs in, get a Q&A agent with citations and memory | Agent (RAG) |
| `code-review-team` | Multi-perspective review: architect, security, maintainer | Team |
| `deep-researcher` | 3-agent pipeline: planner, web researcher, synthesizer with shared memory | Team |
| `codebase-analyst` | Index your repo, chat about architecture, learns patterns across sessions | Agent (RAG) |
| `web-researcher` | Search the web and produce structured briefings with citations | Agent |
| `content-pipeline` | Topic researcher, writer, editor/fact-checker via webhook or cron | Compose |
| `telegram-assistant` | Telegram bot with memory and web search | Agent (Daemon) |
| `email-agent` | Monitors inbox, triages messages, drafts replies, alerts Slack on urgent mail | Agent (Daemon) |
| `support-desk` | Sense-routed intake: auto-routes to researcher, responder, or escalator | Compose |
| `memory-assistant` | Personal assistant that remembers across sessions | Agent |
RAG starters auto-ingest on first run. Just `cd` into your project:
```bash
cd ~/myproject
initrunner run codebase-analyst -i # indexes your code, then starts Q&A
```
### Build your own
```bash
initrunner new "a research assistant that summarizes papers" # generates a role.yaml
initrunner run --ingest ./docs/ # or skip YAML entirely, just chat with your docs
```
Browse and install community agents from [InitHub](https://hub.initrunner.ai/): `initrunner search "code review"` / `initrunner install alice/code-reviewer`.
**Docker**, no install needed:
```bash
docker run -d -e OPENAI_API_KEY -p 8100:8100 \
-v initrunner-data:/data ghcr.io/vladkesler/initrunner:latest # dashboard
docker run --rm -it -e OPENAI_API_KEY \
-v initrunner-data:/data ghcr.io/vladkesler/initrunner:latest run -i # chat
```
See the [Docker guide](docs/getting-started/docker.md) for more.
## Define an Agent in YAML
```yaml
apiVersion: initrunner/v1
kind: Agent
metadata:
  name: code-reviewer
  description: Reviews code for bugs and style issues
spec:
  role: |
    You are a senior engineer. Review code for correctness and readability.
    Use git tools to examine changes and read files for context.
  model: { provider: openai, name: gpt-5-mini }
  tools:
    - type: git
      repo_path: .
    - type: filesystem
      root_path: .
      read_only: true
```
```bash
initrunner run reviewer.yaml -p "Review the latest commit"
```
The `model:` section is optional; omit it and InitRunner auto-detects a model from your API key. It works with Anthropic, OpenAI, Google, Groq, Mistral, Cohere, xAI, OpenRouter, Ollama, and any OpenAI-compatible endpoint. There are 28 built-in tools (filesystem, git, HTTP, Python, shell, SQL, search, email, Slack, MCP, audio, PDF extraction, CSV analysis, image generation), and you can [add your own](docs/agents/tool_creation.md) in a single file.
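Switching providers is a one-line edit to the same file. For example, pointing the reviewer above at a local Ollama model instead of OpenAI would just change the `model:` line (the model name here is illustrative):

```yaml
model: { provider: ollama, name: llama3.1 }
```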
## Why InitRunner
A YAML file *is* the agent. Tools, knowledge sources, memory, triggers, model, and guardrails are all declared in one place. You can read it and immediately understand what the agent does. You can diff it, review it in a PR, hand it to a teammate. When you want to switch from GPT to Claude, you change one line. When you want to add RAG, you add an `ingest:` section.
The same file runs as an interactive chat (`-i`), a one-shot command (`-p "..."`), a cron/webhook/file-watch daemon (`--daemon`), or an OpenAI-compatible API (`--serve`). You don't pick a deployment mode upfront and build around it. You pick it at runtime with a flag.
What this gets you in practice: your agent config lives in version control next to your code. New team members read the YAML and understand what the agent does. You review agent changes in PRs like any other config. The agent you prototyped interactively is the same one you deploy as a daemon or API. Same file, different flag.
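As a sketch of the RAG case: adding an `ingest:` section under the agent's `spec:` might look like the fragment below. The field names (`path`, `include`) are assumptions for illustration, not the confirmed schema; check the RAG docs for the real one.

```yaml
spec:
  ingest:
    - path: ./docs/
      include: ["*.md", "*.pdf"]
```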
## How It Compares
| | InitRunner | LangChain | CrewAI | AutoGen |
|---|---|---|---|---|
| **Agent config** | YAML file | Python chains + config | Python classes | Python classes |
| **RAG** | `--ingest ./docs/` (one flag) | Loaders + splitters + vectorstore | RAG tool or custom | External setup |
| **Memory** | Built-in, on by default | Add-on (multiple options) | Short/long-term memory | External |
| **Multi-agent** | `compose.yaml` or `kind: Team` | LangGraph | Crew definition | Group chat |
| **Deployment modes** | Same YAML: REPL / daemon / API | Custom per mode | CLI or Kickoff | Custom |
| **Model switching** | Change 1 YAML line | Swap LLM class | Config per agent | Config per agent |
| **Custom tools** | 1 file, 1 decorator | `@tool` decorator | `@tool` decorator | Function call |
| **Bot deployment** | `--telegram` / `--discord` flag | Separate integration | Separate integration | Separate integration |
| **Migration** | `--pydantic-ai` / `--langchain` import | N/A | N/A | N/A |
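The multi-agent row refers to declaring a team in the same YAML dialect as a single agent. A minimal sketch, with the `members:` field assumed by analogy to the `Agent` schema shown earlier (consult the docs for the actual `Team` spec):

```yaml
apiVersion: initrunner/v1
kind: Team
metadata:
  name: review-team
spec:
  members:
    - architect
    - security-reviewer
```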
## What You Get
### Knowledge and memory
Point your agent at a directory. It extracts, chunks, embeds, and indexes your documents. During conversation, the agent searches the index automatically and cites what it finds.