<h1 align="center">🐦 zbot</h1>
<p align="center">
An open-source embedded AI agent powered by Zephyr RTOS<br>
zbot implements a ReAct (Reason + Act) loop that connects to any OpenAI-compatible LLM API,
enabling hardware control, persistent memory, and multi-step skills.
</p>
<p align="center">
<img src="docs/image.png" width="360"/>
</p>
---
## 🎬 Demo
<p align="center">
<img src="docs/telegram.gif"/>
<img src="docs/terminal.gif"/>
</p>
**Supported boards:** nRF7002-DK (nRF5340 + nRF7002 WiFi), native_sim (Linux host)
**RTOS:** [Zephyr](https://zephyrproject.org) (latest)
**License:** Apache-2.0
---
## Architecture
```
┌────────────────────────────────────────────────────────┐
│                      zbot Agent                        │
│                                                        │
│  ┌──────────┐  ┌──────────┐  ┌──────────────────┐      │
│  │ Config   │  │ Memory   │  │ LLM Client       │      │
│  │ endpoint │  │ slab pool│  │ HTTPS → OpenAI-  │      │
│  │ model    │  │ + NVS    │  │ compatible API   │      │
│  │ api key  │  │ summary  │  │                  │      │
│  └──────────┘  └──────────┘  └──────────────────┘      │
│                                                        │
│  ┌──────────────────────┐  ┌──────────────────────┐    │
│  │ LLM-visible Tools    │  │ Skills               │    │
│  │ (src/tools/)         │  │ (src/skills/)        │    │
│  │                      │  │                      │    │
│  │ tool_exec ───────────┼─►│ gpio (read/write/    │    │
│  │   └─ skill_run()     │  │       blink/sos)     │    │
│  │                      │  │ system (board/uptime/│    │
│  │ read_skill ──────────┼─►│         heap/status) │    │
│  │   └─ skill_read_     │  │                      │    │
│  │      content()       │  │ (add more in         │    │
│  └──────────────────────┘  │  src/skills/<name>/) │    │
│                            └──────────────────────┘    │
│  ┌──────────────────────────────────────────────────┐  │
│  │ Shell Commands (zbot key / chat / skill ...)     │  │
│  └──────────────────────────────────────────────────┘  │
│                                                        │
│  ┌──────────┐                                          │
│  │ Telegram │ Long-poll thread → agent → sendMessage   │
│  │ Bot      │                                          │
│  └──────────┘                                          │
└────────────────────────────────────────────────────────┘
```
### Modules
| Module | File | Purpose |
|--------|------|---------|
| **Config** | `config.h/c` | LLM endpoint, model, API key + WiFi credentials (NVS persistence) |
| **Memory** | `memory.h/c` | k_mem_slab conversation history + NVS rolling summary |
| **LLM Client** | `llm_client.h/c` | HTTPS POST to OpenAI-compatible Chat Completions API |
| **Tools** | `tools.h/c` + `src/tools/` | LLM-visible tool registry; each tool in its own directory, self-registers via `SYS_INIT` |
| **Skills** | `skills/skill.h/c` + `src/skills/` | Skill registry; each skill in its own directory, self-registers via `SYS_INIT` |
| **Agent** | `agent.h/c` + `src/AGENT.md` | ReAct loop; system prompt loaded from `AGENT.md` at build time |
| **Telegram** | `telegram.h/c` | Telegram Bot long-poll thread; forwards messages to agent and replies |
| **JSON Util** | `json_util.h/c` | Shared `json_get_str()` and `json_escape()` |
| **Shell** | `shell_cmd.c` | All `zbot` shell subcommands |
### Tool & Skill Design
The LLM sees exactly **two tools**:
| LLM Tool | What it does |
|----------|-------------|
| `tool_exec` | Dispatches to any registered skill by name; all hardware/system operations go here |
| `read_skill` | Returns the full Markdown documentation of a skill on demand |
**Skills** are the execution units. They live in `src/skills/<name>/` and self-register at boot via `SYS_INIT`. Each skill has:
- `SKILL.c` – handler implementation, registered with `SKILL_DEFINE`
- `SKILL.md` *(optional)* – Markdown documentation embedded at build time via `generate_inc_file_for_target`
The LLM receives only `name` + `description` for each skill per request. Full docs are fetched on demand via `read_skill`.
Hardware primitives (GPIO, uptime, heap) are themselves skills: the same mechanism, just without Markdown docs.
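The registry-and-dispatch pattern can be sketched in plain C. On target, registration happens at boot via `SYS_INIT` and the project's `SKILL_DEFINE` macro; the struct layout, field names, and handler signature below are simplifying assumptions for a host-side sketch, not the actual `skills/skill.h` API:

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for the real skill descriptor in skills/skill.h. */
struct skill {
    const char *name;
    const char *description;   /* the only doc the LLM sees per request */
    int (*handler)(const char *args, char *out, size_t out_len);
};

static int gpio_handler(const char *args, char *out, size_t out_len)
{
    /* A real skill would drive a GPIO; here we just echo the action. */
    snprintf(out, out_len, "gpio: %s", args);
    return 0;
}

/* On target each skill self-registers at boot via SYS_INIT; a static
 * table approximates the resulting registry. */
static const struct skill skills[] = {
    { "gpio", "Read/write/blink a GPIO pin", gpio_handler },
};

/* tool_exec ultimately lands here: dispatch to a skill by name. */
static int skill_run(const char *name, const char *args,
                     char *out, size_t out_len)
{
    for (size_t i = 0; i < sizeof(skills) / sizeof(skills[0]); i++) {
        if (strcmp(skills[i].name, name) == 0) {
            return skills[i].handler(args, out, out_len);
        }
    }
    snprintf(out, out_len, "unknown skill: %s", name);
    return -1;
}
```

Because `tool_exec` only does a name lookup, adding a new skill directory never changes the two-tool schema exposed to the LLM.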
### ReAct Loop
```
user input
    │
    ▼
build messages JSON ◄─────────────────────────────────────────┐
    │                                                         │
    ▼                                                         │
LLM API call (tool_exec + read_skill exposed)                 │
    │                                                         │
    ├─ finish_reason: tool_call ──► tools_execute(name, args) │
    │                                └─ skill_run(name, args)─┘
    │
    ├─ finish_reason: stop ──► return answer to user
    │
    ▼
request summary ──► NVS
```
Max iterations per turn: **10**
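A host-side sketch of that control flow; the `llm_call` stub and return convention are assumptions for illustration (the real loop lives in `agent.c` and talks HTTPS via `llm_client.c`):

```c
#define AGENT_MAX_ITERS 10   /* hard cap on tool round-trips per turn */

enum finish { FINISH_TOOL_CALL, FINISH_STOP };

/* Stub for the HTTPS call in llm_client.c: pretend the model requests
 * two tool calls, then produces a final answer. */
static enum finish llm_call(int iter)
{
    return (iter < 2) ? FINISH_TOOL_CALL : FINISH_STOP;
}

/* One user turn. Returns iterations used, or -1 if the cap was hit. */
static int agent_turn(void)
{
    for (int i = 0; i < AGENT_MAX_ITERS; i++) {
        switch (llm_call(i)) {
        case FINISH_TOOL_CALL:
            /* tools_execute() -> skill_run(); the tool result is
             * appended to the messages JSON before the next call. */
            break;
        case FINISH_STOP:
            /* Final answer goes to the user; afterwards the agent
             * requests a summary and writes it to NVS. */
            return i + 1;
        }
    }
    return -1;
}
```

With this stub the turn completes after three LLM calls; a model that never emits `finish_reason: stop` is cut off at the tenth.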
### Conversation Memory
History uses a **10-node static pool** (`k_mem_slab`) backed by a `sys_slist_t` linked list ordered oldest → newest.
When the pool is full on `memory_add_turn()`:
1. **Compress** – the oldest nodes are summarised by the LLM; those nodes are freed back to the slab.
2. **Evict** *(fallback)* – the oldest node is recycled directly.
After compression, the rolling summary is written to NVS and injected as prior context on the next boot.
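The compress-then-evict policy can be sketched in plain C with an array standing in for the `k_mem_slab`; the batch size, names, and `llm_summarise` stub are assumptions (the real logic is in `memory.c`):

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define HISTORY_NODES  10    /* matches the 10-node static slab */
#define COMPRESS_BATCH 4     /* assumed: oldest nodes folded per pass */

static const char *history[HISTORY_NODES];
static int count;
static char summary[768];    /* mirrors the zbot/summary NVS slot */

/* Stub for the LLM summarisation request; a nonzero return (e.g. a
 * network error on target) triggers the evict fallback. */
static int llm_summarise(const char **turns, int n, char *out, size_t len)
{
    (void)turns;
    snprintf(out, len, "(summary of %d turns)", n);
    return 0;
}

static void memory_add_turn(const char *turn)
{
    if (count == HISTORY_NODES) {
        if (llm_summarise(history, COMPRESS_BATCH,
                          summary, sizeof(summary)) == 0) {
            /* Compress: free the oldest batch back to the pool. */
            memmove(history, history + COMPRESS_BATCH,
                    (size_t)(count - COMPRESS_BATCH) * sizeof(history[0]));
            count -= COMPRESS_BATCH;
        } else {
            /* Evict fallback: recycle only the oldest node. */
            memmove(history, history + 1,
                    (size_t)(count - 1) * sizeof(history[0]));
            count--;
        }
    }
    history[count++] = turn;
}
```

Feeding twelve turns through this sketch leaves eight live nodes, with the first four replaced by the summary string; the real slab-backed list behaves the same way.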
**Settings layout:**
| Key | Type | Notes |
|-----|------|-------|
| `zbot/summary` | `char[768]` | Conversation summary |
| `zbot/apikey` | `char[256]` | Set by `zbot key` |
| `zbot/host` | `char[128]` | LLM endpoint hostname |
| `zbot/path` | `char[128]` | LLM API path |
| `zbot/model` | `char[128]` | Model name |
| `zbot/provider_id` | `char[64]` | `X-Model-Provider-Id` header |
| `zbot/use_tls` | `uint8_t` | TLS enabled flag |
| `zbot/tls_verify` | `uint8_t` | TLS peer verification (default: on) |
| `zbot/port` | `uint16_t` | TCP port |
| `zbot/tg_token` | `char[128]` | Telegram Bot token |
| `wifi/...` | – | Managed by Zephyr `wifi_credentials` subsystem |
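All `zbot/...` keys go through Zephyr's settings subsystem backed by NVS. A plausible `prj.conf` fragment enabling that stack (the project's actual configuration may differ):

```
CONFIG_FLASH=y
CONFIG_FLASH_MAP=y
CONFIG_NVS=y
CONFIG_SETTINGS=y
CONFIG_SETTINGS_NVS=y
CONFIG_WIFI_CREDENTIALS=y
```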
---
## Prerequisites
Set up a Zephyr development environment following the official guide:
https://docs.zephyrproject.org/latest/develop/getting_started/index.html
---
## Quick Start
### 1. Build & Flash
**nRF7002-DK** (physical hardware with WiFi):
```bash
west build -b nrf7002dk/nrf5340/cpuapp zbot
west flash
```
**native_sim** (Linux host simulation, no WiFi):
```bash
west build -b native_sim zbot
./build/zephyr/zephyr.exe
```
### 2. Connect Serial
For nRF7002-DK:
```bash
minicom -D /dev/ttyACM0 -b 115200
```
For native_sim, the shell is on the terminal where you launched `zephyr.exe`.
### 3. Connect to WiFi (nRF7002-DK only)
```
uart:~$ zbot wifi connect <SSID> <password>
```
Credentials are saved to flash and auto-connect on reboot.
> **native_sim:** No WiFi configuration needed; the host OS provides the network stack.
### 4. Set API Key
> **Default Provider (OpenRouter)**
> Get a free API key from: https://openrouter.ai/settings/keys
```
uart:~$ zbot key sk-...
```
The key is saved to NVS flash and restored on every reboot.
### 5. (Optional) Configure Endpoint
**OpenAI:**
```
uart:~$ zbot host api.openai.com
uart:~$ zbot path /v1/chat/completions
uart:~$ zbot model gpt-4o-mini
uart:~$ zbot key sk-...
```
**DeepSeek:**
```
uart:~$ zbot host api.deepseek.com
uart:~$ zbot path /chat/completions
uart:~$ zbot model deepseek-chat
uart:~$ zbot key sk-...
```
**Local model (e.g. Ollama):**
```
uart:~$ zbot host 192.168.1.100
uart:~$ zbot tls off 11434
```
### 6. (Optional) Configure Telegram Bot
> Create a bot via [@BotFather](https://t.me/BotFather) on Telegram to obtain a token.
```
uart:~$ zbot telegram token 1234567890:AAxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
uart:~$ zbot telegram start
```
The token is saved to NVS flash. Polling starts automatically on the next reboot if a token is set.