# AI OS — A Local OS Extension for LLMs
**An open-source framework for building personal AI that remembers you. Private. Free. Early.**
> **Active Development** — The core agent capabilities work and the modules are scaffolded. Much of the remaining work is straightforward: adding endpoints for things we already use, integrating calendar, and so on. Looking for collaborators at any level, and ideas from non-technical users are welcome too. Key point: I won't burn out, because I love this work, but I'd love even more to see its growth accelerated by even just a few people.


---
<details>
<summary><strong>A Quick Human Note</strong> (click to expand)</summary>
I loved building this.
It's been almost obsessive for close to a year. When I started, I knew a lot of math and quite a bit of psychology, and I'd heard the term neuroscience like six times, so obviously I knew it inside and out. I digress. Moving on.
Initially, the project started because everybody LOVED GPT-4o, so I had to check it out, and I LOVED it too. It was a great thought partner. It made stuff up all the time, but it was cool to be able to rapidly iterate over a theory with a partner who always told me I was the queen of everything. No ego boost needed because I'm awesome.
I wanted to get ChatGPT and Siri to be best buds and work together. So I'd tell ChatGPT to be Siri and do Siri things, and it would. Solved AGI right then and there. Just kidding, it would pretend to do things, and I would want to throw my phone out the window, because parsing an LLM output is like trying to understand someone else's toddler. You can do it… eventually. Mostly you just nod and smile and say, "Okay dear, let's try again." No fluffing.
So I started trying to give GPT-3.5 function-calling abilities. I set up a Pythonista endpoint, used the CLI and my OpenAI key to talk to 3.5, and told it that execution is triggered by saying "execute:function_name". Mind you, this is a side project I'm doing on my phone in the bathroom at work, so it was slow going. But eventually I got it to mostly work. Then the API broke. I gave up.
And this is what's left.
An entire operating system for LLMs that takes ALLLL the things we EXPECT them to do and actually makes them do it. But all of it is visible. Complicated terms like "forward pass" are not present, but you can look at subconscious in the repo and see that it's a scientifically accurate description of the program's modules, and also a CoT dashboard where you can save things you've already done.
Form is a tool builder. Log keeps track of stuff. Identity and philosophy are just editable profiles. But the deeper design architecture fascinates me.
I built this because I desperately want it, and I desperately want everyone to have it, because I think if AI made people better, it wouldn't take their jobs. Simple concept. Massive implications.
If you read this, I appreciate it. Please enjoy whatever it is you do with my bathroom project :)
— Allee
</details>
---
## What is AI OS?
LLMs are powerful but unreliable. They hallucinate, forget, lose track of who they are, and treat every conversation like it's the first.
AI OS is an architecture layer that wraps your local LLM and handles what models are bad at:
- **Memory** — Persistent across sessions, organized by relevance
- **Identity** — Consistent personality stored structurally
- **Learning** — Extracts facts from conversations, builds concept graph
- **Control** — LLM handles language; OS handles state
- **Background Loops** — Predefined CoT loops for memory, goals, self-improvement, and custom workflows
- **Tool Calling** — Text-native protocol: file ops, web search, notifications, code editing
- **Eval Harness** — Benchmark your agent against raw models with LLM-as-judge
**The pain point we solve:** These pieces exist separately — RAG libraries, prompt templates, memory plugins, identity frameworks — but not in one integrated package for local LLMs. AI OS is that package.
The LLM is the voice. The OS is the brain.
---
## Getting Started (5 minutes)
### What You'll Need
- A Mac, Windows, or Linux computer
- About 8GB of free disk space
- Internet connection (just for the initial setup)
### Step 1: Download AI OS
You can download AI OS in two ways:
1. **From the Website:**
- Visit [https://allee-ai.com/download](https://allee-ai.com/download) and download the latest version for your operating system.
- Extract the downloaded file and open the folder.
2. **From GitHub:**
- Open your terminal (on Mac: search "Terminal" in Spotlight) and run:
```bash
git clone https://github.com/allee-ai/AI_OS.git
cd AI_OS
```
### Step 2: Run AI OS
**On Mac/Linux:**
1. Open Terminal (on Mac: search "Terminal" in Spotlight)
2. Navigate to the AI OS folder (adjust the path if you cloned it somewhere else):
```bash
cd ~/Downloads/AI_OS
```
3. Run the start script:
```bash
bash scripts/start.sh
```
**On Windows:**
- Double-click `run.bat` in the downloaded folder.
The script handles everything:
- Installs the LLM runtime (Ollama)
- Starts the OS backend and chat interface
- Opens your browser automatically
> **First time?** The first launch downloads the AI model (~4GB). This only happens once.
### Step 3: Start Chatting!
Your browser will open to `http://localhost:5173` — start talking to your AI.
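If the browser doesn't open on its own (common over SSH, or while the ~4GB model is still downloading), you can poll the UI from a second terminal until it's reachable. This is a generic helper, not part of the repo's scripts; it only assumes the frontend address stated above:

```bash
# Return 0 once a TCP port accepts connections, polling up to a retry limit.
# Uses bash's /dev/tcp pseudo-path, so no curl/netcat is required.
wait_for_port() {
  host=$1
  port=$2
  tries=${3:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # The redirection succeeds only if the TCP connection is accepted.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage: wait_for_port localhost 5173 60 && echo "AI OS UI is up"
```

Once it reports the UI is up, open `http://localhost:5173` manually.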
---
## Command Line Interface
AI OS is fully operable from the terminal — no browser needed. This is particularly useful on servers, VMs, or SSH sessions.
### Launch Modes
```bash
bash scripts/start.sh # Auto-detect: GUI or headless
bash scripts/start.sh --headless # Backend API only (no frontend)
bash scripts/start.sh --cli # Interactive CLI (no server)
bash scripts/start.sh --help # Show all options
```
On headless systems (SSH, no display), the script auto-detects and skips the frontend.
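The auto-detection boils down to checking whether a display server is reachable. A minimal sketch of the idea (an illustration, not the script's exact logic) looks like this:

```bash
# Guess whether a GUI can be shown: if neither an X11 nor a Wayland
# display variable is set, assume the session is headless (e.g. SSH).
detect_mode() {
  if [ -z "${DISPLAY:-}" ] && [ -z "${WAYLAND_DISPLAY:-}" ]; then
    echo headless
  else
    echo gui
  fi
}
```

Note that macOS always has a display available locally, so a real check would also branch on the OS; this sketch covers the Linux/SSH case described above.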
### CLI REPL
Once the backend is running, open a second terminal:
```bash
python cli.py # Start REPL (demo mode)
python cli.py --mode live # Use live DB
python cli.py --show-state # Print STATE block with each response
```
Just type to chat. Use slash commands for everything else:
### Chat & Conversations
| Command | Description |
|---------|-------------|
| *(just type)* | Send a message to the agent |
| `/convos` | List recent conversations |
| `/convos <id>` | Show conversation turns |
| `/convos search <query>` | Search conversations |
| `/convos new [name]` | Create new conversation |
| `/clear` | Clear message history |
### Memory & Knowledge
| Command | Description |
|---------|-------------|
| `/memory` | List temp_memory facts |
| `/memory approve <id>` | Approve a pending fact |
| `/memory reject <id>` | Reject a pending fact |
### Identity & Philosophy
| Command | Description |
|---------|-------------|
| `/identity` | List identity profiles |
| `/identity <profile>` | Show facts for a profile |
| `/identity new` | Create profile interactively |
| `/identity fact <p> <k> <v>` | Add/update a fact |
| `/philosophy` | List philosophy profiles |
| `/philosophy <profile>` | Show stances |
| `/philosophy new` | Create profile interactively |
| `/philosophy fact <p> <k> <v>` | Add/update a stance |
### Tools
| Command | Description |
|---------|-------------|
| `/tools` | List all tools |
| `/tools <name>` | Show tool details |
| `/tools run <name> <action>` | Execute a tool (+ optional JSON params) |
| `/tools new` | Create a tool interactively |
| `/tools code <name>` | Show executable code |
| `/tools toggle <name>` | Enable/disable |
| `/tools categories` | List tool categories |
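For example, inspecting and then running a tool with JSON parameters from the REPL might look like this (the tool name and parameters here are hypothetical; run `/tools` first to see what's actually installed):

```
/tools web_search
/tools run web_search search {"query": "local llm memory"}
```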
### Background Loops
| Command | Description |
|---------|-------------|
| `/loops` | Show all loop stats |
| `/loops new` | Create a custom loop (interactive) |
| `/loops custom` | List custom loops |
[truncated…]