
babber

provenance:github:okhmat-anton/babber
WHAT THIS AGENT DOES

Babber lets you create and manage your own autonomous artificial-intelligence assistants: a team of digital helpers you can customize with specific personalities and skills to complete tasks. It removes the need for external, often costly, AI services by running everything privately on your own machine. Developers, researchers, and teams that need secure AI solutions will find it valuable, especially anyone exploring how AI can automate processes. What makes it unique is that it can run entirely offline, giving you complete control over your data and avoiding reliance on outside providers.

README
<p align="center">
  <h1 align="center">🤖 AI Agents Server</h1>
  <p align="center">
    <strong>Self-hosted platform for creating, managing, and running autonomous AI agents</strong>
  </p>
  <p align="center">
    <a href="#features">Features</a> •
    <a href="#quick-start">Quick Start</a> •
    <a href="#installation">Installation</a> •
    <a href="#development">Development</a> •
    <a href="#architecture">Architecture</a> •
    <a href="#api">API</a> •
    <a href="#license">License</a>
  </p>
</p>

---

## What is AI Agents Server?

AI Agents Server is a **fully self-hosted, open-source platform** for building and running AI agents powered by local LLMs (via [Ollama](https://ollama.com)) or any OpenAI-compatible API. No cloud dependencies, no API costs — your data stays on your machine.

Think of it as your personal AI workforce: create agents with unique personalities, give them skills (tools), assign tasks, and let them work autonomously — complete with long-term memory, thinking protocols, and project management.

### Who is this for?

- **Developers** who want to experiment with AI agents locally
- **Researchers** exploring autonomous AI behavior, beliefs, and reasoning
- **Teams** that need a private, self-hosted AI agent management system
- **Hobbyists** who want to run powerful AI agents without cloud subscriptions

---

## Features

### 🧠 Intelligent Agents
- Create agents with custom personalities, system prompts, and generation parameters
- **Belief System** — define core (immutable) and additional (mutable) beliefs for each agent
- **Aspirations** — set dreams, desires, and goals that guide agent behavior
- **Multi-Model Support** — assign different LLM models for different roles (primary, analytical, creative)
- Per-agent access controls (filesystem, system)

### 🔧 Skills System
- Extensible tool/skill framework — agents can use skills to interact with the world
- Built-in skills: web fetch, file read/write, shell execution, code execution, memory store/search, project file management, text summarization
- Create custom skills with Python code and JSON schemas
- Agents select and invoke skills autonomously during conversations
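The README does not show the exact skill format, so here is a minimal, hypothetical sketch of the pattern it describes: a Python function paired with a JSON-schema parameter description that an agent can read when deciding which skill to invoke. The names (`SKILL_SCHEMA`, `word_count`) are illustrative, not the platform's actual API.

```python
import json

# Hypothetical skill: count words in a text the agent supplies.
# The JSON schema tells the agent what arguments the skill accepts.
SKILL_SCHEMA = {
    "name": "word_count",
    "description": "Count the words in a piece of text",
    "parameters": {
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    },
}

def word_count(text: str) -> dict:
    """Skill body: return a JSON-serializable result for the agent."""
    return {"words": len(text.split())}

# The agent would emit a tool call as JSON; the server dispatches it.
call = json.loads('{"name": "word_count", "arguments": {"text": "hello agent world"}}')
result = word_count(**call["arguments"])
print(result)  # {'words': 3}
```

Keeping the schema separate from the function body is what lets agents discover and select skills autonomously: the LLM only ever sees the schema, never the implementation.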

### 🧪 Thinking Protocols
- Define step-by-step reasoning workflows for agents
- **Standard** protocols — structured analysis and research flows
- **Orchestrator** protocols — meta-protocols that delegate to child protocols
- **Loop** protocols — autonomous work cycles where agents self-direct
- Full thinking log visibility for debugging and research
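As a rough illustration of the protocol types above (not the platform's real format, which is configured through the UI), a standard protocol can be modeled as an ordered list of steps threaded through a shared context, and an orchestrator as a meta-protocol that delegates to child protocols and merges their thinking logs:

```python
# Minimal sketch of standard vs. orchestrator protocols (illustrative only).

def run_protocol(steps, context):
    """Run each step in order, threading a shared context dict through."""
    log = []
    for step in steps:
        note = step(context)
        log.append(note)  # full thinking log, step by step
    return log

# Two "standard" child protocols: analyze, then summarize.
analyze = [lambda ctx: f"analyzed {len(ctx['input'])} chars"]
summarize = [lambda ctx: "summary: " + ctx["input"][:10]]

# An "orchestrator" protocol delegates to child protocols
# and concatenates their logs.
def orchestrate(context):
    child_logs = [run_protocol(p, context) for p in (analyze, summarize)]
    return [line for log in child_logs for line in log]

for entry in orchestrate({"input": "agents think in steps"}):
    print(entry)
```

A loop protocol, in this model, would simply be an orchestrator that re-runs its children until a stop condition in the context is met.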

### 💬 Chat Interface
- VS Code-style resizable chat panel
- Persistent chat sessions with full history
- Multi-model chat — compare responses from different models side by side
- Multi-agent chat — have several agents collaborate in one session
- Automatic session titles via LLM
- Markdown rendering with syntax highlighting

### 🚀 Autonomous Execution
- Agents can run autonomously — processing tasks, making decisions, writing code
- Built-in project system — agents write code to isolated project directories
- Task management with scheduling (cron expressions)
- Real-time progress via WebSocket
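Cron scheduling works by matching each field of a five-field expression (minute, hour, day-of-month, month, day-of-week) against the current time. The simplified matcher below sketches the idea; it supports only `*`, `*/n` steps, and comma lists, and uses Python's Monday-based weekday rather than cron's Sunday-based numbering, so it is not a drop-in cron implementation.

```python
from datetime import datetime

def field_matches(spec: str, value: int) -> bool:
    """Match one cron field: '*', '*/n' steps, or a comma list of numbers."""
    if spec == "*":
        return True
    if spec.startswith("*/"):
        return value % int(spec[2:]) == 0
    return value in {int(part) for part in spec.split(",")}

def cron_matches(expr: str, when: datetime) -> bool:
    """True if a 5-field cron expression (min hour dom month dow) fires at `when`."""
    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, when.minute)
            and field_matches(hour, when.hour)
            and field_matches(dom, when.day)
            and field_matches(month, when.month)
            and field_matches(dow, when.weekday()))

# "Every 15 minutes" fires at :00, :15, :30, :45.
print(cron_matches("*/15 * * * *", datetime(2026, 3, 1, 9, 30)))  # True
print(cron_matches("*/15 * * * *", datetime(2026, 3, 1, 9, 31)))  # False
```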

### 🧬 Long-Term Memory
- **Vector Memory** (ChromaDB) — semantic search across agent memories
- **Knowledge Graph** — typed links between memory records
- Memory categories, tags, importance scoring
- Deep memory processing — agents analyze and connect their own memories
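Semantic search over memories comes down to comparing embedding vectors. The platform uses ChromaDB for this; the sketch below shows the underlying idea in pure Python with made-up 3-dimensional vectors (real embeddings come from an embedding model and have hundreds of dimensions), using importance scoring as a tie-breaker.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy memory store: (text, embedding, importance). The vectors are
# made up for illustration, not output of any real embedding model.
memories = [
    ("user prefers dark mode",  [0.9, 0.1, 0.0], 0.8),
    ("project uses FastAPI",    [0.1, 0.9, 0.1], 0.6),
    ("agent fixed a login bug", [0.2, 0.2, 0.9], 0.9),
]

def search(query_vec, top_k=1):
    """Rank memories by similarity, breaking ties with the importance score."""
    ranked = sorted(memories,
                    key=lambda m: (cosine(query_vec, m[1]), m[2]),
                    reverse=True)
    return [text for text, _, _ in ranked[:top_k]]

print(search([0.85, 0.15, 0.05]))  # ['user prefers dark mode']
```

The knowledge-graph layer described above would sit on top of a store like this, adding typed links between the retrieved records.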

### 🏗️ Infrastructure
- **MongoDB** for all data storage (agents, tasks, logs, etc.)
- **Redis** for caching, **ChromaDB** for vector embeddings
- Ollama integration with auto-model sync, health monitoring, and watchdog
- Swagger/ReDoc API documentation

### 🖥️ Admin Dashboard
- Modern dark-themed UI (Vue 3 + Vuetify 3)
- Dashboard with system overview
- Agent management with avatar support
- Skill editor with CodeMirror
- Model management (Ollama + external APIs)
- System logs, file browser, terminal, process monitor
- Project browser for agent-generated code

---

## Quick Start

The fastest way to get running (requires Docker and Ollama):

```bash
git clone https://github.com/okhmat-anton/ai-agents-server.git
cd ai-agents-server
make install
```

This will:
1. Create `.env` from template
2. Check/install Ollama
3. Offer to download a default model (`qwen2.5-coder:14b`)
4. Build and start all services

Then open: **http://localhost:4200**  
Login: `admin` / `admin123`

---

## Installation

### Prerequisites

| Requirement | Version | Notes |
|-------------|---------|-------|
| **Docker** | 20.10+ | [Install Docker](https://docs.docker.com/get-docker/) |
| **Docker Compose** | v2+ | Included with Docker Desktop |
| **Ollama** | Latest | [Install Ollama](https://ollama.com/download) |
| **RAM** | 8 GB+ | 16 GB+ recommended for 14B models |
| **Disk** | 10 GB+ | Models take 4–9 GB each |

> 💡 Ollama is required for local LLM inference. You can also use any OpenAI-compatible API (GPT-4, Claude, Mistral, etc.) by configuring external model providers in the UI.

---

### macOS

#### 1. Install Docker Desktop

```bash
# Download and install from:
# https://www.docker.com/products/docker-desktop/
# Or with Homebrew:
brew install --cask docker
```

Launch Docker Desktop and wait for it to start.

#### 2. Install Ollama

```bash
# Download from https://ollama.com/download/mac
# Or with Homebrew:
brew install ollama
```

Start Ollama:
```bash
ollama serve
```

Pull a model (in another terminal):
```bash
# Recommended for coding tasks (requires ~9 GB RAM):
ollama pull qwen2.5-coder:14b

# Lighter alternative (~4 GB RAM):
ollama pull qwen2.5-coder:7b

# Or any model you prefer:
ollama pull llama3.1:8b
```

#### 3. Clone and Run

```bash
git clone https://github.com/okhmat-anton/ai-agents-server.git
cd ai-agents-server
make install
```

#### 4. Open the App

- **Frontend:** http://localhost:4200
- **Backend API:** http://localhost:4700
- **API Docs:** http://localhost:4700/docs
- **Login:** `admin` / `admin123`

---

### Linux (Ubuntu / Debian)

#### 1. Install Docker

```bash
# Install Docker Engine
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker

# Verify
docker --version
docker compose version
```

#### 2. Install Ollama

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Start Ollama:
```bash
ollama serve &
```

Pull a model:
```bash
ollama pull qwen2.5-coder:14b
# Or a lighter model:
ollama pull qwen2.5-coder:7b
```

#### 3. Clone and Run

```bash
git clone https://github.com/okhmat-anton/ai-agents-server.git
cd ai-agents-server
make install
```

#### 4. Open the App

- **Frontend:** http://localhost:4200
- **Backend API:** http://localhost:4700
- **API Docs:** http://localhost:4700/docs
- **Login:** `admin` / `admin123`

> **Note for Linux:** If Ollama is running on the same host, the default `OLLAMA_BASE_URL=http://host.docker.internal:11434` in `.env` should work with Docker Desktop. On plain Docker Engine, you may need to change it to `http://172.17.0.1:11434` or your host IP.

---

### Windows

#### 1. Install Docker Desktop

Download and install **Docker Desktop for Windows** from:  
https://www.docker.com/products/docker-desktop/

- Enable **WSL 2 backend** during installation (recommended)
- Launch Docker Desktop and wait for it to start

#### 2. Install Ollama

Download and install from: https://ollama.com/download/windows

After installation, Ollama runs as a system service. Open PowerShell:

```powershell
# Verify Ollama is running
ollama list

# Pull a model
ollama pull qwen2.5-coder:14b
# Or lighter:
ollama pull qwen2.5-coder:7b
```

#### 3. Install Git (if not already installed)

```powershell
# Download from https://git-scm.com/download/win
# Or with winget:
winget install Git.Git
```

#### 4. Install Make (optional but recommended)

```powershell
# Option A: Install via Chocolatey
choco install make

# Option B: Install via winget
winget install GnuWin32.Make
```

If you don't have `make`, see the [Manual Setup](#manual-setup-without-make) section.

[truncated…]

PUBLIC HISTORY

First discovered: Mar 26, 2026

IDENTITY

inferred

Identity inferred from code signals. No PROVENANCE.yml found.


METADATA

platform: github
first seen: Feb 19, 2026
last updated: Mar 25, 2026
last crawled: 17 days ago
version

README BADGE

Add to your README:

![Provenance](https://getprovenance.dev/api/badge?id=provenance:github:okhmat-anton/babber)