<p align="center">
<img src="packages/agentx/resources/icon.png" width="128" height="128" alt="AgentX" />
</p>
<h1 align="center">AgentX</h1>
<p align="center">
Open-source, local-first, multi-modal AI agent for your desktop.
<br />
File system access · Shell execution · Extensible toolkits · Multi-provider
</p>
<p align="center">
<a href="https://github.com/axelulu/agentx/releases/latest">
<img src="https://img.shields.io/github/v/release/axelulu/agentx?style=flat-square&label=Download" alt="Download" />
</a>
<a href="https://github.com/axelulu/agentx/releases/latest">
<img src="https://img.shields.io/badge/platform-macOS-blue?style=flat-square" alt="Platform" />
</a>
<a href="LICENSE">
<img src="https://img.shields.io/github/license/axelulu/agentx?style=flat-square" alt="License" />
</a>
</p>
<p align="center">
<a href="https://github.com/axelulu/agentx/releases/latest">Download Latest Release</a> ·
<a href="#features">Features</a> ·
<a href="#quick-start">Quick Start</a> ·
<a href="#architecture">Architecture</a> ·
<a href="#contributing">Contributing</a> ·
<a href="#license">License</a>
</p>
---
## Download
Download the latest version from the [Releases page](https://github.com/axelulu/agentx/releases/latest):
| Platform | File |
| ------------------------- | ------------------------ |
| **macOS** (Apple Silicon) | `AgentX-x.x.x-arm64.dmg` |
| **macOS** (Intel) | `AgentX-x.x.x-x64.dmg` |
> **Note:** AgentX currently only supports macOS. Windows and Linux are not supported.
> Open the `.dmg` and follow the installer. On first launch, go to **Settings > Providers** to add your API key.
## What is AgentX?
AgentX is a **local-first, general-purpose AI agent** that runs as a native desktop application. Unlike cloud-only assistants, AgentX operates directly on your machine — reading and writing files, executing shell commands, and completing multi-step tasks autonomously through a turn-based reasoning loop.
It supports multiple LLM providers, a pluggable toolkit system, and intelligent context window management — all wrapped in a clean, modern UI.
## Features
### Local Tool Execution
- **File system access** — Read, create, and rewrite files on your machine, with operations sandboxed to your workspace.
- **Shell commands** — Execute arbitrary shell commands and scripts with full stdout/stderr capture.
- **No cloud relay** — Tools run locally on your OS. Nothing leaves your machine except LLM API calls.
### Multi-Provider Support
Connect your own API keys. AgentX supports:
| Provider | Default Model | Notes |
| ----------------- | -------------------------- | -------------------------------------------- |
| **OpenAI** | `gpt-4o` | Also compatible with Azure, OpenRouter, vLLM |
| **Anthropic** | `claude-sonnet-4-20250514` | Native Anthropic SDK |
| **Google Gemini** | `gemini-2.0-flash` | Google AI Studio |
| **Custom** | — | Any OpenAI-compatible endpoint |
Switch between providers at any time. Each conversation remembers its provider context.
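"OpenAI-compatible" means the provider accepts the standard chat completions wire format. The sketch below illustrates that contract; `buildChatRequest` and `chatCompletion` are helpers invented for this example, not AgentX functions, though the endpoint path, headers, and body shape follow the OpenAI API:

```typescript
// Sketch of the OpenAI-compatible wire format that "Custom" providers must
// accept. Function names here are illustrative, not AgentX internals.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the standard /v1/chat/completions request body.
function buildChatRequest(model: string, prompt: string) {
  const messages: ChatMessage[] = [{ role: "user", content: prompt }];
  return { model, messages };
}

// Send the request to any OpenAI-compatible base URL (e.g. a local vLLM server).
async function chatCompletion(
  baseUrl: string,
  apiKey: string,
  model: string,
  prompt: string,
): Promise<string> {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildChatRequest(model, prompt)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the request shape is standardized, pointing the base URL at Azure, OpenRouter, or a local vLLM server is enough to switch providers.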
### Autonomous Agent Loop
AgentX uses a **turn-based agent loop** — the LLM reasons, calls tools, observes results, and continues until the task is complete:
```
User message
  → LLM reasoning
  → Tool calls (parallel or sequential)
  → Tool results
  → LLM reasoning (next turn)
  → ... repeat until task complete
```
- Up to **50 turns** per invocation (configurable)
- **Parallel tool execution** with concurrency control
- **Streaming responses** with real-time UI updates
- **Abort** any running task at any time
- **Steer** the agent mid-execution by injecting messages
- **Follow-up queue** — send new messages while the agent is still running
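The loop above can be sketched in a few lines. This is a simplified illustration, not AgentX's actual implementation; `callLLM`, `executeTool`, and the message types are hypothetical stand-ins:

```typescript
// Minimal sketch of a turn-based agent loop (hypothetical API, not AgentX's
// real internals). Each turn: ask the LLM, run any requested tools in
// parallel, feed results back, repeat until no more tool calls are made.
type ToolCall = { name: string; args: Record<string, unknown> };
type LLMResponse = { text: string; toolCalls: ToolCall[] };

const MAX_TURNS = 50; // AgentX defaults to 50 turns per invocation

async function runAgentLoop(
  userMessage: string,
  callLLM: (history: string[]) => Promise<LLMResponse>,
  executeTool: (call: ToolCall) => Promise<string>,
): Promise<string> {
  const history: string[] = [`user: ${userMessage}`];
  for (let turn = 0; turn < MAX_TURNS; turn++) {
    const response = await callLLM(history);
    history.push(`assistant: ${response.text}`);
    if (response.toolCalls.length === 0) {
      return response.text; // no tool calls requested: task complete
    }
    // Execute tool calls in parallel, then continue to the next turn.
    const results = await Promise.all(response.toolCalls.map(executeTool));
    results.forEach((r) => history.push(`tool: ${r}`));
  }
  throw new Error("Turn limit reached");
}
```

Steering and follow-up queuing amount to injecting extra user messages into `history` between turns; aborting cancels the pending `callLLM` or tool promises.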
### Smart Context Management
Long conversations are automatically optimized to fit within token budgets:
1. **Tool result compression** — Long outputs are truncated (head + tail)
2. **Gradient compression** — Older tool call groups are progressively removed
3. **LLM summarization** — Older history is condensed into a summary
4. **Fallback truncation** — Graceful degradation when all else fails
Default context budget: **100,000 tokens**, keeping the 5 most recent turns uncompressed.
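Step 1 (head + tail truncation of long tool outputs) can be sketched as follows; the character limit and marker text are illustrative choices for this example, not AgentX's actual values:

```typescript
// Sketch of head + tail compression for long tool outputs. The threshold and
// marker format are illustrative, not AgentX's actual values.
function compressToolResult(output: string, maxChars = 2000): string {
  if (output.length <= maxChars) return output;
  const keep = Math.floor(maxChars / 2);
  const head = output.slice(0, keep);   // keep the beginning...
  const tail = output.slice(-keep);     // ...and the end
  const omitted = output.length - 2 * keep;
  return `${head}\n…[${omitted} characters truncated]…\n${tail}`;
}
```

Keeping both ends preserves the parts of a tool output that usually matter most: the command echo and headers at the top, and the final result or error at the bottom.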
### Extensible Toolkit
Tools and prompts are defined in **YAML**, making it easy to add new capabilities without touching code:
```
resources/toolkit/
├── prompts/        # System prompt templates (i18n-ready)
├── capabilities/   # Tool groups (file, shell, ...)
│   └── desktop/
│       ├── file/   # file_read, file_create, file_rewrite
│       └── shell/  # Shell execution tools
├── skills/         # High-level skill bundles
└── config/         # Global variables
```
Each capability includes:
- **Tool definitions** — Name, description, input schema (JSON Schema)
- **Prompt rules** — Contextual guidelines injected into the system prompt
- **Handlers** — TypeScript functions that implement the tool logic
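Once loaded, a capability entry might look roughly like the object below. The field names here are illustrative of the three pieces listed above (definition, schema, handler), not AgentX's actual internal schema:

```typescript
// Hypothetical in-memory shape of a loaded capability entry. Field names are
// illustrative, not AgentX's real schema; the input schema is JSON Schema.
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>; // JSON Schema for the tool's arguments
  handler: (input: Record<string, unknown>) => Promise<string>;
}

const fileRead: ToolDefinition = {
  name: "file_read",
  description: "Read a UTF-8 text file from the workspace",
  inputSchema: {
    type: "object",
    properties: { path: { type: "string" } },
    required: ["path"],
  },
  async handler(input) {
    const { readFile } = await import("node:fs/promises");
    return readFile(String(input.path), "utf8");
  },
};
```

The YAML files supply the declarative parts (name, description, schema, prompt rules); the TypeScript handler is looked up by name at runtime.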
### Middleware System
Extend the agent loop with custom middleware hooks:
```typescript
interface AgentMiddleware {
  beforeModelCall?: (ctx) => Promise<LLMMessage[] | void>;
  afterModelCall?: (ctx) => Promise<boolean | void>; // return true to stop
  beforeToolExecution?: (ctx) => Promise<Record<string, unknown> | void>;
  afterToolExecution?: (ctx) => Promise<void>;
}
```
Built-in: `createContextMiddleware()` for automatic context optimization.
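A minimal middleware under that interface might look like this. The `ctx` shape below (`turn`, `messages`, `toolName`) is simplified and hypothetical; consult the actual types in the source before relying on it:

```typescript
// Example logging middleware. The interface is repeated here so the snippet is
// self-contained; the ctx fields (turn, messages, toolName) are hypothetical.
interface Ctx {
  turn: number;
  messages: unknown[];
  toolName?: string;
}

interface AgentMiddleware {
  beforeModelCall?: (ctx: Ctx) => Promise<void>;
  afterToolExecution?: (ctx: Ctx) => Promise<void>;
}

const loggingMiddleware: AgentMiddleware = {
  async beforeModelCall(ctx) {
    console.log(`turn ${ctx.turn}: sending ${ctx.messages.length} messages`);
  },
  async afterToolExecution(ctx) {
    console.log(`tool ${ctx.toolName} finished`);
  },
};
```

Middleware composes: context optimization, logging, and custom guards can each hook the same loop without modifying it.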
### MCP Server Integration
Connect external [Model Context Protocol](https://modelcontextprotocol.io) servers to extend the agent with additional tools and resources — databases, APIs, custom services, and more.
### Knowledge Base
Add persistent context entries (facts, preferences, project conventions) that are injected into every conversation. The agent always has access to your most important information.
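Injection of knowledge entries can be sketched as prompt assembly; the entry shape and formatting below are illustrative, not AgentX's actual scheme:

```typescript
// Sketch of injecting persistent knowledge entries into the system prompt.
// Entry shape and formatting are illustrative, not AgentX's actual scheme.
interface KnowledgeEntry {
  title: string;
  content: string;
}

function buildSystemPrompt(base: string, entries: KnowledgeEntry[]): string {
  if (entries.length === 0) return base;
  const knowledge = entries
    .map((e) => `- ${e.title}: ${e.content}`)
    .join("\n");
  return `${base}\n\nKnowledge base:\n${knowledge}`;
}
```

Because entries are prepended on every conversation, they survive context compression of older turns.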
### Desktop Experience
- Native application for **macOS**
- Conversation management with auto-categorized icons
- Real-time streaming with typing indicators
- Message actions (copy, edit, regenerate)
- Dark theme optimized for extended use
- Lightweight — built with Tauri v2 + React 19
### macOS System Permissions
On macOS, AgentX can request system permissions to unlock advanced capabilities:
- **Accessibility** — Control your computer and interact with apps
- **Screen Recording** — Capture screen content for visual context
- **Microphone / Camera** — Voice and visual input
- **Full Disk Access** — Read and write files across the entire system
- **Automation** — Control other applications via AppleScript
- **Notifications** — Desktop notifications
Go to **Settings > Permissions** to grant each permission directly from the app.
## Quick Start
### Prerequisites
- **Node.js** >= 20
- **pnpm** 10.4.1+
### Install & Run
```bash
# Clone
git clone https://github.com/axelulu/agentx.git
cd agentx
# Install dependencies
pnpm install
# Start development mode
pnpm dev
```
The app will open automatically. Add your LLM provider API key in **Settings** to start chatting.
### Build for Production
```bash
pnpm --filter agentx dist:mac
```
Build artifacts are output to `packages/agentx/src-tauri/target/release/bundle/`.
## Architecture
AgentX is a modular **pnpm monorepo** with clear separation of concerns:
```
agentx/
├── packages/
│   └── agentx/                # Tauri v2 desktop app
│       ├── src-tauri/         # Rust backend (Tauri commands)
│       ├── src/               # React UI (Redux, Tailwind)
│       └── resources/toolkit/ # YAML prompt & tool
```

[truncated…]