
Agentis

provenance:github:Dhwanil25/Agentis
WHAT THIS AGENT DOES

**Agentis: Get More Done with an AI Team**

Imagine that instead of asking one AI a question, you could deploy a whole team of specialized AI experts to work on it together. That's what Agentis does. It's a tool that lets you give it a complex task – like researching a new market, analyzing competitors, or drafting a marketing plan – and it automatically creates a team of AI agents, each with a specific role. These agents pull information from different AI providers (such as Google, OpenAI, and others) and work together, sharing their findings to produce a comprehensive, well-rounded result.

README
<div align="center">

<br />

<img src="public/favicon.png" alt="Agentis" width="96" />

<h1>AGENTIS</h1>

<p><strong>Most AI tools give you one model and one answer. Agentis gives you a team.</strong></p>

<p>Deploy fleets of specialized agents — researchers, coders, analysts, writers, and more — across 12 LLM providers simultaneously. Each agent works its angle, shares findings, and hands off to the next. Watch it unfold live with hexagonal agent nodes, curved edges, real-time thought bubbles, and per-agent token tracking. Open source, provider-agnostic, and built for tasks that are too big for a single prompt.</p>

<br />

[![Version](https://img.shields.io/badge/version-0.3.0-orange?style=flat-square)](https://github.com/Dhwanil25/Agentis/releases/tag/v0.3.0)
[![License: MIT](https://img.shields.io/badge/license-MIT-blue?style=flat-square)](LICENSE)
[![React](https://img.shields.io/badge/React-18-61DAFB?style=flat-square&logo=react&logoColor=white)](https://reactjs.org)
[![TypeScript](https://img.shields.io/badge/TypeScript-5-3178C6?style=flat-square&logo=typescript&logoColor=white)](https://typescriptlang.org)
[![Vite](https://img.shields.io/badge/Vite-5-646CFF?style=flat-square&logo=vite&logoColor=white)](https://vitejs.dev)
[![Stars](https://img.shields.io/github/stars/Dhwanil25/Agentis?style=flat-square&color=orange)](https://github.com/Dhwanil25/Agentis/stargazers)
[![Forks](https://img.shields.io/github/forks/Dhwanil25/Agentis?style=flat-square)](https://github.com/Dhwanil25/Agentis/network/members)
[![Issues](https://img.shields.io/github/issues/Dhwanil25/Agentis?style=flat-square)](https://github.com/Dhwanil25/Agentis/issues)
[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen?style=flat-square)](https://github.com/Dhwanil25/Agentis/blob/main/CONTRIBUTING.md)
[![Providers](https://img.shields.io/badge/providers-12-a855f7?style=flat-square)](https://github.com/Dhwanil25/Agentis#supported-providers)

<br />

<img src="public/agentis-hero.png" alt="Agentis Agent Universe" width="860" style="border-radius:12px" />

<br /><br />

</div>

---

## Demo

**[▶ Watch the demo on Loom](https://www.loom.com/share/fc32d9dee8314226b9e9cd32e31baf50)**

---

## What is Agentis?

Agentis is a **browser-native multi-agent AI platform**. You describe a task — Agentis spawns a coordinated team of specialized AI agents across multiple LLM providers, visualizes their live thinking on an animated canvas, and synthesizes everything into one clean answer.

**No backend. No Docker. No infra.** Clone, `npm install`, go.

```
You → "Research the competitive landscape for AI coding tools"

Agentis orchestrator plans:
  ├── Researcher   (claude-sonnet-4-6   · Anthropic) → market sizing, key players
  ├── Analyst      (gemini-2.5-flash    · Google)    → feature comparison matrix
  ├── Coder        (gpt-4.1-mini        · OpenAI)    → API/SDK landscape
  └── Reviewer     (llama-3.3-70b       · Groq)      → fact-check & critique

                        ↓ ~45 seconds

          One comprehensive, synthesized report.
```
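
The planning step in the sketch above boils down to a small data structure. Here is a hypothetical illustration of what such a plan might look like (the type and field names are illustrative, not Agentis's actual internals):

```typescript
// Hypothetical shape of an orchestrator plan. Field names are illustrative.
type AgentRole = "researcher" | "analyst" | "coder" | "reviewer";

interface PlannedAgent {
  role: AgentRole;
  provider: string; // e.g. "anthropic", "google"
  model: string;    // exact model name shown on the agent node
  subtask: string;
}

interface Plan {
  task: string;
  agents: PlannedAgent[];
}

const plan: Plan = {
  task: "Research the competitive landscape for AI coding tools",
  agents: [
    { role: "researcher", provider: "anthropic", model: "claude-sonnet-4-6", subtask: "market sizing, key players" },
    { role: "analyst",    provider: "google",    model: "gemini-2.5-flash",  subtask: "feature comparison matrix" },
    { role: "coder",      provider: "openai",    model: "gpt-4.1-mini",      subtask: "API/SDK landscape" },
    { role: "reviewer",   provider: "groq",      model: "llama-3.3-70b",     subtask: "fact-check & critique" },
  ],
};

console.log(plan.agents.map((a) => `${a.role} → ${a.model}`).join("\n"));
```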

---

## Features

### 🌌 Agent Universe
- **Live animated canvas** — agents spawn as orbiting nodes, spin while working, send visible messages to each other
- **12 LLM providers simultaneously** — mix Anthropic, OpenAI, Google, Groq, Mistral, DeepSeek, OpenRouter, Cohere, xAI, Together, Ollama, LM Studio in one run
- **Exact model names on every node** — `claude-sonnet-4-6`, `gpt-4.1-mini`, `gemini-2.5-flash`
- **Smart tier selection** — orchestrator assigns `simple → fast model`, `complex → frontier model` per task
- **Auto-failover** — when a provider goes down mid-task, agents switch to the next available one automatically, zero data loss
- **Persistent universe** — follow-up questions recall relevant old agents and add new ones; knowledge compounds across turns
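
The auto-failover bullet above is, at its core, a try-next-provider loop. A minimal sketch of the idea (`CallFn` and the function below are illustrative, not the repo's actual code):

```typescript
// Illustrative failover loop: try each provider in order until one succeeds.
type CallFn = (provider: string, prompt: string) => Promise<string>;

async function callWithFailover(
  providers: string[],
  prompt: string,
  call: CallFn,
): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await call(provider, prompt); // first healthy provider wins
    } catch (err) {
      lastError = err; // provider down: fall through to the next one
    }
  }
  throw new Error(`all providers failed: ${String(lastError)}`);
}
```

In a sketch like this, callers see a single promise either way: a mid-task outage on one provider surfaces only as a slightly slower answer from the next one in line.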

### ⬡ Agent Flow View
- **Hexagonal agent nodes** — distinct visual identity per agent with role-based color coding
- **Curved bezier edges** — animated particle flow along curved connections between agents and tools
- **Live thought bubbles** — active agents display their last output snippet in a glassmorphism overlay in real time
- **Token progress bars** — visual indicator under each agent node showing output token usage
- **Tool call diamonds** — web search, LLM calls, and browser actions rendered as animated diamond nodes

### 📊 Timeline Panel
- **Horizontal timeline** — shows every agent's start/end as a color-coded bar
- **Tool call markers** — overlaid on each agent's track showing exactly when web searches, LLM calls, and browser actions fired
- **Live duration counter** — active agents show elapsed time in seconds

### 💰 Token & Cost Tracking
- **Per-agent token counts** — real input/output token data captured from Anthropic's SSE stream
- **Cost estimation** — per-agent and session-total cost calculated from live token counts and model pricing
- **Header metrics bar** — total tokens and estimated cost (`~$X.XXX`) displayed live in the canvas header
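
Cost estimation of this kind is just token counts multiplied by per-token pricing. A minimal sketch, using made-up per-million-token prices (real figures come from each provider's pricing page):

```typescript
// Hypothetical per-million-token pricing; real values vary by provider/model.
interface Pricing {
  inputPerM: number;
  outputPerM: number;
}

const PRICING: Record<string, Pricing> = {
  "claude-sonnet-4-6": { inputPerM: 3.0, outputPerM: 15.0 },
  "gemini-2.5-flash":  { inputPerM: 0.15, outputPerM: 0.6 },
};

function estimateCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICING[model];
  if (!p) return 0; // unknown model: skip rather than guess
  return (inputTokens / 1_000_000) * p.inputPerM + (outputTokens / 1_000_000) * p.outputPerM;
}

// e.g. 12k input + 4k output tokens on a hypothetical $3/$15 model:
// 0.012 * 3 + 0.004 * 15 = 0.096
console.log(estimateCost("claude-sonnet-4-6", 12_000, 4_000).toFixed(3)); // prints "0.096"
```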

### 🧠 Synthesis Engine
- Orchestrator plans agent topology, delegates subtasks, then merges all outputs
- Final answer is a clean direct response — no meta-commentary about which agent said what
- Export as **Markdown**, **plain text**, or **save key insights to persistent memory**
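
One plausible way such a synthesis pass works (a sketch of the general pattern, not the repo's actual implementation) is to fold every agent's output into a single final prompt:

```typescript
// Illustrative synthesis step: merge all agent outputs into one final prompt.
interface AgentOutput {
  role: string;
  text: string;
}

function buildSynthesisPrompt(task: string, outputs: AgentOutput[]): string {
  const findings = outputs.map((o) => `## ${o.role}\n${o.text}`).join("\n\n");
  return [
    `Task: ${task}`,
    `Findings from specialist agents:\n\n${findings}`,
    // Mirrors the "no meta-commentary" behavior described above:
    "Write one clean, direct answer. Do not mention the agents or who found what.",
  ].join("\n\n");
}
```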

### ⚙️ Settings
- **Providers** — configure all 12 providers with live connection testing + model recommendations per complexity tier
- **Models** — browse all available models with pricing, context window, and availability status
- **Memory** — IndexedDB-backed persistent memory with importance scoring, decay, export/import
- **Migration** — one-click OpenClaw → OpenFang migration (auto-detect, YAML→TOML conversion, tool remapping)
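
Importance scoring with decay, as in the Memory bullet, is commonly implemented as exponential decay. A minimal sketch with made-up parameters (the half-life and pruning threshold are illustrative, not Agentis's actual values):

```typescript
// Exponential decay of a memory's importance over time.
// halfLifeDays is a made-up parameter: after that many days, importance halves.
function decayedImportance(
  importance: number, // score assigned when the memory was saved, in [0, 1]
  ageDays: number,
  halfLifeDays = 30,
): number {
  return importance * Math.pow(0.5, ageDays / halfLifeDays);
}

// A memory saved at importance 0.8 is worth 0.4 after one half-life,
// and entries that fall below a pruning threshold can be dropped on a sweep:
const PRUNE_THRESHOLD = 0.15; // illustrative cutoff
const keep = decayedImportance(0.8, 90) >= PRUNE_THRESHOLD; // 0.8 * 0.125 = 0.1, so false
```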

---

## Supported Providers

| Provider | Best Models | Strength |
|---|---|---|
| **Anthropic** | Claude Opus 4.6, Sonnet 4.6, Haiku 4.5 | Reasoning, writing, long context |
| **OpenAI** | GPT-4.1, GPT-4.1 Mini, o4-mini | Code, structured output, tools |
| **Google** | Gemini 2.5 Pro, 2.5 Flash, 2.0 Flash | 1M context, multimodal |
| **Groq** | Llama 3.3 70B, 3.1 8B, Mixtral | Fastest inference on the planet |
| **Mistral** | Large 2, Small 3.1, Codestral | European data, code generation |
| **DeepSeek** | V3, R1 Reasoner | Math, logic, best cost/quality ratio |
| **OpenRouter** | 200+ models | Single API for everything |
| **Cohere** | Command R+, Command R | Enterprise RAG, retrieval |
| **xAI** | Grok 3, Grok 3 Mini | Real-time web knowledge |
| **Together AI** | Llama 405B, Qwen 2.5 72B | Best open-source models |
| **Ollama** | Any model you pull | Local, private, free |
| **LM Studio** | Any GGUF model | Local GUI + OpenAI-compatible API |

---

## Getting Started

### Prerequisites
- Node.js 18+
- At least one LLM provider API key (or Ollama running locally — free)

### Install & run

```bash
git clone https://github.com/Dhwanil25/Agentis.git
cd Agentis
npm install
npm run dev
```

Open [http://localhost:5173](http://localhost:5173), paste any API key, and launch your first agent team.

### Optional: skip the key gate

```bash
# .env.local
VITE_ANTHROPIC_API_KEY=sk-ant-...
```

### Optional features

| Feature | What's needed |
|---|---|
| Web search | Free [Tavily](https://tavily.com) API key |
| Browser agent | `npm install -g pinchtab && pinchtab server` |
| Local models | [Ollama](https://ollama.ai) or [LM Studio](https://lmstudio.ai) running locally |

---

## Feature Status

| Feature | Status | Notes |
|---|---|---|
| Agent Universe (multi-agent) | ✅ Works | Core feature, no extra setup |
| Chat (single agent) | ✅ Works | All 12 providers |
| Workflow Templates | ✅ Works | Pre-built multi-step pipelines |
| Analytics (tokens + cost) | ✅ Works | Stored in IndexedDB |
| Sessions & Memory | ✅ Works | IndexedDB, importance decay, export/import |
| Skills (skills.sh) | ✅ Works | 90K+ skills, live search, role assignment |
| Scheduler | ✅ Works* | *Runs in-browser only — stops if tab is closed |
| C

[truncated…]

PUBLIC HISTORY

First discovered: Mar 23, 2026

IDENTITY

Inferred: identity inferred from code signals. No PROVENANCE.yml found.


METADATA

- Platform: github
- First seen: Mar 17, 2026
- Last updated: Mar 22, 2026
- Last crawled: 23 days ago
- Version: (not listed)

README BADGE

Add to your README:

![Provenance](https://getprovenance.dev/api/badge?id=provenance:github:Dhwanil25/Agentis)