
agentlens

provenance:github:Nitin-100/agentlens
WHAT THIS AGENT DOES

AgentLens is a tool that shows you exactly what is happening inside your AI agents, such as chatbots or automated assistants. It tracks every interaction, from the questions asked to the tools used, and displays everything in a clear, easy-to-read dashboard. This helps you spot problems, understand how your AI is performing, and confirm it is running efficiently and cost-effectively. Teams and business leaders who operate AI systems will find it useful for monitoring performance and making improvements.

README
<h1 align="center">
  AgentLens
</h1>

<p align="center">
  <strong>Open-source observability for AI agents.</strong><br/>
  Trace every LLM call, tool use, and decision — in real-time.
</p>

<p align="center">
  <a href="https://github.com/Nitin-100/agentlens/actions"><img src="https://github.com/Nitin-100/agentlens/actions/workflows/ci.yml/badge.svg" alt="CI" /></a>
  <a href="https://www.python.org/downloads/"><img src="https://img.shields.io/badge/python-3.9+-blue.svg" alt="Python 3.9+" /></a>
  <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License: MIT" /></a>
  <img src="https://img.shields.io/badge/tests-100%20passing-brightgreen.svg" alt="Tests: 100 passing" />
</p>

<p align="center">
  <a href="QUICKSTART.md">Quick Start</a> · <a href="#features">Features</a> · <a href="#how-it-works">Architecture</a> · <a href="#integrations">Integrations</a> · <a href="#comparison-vs-langfuse">vs Langfuse</a> · <a href="https://github.com/Nitin-100/agentlens/raw/main/Demo.mp4">Watch Demo</a>
</p>

---

<p align="center">
  <a href="https://github.com/Nitin-100/agentlens/raw/main/Demo.mp4">
    <img src="docs/demo-thumbnail.jpg" alt="Watch AgentLens Demo" width="720" />
    <br/>
    <sub>▶ Click to watch the full demo video</sub>
  </a>
</p>

---

## What is AgentLens?

AgentLens is a **self-hosted observability platform** for AI agents. It captures every LLM call, tool invocation, agent step, and error across any framework — OpenAI, Anthropic, Gemini, LangChain, CrewAI, LiteLLM, MCP — and shows it all in a real-time dashboard with trace trees, execution graphs, cost anomaly detection, and prompt diffs.

**3 lines to instrument your existing agent:**

```python
import openai
from agentlens import AgentLens, auto_patch

lens = AgentLens(server_url="http://localhost:8340")
auto_patch()  # auto-detects & patches OpenAI, Claude, Gemini, LangChain, CrewAI, LiteLLM, MCP

# Your existing code — zero changes needed
response = openai.chat.completions.create(model="gpt-4o", messages=[...])
# ^ model, tokens, cost, latency, response — all captured automatically
```

> **🚀 [Get started in 2 minutes → Quick Start Guide](QUICKSTART.md)**

---

## How It Works

```
  Your Agent Code
  ┌──────────────────────────────────────────────────────────┐
  │  OpenAI · Anthropic · Gemini · LangChain · CrewAI · MCP │
  └──────────────────────────┬───────────────────────────────┘
                             │  auto_patch()
                             ▼
                    ┌─────────────────┐
                    │  AgentLens SDK  │  ← zero dependencies
                    │  Batch · Retry  │
                    │  Circuit Breaker│
                    └────────┬────────┘
                             │  HTTP POST (batched every 2s)
                             ▼
  ┌──────────────────────────────────────────────────────────┐
  │               AgentLens Backend (FastAPI)                │
  │                                                          │
  │  ┌────────────┐  ┌──────────┐  ┌───────────────────┐    │
  │  │ Processors │  │ Database │  │    Exporters      │    │
  │  │ PII Redact │  │ SQLite   │  │ S3 · Kafka        │    │
  │  │ Sampling   │→ │ Postgres │→ │ Webhook · File    │    │
  │  │ Filtering  │  │ ClickHse │  └───────────────────┘    │
  │  └────────────┘  └──────────┘                            │
  │                                                          │
  │  OTEL /v1/traces · WebSocket /ws/live · REST API         │
  │  Cost Anomaly Detection · Prompt Diff · Alert Webhooks   │
  └──────────────────────────┬───────────────────────────────┘
                             │
                             ▼
  ┌──────────────────────────────────────────────────────────┐
  │                   Dashboard (React)                      │
  │  Overview · Sessions · Trace Tree · Agent Graph (DAG)   │
  │  Live Feed · Cost Anomalies · Alerts · Prompt Diff      │
  └──────────────────────────────────────────────────────────┘
```

**Data flow:** SDK intercepts LLM calls → batches events → Backend runs through processor pipeline (PII redaction, sampling, filtering) → stores in pluggable DB → forwards to exporters → Dashboard renders in real-time via WebSocket.
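The processor pipeline stage can be pictured with a minimal sketch. The regex patterns and the `redact` helper below are illustrative assumptions, not AgentLens's actual implementation:

```python
import re

# Hypothetical stand-ins for the kinds of patterns a PII-redaction
# processor might scrub before an event reaches storage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with a [REDACTED:<kind>] placeholder."""
    for kind, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{kind}]", text)
    return text

event = {"prompt": "Email me at alice@example.com, SSN 123-45-6789"}
event["prompt"] = redact(event["prompt"])
print(event["prompt"])  # → Email me at [REDACTED:email], SSN [REDACTED:ssn]
```

Running each event through such a pipeline before the database write is what keeps sensitive values out of storage, exporters, and the dashboard alike.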

---

## Features

| | Feature | Description |
|---|---|---|
| 📡 | **Live Event Feed** | See every LLM call, tool use, and decision as it happens (WebSocket) |
| 🌳 | **Trace Tree** | Collapsible parent→child span hierarchy with timing waterfall |
| 🔗 | **Agent Graph** | Visual DAG of agent execution flow, color-coded by status |
| 🔍 | **Prompt Replay & Diff** | Click any LLM call to see prompt/completion, diff against similar prompts |
| 📉 | **Cost Anomaly Detection** | Zero-config — auto-flags when daily cost exceeds 2× rolling average |
| 💰 | **Cost Tracking** | Automatic pricing for 20+ models (GPT-4o, Claude 4, Gemini Pro, etc.) |
| 🔌 | **Plugin System** | Swap DB (SQLite → Postgres → ClickHouse), add exporters (S3, Kafka, Webhook) |
| 🛡️ | **PII Redaction** | Auto-scrubs emails, phones, SSNs, credit cards, API keys before storage |
| 🔐 | **Encryption at Rest** | AES-128-CBC + HMAC-SHA256 (Fernet) field-level encryption |
| 🔑 | **RBAC & API Keys** | Admin/Member/Viewer roles, key rotation with grace period |
| 📊 | **Prometheus Metrics** | Native `/metrics` endpoint — plug into Grafana |
| 🤖 | **MCP Native** | MCP client monitoring + MCP server for Claude Desktop |
| 🧪 | **One-Click Demo** | Load 500+ events across 5 agent types to explore instantly |
| 🌐 | **OTEL Ingestion** | Accept traces from any OpenTelemetry-compatible tool |
| ⚡ | **Zero Dependencies** | Core SDK uses only Python stdlib — no conflicts, ever |
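The cost-anomaly rule in the table above (flag when daily cost exceeds 2× the rolling average) can be sketched in a few lines. The function name and default window size are illustrative assumptions, not AgentLens's actual API:

```python
def is_cost_anomaly(daily_costs: list[float], today: float, window: int = 7) -> bool:
    """Flag `today` if it exceeds 2x the rolling average of recent daily costs."""
    recent = daily_costs[-window:]
    if not recent:
        return False  # no history yet, nothing to compare against
    rolling_avg = sum(recent) / len(recent)
    return today > 2 * rolling_avg

history = [1.10, 0.95, 1.20, 1.05, 1.00]  # dollars per day; avg ≈ $1.06
print(is_cost_anomaly(history, today=4.50))  # → True  (well above 2x avg)
print(is_cost_anomaly(history, today=1.30))  # → False
```

Because the threshold is relative to recent history rather than a fixed budget, the check needs no configuration and adapts as normal usage grows.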

---

## Integrations

Works with **any** AI agent framework. One `auto_patch()` call instruments everything:

| Framework | Method | What's Captured |
|---|---|---|
| **OpenAI** | `auto_patch()` | model, tokens, cost, latency, response |
| **Anthropic / Claude** | `auto_patch()` | model, tokens, tool_use blocks, cost |
| **Google Gemini / ADK** | `auto_patch()` | model, tokens, cost |
| **LangChain / LangGraph** | Callback handler | chains, tools, agents, retries |
| **CrewAI** | `auto_patch()` | kickoff, task execution, agent actions |
| **LiteLLM** (100+ providers) | `auto_patch()` | all providers via unified API |
| **MCP** | `auto_patch()` | tool calls, resource reads |
| **Any language** | REST API | POST JSON to `/api/v1/events` |

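For the language-agnostic REST path in the last table row, an event is just JSON posted to `/api/v1/events`. The field names in the payload below are a plausible sketch rather than the documented schema, so check the API before relying on them:

```python
import json
import urllib.request

# Hypothetical event payload; the real AgentLens schema may differ.
event = {
    "type": "llm_call",
    "model": "gpt-4o",
    "tokens_in": 120,
    "tokens_out": 45,
    "latency_ms": 830,
}

req = urllib.request.Request(
    "http://localhost:8340/api/v1/events",
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment with a running AgentLens backend
print(req.get_method(), req.full_url)
```

Any language with an HTTP client can use this path, which is how non-Python agents report into the same dashboard.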
<details>
<summary><b>See framework-specific code examples</b></summary>

### OpenAI
```python
import openai
from agentlens import AgentLens
from agentlens.integrations.openai import patch_openai
lens = AgentLens(server_url="http://localhost:8340")
patch_openai(lens)
response = openai.chat.completions.create(model="gpt-4o", messages=[...])
```

### Anthropic / Claude
```python
import anthropic
from agentlens.integrations.anthropic import patch_anthropic

client = anthropic.Anthropic()
patch_anthropic(lens)  # `lens` from the AgentLens setup above
response = client.messages.create(model="claude-sonnet-4-20250514", max_tokens=1024, messages=[...])
```

### Google Gemini
```python
from agentlens.integrations.google_adk import patch_gemini, patch_google_adk
patch_gemini(lens)
patch_google_adk(lens)
```

### LangChain
```python
from agentlens.integrations.langchain import AgentLensCallbackHandler
handler = AgentLensCallbackHandler(lens)
chain = LLMChain(llm=ChatOpenAI(), prompt=prompt, callbacks=[handler])
```

### CrewAI
```python
from agentlens.integrations.crewai import patch_crewai
patch_crewai(lens)
crew = Crew(agents=[analyst], tasks=[task])
result = crew.kickoff()
```

### LiteLLM
```python
import litellm
from agentlens.integrations.litellm import patch_litellm

patch_litellm(lens)
response = litellm.completion(model="ollama/llama3", messages=[...])
```

### Custom / Manual
```python
lens.record_llm_call(model="my-model", prompt="...", response="...", tokens_in=100, tokens_out=50)
lens.record_tool_call(tool_name="my-tool", args={"key": "value"}, result="success", duration_ms=150)
lens.record_step(step_name="process", data={"status": "done"})

```

</details>

[truncated…]

PUBLIC HISTORY

First discovered: Mar 21, 2026

IDENTITY

inferred

Identity inferred from code signals. No PROVENANCE.yml found.


METADATA

platform: github
first seen: Mar 15, 2026
last updated: Mar 17, 2026
last crawled: 15 days ago
version

README BADGE

Add to your README:

![Provenance](https://getprovenance.dev/api/badge?id=provenance:github:Nitin-100/agentlens)