## What This Agent Does

AgentTrace helps you understand exactly how your AI assistants are working. It records every step they take, like a video of their thought process, so you can easily spot errors or inefficiencies. This solves the frustrating problem of not knowing why an AI assistant is making mistakes or behaving unexpectedly, saving you time and effort in debugging. Business users and developers building AI applications will find it invaluable for improving their AI's performance.

<div align="center">

# 🔍 AgentTrace

**Zero-config visual debugging and auto-evaluation for LLM agents.**

[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
[![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)
[![Go 1.21+](https://img.shields.io/badge/go-1.21+-00ADD8.svg)](https://go.dev/)
[![OpenTelemetry](https://img.shields.io/badge/OpenTelemetry-native-blueviolet)](https://opentelemetry.io/)

*One import. Zero config. Instant visual timeline of every LLM call, tool execution, and crash your agent makes.*

</div>

---

## The Problem

You build an AI agent. It calls an LLM, uses tools, chains prompts together. Then it hallucinates, loops infinitely, or silently drops context — and you have **no idea where it went wrong.**

Every other observability tool requires accounts, API keys, cloud dashboards, and framework-specific setup. You just want to **see what happened.**

## The Solution

```python
import agenttrace.auto  # ← That's it. One line.

# ... your existing agent code runs normally ...
# When it finishes, a local dashboard opens automatically at localhost:8000
```

AgentTrace intercepts every LLM call, tool execution, and unhandled crash — then serves a beautiful local timeline you can replay step-by-step.

---

## ✨ Features

### 🪄 True Zero-Config
Add `import agenttrace.auto` to the top of your script. No API keys, no accounts, no cloud. Works with **OpenAI**, **Groq**, **Anthropic**, **Mistral**, **Google Gemini**, **LangChain**, **CrewAI**, **Vercel AI SDK**, and **15+ more** out of the box.

### 🧠 Smart Auto-Judge
AgentTrace doesn't just *show* you what happened — it *tells you what went wrong:*

| Evaluation | How It Works | Cost |
|---|---|---|
| 🔁 **Loop Detection** | Flags 3+ identical consecutive tool calls | Free (pure Python) |
| 💰 **Cost Anomaly** | Flags steps using >2x average tokens | Free (pure Python) |
| ⏱️ **Latency Regression** | Flags steps >3x slower than average | Free (pure Python) |
| 🔧 **Tool Misuse** | Detects wrong arguments or failed tool calls | LLM-powered (optional) |
| 📝 **Instruction Drift** | Detects when LLM ignores the system prompt | LLM-powered (optional) |

> LLM-powered checks require a free [Groq API key](https://console.groq.com). Install with `pip install "agenttrace-ai[judge]"`.
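The free checks above are simple statistical heuristics, so they can be reasoned about directly. A minimal sketch of what loop detection and cost-anomaly flagging could look like (illustrative helper names and thresholds, not AgentTrace's actual code):

```python
from itertools import groupby

def detect_loops(tool_calls, threshold=3):
    """Flag runs of >= threshold identical consecutive tool calls.

    tool_calls: list of (tool_name, args) tuples in execution order.
    Returns the calls that repeated enough times in a row to look like a loop.
    """
    flagged = []
    for call, run in groupby(tool_calls):
        if len(list(run)) >= threshold:
            flagged.append(call)
    return flagged

def detect_cost_anomalies(token_counts, factor=2.0):
    """Flag step indices whose token usage exceeds factor * average."""
    if not token_counts:
        return []
    avg = sum(token_counts) / len(token_counts)
    return [i for i, t in enumerate(token_counts) if t > factor * avg]

calls = [("search", "cats"), ("search", "cats"), ("search", "cats"), ("fetch", "url")]
print(detect_loops(calls))                          # the repeated ("search", "cats") call
print(detect_cost_anomalies([100, 120, 90, 800]))   # index of the 800-token outlier
```

Because these checks are pure Python over already-recorded trace data, they run at zero cost — no extra model calls are needed.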

### ▶️ Trace Replay
Press **Play** and watch your agent's execution animate step-by-step — like a video recording of its thought process. Drag the scrubber to jump to any moment. Flagged steps pulse red.

### 💥 Crash Detection
If your agent throws an unhandled exception, AgentTrace catches it and logs the full traceback as a trace step — so you never lose debugging data.
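Catching an unhandled exception as a trace step is typically done by chaining `sys.excepthook`. A hedged sketch of the general technique (the `trace_steps` store and record shape are hypothetical, not AgentTrace's internals):

```python
import sys
import traceback

trace_steps = []  # stand-in for a real trace store

def record_crash(exc_type, exc_value, exc_tb):
    """Log the full traceback as a trace step, then defer to the default hook."""
    trace_steps.append({
        "type": "crash",
        "error": f"{exc_type.__name__}: {exc_value}",
        "traceback": "".join(traceback.format_exception(exc_type, exc_value, exc_tb)),
    })
    sys.__excepthook__(exc_type, exc_value, exc_tb)

# Install the hook; any uncaught exception is now recorded before the process dies
sys.excepthook = record_crash
```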

### 🔗 Session Tracing
Group related traces into **sessions** for multi-turn agent workflows. Tag traces with custom key-value pairs for filtering and organization:

```python
import os
os.environ["AGENTTRACE_SESSION_ID"] = "user-123-conversation"
os.environ["AGENTTRACE_TAGS"] = "env=prod,agent=support"
```
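The tag string is a comma-separated list of `key=value` pairs. A sketch of how such a string can be parsed into a dict (illustrative only, not AgentTrace's parser):

```python
import os

def parse_tags(raw):
    """Split 'k1=v1,k2=v2' into a dict, skipping malformed entries."""
    tags = {}
    for pair in raw.split(","):
        key, sep, value = pair.partition("=")
        if sep and key.strip():
            tags[key.strip()] = value.strip()
    return tags

print(parse_tags(os.environ.get("AGENTTRACE_TAGS", "env=prod,agent=support")))
# → {'env': 'prod', 'agent': 'support'}
```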

### 🔀 Trace Comparison (Diff Mode)
Select any two traces and **diff them side-by-side**. AgentTrace uses an LCS-based algorithm to classify each step as added, removed, changed, or unchanged — with a metrics delta bar showing differences in tokens, latency, and step count.
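The same added/removed/changed/unchanged classification can be sketched with the standard library's `difflib.SequenceMatcher`, which uses a comparable longest-matching-subsequence idea (an illustration of the concept, not AgentTrace's actual algorithm):

```python
from difflib import SequenceMatcher

def diff_steps(a, b):
    """Classify steps between two traces as unchanged/changed/removed/added."""
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(a=a, b=b).get_opcodes():
        if tag == "equal":
            ops += [("unchanged", s) for s in a[i1:i2]]
        elif tag == "replace":
            ops += [("changed", s) for s in b[j1:j2]]
        elif tag == "delete":
            ops += [("removed", s) for s in a[i1:i2]]
        elif tag == "insert":
            ops += [("added", s) for s in b[j1:j2]]
    return ops

old = ["llm_call", "search", "summarize"]
new = ["llm_call", "fetch", "summarize", "report"]
print(diff_steps(old, new))
```

Each classified step can then feed a metrics delta bar: summing tokens and latency over the `added`/`removed` steps yields the per-trace differences.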

### 📦 Evaluation Datasets
Build golden test datasets directly from your traces:
- **Save** individual LLM call inputs/outputs to a dataset with one click
- **Batch import** all traces from a session or tag filter
- **Export** datasets as `.jsonl` for use in fine-tuning or CI evaluation pipelines
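The `.jsonl` convention is one JSON object per line, which is what most fine-tuning and CI evaluation pipelines ingest. A minimal sketch of such an export (the `input`/`output` field names are an assumption for illustration):

```python
import json

def export_jsonl(records, path):
    """Write one JSON object per line — the .jsonl convention."""
    with open(path, "w", encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

dataset = [
    {"input": "What is the capital of France?", "output": "Paris"},
    {"input": "2 + 2?", "output": "4"},
]
export_jsonl(dataset, "golden.jsonl")
```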

### 🔌 Framework Support

#### LLM Providers
| Provider | Status | Install |
|---|---|---|
| OpenAI | ✅ Native | `pip install "agenttrace-ai[openai]"` |
| Groq | ✅ Native | `pip install "agenttrace-ai[openai]"` |
| Anthropic (Claude) | ✅ Native | `pip install "agenttrace-ai[anthropic]"` |
| Mistral AI | ✅ Native | `pip install "agenttrace-ai[mistral]"` |
| Google Gemini | ✅ Native | `pip install "agenttrace-ai[google]"` |
| Cohere | ✅ Native | `pip install "agenttrace-ai[cohere]"` |
| AWS Bedrock | ✅ Native | `pip install "agenttrace-ai[bedrock]"` |
| Ollama | ✅ Native | `pip install "agenttrace-ai[ollama]"` |
| Replicate | ✅ Native | `pip install "agenttrace-ai[all]"` |
| Together AI | ✅ Native | `pip install "agenttrace-ai[all]"` |

#### Agent Frameworks
| Framework | Status | Install |
|---|---|---|
| LangChain | ✅ Adapter | None (auto-detected) |
| CrewAI | ✅ Adapter | None (auto-detected) |
| Vercel AI SDK | ✅ Experimental | `npm install agenttrace-node ai` |
| LlamaIndex | ✅ Native | `pip install "agenttrace-ai[all]"` |
| Haystack | ✅ Native | `pip install "agenttrace-ai[all]"` |

#### Vector Databases
| Database | Status | Install |
|---|---|---|
| ChromaDB | ✅ Native | `pip install "agenttrace-ai[vectordb]"` |
| Pinecone | ✅ Native | `pip install "agenttrace-ai[vectordb]"` |

---

## 🚀 Quickstart

### Install

```bash
# Python — Core (works with LangChain out of the box)
pip install agenttrace-ai

# Python — With OpenAI/Groq support
pip install "agenttrace-ai[openai]"

# Python — With everything (OpenAI + Auto-Judge + LangChain)
pip install "agenttrace-ai[all]"

# Node.js / TypeScript
npm install agenttrace-node

# Go
go get github.com/CURSED-ME/AgentTrace/agenttrace-go
```

### Basic Usage (OpenAI / Groq)

```python
import agenttrace.auto  # ← Add this one line
import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
print(response.choices[0].message.content)
# Dashboard opens automatically at http://localhost:8000 when your script finishes
```

### LangChain (Zero-Config)

```python
import agenttrace.auto  # ← Same one line
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}")
])

chain = prompt | llm
result = chain.invoke({"input": "Explain quantum computing"})
# All LLM calls automatically appear in the AgentTrace dashboard
```

### Node.js & TypeScript SDK

AgentTrace natively supports JavaScript/TypeScript AI agents via the OpenTelemetry standard.

**1. Install the SDK:**
```bash
npm install agenttrace-node
```

**2. Initialize tracking at the top of your index file:**
```typescript
import { init, shutdown } from "agenttrace-node";
import { OpenAI } from "openai";

// 1. Initialize OTLP tracer
init({
  serviceName: "my-ai-agent"
});

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  const response = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello!" }]
  });
  console.log(response.choices[0].message.content);

  // 2. Gracefully flush traces before the Node event loop exits
  await shutdown();
}
main();
```

**3. Vercel AI SDK Integration (Experimental):**
AgentTrace supports the [Vercel AI SDK](https://sdk.vercel.ai/) out of the box by leveraging its `experimental_telemetry` flag. Tool calls, streaming responses, and custom metadata are all captured automatically.

> **Note:** Vercel's telemetry API is marked as experimental and may change between SDK versions. AgentTrace is tested against `ai@6.0+`.

```typescript
import { init, shutdown } from "agenttrace-node";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// 1. Initialize OTLP tracer
init({ serviceName: "vercel-ai-agent" });

async function main() {
  const { text } = await generateText({
    model: openai("gpt-4o"),
    prompt: "Write a short poem about space.",
    experimental_telemetry: {
      isEnabled: true,
      functionId: "space-poet",
      metadata: { agent: "SpaceAgent" } // Appears as agent name in AgentTrace UI
    }
  });
  console.log(text);

  // 2. Flush traces
  await shutdown();
}
main();
```

### Custom Tool Tracking (Python)


[truncated…]

PUBLIC HISTORY

First discovered: Mar 21, 2026

IDENTITY

Inferred: identity inferred from code signals. No PROVENANCE.yml found.

METADATA

- Platform: github
- First seen: Mar 7, 2026
- Last updated: Mar 17, 2026
- Last crawled: 15 days ago
- Version:

README BADGE

Add to your README:

![Provenance](https://getprovenance.dev/api/badge?id=provenance:github:CURSED-ME/agent_trace)