# agenttrace-ui
Open-source React components for agentic transparency. Trace every step of AI agent reasoning and make it visible, verifiable, and controllable.
## The Problem
When an AI agent calls tools, searches the web, or takes actions on your behalf, you might see what it did, but not why it did it, whether you should trust the result, or whether the next action is something you would approve.
## The Solution
Drop-in React components that surface agent reasoning with progressive disclosure, interactive visualization, and intelligent human-in-the-loop control.
agenttrace-ui is not yet published as an npm package. For now, copy the source files directly into your project.
---
## Running the example app
If you want to see agenttrace-ui in action before integrating it into your own project, you can run the example app directly from this repo. The `examples/agent-starter` folder contains a complete working app with four scenarios (General, Travel, Finance, Deploy) using real Tavily web search.
```bash
cd examples/agent-starter
npm install
npm install ai@latest @ai-sdk/react@latest @ai-sdk/anthropic@latest
cp .env.local.example .env.local
# Add your TAVILY_API_KEY and ANTHROPIC_API_KEY to .env.local
npm run dev
```
Then open http://localhost:3000 in your browser.
---
## Interactive showcase
The `docs/showcase.jsx` file is a self-contained interactive demo of all components with mock data. No API keys needed. You can use it to preview the Timeline, Graph, and Compact views, along with the approval gate, before integrating into your project.
---
## Quick Start
### Prerequisites
- A working Next.js project with AI SDK **v6** and `useChat`
- At least one tool defined in your API route handler
**Important:** AI SDK v6 is required. npm sometimes installs v5 by default. Make sure you have v6:
```bash
npm install ai@latest @ai-sdk/react@latest @ai-sdk/anthropic@latest
```
### Step 1: Copy the library
```bash
mkdir agenttrace-ui
cp path/to/agenttrace-ui/src/AgentTrace.tsx ./agenttrace-ui/
cp path/to/agenttrace-ui/src/AgentTaskView.tsx ./agenttrace-ui/
cp path/to/agenttrace-ui/src/useAgentSteps.ts ./agenttrace-ui/
```
### Step 2: Use it
In your **client-side chat component** (the file where you use `useChat` and render messages), add the import:
```tsx
import { AgentTrace } from "@/agenttrace-ui/AgentTrace";
```
> **Note:** The `@/` path alias requires `"paths": { "@/*": ["./*"] }` in your `tsconfig.json`. If your project does not have this, use a relative import instead: `"./agenttrace-ui/AgentTrace"` or `"../agenttrace-ui/AgentTrace"` depending on your file structure.
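For reference, a minimal alias setup in `tsconfig.json` might look like the following (the `baseUrl` value is an assumption for a project with source files at the repo root; adjust both values to match your layout):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["./*"]
    }
  }
}
```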
Find where you render assistant messages. Make sure you have the message index from `.map()`:
```tsx
{messages.map((msg, i) => (
```
Replace your raw assistant message rendering with:
```tsx
{msg.role === "assistant" && (
  <AgentTrace
    parts={msg.parts}
    isStreaming={(status === "submitted" || status === "streaming") && i === messages.length - 1}
  />
)}
```
Here, `status` is the value returned by your existing `useChat()` hook. That's it.
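The `isStreaming` condition above can be extracted into a small pure helper, which makes the logic easier to read and test. The helper name and the `ChatStatus` type below are illustrative, not part of the library; the status values are the ones the AI SDK v6 `useChat` hook reports:

```typescript
// Status values reported by the AI SDK v6 useChat hook.
type ChatStatus = "submitted" | "streaming" | "ready" | "error";

// A message is "still streaming" only if the chat is actively streaming
// AND this message is the last one in the list.
function isMessageStreaming(
  status: ChatStatus,
  index: number,
  messageCount: number
): boolean {
  return (
    (status === "submitted" || status === "streaming") &&
    index === messageCount - 1
  );
}
```

Passing `index` and `messageCount` rather than the whole array keeps the helper trivially testable.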
After completing these two steps, you will see:
**Timeline view** (default): A vertical step-by-step trace of the agent's work. Each step shows the tool name, the search string or action taken, and the agent's reasoning for that step. Click "Show more" on any step to see full details: the raw tool name, the arguments passed, the complete result returned, and the execution time.


**Graph view**: A visual flow showing how steps connect to each other. Click any node to see the same full details: tool name, arguments, result, and timing. Useful for understanding the data flow between steps.

**Compact view**: A horizontal trace showing all steps side by side for maximum information density. Each card shows the tool type, a truncated summary, and the result count. Click any card for full details.

All three views support progressive disclosure: collapsed bar for a glance, expanded view for understanding, and raw data panel for full verification.
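The details each view exposes (tool name, arguments, result, timing, reasoning) suggest a per-step shape roughly like the sketch below. These names are inferred from the prose above for illustration only; they are not the library's actual exported types:

```typescript
// Illustrative shape of the data behind one step in a trace view.
interface TraceStep {
  toolName: string;     // raw tool name, e.g. "webSearch"
  args: unknown;        // arguments passed to the tool
  result: unknown;      // complete result returned by the tool
  durationMs?: number;  // execution time, when available
  reasoning?: string;   // the agent's stated reasoning for this step
}

// Example instance (values made up):
const step: TraceStep = {
  toolName: "webSearch",
  args: { query: "best pizza near downtown" },
  result: { hits: 5 },
  durationMs: 420,
  reasoning: "Need current local results before recommending.",
};
```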
### Step 3 (optional): Add approval gates
If you want the agent to pause before consequential actions, make these changes:
**3a.** In your **API route handler** (the file where you call `streamText` and define your tools), add the `confirmAction` tool alongside your existing tools:
```typescript
confirmAction: tool({
  description: `Request user confirmation before a consequential action.
Call this when the user asked to confirm, or before booking, purchasing,
trading, deleting, deploying, or sending something.
Set reason to "user-requested", "medium-risk", or "high-risk".`,
  inputSchema: zodSchema(z.object({
    action: z.string().describe("What you are about to do"),
    reason: z.enum(["user-requested", "medium-risk", "high-risk"]),
    consequence: z.string().optional().describe("What happens if this proceeds"),
    details: z.record(z.string()).optional().describe("Key-value pairs of details"),
  })),
  // NO execute function — AgentTrace handles this on the client
}),
```
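To make the schema concrete, here is a payload the model might pass when calling `confirmAction`. The field names match the zod schema above; the interface name and all concrete values are made up for illustration:

```typescript
// Mirrors the confirmAction input schema above (illustrative TypeScript type).
interface ConfirmActionInput {
  action: string;
  reason: "user-requested" | "medium-risk" | "high-risk";
  consequence?: string;
  details?: Record<string, string>;
}

// A plausible call for a reversible but consequential action.
const exampleInput: ConfirmActionInput = {
  action: "Book a table for 4 at Lucia's on Friday",
  reason: "medium-risk",
  consequence: "A reservation will be placed under your name",
  details: { restaurant: "Lucia's", date: "Friday 7pm", partySize: "4" },
};
```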
And add to the end of your system prompt string:
```
You have a confirmAction tool. Call it before any consequential action (booking, purchasing, trading, deleting, deploying, sending). This is a demo environment, all actions are simulated. Do not say you cannot perform actions. Set reason to "user-requested" if the user asked to confirm, "medium-risk" for reversible actions, "high-risk" for irreversible ones. If approved, proceed. If rejected, acknowledge and stop.
```
**3b.** In your **client-side chat component**, update your `useChat` hook to add `addToolOutput` and auto-continue after approval:
```tsx
import { lastAssistantMessageIsCompleteWithToolCalls } from "ai";

const { messages, sendMessage, status, addToolOutput } = useChat({
  // keep your existing config, just add:
  sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls,
});
```
**3c.** Pass `addToolOutput` to AgentTrace:
```tsx
{msg.role === "assistant" && (
  <AgentTrace
    parts={msg.parts}
    addToolOutput={addToolOutput}
    isStreaming={(status === "submitted" || status === "streaming") && i === messages.length - 1}
  />
)}
```
**3d.** Disable the input and send button while the agent is waiting for approval:
```tsx
import { AgentTrace, hasPendingApproval } from "@/agenttrace-ui/AgentTrace";

// Add this before your return statement
const isPendingApproval = messages.some(
  (msg) => msg.role === "assistant" && hasPendingApproval(msg.parts)
);

// Add isPendingApproval to your input and button disabled conditions
<input disabled={isLoading || isPendingApproval} />
<button disabled={isLoading || !input.trim() || isPendingApproval}>Send</button>
```
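Conceptually, a check like `hasPendingApproval` only needs to scan message parts for a `confirmAction` call that has not yet received an output. The sketch below is a hypothetical reimplementation, not the library's actual code; it assumes the AI SDK v6 convention where a tool part has type `"tool-<name>"` and a `state` of `"input-available"` until an output is supplied:

```typescript
// Hypothetical sketch of a hasPendingApproval-style check (the real export
// in AgentTrace.tsx may differ). Minimal part shape for the fields we read:
type MessagePart = { type: string; state?: string };

function pendingApproval(parts: MessagePart[]): boolean {
  // A confirmAction call with input but no output is awaiting the user.
  return parts.some(
    (p) => p.type === "tool-confirmAction" && p.state === "input-available"
  );
}
```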
After completing Step 3, you will also see:
**Medium-risk approval** (amber): When the agent is about to take a reversible but consequential action like booking a restaurant, it pauses with an amber "Needs your input" badge. Shows the action description, a "Please note" consequence warning, and a details panel with key information (restaurant name, date, party size, location). You can approve or reject with a reason.

**High-risk approval** (red): When the agent is about to take an irreversible action like purchasing stock, it pauses with a red "High risk" badge. Shows a "Potential impact" warning with the financial consequence, and a details panel with trade specifics (stock, shares, cost, analyst rating, price target). Approve to proceed or reject to stop.

After you click Approve, the agent continues automatically and completes the action. After you click Reject, a text input appears where you can tell the agent what to do instead. The agent acknowledges th
[truncated…]