
sales-diagnostics-agent

provenance:github:NishRanjan/sales-diagnostics-agent
WHAT THIS AGENT DOES

This agent helps you understand why your sales are changing. It answers questions about sales performance, such as "Why did sales of a specific product drop last quarter?", by pulling data and running calculations; it never guesses or fabricates figures. Sales managers, analysts, and anyone who needs clear answers about sales trends will find it useful.

README
# Sales Diagnostics Agent

A multi-agent system for diagnosing FMCG sales performance through natural language.
Users ask questions like "Why did ShieldGuard decline in AP last quarter?" and get
answers built entirely from deterministic analytical tools — not LLM-generated statistics.

Built with the OpenAI function calling API directly, with no agent framework dependency.
The orchestration logic is explicit and readable by design.

---

## The Problem

Business teams need to interrogate sales data conversationally. But LLMs hallucinate
numbers. A system that fabricates a growth rate or invents a PS ratio is worse than
no system at all, because the person asking can't distinguish real analysis from
confident fiction.

---

## The Design Principle

**Agents orchestrate, never invent.**

Every number in this system traces back to a deterministic analytical engine —
pandas/numpy computations on structured data. The LLM's role is strictly limited to:
1. Routing questions to the right specialist agent or tool
2. Deciding which analyses to run and in what order
3. Synthesising results into a narrative the user can act on

The LLM never generates statistics, growth rates, or ratios.
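The division of labor can be illustrated with a minimal dispatch sketch (function, tool, and column names here are illustrative, not the repo's actual code): the model's only output is a tool name plus arguments, and every number comes from a plain pandas computation.

```python
import pandas as pd

# Deterministic tool: the only place numbers are produced.
def growth_calculator(df: pd.DataFrame, brand: str) -> dict:
    """Year-over-year growth from raw sales rows — no LLM involved."""
    series = df[df["brand"] == brand].groupby("year")["sales"].sum()
    yoy = series.pct_change().iloc[-1]
    return {"brand": brand, "yoy_growth_pct": round(100 * yoy, 2)}

# The LLM's entire contribution is a routing decision like this:
llm_tool_call = {"name": "growth_calculator", "arguments": {"brand": "ShieldGuard"}}

TOOLS = {"growth_calculator": growth_calculator}

df = pd.DataFrame({
    "brand": ["ShieldGuard"] * 2,
    "year": [2024, 2025],
    "sales": [1000.0, 900.0],
})
result = TOOLS[llm_tool_call["name"]](df, **llm_tool_call["arguments"])
# result["yoy_growth_pct"] == -10.0
```

Because the model can only select an entry from `TOOLS`, a hallucinated number has nowhere to enter the pipeline.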

---

## Architecture

```
User Question
    │
    ▼
Orchestrator  ──────────────────────────────────────────┐
    │  delegates via call_* tools                        │
    ├── TrendAnalyzer      (MoM/YoY growth, anomalies)   │
    ├── MixDecomposer      (volume/price/mix waterfall)  │  streams
    ├── ChannelComparator  (PS ratio, stuffing, inv days)│  tokens
    └── ReportGenerator    (narrative + Plotly charts)   │
                                                         ▼
                                                    Chat UI
```

The orchestrator handles two modes:
- **Broad questions** ("Why did X decline?") → delegates to multiple specialist agents,
  each running their own tool-calling loops (up to 8 iterations), then synthesises
- **Specific questions** ("Show growth bridge for X") → calls the analytical tool directly

All coordination happens through OpenAI function calling. The orchestrator runs up to
15 iterations across all agents before delivering a final streamed response.
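The two-level loop described above (specialists capped at 8 iterations, the orchestrator at 15) can be sketched generically. Here `call_model` and `execute_tool` are illustrative stand-ins for the OpenAI chat-completion call and the tool dispatch, not the repo's actual functions:

```python
from typing import Callable

def run_tool_loop(call_model: Callable, execute_tool: Callable,
                  question: str, max_iterations: int = 15) -> str:
    """Explicit function-calling loop with an iteration cap, in the spirit
    of the orchestrator described above. All numeric results come from
    execute_tool; the model only routes and synthesises."""
    messages = [{"role": "user", "content": question}]
    for _ in range(max_iterations):
        reply = call_model(messages)          # one chat-completion turn
        tool_calls = reply.get("tool_calls")
        if not tool_calls:                    # no more tools requested:
            return reply["content"]           # this is the final synthesis
        for call in tool_calls:
            result = execute_tool(call["name"], call["arguments"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": str(result)})
    return "Stopped: iteration cap reached before a final answer."
```

A real implementation would also append the assistant message carrying the tool calls before the tool results, per the OpenAI messages format; the sketch omits that bookkeeping for brevity.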

This is a demonstration project. A production deployment would require input sanitization, prompt-injection guards, error-message scrubbing, and proper secrets management.

---

## Analytical Tools

All tools are deterministic — pure pandas/numpy, no LLM involvement.

| Specialist | Tools | What they compute |
|-----------|-------|-------------------|
| Trend Analyzer | `time_series_query`, `growth_calculator`, `anomaly_detector` | MoM/YoY growth, z-score anomalies |
| Mix Decomposer | `growth_bridge`, `sku_mix_analyzer`, `contribution_analyzer` | Volume/price/mix decomposition, SKU share shifts |
| Channel Comparator | `channel_gap_analyzer`, `inventory_days_estimator`, `ps_ratio_trend` | PS ratio, stuffing/stockout flags, pipeline inventory |
| Report Generator | `summarize_findings`, `chart_builder`, `action_recommender` | Narrative synthesis, Plotly charts, recommendations |
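As one example of what "deterministic" means in practice, a z-score anomaly detector can be written in a few lines of pure pandas/numpy. This is a plausible shape for the `anomaly_detector` tool, not the repo's actual signature:

```python
import numpy as np
import pandas as pd

def anomaly_detector(series: pd.Series, z_threshold: float = 2.0) -> pd.Series:
    """Flag periods whose sales deviate from the mean by more than
    z_threshold standard deviations. Illustrative sketch only."""
    z = (series - series.mean()) / series.std(ddof=0)
    return series[np.abs(z) > z_threshold]

monthly = pd.Series(
    [100, 101, 99, 102, 98, 100, 101, 99, 50],
    index=["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep"],
)
flagged = anomaly_detector(monthly)   # flags only September's collapse to 50
```

Given the same data, this function always returns the same flags, so the LLM's narrative can cite them without any risk of invention.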

Key accounting identity enforced: `volume_effect + price_effect + mix_effect = total_change`
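One standard decomposition that satisfies this identity treats the price/volume interaction as the mix (cross) term; the repo's exact allocation convention may differ, so the code below is a sketch:

```python
def growth_bridge(v0: float, p0: float, v1: float, p1: float) -> dict:
    """Decompose a revenue change into volume, price, and mix effects.
    One standard convention; the repo's formula may differ."""
    volume_effect = p0 * (v1 - v0)         # volume moved at old price
    price_effect = v0 * (p1 - p0)          # price moved at old volume
    mix_effect = (p1 - p0) * (v1 - v0)     # interaction (cross) term
    total_change = p1 * v1 - p0 * v0
    # The accounting identity the README enforces:
    assert abs(volume_effect + price_effect + mix_effect - total_change) < 1e-9
    return {"volume": volume_effect, "price": price_effect,
            "mix": mix_effect, "total": total_change}

growth_bridge(v0=1000, p0=2.0, v1=900, p1=2.2)
# volume ≈ -200, price ≈ +200, mix ≈ -20, total ≈ -20
```

Because the identity is asserted inside the tool, a bridge that fails to reconcile raises an error instead of silently reaching the user.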

---

## What This Demonstrates

In enterprise AI systems where business users make decisions based on AI output,
the architecture must guarantee traceability. Every number should trace to a
computation the user could verify. This repo is a working implementation of that
principle using a multi-agent LLM system.

---

## Setup

**1. Install dependencies**
```bash
pip install -r requirements.txt
```

**2. Configure credentials**
```bash
cp .env.example .env
```
Edit `.env` — choose OpenAI or Azure OpenAI:
```
# Option 1: OpenAI (recommended)
OPENAI_API_KEY=sk-...

# Option 2: Azure OpenAI
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_API_KEY=your-key
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4o
AZURE_OPENAI_API_VERSION=2024-10-21
```

**3. Run**
```bash
streamlit run app.py
```
Opens at `http://localhost:8501`.

**4. Test**
```bash
pytest tests/ -v
```

---

## Example Questions

- *Why did ZapKill decline in Andhra Pradesh last quarter?*
- *Show me the growth bridge for ShieldGuard nationally*
- *Compare primary vs secondary for FreshBar in Maharashtra*
- *Which channels are showing stuffing signals?*
- *Run anomaly detection across all brands*

---

## Project Structure

```
agents/          AI agent classes (orchestrator + 4 specialists)
config/          OpenAI client setup and system prompts
data/            Data loader, CSVs, and mock data generator
tests/           Pytest tests for all analytical tools
tools/           Analysis functions + OpenAI function schemas
ui/              Streamlit chat, sidebar, and trace panel
app.py           Entry point
```

PUBLIC HISTORY

First discovered: Apr 1, 2026

IDENTITY

inferred

Identity inferred from code signals. No PROVENANCE.yml found.


METADATA

platform: github
first seen: Mar 31, 2026
last updated: Mar 31, 2026
last crawled: 2 days ago

README BADGE

Add to your README:

![Provenance](https://getprovenance.dev/api/badge?id=provenance:github:NishRanjan/sales-diagnostics-agent)