
syndicate

Provenance ID: `provenance:github:neerajaanil/syndicate`
WHAT THIS AGENT DOES

Syndicate acts like an instant team of AI researchers, tackling complex topics by automatically assembling specialists with different areas of expertise. It addresses the challenge of needing deep research and analysis but lacking the resources to hire a full team. Business leaders, consultants, or anyone needing in-depth reports on new trends or technologies would find it valuable. What sets Syndicate apart is its ability to dynamically create and manage these specialized AI agents on the fly, ensuring the right expertise is applied to every research question. This results in well-researched, publication-quality reports delivered efficiently.

README
# Syndicate — Autonomous AI Research System

> **Spin up a team of AI experts on demand to research, debate, and produce publication-quality reports.**

Syndicate dynamically builds a team of AI domain specialists for any research topic, has them search the web and analyse data, lets them consult each other across disciplines, evaluates the quality of their output, and synthesises everything into a polished research report.

Unlike traditional pipelines, Syndicate creates its own agents at runtime, enabling dynamic specialization per problem.

Built on [AutoGen Core](https://microsoft.github.io/autogen/stable/) with a production-hardened "agents creating agents" pattern.

---

## How It Works

```
Your topic
    │
    ▼
┌─ Planner ──────────────────────────────────────────────────┐
│  Decomposes topic into N specialist roles (structured JSON) │
└────────────────────────────────────────────────────────────┘
    │
    ▼  (parallel)
┌─ ForgeAgent ───────────────────────────────────────────────┐
│  For each role:                                             │
│  1. LLM generates a custom RoutedAgent Python file         │
│  2. Validates syntax + required structure                  │
│  3. Loads via importlib.util.spec_from_file_location()     │
│  4. Registers into the live AutoGen runtime                │
└────────────────────────────────────────────────────────────┘
    │
    ▼  (parallel, streams per-specialist)
┌─ Specialist Agents (dynamically spawned) ──────────────────┐
│  Each specialist:                                           │
│  • Has a unique domain persona generated by the LLM        │
│  • Uses web search + Python REPL tools                     │
│  • Probabilistically consults a peer specialist            │
│  • Writes a 400-600 word section with inline citations     │
└────────────────────────────────────────────────────────────┘
    │
    ▼
┌─ EvaluatorAgent ───────────────────────────────────────────┐
│  Reads all sections, scores each 1-10, writes revision     │
│  notes, produces structured EvaluatorReport                │
└────────────────────────────────────────────────────────────┘
    │
    ▼
┌─ SynthesisAgent ───────────────────────────────────────────┐
│  Assembles sections + applies revision notes into a        │
│  publication-quality markdown report                       │
└────────────────────────────────────────────────────────────┘
```
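
The Forge step above can be sketched in a few lines. This is an illustrative simplification, not Syndicate's actual code (the function and class names here are hypothetical): validate LLM-generated source with `ast.parse()`, check for the required structure, then load it as a live module via `importlib.util.spec_from_file_location()`.

```python
import ast
import importlib.util
import tempfile
from pathlib import Path

def load_generated_agent(source: str, module_name: str):
    """Validate LLM-generated Python and load it as a live module.

    Raises SyntaxError if the source does not parse, ValueError if the
    expected structure (a top-level class) is missing.
    """
    # 1. Syntax check before anything touches the import machinery.
    tree = ast.parse(source)

    # 2. Minimal structure check: require at least one class definition.
    if not any(isinstance(node, ast.ClassDef) for node in tree.body):
        raise ValueError("generated code defines no agent class")

    # 3. Write to a real file and import it via importlib.
    path = Path(tempfile.mkdtemp()) / f"{module_name}.py"
    path.write_text(source)
    spec = importlib.util.spec_from_file_location(module_name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Example: a trivial "specialist" generated at runtime.
generated = '''
class EnergySpecialist:
    def research(self, topic):
        return f"section on {topic}"
'''
module = load_generated_agent(generated, "energy_specialist")
agent = module.EnergySpecialist()
print(agent.research("battery recycling"))  # → section on battery recycling
```

The real system additionally retries generation once on failure and falls back to template assembly, so a bad LLM response never stalls the pipeline.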

---

## Features

- **Meta-agent pattern** — a Forge agent uses LLM code generation + `importlib` to spawn custom `RoutedAgent`s at runtime; no specialist agents are hardcoded
- **Reliable code generation** — LLM output validated with `ast.parse()` + structure checks; retried once on failure; template-assembly fallback guarantees the pipeline never stalls
- **Two-phase spawning** — all specialists registered before any tasks run, enabling reliable peer consultation
- **Per-specialist streaming** — Gradio UI updates as each specialist finishes, not waiting for the slowest
- **Graceful degradation** — individual specialist failures are logged and skipped; pipeline continues with available sections
- **Serper / DuckDuckGo search** — uses Serper if a key is provided, falls back to DuckDuckGo automatically
- **Session persistence** — every report saved with metadata; reload any past report from the UI
- **Research modes** — Quick (2 specialists) / Balanced (4) / Deep (6) selectable from the UI
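
The graceful-degradation behaviour in the list above can be sketched with a small asyncio snippet (the specialist names and failure here are hypothetical, not Syndicate's actual API): `asyncio.gather(..., return_exceptions=True)` keeps one specialist's failure from cancelling the rest, and failed specialists are logged and skipped.

```python
import asyncio

async def run_specialist(name: str) -> str:
    """Hypothetical specialist task; one of them fails for illustration."""
    if name == "legal":
        raise RuntimeError("search backend timed out")
    return f"{name}: 400-600 word section"

async def run_all(names):
    # return_exceptions=True collects failures instead of propagating them.
    results = await asyncio.gather(
        *(run_specialist(n) for n in names), return_exceptions=True
    )
    sections = []
    for name, result in zip(names, results):
        if isinstance(result, Exception):
            print(f"[warn] specialist {name!r} failed: {result}")  # logged + skipped
        else:
            sections.append(result)
    return sections

sections = asyncio.run(run_all(["finance", "legal", "technical"]))
print(len(sections))  # → 2
```

The pipeline then proceeds to evaluation and synthesis with whichever sections succeeded.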

---

## Quickstart

```bash
git clone https://github.com/neerajaanil/syndicate.git
cd syndicate
pip install -r requirements.txt

cp .env.example .env
# Edit .env — set OPENAI_API_KEY at minimum

python app.py
```

Open `http://localhost:7860`.

---

## Configuration

All Syndicate-specific settings use the `HIVE_` prefix (API keys keep their standard names) and can be set in `.env` or as environment variables.

| Variable | Default | Description |
|---|---|---|
| `OPENAI_API_KEY` | *(required)* | OpenAI API key |
| `SERPER_API_KEY` | — | [Serper](https://serper.dev) key for Google search (optional; falls back to DuckDuckGo) |
| `HIVE_RESEARCH_MODE` | `balanced` | `quick` \| `balanced` \| `deep` |
| `HIVE_MAX_SPECIALISTS` | `4` | Number of specialists to spawn (1–8) |
| `HIVE_PEER_CONSULT_CHANCE` | `0.4` | Probability a specialist consults a peer (0.0–1.0) |
| `HIVE_PLANNER_MODEL` | `gpt-4o` | Model for planning (needs structured output) |
| `HIVE_FORGE_MODEL` | `gpt-4o` | Model for code generation |
| `HIVE_SPECIALIST_MODEL` | `gpt-4o-mini` | Model for each specialist (cost-sensitive) |
| `HIVE_EVALUATOR_MODEL` | `gpt-4o-mini` | Model for evaluation |
| `HIVE_SYNTHESIS_MODEL` | `gpt-4o-mini` | Model for final synthesis |
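
For example, a minimal `.env` for a deep research run might look like this (all values illustrative):

```bash
OPENAI_API_KEY=sk-your-key-here
# Optional: Serper-backed Google search (falls back to DuckDuckGo if unset)
# SERPER_API_KEY=...
HIVE_RESEARCH_MODE=deep
HIVE_SPECIALIST_MODEL=gpt-4o-mini
```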

---

## Programmatic Usage

```python
import asyncio
from syndicate import ResearchPipeline, SyndicateConfig

async def main():
    config = SyndicateConfig()
    pipeline = ResearchPipeline(
        topic="The economics of lithium-ion battery recycling",
        config=config,
    )
    async for session in pipeline.run():
        if session.final_report:
            print(session.final_report)

asyncio.run(main())
```

---

## Development

```bash
pip install -e ".[dev]"
pytest
```

---

## Commercial Applications

| Use case | How |
|---|---|
| **Market research** | Spawn analysts per vertical; get a structured competitive report |
| **Due diligence** | Assemble financial, legal, and technical specialists per target |
| **Policy analysis** | Multi-discipline crew covering economic, social, and legal angles |
| **Technical deep-dives** | Architect, security, and performance specialists review a system |
| **SaaS wrapper** | Add auth + billing to the multi-session-ready pipeline |

---

## Acknowledgements

This project builds on concepts and example implementations from the
[ed-donner/agents](https://github.com/ed-donner/agents) repository.

Syndicate extends those ideas into a more cohesive, production-oriented system, including:
- Dynamic agent generation ("agents creating agents")
- Multi-stage evaluation and synthesis pipeline
- Runtime validation and fault tolerance
- Streaming and session-based architecture

Huge credit to Ed Donner for the foundational work and inspiration.

---

## License

MIT

PUBLIC HISTORY

First discovered: Mar 21, 2026

IDENTITY

inferred

Identity inferred from code signals. No PROVENANCE.yml found.


METADATA

Platform: github
First seen: Mar 19, 2026
Last updated: Mar 19, 2026
Last crawled: today
Version: —

README BADGE

Add to your README:

![Provenance](https://getprovenance.dev/api/badge?id=provenance:github:neerajaanil/syndicate)