
flowscript-agents

provenance:github:phillipclapham/flowscript-agents
WHAT THIS AGENT DOES

Flowscript-agents helps AI assistants remember not just *what* they decided, but *why* they made those decisions. Many AI tools make choices without explaining their reasoning, making it difficult to understand or trust their actions. This agent solves that problem by recording the thought process behind each decision, allowing users to later ask "why?" and receive a clear explanation. Business leaders, project managers, and anyone relying on AI for important choices would find this tool valuable. What makes it unique is its ability to track and query this reasoning over time, even when decisions seem to contradict each other, providing a transparent and auditable record of the AI's logic.

README
<p align="center">
  <img src="docs/brand/logo-512.png" alt="FlowScript" width="120" />
</p>

<h1 align="center">flowscript-agents</h1>

<p align="center"><strong>Your AI agents make decisions they can't explain.<br>FlowScript makes those decisions queryable.</strong></p>

<p align="center"><em>Vector stores remember what. FlowScript remembers why.</em></p>

<p align="center">
  <a href="https://pypi.org/project/flowscript-agents/"><img src="https://img.shields.io/pypi/v/flowscript-agents" alt="PyPI"></a>
  <a href="https://github.com/phillipclapham/flowscript-agents"><img src="https://img.shields.io/badge/tests-717%20passing-brightgreen" alt="Tests"></a>
  <a href="https://pypi.org/project/flowscript-agents/"><img src="https://img.shields.io/badge/python-3.10%2B-blue" alt="Python"></a>
  <a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="License: MIT"></a>
</p>

---

Your agent chose PostgreSQL three sessions ago. Now it's recommending Redis. Why did it change its mind? Without reasoning memory, you can't know.

```python
# Your agent stores reasoning as it works:
add_memory("Chose PostgreSQL — $15/mo, ACID compliant. Redis eliminated: $200/mo exceeds $50 budget.")
add_memory("Revisiting: Redis latency (sub-ms) may justify the cost for real-time features.")

# Sessions later, you ask "why?" and get the actual reasoning chain:
query_why("chose-postgresql")
# → budget_constraint ($50/mo limit)
#   → eliminated Redis ($200/mo cluster)
#     → chose PostgreSQL ($15/mo, ACID compliant)

query_tensions()
# → performance vs cost
#     Redis: sub-ms reads, $200/month
#     PostgreSQL: 10-50ms, $15/month
#     constraint: $50/month infrastructure budget
```

Six typed reasoning queries. Nine framework adapters. Session memory that learns across conversations. And when memories contradict, FlowScript doesn't delete — it creates a queryable [tension](#when-memories-contradict).

<p align="center">
  <img src="https://raw.githubusercontent.com/phillipclapham/flowscript/main/docs/flowscript-demo.png" alt="FlowScript — reasoning graph visualization with typed nodes and query results" width="800">
</p>

---

## Install

```bash
pip install flowscript-agents
```

## Quick Start: MCP Server

Add FlowScript to Claude Code, Cursor, Windsurf, or any MCP-compatible editor.

**Claude Code** — add to `.claude/settings.json`:

```json
{
  "mcpServers": {
    "flowscript": {
      "command": "flowscript-mcp",
      "args": ["--memory", "./project-memory.json"],
      "env": {
        "OPENAI_API_KEY": "your-key"
      }
    }
  }
}
```

**Cursor / Windsurf / VS Code** — add to `.mcp.json` in your project root:

```json
{
  "mcpServers": {
    "flowscript": {
      "type": "stdio",
      "command": "flowscript-mcp",
      "args": ["--memory", "./project-memory.json"],
      "env": {
        "OPENAI_API_KEY": "your-key"
      }
    }
  }
}
```

Restart your editor. You now have [20 MCP tools](#tools-at-a-glance) for reasoning memory.

**Then add the [CLAUDE.md snippet](examples/CLAUDE.md.example) to your project** — this tells your agent *when* to record decisions and surface tensions. Without it, the tools are available but passive. With it, your agent proactively tracks your project's reasoning.
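The linked example isn't reproduced here, but a project instruction file along these lines (illustrative wording, not the shipped snippet — only the tool names come from this README) gives the agent an active memory policy:

```markdown
## Reasoning memory policy

- When you make an architectural or dependency decision, record it with
  `add_memory`, including the rationale and any rejected alternatives.
- Before reversing an earlier decision, run `query_why` on it and surface
  the original reasoning to the user.
- When recorded memories conflict, call `query_tensions` and present the
  tradeoff instead of silently picking a side.
```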

---

## What You Get

**With an API key**, plain text is auto-extracted into typed reasoning nodes. Vector search finds memories by meaning. Contradictions become queryable tensions instead of silent overwrites.

| API Key | What auto-configures |
|:--------|:--------------------|
| `OPENAI_API_KEY` | Typed extraction (gpt-4o-mini) + vector search (text-embedding-3-small) + consolidation |
| `ANTHROPIC_API_KEY` | Typed extraction (claude-haiku) + consolidation. Keyword search (no embeddings). |
| Neither | Raw text storage + keyword search. Tools work, but no typed extraction or vector search. |

**Add session persistence** with one env var. Your agent's memory is compressed at session boundaries — patterns that recur get promoted, noise fades naturally:

```json
"env": {
  "OPENAI_API_KEY": "your-key",
  "FLOWSCRIPT_CONTINUITY": "true"
}
```

**Local embeddings (free, no API key for embeddings):**

| Provider | Install | Flag |
|:---------|:--------|:-----|
| Ollama | [Install Ollama](https://ollama.com) + `ollama pull nomic-embed-text` | `--embedder ollama --embedding-model nomic-embed-text` |
| SentenceTransformers | `pip install sentence-transformers` | `--embedder sentence-transformers --embedding-model BAAI/bge-m3` |

You still need an LLM API key for typed extraction, even with local embeddings.
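Combining the pieces above, a Quick Start config pointed at Ollama embeddings might look like this (same `.mcp.json` shape as earlier; the Anthropic key is just one way to satisfy the extraction requirement, on the assumption that the local embedder flags supply vector search):

```json
{
  "mcpServers": {
    "flowscript": {
      "type": "stdio",
      "command": "flowscript-mcp",
      "args": [
        "--memory", "./project-memory.json",
        "--embedder", "ollama",
        "--embedding-model", "nomic-embed-text"
      ],
      "env": {
        "ANTHROPIC_API_KEY": "your-key"
      }
    }
  }
}
```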

---

## Six Reasoning Queries

These are graph traversals — sub-millisecond, deterministic, no LLM calls.

| Query | Returns | Ask when |
|:------|:--------|:---------|
| `query_why(node_id)` | Causal chain backward from any decision | "Why did we choose this?" |
| `query_tensions()` | Tradeoffs with named axes | "What tradeoffs are we navigating?" |
| `query_blocked()` | Blockers + downstream impact | "What's stuck and what does it affect?" |
| `query_alternatives(node_id)` | Options considered + outcome | "What else did we consider?" |
| `query_what_if(node_id)` | Forward impact analysis | "What breaks if we change this?" |
| `query_counterfactual(node_id)` | What would need to change | "What would it take to reverse this?" |

No vector store can answer these. Embedding similarity tells you what *looks like* your query. These queries tell you what *caused*, *blocked*, *traded off against*, and *follows from* your agent's decisions.
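The backward causal traversal behind a query like `query_why` can be illustrated with a self-contained sketch. The node IDs and edge structure here are hypothetical, not FlowScript's internals — the point is that the answer falls out of graph edges, with no LLM call:

```python
# Hypothetical reasoning graph: edges map a cause to the effects it led to.
# (Assumes an acyclic graph, as a decision chain typically is.)
edges = {
    "budget_constraint": ["eliminated-redis"],
    "eliminated-redis": ["chose-postgresql"],
}

def query_why(node_id, edges):
    """Walk backward from a node, collecting (cause, effect) pairs."""
    # Invert the edge map so we can look up each node's causes.
    causes = {}
    for cause, effects in edges.items():
        for effect in effects:
            causes.setdefault(effect, []).append(cause)
    chain = []
    frontier = [node_id]
    while frontier:
        current = frontier.pop()
        for cause in causes.get(current, []):
            chain.append((cause, current))
            frontier.append(cause)
    return chain

print(query_why("chose-postgresql", edges))
```

Because this is a plain traversal, the result is deterministic and the cost scales with the depth of the chain, not with the size of an embedding index.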

---

## When Memories Contradict

Every other memory system handles contradictions by deleting. Mem0's consolidation uses ADD/UPDATE/DELETE — when facts contradict, the old one is replaced. LangGraph's langmem does the same.

FlowScript doesn't delete. It **relates**.

When consolidation detects a contradiction, it creates a tension with a named axis. Both perspectives survive. The disagreement itself becomes queryable knowledge. Call `query_tensions()` to see every active tradeoff your agent is navigating.
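A tension-as-relation can be sketched in a few lines — the class and field names below are illustrative, not FlowScript's schema, but they show the key property: both contradicting memories survive and the tradeoff axis is a first-class, queryable value:

```python
from dataclasses import dataclass

@dataclass
class Tension:
    axis: str         # named tradeoff axis, e.g. "performance vs cost"
    side_a: str       # one surviving memory
    side_b: str       # the contradicting memory — not deleted
    constraint: str   # what makes the tradeoff binding

memories = [
    "Redis: sub-ms reads, $200/month",
    "PostgreSQL: 10-50ms reads, $15/month",
]

# Consolidation relates the contradiction instead of overwriting it:
tensions = [
    Tension(
        axis="performance vs cost",
        side_a=memories[0],
        side_b=memories[1],
        constraint="$50/month infrastructure budget",
    )
]

def query_tensions(tensions):
    """Return every active tradeoff with both sides intact."""
    return [(t.axis, t.side_a, t.side_b, t.constraint) for t in tensions]
```

Note that after consolidation both entries in `memories` are still present — the delete-on-contradiction systems described above would have kept only one.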

---

## Tools at a Glance

20 MCP tools, grouped by purpose:

| Group | Tools |
|:------|:------|
| **Memory** | `add_memory`, `search_memory`, `get_context`, `remove_memory`, `memory_stats` |
| **Reasoning Queries** | `query_tensions`, `query_blocked`, `query_why`, `query_alternatives`, `query_what_if`, `query_counterfactual` |
| **Session** | `session_wrap`, `encode_exchange` |
| **Compliance** | `explain_decision`, `query_audit`, `verify_audit`, `verify_integrity` |
| **Thinking** | `think_deeper`, `think_creative`, `think_breakthrough` |

---

## Works With Your Stack

Each adapter implements your framework's native memory interface — `BaseStore` for LangGraph, `StorageBackend` for CrewAI, and so on. You don't learn a new API. You get reasoning queries on top of the one you already use.

| Framework | Adapter | Install |
|:----------|:--------|:--------|
| **LangGraph** | `FlowScriptStore` → `BaseStore` | `pip install flowscript-agents[langgraph]` |
| **CrewAI** | `FlowScriptStorage` → `StorageBackend` | `pip install flowscript-agents[crewai]` |
| **Google ADK** | `FlowScriptMemoryService` → `BaseMemoryService` | `pip install flowscript-agents[google-adk]` |
| **OpenAI Agents** | `FlowScriptSession` → `Session` | `pip install flowscript-agents[openai-agents]` |
| **Pydantic AI** | `FlowScriptDeps` → Deps + tools | `pip install flowscript-agents[pydantic-ai]` |
| **smolagents** | `FlowScriptMemory` → Tool protocol | `pip install flowscript-agents[smolagents]` |
| **LlamaIndex** | `FlowScriptMemoryBlock` → `BaseMemoryBlock` | `pip install flowscript-agents[llamaindex]` |
| **Haystack** | `FlowScriptMemoryStore` → `MemoryStore` | `pip install flowscript-agents[haystack]` |
| **CAMEL-AI** | `FlowScriptCamelMemory` → `AgentMemory` | `pip install flowscript-agents[camel-ai]` |

All adapters expose `.memory` for direct query access and support `with` blocks for automatic session lifecycle. Install everything: `pip in

[truncated…]

PUBLIC HISTORY

First discovered: Mar 21, 2026

IDENTITY

inferred

Identity inferred from code signals. No PROVENANCE.yml found.


METADATA

platform: github
first seen: Mar 17, 2026
last updated: Mar 20, 2026
last crawled: today
version:

README BADGE

Add to your README:

![Provenance](https://getprovenance.dev/api/badge?id=provenance:github:phillipclapham/flowscript-agents)