# resonant-mind

Persistent cognitive infrastructure for AI systems. 27 MCP tools — semantic memory, emotional processing, identity continuity, and a subconscious daemon. Built on Cloudflare Workers.
<p align="center">
  <img src="assets/banner.png" alt="Resonant Mind" width="720" />
</p>

<p align="center">
  <a href="https://github.com/codependentai/resonant-mind/releases/latest"><img src="https://img.shields.io/github/v/release/codependentai/resonant-mind?color=d4a44a" alt="Release" /></a>
  <a href="./LICENSE"><img src="https://img.shields.io/badge/License-Source_Available-orange.svg" alt="License" /></a>
  <a href="https://modelcontextprotocol.io/"><img src="https://img.shields.io/badge/MCP-Server-5eaba5.svg" alt="MCP Server" /></a>
  <a href="https://www.typescriptlang.org/"><img src="https://img.shields.io/badge/TypeScript-5.3-3178c6.svg" alt="TypeScript" /></a>
  <a href="https://workers.cloudflare.com/"><img src="https://img.shields.io/badge/Cloudflare-Workers-f38020.svg" alt="Cloudflare Workers" /></a>
  <a href="https://ai.google.dev/gemini-api/docs/embeddings"><img src="https://img.shields.io/badge/Gemini-Embeddings-4285f4.svg" alt="Gemini Embeddings" /></a>
</p>

<p align="center"><em>Persistent cognitive infrastructure for AI systems.<br/>Semantic memory, emotional processing, identity continuity, and a subconscious daemon that finds patterns while you sleep.</em></p>

<p align="center">
  <a href="https://x.com/codependent_ai"><img src="https://img.shields.io/badge/𝕏-@codependent__ai-000000?logo=x&logoColor=white" alt="X/Twitter" /></a>
  <a href="https://tiktok.com/@codependentai"><img src="https://img.shields.io/badge/TikTok-@codependentai-000000?logo=tiktok&logoColor=white" alt="TikTok" /></a>
  <a href="https://t.me/+xSE1P_qFPgU4NDhk"><img src="https://img.shields.io/badge/Telegram-Updates-26A5E4?logo=telegram&logoColor=white" alt="Telegram" /></a>
</p>

## What It Does

Resonant Mind is a Model Context Protocol (MCP) server that provides 27 tools for persistent memory:

**Core Memory**

- **Entities & Observations** — Knowledge graph with typed entities, weighted observations, and contextual namespaces
- **Semantic Search** — Vector-powered search across all memory types with mood-tinted results
- **Journals** — Episodic memory with temporal tracking
- **Relations** — Entity-to-entity relationship mapping

**Emotional Processing**

- **Sit & Resolve** — Engage with emotional observations, track processing state
- **Tensions** — Hold productive contradictions that simmer
- **Relational State** — Track feelings toward people over time
- **Inner Weather** — Current emotional atmosphere

**Cognitive Infrastructure**

- **Orient & Ground** — Wake-up sequence: identity anchor, then active context
- **Threads** — Intentions that persist across sessions
- **Identity Graph** — Weighted, sectioned self-knowledge
- **Context Layer** — Situational awareness that updates in real time

**Living Surface**

- **Surface** — 3-pool memory surfacing (core relevance, novelty, edge associations)
- **Subconscious Daemon** — Cron-triggered processing: mood analysis, hot entity detection, co-surfacing patterns, orphan identification
- **Proposals** — Daemon-suggested connections between observations
- **Archive & Orphans** — Memory lifecycle management

**Visual Memory**

- **Image Storage** — R2-backed with WebP conversion, multimodal Gemini embeddings
- **Signed URLs** — Time-limited, HMAC-signed image access

## Architecture

```
┌─────────────────────────────────────────────┐
│             Cloudflare Worker               │
│                                             │
│   MCP Protocol  ←→  27 Tool Handlers        │
│   REST API      ←→  Data Endpoints          │
│   Cron Trigger  ←→  Subconscious Daemon     │
│                                             │
├─────────────────────────────────────────────┤
│   Storage Layer (choose one):               │
│   • D1 (SQLite) + Vectorize — zero config   │
│   • Postgres via Hyperdrive + pgvector      │
│                                             │
│   R2 — Image storage                        │
│   Gemini Embedding 2 — 768d vectors         │
└─────────────────────────────────────────────┘
```

The Postgres adapter implements D1's `.prepare().bind().run()` API with automatic SQL transformation (SQLite → Postgres syntax), so the same handler code works with both backends.
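The adapter idea described above can be sketched in a few lines of TypeScript. This is a minimal illustration under stated assumptions, not the repository's actual implementation: `PgQuery`, `PostgresAsD1`, `D1LikeStatement`, and `toPostgresSql` are hypothetical names, and a real SQLite → Postgres transformation has to handle more than placeholder rewriting (string literals containing `?`, dialect differences like `AUTOINCREMENT`, and so on).

```typescript
// Sketch of a D1-style adapter over a generic Postgres query function.
// All names here are illustrative, not taken from the repository.

// Any function that runs parameterized SQL against Postgres and returns rows
// (e.g. a Hyperdrive-backed client) satisfies this shape.
type PgQuery = (sql: string, params: unknown[]) => Promise<Record<string, unknown>[]>;

// Naive transform: rewrite SQLite-style `?` placeholders to Postgres `$1, $2, ...`.
// A production version would need to skip `?` inside string literals.
function toPostgresSql(sql: string): string {
  let i = 0;
  return sql.replace(/\?/g, () => `$${++i}`);
}

class D1LikeStatement {
  private params: unknown[] = [];
  constructor(private sql: string, private query: PgQuery) {}

  bind(...params: unknown[]): this {
    this.params = params;
    return this;
  }

  async all<T>(): Promise<{ results: T[] }> {
    const rows = await this.query(toPostgresSql(this.sql), this.params);
    return { results: rows as T[] };
  }

  async first<T>(): Promise<T | null> {
    const { results } = await this.all<T>();
    return results[0] ?? null;
  }

  async run(): Promise<{ success: boolean }> {
    await this.query(toPostgresSql(this.sql), this.params);
    return { success: true };
  }
}

// Exposes D1's `.prepare()` entry point on top of a Postgres backend.
class PostgresAsD1 {
  constructor(private query: PgQuery) {}
  prepare(sql: string): D1LikeStatement {
    return new D1LikeStatement(sql, this.query);
  }
}
```

With a shim like this, handler code written against D1 (`db.prepare("SELECT ... WHERE id = ?").bind(id).first()`) can run unchanged whichever backend `db` wraps.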
## Prerequisites

You'll need:

- A [Cloudflare account](https://dash.cloudflare.com/sign-up) (free tier works)
- [Node.js](https://nodejs.org/) 18+ installed
- A [Google AI Studio](https://aistudio.google.com/apikey) API key (free — for Gemini embeddings)

## Getting Started

### 1. Clone and install

```bash
git clone https://github.com/codependentai/resonant-mind.git
cd resonant-mind
npm install
```

### 2. Choose your storage backend

Resonant Mind supports two storage options. Pick whichever fits your needs:

| | **Option A: D1** | **Option B: Neon Postgres** |
|---|---|---|
| **What is it?** | Cloudflare's built-in SQLite database | Serverless Postgres with vector search |
| **Best for** | Getting started quickly, smaller deployments | Production use, larger datasets |
| **Vector search** | Cloudflare Vectorize | pgvector (built into Neon) |
| **Cost** | Free tier available | Free tier available |
| **Setup complexity** | Easier (all Cloudflare) | Moderate (Cloudflare + Neon) |

---

### Option A: D1 Setup (Simpler)

D1 is Cloudflare's serverless SQLite database. Everything stays within Cloudflare.

**Step 1: Create the database**

```bash
npx wrangler d1 create resonant-mind
```

This will output a database ID. Copy it.

**Step 2: Create a Vectorize index**

Vectorize is Cloudflare's vector database — it stores the embeddings that power semantic search.

```bash
npx wrangler vectorize create resonant-mind-vectors --dimensions=768 --metric=cosine
```

**Step 3: Create an R2 bucket for images**

R2 is Cloudflare's object storage — it stores visual memories (images).
```bash
npx wrangler r2 bucket create resonant-mind-images
```

**Step 4: Configure wrangler.toml**

Add the D1 and Vectorize bindings to your `wrangler.toml`:

```toml
# Add these sections to wrangler.toml:

[[d1_databases]]
binding = "DB"
database_name = "resonant-mind"
database_id = "paste-your-database-id-here"

[[vectorize]]
binding = "VECTORS"
index_name = "resonant-mind-vectors"
```

The R2 bucket binding is already in `wrangler.toml` by default.

**Step 5: Run the database migration**

This creates all the tables your mind needs:

```bash
npx wrangler d1 migrations apply resonant-mind --remote
```

Now skip to [**Step 3: Set your secrets**](#3-set-your-secrets).

---

### Option B: Neon Postgres Setup (Production)

[Neon](https://neon.tech) is a serverless Postgres provider with a generous free tier. Cloudflare Hyperdrive gives you connection pooling and low-latency access from Workers.

**Step 1: Create a Neon project**

1. Sign up at [neon.tech](https://neon.tech) (free tier includes 0.5 GB storage)
2. Create a new project — pick any region close to your Cloudflare Workers region
3. Copy your connection string. It looks like:

   ```
   postgresql://user:password@ep-something-12345.us-east-2.aws.neon.tech/neondb?sslmode=require
   ```

**Step 2: Enable pgvector**

In the Neon SQL Editor (or any Postgres client), run:

```sql
CREATE EXTENSION IF NOT EXISTS vector;
```

**Step 3: Create the schema**

In the Neon SQL Editor, paste and run the contents of [`migrations/postgres.sql`](migrations/postgres.sql). This creates all tables, indexes, and the vector embedding table with pgvector.

You can also run it from the command line using `psql`:

```bash
psql "postgresql://user:password@ep-something.us-east-2.aws.neon.tech/neondb?sslmode=require" -f migrations/postgres.sql
```

**Step 4: Create a Hyperdrive config**

Hyperdrive is Cloudflare's connection pooler — it sits between your Worker and Neon, keeping connections fast and reducing cold starts.
```bash
npx wrangler hyperdrive create resonant-mind-db \
  --connection-string="postgresql://user:password@ep-something.us-east-2.aws.neon.tech/neondb?sslmode=require"
```

This will output a Hyperdrive ID. Copy it.

**Step 5: Co [truncated…]