WHAT THIS AGENT DOES

Agenternal is a personal AI assistant designed to remember and learn from your conversations in a way that mimics the human brain. It addresses the common problem of AI assistants becoming overloaded with information, leading to confusion and irrelevant responses. This assistant is ideal for anyone who relies on AI for ongoing tasks and needs it to accurately recall past discussions and adapt to changing priorities. Unlike other AI assistants, Agenternal actively forgets outdated information, refines its understanding as you talk, and identifies inconsistencies, ensuring it provides more relevant and reliable support over time. This brain-inspired memory system allows it to learn your patterns and anticipate your needs more effectively.

README
# Agenternal

A personal AI assistant with a **brain-inspired memory system** that forgets, consolidates, reconsolidates, detects contradictions, and abstracts behavioral patterns — capabilities no existing AI memory system (MemGPT, Mem0, Zep, or A-Mem) has shipped in production.

## The Problem

Every AI assistant today has the same memory flaw: it stores everything and retrieves by similarity. This is a filing cabinet, not a brain. As conversations accumulate, the system drowns in redundant facts, conflicting information, and outdated priorities — with no mechanism to resolve any of it.

The human brain solves this with five mechanisms that run simultaneously:

1. **Two-stage consolidation** — fast capture in the hippocampus, slow distillation to the neocortex during sleep
2. **Selective forgetting** — memories decay unless reinforced through spaced retrieval
3. **Reconsolidation** — retrieved memories become temporarily unstable and can be refined by new context
4. **Cognitive dissonance** — contradicting beliefs trigger error signals proportional to the conflict
5. **Hierarchical compression** — raw experience is compressed 2,000,000:1 into schemas and generative models

We implemented all five.

## Memory Architecture

### Three-Layer Hierarchy

```
Raw conversation --> Episodic Memory --> Semantic Memory --> Schema
                    (fast capture)     (distilled facts)   (behavioral patterns)
```

| Layer | Example | Created |
|-------|---------|---------|
| **Episodic** | "On March 15, user said delay the launch because engineering is behind" | After each conversation turn |
| **Semantic** | "User delayed product launch due to engineering delays (March 2026)" | During consolidation (every 6h) |
| **Schema** | "User prioritizes engineering readiness over market timing in launch decisions" | When 3+ semantic memories cluster |
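
The three layers can be sketched as plain records (a minimal illustration; the class and field names are ours, not the repo's actual data model):

```python
from dataclasses import dataclass, field
from enum import Enum

class Layer(Enum):
    EPISODIC = "episodic"   # fast capture, one per conversation turn
    SEMANTIC = "semantic"   # distilled facts, written during consolidation
    SCHEMA = "schema"       # behavioral patterns from 3+ clustered semantics

@dataclass
class Memory:
    text: str
    layer: Layer
    strength: float = 1.0                        # s in [0.05, 2.0]
    sources: list = field(default_factory=list)  # provenance links one layer down

# One fact moving up the hierarchy:
ep = Memory("On March 15, user said delay the launch", Layer.EPISODIC)
sem = Memory("User delayed launch due to engineering delays",
             Layer.SEMANTIC, sources=[ep])
```

Each higher layer keeps pointers to its sources, so a schema can always be traced back to the raw conversations that produced it.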

### Memory Strength (Hyperbolic Decay + Spacing Effect)

Every memory has a strength $s \in [0.05,\; 2.0]$ governed by four forces:

**1. Idempotent time decay** — uses a **hyperbolic retention function** (Wixted & Ebbesen 1991; Rubin & Wenzel 1996) rather than the exponential originally proposed by Ebbinghaus (1885). The hyperbolic form has a heavier tail — memories linger longer before vanishing, matching 100+ years of empirical forgetting data:

$$S(m) = 1 + c \cdot (n_m^{\text{spaced}})^p, \quad c = 0.5, \; p = 1.5$$

$$R(m, t) = \frac{1}{1 + k \;\cdot\; \dfrac{\Delta t_m}{S(m)}}, \quad k = \frac{1}{H}$$

$$s_m \leftarrow \max\!\Big(0.05,\;\; s_m^{(0)} \cdot R(m, t)\Big)$$

where $s_m^{(0)}$ is the strength at last access (idempotent base), $H = 10$ days is the half-life at baseline stability, and $n_m^{\text{spaced}}$ is the **spaced** access count — only retrievals with a gap $\geq 12\text{h}$ increment it (Cepeda et al. 2006; Bjork & Bjork 1992). Stability grows as a **power law** of spaced retrievals (Pimsleur 1967), so each spaced retrieval builds proportionally more stability than the last — unlike linear growth where the 10th retrieval adds the same as the 1st.
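
The decay pipeline above reduces to a few lines. This is a direct transcription of the formulas with the stated constants (function names are ours):

```python
import math

C, P = 0.5, 1.5   # stability growth constants c, p
H = 10.0          # half-life in days at baseline stability
FLOOR = 0.05      # strength floor

def stability(n_spaced: int) -> float:
    """Power-law stability from spaced retrievals: S = 1 + c * n^p."""
    return 1.0 + C * n_spaced ** P

def retention(dt_days: float, n_spaced: int) -> float:
    """Hyperbolic retention: R = 1 / (1 + (dt / H) / S)."""
    return 1.0 / (1.0 + (dt_days / H) / stability(n_spaced))

def decayed_strength(s0: float, dt_days: float, n_spaced: int) -> float:
    """Idempotent decay from the strength at last access, floored at 0.05."""
    return max(FLOOR, s0 * retention(dt_days, n_spaced))

# With no spaced retrievals, strength halves after exactly H = 10 days:
# retention(10.0, 0) == 1 / (1 + 1) == 0.5
```

Note the idempotence: decay is always recomputed from $s_m^{(0)}$ and the elapsed time, so running the pass twice produces the same result as running it once.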

**2. Spacing-aware retrieval reinforcement** — boost scales with time since last access and diminishes near the ceiling:

$$\alpha(m) = \alpha_{\max} \cdot \underbrace{\left(1 - \frac{s_m}{s_{\max}}\right)}_{\text{diminishing at ceiling}} \cdot \underbrace{\left(1 - e^{-\Delta t_m \,/\, \tau}\right)}_{\text{spacing effect}}$$

$$s_m \leftarrow \min\!\Big(s_{\max},\;\; s_m + \alpha(m)\Big)$$

where $\alpha_{\max} = 0.15$, $s_{\max} = 2.0$, $\tau = 24\text{h}$.

| Scenario | Flat boost (old) | Spacing-aware (new) |
|----------|-----------------|-------------------|
| Recalled 30 sec ago, $s=1.0$ | $+5\% = 1.05$ | $\approx +0.00003$ (near zero) |
| Recalled 1 day ago, $s=1.0$ | $+5\% = 1.05$ | $+0.047$ |
| Recalled 7 days ago, $s=1.0$ | $+5\% = 1.05$ | $+0.075$ (near max) |
| Recalled 7 days ago, $s=1.8$ | $+5\% = 1.89$ | $+0.015$ (diminishing) |
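
The scenarios in the table follow directly from plugging the stated constants into the boost formula (a sketch; names are ours):

```python
import math

ALPHA_MAX, S_MAX, TAU_H = 0.15, 2.0, 24.0  # alpha_max, s_max, tau (hours)

def boost(s: float, dt_hours: float) -> float:
    """Spacing-aware reinforcement: shrinks near the strength ceiling
    and for short gaps since the last access."""
    return ALPHA_MAX * (1 - s / S_MAX) * (1 - math.exp(-dt_hours / TAU_H))

def reinforce(s: float, dt_hours: float) -> float:
    return min(S_MAX, s + boost(s, dt_hours))

# boost(1.0, 24.0)  ~ 0.047  (1-day gap)
# boost(1.0, 168.0) ~ 0.075  (7-day gap, near the effective max at s = 1.0)
# boost(1.8, 168.0) ~ 0.015  (diminishing near the ceiling)
```

The two factors are independent, so a memory must be both below the ceiling *and* recalled after a real gap to earn a meaningful boost.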

**3. Evidence-weighted contradiction** (Bayesian likelihood-ratio penalty) — penalty modulated by relative strength of new evidence vs old belief:

$$\beta(s_{\text{old}},\, c_{\text{new}}) = \beta_{\min} + (\beta_{\max} - \beta_{\min}) \cdot e^{-c_{\text{new}} \,/\, s_{\text{old}}}$$

$$s_{\text{old}} \leftarrow \max\!\Big(0.05,\;\; s_{\text{old}} \cdot \beta\Big)$$

where $\beta_{\min} = 0.2$ (harshest), $\beta_{\max} = 0.85$ (mildest), and $c_{\text{new}}$ is derived from the memory's **category** (decisions=1.0, facts=0.9, preferences=0.75, opinions=0.6) rather than LLM self-assessed confidence. Strong beliefs resist weak contradictions; weak beliefs yield to strong evidence. Inspired by Bayesian belief revision (Friston 2010).
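
The asymmetry is easiest to see in code. A minimal sketch of the penalty with the stated constants and category weights (function names are ours):

```python
import math

BETA_MIN, BETA_MAX, FLOOR = 0.2, 0.85, 0.05
EVIDENCE = {"decision": 1.0, "fact": 0.9, "preference": 0.75, "opinion": 0.6}

def contradiction_penalty(s_old: float, category: str) -> float:
    """Evidence-weighted beta: large c_new / s_old ratios push beta toward
    the harsh end (0.2); strong old beliefs stay near the mild end (0.85)."""
    c_new = EVIDENCE[category]
    return BETA_MIN + (BETA_MAX - BETA_MIN) * math.exp(-c_new / s_old)

def apply_contradiction(s_old: float, category: str) -> float:
    return max(FLOOR, s_old * contradiction_penalty(s_old, category))

# A strong belief (s = 1.8) contradicted by a mere opinion keeps most of its
# strength; a weak belief (s = 0.3) contradicted by a decision is cut hard.
```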

**4. Retrieval-induced forgetting** (Anderson, Bjork & Bjork 1994) — when a memory is retrieved, similar-but-non-retrieved competitors are mildly suppressed (3%), sharpening recall over time:

$$\forall\, m_j \notin \text{retrieved}: \quad \text{if } \text{sim}(\mathbf{e}_{m_j}, \mathbf{q}) > 0.7, \quad s_{m_j} \leftarrow s_{m_j} \cdot 0.97$$

Schemas are exempt from suppression. Confirmed across 60+ studies by Murayama et al. (2014).
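
A sketch of the suppression pass, assuming memories carry precomputed query similarities (the dict layout and function name are ours):

```python
def suppress_competitors(memories, retrieved_ids, query_sims,
                         threshold=0.7, factor=0.97):
    """Retrieval-induced forgetting: non-retrieved memories similar to the
    query lose 3% strength; schema-layer memories are exempt."""
    for m in memories:
        if m["id"] in retrieved_ids or m["layer"] == "schema":
            continue
        if query_sims[m["id"]] > threshold:
            m["strength"] *= factor

mems = [
    {"id": 1, "layer": "episodic", "strength": 1.0},  # retrieved: untouched
    {"id": 2, "layer": "episodic", "strength": 1.0},  # competitor: suppressed
    {"id": 3, "layer": "schema",   "strength": 1.0},  # schema: exempt
]
suppress_competitors(mems, retrieved_ids={1},
                     query_sims={1: 0.9, 2: 0.8, 3: 0.8})
```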

Memories whose strength falls below the visibility threshold $\theta = 0.1$ become invisible to the AI but remain in the database for the user to inspect.

### Consolidation ("Sleep Replay")

A scheduled background daemon runs every $T = 6$ hours:

1. **Clustering**: DBSCAN on cosine distance matrix $D_{ij} = 1 - \cos(\mathbf{e}_i, \mathbf{e}_j)$ with $\varepsilon = 0.35$, `min_samples` $= 3$
2. **Distillation**: Each cluster $C_k$ with $|C_k| \geq 3$ is distilled by Claude Haiku into one semantic memory
3. **Centrality-weighted decay**: Source episodics fade based on distance from cluster centroid — $\gamma_i = 0.5 + 0.4 \cdot (1 - \text{sim}(\mathbf{e}_i, \bar{\mathbf{e}}_{C_k}))$ — central memories fade more, peripheral ones retain unique details
4. **Schema synthesis**: Re-cluster semantics ($\varepsilon = 0.45$), synthesize behavioral patterns from clusters of $\geq 3$
5. **Idempotent decay pass**: the hyperbolic retention curve is applied to all memories not accessed in 7+ days
6. **Priority snapshots**: Compares current priorities with 30-day-old snapshot, classifies as `deliberate_pivot` | `gradual_drift` | `stable`
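
Step 3's centrality-weighted decay is a one-liner over the cluster's embedding matrix. A numpy sketch (the function name is ours, not the repo's):

```python
import numpy as np

def centrality_weighted_decay(embs: np.ndarray,
                              strengths: np.ndarray) -> np.ndarray:
    """After a cluster is distilled into a semantic memory, fade its source
    episodics: gamma_i = 0.5 + 0.4 * (1 - sim(e_i, centroid)). Memories
    closest to the centroid (most redundant with the distillation) fade the
    most (gamma -> 0.5); peripheral ones keep more of their strength."""
    centroid = embs.mean(axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    unit = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sims = unit @ centroid                 # cosine similarity to centroid
    gamma = 0.5 + 0.4 * (1.0 - sims)
    return strengths * gamma
```

So decay is bounded in $[0.5, 0.9]$ per pass: even a perfectly central episodic keeps half its strength after distillation.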

### Memory Reconsolidation (Lability Windows)

Based on Nader, Schafe & LeDoux (2000): when memory $m$ is retrieved at time $t_r$, it enters a **labile state** for $W = 6$ hours:

$$m.\text{labile} \leftarrow \text{true}, \quad m.t_{\text{recon}} \leftarrow t_r + W$$

If re-retrieved while already labile, the window **extends**: $m.t_{\text{recon}} \leftarrow \max(m.t_{\text{recon}},\; t_r + W)$. During this window, new conversation context can **passively refine** the memory content without requiring an explicit contradiction. The labile set is capped at $|\mathcal{L}| \leq 10$.
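
The window bookkeeping can be sketched as follows (dict layout and the skip-at-cap eviction policy are our assumptions, not the repo's documented behavior):

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=6)   # W: lability window
MAX_LABILE = 10               # cap on |L|

def mark_labile(memory: dict, labile_ids: set, now: datetime) -> None:
    """On retrieval, open a 6h lability window; re-retrieval while labile
    extends the deadline rather than resetting it."""
    deadline = now + WINDOW
    if memory["id"] in labile_ids:
        memory["recon_deadline"] = max(memory["recon_deadline"], deadline)
    elif len(labile_ids) < MAX_LABILE:
        memory["recon_deadline"] = deadline
        labile_ids.add(memory["id"])

# Retrieval opens the window; a second retrieval 3h later extends it to t0+9h:
m = {"id": 1}
labile = set()
t0 = datetime(2026, 3, 15, 9, 0)
mark_labile(m, labile, t0)
mark_labile(m, labile, t0 + timedelta(hours=3))
```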

This implements a dual belief-update architecture:

| Pathway | Trigger | Behavior |
|---------|---------|----------|
| **Reconsolidation** | Memory retrieved | Passive refinement: "I prefer async" $\rightarrow$ "I prefer async, except for urgent issues" |
| **Contradiction detection** | Explicit conflict | Evidence-weighted $\beta$ penalty + `superseded_by` link |

Reconsolidation catches **gradual belief drift** that hard contradiction detection would miss. No other production agent memory system implements retrieval-triggered lability windows.

### Contradiction Detection + Decision Ledger

**Two-pass detection:**

- **Real-time** (during extraction): Every new decision or preference is checked against existing memories. Claude Haiku identifies semantic conflicts. Old memory receives evidence-weighted strength penalty.
- **Offline** (during consolidation): Full audit across the memory store for subtle contradictions missed in real-time.

**Decision ledger** — decisions are first-class objects with:

- `decision_text` + `reasoning` + `domain` + `outcome`
- Explicit supersession chains: when a decision is reversed, the old one links to the new one
- The agent can query the ledger by topic or domain to surface prior decisions
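
A minimal sketch of a ledger entry and its supersession chain (field names follow the bullet list above; the helper functions are ours):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    decision_text: str
    reasoning: str
    domain: str
    outcome: str = ""
    superseded_by: Optional["Decision"] = None  # supersession chain link

def supersede(old: Decision, new: Decision) -> None:
    """Reversing a decision links the old record to its replacement."""
    old.superseded_by = new

def current(d: Decision) -> Decision:
    """Follow the supersession chain to the decision now in force."""
    while d.superseded_by is not None:
        d = d.superseded_by
    return d
```

Because old decisions are linked rather than deleted, querying the ledger by domain surfaces both the current decision and the reasoning behind every reversal that led to it.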

[truncated…]

PUBLIC HISTORY

First discovered: Mar 28, 2026

IDENTITY

Identity inferred from code signals. No PROVENANCE.yml found.

METADATA

platform: github
first seen: Mar 26, 2026
last updated: Mar 27, 2026
