Long-Term Memory
Time made stable — a self that survives sleep, distraction, and decades.
What we mean.
Long-term memory in humans is episodic (where I was), semantic (what I know), procedural (what my body learned), and emotional (what I cannot forget). It is reconstructive, not a recording.
AI memory is, by default, parametric (weights) and contextual (a sliding window). Vector databases now add an external episodic layer.
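That external layer reduces to a few operations: embed each experience as a vector, keep it, and later return the nearest neighbors to a query. A minimal sketch in Python, assuming a hypothetical embed() stand-in for a real embedding model; EpisodicStore is an illustrative name, not any particular library's API.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a real embedding model: hashes
    words into a fixed-size vector so the demo runs offline."""
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class EpisodicStore:
    """In-memory stand-in for a vector database: remember
    experiences, recall the ones nearest to a query."""

    def __init__(self) -> None:
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def remember(self, text: str) -> None:
        self.texts.append(text)
        self.vectors.append(embed(text))

    def recall(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        # Cosine similarity; vectors are already unit-normalized.
        scores = [float(v @ q) for v in self.vectors]
        top = np.argsort(scores)[::-1][:k]
        return [self.texts[i] for i in top]

store = EpisodicStore()
store.remember("Met Ada at the conference in Lisbon")
store.remember("Refactored the retrieval pipeline on Tuesday")
print(store.recall("who did I meet in Lisbon?", k=1))
```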
In the brain
The hippocampus indexes new memories; sleep, especially slow-wave and REM, consolidates them into cortical traces. Each retrieval rewrites the memory: a trace becomes plastic again at every recall, never fixed for good.
Capacity is enormous (estimates: 1–2.5 petabytes effective) but bandwidth is narrow. We forget on purpose, and forgetting is part of intelligence.
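Ebbinghaus (see the timeline below) found that retention falls off roughly exponentially, R = e^(-t/S); a synthetic memory can borrow that curve to forget on purpose. A sketch under assumed numbers: the stability constant, importance weights, and cutoff are all arbitrary.

```python
import math

def retention(age_hours: float, stability: float = 24.0) -> float:
    """Ebbinghaus-style forgetting curve R = exp(-t / S);
    a larger stability S means slower decay."""
    return math.exp(-age_hours / stability)

# Purposeful forgetting: keep only memories whose decayed score,
# boosted by importance, clears an (arbitrary) cutoff.
memories = [
    {"text": "ate lunch",           "age_hours": 72.0, "importance": 0.0},
    {"text": "signed the contract", "age_hours": 72.0, "importance": 0.9},
    {"text": "hallway small talk",  "age_hours": 2.0,  "importance": 0.0},
]

CUTOFF = 0.05  # tuning this is the whole design question
kept = [m for m in memories
        if retention(m["age_hours"]) * (1 + m["importance"]) > CUTOFF]
print([m["text"] for m in kept])  # the contract survives; lunch does not
```

A production system would more likely fold decay into the retrieval score than delete outright; either way, forgetting becomes a scoring function rather than a failure.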
In silicon
Trained weights store compressed knowledge from trillions of tokens. Retrieval-augmented generation (RAG) gives quasi-episodic recall. Context windows now reach 1–2 million tokens.
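RAG's recall step is retrieval stitched in front of generation: fetch relevant episodes, paste them into the prompt, let the model answer from context. A schematic sketch; generate() is a placeholder where a real model call would go, and the prompt format is invented for illustration.

```python
def generate(prompt: str) -> str:
    """Placeholder where a real LLM call would go (hypothetical)."""
    return f"[model answer grounded in {prompt.count('MEMORY:')} memories]"

def rag_answer(question: str, retrieved: list[str]) -> str:
    # Quasi-episodic recall: episodes are pasted into the context
    # window; the model's weights never change.
    context = "\n".join(f"MEMORY: {m}" for m in retrieved)
    prompt = f"{context}\n\nQUESTION: {question}\nANSWER:"
    return generate(prompt)

episodes = ["Met Ada at the conference in Lisbon",
            "Ada recommended the paper on memory consolidation"]
print(rag_answer("Who recommended the paper?", episodes))
```

Everything the model "remembers" here lives in the prompt; close the session and the episode is gone unless the store persists it.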
But weights are frozen after training; updates require fine-tuning. The stability-plasticity dilemma is unsolved: how do you update without catastrophic forgetting?
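One partial answer is rehearsal (experience replay): when fine-tuning on new data, mix in samples of the old data so the old task keeps exerting gradient pressure. A sketch of just the batching logic, framework-free; names and ratios are illustrative.

```python
import random

def replay_batches(new_data: list, old_data: list,
                   batch_size: int = 8, replay_frac: float = 0.25):
    """Yield batches that blend fresh examples with a rehearsal
    sample of old ones, so training on the new task keeps
    revisiting the old task."""
    n_replay = max(1, int(batch_size * replay_frac))
    n_new = batch_size - n_replay
    new_data = list(new_data)        # don't mutate the caller's list
    random.shuffle(new_data)
    for i in range(0, len(new_data), n_new):
        batch = new_data[i:i + n_new] + random.sample(old_data, n_replay)
        random.shuffle(batch)
        yield batch

old = [f"old-{i}" for i in range(100)]
new = [f"new-{i}" for i in range(18)]
for batch in replay_batches(new, old):
    print(batch)  # a real loop would compute loss and step the optimizer here
```

The trade-off is explicit in one parameter: replay_frac is the dial between plasticity (learn the new) and stability (keep the old).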
How we arrived here.
- 1885: Ebbinghaus publishes the forgetting curve
- 1953: Patient H.M.'s hippocampi are removed; he can form no new episodic memories
- 1972: Tulving distinguishes episodic from semantic memory
- 2017: “Attention Is All You Need” introduces the transformer
- 2023: RAG and vector databases go mainstream
- 2024: Gemini 1.5 ships a 1M+ token context window
“We are our memory.”
Where the edge moves next.
Continual learning, neuromorphic memory, brain-computer interfaces with externalized episodic storage: over the next decade, these could dissolve the boundary between biological and digital memory.
Hyper-personalized AI agents will hold richer logs of your life than you do. Memory becomes a stewardship issue.
Where it touches the world.
- Personal AI archivists.
- Prosthetic memory for Alzheimer's patients.
- Knowledge-work co-pilots.
- Cultural preservation at planetary scale.
Why it matters.
If your memories become an AI's, who holds your story?