Modern large language models are often trapped in a "stateless" loop, bounded by a sliding context window.
MnemonicStream is a research initiative dedicated to breaking this cycle by developing a persistent,
high-fidelity memory primitive for agentic AI. We believe that for an agent to be truly autonomous, it must possess a
memory that mimics the human ability to form associative links, prioritize relevance, and evolve over time.
Core Research Pillars
- Associative Retrieval Engine: Moving beyond simple RAG to a graph-based associative memory that links disparate interactions through semantic and logical "hooks."
- Cognitive Tiering: A multi-layered storage architecture that separates immediate "Working Memory" from consolidated "Deep Knowledge" to optimize inference speed and reasoning depth.
- Active Consolidation & Dreaming: "Offline" processes in which the agent periodically reviews, compresses, and synthesizes raw interaction logs into high-level insights, pruning the noise to retain the signal.
- Temporal Persistence: Ensuring that an agent's identity and learned preferences remain stable across thousands of sessions, enabling long-term collaboration.
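The first three pillars can be sketched together as a small in-memory prototype. The class and method names below (`TieredMemory`, `remember`, `recall`, `consolidate`) are illustrative assumptions, not a released MnemonicStream API: items live in a "working" tier, carry associative links to other items, and a consolidation pass periodically demotes low-scoring items to a "deep" tier.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    key: str
    content: str
    score: float = 0.0                       # relevance score used during consolidation
    links: set = field(default_factory=set)  # associative "hooks" to other keys

class TieredMemory:
    """Toy sketch: a 'working' tier for recent items, a 'deep' tier for consolidated ones."""

    def __init__(self, working_capacity=4):
        self.working = {}  # immediate Working Memory
        self.deep = {}     # consolidated Deep Knowledge
        self.working_capacity = working_capacity

    def remember(self, key, content, links=()):
        item = MemoryItem(key, content, score=1.0, links=set(links))
        self.working[key] = item
        # mirror links bidirectionally so retrieval can walk the graph either way
        for other in links:
            target = self.working.get(other) or self.deep.get(other)
            if target:
                target.links.add(key)
        if len(self.working) > self.working_capacity:
            self.consolidate()

    def consolidate(self):
        # "dreaming" pass: demote the lowest-scoring working items into deep storage
        items = sorted(self.working.values(), key=lambda m: m.score)
        for item in items[: len(items) - self.working_capacity // 2]:
            self.deep[item.key] = self.working.pop(item.key)

    def recall(self, key, hops=1):
        """Return the item plus associatively linked neighbours up to `hops` away."""
        seen, frontier = {}, {key}
        for _ in range(hops + 1):
            next_frontier = set()
            for k in frontier:
                item = self.working.get(k) or self.deep.get(k)
                if item and k not in seen:
                    seen[k] = item
                    next_frontier |= item.links
            frontier = next_frontier - set(seen)
        return list(seen.values())
```

A `recall` with `hops=2` walks two link-steps out from the queried key, so an item can be reached through an intermediate association even after it has been consolidated into the deep tier.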
Practical Application: Adaptive Text-to-SQL Systems
A primary application of MnemonicStream is optimizing agentic Text-to-SQL interactions. In enterprise environments, database schemas are often too large to fit into a single context window. Our system enables the agent to learn from every user interaction:
- Feedback Loops: The agent "remembers" when a user corrected a join condition or preferred specific business logic (e.g., "Profit means Net Revenue after Tax").
- Schema Evolution: Instead of re-scanning a 1,000-table database every time, the agent maintains an associative map of which tables were successful for specific types of queries.
- Persistent Context: The memory allows for multi-turn data exploration where the agent maintains state across weeks of analysis.
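A minimal sketch of these feedback loops follows. Everything here (`SQLAgentMemory`, its methods, and the example terms and table names) is hypothetical, chosen only to illustrate the two stores the bullets describe: a glossary of user-corrected business terms, and a topic-to-table map built from queries that succeeded.

```python
from collections import defaultdict

class SQLAgentMemory:
    """Illustrative feedback-loop store for an adaptive Text-to-SQL agent."""

    def __init__(self):
        self.glossary = {}  # business term -> definition learned from user corrections
        # topic -> table -> number of accepted queries that used it
        self.table_hits = defaultdict(lambda: defaultdict(int))

    def record_correction(self, term, definition):
        # a user clarified business logic ("Profit means Net Revenue after Tax")
        self.glossary[term.lower()] = definition

    def record_success(self, topic, tables):
        # the generated query ran and the user accepted the result
        for table in tables:
            self.table_hits[topic][table] += 1

    def candidate_tables(self, topic, k=3):
        # skip the full schema scan: rank tables that worked for this topic before
        ranked = sorted(self.table_hits[topic].items(), key=lambda kv: -kv[1])
        return [table for table, _ in ranked[:k]]

    def expand_terms(self, question):
        # annotate known business terms before prompting the SQL generator
        out = question
        for term, definition in self.glossary.items():
            out = out.replace(term, f"{term} ({definition})")
        return out
```

On the next question about the same topic, `candidate_tables` narrows the schema context to a handful of previously successful tables, and `expand_terms` injects the learned definitions, so corrections made weeks earlier still shape today's SQL.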