Proactive Context: Memory That Surfaces When You Need It

December 31, 2025 · 9 min read · By Shodh Team · AI Research
proactive-context · spreading-activation · memory-retrieval · cognitive-science · MCP

There's a fundamental difference between searching for something and having it come to mind.

When you see a friend, you don't query your brain for "memories tagged with John, sorted by recency." Relevant memories just surface—shared experiences, inside jokes, recent conversations. This happens automatically, without conscious effort.

Most AI memory systems work like databases: you query, they return results. Shodh Memory works differently. The proactive_context tool surfaces relevant memories automatically based on the current conversation.

Search vs. Recall

Traditional Memory Search

# You explicitly search
results = memory.recall("authentication patterns")

# You get back what matches your query
# But you had to know what to ask for

This works when you know what you need. But often, you don't. The most valuable context is the thing you forgot to ask about.

Proactive Context

# You provide the current conversation context
memories = memory.proactive_context("I'm implementing the login page")

# The system surfaces what's relevant:
# - "We decided to use JWT with refresh tokens" (Decision)
# - "Last login implementation had a race condition with state updates" (Error)
# - "User prefers shadcn/ui for auth forms" (Preference)
# - "OAuth integration is planned for v2" (Context)

You didn't ask for these. They're relevant, so they appear.

How It Works

Proactive context uses three signals to determine relevance:

1. Entity Matching (40% weight)

The system extracts entities from your current context and finds memories mentioning the same entities.

Context: "Working on the Stripe payment integration"

Entities extracted: ["Stripe", "payment", "integration"]

Memories mentioning these entities get boosted:
- "Stripe webhooks require raw body parsing" ✓
- "Payment flow needs idempotency keys" ✓
- "Integration tests run on CI" (different "integration") △

2. Semantic Similarity (40% weight)

The system embeds your current context and compares it against stored memory embeddings. Semantically similar memories score higher even when they share no exact keywords.

3. Recency Boost (20% weight)

Recent memories get a boost. Yesterday's decision is more relevant than last year's.
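
Putting the three signals together, relevance can be thought of as a weighted blend. A minimal sketch, assuming each signal is normalized to [0, 1] and recency follows an exponential decay; the weights match the percentages above, but the half-life and decay curve are assumptions, not Shodh's documented internals.

import time

WEIGHTS = {"entity": 0.4, "semantic": 0.4, "recency": 0.2}
HALF_LIFE_DAYS = 30  # assumed: a month-old memory scores 0.5 on recency

def recency_score(created_at):
    # Exponential decay from 1.0, halving every HALF_LIFE_DAYS
    age_days = (time.time() - created_at) / 86_400
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def relevance(entity_score, semantic_score, created_at):
    # Weighted sum of the three signals described above
    return (
        WEIGHTS["entity"] * entity_score
        + WEIGHTS["semantic"] * semantic_score
        + WEIGHTS["recency"] * recency_score(created_at)
    )

In a sketch like this, memories above the semantic threshold would be ranked by the blended score and the top results returned.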

Why This Matters

1. You Can't Search for What You Forgot

The biggest problem with explicit search is that you need to know what to ask. But the most valuable context is often something you've forgotten.

# You're about to implement caching
# You forgot that 3 months ago, you decided against Redis

Proactive context for "adding cache layer":
→ "Decision: Use in-memory caching initially, Redis for v2"

# The decision surfaces automatically
# You don't repeat past mistakes

2. Context Flows Naturally

In a conversation, relevant memories should appear as the topic shifts—without explicit queries.
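
In code, that might look like a loop that refreshes proactive context on every turn. This is a hypothetical sketch: conversation and respond are stand-ins for your own assistant loop, not part of the library.

# Hypothetical assistant loop: refresh surfaced memories each turn
for user_message in conversation:
    relevant = memory.proactive_context(user_message)
    # Inject surfaced memories into the prompt before responding
    reply = respond(user_message, context=relevant)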

3. Less Cognitive Load

With explicit search, you have to recognize that you need information, formulate a query, and review the results. With proactive context, relevant information appears automatically.

Using Proactive Context

Basic Usage

from shodh_memory import Memory

memory = Memory()

# Provide the current conversation or task
context = memory.proactive_context(
    "I'm debugging the authentication flow. Users report being logged out randomly."
)

# Returns the most relevant memories
for mem in context:
    print(f"[{mem['memory_type']}] {mem['content']}")
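
With the memories from the login example earlier in this post stored, the printed output might look like this (illustrative):

[Decision] We decided to use JWT with refresh tokens
[Error] Last login implementation had a race condition with state updates
[Preference] User prefers shadcn/ui for auth forms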

Configuring Sensitivity

# More strict - only highly relevant memories
context = memory.proactive_context(
    "current task description",
    semantic_threshold=0.75,  # Higher = more strict
    max_results=3
)

# More permissive - cast a wider net
context = memory.proactive_context(
    "current task description",
    semantic_threshold=0.5,  # Lower = more permissive
    max_results=10
)

The Biological Inspiration

Human memory doesn't have a search bar. When you think of your childhood home, associated memories surface: the smell of the kitchen, the creak of the stairs, family dinners. You didn't query for these—they activated through association.

This is called spreading activation in cognitive science. Activating one memory spreads activation to connected memories. The most activated memories surface to consciousness.

proactive_context implements this: your current context activates matching entities and embeddings, activation spreads through the knowledge graph, and the most activated memories surface.
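
Here's a minimal sketch of spreading activation over a toy association graph. The graph, decay factor, and hop count are illustrative assumptions, not Shodh's internals.

from collections import defaultdict

# Toy association graph: memory -> associated memories
GRAPH = {
    "childhood home": ["kitchen smell", "creaky stairs", "family dinners"],
    "family dinners": ["grandma's recipes"],
}

def spread_activation(seeds, decay=0.5, hops=2):
    # Each hop passes a decayed share of activation to neighbors;
    # the most activated memories "surface"
    activation = defaultdict(float, seeds)
    frontier = dict(seeds)
    for _ in range(hops):
        next_frontier = defaultdict(float)
        for node, energy in frontier.items():
            for neighbor in GRAPH.get(node, []):
                next_frontier[neighbor] += energy * decay
        for node, energy in next_frontier.items():
            activation[node] += energy
        frontier = next_frontier
    return sorted(activation.items(), key=lambda kv: -kv[1])

spread_activation({"childhood home": 1.0})
# First-hop associations score 0.5; second-hop memories
# like "grandma's recipes" score 0.25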

It's not a search algorithm. It's a memory system.
