
Add Persistent Memory to Any AI Agent in 5 Minutes

December 15, 2025 · 5 min read · By Shodh Team · Developer Experience
Tags: quickstart, python, AI-agents, memory, tutorial

Your AI assistant forgets everything the moment you close the chat. Here's how to fix that in 5 minutes with persistent memory—no cloud accounts, no API keys, just code.

What you'll build

An AI agent that remembers user preferences, past conversations, and learned facts across sessions. Everything runs locally on your machine.

Step 1: Install

One command:

Terminal (bash)
pip install shodh-memory

That's it. No Docker, no database setup, no cloud accounts.

Step 2: Initialize Memory

app.py (python)
from shodh_memory import Memory

# Create memory instance (stores in ./shodh_data by default)
memory = Memory()

# Or specify a custom path
memory = Memory(storage_path="./my_agent_memory")

Step 3: Store Memories

app.py (python)
# Remember facts, preferences, decisions
memory.remember(
    "User prefers dark mode and concise responses",
    memory_type="Decision",
    tags=["preferences", "ui"]
)

memory.remember(
    "Project uses Next.js 14 with TypeScript",
    memory_type="Context",
    tags=["tech-stack", "frontend"]
)

memory.remember(
    "API timeout fixed by increasing connection pool to 20",
    memory_type="Learning",
    tags=["debugging", "api"]
)

Step 4: Recall Memories

app.py (python)
# Semantic search - finds relevant memories
results = memory.recall("What's the tech stack?", limit=3)

for r in results:
    print(f"[{r['experience_type']}] {r['content']}")

# Output:
# [Context] Project uses Next.js 14 with TypeScript
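Under the hood, recall ranks your stored memories by relevance to the query. As a mental model only (shodh-memory does real semantic search, not this), here's a toy keyword-overlap ranker that shows the shape of the operation; `toy_recall` is a hypothetical illustration, not part of the library:

```python
# Illustrative only: rank stored texts by word overlap with the query.
# Real semantic recall matches on meaning (embeddings), not exact words.
def toy_recall(memories: list[str], query: str, limit: int = 3) -> list[str]:
    query_words = set(query.lower().split())

    def score(text: str) -> int:
        # Count how many query words appear in this memory
        return len(query_words & set(text.lower().split()))

    ranked = sorted(memories, key=score, reverse=True)
    return [m for m in ranked if score(m) > 0][:limit]

stored = [
    "User prefers dark mode and concise responses",
    "Project uses Next.js 14 with TypeScript",
    "API timeout fixed by increasing connection pool to 20",
]
print(toy_recall(stored, "what does the project use?"))
# ['Project uses Next.js 14 with TypeScript']
```

The key difference: the toy version would miss "What's our frontend stack?" (no shared words), while semantic recall still finds the Next.js memory because the meanings are close.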

Step 5: Use with Your LLM

Here's the pattern for injecting memory into LLM prompts:

agent.py (python)
from shodh_memory import Memory
from openai import OpenAI  # or any LLM client

memory = Memory()
client = OpenAI()

def chat(user_message: str) -> str:
    # 1. Recall relevant memories
    relevant = memory.recall(user_message, limit=5)
    memory_context = "\n".join([m["content"] for m in relevant])

    # 2. Build prompt with memory
    messages = [
        {"role": "system", "content": f"""You are a helpful assistant.

Here's what you remember about this user:
{memory_context}

Use this context to personalize your response."""},
        {"role": "user", "content": user_message}
    ]

    # 3. Get response
    response = client.chat.completions.create(
        model="gpt-4",
        messages=messages
    )

    # 4. Optionally store new learnings
    # memory.remember(f"User asked about: {user_message}", memory_type="Conversation")

    return response.choices[0].message.content

# Now your AI remembers across sessions!
print(chat("What framework are we using?"))
# Uses the stored context about Next.js 14
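If you want to unit-test this pattern without burning API calls, the prompt-assembly step can be factored out on its own. A minimal sketch (`build_messages` is a hypothetical helper for illustration, not part of shodh-memory or the OpenAI SDK):

```python
# Factor prompt assembly out of chat() so it can be tested without an LLM.
def build_messages(memory_context: str, user_message: str) -> list[dict]:
    """Assemble a chat payload with recalled memories in the system prompt."""
    system = (
        "You are a helpful assistant.\n\n"
        "Here's what you remember about this user:\n"
        f"{memory_context}\n\n"
        "Use this context to personalize your response."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]

msgs = build_messages(
    "Project uses Next.js 14 with TypeScript",
    "What framework are we using?",
)
```

Now `chat()` shrinks to recall, `build_messages`, and the API call, and you can assert on the exact prompt your agent sends.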

Bonus: Context Summary

Get a structured summary for bootstrapping sessions:

session_start.py (python)
summary = memory.context_summary(max_items=5)

print("Recent Decisions:", summary.get("decisions", []))
print("Learnings:", summary.get("learnings", []))
print("Errors to avoid:", summary.get("errors", []))

# Perfect for system prompts at session start

That's It!

You now have an AI agent with persistent memory. The memories survive:

  • Application restarts
  • System reboots
  • Anything short of you explicitly deleting them
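If it helps to see why that works, here's the persistence idea in miniature: memories live on disk, so a fresh process reads what an earlier one wrote. (Toy sketch only; `ToyStore` is hypothetical, and shodh-memory's actual storage under `./shodh_data` is far more capable.)

```python
# Mental model: persistence = memories on disk, not in RAM.
import json
import os
import tempfile

class ToyStore:
    def __init__(self, path: str):
        self.path = path

    def remember(self, text: str) -> None:
        items = self._load()
        items.append(text)
        with open(self.path, "w") as f:
            json.dump(items, f)

    def all(self) -> list[str]:
        return self._load()

    def _load(self) -> list[str]:
        if not os.path.exists(self.path):
            return []
        with open(self.path) as f:
            return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "mem.json")
ToyStore(path).remember("User prefers dark mode")  # "session 1"
print(ToyStore(path).all())                        # "session 2" still sees it
# ['User prefers dark mode']
```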
