Tutorial

Shodh Memory Python SDK: Complete Tutorial with Examples

December 13, 2025 · 8 min read · By Shodh Team · Developer Experience
python · SDK · tutorial · shodh-memory · AI-memory

The Shodh Memory Python SDK gives your applications persistent, semantic memory in just a few lines of code. This tutorial covers everything from installation to advanced patterns.

Installation

Install from PyPI:

Terminal
pip install shodh-memory

That's it. No external services, no API keys required for local use.

Quick Start

Here's the minimal example to get started:

quickstart.py
from shodh_memory import Memory

# Initialize (creates local storage in ./shodh_data by default)
m = Memory()

# Store a memory
m.remember("User prefers dark mode in all applications",
           memory_type="Decision",
           tags=["preferences", "ui"])

# Recall memories by semantic search
results = m.recall("What are the user's UI preferences?")
for r in results:
    print(f"[{r['experience_type']}] {r['content']} (importance: {r.get('importance', 0):.2f})")

Memory Types

Shodh Memory supports different memory types, each with different importance weights during retrieval:

Type         Weight Boost   Use Case
Decision     +30%           Choices, preferences, architectural decisions
Learning     +25%           New knowledge, discoveries
Error        +25%           Mistakes to avoid, bugs encountered
Pattern      +20%           Recurring behaviors, habits
Task         +15%           Work items, todos
Context      +10%           General information, session context
Observation  +0%            Default type for general notes
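To make the boost column concrete, here is an illustrative sketch of how a type-based boost could re-rank a base similarity score. The `TYPE_BOOSTS` table and `boosted_score` function are hypothetical names for illustration only, not the SDK's actual implementation.

```python
# Illustrative only: applying a memory type's weight boost to a base
# similarity score during retrieval. Not the SDK's internal code.
TYPE_BOOSTS = {
    "Decision": 0.30, "Learning": 0.25, "Error": 0.25,
    "Pattern": 0.20, "Task": 0.15, "Context": 0.10, "Observation": 0.0,
}

def boosted_score(base_similarity: float, memory_type: str) -> float:
    """Scale a base similarity score by the type's weight boost."""
    return base_similarity * (1 + TYPE_BOOSTS.get(memory_type, 0.0))
```

The practical takeaway: two memories with identical semantic similarity rank differently, so a Decision outranks an Observation with the same match strength.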

Retrieval Modes

The SDK supports three retrieval modes:

retrieval_modes.py
# Semantic mode: Pure vector similarity search
results = m.recall("authentication", mode="semantic", limit=5)

# Associative mode: Graph-based traversal
# Follows learned connections between memories
results = m.recall("authentication", mode="associative", limit=5)

# Hybrid mode (default): Combines both
# Uses density-dependent weighting
results = m.recall("authentication", mode="hybrid", limit=5)

Context Summary

Get a structured summary of memories - perfect for bootstrapping LLM context at the start of a session:

context_summary.py
summary = m.context_summary(max_items=5)

print("Decisions:", summary["decisions"])
print("Learnings:", summary["learnings"])
print("Errors to avoid:", summary.get("errors", []))
print("Patterns:", summary.get("patterns", []))

# Use in LLM prompt
system_prompt = f"""You are helping a user. Here's what you know:

Decisions: {summary["decisions"]}
Learnings: {summary["learnings"]}
Errors to avoid: {summary.get("errors", [])}
"""

Tag-Based Retrieval

Search memories by tags for precise filtering:

tags.py
# Store with tags
m.remember("PostgreSQL connection pool size should be 20",
           memory_type="Learning",
           tags=["database", "postgres", "performance"])

# Retrieve by tags
db_memories = m.recall_by_tags(["database", "postgres"])

# Delete by tags
m.forget_by_tags(["temporary", "scratch"])

Date-Based Operations

Query and manage memories by date range:

date_operations.py
from datetime import datetime, timedelta

# Get memories from the last 7 days
week_ago = (datetime.now() - timedelta(days=7)).isoformat()
now = datetime.now().isoformat()

recent = m.recall_by_date(start=week_ago, end=now)

# Clean up old memories
month_ago = (datetime.now() - timedelta(days=30)).isoformat()
m.forget_by_date(start="2024-01-01", end=month_ago)

Memory Statistics

Monitor your memory system:

stats.py
stats = m.memory_stats()

print(f"Total memories: {stats.total_memories}")
print(f"Graph edges: {stats.graph_edges}")
print(f"Storage size: {stats.storage_size_bytes / 1024:.1f} KB")

Integration with LangChain

Use Shodh Memory as a LangChain memory backend:

langchain_integration.py
from typing import Any

from shodh_memory import Memory
from langchain.memory import ConversationBufferMemory

class ShodhLangChainMemory(ConversationBufferMemory):
    # Declared as a field so the pydantic-based LangChain class
    # allows the attribute to be assigned in __init__.
    shodh: Any = None

    def __init__(self, shodh_memory: Memory, **kwargs):
        super().__init__(**kwargs)
        self.shodh = shodh_memory

    def save_context(self, inputs, outputs):
        # Store the exchange in Shodh for cross-session persistence
        self.shodh.remember(
            f"User: {inputs['input']}\nAssistant: {outputs['output']}",
            memory_type="Conversation",
        )
        super().save_context(inputs, outputs)

    def load_memory_variables(self, inputs):
        # Retrieve relevant long-term context from Shodh
        if "input" in inputs:
            relevant = self.shodh.recall(inputs["input"], limit=3)
            # Add to context...
        return super().load_memory_variables(inputs)

Custom Storage Path

Specify where memories are stored:

custom_path.py
# Project-specific memory
project_memory = Memory(storage_path="./my_project/.shodh")

# User-specific memory (wrap with os.path.expanduser if the SDK
# doesn't expand "~" itself)
user_memory = Memory(storage_path="~/.shodh/personal")

# Throwaway memory in a temporary directory (remove it yourself when done;
# tempfile.mkdtemp does not auto-delete)
import tempfile
temp_memory = Memory(storage_path=tempfile.mkdtemp())

Best Practices

  • Be specific with types — Use Decision for choices, Error for mistakes. This improves retrieval relevance.
  • Tag liberally — Tags enable precise filtering. Use project names, topics, technologies.
  • Use context_summary for LLMs — At session start, bootstrap with recent decisions and learnings.
  • Clean up periodically — Use forget_by_date to remove old, irrelevant memories.
  • One Memory instance per user/project — Don't share instances across different contexts.
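The last point can be enforced with a small registry that caches one instance per storage path. `MemoryRegistry` is a hypothetical helper sketched for this tutorial; in real use you would pass a factory like `lambda path: Memory(storage_path=path)`.

```python
class MemoryRegistry:
    """Cache one memory instance per storage path (illustrative sketch).

    The factory is injected so each project or user gets exactly one
    instance, created lazily on first access.
    """

    def __init__(self, factory):
        self._factory = factory  # e.g. lambda path: Memory(storage_path=path)
        self._instances = {}

    def get(self, path: str):
        if path not in self._instances:
            self._instances[path] = self._factory(path)
        return self._instances[path]
```

Repeated calls with the same path return the same instance, so different parts of an application never accidentally create competing stores over one directory.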

Next Steps

Blog | Shodh | Shodh RAG