Quickstart

Standalone SDK

Use the SDK directly in any Python project. Persistent memory, directives, dream cycles, and health monitoring.

pip install wisdom-layer
Integration

LangGraph

Drop-in nodes for LangGraph StateGraph pipelines. Add persistent, compounding memory to any agent graph.

pip install "wisdom-layer[langgraph]"
Tool

MCP Server

Expose wisdom to Claude Code, Cursor, or any MCP-compatible tool. Memory, directives, dreams via stdio.

pip install "wisdom-layer[mcp]"
Integration

LangChain

WisdomStore for LangGraph persistence or legacy BaseMemory adapter for existing chains.

pip install "wisdom-layer[langchain]"

Quickstart — Standalone SDK

1. Install

pip install "wisdom-layer[ollama]"    # local LLM (free)
# or: pip install "wisdom-layer[anthropic]" and set ANTHROPIC_API_KEY

2. Create an Agent

from wisdom_layer import WisdomAgent, AgentConfig
from wisdom_layer.storage.sqlite import SQLiteBackend
from wisdom_layer.llm.ollama import OllamaAdapter

llm = OllamaAdapter(model="llama3.2")
backend = SQLiteBackend("agent.db", embed_fn=llm.embed)
config = AgentConfig.for_dev(name="My Agent", role="Assistant")

agent = WisdomAgent(
    agent_id="my-agent",
    config=config,
    llm=llm,
    backend=backend,
)
await agent.initialize()  # run inside an async function (e.g. via asyncio.run)

3. Capture & Recall

# Store a memory
await agent.memory.capture(
    event_type="observation",
    content={"text": "User prefers concise answers"},
)

# Search memories
results = await agent.memory.search("user preferences", limit=5)

4. Trigger a Dream Cycle

# Reflect: consolidate memories, evolve directives, audit coherence
report = await agent.dreams.trigger()

# Check health
health = await agent.health()
print(f"Wisdom score: {health.wisdom_score}")
print(f"Status: {health.classification}")

5. Visualize

# Open the dashboard in a browser
pip install "wisdom-layer[dashboard]"
wisdom-layer-dashboard --db agent.db

LangGraph Integration

Add persistent, compounding memory to any LangGraph agent. Your agent remembers past conversations, learns behavioral directives, and improves over time.

Install

pip install "wisdom-layer[langgraph,ollama]"
ollama pull llama3.2

Quickstart — 3-Node Graph

from typing import Any, TypedDict

from langgraph.graph import END, START, StateGraph
from wisdom_layer import WisdomAgent, AgentConfig
from wisdom_layer.storage.sqlite import SQLiteBackend
from wisdom_layer.llm.ollama import OllamaAdapter
from wisdom_layer.integration.langgraph import (
    WisdomCaptureNode,
    WisdomRecallNode,
)

# State definition
class AgentState(TypedDict):
    messages: list[dict[str, str]]
    wisdom_context: list[dict[str, Any]]

# Set up agent
llm = OllamaAdapter(model="llama3.2")
backend = SQLiteBackend("my_agent.db", embed_fn=llm.embed)
config = AgentConfig.for_dev(name="My Agent", role="Helpful assistant")
agent = WisdomAgent(agent_id="my-agent", config=config, llm=llm, backend=backend)
await agent.initialize()

# Your LLM node uses wisdom context
async def call_llm(state):
    wisdom = state.get("wisdom_context", [])
    context = "\n".join(f"- {m['content']}" for m in wisdom)
    system = f"You are a helpful assistant.\n\nRelevant memories:\n{context}"
    user_msg = state["messages"][-1]["content"]
    response = await llm.generate(
        messages=[{"role": "user", "content": user_msg}],
        system=system,
    )
    return {"messages": [*state["messages"], {"role": "assistant", "content": response}]}

# Build the graph
graph = StateGraph(AgentState)
graph.add_node("recall", WisdomRecallNode(agent))
graph.add_node("llm", call_llm)
graph.add_node("capture", WisdomCaptureNode(agent))
graph.add_edge(START, "recall")
graph.add_edge("recall", "llm")
graph.add_edge("llm", "capture")
graph.add_edge("capture", END)

app = graph.compile()
result = await app.ainvoke({
    "messages": [{"role": "user", "content": "Hello!"}],
    "wisdom_context": [],
})

Available Nodes

Node                  Parameters                                    Description
WisdomRecallNode      agent, limit=5, context_key, message_key      Searches memory, writes results to state
WisdomCaptureNode     agent, event_type="interaction", message_key  Captures last exchange as a memory
WisdomDreamNode       agent, result_key="dream_result"              Triggers a reflection cycle
WisdomDirectivesNode  agent, context_key="wisdom_directives"        Retrieves active directives for prompt injection

WisdomStore (Cross-Thread Persistence)

from wisdom_layer.integration.langchain import WisdomStore

store = WisdomStore(agent)
app = graph.compile(store=store)

This makes wisdom memory available across threads via LangGraph's store parameter injection.

MCP Server

Expose your agent's capabilities to Claude Code, Cursor, Windsurf, or any MCP-compatible AI tool via the standard Model Context Protocol.

Install & Start

pip install "wisdom-layer[mcp,anthropic]"
export ANTHROPIC_API_KEY=sk-ant-...
wisdom-layer-mcp --db wisdom.db --agent-id my-agent

Configure Claude Code

Add to .claude/settings.local.json:

{
  "mcpServers": {
    "wisdom-layer": {
      "command": "wisdom-layer-mcp",
      "args": ["--db", "/path/to/wisdom.db", "--agent-id", "my-agent"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-..."
      }
    }
  }
}

Configure Cursor

Add to .cursor/mcp.json:

{
  "mcpServers": {
    "wisdom-layer": {
      "command": "wisdom-layer-mcp",
      "args": ["--db", "/path/to/wisdom.db", "--agent-id", "my-agent"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-..."
      }
    }
  }
}

Available Tools

Tool                  Description
wisdom_capture        Store a memory (observation, interaction, feedback)
wisdom_recall         Semantic search across all memory tiers
wisdom_health         Get the agent's cognitive health report
wisdom_directives     List active behavioral directives
wisdom_add_directive  Add a new behavioral rule
wisdom_dream          Trigger a reflection cycle
wisdom_provenance     Trace the origin/history of any entity

Available Resources

URI                  Description
wisdom://config      Agent configuration (name, role, tier)
wisdom://directives  Active directives as structured data
wisdom://health      Current health report snapshot

CLI Options

Flag         Default    Description
--db         wisdom.db  SQLite database path
--agent-id   mcp-agent  Agent ID
--transport  stdio      stdio | sse | streamable-http
--log-level  WARNING    DEBUG | INFO | WARNING | ERROR

LLM Auto-Detection

The CLI auto-detects which LLM provider to use from environment variables.
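The exact detection order isn't listed here, but the pattern can be sketched as follows. This assumes ANTHROPIC_API_KEY selects the Anthropic adapter and Ollama is the keyless local fallback (both appear in the install steps above); the real CLI may check additional providers and variables:

```python
import os

def detect_llm_provider(env=None):
    # Illustrative detection only; the real order is defined by the CLI.
    if env is None:
        env = dict(os.environ)
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    # Local fallback: Ollama needs no API key.
    return "ollama"

provider = detect_llm_provider({"ANTHROPIC_API_KEY": "sk-ant-..."})
fallback = detect_llm_provider({})
```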

LangChain Integration

Two adapters for different LangChain generations.

WisdomStore (Recommended)

For LangGraph's cross-thread persistence. This is the modern, recommended approach.

from wisdom_layer.integration.langchain import WisdomStore

store = WisdomStore(agent)

# Use with graph.compile for cross-thread memory
app = graph.compile(store=store)

# Or use directly
results = await store.asearch(("user", "123"), query="Python")
await store.aput(("user", "123"), "key1", {"content": "learned preference"})
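As a rough illustration of the namespace/key access pattern above, here is a toy in-memory stand-in (not the real WisdomStore) that implements aput and asearch with naive substring matching instead of semantic search:

```python
import asyncio

class ToyStore:
    """Stand-in for a BaseStore-style backend; not the real WisdomStore."""

    def __init__(self):
        # Items keyed by (namespace tuple, key string).
        self._items = {}

    async def aput(self, namespace, key, value):
        self._items[(namespace, key)] = value

    async def asearch(self, namespace, *, query):
        # Naive substring match; the real store does semantic search.
        return [
            value
            for (ns, _key), value in self._items.items()
            if ns == namespace and query.lower() in str(value).lower()
        ]

async def demo():
    store = ToyStore()
    await store.aput(("user", "123"), "key1", {"content": "learned preference: Python"})
    return await store.asearch(("user", "123"), query="python")

hits = asyncio.run(demo())
```

The tuple namespace (here `("user", "123")`) is what lets one store serve many users or threads at once.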

WisdomLayerMemory (Legacy)

For existing LangChain chains that use BaseMemory. Deprecated — migrate to WisdomStore when possible.

from wisdom_layer.integration.langchain import WisdomLayerMemory

memory = WisdomLayerMemory(agent=agent)
result = memory.load_memory_variables({"input": "What do you know about me?"})
memory.save_context(
    {"input": "I prefer Python"},
    {"output": "Noted! I'll remember that."},
)

How It Works

The Wisdom Layer SDK models its cognitive architecture on how humans learn:

Three-Tier Memory

Dream Cycles

An autonomous reflection pipeline, schedulable or manual, that consolidates memories, evolves directives, and audits coherence.

Directives

Self-authored behavioral rules with a lifecycle: provisional → active → permanent. Reinforced through usage, decayed when unused. Every directive has full provenance tracking.
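The lifecycle above can be sketched as a small state machine. The thresholds here are made up for illustration; the SDK's actual promotion and decay rules are internal:

```python
class DirectiveSketch:
    # Toy model of provisional -> active -> permanent.
    ACTIVE_AT = 3      # illustrative threshold, not the SDK's
    PERMANENT_AT = 10  # illustrative threshold, not the SDK's

    def __init__(self, rule):
        self.rule = rule
        self.status = "provisional"
        self.reinforcements = 0

    def reinforce(self):
        # Usage promotes a directive toward permanence.
        self.reinforcements += 1
        if self.reinforcements >= self.PERMANENT_AT:
            self.status = "permanent"
        elif self.reinforcements >= self.ACTIVE_AT:
            self.status = "active"

    def decay(self):
        # Disuse demotes a directive; permanent ones no longer decay.
        if self.status == "permanent":
            return
        self.reinforcements = max(0, self.reinforcements - 1)
        if self.reinforcements == 0:
            self.status = "provisional"

d = DirectiveSketch("Prefer concise answers")
for _ in range(3):
    d.reinforce()
status_after_use = d.status
```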

Health Monitoring

A composite wisdom score (0–1) with five components: memory diversity, directive coherence, reflection frequency, learning velocity, and knowledge depth. Automatic classification: healthy / stagnant / drifting / overloaded.
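A minimal sketch of how such a composite score might be assembled, using equal weights and toy thresholds (the SDK's actual formula is internal, and the real classifier distinguishes drifting and overloaded states, which a single scalar cannot):

```python
# Hypothetical component values on the 0-1 scale.
components = {
    "memory_diversity": 0.8,
    "directive_coherence": 0.7,
    "reflection_frequency": 0.6,
    "learning_velocity": 0.5,
    "knowledge_depth": 0.9,
}

# Equal weighting of the five components into one 0-1 score.
wisdom_score = sum(components.values()) / len(components)

def classify(score):
    # Toy two-way split for illustration only.
    if score >= 0.7:
        return "healthy"
    return "stagnant"

label = classify(wisdom_score)
```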

Dashboard

Visualize your agent's cognitive architecture in a browser.

pip install "wisdom-layer[dashboard]"
wisdom-layer-dashboard --db agent.db

Five screens: Health (wisdom score, trajectory), Directives (lifecycle, provenance), Memory (search, tier distribution), Dreams (cycle history, scheduling), and Configuration (feature flags, personality, resource limits).

Programmatic

from wisdom_layer.dashboard import mount_dashboard

app = mount_dashboard(agent)
# Serve with uvicorn on any port