Get from zero to a learning agent in under 5 minutes. Pick your integration path below.
- **Python SDK**: Use the SDK directly in any Python project. Persistent memory, directives, dream cycles, and health monitoring.
- **LangGraph**: Drop-in nodes for LangGraph StateGraph pipelines. Add persistent, compounding memory to any agent graph.
- **MCP server**: Expose wisdom to Claude Code, Cursor, or any MCP-compatible tool. Memory, directives, and dreams via stdio.
- **LangChain**: WisdomStore for LangGraph persistence, or the legacy BaseMemory adapter for existing chains.
```bash
pip install "wisdom-layer[ollama]"  # local LLM (free)
# or: pip install "wisdom-layer[anthropic]" and set ANTHROPIC_API_KEY
```

```python
from wisdom_layer import WisdomAgent, AgentConfig
from wisdom_layer.storage.sqlite import SQLiteBackend
from wisdom_layer.llm.ollama import OllamaAdapter

llm = OllamaAdapter(model="llama3.2")
backend = SQLiteBackend("agent.db", embed_fn=llm.embed)
config = AgentConfig.for_dev(name="My Agent", role="Assistant")
agent = WisdomAgent(
    agent_id="my-agent",
    config=config,
    llm=llm,
    backend=backend,
)
await agent.initialize()

# Store a memory
await agent.memory.capture(
    event_type="observation",
    content={"text": "User prefers concise answers"},
)

# Search memories
results = await agent.memory.search("user preferences", limit=5)

# Reflect: consolidate memories, evolve directives, audit coherence
report = await agent.dreams.trigger()

# Check health
health = await agent.health()
print(f"Wisdom score: {health.wisdom_score}")
print(f"Status: {health.classification}")
```

Open the dashboard in a browser:
```bash
pip install "wisdom-layer[dashboard]"
wisdom-layer-dashboard --db agent.db
```

Add persistent, compounding memory to any LangGraph agent. Your agent remembers past conversations, learns behavioral directives, and improves over time.
```bash
pip install "wisdom-layer[langgraph,ollama]"
ollama pull llama3.2
```

```python
from typing import Any, TypedDict

from langgraph.graph import END, START, StateGraph

from wisdom_layer import WisdomAgent, AgentConfig
from wisdom_layer.storage.sqlite import SQLiteBackend
from wisdom_layer.llm.ollama import OllamaAdapter
from wisdom_layer.integration.langgraph import (
    WisdomCaptureNode,
    WisdomRecallNode,
)

# State definition
class AgentState(TypedDict):
    messages: list[dict[str, str]]
    wisdom_context: list[dict[str, Any]]

# Set up the agent
llm = OllamaAdapter(model="llama3.2")
backend = SQLiteBackend("my_agent.db", embed_fn=llm.embed)
config = AgentConfig.for_dev(name="My Agent", role="Helpful assistant")
agent = WisdomAgent(agent_id="my-agent", config=config, llm=llm, backend=backend)
await agent.initialize()

# Your LLM node uses wisdom context
async def call_llm(state):
    wisdom = state.get("wisdom_context", [])
    context = "\n".join(f"- {m['content']}" for m in wisdom)
    system = f"You are a helpful assistant.\n\nRelevant memories:\n{context}"
    user_msg = state["messages"][-1]["content"]
    response = await llm.generate(
        messages=[{"role": "user", "content": user_msg}],
        system=system,
    )
    return {"messages": [*state["messages"], {"role": "assistant", "content": response}]}

# Build the graph
graph = StateGraph(AgentState)
graph.add_node("recall", WisdomRecallNode(agent))
graph.add_node("llm", call_llm)
graph.add_node("capture", WisdomCaptureNode(agent))
graph.add_edge(START, "recall")
graph.add_edge("recall", "llm")
graph.add_edge("llm", "capture")
graph.add_edge("capture", END)
app = graph.compile()

result = await app.ainvoke({
    "messages": [{"role": "user", "content": "Hello!"}],
    "wisdom_context": [],
})
```

| Node | Parameters | Description |
|---|---|---|
| WisdomRecallNode | agent, limit=5, context_key, message_key | Searches memory, writes results to state |
| WisdomCaptureNode | agent, event_type="interaction", message_key | Captures last exchange as a memory |
| WisdomDreamNode | agent, result_key="dream_result" | Triggers a reflection cycle |
| WisdomDirectivesNode | agent, context_key="wisdom_directives" | Retrieves active directives for prompt injection |
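The recall node's contract is a plain LangGraph state transform: read the latest user message, write matching memories under the context key. A minimal self-contained stub (plain Python, hypothetical data and naive word matching standing in for `agent.memory.search`) sketches only the input/output shape, not the real implementation:

```python
# Sketch of the state transform a recall node performs. The memory list
# and keyword matching below are illustrative stand-ins; only the shape
# of the returned partial state update mirrors WisdomRecallNode.

def recall_stub(state: dict, memories: list[dict], limit: int = 5) -> dict:
    """Return a LangGraph-style partial state update."""
    query = state["messages"][-1]["content"].lower()
    # Naive relevance: keep memories sharing any word with the query.
    hits = [
        m for m in memories
        if any(word in m["content"].lower() for word in query.split())
    ]
    return {"wisdom_context": hits[:limit]}

state = {"messages": [{"role": "user", "content": "python tips"}]}
memories = [
    {"content": "User prefers Python"},
    {"content": "User likes concise answers"},
]
update = recall_stub(state, memories)
print(update)  # {'wisdom_context': [{'content': 'User prefers Python'}]}
```

Downstream nodes (like `call_llm` above) then read `state["wisdom_context"]` to build their prompt.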
```python
from wisdom_layer.integration.langchain import WisdomStore

store = WisdomStore(agent)
app = graph.compile(store=store)
```

This makes wisdom memory available across threads via LangGraph's `store` parameter injection.
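WisdomStore follows LangGraph's store interface, where items live under a namespace tuple and a key. A toy in-memory version (not the real class; substring matching stands in for semantic search) illustrates the semantics that `asearch` and `aput` rely on:

```python
# Toy namespace/key/value store. Assumed semantics only: the real
# WisdomStore backs these operations with wisdom-layer's semantic memory.

class ToyStore:
    def __init__(self) -> None:
        self._items: dict[tuple, dict[str, dict]] = {}

    def put(self, namespace: tuple, key: str, value: dict) -> None:
        # Later puts to the same (namespace, key) overwrite the value.
        self._items.setdefault(namespace, {})[key] = value

    def search(self, namespace: tuple, query: str) -> list[dict]:
        # Substring match stands in for semantic search.
        return [
            v for v in self._items.get(namespace, {}).values()
            if query.lower() in v.get("content", "").lower()
        ]

store = ToyStore()
store.put(("user", "123"), "key1", {"content": "learned preference: Python"})
print(store.search(("user", "123"), "python"))  # one hit
print(store.search(("user", "999"), "python"))  # [] -- namespaces are isolated
```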
Expose your agent's capabilities to Claude Code, Cursor, Windsurf, or any MCP-compatible AI tool via the standard Model Context Protocol.
```bash
pip install "wisdom-layer[mcp,anthropic]"
export ANTHROPIC_API_KEY=sk-ant-...
wisdom-layer-mcp --db wisdom.db --agent-id my-agent
```

Add to `.claude/settings.local.json`:
```json
{
  "mcpServers": {
    "wisdom-layer": {
      "command": "wisdom-layer-mcp",
      "args": ["--db", "/path/to/wisdom.db", "--agent-id", "my-agent"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-..."
      }
    }
  }
}
```

Add to `.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "wisdom-layer": {
      "command": "wisdom-layer-mcp",
      "args": ["--db", "/path/to/wisdom.db", "--agent-id", "my-agent"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-..."
      }
    }
  }
}
```

| Tool | Description |
|---|---|
| wisdom_capture | Store a memory (observation, interaction, feedback) |
| wisdom_recall | Semantic search across all memory tiers |
| wisdom_health | Get the agent's cognitive health report |
| wisdom_directives | List active behavioral directives |
| wisdom_add_directive | Add a new behavioral rule |
| wisdom_dream | Trigger a reflection cycle |
| wisdom_provenance | Trace the origin/history of any entity |
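Under the hood, MCP clients invoke these tools with JSON-RPC 2.0 `tools/call` messages over the chosen transport. A sketch of such a request for `wisdom_capture` (the argument names are an assumption mirroring the SDK's `capture()` parameters; check the tool's published schema):

```python
import json

# JSON-RPC 2.0 request an MCP client would send over stdio to call a
# tool, per the Model Context Protocol. Argument shape is assumed here.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "wisdom_capture",
        "arguments": {
            "event_type": "observation",
            "content": {"text": "User prefers concise answers"},
        },
    },
}
print(json.dumps(request, indent=2))
```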
| URI | Description |
|---|---|
| wisdom://config | Agent configuration (name, role, tier) |
| wisdom://directives | Active directives as structured data |
| wisdom://health | Current health report snapshot |
| Flag | Default | Description |
|---|---|---|
| --db | wisdom.db | SQLite database path |
| --agent-id | mcp-agent | Agent ID |
| --transport | stdio | stdio \| sse \| streamable-http |
| --log-level | WARNING | DEBUG \| INFO \| WARNING \| ERROR |
The CLI detects your LLM from environment variables, in order:
Two adapters for different LangChain generations.
For LangGraph's cross-thread persistence. This is the modern, recommended approach.
```python
from wisdom_layer.integration.langchain import WisdomStore

store = WisdomStore(agent)

# Use with graph.compile for cross-thread memory
app = graph.compile(store=store)

# Or use directly
results = await store.asearch(("user", "123"), query="Python")
await store.aput(("user", "123"), "key1", {"content": "learned preference"})
```

For existing LangChain chains that use BaseMemory. Deprecated; migrate to WisdomStore when possible.
```python
from wisdom_layer.integration.langchain import WisdomLayerMemory

memory = WisdomLayerMemory(agent=agent)
result = memory.load_memory_variables({"input": "What do you know about me?"})
memory.save_context(
    {"input": "I prefer Python"},
    {"output": "Noted! I'll remember that."},
)
```

The Wisdom Layer SDK models its cognitive architecture on how humans learn:
**Dream cycles.** An autonomous reflection pipeline (schedulable or manual) that consolidates memories, evolves directives, and audits coherence.
**Directives.** Self-authored behavioral rules with a lifecycle: provisional → active → permanent. Reinforced through usage, decayed when unused. Every directive has full provenance tracking.
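The reinforce/decay mechanics can be pictured as a small state machine. This is a hypothetical sketch: the promotion thresholds, step sizes, and the rule that permanent directives never demote are illustrative assumptions, not the SDK's actual tuning.

```python
# Illustrative model of the provisional -> active -> permanent lifecycle.
# All thresholds and step sizes below are assumptions for exposition.
from dataclasses import dataclass

@dataclass
class Directive:
    rule: str
    stage: str = "provisional"
    strength: float = 0.5

    def reinforce(self) -> None:
        """Called when the directive is used; promotes at assumed thresholds."""
        self.strength = min(1.0, self.strength + 0.1)
        if self.stage == "provisional" and self.strength >= 0.7:
            self.stage = "active"
        elif self.stage == "active" and self.strength >= 1.0:
            self.stage = "permanent"

    def decay(self) -> None:
        """Called when unused for a cycle; permanent directives are exempt."""
        if self.stage != "permanent":
            self.strength = max(0.0, self.strength - 0.05)
            if self.stage == "active" and self.strength < 0.4:
                self.stage = "provisional"

d = Directive("Prefer concise answers")
for _ in range(3):
    d.reinforce()
print(d.stage, round(d.strength, 2))  # active 0.8
```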
**Health monitoring.** A composite wisdom score (0–1) with five components: memory diversity, directive coherence, reflection frequency, learning velocity, and knowledge depth. Automatic classification: healthy / stagnant / drifting / overloaded.
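To make the composite concrete, here is a sketch that averages the five named components into a 0–1 score. The unweighted mean and the sample values are illustrative assumptions; the SDK's actual weighting and classification cutoffs are internal.

```python
# Illustrative composite: unweighted mean of the five health components.
def wisdom_score(components: dict[str, float]) -> float:
    keys = (
        "memory_diversity", "directive_coherence",
        "reflection_frequency", "learning_velocity", "knowledge_depth",
    )
    return sum(components[k] for k in keys) / len(keys)

components = {
    "memory_diversity": 0.8,
    "directive_coherence": 0.9,
    "reflection_frequency": 0.6,
    "learning_velocity": 0.5,
    "knowledge_depth": 0.7,
}
score = wisdom_score(components)
print(round(score, 2))  # 0.7
```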
Visualize your agent's cognitive architecture in a browser.
```bash
pip install "wisdom-layer[dashboard]"
wisdom-layer-dashboard --db agent.db
```

Five screens: Health (wisdom score, trajectory), Directives (lifecycle, provenance), Memory (search, tier distribution), Dreams (cycle history, scheduling), and Configuration (feature flags, personality, resource limits).
```python
from wisdom_layer.dashboard import mount_dashboard

app = mount_dashboard(agent)
# Serve with uvicorn on any port
```