Using Vercel AI SDK? Check out the AI SDK integration for the cleanest implementation with @supermemory/tools/ai-sdk.
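A taste of what that looks like (a rough sketch: the supermemoryTools export and containerTags option are assumptions here, so check the integration guide for the exact API):

import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
// Assumed export; see the AI SDK integration guide for the exact API
import { supermemoryTools } from "@supermemory/tools/ai-sdk";

const { text } = await generateText({
  model: openai("gpt-4o"),
  prompt: "What do you remember about me?",
  // Hands the model memory search/add tools scoped to one user
  tools: {
    ...supermemoryTools(process.env.SUPERMEMORY_API_KEY!, {
      containerTags: ["user_123"],
    }),
  },
});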
Memory API
Step 1. Sign up for Supermemory’s Developer Platform to get an API key. Click API Keys -> Create API Key to generate one.
Step 2. Install the SDK and set your API key:

Python:

pip install supermemory
export SUPERMEMORY_API_KEY="YOUR_API_KEY"

TypeScript:

npm install supermemory
export SUPERMEMORY_API_KEY="YOUR_API_KEY"
Step 3. Here’s everything you need to add memory to your LLM:
Python:

from supermemory import Supermemory

# Reads SUPERMEMORY_API_KEY from the environment
client = Supermemory()

USER_ID = "dhravya"

conversation = [
    {"role": "assistant", "content": "Hello, how are you doing?"},
    {"role": "user", "content": "Hello! I am Dhravya. I am 20 years old. I love to code!"},
    {"role": "user", "content": "Can I go to the club?"},
]

# Get user profile + relevant memories for context
profile = client.profile(container_tag=USER_ID, q=conversation[-1]["content"])

# Join outside the f-string: backslashes inside f-string expressions
# are a syntax error before Python 3.12
static_facts = "\n".join(profile.profile.static)
dynamic_facts = "\n".join(profile.profile.dynamic)
memories = "\n".join(r.content for r in profile.search_results.results)

context = f"""Static profile:
{static_facts}
Dynamic profile:
{dynamic_facts}
Relevant memories:
{memories}"""

# Build messages with memory-enriched context
messages = [{"role": "system", "content": f"User context:\n{context}"}, *conversation]
# response = llm.chat(messages=messages)

# Store conversation for future context
client.memories.add(
    content="\n".join(f"{m['role']}: {m['content']}" for m in conversation),
    container_tag=USER_ID,
)
TypeScript:

import Supermemory from "supermemory";

// Reads SUPERMEMORY_API_KEY from the environment
const client = new Supermemory();

const USER_ID = "dhravya";

const conversation = [
  { role: "assistant", content: "Hello, how are you doing?" },
  { role: "user", content: "Hello! I am Dhravya. I am 20 years old. I love to code!" },
  { role: "user", content: "Can I go to the club?" },
];

// Get user profile + relevant memories for context
const profile = await client.profile({
  containerTag: USER_ID,
  q: conversation.at(-1)!.content,
});

const context = `Static profile:
${profile.profile.static.join("\n")}
Dynamic profile:
${profile.profile.dynamic.join("\n")}
Relevant memories:
${profile.searchResults.results.map((r) => r.content).join("\n")}`;

// Build messages with memory-enriched context
const messages = [{ role: "system", content: `User context:\n${context}` }, ...conversation];
// const response = await llm.chat({ messages });

// Store conversation for future context
await client.memories.add({
  content: conversation.map((m) => `${m.role}: ${m.content}`).join("\n"),
  containerTag: USER_ID,
});
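The llm.chat line above is a placeholder for your model call. Wiring it to a real model could look like this (a minimal sketch, assuming the official openai package with OPENAI_API_KEY set; context comes from the snippet above):

import OpenAI from "openai";

const llm = new OpenAI(); // reads OPENAI_API_KEY from the environment

const response = await llm.chat.completions.create({
  model: "gpt-4o", // any chat model works here
  messages: [
    { role: "system", content: `User context:\n${context}` },
    { role: "user", content: "Can I go to the club?" },
  ],
});
console.log(response.choices[0].message.content);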
That’s it! Supermemory automatically:
- Extracts memories from conversations
- Builds and maintains user profiles (static facts + dynamic context)
- Returns relevant context for personalized LLM responses
Optional: Use the threshold parameter to filter search results by relevance score. For example: client.profile(container_tag=USER_ID, threshold=0.7, q=query) will only include results with a score above 0.7.
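The same filter through the TypeScript client would look like this (a sketch, assuming the profile method accepts threshold just as the Python SDK does):

const profile = await client.profile({
  containerTag: USER_ID,
  q: "Can I go to the club?",
  threshold: 0.7, // drop results scoring 0.7 or below
});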
Learn more about User Profiles and Search.