# Quickstart

Get your first fact into Atlas and retrieve it in under 5 minutes.
## Step 1 — Get an API key

Sign up at [atlas.bsyncs.com/signup](https://atlas.bsyncs.com/signup).

Your API key is generated instantly and shown once in the dashboard — copy it now:

```
atlas_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```
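To avoid hard-coding the key in your scripts, you can store it in an environment variable. The variable name `ATLAS_API_KEY` is our suggestion here, not an Atlas convention:

```shell
# Store the key in your shell environment (variable name is a suggestion)
export ATLAS_API_KEY="atlas_your_key_here"
```

You can then read it in Python with `os.environ["ATLAS_API_KEY"]` instead of pasting the key into source files.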
## Step 2 — Install the SDK

Python:

```bash
pip install bsyncs-atlas-memory
```

Node.js: no installation needed; use any HTTP client.

```bash
# REST only — native JS SDK coming soon
npm install axios
```
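Whatever client you use, the request shape matches the cURL examples in this guide. As a sketch, here is a helper that builds the ingest request with Python's standard library; the endpoint path and `X-API-Key` header come from the cURL example in Step 3, while the helper name itself is ours:

```python
import json
import urllib.request

ATLAS_BASE_URL = "https://api.atlas.bsyncs.com"

def build_ingest_request(text: str, user_id: str, api_key: str) -> urllib.request.Request:
    """Build a POST /brain/ingest request matching the cURL example in Step 3."""
    body = json.dumps({"text": text, "user_id": user_id}).encode("utf-8")
    return urllib.request.Request(
        url=f"{ATLAS_BASE_URL}/brain/ingest",
        data=body,
        headers={
            "X-API-Key": api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_ingest_request(
    "Project Apollo uses PostgreSQL on AWS RDS.", "user-123", "atlas_your_key_here"
)
print(req.full_url)  # https://api.atlas.bsyncs.com/brain/ingest
# Send it with: urllib.request.urlopen(req)
```

The `urlopen` call is left commented out so the sketch runs without network access or a real key.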
## Step 3 — Ingest your first fact

```python
from atlas_memory import CognitiveBrain

brain = CognitiveBrain(
    api_key="atlas_your_key_here",
    base_url="https://api.atlas.bsyncs.com",
    user_id="user-123",
)

result = brain.add("Project Apollo uses PostgreSQL on AWS RDS.")
print(result)
# IngestResult(facts=1, chunks=1, latency=1240ms)
```
Or via the REST API:

```bash
curl -X POST https://api.atlas.bsyncs.com/brain/ingest \
  -H "X-API-Key: atlas_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Project Apollo uses PostgreSQL on AWS RDS.",
    "user_id": "user-123"
  }'
```

Response:

```json
{
  "facts_ingested": 1,
  "episodic_chunks": 1,
  "entities_extracted": 2,
  "triples_extracted": 1,
  "latency_ms": 1240
}
```
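The response fields are worth checking before you trust the ingest, since `facts_ingested: 0` means nothing was extracted from your text. A small sketch, assuming the JSON shape shown above:

```python
import json

# Example response body from POST /brain/ingest (shape as shown above)
raw = """
{
  "facts_ingested": 1,
  "episodic_chunks": 1,
  "entities_extracted": 2,
  "triples_extracted": 1,
  "latency_ms": 1240
}
"""

resp = json.loads(raw)

# Basic sanity check: an ingest that extracted nothing is usually a bad sign
if resp["facts_ingested"] == 0:
    print("Nothing was extracted; check your input text.")
else:
    print(f"Stored {resp['facts_ingested']} fact(s) in {resp['latency_ms']} ms")
```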
## Step 4 — Retrieve context

```python
results = brain.search("What database does Apollo use?")
print(results.context)
# - Project Apollo uses PostgreSQL (high confidence, via semantic)
```
Or via the REST API, where `k` caps the number of memories returned:

```bash
curl -X POST https://api.atlas.bsyncs.com/brain/retrieve \
  -H "X-API-Key: atlas_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "What database does Apollo use?",
    "user_id": "user-123",
    "k": 5
  }'
```
## Step 5 — Inject into your LLM

```python
from atlas_memory import CognitiveBrain
from openai import OpenAI

client = OpenAI()
brain = CognitiveBrain(api_key="atlas_...", user_id="user-123")

user_message = "What database does Apollo use?"

# Retrieve relevant memory
memory = brain.search(user_message)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": f"You are a helpful assistant.\n\n{memory.format()}"
        },
        {"role": "user", "content": user_message}
    ]
)
```
The `memory.format()` method returns a compact, LLM-ready string. Pass it
directly into your system prompt — no parsing needed.
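If you want control over the prompt layout instead of using `memory.format()`, you can assemble the system prompt yourself. A sketch, assuming the retrieved context is the bullet-list string shown in Step 4; the helper name and persona default are ours, not part of the SDK:

```python
def build_system_prompt(context: str, persona: str = "You are a helpful assistant.") -> str:
    """Prepend retrieved memory to a base system prompt.

    `context` is assumed to be the bullet-list string shown in Step 4.
    """
    if not context.strip():
        return persona  # no relevant memories; fall back to the bare persona
    return f"{persona}\n\nRelevant memories:\n{context}"

context = "- Project Apollo uses PostgreSQL (high confidence, via semantic)"
print(build_system_prompt(context))
```

Falling back to the bare persona on an empty context avoids sending the model a dangling "Relevant memories:" header with nothing under it.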
## Next steps