:: SEQUENCE_INIT

Recall is
Power.


./autojack
PID: 8492
Active
AutoJack

Hi! I'm AutoJack. How can I help you today?

Hey! I'm starting a new web project.
analyzing...

> Awesome! Let me check what you usually work with...

PROCESSING...

Let me recall their development preferences

> function call:
recall_memory("web development stack")
[Preference · 0x1A4F] Prefers React with TypeScript for web apps
[Pattern] Always uses Tailwind CSS for styling

Perfect! I remember you like React + TypeScript with Tailwind CSS. Want me to set that up?

Yes! And use Vite this time.

Great choice! Do you want your usual folder structure too?

Yes please!
STORING_NEW_MEMORY...

Vite is new - I should store this preference

> function call: store_memory
content: "Prefers Vite as build tool"
type: "Preference"
importance: 0.9

✅ Perfect! Setting up React + TypeScript + Vite + Tailwind

💾 Remembered your Vite preference for next time!
Perfect, thanks! 🙌
:: SESSION_LOG
Retrieved 2 existing memories
💾 Stored 1 new preference
Created 1 memory association
:: SYSTEM_ARCHITECTURE
Input

Universal I/O

Compatible with Cursor, Claude, and ChatGPT. Connects to any LLM via MCP or an SSE sidecar.
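The SSE side of that sidecar comes down to parsing `data:` frames off a text stream. A generic sketch of the wire format (not AutoMem's actual client):

```python
def parse_sse(lines):
    """Yield the data payload of each server-sent event.
    Events are separated by blank lines; multi-line data is joined."""
    buf = []
    for line in lines:
        if line.startswith("data:"):
            buf.append(line[len("data:"):].strip())
        elif line == "" and buf:
            yield "\n".join(buf)
            buf = []

frames = ['data: {"memory": "Prefers Vite"}', "", "data: done", ""]
events = list(parse_sse(frames))
# events == ['{"memory": "Prefers Vite"}', 'done']
```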

LISTENING...
Process

Graph Engine

FalkorDB + Qdrant hybrid core. Maps relationships between memories, not just vector similarity.
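Relationship-aware recall can be pictured as seeding with the top vector hits, then walking graph edges to pull in linked memories. A toy sketch with hypothetical memory IDs:

```python
# Hypothetical adjacency: a vector hit on "react_ts" also surfaces
# the linked Tailwind and Vite memories via graph edges.
graph = {
    "react_ts": {"tailwind", "vite"},
    "tailwind": {"react_ts"},
    "vite": {"react_ts"},
}

def expand(seed_ids: set, hops: int = 1) -> set:
    """Return the seeds plus all neighbors within `hops` graph hops."""
    frontier, seen = set(seed_ids), set(seed_ids)
    for _ in range(hops):
        frontier = {n for node in frontier for n in graph.get(node, ())} - seen
        seen |= frontier
    return seen

related = expand({"react_ts"})
# related == {"react_ts", "tailwind", "vite"}
```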

Output

Instant Recall

Sub-50ms latency. Your agent knows what you know before you finish typing the prompt.

>> 48ms
:: VISUALIZE_MEMORY

It just clicks.

AutoMem runs quietly in the background, organizing your thoughts into a retrieval graph.

MEMORY CORE
AutoMem · Ideas · Research · Habits · Code
Retrieval Graph · Thread · Session
:: BACKGROUND_PROCESS

Organizes itself while you work.

AutoMem runs quietly in the background, stitching every thought into a retrieval graph. It notices patterns, links context, and keeps the right edges warm so your agents answer with your voice—without you ever thinking about it.

daemon: recall.service
> ingest(stream) → normalize → link()
paths warmed: 12 · associations refreshed: 48ms
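The daemon log's ingest(stream) → normalize → link() shape maps naturally onto chained generators. A toy sketch, with edge creation reduced to pairing adjacent notes:

```python
def ingest(stream):
    """Pull raw notes off an incoming stream."""
    yield from stream

def normalize(notes):
    """Collapse whitespace and lowercase each note."""
    for text in notes:
        yield " ".join(text.split()).lower()

def link(notes):
    """Emit (prev, current) pairs, a stand-in for graph edge creation."""
    prev = None
    for text in notes:
        if prev is not None:
            yield (prev, text)
        prev = text

edges = list(link(normalize(ingest(["  Prefers Vite ", "Uses  Tailwind CSS"]))))
# edges == [('prefers vite', 'uses tailwind css')]
```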
:: MEMORY_STREAM
2025-10-04 · 09:00 UTC

0.6.0 • Performance & Async Pipeline

Embedding batching, async embedding generation, and relationship-count caching land. /memory gets 60% faster; consolidation runs 5x quicker with structured logging and health metrics.
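Embedding batching amounts to grouping texts so N notes cost ceil(N/size) embedding calls instead of N. A minimal sketch of the chunking:

```python
def batched(items: list, size: int) -> list:
    """Split items into consecutive chunks of at most `size`."""
    return [items[i:i + size] for i in range(0, len(items), size)]

texts = [f"memory {i}" for i in range(10)]
batches = batched(texts, 4)
# 3 embedding calls (sizes 4, 4, 2) instead of 10 single-text calls
```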

2025-10-17 · 12:00 UTC

0.7.0 • MCP over SSE Sidecar

Hosted MCP server over SSE so ChatGPT, Claude Web/Mobile, and ElevenLabs can stream memories. Railway template now provisions API + SSE + FalkorDB together.

2025-11-08 · 10:00 UTC

0.8.0 • API Modularization & Security

Refactored monolithic app into modular blueprints, hardened memory IDs to server-generated UUIDs, and fixed Railway secrets/volumes for reliable deploys.

2025-11-20 · 09:30 UTC

0.9.0 • Retrieval Engine Upgrade

Multi-hop bridge discovery, temporal alignment scoring, and weighted hybrid scoring (vector, keyword, relation, temporal, importance) boost recall precision.
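The weighted hybrid score reads as a convex combination of the five signals. The weights below are illustrative only, not AutoMem's actual tuning:

```python
# Illustrative weights over the five signals named in the release note.
WEIGHTS = {"vector": 0.4, "keyword": 0.2, "relation": 0.2,
           "temporal": 0.1, "importance": 0.1}

def hybrid_score(signals: dict) -> float:
    """Weighted sum of per-signal scores, each assumed to lie in [0, 1]."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

score = hybrid_score({"vector": 0.9, "keyword": 0.5, "relation": 1.0,
                      "temporal": 0.3, "importance": 0.9})
# 0.4*0.9 + 0.2*0.5 + 0.2*1.0 + 0.1*0.3 + 0.1*0.9 ≈ 0.78
```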

2025-12-02 · 08:00 UTC

0.9.1 • Entity Expansion + SOTA

Added expand_entities/entity_expansion params for graph hops and hit 90.53% on LoCoMo-10 (multi-hop up to 50%).

2025-12-02 · 14:00 UTC

MCP 0.8.0 • Advanced Recall Tools

Exposed expand_entities/relations, auto_decompose, context-aware boosts, and full tool metadata in the MCP client; simplified Claude Code integration.

2025-12-04 · 16:00 UTC

MCP 0.8.1 • Spec Compliance

All MCP tools now return structuredContent with outputSchema, aligning recall outputs (results, dedup_removed) to spec.
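Per the MCP spec, a tool result can carry machine-readable structuredContent that validates against the tool's declared outputSchema. A sketch of the recall shape named above (field names from the spec; the payload keys mirror the release note, not a published schema):

```python
# Hypothetical outputSchema for the recall tool's structured result.
output_schema = {
    "type": "object",
    "properties": {
        "results": {"type": "array"},
        "dedup_removed": {"type": "integer"},
    },
    "required": ["results", "dedup_removed"],
}

# A spec-shaped tool result: human-readable content plus structuredContent.
tool_result = {
    "content": [{"type": "text", "text": "2 memories recalled"}],
    "structuredContent": {
        "results": [{"id": "mem-1"}, {"id": "mem-2"}],
        "dedup_removed": 1,
    },
}

structured = tool_result["structuredContent"]
missing = [k for k in output_schema["required"] if k not in structured]
# missing == []
```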

:: DEPLOYMENT_PROTOCOLS

Open Source & Portable

Your memory infrastructure should be as portable as your code. Run it on your laptop, your private cloud, or our managed service.

Standard
🐳

Docker

The standard container. Run it locally or on any VPS.

docker run -p 3000:3000 \
  automem/server:latest
View Image
Fastest
🚂

Railway

One-click production deployment. Handles DBs & updates automatically.

> Deploying to prod...
> Success (24s)
One-Click Deploy
Hacker
💻

Source

Fork the repo. Hack the graph logic. It's your code.

git clone ...
npm install
Fork Repo