Quick Start

Deploy your personal AI assistant in under 5 minutes with Docker Compose. One instance handles everything — coding, research, scheduling, browsing, automation — across every platform you connect. The more you use it, the smarter it gets.

Prerequisites

You'll need Docker with Docker Compose, Git, and an API key for at least one supported LLM provider (OpenAI or Anthropic).
1. Clone the repository

git clone https://github.com/kraken-agent/kraken-agent.git
cd kraken-agent

2. Configure environment

cp .env.example .env

Open .env and set the required values:

# Pick your LLM provider (at least one is required)
OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...

# Secure your API — choose any secret
KRAKEN_API_KEY=sk-kraken-your-secret-here

# Database passwords
POSTGRES_PASSWORD=change-me
NEO4J_PASSWORD=change-me

That's all you need. Everything else has sensible defaults.
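Any random string works as the `KRAKEN_API_KEY` secret. If you'd rather not invent one by hand, you can generate it with Python's standard `secrets` module (the `sk-kraken-` prefix below just mirrors the example value above; it isn't required):

```python
# Generate a random value for KRAKEN_API_KEY using only the standard library.
import secrets

print("KRAKEN_API_KEY=sk-kraken-" + secrets.token_hex(16))
```

Paste the printed line into your `.env`.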

3. Start the stack

docker-compose up -d

This starts six services — the full brain of your personal assistant:

| Service | Purpose |
| --- | --- |
| kraken-api | REST API on port 8080 — your assistant's front door |
| worker | Background jobs (memory extraction, skill reflection, scheduling) |
| postgres | PostgreSQL 17 with pgvector (sessions, messages, skills, embeddings) |
| neo4j | Neo4j 5 (knowledge graph — entities, relationships, communities) |
| redis | Redis 7 (job queues, session cache) |
| chromium | Headless browser for web automation |

Once running, connect any number of platforms to this single instance — Discord, Telegram, CLI tools, cron jobs, or your own apps. They all talk to the same brain.

4. Initialize the database

docker-compose exec kraken-api npm run db:push

5. Verify it's running

curl http://localhost:8080/health

You should get {"status":"ok"}.


First conversation

Using curl

curl -X POST http://localhost:8080/v1/chat \
  -H "Authorization: Bearer sk-kraken-your-secret-here" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Hello! What can you do?",
    "session_key": "my-first-session"
  }'
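The same request can be built in plain Python with the standard library, no SDK required. This sketch constructs the request without sending it; `urllib.request.urlopen(req)` would fire it once the stack is up (the URL and key are the example values from above):

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, message, session_key):
    """Build (but don't send) a POST request for the /v1/chat endpoint."""
    body = json.dumps({"message": message, "session_key": session_key}).encode()
    return urllib.request.Request(
        f"{base_url}/v1/chat",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "http://localhost:8080",
    "sk-kraken-your-secret-here",
    "Hello! What can you do?",
    "my-first-session",
)
# With the stack running: urllib.request.urlopen(req).read()
```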

Using the Python SDK

pip install kraken-agent

from kraken import KrakenClient

client = KrakenClient(
    api_url="http://localhost:8080",
    api_key="sk-kraken-your-secret-here",
    model="gpt-5.4",
)

# Chat
response = client.chat("Hello! What can you do?", session_key="getting-started")
print(response.content)

# The agent remembers you across calls
client.chat("My name is Alice", session_key="getting-started")
r = client.chat("What's my name?", session_key="getting-started")
print(r.content)  # "Alice"
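The `session_key` is what ties these calls together: every request carrying the same key lands in the same conversation history. A toy sketch of that behavior (an in-memory stand-in for illustration only; Kraken actually persists sessions in PostgreSQL):

```python
from collections import defaultdict

# Toy in-memory session store: session_key -> list of messages.
sessions = defaultdict(list)

def record(session_key, role, content):
    """Append a message to the history shared by this session_key."""
    sessions[session_key].append({"role": role, "content": content})
    return sessions[session_key]

record("getting-started", "user", "My name is Alice")
history = record("getting-started", "user", "What's my name?")
# Both calls used the same key, so the second sees the first message
print(len(history))  # 2
```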

Using the OpenAI-compatible endpoint

Any OpenAI client library works:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="sk-kraken-your-secret-here",
)

response = client.chat.completions.create(
    model="gpt-5.4",
    messages=[{"role": "user", "content": "Hello from the OpenAI SDK!"}],
)
print(response.choices[0].message.content)

What happens behind the scenes

When you send a message, Kraken:

  1. Resolves or creates a session using your session_key
  2. Stores your message in PostgreSQL with a vector embedding
  3. Builds a system prompt — assembling SOUL.md, user model, relevant memory, and matching skills
  4. Checks if context compaction is needed — if approaching the token limit, it silently persists important context to memory and summarizes older messages
  5. Calls the LLM with your message history and available tools
  6. Stores the response with embedding
  7. Queues background jobs — entity extraction, user model update, skill reflection, community re-clustering
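The seven steps above can be sketched as a single function. Everything here is an illustrative stand-in, not Kraken's actual internals: `build_system_prompt`, `compact`, `enqueue_background_jobs`, and the toy token limit are all simplified placeholders.

```python
TOKEN_LIMIT = 50  # toy limit; the real limit is the model's context window

def token_count(messages):
    # Crude word count as a stand-in for real tokenization.
    return sum(len(m["content"].split()) for m in messages)

def build_system_prompt(session):
    # Real assembly merges SOUL.md, the user model, memory, and skills.
    return "You are a helpful assistant."

def compact(session):
    # Real compaction persists important context to memory first,
    # then summarizes older messages; here we just truncate.
    session["messages"] = session["messages"][-2:]

jobs = []
def enqueue_background_jobs(session_key):
    # Real jobs: entity extraction, user model update, skill reflection, ...
    jobs.append(("extract_entities", session_key))

def handle_message(message, session_key, store, llm):
    session = store.setdefault(session_key, {"messages": []})           # 1. resolve/create session
    session["messages"].append({"role": "user", "content": message})    # 2. store user message
    prompt = build_system_prompt(session)                                # 3. assemble system prompt
    if token_count(session["messages"]) > TOKEN_LIMIT:                   # 4. compact if near limit
        compact(session)
    reply = llm(prompt, session["messages"])                             # 5. call the LLM
    session["messages"].append({"role": "assistant", "content": reply})  # 6. store the response
    enqueue_background_jobs(session_key)                                 # 7. queue background jobs
    return reply

store = {}
echo_llm = lambda prompt, messages: f"echo: {messages[-1]['content']}"
print(handle_message("Hello!", "demo", store, echo_llm))  # echo: Hello!
```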

Every conversation makes Kraken smarter. It's not just answering — it's building a permanent understanding of who you are, what you're working on, and how you like things done. This is the assistant you deploy once and keep running.


Next steps

Your assistant is running. Now connect it to your life: