Tutorial

Build your first brain

Initialize a brain, write a page, search it, and ask it a question — about ten minutes from cold start.

This tutorial walks you through the three things that make GigaBrain useful: writing a page, searching for it, and asking a question that requires more than a keyword match. By the end, you’ll understand the page model and have a brain you can keep using.

Prerequisites

  • You’ve installed gbrain (see Install GigaBrain).
  • gbrain version works in your shell.

This tutorial uses ~/brain.db throughout. Substitute another path if you prefer; the --db flag and GBRAIN_DB env var both work.

Initialize a brain

Running init creates a SQLite file at ~/brain.db with the v5 schema and the FTS5 index, registers the active embedding model, and extracts the embedded skills to ~/.gbrain/skills/ on first run.

CLI
gbrain init ~/brain.db

Write your first page

Every page splits into compiled truth (above the line) and timeline (below it). The top stays current; the bottom is append-only evidence. See the page model for why.

Compiled truth + timeline
cat <<'MD' | gbrain put people/alice --db ~/brain.db
---
title: Alice Example
type: person
summary: Founder, RiverAI — offline-first knowledge systems
---

# Alice Example

Founder at RiverAI. Works on offline-first knowledge systems.
Reliable async correspondent; consistently sharp on retrieval design.

---

## Timeline

- 2026-04-14 — Met at a demo day; interested in offline-first knowledge.
- 2026-04-22 — Replied to outreach; warm intro to Bob.
MD
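The put command parses the frontmatter block and splits the body at the standalone --- separator into compiled truth and timeline. A minimal sketch of that split, assuming the layout shown above (the function name and parsing rules are illustrative, not gbrain's actual code):

```python
def split_page(raw: str):
    """Split a page into (frontmatter, compiled_truth, timeline).

    Sketch only: assumes frontmatter sits between the first pair of
    '---' lines and the timeline starts at the next standalone '---',
    as in the example page above.
    """
    # Strip the YAML frontmatter between the first two '---' lines.
    _, frontmatter, body = raw.split("---\n", 2)
    # The next standalone '---' divides compiled truth from the timeline.
    truth, _, timeline = body.partition("\n---\n")
    return frontmatter.strip(), truth.strip(), timeline.strip()
```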

Search what you just wrote

search runs FTS5 keyword retrieval. query runs hybrid retrieval (FTS5 + local vector embeddings) so paraphrases and synonyms are honored. Both work over the page you just wrote.

Keyword + hybrid
gbrain search "offline-first" --db ~/brain.db
gbrain query "who is interested in offline knowledge?" --db ~/brain.db
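The difference between the two modes can be pictured with a toy sketch. Everything here is illustrative: real FTS5 does ranked, tokenized matching, and gbrain compares dense vectors rather than consulting a synonym table, but the fusion step is the same set-union, which is why a paraphrase can hit even when no keyword does:

```python
def keyword_hits(query, docs):
    """FTS-style retrieval: a doc matches if it shares a query token."""
    q = set(query.lower().split())
    return {slug for slug, text in docs.items()
            if q & set(text.lower().split())}

def vector_hits(query, docs, synonyms):
    """Stand-in for embedding similarity: expand the query with
    synonyms (a real system compares dense vectors instead)."""
    q = set(query.lower().split())
    for word in list(q):
        q |= synonyms.get(word, set())
    return {slug for slug, text in docs.items()
            if q & set(text.lower().split())}

def hybrid(query, docs, synonyms):
    """Hybrid retrieval fuses both result sets with set-union."""
    return keyword_hits(query, docs) | vector_hits(query, docs, synonyms)
```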

Ask a question with context

With --depth auto, the engine expands context up to the active token budget, assembling enough surrounding chunks to feed straight into a downstream LLM. See hybrid search for the mechanics.

Progressive retrieval
gbrain query "who would I ask about retrieval design?" --depth auto --db ~/brain.db
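A depth-auto pass can be sketched as greedy expansion under a token budget: keep taking the next-best chunk until the budget is spent. The scores and the whitespace "tokenizer" below are stand-ins for whatever the engine actually uses:

```python
def assemble_context(chunks, budget=4000):
    """Greedy sketch of progressive retrieval.

    `chunks` is a list of (score, text) pairs; token cost is
    approximated by whitespace word count (a real system would use
    the embedding model's tokenizer).
    """
    picked, used = [], 0
    for _, text in sorted(chunks, key=lambda c: c[0], reverse=True):
        cost = len(text.split())
        if used + cost > budget:
            break                      # budget exhausted; stop expanding
        picked.append(text)
        used += cost
    return "\n\n".join(picked)
```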

Append a timeline entry

Add evidence without rewriting the whole page. timeline-add is idempotent on (slug, date, summary) — re-running the same call is a no-op.

Structured timeline
gbrain timeline-add people/alice \
--date 2026-05-03 \
--summary "Co-authored draft on local embeddings; sharp edits." \
--db ~/brain.db
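The idempotency guarantee amounts to keying each entry on the (slug, date, summary) triple. A sketch with a plain dict standing in for the SQLite table:

```python
def timeline_add(db, slug, date, summary):
    """Idempotent append: (slug, date, summary) is the key, so
    replaying the same call is a no-op. `db` is a dict here; the
    real tool persists to SQLite."""
    key = (slug, date, summary)
    if key in db:
        return False   # already recorded; nothing to do
    db[key] = True
    return True
```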

See what you have

You should see one page, one structured timeline entry (the one added with timeline-add), and one active embedding model.

Inspect the brain
gbrain stats --db ~/brain.db
gbrain list --type person --db ~/brain.db

You created a SQLite file, wrote a page that follows the compiled-truth + timeline model, and ran two different retrieval strategies against it. Specifically:

  • init wrote the v5 schema, registered BAAI/bge-small-en-v1.5 (or whichever model you used), and extracted the embedded skills.
  • put parsed the frontmatter, split the body at the --- separator, ran novelty deduplication, and enqueued an embedding job.
  • search hit the FTS5 virtual table; query ran FTS5 and the local vector index in parallel and fused the results with set-union.
  • query --depth auto then ran progressive retrieval, expanding context until it hit the 4000-token default budget.

All of this happened on your machine. Nothing went over the network.

Connect this brain to Claude Code

The brain you just built can be served to any MCP client over stdio. Five lines of config, no keys.
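Assuming Claude Code's standard .mcp.json format, the configuration might look like the sketch below. The mcp subcommand name is a placeholder, not a documented gbrain flag; check gbrain --help for the actual server command:

```json
{
  "mcpServers": {
    "gbrain": {
      "command": "gbrain",
      "args": ["mcp", "--db", "/home/you/brain.db"]
    }
  }
}
```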