Tutorial
Build your first brain
Initialize a brain, write a page, search it, and ask it a question — about ten minutes from cold start.
This tutorial walks you through the three things that make GigaBrain useful: writing a page, searching for it, and asking a question that requires more than a keyword match. By the end, you’ll understand the page model and have a brain you can keep using.
Before you start
- You’ve installed gbrain (see Install GigaBrain). gbrain version works in your shell.
This tutorial uses ~/brain.db throughout. Substitute another path if you prefer; the --db flag and GBRAIN_DB env var both work.
Fast path
Initialize a brain
Creates a SQLite file at ~/brain.db with the v5 schema, the FTS5 index, the active embedding model registered, and the embedded skills extracted to ~/.gbrain/skills/ on first run.
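To make the moving parts concrete, here is a minimal sketch of a SQLite file with an FTS5 index, the kind of storage init sets up. The table and column names are invented for illustration and are not gbrain’s actual schema.

```python
import sqlite3

# Hypothetical schema sketch -- not gbrain's real one.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE pages (
    slug  TEXT PRIMARY KEY,
    title TEXT,
    body  TEXT
);
-- FTS5 virtual table: slug is stored but not indexed for search.
CREATE VIRTUAL TABLE pages_fts USING fts5(slug UNINDEXED, title, body);
""")
db.execute("INSERT INTO pages_fts (slug, title, body) VALUES (?, ?, ?)",
           ("people/alice", "Alice Example", "offline-first knowledge systems"))
# The default tokenizer splits on the hyphen, so "offline" matches "offline-first".
hits = db.execute("SELECT slug FROM pages_fts WHERE pages_fts MATCH ?",
                  ("offline",)).fetchall()
```

One file, no server process: that is all “a brain” is at the storage layer.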
gbrain init ~/brain.db

Write your first page
Every page splits into compiled truth (above the line) and timeline (below it). The top stays current; the bottom is append-only evidence. See the page model for why.
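A rough sketch of that split, assuming the simplest possible parsing: frontmatter between the first two --- lines, then one more --- dividing compiled truth from the timeline. The real parser in put may handle edge cases differently.

```python
def split_page(text: str):
    """Hypothetical splitter: frontmatter / compiled truth / timeline."""
    # Frontmatter sits between the first two '---' lines.
    _, frontmatter, body = text.split("---\n", 2)
    # The next '---' divides compiled truth (above) from the timeline (below).
    truth, _, timeline = body.partition("\n---\n")
    return frontmatter.strip(), truth.strip(), timeline.strip()

page = """---
title: Alice Example
---
# Alice Example
Founder at RiverAI.
---
## Timeline
- 2026-04-14 Met at a demo day.
"""
fm, truth, timeline = split_page(page)
```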
cat <<'MD' | gbrain put people/alice --db ~/brain.db
---
title: Alice Example
type: person
summary: Founder, RiverAI — offline-first knowledge systems
---
# Alice Example
Founder at RiverAI. Works on offline-first knowledge systems.
Reliable async correspondent; consistently sharp on retrieval design.
---
## Timeline
- 2026-04-14 — Met at a demo day; interested in offline-first knowledge.
- 2026-04-22 — Replied to outreach; warm intro to Bob.
MD

Search what you just wrote
search runs FTS5 keyword retrieval. query runs hybrid retrieval (FTS5 + local vector embeddings) so paraphrases and synonyms are honored. Both work over the page you just wrote.
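The fusion step can be pictured as a set-union over the two result lists: a page found by either channel survives. The scores and ranking rule below are invented for illustration; only the set-union itself comes from the tutorial.

```python
fts = {"people/alice": 1.8, "notes/retrieval": 1.2}       # keyword scores (invented)
vec = {"people/alice": 0.91, "ideas/offline-sync": 0.84}  # vector scores (invented)

# Set-union fusion: keep every slug seen by either channel, then rank by the
# best score it earned in any channel (one plausible rule, not gbrain's).
fused = sorted(set(fts) | set(vec),
               key=lambda s: max(fts.get(s, 0.0), vec.get(s, 0.0)),
               reverse=True)
```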
gbrain search "offline-first" --db ~/brain.db
gbrain query "who is interested in offline knowledge?" --db ~/brain.db Ask a question with context
With --depth auto, the engine expands context up to the active token budget, assembling enough surrounding chunks to feed straight into a downstream LLM. See hybrid search for the mechanics.
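One way to picture budgeted expansion: keep adding the next-best chunk until the budget would overflow. The chunk data and selection loop below are invented; the 4000-token default is the one this tutorial mentions.

```python
def assemble_context(chunks, budget=4000):
    """Greedy sketch: take best-first chunks until the token budget is hit."""
    picked, used = [], 0
    for text, tokens in chunks:       # assumed sorted best-first
        if used + tokens > budget:
            break                     # next chunk would overflow the budget
        picked.append(text)
        used += tokens
    return picked, used

chunks = [("alice summary", 1200), ("timeline entries", 1500),
          ("linked page: RiverAI", 1600), ("older notes", 900)]
context, used = assemble_context(chunks)
```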
gbrain query "who would I ask about retrieval design?" --depth auto --db ~/brain.db Append a timeline entry
Add evidence without rewriting the whole page. timeline-add is idempotent on (slug, date, summary) — re-running the same call is a no-op.
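The idempotence rule is easy to model: treat (slug, date, summary) as the identity of an entry, so an exact re-add changes nothing. A sketch, not gbrain’s storage code:

```python
timeline: set[tuple[str, str, str]] = set()

def timeline_add(slug: str, date: str, summary: str) -> bool:
    """Return True if a new entry was added, False if it already existed."""
    key = (slug, date, summary)       # the idempotence key
    if key in timeline:
        return False                  # identical entry: no-op
    timeline.add(key)
    return True

first = timeline_add("people/alice", "2026-05-03", "Co-authored draft")
again = timeline_add("people/alice", "2026-05-03", "Co-authored draft")
```

Change any field, the date included, and the call counts as a new entry.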
gbrain timeline-add people/alice \
--date 2026-05-03 \
--summary "Co-authored draft on local embeddings; sharp edits." \
--db ~/brain.db

See what you have
You’ll see one page, one timeline entry, and one active embedding model.
gbrain stats --db ~/brain.db
gbrain list --type person --db ~/brain.db

What just happened
You created a SQLite file, wrote a page that follows the compiled-truth + timeline model, and ran two different retrieval strategies against it. Specifically:
- init wrote the v5 schema, registered BAAI/bge-small-en-v1.5 (or whichever model you used), and extracted the embedded skills.
- put parsed the frontmatter, split the body at the --- separator, ran novelty deduplication, and enqueued an embedding job.
- search hit the FTS5 virtual table; query ran FTS5 and the local vector index in parallel and fused the results with set-union.
- query --depth auto then ran progressive retrieval, expanding context until it hit the 4000-token default budget.
All of this happened on your machine. Nothing went over the network.
Where to go next
- More pages. Repeat step 02 with whatever you’ve been meaning to capture. Or use gbrain import <DIR> to bulk-load a markdown directory (Import an Obsidian vault).
- Connect an agent. Connect Claude Code wires this brain to any MCP-compatible client.
- Build a graph. Build a knowledge graph shows how typed temporal links work.
- Understand the design. Compiled truth + timeline and hybrid search cover the big design choices.
Connect this brain to Claude Code
The brain you just built can be served to any MCP client over stdio. Five lines of config, no keys.