v1.1.0

qmd External Knowledge Base Search

By levineam

Local hybrid search for markdown notes and docs. Use when searching notes, finding related content, or retrieving documents from indexed collections.

Downloads
1.3k
Stars
3
Versions
1
Updated
2026-02-24

Install

npx clawhub@latest install qmd-external

Documentation

qmd - Quick Markdown Search

Local search engine for Markdown notes, docs, and knowledge bases. Index once, search fast.

When to use (trigger phrases)

  • "search my notes / docs / knowledge base"
  • "find related notes"
  • "retrieve a markdown document from my collection"
  • "search local markdown files"

Default behavior (important)

  • Prefer qmd search (BM25). It's typically instant and should be the default.
  • Use qmd vsearch only when keyword search fails and you need semantic similarity (can be very slow on a cold start).
  • Avoid qmd query unless the user explicitly wants the highest quality hybrid results and can tolerate long runtimes/timeouts.
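The escalation policy above can be sketched as a small wrapper. This is a hypothetical helper, not part of qmd, and it assumes that `--json` output parses as a list of result objects (check your install's actual schema):

```python
import json
import subprocess

def qmd_json(args, runner=None):
    """Run a qmd subcommand with --json and parse its output.

    `runner` is injectable so the logic can be exercised without qmd
    installed; by default it shells out to the real binary.
    """
    run = runner or (lambda a: subprocess.run(
        ["qmd", *a], capture_output=True, text=True, check=True).stdout)
    out = run([*args, "--json"])
    return json.loads(out) if out.strip() else []

def search_with_fallback(query, runner=None):
    """Prefer fast BM25 search; fall back to vector search on zero hits."""
    hits = qmd_json(["search", query], runner)
    if hits:
        return hits  # BM25 found something: stop here, no LLM cold start
    return qmd_json(["vsearch", query], runner)  # last resort: semantic
```

Note that the sketch deliberately never reaches for `qmd query`, matching the guidance above to skip it for interactive use.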

Prerequisites

  • Bun >= 1.0.0
  • macOS: brew install sqlite (SQLite extensions)
  • Ensure PATH includes: $HOME/.bun/bin

Install Bun (macOS): brew install oven-sh/bun/bun

Install

bun install -g https://github.com/tobi/qmd

Setup

qmd collection add /path/to/notes --name notes --mask "**/*.md"

qmd context add qmd://notes "Description of this collection" # optional

qmd embed # one-time to enable vector + hybrid search

What it indexes

  • Intended for Markdown collections (commonly **/*.md).
  • In our testing, "messy" Markdown is fine: chunking is content-based (roughly a few hundred tokens per chunk), not strict heading/structure based.
  • Not a replacement for code search; use code search tools for repositories/source trees.
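To see why messy Markdown still indexes fine, here is a toy illustration of content-based chunking: greedily pack paragraphs until a chunk reaches a word budget, ignoring headings entirely. This is only an illustration of the idea; qmd's real chunking is internal and may differ:

```python
def chunk_by_content(text, target_words=300):
    """Toy content-based chunker (NOT qmd's actual algorithm).

    Splits on blank lines and packs paragraphs into chunks of roughly
    `target_words` words, regardless of heading structure.
    """
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        # Flush the current chunk once adding this paragraph would
        # exceed the budget.
        if current and count + words > target_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Because the split points depend only on content length, a file with inconsistent or absent headings chunks just as well as a tidy one.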

Search modes

  • qmd search (default): fast keyword match (BM25)
  • qmd vsearch (last resort): semantic similarity (vector). Often slow due to local LLM work before the vector lookup.
  • qmd query (generally skip): hybrid search + LLM reranking. Often slower than vsearch and may time out.

Performance notes

  • qmd search is typically instant.
  • qmd vsearch can take ~1 minute on some machines because query expansion may load a local model (e.g., Qwen3-1.7B) into memory per run; the vector lookup itself is usually fast.
  • qmd query adds LLM reranking on top of vsearch, so it can be even slower and less reliable for interactive use.
  • If you need repeated semantic searches, consider keeping the process/model warm (e.g., a long-lived qmd/MCP server mode if available in your setup) rather than invoking a cold-start LLM each time.

Common commands

qmd search "query"             # default

qmd vsearch "query"

qmd query "query"

qmd search "query" -c notes # Search specific collection

qmd search "query" -n 10 # More results

qmd search "query" --json # JSON output

qmd search "query" --all --files --min-score 0.3

Useful options

  • -n <num>: number of results
  • -c, --collection <name>: restrict to a collection
  • --all --min-score <num>: return all matches above a threshold
  • --json / --files: agent-friendly output formats
  • --full: return full document content
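When feeding `--json` output into a script or agent, you may want to apply your own threshold and result cap client-side. A minimal sketch follows; the `file` and `score` field names are assumptions about the output schema, so inspect `qmd search "q" --json` on your install and adjust:

```python
import json

def top_files(raw_json, min_score=0.3, limit=10):
    """Filter and rank qmd --json results client-side.

    Assumes each result object carries `file` and `score` keys
    (hypothetical schema; verify against your qmd version).
    """
    results = json.loads(raw_json)
    keep = [r for r in results if r.get("score", 0) >= min_score]
    keep.sort(key=lambda r: r["score"], reverse=True)
    return [r["file"] for r in keep[:limit]]
```

This mirrors what `--all --min-score 0.3` does server-side, but lets a pipeline change the threshold without re-running the search.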

Retrieve

qmd get "path/to/file.md"       # Full document

qmd get "#docid" # By ID from search results

qmd multi-get "journals/2025-05*.md"

qmd multi-get "doc1.md, doc2.md, #abc123" --json

Maintenance

qmd status                      # Index health

qmd update # Re-index changed files

qmd embed # Update embeddings

Keeping the index fresh

Set up a cron job or hook to automatically re-index. For example, a daily 5 AM reindex:

Via Clawdbot cron (isolated job, runs silently):

clawdbot cron add \
  --name "qmd-reindex" \
  --cron "0 5 * * *" \
  --tz "America/New_York" \
  --session isolated \
  --message "Run: export PATH=\"\$HOME/.bun/bin:\$PATH\" && qmd update && qmd embed"

Or via system crontab:

0 5 * * * export PATH="$HOME/.bun/bin:$PATH" && qmd update && qmd embed

This ensures your vault search stays current as you add or edit notes.

Models and cache

  • Uses local GGUF models; first run auto-downloads them.
  • Default cache: ~/.cache/qmd/models/ (override with XDG_CACHE_HOME).

Relationship to Clawdbot memory search

  • qmd searches *your local files* (notes/docs) that you explicitly index into collections.
  • Clawdbot's memory_search searches *agent memory* (saved facts/context from prior interactions).
  • Use both: memory_search for "what did we decide/learn before?", qmd for "what's in my notes/docs on disk?".
