Disaggregated AI search for modern applications
Made for your website
The global AI Search Widget
Static .msp on the CDN
Ship the compiled index beside your HTML, cached at the edge like any other asset, with no query-time database.
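Keeping that static index fresh across deploys can come down to comparing content hashes, so a returning visitor only re-downloads the index when it actually changed. A minimal sketch of that decision (the manifest and cache-entry shapes here are illustrative assumptions, not the actual Seek.js format):

```javascript
// Decide whether a locally cached index is still current by comparing
// content hashes from a tiny manifest fetched alongside the site.
function needsRefresh(manifest, cached) {
  if (!cached) return true;             // nothing cached yet: fetch the index
  return cached.hash !== manifest.hash; // index changed on deploy: refetch
}

const manifest = { hash: "a1b2c3", url: "/search/index.msp" };

// First visit: no cached copy, so pull the index from the CDN.
console.log(needsRefresh(manifest, null)); // true

// Repeat visit with a matching hash: serve search from the local cache.
console.log(needsRefresh(manifest, { hash: "a1b2c3" })); // false
```

Because the manifest is tiny, checking it on page load costs far less than re-downloading the full index.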
Hybrid search in-browser
BM25 plus vectors run in WASM with IndexedDB caching so retrieval stays local and fast.
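The "hybrid" part can be as simple as a weighted blend of each chunk's lexical and vector scores before ranking. A sketch of that fusion step (the weighting and score normalization are assumptions for illustration; Seek.js's internal scoring may differ):

```javascript
// Blend a lexical (BM25) score with a vector cosine similarity.
// alpha weights the lexical side; both inputs are assumed normalized to [0, 1].
function hybridScore(bm25, cosine, alpha = 0.5) {
  return alpha * bm25 + (1 - alpha) * cosine;
}

// Rank chunks by the blended score, highest first.
function hybridRank(chunks, alpha = 0.5) {
  return chunks
    .map((c) => ({ ...c, score: hybridScore(c.bm25, c.cosine, alpha) }))
    .sort((a, b) => b.score - a.score);
}

const ranked = hybridRank([
  { url: "/docs/install", bm25: 0.9, cosine: 0.2 },
  { url: "/docs/search", bm25: 0.4, cosine: 0.95 },
]);
console.log(ranked[0].url); // "/docs/search" at alpha = 0.5
```

Tuning alpha toward 1 favors exact keyword matches; toward 0 it favors semantic similarity.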
Edge AI when you need it
Stream cited answers from Workers only after local search returns chunks—no LLM on every keystroke.
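The "no LLM on every keystroke" guarantee is essentially a gate: build an edge request only when local retrieval has produced usable chunks. A sketch of that gating logic (the payload shape, the `minChunks` threshold, and the cap of five chunks are assumptions):

```javascript
// Build an edge request only when local search returned enough material;
// otherwise return null and the UI keeps showing plain search results.
function buildAnswerRequest(query, chunks, minChunks = 1) {
  if (!query.trim() || chunks.length < minChunks) return null;
  return {
    query,
    // Send only the top chunks, not the whole index, to keep the call small.
    context: chunks.slice(0, 5).map(({ text, url }) => ({ text, url })),
  };
}

// Keystrokes with no local hits never reach the edge.
console.log(buildAnswerRequest("msp", [])); // null

// An explicit "answer" request with hits produces a compact payload.
const req = buildAnswerRequest("what is .msp?", [
  { text: "The .msp file is the compiled search index.", url: "/docs/msp" },
]);
console.log(req.context.length); // 1
```

Returning the chunk URLs alongside the text is what lets the edge response cite its sources.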
The Seek.js pipeline,
from source to answer
Build-time extraction, binary index generation, browser-side search, and edge inference remain separate, so each layer stays cheap and fast.
import { extractHtml } from "@seekjs/parser";

const stream = extractHtml({
  inputDir: "./dist",
  urlBase: "https://yoursite.com",
  selectors: ["article", "main"],
});

for await (const batch of stream) {
  // batch: { text, url, hash }[]
  await compiler.push(batch);
}

- Parse at build time: extract semantic chunks from your generated site and bind them to source URLs.
- Compile to .msp: vectorize chunks and serialize a compact index for CDN delivery.
- Search in the browser: cache the index in IndexedDB and run hybrid search locally.
- Stream edge summaries: send top chunks to the edge only when an AI answer is requested.
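The compile-and-load steps above hinge on packing chunks plus vectors into one binary file. A toy roundtrip under stated assumptions (this JSON-header-plus-Float32 layout, and the `compileIndex`/`loadIndex` names, are illustrations, not the actual .msp format):

```javascript
// Serialize chunks and their vectors into one binary buffer.
// Layout: [u32 headerLen][JSON header][packed f32 vectors].
function compileIndex(chunks) {
  const header = new TextEncoder().encode(
    JSON.stringify(chunks.map(({ text, url }) => ({ text, url })))
  );
  const dim = chunks[0].vector.length;
  const vectors = new Float32Array(chunks.length * dim);
  chunks.forEach((c, i) => vectors.set(c.vector, i * dim));

  const buf = new ArrayBuffer(4 + header.length + vectors.byteLength);
  new DataView(buf).setUint32(0, header.length, true);
  new Uint8Array(buf, 4, header.length).set(header);
  new Uint8Array(buf, 4 + header.length).set(new Uint8Array(vectors.buffer));
  return buf;
}

// Read the buffer back into { text, url, vector } records.
function loadIndex(buf, dim) {
  const headerLen = new DataView(buf).getUint32(0, true);
  const meta = JSON.parse(
    new TextDecoder().decode(new Uint8Array(buf, 4, headerLen))
  );
  const vectors = new Float32Array(buf.slice(4 + headerLen));
  return meta.map((m, i) => ({
    ...m,
    vector: Array.from(vectors.subarray(i * dim, (i + 1) * dim)),
  }));
}

const buf = compileIndex([
  { text: "Hello from the docs.", url: "/docs", vector: [0.1, 0.2] },
]);
console.log(loadIndex(buf, 2)[0].url); // "/docs"
```

Packing vectors as raw Float32 keeps the file compact and lets the browser view them without any per-chunk parsing.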
Built to remove the vector database tax
Seek.js disaggregates the RAG pipeline into parser, compiler, client, and edge reasoning modules so docs and product search stay fast, portable, and cheap to run.
- No runtime databases
- In-browser search
- Static index shipped