Disaggregated AI search for modern applications

Seek.js shifts indexing to build-time, search to the browser, and reasoning to the edge so teams can ship AI search without the vector database tax.

npm install @seekjs/core

Compatible with

Node.js
Bun
Deno
React
Next.js
Vue
Svelte
Nuxt
Astro
Vite
Remix
Solid
Made for your website

The global AI Search Widget

Static .msp on the CDN

Ship the compiled index beside your HTML. It is cached at the edge like any other static asset, with no query-time database.
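Because the index is just a static file, it can be given aggressive cache headers like any other hashed asset. A minimal sketch using the `_headers` convention supported by Netlify and Cloudflare Pages (the index path is an assumption for illustration):

```
# Cache the compiled search index for a year; redeploys ship a new file.
/search/index.msp
  Cache-Control: public, max-age=31536000, immutable
```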

Hybrid search in-browser

BM25 plus vectors run in WASM with IndexedDB caching so retrieval stays local and fast.
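Hybrid retrieval means merging a lexical ranking with a vector ranking. One standard way to do that, reciprocal rank fusion, needs only the two ranked id lists. A minimal sketch, not the Seek.js API (function names are illustrative):

```javascript
// Reciprocal rank fusion: merge two ranked lists of document ids.
// Each doc accumulates 1 / (k + rank) for every list it appears in;
// k = 60 is the conventional damping constant from the RRF literature.
function fuseRanks(bm25Ids, vectorIds, k = 60) {
  const scores = new Map();
  for (const ids of [bm25Ids, vectorIds]) {
    ids.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  // Highest fused score first.
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

Documents that rank well in either list surface near the top, without having to normalize BM25 scores against cosine similarities.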

Edge AI when you need it

Stream cited answers from Workers only after local search returns chunks, so no LLM call fires on every keystroke.
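The gating logic is simple to state: the edge is contacted only when the user explicitly asks for an answer and local retrieval has produced chunks. A sketch of that guard as a pure function; the endpoint and payload shape are assumptions, not the Seek.js wire format:

```javascript
// Build an edge request only when an answer was requested and local
// search produced results; otherwise return null and make no network call.
function buildEdgeRequest(query, chunks, { answerRequested = false, topK = 5 } = {}) {
  if (!answerRequested || chunks.length === 0) return null;
  return {
    url: "/api/answer", // hypothetical Worker route
    body: {
      query,
      // Send only the best-scoring chunks, keeping URLs so the
      // streamed answer can cite its sources.
      context: chunks
        .slice()
        .sort((a, b) => b.score - a.score)
        .slice(0, topK)
        .map(({ text, url }) => ({ text, url })),
    },
  };
}
```

Keystroke-by-keystroke search never reaches this function; only an explicit "ask AI" action sets `answerRequested`.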

Pipeline

The Seek.js pipeline,
from source to answer

Build-time extraction, binary index generation, browser-side search, and edge inference remain separate, so each layer stays cheap and fast.

import { extractHtml } from "@seekjs/parser";

const stream = extractHtml({
  inputDir: "./dist",
  urlBase: "https://yoursite.com",
  selectors: ["article", "main"],
});

for await (const batch of stream) {
  // batch: { text, url, hash }[]
  // `compiler` is the .msp index builder from the next pipeline step.
  await compiler.push(batch);
}
Parse at build time
Extract semantic chunks from your generated site and bind them to source URLs.
Compile to .msp
Vectorize chunks and serialize a compact index for CDN delivery.
Search in the browser
Cache the index in IndexedDB and run hybrid search locally.
Stream edge summaries
Send top chunks to the edge only when an AI answer is requested.
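The browser-side step above reduces to scoring each chunk twice, lexically and by vector, then merging. A self-contained toy version with no WASM and none of the Seek.js API; the scoring is deliberately simplified:

```javascript
// Cosine similarity between two dense vectors of equal length.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Toy hybrid search: term-overlap score plus cosine similarity,
// merged with a fixed weight alpha. Illustrative only.
function hybridSearch(queryTerms, queryVec, chunks, alpha = 0.5) {
  return chunks
    .map((chunk) => {
      const terms = new Set(chunk.text.toLowerCase().split(/\W+/));
      const lexical =
        queryTerms.filter((t) => terms.has(t)).length / queryTerms.length;
      const semantic = cosine(queryVec, chunk.vec);
      return { ...chunk, score: alpha * lexical + (1 - alpha) * semantic };
    })
    .sort((a, b) => b.score - a.score);
}
```

A production index replaces the term-overlap term with BM25 and stores quantized vectors, but the shape of the computation is the same.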
Why Seek.js

Built to remove the vector database tax

Seek.js disaggregates the RAG pipeline into parser, compiler, client, and edge reasoning modules so docs and product search stay fast, portable, and cheap to run.

Zero runtime databases
<15 ms in-browser search
~600 KB static index shipped

Contribute to Seek.js

Issues, design feedback, and docs PRs are welcome on GitHub; chat with contributors on Discord.