Deployment

Use querymode/local to query files directly from disk:

```ts
import { QueryMode } from "querymode/local"

const qm = QueryMode.local()
const result = await qm
  .table("./data/events.parquet")
  .filter("amount", "gt", 100)
  .collect()
```

No server, no network — reads files directly via LocalExecutor.

To deploy to Cloudflare Workers, declare the Worker entry, R2 bucket, Durable Object bindings, and WASM rule in `wrangler.toml`:

```toml
name = "querymode"
main = "src/worker.ts"
compatibility_date = "2025-12-01"
compatibility_flags = ["nodejs_compat"]

[[r2_buckets]]
binding = "DATA_BUCKET"
bucket_name = "querymode-data"

[durable_objects]
bindings = [
  { name = "MASTER_DO", class_name = "MasterDO" },
  { name = "QUERY_DO", class_name = "QueryDO" },
  { name = "FRAGMENT_DO", class_name = "FragmentDO" },
]

[[migrations]]
tag = "v1"
new_classes = ["MasterDO", "QueryDO"]

[[migrations]]
tag = "v2"
new_classes = ["FragmentDO"]

[[rules]]
type = "CompiledWasm"
globs = ["**/*.wasm"]
fallthrough = false
```
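These bindings surface on the Worker's `env` object. A minimal entry-point sketch is shown below; the binding field types are stand-ins (a real Worker would use `@cloudflare/workers-types`), and the routing shown is illustrative, not QueryMode's actual dispatch logic.

```typescript
// Sketch of a Worker entry matching the bindings declared above.
// Field types are stubs (assumptions); real code would import
// R2Bucket / DurableObjectNamespace from @cloudflare/workers-types.
interface Env {
  DATA_BUCKET: unknown
  MASTER_DO: unknown
  QUERY_DO: unknown
  FRAGMENT_DO: unknown
}

const worker = {
  async fetch(request: Request, _env: Env): Promise<Response> {
    const url = new URL(request.url)
    // Answer a basic health check; real routing would dispatch to the DOs.
    if (url.pathname === "/health") {
      return Response.json({ ok: true })
    }
    return new Response("not found", { status: 404 })
  },
}
// In src/worker.ts this object would be the module's default export.
```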

At PB scale, a single R2 bucket hits rate limits. Add shard buckets for 2-4x throughput:

```toml
# wrangler.toml — add alongside the primary DATA_BUCKET
[[r2_buckets]]
binding = "DATA_BUCKET_1"
bucket_name = "querymode-data-shard-1"

[[r2_buckets]]
binding = "DATA_BUCKET_2"
bucket_name = "querymode-data-shard-2"

[[r2_buckets]]
binding = "DATA_BUCKET_3"
bucket_name = "querymode-data-shard-3"
```

QueryMode automatically distributes tables across buckets using FNV-1a hash routing on the R2 key prefix (first path segment, typically the table name). All DOs (Master, Query, Fragment) and the Worker use the same deterministic routing — no configuration needed beyond binding the extra buckets.
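The routing described above can be sketched as follows. The FNV-1a constants are the standard 32-bit offset basis and prime; QueryMode's internal modulo step and treatment of the prefix are assumptions here.

```typescript
// Sketch of deterministic shard routing: FNV-1a (32-bit) over the first
// path segment of the R2 key, modulo the number of bound buckets.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5 // FNV-1a 32-bit offset basis
  for (const byte of new TextEncoder().encode(input)) {
    hash ^= byte
    hash = Math.imul(hash, 0x01000193) >>> 0 // FNV prime, kept unsigned
  }
  return hash >>> 0
}

// Pick a bucket index from the key's first path segment (the table name).
function pickBucket(r2Key: string, bucketCount: number): number {
  const prefix = r2Key.split("/")[0]
  return fnv1a(prefix) % bucketCount
}
```

Because every component hashes the same prefix with the same function, the Worker and all DOs agree on the shard without any coordination.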

Build and deploy:

```sh
pnpm build && wrangler deploy
```
The deployed Worker exposes the following HTTP endpoints:

| Endpoint | Method | Description |
| --- | --- | --- |
| `/health` | GET | Health check (`?deep=true` for full diagnostics) |
| `/query` | POST | Execute query, return JSON rows |
| `/query/stream` | POST | Stream columnar results |
| `/query/count` | POST | Count matching rows |
| `/query/exists` | POST | Check if any rows match |
| `/query/first` | POST | First matching row |
| `/query/explain` | POST | Execution plan |
| `/tables` | GET | List registered tables |
| `/meta?table=X` | GET | Table metadata |
| `/upload?key=X` | POST | Direct R2 file upload (dev mode only, requires `DEV_MODE` env) |
| `/write` | POST | Write rows |
| `/refresh` | POST | Refresh metadata cache |
| `/register` | POST | Register table |
| `/register-iceberg` | POST | Register Iceberg table |
Example `/query` request body:

```json
{
  "table": "events",
  "filters": [
    { "column": "amount", "op": "gt", "value": 100 }
  ],
  "projections": ["id", "amount", "region"],
  "sortColumn": "amount",
  "sortDirection": "desc",
  "limit": 20
}
```
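A client could POST a body like that to the `/query` endpoint as sketched below; the base URL, the helper names, and the error handling are illustrative assumptions, not part of QueryMode's API.

```typescript
// Hypothetical client helper that builds the request body shown above
// and POSTs it to a deployed Worker's /query endpoint.
interface Filter { column: string; op: string; value: unknown }

interface QueryRequest {
  table: string
  filters?: Filter[]
  projections?: string[]
  sortColumn?: string
  sortDirection?: "asc" | "desc"
  limit?: number
}

function buildQuery(table: string, filters: Filter[]): QueryRequest {
  return {
    table,
    filters,
    projections: ["id", "amount", "region"],
    sortColumn: "amount",
    sortDirection: "desc",
    limit: 20,
  }
}

async function postQuery(baseUrl: string, req: QueryRequest): Promise<unknown> {
  const res = await fetch(`${baseUrl}/query`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(req),
  })
  if (!res.ok) throw new Error(`query failed: ${res.status}`)
  return res.json()
}
```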

Import operators directly for custom pipelines:

```ts
import {
  FilterOperator, AggregateOperator, TopKOperator,
  HashJoinOperator, WindowOperator, drainPipeline,
} from "querymode"
```

See the Operators page for the full operator reference.

Start a local server that accepts psql, DBeaver, Metabase, or any PostgreSQL-compatible client:

```sh
npx tsx src/pg-wire/server.ts
# psql -h localhost -p 5433 -U querymode
# SELECT * FROM './data/events.parquet' WHERE region = 'us' LIMIT 10;
```

See Postgres Wire Protocol for configuration, supported SQL, and programmatic embedding.

Local mode also drops into a server route handler:

```ts
import { QueryMode } from "querymode/local"

export async function GET(request: Request) {
  const url = new URL(request.url)
  const category = url.searchParams.get("category") ?? "Electronics"
  const qm = QueryMode.local()
  const result = await qm
    .table("./data/products.parquet")
    .filter("category", "eq", category)
    .sort("price", "asc")
    .limit(50)
    .collect()
  return Response.json(result.rows)
}
```
Development commands:

```sh
pnpm install          # install dependencies
pnpm dev              # local dev with wrangler (localhost:8787)
pnpm test             # run all tests (workerd + node)
pnpm test:workers     # workerd tests only
pnpm test:node        # node tests only (DuckDB conformance)
pnpm bench:local      # local micro-benchmarks
pnpm bench:operators  # QueryMode vs DuckDB (requires pnpm dev)
```