Playground
Paste any text and click Detect to see the spans the
openai/privacy-filter model finds. Click Redact to replace each
span with its category marker. Everything runs in your browser — the
text never leaves the page.
1. Configuration — change anything, then click Reload
⚠ The first click will download ~770 MB of model weights from the
Hugging Face Hub and cache them on this device (OPFS). Subsequent visits
load from the local cache.
Equivalent code — the same call you'd make from your own JS/TS project. It works in browsers and Node from the single textsift ESM entry; no separate textsift/browser import is needed.
Raw result — the full object your code receives. detect() returns { spans, summary, ... }; redact() returns the same plus redactedText and containsPii. Each span carries a label, start/end character offsets, the matched text, the marker it would be replaced with, and a confidence score from 0 to 1.
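To make that shape concrete, here is a sketch of consuming such an object. The result below is a hand-made sample, not real model output; only its shape follows the description above. applyMarkers is our own helper showing how the offsets and markers fit together — roughly what redact() does for you:

```javascript
// A hypothetical detect()-style result; the shape (spans with label,
// start/end offsets, text, marker, 0..1 confidence) matches the docs,
// the values are made up.
const input = "Contact me: jane@example.com today";
const result = {
  spans: [
    {
      label: "EMAIL",
      start: 12,
      end: 28,
      text: "jane@example.com",
      marker: "[EMAIL]",
      confidence: 0.97,
    },
  ],
  summary: { EMAIL: 1 },
};

// Splice each span's marker over its start..end offsets, walking
// right-to-left so earlier offsets stay valid after each replacement.
function applyMarkers(text, spans) {
  let out = text;
  for (const s of [...spans].sort((a, b) => b.start - a.start)) {
    out = out.slice(0, s.start) + s.marker + out.slice(s.end);
  }
  return out;
}

const redacted = applyMarkers(input, result.spans);
// → "Contact me: [EMAIL] today"
```

The confidence field lets you filter spans before applying markers, e.g. keep only spans with confidence above some threshold.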
4. Storage — model weights cached on this device
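If you want to see what the cached weights cost on a device, the browser's standard Storage API reports usage; the formatting helper below is our own, not part of textsift:

```javascript
// Render a StorageManager estimate as a human-readable line.
// navigator.storage.estimate() resolves to { usage, quota } in bytes.
function describeStorage({ usage, quota }) {
  const mb = (n) => (n / (1024 * 1024)).toFixed(1);
  return `${mb(usage)} MB used of ${mb(quota)} MB quota`;
}

// In the browser:
//   const est = await navigator.storage.estimate();
//   console.log(describeStorage(est));
```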
What’s happening
The first click loads the textsift JS bundle (~2.6 MB) and the
underlying model_q4f16.onnx from the Hugging Face Hub (~770 MB).
Subsequent clicks are served from the browser’s HTTP cache, so the
slow path only runs once per browser. WebGPU is auto-selected when
available; the Backends page covers fallbacks.
The default backend selection mirrors what
PrivacyFilter.create({}) does in your own code — see
Quickstart and the
API reference.
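A rough sketch of what that auto-selection amounts to. The function, its injected environment argument, and the "wasm" fallback (which this page doesn't name; see the Backends page) are all illustrative, not textsift's actual code:

```javascript
// Pick an execution backend: prefer WebGPU when the page exposes it,
// otherwise assume a WASM fallback. A real page would pass globalThis
// so that `navigator.gpu` is checked; here the environment is injected
// to keep the sketch testable outside a browser.
function pickBackend(env) {
  if (env.navigator && env.navigator.gpu) return "webgpu";
  return "wasm";
}

// pickBackend(globalThis) in a WebGPU-capable browser → "webgpu"
```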