Documentation Index
Fetch the complete documentation index at: https://docs.masker.dev/llms.txt
Use this file to discover all available pages before exploring further.

Masker exposes a standard OpenAI-compatible /v1/chat/completions endpoint. If your voice platform supports a “Custom LLM,” “Custom OpenAI-compatible base URL,” or similar setting, you can drop Masker into the request path without writing any code, installing an SDK, or adding extra headers. You paste one URL, and every conversation turn flows through the compliance firewall automatically.
How it works
Your voice platform sends POST /v1/chat/completions requests to Masker instead of OpenAI directly. Masker intercepts each request, tokenizes PHI in the messages array, forwards the masked request to your configured upstream model, and rehydrates tokens in the response before returning it to your platform. The request and response shapes are identical to the OpenAI Chat Completions API — your platform never needs to know Masker is in the middle.
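The cycle above can be sketched in a few lines. This is a toy illustration only: the token format, detection logic, and function names are assumptions for the example, not Masker's actual internals (a real firewall would detect PHI with NER or pattern rules rather than a hard-coded name).

```python
PHI_TOKENS = {}  # token -> original PHI value, held inside the firewall


def tokenize(text):
    """Swap a hard-coded example name for an opaque token (toy version)."""
    if "Jane Doe" in text:
        token = f"[PERSON_{len(PHI_TOKENS) + 1}]"
        PHI_TOKENS[token] = "Jane Doe"
        text = text.replace("Jane Doe", token)
    return text


def rehydrate(text):
    """Restore original PHI values in the upstream model's reply."""
    for token, value in PHI_TOKENS.items():
        text = text.replace(token, value)
    return text


# 1. The platform sends a user turn containing PHI; the proxy masks it.
masked_content = tokenize("Book Jane Doe for Tuesday")  # "Book [PERSON_1] for Tuesday"

# 2. Only the masked text is forwarded upstream; pretend this is the model's reply.
upstream_reply = f"Confirmed: {masked_content}"

# 3. Tokens are rehydrated before the response returns to the platform.
final_reply = rehydrate(upstream_reply)
print(final_reply)
```

The model only ever sees `[PERSON_1]`; the caller only ever sees “Jane Doe.”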
Base URL format
Your platform appends /chat/completions to this base URL automatically. Do not add the suffix yourself.
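The suffix behavior amounts to simple string concatenation. The base URL below is a placeholder for illustration; your real value comes from try.masker.dev or the Masker portal.

```python
# Placeholder base URL -- not a real Masker endpoint.
base_url = "https://proxy.masker.example/s/abc123"


def endpoint(base: str) -> str:
    """Most platforms append /chat/completions to the configured base URL."""
    return base.rstrip("/") + "/chat/completions"


print(endpoint(base_url))
# -> https://proxy.masker.example/s/abc123/chat/completions

# Pasting the suffix into the base URL field yourself yields a doubled path:
print(endpoint(base_url + "/chat/completions"))
# -> https://proxy.masker.example/s/abc123/chat/completions/chat/completions
```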
For a portal-provisioned agent, use the agent-specific URL shown in the Masker portal:
The /s/<your-session>/ path segment in the demo URL pins requests to the browser tab that minted the token, so events appear in the correct Masker dashboard session. In production, the {agent_id} segment in the portal URL serves the same scoping role.

Generic setup steps
Get your Masker URL
For the public demo, open try.masker.dev and copy the session URL shown for your tab. For a production deployment, open the Masker portal, create an agent, and copy the proxy URL from the agent detail page.
Find your platform's Custom LLM or base URL setting
Look for a setting labelled one of the following — the exact name varies by platform:
- Custom LLM URL
- Custom LLM base URL
- OpenAI-compatible base URL
- LLM Server URL
- Proxy URL
Paste the Masker base URL
Paste your Masker URL into the field. Do not append /chat/completions — your platform adds that automatically.

Set a model name
Enter any OpenAI-shaped model string in the model field, for example gpt-4o-mini. Masker forwards this value to your configured upstream provider — the string just needs to be valid for whatever model API you have set up on the Masker side.

Save and test
Save your platform’s configuration. Start a test conversation that includes PHI — a name, phone number, or date of birth. Check the Masker dashboard to confirm the tokenized version appears on the public side of the firewall, and verify in your LLM provider’s logs that only tokens — not raw PHI — reached the model.
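A test turn like the one described can also be composed by hand with any OpenAI-style client. The sketch below uses only the standard library; the base URL is a placeholder you would replace with the URL copied in step 1.

```python
import json
import urllib.request

# Placeholder -- substitute the base URL copied from try.masker.dev
# or the Masker portal.
MASKER_BASE = "https://proxy.masker.example/s/abc123"

payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a scheduling assistant."},
        # Deliberately include PHI so you can watch it get tokenized
        # in the Masker dashboard.
        {"role": "user",
         "content": "My name is Jane Doe, DOB 1980-02-14, phone 555-0137."},
    ],
}

req = urllib.request.Request(
    MASKER_BASE + "/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(req.full_url)
# Uncomment to actually send the test turn:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```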
What your platform needs to support
Masker works with any voice platform that meets these requirements:

| Requirement | Notes |
|---|---|
| Custom base URL or Custom LLM URL field | The platform must let you override the LLM endpoint |
| OpenAI Chat Completions request format | POST /v1/chat/completions with messages array |
| Automatic /chat/completions suffix (or accepts full URL) | Most platforms append it; some accept the full path |
Some platforms drive their model calls through webhook-style assistant-request events rather than a base URL override — if your platform has a similar webhook mechanism, contact hello@masker.dev to discuss native support.
Session token pinning
The /s/<token>/ segment in the URL path is how Masker ties a stream of requests to a specific session in the dashboard. Each browser tab on try.masker.dev mints its own token. In the Masker portal, each agent has a stable agent ID that serves the same purpose.
This means you can have multiple agents or sessions running simultaneously and each appears as a separate row in the dashboard — no request headers or cookies needed.
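If you ever need to read the token back out of a URL (for logging or debugging on your side), it is a simple path match. The pattern below assumes the /s/&lt;token&gt;/ shape described above and uses a made-up hostname.

```python
import re


def session_token(url: str):
    """Extract the /s/<token>/ segment from a demo-style Masker URL."""
    m = re.search(r"/s/([^/]+)", url)
    return m.group(1) if m else None


print(session_token("https://proxy.masker.example/s/abc123/chat/completions"))
# -> abc123
```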