Masker integrates with Vapi by acting as a drop-in Custom LLM. Vapi sends each conversation turn to Masker’s OpenAI-compatible proxy endpoint; Masker redacts PHI, forwards the masked prompt to your upstream model, then rehydrates tokens in the response before returning it to Vapi. The caller hears a natural reply. Your LLM provider only ever sees tokens.
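The redact-and-rehydrate round trip described above can be sketched in a few lines. This is illustrative only: Masker's real redaction runs server-side, and the detector, vault, and token format below are assumptions modeled on the example token shown later in this guide (`MSKV1.<entity_type>.<namespace>.<hex_id>`).

```python
import secrets

# Hypothetical detector output; Masker's actual PHI detection is internal.
PHI_VALUES = {"Sarah Johnson": "person_name"}

def redact(text: str, vault: dict) -> str:
    """Replace detected PHI with opaque tokens, remembering the mapping."""
    for value, entity in PHI_VALUES.items():
        if value in text:
            token = f"MSKV1.{entity}.K_HEALTHCARE.{secrets.token_hex(4)}"
            vault[token] = value
            text = text.replace(value, token)
    return text

def rehydrate(text: str, vault: dict) -> str:
    """Swap tokens back to real values before the reply reaches the caller."""
    for token, value in vault.items():
        text = text.replace(token, value)
    return text

vault = {}
masked = redact("Hi, my name is Sarah Johnson.", vault)
# The upstream LLM sees only the token, never the real name.
reply = rehydrate(masked.replace("Hi, my name is", "Nice to meet you,"), vault)
```

The upstream model answers in terms of the token; rehydration restores the real value only on the regulated side of the firewall.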
You need a Masker agent already created in the portal. If you haven’t done that yet, follow the Quickstart first — the portal creates the Vapi assistant for you automatically when you paste your Vapi API key.
Automatic setup (recommended)
The Masker portal configures Vapi on your behalf. When you create a new agent and paste your Vapi API key, Masker calls the Vapi API to create an assistant pre-wired with the correct Custom LLM URL and webhook URL. No manual steps are needed. After the wizard completes, skip to Verify the assistant settings to confirm everything looks correct.

Manual setup
If you want to connect Masker to an existing Vapi assistant, or if you prefer to configure Vapi yourself, follow these steps.

Sign in to the Masker portal and locate your agent
Open masker-voice.fly.dev/portal and find the agent you want to connect. Each agent has its own proxy URL and webhook URL shown on the agent detail page.
Copy the proxy URL
The proxy URL follows this format: `https://masker-voice.fly.dev/proxy/{agent_id}/v1/chat/completions`

Copy the full URL from the portal; the `{agent_id}` segment is unique to your agent. The `{agent_id}` path segment acts as a session-scoping token. Every request that hits this URL is attributed to this agent in your Masker dashboard.

Open your assistant in the Vapi dashboard
Go to dashboard.vapi.ai, open Assistants, and click the assistant you want to route through Masker.
Switch the model provider to Custom LLM
In the assistant settings, find the Model section. Change Provider from its current value to Custom LLM. A Custom LLM URL field appears below.
Paste the proxy URL
Paste the proxy URL you copied from the Masker portal into the Custom LLM URL field. Leave the model field as-is — Masker forwards the model name you set here to your upstream provider.
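On the wire, the Custom LLM hookup amounts to Vapi POSTing an OpenAI-style chat completion request to your per-agent proxy URL. The sketch below shows the shape of that request; the agent ID and model name are placeholders, and the `stream` flag reflects the common low-latency voice setup rather than anything specific to Masker.

```python
import json

AGENT_ID = "your-agent-id"  # placeholder; copy the real URL from the portal
PROXY_URL = f"https://masker-voice.fly.dev/proxy/{AGENT_ID}/v1/chat/completions"

payload = {
    # Whatever model you set in Vapi is forwarded unchanged to your upstream
    # provider; "gpt-4o" here is just an example value.
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful scheduling assistant."},
        {"role": "user", "content": "Hi, my name is Sarah Johnson."},
    ],
    "stream": True,
}
body = json.dumps(payload)
```

Because the endpoint is OpenAI-compatible, no Vapi-side changes are needed beyond pointing the Custom LLM URL at the proxy.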
Add the webhook URL
Copy the webhook URL for your agent from the Masker portal: `https://masker-voice.fly.dev/vapi/webhook/{agent_id}`

In the Vapi assistant settings, open Advanced and paste this URL into the Server URL (webhook) field. Masker uses this to receive assistant-request events and associate them with the correct session.

Verify the assistant settings
After setup, whether automatic or manual, open the assistant in the Vapi dashboard and confirm:

| Setting | Expected value |
|---|---|
| Model → Provider | Custom LLM |
| Custom LLM URL | https://masker-voice.fly.dev/proxy/{agent_id}/v1/chat/completions |
| Server URL (webhook) | https://masker-voice.fly.dev/vapi/webhook/{agent_id} |
If the Custom LLM URL is missing the /proxy/ prefix, the traffic is not passing through Masker.
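The checklist above can also be scripted against the assistant config fetched from Vapi's API. The field names used here (`model.provider`, `model.url`, `serverUrl`) are assumptions about the shape of Vapi's assistant JSON; confirm them against Vapi's API reference before relying on this check.

```python
# Assumed workflow: fetch the assistant JSON (e.g. GET /assistant/{id} on
# Vapi's API with your Vapi API key), then run it through this checker.
def verify_assistant(assistant: dict, agent_id: str) -> list[str]:
    """Return a list of problems; an empty list means the wiring looks right."""
    problems = []
    model = assistant.get("model", {})
    if model.get("provider") != "custom-llm":
        problems.append("Model provider is not Custom LLM")
    expected_proxy = (
        f"https://masker-voice.fly.dev/proxy/{agent_id}/v1/chat/completions"
    )
    if model.get("url") != expected_proxy:
        problems.append("Custom LLM URL does not point at the Masker proxy")
    expected_webhook = f"https://masker-voice.fly.dev/vapi/webhook/{agent_id}"
    if assistant.get("serverUrl") != expected_webhook:
        problems.append("Server URL (webhook) is not the Masker webhook")
    return problems
```

An empty result mirrors a green run through the table above; any entry in the list names the row that needs fixing.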
Place a test call
Use Vapi’s Talk to Assistant button on the assistant detail page, or dial the phone number attached to the assistant. Say something that includes PHI, for example:

“Hi, my name is Sarah Johnson. My date of birth is March 14, 1982, and my member ID is BCBS-447299.”

The assistant should respond naturally. To verify redaction, open platform.openai.com/logs and inspect the most recent chat completion. The input should contain tokens like `MSKV1.person_name.K_HEALTHCARE.a3f9...` rather than the caller’s real name or date of birth.
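If you want to eyeball logs programmatically, a rough pattern check works. The regex below is inferred from the single example token above, not from Masker's formal token grammar, so treat it as an approximation.

```python
import re

# Approximate shape of a Masker token, inferred from the example
# MSKV1.person_name.K_HEALTHCARE.a3f9...: MSKV1.<entity>.<NAMESPACE>.<hex>
TOKEN_RE = re.compile(r"MSKV1\.[a-z_]+\.[A-Z_]+\.[0-9a-f]+")

def looks_redacted(logged_input: str) -> bool:
    """True if the logged prompt contains at least one Masker-style token."""
    return bool(TOKEN_RE.search(logged_input))
```

A prompt that passes this check still warrants a manual skim; the absence of tokens, however, is a strong sign the call bypassed the proxy.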
Back in the Masker portal, open Sessions to see the live transcript split into regulated (left) and public (right) sides of the firewall.