API Overview
Base URL, authentication, response format, and endpoint reference.
Base URL
All API requests are sent to:
https://api.noirdoc.de
The proxy endpoints live under the /v1 prefix, matching the OpenAI API convention.
Authentication
Proxy API keys authenticate requests to the /v1/* endpoints. Every proxy key starts with the px- prefix, making it easy to distinguish from provider keys.
You can send the key in any of these headers — Noirdoc checks them in order:
Authorization: Bearer px-your-key
x-api-key: px-your-key
api-key: px-your-key
The header format also determines the provider routing:
| Header | Provider |
|---|---|
| `Authorization: Bearer px-...` | OpenAI / OpenRouter |
| `x-api-key: px-...` | Anthropic |
| `api-key: px-...` | Azure OpenAI |
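The routing table above can be sketched as a small helper that picks the right header for a target provider. The function name and the provider strings are illustrative, not part of the Noirdoc API; only the header names and the `px-` key prefix come from the docs.

```python
def auth_header(provider: str, proxy_key: str) -> dict[str, str]:
    """Return the request header that routes to the given provider.

    Mirrors the routing table above; this helper is a sketch,
    not part of the Noirdoc API surface.
    """
    if not proxy_key.startswith("px-"):
        raise ValueError("proxy keys start with 'px-'")
    if provider in ("openai", "openrouter"):
        return {"Authorization": f"Bearer {proxy_key}"}
    if provider == "anthropic":
        return {"x-api-key": proxy_key}
    if provider == "azure":
        return {"api-key": proxy_key}
    raise ValueError(f"unknown provider: {provider!r}")
```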
Proxy keys are scoped to a tenant. Each key inherits the tenant’s provider configuration and pseudonymization settings. You can create and manage proxy keys through the Portal or via environment variables in self-hosted deployments.
Authentication errors
Missing or invalid credentials return HTTP 401:
{
"error": {
"type": "auth_error",
"code": "unauthorized",
"message": "Invalid or expired API key."
}
}
Content types
All requests and responses use JSON:
Content-Type: application/json
File upload endpoints (/v1/files) accept multipart/form-data. Streaming responses use text/event-stream (Server-Sent Events).
Response format
Proxy endpoints return responses in the exact format of the upstream LLM provider. For example, a call to /v1/chat/completions returns the standard OpenAI chat completion object. Noirdoc is transparent — it pseudonymizes the request, forwards it, restores the response, and passes it through unchanged.
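Because the proxy is format-transparent, a request looks exactly like one sent to the provider directly, just pointed at the Noirdoc base URL. A minimal sketch of assembling such a call (the model name in any real call is whatever your provider expects; sending the request is left to your HTTP client):

```python
import json

BASE_URL = "https://api.noirdoc.de"

def build_chat_request(proxy_key: str, model: str, messages: list[dict]):
    """Assemble a /v1/chat/completions call in the standard OpenAI
    request shape. The response arrives in the standard OpenAI chat
    completion shape, with pseudonyms already restored."""
    url = f"{BASE_URL}/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {proxy_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body
```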
Error responses
All errors follow a consistent envelope:
{
"error": {
"type": "proxy_error",
"code": "provider_not_configured",
"message": "No API key configured for provider 'openai'."
}
}
The `type` groups errors, `code` identifies the specific error, and `message` is human-readable. See Error Codes for the full list.
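Since every error uses this envelope, client-side handling reduces to one small parser. A sketch (the function name is ours, the envelope fields are from the docs):

```python
import json

def parse_proxy_error(body: str) -> tuple[str, str, str]:
    """Extract (type, code, message) from a Noirdoc error envelope."""
    err = json.loads(body)["error"]
    return err["type"], err["code"], err["message"]
```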
Provider auto-detection
You do not need to tell Noirdoc which provider you are using. It determines the provider from:
- Authentication header — `Authorization: Bearer` (OpenAI/OpenRouter) vs `x-api-key` (Anthropic) vs `api-key` (Azure)
- Endpoint path — `/v1/chat/completions` (OpenAI) vs `/v1/messages` (Anthropic)
- Model identifier — provider-prefixed names like `anthropic/claude-sonnet-4-6` indicate OpenRouter
- Query parameters — `api-version` indicates Azure OpenAI
This means you can switch providers by changing the model name and header — no other configuration changes required.
Rate limiting
Noirdoc enforces rate limits on proxy endpoints. When you exceed the limit, the API returns HTTP 429 Too Many Requests with a Retry-After header.
Streaming
Proxy endpoints support streaming when the upstream provider supports it. Pass "stream": true in your request body. Pseudonymization and restoration happen on each chunk in real time. See Streaming for details.
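Since streamed responses are Server-Sent Events, each chunk arrives as a `data:` line carrying a JSON payload. A sketch of decoding one line, assuming the OpenAI-style `data: [DONE]` terminator:

```python
import json

def parse_sse_line(line: str):
    """Decode one text/event-stream line from a streamed response.
    Returns the chunk's JSON payload, or None for non-data lines
    and the terminal 'data: [DONE]' sentinel (OpenAI convention)."""
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return None
    return json.loads(payload)
```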
Endpoint groups
| Prefix | Purpose | Auth |
|---|---|---|
/v1/* | LLM proxy — forward requests to providers | Proxy API key (px-*) |
/v1/pseudonymize | Standalone PII pseudonymization | Proxy API key |
/v1/detect | Standalone PII detection | Proxy API key |
/portal/* | Self-service key & provider management | JWT (Managed Service) |
/admin/* | Multi-tenant administration | JWT admin (Managed Service) |
Next steps
- Explore the Proxy Endpoints to route LLM traffic
- Review all Error Codes the proxy can return