# Governance Proxy
wl-proxy is a transparent governance proxy built on Pingora that intercepts agent API calls for policy evaluation and credential injection.
## How It Works
```
Agent → proxy-openai:9443 → Caddy (TLS) → wl-proxy
  ├─ Extract request metadata (model, action, agent identity)
  ├─ Evaluate Cedar policies via wl-apdp
  │   ├─ observe mode: log decision, always forward
  │   └─ enforce mode: deny if unauthorized
  ├─ Request credentials from wl-secrets-broker
  │   ├─ SCT issued (Ed25519-signed, single-use)
  │   └─ API key injected into request headers
  ├─ Forward to upstream (api.openai.com, api.anthropic.com)
  ├─ Scrub credentials from response
  └─ Log full audit trail
```
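The steps above can be sketched as one request pipeline. This is a minimal illustrative sketch in Python, not the actual implementation (wl-proxy is built on Pingora, in Rust); the function and record names are hypothetical, the policy engine and secrets broker are stubbed as a callable and a dict, and the upstream call is faked.

```python
def handle_request(request, mode, policy, vault, audit_log):
    """Sketch of the wl-proxy request pipeline (illustrative, not the real code)."""
    # 1. Extract request metadata.
    meta = {"model": request["model"], "action": request["action"],
            "agent": request["agent"]}

    # 2. Evaluate policy (stand-in for Cedar evaluation via wl-apdp).
    allowed = policy(meta)
    if not allowed and mode == "enforce":
        audit_log.append({**meta, "decision": "deny", "status": 403})
        return {"status": 403, "body": "request denied by policy"}

    # 3. Fetch the credential (stand-in for the SCT flow via wl-secrets-broker)
    #    and inject it into the outbound headers.
    api_key = vault[meta["agent"]]
    headers = {"Authorization": f"Bearer {api_key}"}

    # 4. Forward to the upstream API (stubbed: echo the auth header back).
    upstream = {"status": 200, "body": f"ok auth={headers['Authorization']}"}

    # 5. Scrub the injected credential from the response.
    upstream["body"] = upstream["body"].replace(api_key, "[REDACTED]")

    # 6. Append the full audit record.
    audit_log.append({**meta,
                      "decision": "allow" if allowed else "observed-deny",
                      "status": upstream["status"]})
    return upstream
```

Note how observe mode still runs the full pipeline and logs the deny decision, while enforce mode short-circuits before any credential is fetched.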
## Agent Configuration
Route API calls through the proxy by changing the base URL:
```yaml
services:
  my-agent:
    environment:
      OPENAI_BASE_URL: https://proxy-openai:9443/v1
      REQUESTS_CA_BUNDLE: /certs/beacon-ca.crt
      # No OPENAI_API_KEY needed
    volumes:
      - ./beacon-ca.crt:/certs/beacon-ca.crt:ro
    networks: [default, beacon-partner]
```
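From the agent's side, only the connection target changes. A stdlib-only sketch of what an agent's HTTP layer derives from these environment variables (`proxy_target` is a hypothetical helper, not part of any SDK); the point is what's absent, namely an Authorization header:

```python
from urllib.parse import urlparse

def proxy_target(env):
    """Derive connection settings from the proxy env vars (hypothetical helper)."""
    base = urlparse(env["OPENAI_BASE_URL"])
    return {
        "host": base.hostname,
        "port": base.port or 443,
        "path": base.path + "/chat/completions",
        "ca_bundle": env.get("REQUESTS_CA_BUNDLE"),
        # No Authorization header here: wl-proxy injects the real
        # API key before forwarding the request upstream.
        "headers": {"Content-Type": "application/json"},
    }

env = {
    "OPENAI_BASE_URL": "https://proxy-openai:9443/v1",
    "REQUESTS_CA_BUNDLE": "/certs/beacon-ca.crt",
}
target = proxy_target(env)
```

The CA bundle is needed because the agent terminates TLS against the proxy's Caddy-issued certificate rather than the upstream provider's.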
For Anthropic:
```yaml
environment:
  ANTHROPIC_BASE_URL: https://proxy-anthropic:9443/v1
  REQUESTS_CA_BUNDLE: /certs/beacon-ca.crt
```
## Enforcement Modes
| Mode | Behavior |
|---|---|
| `observe` (default) | Log all decisions, never deny requests |
| `enforce` | Deny unauthorized requests with HTTP 403 |
Switch modes by re-running the bootstrap script; the selected mode is saved to `.beacon-env`.
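The two modes differ only in how a policy deny is handled, which a small decision function can capture (a sketch; `apply_mode` is an illustrative name, not an actual wl-proxy function):

```python
def apply_mode(mode, policy_allows):
    """Map a policy decision to a proxy action under the given mode.

    Returns (forward, deny_status): observe always forwards and only logs
    the decision; enforce rejects unauthorized requests with HTTP 403.
    """
    if mode == "enforce" and not policy_allows:
        return False, 403   # deny: short-circuit before credential injection
    return True, None       # forward; the decision is still logged either way
```

Running in observe mode first lets you review would-be denials in the audit trail before flipping to enforce.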
## Audit Trail
Every proxied request generates an audit record including:
- Agent identity and request metadata
- Cedar policy evaluation result (allow/deny, which policies applied)
- Secret access record (which credentials were injected)
- Upstream response status
- Timestamps for each step
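Putting those fields together, a record might look like the following. The field names and values here are illustrative assumptions, not wl-proxy's actual schema:

```python
import json

# Illustrative audit record; every field name below is an assumption.
record = {
    "agent": "my-agent",
    "request": {"model": "gpt-4o", "action": "chat.completions"},
    "policy": {"decision": "allow", "matched": ["allow-chat-for-my-agent"]},
    "secret_access": {"credential": "openai-api-key", "sct": "single-use"},
    "upstream_status": 200,
    "timestamps": {
        "received": "2025-01-01T00:00:00.000Z",
        "policy_evaluated": "2025-01-01T00:00:00.002Z",
        "forwarded": "2025-01-01T00:00:00.004Z",
        "responded": "2025-01-01T00:00:00.250Z",
    },
}
line = json.dumps(record)  # e.g. one JSON line per proxied request
```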
## Credential Scrubbing
wl-proxy scrubs injected credentials from responses to prevent accidental exposure. API keys never appear in agent-visible responses or logs.
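One common way to implement this is pattern-based redaction over response bodies before they reach the agent. A sketch, with the caveat that the key patterns below are assumptions; the source doesn't specify the scrubber's actual rules:

```python
import re

# Hypothetical credential patterns; real provider key formats vary.
_KEY_PATTERNS = re.compile(r"\b(sk-[A-Za-z0-9_-]{8,}|Bearer\s+\S+)")

def scrub(text):
    """Replace anything that looks like an injected credential."""
    return _KEY_PATTERNS.sub("[REDACTED]", text)
```

In practice the proxy also knows exactly which key it injected, so exact-match replacement can back up the pattern-based pass.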