Prova Gateway

Proof-carrying AI inference.

One URL change routes your existing OpenAI client through Prova. Every completion arrives with a formal certificate of logical soundness, or a precise diagnosis of where reasoning fails.

p50 overhead: <180ms
fail-open: always returns 200 unless strict policy is set
streaming: supported (verdict delivered in a trailer header)
providers: 6 supported

One line of code

Point your OpenAI client at the Prova gateway. Your API key, your model, your prompts -- nothing else changes.

Before

const openai = new OpenAI({
  baseURL: "https://api.openai.com/v1",
  apiKey: process.env.OPENAI_API_KEY,
})

After

const openai = new OpenAI({
  baseURL: "https://api.prova.cobound.dev/v1",
  apiKey: process.env.OPENAI_API_KEY,
  defaultHeaders: {
    "X-Prova-Key": process.env.PROVA_API_KEY,
    "X-Prova-Policy": "flag",
  },
})

Python

import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.prova.cobound.dev/v1",
    api_key=os.environ["OPENAI_API_KEY"],
    default_headers={
        "X-Prova-Key": os.environ["PROVA_API_KEY"],
        "X-Prova-Policy": "flag",
    },
)

raw = client.chat.completions.with_raw_response.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "..."}],
)
resp = raw.parse()

# The Prova verdict is always in the response headers
print(raw.headers["X-Prova-Verdict"])      # VALID | INVALID | UNVERIFIED
print(raw.headers["X-Prova-Certificate"])  # e.g. PRV-2026-5573

Enforcement policies

Set X-Prova-Policy per request or as a default header. Different endpoints in the same application can use different policies.
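Per-endpoint policy selection can be sketched as a small helper that builds the per-request header; the risk-tier names below are illustrative, not part of Prova:

```python
# Map endpoint risk tiers to X-Prova-Policy values. Tier names are ours;
# the policy values (observe / flag / strict) come from the Prova docs.
POLICY_BY_TIER = {
    "analytics": "observe",   # dashboards, audit trails
    "default": "flag",        # production chat endpoints
    "compliance": "strict",   # loan approvals, clinical recommendations
}

def prova_headers(tier: str = "default") -> dict:
    """Build per-request headers; unknown tiers fall back to flag."""
    return {"X-Prova-Policy": POLICY_BY_TIER.get(tier, "flag")}
```

Pass the result through the OpenAI SDK's `extra_headers` keyword on a single call, e.g. `client.chat.completions.create(..., extra_headers=prova_headers("compliance"))`, to override the client-wide default header for that request.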

observe (default)

Attach the Prova verdict as a response header and JSON sidecar. The original response is never altered. Use this to build dashboards and audit trails without touching your serving path.

X-Prova-Policy: observe
{
  "verdict": "VALID",
  "confidence_score": 97,
  "certificate_id": "PRV-2026-5573"
}
flag (recommended for production)

For INVALID responses, append a structured warning block to the completion text. Upstream callers see the full response plus an explicit flag. The HTTP status is always 200 — your error handling does not need to change.

X-Prova-Policy: flag
[PROVA WARNING]
failure_type: CIRCULAR
"The conclusion validates the premise."
certificate: PRV-2026-BDC2
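Downstream code may want to separate the appended warning from the completion text. A minimal parser, assuming the warning block matches the format in the example above exactly:

```python
import re

# Detect and split the [PROVA WARNING] block appended under the flag policy.
# The field layout is inferred from the documented example; treat it as an
# assumption, not a guaranteed wire format.
WARNING_RE = re.compile(
    r"\n?\[PROVA WARNING\]\n"
    r"failure_type: (?P<failure_type>\S+)\n"
    r"\"(?P<explanation>[^\"]*)\"\n"
    r"certificate: (?P<certificate>\S+)\s*$"
)

def split_prova_warning(completion_text: str):
    """Return (clean_text, warning_fields) -- warning_fields is None if absent."""
    m = WARNING_RE.search(completion_text)
    if not m:
        return completion_text, None
    return completion_text[: m.start()].rstrip(), m.groupdict()
```

This lets you show users the clean completion while routing the structured warning fields to logs or a review queue.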
strict (compliance gating)

INVALID responses return HTTP 422 with a structured error body. Use this at compliance-critical decision points: loan approvals, clinical recommendations, legal analysis. The model call still completes; Prova blocks delivery.

X-Prova-Policy: strict
HTTP 422
{
  "error": "reasoning_invalid",
  "failure_type": "UNSUPPORTED_LEAP",
  "certificate_id": "PRV-2026-3DC6"
}
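In a client, the 422 surfaces as a normal HTTP error (the OpenAI Python SDK raises UnprocessableEntityError for this status). A small helper for rendering the error body shown above; the message wording is ours, not Prova's:

```python
# Format the strict-policy 422 error body for logs or a user-facing message.
# Field names (failure_type, certificate_id) come from the documented body;
# the phrasing of the message is illustrative.
def describe_prova_block(error_body: dict) -> str:
    """Turn the 422 error body into a single diagnostic line."""
    return (
        f"Delivery blocked by Prova: {error_body.get('failure_type', 'UNKNOWN')} "
        f"(certificate {error_body.get('certificate_id', 'n/a')})"
    )
```

Typical use: catch the SDK's 422 exception, pass its parsed body to this helper, and decide whether to retry, escalate to human review, or fail the workflow step.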

Fail-open guarantee

If Prova verification fails for any reason -- timeout, extraction error, validator overload -- the gateway returns the original model response unmodified with verdict UNVERIFIED. HTTP status is always 200 (unless you set strict policy). Your application never fails because of Prova. SLA for verification: 99.5% of calls complete within 3s of the model response finishing.

UNVERIFIED responses include X-Prova-Reason explaining why verification was skipped.
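Verdict triage using the headers documented above can be sketched as follows; the "deliver"/"review" actions are placeholders for your own handling:

```python
# Route a completion based on its Prova verdict headers. Header names come
# from the docs above; absent headers are treated as the fail-open case.
def triage_verdict(headers: dict) -> str:
    verdict = headers.get("X-Prova-Verdict", "UNVERIFIED")
    if verdict == "VALID":
        return "deliver"
    if verdict == "INVALID":
        return "review"  # under flag policy, a warning block is already appended
    # Fail-open path: original response passed through unmodified.
    reason = headers.get("X-Prova-Reason", "unspecified")
    return f"deliver (unverified: {reason})"
```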

Provider matrix

Prova is OpenAI-compatible. Any provider that exposes the /v1/chat/completions interface works -- pass X-Prova-Upstream to target non-OpenAI providers.

Provider | Models | Status
OpenAI | gpt-4o, gpt-4o-mini, o3 | GA
Anthropic | claude-sonnet-4-6, claude-opus-4-7 | GA
Azure OpenAI | All deployments via /openai/deployments/* | GA
Mistral | mistral-large, mistral-small | beta
Groq | llama-3.3-70b, mixtral-8x7b | beta
Together AI | Any OpenAI-compatible endpoint | beta

Targeting Anthropic directly

curl https://api.prova.cobound.dev/v1/chat/completions \
  -H "Authorization: Bearer $ANTHROPIC_API_KEY" \
  -H "X-Prova-Key: $PROVA_API_KEY" \
  -H "X-Prova-Upstream: https://api.anthropic.com/v1" \
  -H "X-Prova-Policy: flag" \
  -H "Content-Type: application/json" \
  -d '{"model": "claude-sonnet-4-6", "messages": [...]}'
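From Python, targeting an alternate upstream amounts to setting the same three headers. A sketch using Groq's public OpenAI-compatible base URL as the upstream; the helper name is ours:

```python
import os

# Build Prova routing headers for an OpenAI-compatible upstream. Pass the
# result as default_headers when constructing the OpenAI client (with the
# upstream's own API key as api_key).
def upstream_headers(upstream: str, policy: str = "flag") -> dict:
    return {
        "X-Prova-Key": os.environ.get("PROVA_API_KEY", ""),
        "X-Prova-Upstream": upstream,
        "X-Prova-Policy": policy,
    }

headers = upstream_headers("https://api.groq.com/openai/v1")
```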

Latency characteristics

Prova runs verification concurrently with response streaming. For non-streaming calls, the overhead is the verify step only -- the model call itself is not slowed.

Metric | Value | Note
p50 verify overhead | <180ms | added to end of model response
p95 verify overhead | <600ms | extraction is the long tail
p99 verify overhead | <2s | complex reasoning chains
Streaming TTFB delta | 0ms | tokens stream through unblocked; verdict in trailer
Fail-open timeout | 3s | after model finishes; then UNVERIFIED
