Models

Verify reasoning from any model, not just the ones you trained.

Prova operates on the reasoning text, not the model. Route any LLM through the Gateway or submit transcripts directly. You get the same certificate format, the same failure taxonomy, and the same audit artifact across every provider.
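A direct transcript submission could be sketched as below. The function name, field names, and `provider` values are illustrative assumptions, not Prova's documented API; the only idea taken from the text above is that one payload shape covers every provider.

```python
import json

def build_submission(provider: str, model: str, reasoning: str) -> str:
    """Package reasoning text from any provider into one
    provider-agnostic payload (field names are hypothetical)."""
    payload = {
        "provider": provider,   # informational only; verification reads the text
        "model": model,
        "transcript": {"reasoning": reasoning},
    }
    return json.dumps(payload)

# The same shape works for a frontier API or a self-hosted checkpoint.
anthropic_job = build_submission("anthropic", "claude-sonnet-4-6", "Step 1: ...")
local_job = build_submission("self-hosted", "my-llama-checkpoint", "Step 1: ...")
```

Because nothing in the payload is model-specific beyond metadata, the downstream certificate, failure taxonomy, and audit artifact can be identical across providers.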

Anthropic

  • claude-opus-4-7
  • claude-sonnet-4-6
  • claude-haiku-4-5

OpenAI

  • gpt-5
  • gpt-4.1
  • o4-mini

Google

  • gemini-2.5-pro
  • gemini-2.5-flash

Meta

  • llama-4-maverick
  • llama-4-scout

Mistral

  • mistral-large-2
  • mixtral-8x22b

Open weights, self-hosted

  • any model that emits reasoning text

Why model-agnostic verification matters

Every training-time intervention (Constitutional AI, RLHF, fine-tuning) is bound to the model that received it. Runtime verification is not. Prova issues the same signed certificate whether the reasoning came from a frontier API, an open-weights checkpoint you serve yourself, or a legacy model that predates any of today's alignment work.
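Because the certificate is issued over the reasoning text rather than anything model-specific, checking one is the same operation regardless of where the reasoning came from. A minimal sketch, assuming (purely for illustration) an HMAC-signed JSON certificate; the text does not specify Prova's actual signature scheme or certificate layout:

```python
import hashlib
import hmac
import json

def check_certificate(cert: dict, key: bytes) -> bool:
    """Recompute the signature over the certificate body and compare.
    The body/signature layout and HMAC-SHA256 are illustrative assumptions."""
    body = json.dumps(cert["body"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

key = b"demo-key"
body = {"verdict": "pass", "model": "any-model-at-all"}
cert = {
    "body": body,
    "signature": hmac.new(key, json.dumps(body, sort_keys=True).encode(),
                          hashlib.sha256).hexdigest(),
}
```

The check never consults the model field, so the same verifier handles certificates for a frontier API, a self-hosted checkpoint, or a legacy model alike.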

ROUTE YOUR FIRST MODEL