Policy Library
Regulatory Coverage
Every Prova certificate automatically evidences the regulatory controls it satisfies. Below is the full map across 14 frameworks and 30 controls.
EU AI Act (6 controls)
High-risk AI systems must implement a risk management system that continuously identifies, analyzes, and evaluates risks throughout the lifecycle.
High-risk AI systems must automatically log events with traceability throughout the system's lifetime.
High-risk AI systems must be sufficiently transparent so users can interpret outputs and use them appropriately.
High-risk AI systems must be designed with human oversight measures enabling detection and intervention when anomalies occur.
High-risk AI systems must achieve appropriate levels of accuracy and be robust against errors, faults, and inconsistencies.
Providers of high-risk AI must implement a quality management system covering the full system lifecycle with documented controls.
FDA 21 CFR 11 (2 controls)
Systems must have computer-generated audit trails that record operator actions and changes to electronic records, with time/date stamps.
Additional controls for open systems including document encryption and use of established standards for reliability of data.
FDA 21 CFR 820 (1 control)
Manufacturers must establish documented procedures to control design of the device to ensure specified design requirements are met.
SEC (1 control)
Broker-dealers with market access must establish, document, and maintain risk controls preventing orders that exceed pre-set limits or are erroneous.
SOC 2 (3 controls)
Entities must implement controls to monitor system components for anomalies that could indicate malicious acts, natural disasters, or errors.
Entities must evaluate and respond to identified security events to achieve the entity's objectives.
Entities select, develop, and perform ongoing assessments of vendors and business partners, ensuring they meet the entity's controls.
NIST AI RMF (4 controls)
AI system performance and goals are measured, and performance is documented with quantified uncertainty.
The AI risk or impact metrics reflect organizational risk tolerance and include explainability, interpretability, and accountability.
Accountability for AI risk is clear and processes are in place to achieve it throughout the AI lifecycle.
Organizational risk tolerance is set and documented for AI risks, including bias, safety, and security.
ISO 42001 (2 controls)
The organization shall apply the AI risk assessment process to identify risks associated with AI systems.
The organization shall determine what needs to be monitored, the methods, and when results shall be analyzed and evaluated.
HIPAA (2 controls)
Implement hardware, software, and procedural mechanisms that record and examine activity in information systems containing ePHI.
Conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of ePHI.
GDPR (2 controls)
Data subjects have the right not to be subject to decisions based solely on automated processing, including profiling, with legal or similarly significant effects.
Personal data shall be processed in a manner that ensures appropriate security of the personal data.
MAS FEAT (2 controls)
Financial institutions using AI/ML models must ensure models are explainable and outcomes can be understood by stakeholders.
Financial institutions must maintain audit trails for model decisions and ensure models can be audited.
FINRA (1 control)
Member firms must establish and maintain a system to supervise the activities of each associated person and automated systems.
PCI DSS (1 control)
Audit logs must capture all individual user access, all actions taken by privileged users, and use of identification and authentication mechanisms.
CCPA (1 control)
Businesses using automated decision-making must provide meaningful information about the logic and likely outcomes.
DORA (2 controls)
Financial entities shall identify, classify and adequately document all ICT supported business functions, roles and responsibilities.
Financial entities shall put in place mechanisms to promptly detect anomalous activities, including ICT network performance issues and ICT-related incidents.
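The coverage map above is naturally a framework-to-controls mapping. As a minimal sketch (illustrative only, not Prova's actual data model or API), it can be represented as a plain dictionary whose totals match the figures stated at the top of the page:

```python
# Hypothetical representation of the coverage map above:
# framework name -> number of evidenced controls.
# Counts are taken directly from the listing in this section.
COVERAGE = {
    "EU AI Act": 6,
    "FDA 21 CFR 11": 2,
    "FDA 21 CFR 820": 1,
    "SEC": 1,
    "SOC 2": 3,
    "NIST AI RMF": 4,
    "ISO 42001": 2,
    "HIPAA": 2,
    "GDPR": 2,
    "MAS FEAT": 2,
    "FINRA": 1,
    "PCI DSS": 1,
    "CCPA": 1,
    "DORA": 2,
}

def totals(coverage: dict[str, int]) -> tuple[int, int]:
    """Return (number of frameworks, total number of controls)."""
    return len(coverage), sum(coverage.values())

frameworks, controls = totals(COVERAGE)
print(frameworks, controls)  # 14 30
```

A certificate-to-control mapping would then attach evidence records to individual control entries; the flat counts here just confirm the 14-framework, 30-control totals.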