Results for "compliance"
Processes and controls for data quality, access, lineage, retention, and compliance across the AI lifecycle.
Tracking where data came from and how it was transformed; key for debugging and compliance.
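Lineage tracking of this kind can be sketched as a graph where each derived dataset records its parents and the transformation that produced it; the dataset and transform names below are illustrative, not from any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    """A node in a lineage graph: a dataset plus how it was derived."""
    name: str
    transform: str = "source"          # "source" marks an original input
    parents: list["Dataset"] = field(default_factory=list)

    def lineage(self) -> list[str]:
        """Walk back to the original sources, listing each derivation step."""
        steps = []
        for parent in self.parents:
            steps.extend(parent.lineage())
        steps.append(f"{self.name} <- {self.transform}")
        return steps

# Example chain: raw events are cleaned, then aggregated.
raw = Dataset("raw_events")
clean = Dataset("clean_events", "drop_nulls", [raw])
agg = Dataset("daily_agg", "group_by_day", [clean])
```

Calling `agg.lineage()` replays the full derivation chain back to the source, which is the record a debugger or auditor needs.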
Systematic review of model/data processes to ensure performance, fairness, security, and policy compliance.
AI deployed in sensitive domains (e.g., healthcare, finance, hiring) where regulatory compliance is required.
Information that can identify an individual (directly or indirectly); requires careful handling and compliance.
Policies and practices for approving, monitoring, auditing, and documenting models in production.
Central catalog of deployed and experimental models.
AI used without governance approval.
AI-assisted review of legal documents.
Automated detection/prevention of disallowed outputs (toxicity, self-harm, illegal instructions, etc.).
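A minimal sketch of such a guardrail is a pattern-based screen run on candidate outputs before they reach the user; the categories and patterns below are illustrative placeholders, not a real denylist (production systems typically use classifiers, not regexes):

```python
import re

# Illustrative placeholder patterns only, keyed by violation category.
BLOCKED_PATTERNS = {
    "toxicity": re.compile(r"\b(badword1|badword2)\b", re.IGNORECASE),
    "self_harm": re.compile(r"how to harm (myself|yourself)", re.IGNORECASE),
}

def screen_output(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, violated_categories) for a candidate model output."""
    violations = [name for name, pattern in BLOCKED_PATTERNS.items()
                  if pattern.search(text)]
    return (len(violations) == 0, violations)
```

Returning the violated categories, not just a boolean, lets the caller log the reason for refusal, which matters for the audit and documentation requirements listed elsewhere in these results.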
Techniques to understand model decisions (global or local), important in high-stakes and regulated settings.
Central system to store model versions, metadata, approvals, and deployment state.
A discipline ensuring AI systems are fair, safe, transparent, privacy-preserving, and accountable throughout their lifecycle.
Framework for identifying, measuring, and mitigating model risks.
Categorizing AI applications by impact and regulatory risk.
Logged record of model inputs, outputs, and decisions.
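One way to make such a log tamper-evident is to hash-chain the entries, so altering any record breaks verification; the field names below are illustrative assumptions:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only log of model calls; each entry hashes its own content
    and links to the previous entry's hash."""
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def log(self, model: str, inputs: dict, outputs: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else ""
        body = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "model": model,
            "inputs": inputs,
            "outputs": outputs,
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash and check the chain linkage."""
        prev = ""
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if expected != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

The chain means an auditor can detect after-the-fact edits to any input, output, or timestamp, not just deleted entries.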
Legal or policy requirement to explain AI decisions.
Extracting system prompts or hidden instructions.
European regulation classifying AI systems by risk.
Required descriptions of model behavior and limits.
Ability to inspect and verify AI decisions.
Privacy risk analysis under GDPR-like laws.
Central log of AI-related risks.
Review process before deployment.
Classifying models by impact level.
Governance of model changes.
US approval process for medical AI devices.
Legal right to fair treatment.
Quantifying financial risk.
The maximum loss not expected to be exceeded at a given confidence level over a given time horizon, under normal market conditions.
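The historical-simulation variant of this measure can be sketched in a few lines: sort observed returns and read off the loss at the (1 - confidence) quantile; the return series used here is illustrative:

```python
def historical_var(returns: list[float], confidence: float = 0.95) -> float:
    """Historical Value-at-Risk: magnitude of the return at the
    (1 - confidence) quantile of the observed distribution."""
    losses = sorted(returns)                 # ascending: worst returns first
    idx = int((1 - confidence) * len(losses))
    return -losses[idx]                      # report the loss as a positive number
```

With 20 daily returns and 95% confidence, the index lands on the second-worst observation, so a sample whose two worst days were -10% and -8% yields a VaR of 8%.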