Results for "transparency"
Transparency Obligation
Intermediate
Requirement to inform users about AI use.
Transparency obligation means that companies using AI must be open about how their systems work: they need to explain to users what the AI is doing and why it makes particular decisions. For example, if an AI system denies a loan application, the company should provide a clear explanation of the reasons behind the denial.
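To make the loan example concrete, here is a minimal sketch of how a lender with an interpretable scoring model could generate the explanation a transparency obligation calls for. Everything here is hypothetical: the feature names, weights, approval threshold, and the `explain_denial` helper are illustrative, not part of any real lending system.

```python
# Hypothetical sketch: turning an interpretable credit model's output into
# the plain-language reasons a transparency obligation requires.
# Feature names, weights, and the approval threshold are all illustrative.

FEATURE_WEIGHTS = {
    "debt_to_income_ratio": -2.0,    # higher ratio lowers the score
    "years_of_credit_history": 0.5,  # longer history raises the score
    "recent_missed_payments": -1.5,  # missed payments lower the score
}
BIAS = 1.0
APPROVAL_THRESHOLD = 0.0

def score(applicant: dict) -> float:
    """Linear score: bias plus weight * value for each feature."""
    return BIAS + sum(w * applicant[f] for f, w in FEATURE_WEIGHTS.items())

def explain_denial(applicant: dict) -> list[str]:
    """List the features that pushed the score down, worst first."""
    negatives = sorted(
        (w * applicant[f], f)
        for f, w in FEATURE_WEIGHTS.items()
        if w * applicant[f] < 0
    )
    return [f"{name} reduced your score by {abs(c):.2f}" for c, name in negatives]

applicant = {
    "debt_to_income_ratio": 0.6,
    "years_of_credit_history": 2,
    "recent_missed_payments": 1,
}
if score(applicant) < APPROVAL_THRESHOLD:
    for reason in explain_denial(applicant):
        print(reason)
```

With a model that is transparent by design, the explanation is a direct readout of the scoring logic; for opaque models, post-hoc attribution methods such as SHAP (sketched further below) play the same role.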
Other results:
- Ensuring decisions can be explained and traced.
- Techniques to understand model decisions (global or local), important in high-stakes and regulated settings.
- Studying internal mechanisms or input influence on outputs (e.g., saliency maps, SHAP, attention analysis); see the SHAP sketch after this list.
- Policies and practices for approving, monitoring, auditing, and documenting models in production.
- Standardized documentation describing intended use, performance, limitations, data, and ethical considerations; see the model-card sketch after this list.
- Structured dataset documentation covering collection, composition, recommended uses, biases, and maintenance.
- Ability to replicate results given the same code and data; harder with distributed training and nondeterministic ops. See the seeding sketch after this list.
- A discipline ensuring AI systems are fair, safe, transparent, privacy-preserving, and accountable throughout their lifecycle.
- Required human review for high-risk decisions.
- Central catalog of deployed and experimental models.
- Logged record of model inputs, outputs, and decisions; see the audit-trail sketch after this list.
- Legal or policy requirement to explain AI decisions.
- Models whose weights are publicly available.
- European regulation classifying AI systems by risk.
- AI used in sensitive domains requiring compliance.
- International standard for AI risk management.
- Required descriptions of model behavior and limits.
- AI used without governance approval.
- Assigning AI costs to business units.
- Models estimating recidivism risk.
- Legal right to fair treatment.
- Ensuring models comply with lending fairness laws.
- Credit models with interpretable logic.
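For the feature-attribution entry above (saliency maps, SHAP, attention analysis), here is a minimal SHAP sketch. It assumes the shap and scikit-learn packages are installed; the synthetic data and model are purely illustrative.

```python
# Sketch of local feature attribution with SHAP; assumes the shap and
# scikit-learn packages. The synthetic data and model are illustrative.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # three synthetic features
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles; each value is
# one feature's additive contribution to this prediction versus the baseline.
explainer = shap.TreeExplainer(model)
print(explainer.shap_values(X[:1]))           # shape (1, 3): one value per feature
```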
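For the model-card entry, a sketch of the same documentation captured as structured data; the field names mirror the categories named in the entry, and every value is invented for illustration.

```python
# Sketch of a model card captured as structured data. The field names mirror
# the categories in the entry above; every value is invented for illustration.
import json

model_card = {
    "model": "credit-v1.2",
    "intended_use": "Pre-screening consumer credit applications.",
    "out_of_scope_uses": "Employment or housing decisions.",
    "performance": {"auc": 0.81, "evaluated_on": "held-out 2023 applications"},
    "limitations": "Trained on one region; accuracy may degrade elsewhere.",
    "training_data": "Anonymized applications, 2019-2023.",
    "ethical_considerations": "Audited for disparate impact across protected groups.",
}
print(json.dumps(model_card, indent=2))
```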
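For the reproducibility entry, a common seeding pattern, sketched here for PyTorch. As the entry notes, distributed training and nondeterministic ops can still break bitwise reproducibility, so treat this as necessary rather than sufficient.

```python
# A common seeding pattern, sketched for PyTorch. Seeding is necessary but
# not sufficient: distributed training, GPU nondeterminism, and library
# version changes can still break bitwise reproducibility.
import random
import numpy as np
import torch

SEED = 42
random.seed(SEED)          # Python's built-in RNG
np.random.seed(SEED)       # NumPy's global RNG
torch.manual_seed(SEED)    # PyTorch CPU and CUDA RNGs

# Prefer deterministic kernels; raises if an op has no deterministic variant.
# On CUDA this also requires CUBLAS_WORKSPACE_CONFIG=:4096:8 in the environment.
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.benchmark = False
```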
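Finally, for the audit-trail entry, a sketch of an append-only decision log as JSON lines. The field names, file path, and `log_decision` helper are hypothetical.

```python
# Sketch of an audit trail as an append-only JSON-lines log of model inputs,
# outputs, and decisions. Field names and the file path are hypothetical.
import json
import uuid
from datetime import datetime, timezone

AUDIT_LOG = "decisions.jsonl"

def log_decision(model_version: str, inputs: dict, output: float, decision: str) -> str:
    """Append one decision record and return its id for later tracing."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "decision": decision,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")    # one record per line, append-only
    return record["id"]

record_id = log_decision("credit-v1.2", {"debt_to_income_ratio": 0.6}, -0.7, "denied")
print(f"logged decision {record_id}")
```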