Results for "traceable decisions"
Ability to inspect and verify AI decisions.
Central system to store model versions, metadata, approvals, and deployment state.
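A minimal in-memory sketch of what such a registry might track. The `ModelRegistry` class, its method names, and the example model are invented for illustration, not any particular tool's API:

```python
import datetime

class ModelRegistry:
    """Tiny in-memory registry: versions, metadata, approvals, deployment state."""

    def __init__(self):
        self._models = {}  # name -> {version -> record}

    def register(self, name, version, metadata):
        record = {
            "metadata": metadata,
            "approved": False,
            "deployed": False,
            "registered_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        self._models.setdefault(name, {})[version] = record
        return record

    def approve(self, name, version, approver):
        record = self._models[name][version]
        record["approved"] = True
        record["approver"] = approver

    def deploy(self, name, version):
        record = self._models[name][version]
        if not record["approved"]:
            raise RuntimeError("cannot deploy an unapproved model version")
        record["deployed"] = True

# Usage: register -> approve -> deploy, so deployment state is always traceable.
registry = ModelRegistry()
registry.register("credit-scorer", "1.2.0", {"framework": "sklearn", "auc": 0.81})
registry.approve("credit-scorer", "1.2.0", approver="risk-team")
registry.deploy("credit-scorer", "1.2.0")
```

The gate in `deploy` is the key design point: deployment is impossible without a recorded approval, which gives every production model a traceable chain of custody.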
Techniques to understand model decisions (global or local), important in high-stakes and regulated settings.
Requirement to reveal AI usage in legal decisions.
Ensuring decisions can be explained and traced.
Required human review for high-risk decisions.
Legal or policy requirement to explain AI decisions.
Models estimating recidivism risk.
Credit models with interpretable logic.
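One common form of interpretable credit logic is a points-based scorecard: each rule is human-readable, so every score can be traced back to the attributes that produced it. A minimal sketch, where the rules, thresholds, and point values are invented for illustration:

```python
# Hypothetical scorecard: (label, rule, points). Every contribution is auditable.
SCORECARD = [
    ("income >= 50000",        lambda a: a["income"] >= 50000,      40),
    ("no missed payments",     lambda a: a["missed_payments"] == 0, 35),
    ("credit history >= 5 yr", lambda a: a["history_years"] >= 5,   25),
]

def score(applicant):
    """Return the total score plus the list of rules that fired."""
    total, reasons = 0, []
    for label, rule, points in SCORECARD:
        if rule(applicant):
            total += points
            reasons.append((label, points))
    return total, reasons

total, reasons = score({"income": 62000, "missed_payments": 0, "history_years": 3})
# total == 75: 40 (income) + 35 (payment history); the history rule did not fire.
```

Because the output is a sum of named rule contributions, an adverse decision can be explained by listing exactly which rules fired and which did not.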
Early signals disproportionately influence outcomes.
Decisions dependent on others’ actions.
A subfield of AI where models learn patterns from data to make predictions or decisions, improving with experience rather than explicit rule-coding.
A learning paradigm where an agent interacts with an environment and learns to choose actions to maximize cumulative reward.
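A toy instance of this paradigm is tabular Q-learning on a small chain environment. The environment, reward scheme, and hyperparameters below are invented for the sketch:

```python
import random
random.seed(0)

# Toy deterministic chain: states 0..4, actions 0 (left) / 1 (right);
# reward 1 only on reaching state 4 (the goal). All values are illustrative.
N_STATES, GOAL = 5, 4
alpha, gamma = 0.5, 0.9  # learning rate, discount factor

def step(state, action):
    nxt = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]

# Off-policy Q-learning: behave randomly, update toward the greedy value.
for _ in range(500):
    s, done = 0, False
    while not done:
        a = random.randrange(2)                  # exploratory behavior policy
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])    # temporal-difference update
        s = s2

# After training, the greedy policy moves right toward the rewarding state.
```

The agent never sees the transition rule directly; it learns action values purely from sampled experience, which is the "maximize cumulative reward by interaction" idea in miniature.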
Studying internal mechanisms or input influence on outputs (e.g., saliency maps, SHAP, attention analysis).
Policies and practices for approving, monitoring, auditing, and documenting models in production.
System design where humans validate or guide model outputs, especially for high-stakes decisions.
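A minimal sketch of one such design: predictions below a confidence threshold are not acted on automatically but routed to a human review queue. The threshold value and record fields are illustrative:

```python
# Review gate: auto-apply the model's output only when confidence is high;
# everything else waits for a human. Threshold is an assumed example value.
REVIEW_THRESHOLD = 0.9

def decide(prediction, confidence):
    if confidence >= REVIEW_THRESHOLD:
        return {"decision": prediction, "source": "model"}
    return {"decision": "pending", "source": "human_review_queue"}
```

The design choice is that the default path is human review: automation must earn its way past the gate, rather than humans having to catch automated mistakes after the fact.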
Logged record of model inputs, outputs, and decisions.
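Such a trail is often kept as append-only structured records, one per decision. A sketch using JSON Lines; the field names and model identifier are illustrative:

```python
import datetime
import io
import json

def log_decision(stream, model_version, inputs, output):
    """Append one timestamped JSON record per decision (JSON Lines format)."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    stream.write(json.dumps(record, sort_keys=True) + "\n")

# Usage: in production the stream would be an append-only file or log service.
trail = io.StringIO()
log_decision(trail, "credit-scorer:1.2.0", {"income": 62000}, "approve")
```

Recording the model version alongside inputs and outputs is what lets a later audit replay exactly which model produced which decision.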
Number of steps considered in planning.
Agent reasoning about future outcomes.
Sample mean converges to expected value.
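This is the law of large numbers, and a quick Monte Carlo check makes it concrete: for a fair six-sided die the expected value is 3.5, and the sample mean closes in on it as the sample grows:

```python
import random
random.seed(42)

# Fair die: E[X] = (1+2+3+4+5+6)/6 = 3.5.
def sample_mean(n):
    return sum(random.randint(1, 6) for _ in range(n)) / n

# Small samples wander; large samples settle near the expected value.
means = {n: sample_mean(n) for n in (10, 1_000, 100_000)}
```

With 100,000 draws, the standard error of the mean is roughly 0.005, so the estimate sits very close to 3.5.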
Requirement to inform users about AI use.
Central log of AI-related risks.
Requirement to provide explanations.
Assigning AI costs to business units.
External sensing of surroundings (vision, audio, lidar).
RL in which the agent learns values or a policy directly from experience, without an explicit dynamics model.
RL that plans or learns using a known or learned model of the environment.
Learned model of environment dynamics.
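The contrast between these settings can be sketched in a few lines: fit a tabular dynamics model from observed transitions, then roll out trajectories purely inside the learned model, never touching the real environment. The five-state environment below is invented for illustration:

```python
# Hidden "real" dynamics the planner is not allowed to call at planning time.
def true_env(state, action):
    return (state + action) % 5

# "Learn" a deterministic tabular dynamics model from one observed
# transition per (state, action) pair.
model = {}
for s in range(5):
    for a in (0, 1):
        model[(s, a)] = true_env(s, a)

def rollout(state, actions):
    """Predict a trajectory using only the learned model."""
    trajectory = [state]
    for a in actions:
        state = model[(state, a)]
        trajectory.append(state)
    return trajectory

# Model-based planning: evaluate action sequences inside the learned model.
predicted = rollout(0, [1, 1, 1])
```

In a model-free setting the agent would have no `model` dictionary at all; it would estimate values from sampled transitions of `true_env` directly.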
AI systems assisting clinicians with diagnosis or treatment decisions.
Patient agreement to AI-assisted care.