Results for "high-risk"
High-Risk AI System
Intermediate: AI used in sensitive domains requiring compliance.
High-risk AI systems are types of artificial intelligence that can have serious consequences if they fail. For example, AI used in medical devices or self-driving cars is considered high-risk because mistakes could harm people. Because of this, there are strict rules that these systems must follow.
A datastore optimized for similarity search over embeddings, enabling semantic retrieval at scale.
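The similarity search described above can be sketched with exact cosine scoring in NumPy (a toy illustration; real vector databases use approximate indexes, and `top_k`, the store contents, and the query values here are all hypothetical):

```python
import numpy as np

def top_k(query, store, k=3):
    """Indices of the k stored embeddings most similar to the query (cosine)."""
    q = query / np.linalg.norm(query)
    s = store / np.linalg.norm(store, axis=1, keepdims=True)
    return np.argsort(-(s @ q))[:k]

# Toy store of four 3-d embeddings; a real index would hold millions.
store = np.array([[1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.05, 0.0])
print(top_k(query, store, k=2))   # the two rows nearest the query
```

Normalizing both sides turns the dot product into cosine similarity, which is what "semantic retrieval" ranks by.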
Techniques to understand model decisions (global or local), important in high-stakes and regulated settings.
System design where humans validate or guide model outputs, especially for high-stakes decisions.
A model is PAC-learnable if it can, with high probability, learn an approximately correct hypothesis from finite samples.
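The definition above can be written out formally (standard PAC-learning notation, not specific to this glossary): a hypothesis class is PAC-learnable if for every accuracy ε and confidence δ there is a sample size m polynomial in 1/ε and 1/δ such that

```latex
\Pr_{S \sim D^m}\!\left[\operatorname{err}_D(h_S) \le \epsilon\right] \ge 1 - \delta,
\qquad m \ge \operatorname{poly}(1/\epsilon,\, 1/\delta),
```

where \(h_S\) is the hypothesis the learner returns on sample \(S\) and \(\operatorname{err}_D\) is its error on the data distribution \(D\).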
Learns the score function (∇ log p(x)) of the data distribution for generative sampling.
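As a toy illustration of sampling with only the score (the values are made up; a standard Gaussian is used because its score −x is known in closed form), unadjusted Langevin dynamics needs the score and nothing else:

```python
import numpy as np

def langevin_sample(score, x0, step=0.01, n_steps=2000, seed=0):
    """Unadjusted Langevin dynamics: x <- x + step*score(x) + sqrt(2*step)*noise.
    Only the score function is needed, never the density itself."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x + step * score(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Score of a standard Gaussian N(0, I) is -x; samples should drift toward 0.
samples = np.array([langevin_sample(lambda x: -x, [5.0, 5.0], seed=s)
                    for s in range(200)])
print(samples.mean(axis=0))   # close to [0, 0]
```

Score-based generative models learn ∇ log p(x) from data and then run a sampler like this (or a more refined variant) against the learned score.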
Diffusion performed in latent space for efficiency.
Flat, high-dimensional regions of the loss surface where gradients are near zero, slowing training.
Applying patterns learned during training to inputs where they do not actually hold.
Predicted probabilities that do not reflect the true likelihood of being correct.
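A simple check for this (a minimal sketch; the function name and bin count are illustrative) is to bin predictions by confidence and compare mean confidence to observed accuracy in each bin:

```python
import numpy as np

def bucket_calibration(confidences, correct, n_bins=5):
    """Per confidence bin, compare mean predicted confidence to observed accuracy.
    A well-calibrated model has the two roughly equal in every bin."""
    conf = np.asarray(confidences, dtype=float)
    hits = np.asarray(correct, dtype=float)
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    report = []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            report.append((float(conf[mask].mean()), float(hits[mask].mean())))
    return report

# A model that says "90% sure" but is right only 25% of the time is miscalibrated.
print(bucket_calibration([0.9, 0.9, 0.9, 0.9], [1, 0, 0, 0]))
```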
High-fidelity virtual model of a physical system.
A function measuring prediction error (and sometimes calibration), guiding gradient-based optimization.
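As a minimal sketch of such a function guiding optimization (mean squared error chosen as the example; the values below are made up), one gradient step on the predictions lowers the loss:

```python
import numpy as np

def mse(y_pred, y_true):
    """Mean squared error: the average squared prediction error."""
    return float(np.mean((y_pred - y_true) ** 2))

def mse_grad(y_pred, y_true):
    """Gradient of MSE with respect to the predictions."""
    return 2.0 * (y_pred - y_true) / y_pred.size

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])

# One gradient step on the predictions lowers the loss.
y_new = y_pred - 0.5 * mse_grad(y_pred, y_true)
print(mse(y_pred, y_true), "->", mse(y_new, y_true))
```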
How well a model performs on new data drawn from the same (or similar) distribution as training.
Randomly zeroing activations during training to reduce co-adaptation and overfitting.
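A minimal sketch of inverted dropout (the common formulation, with survivors rescaled by 1/(1−p) so expected activations are unchanged; the array shape is illustrative):

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: zero each activation with probability p during training
    and rescale survivors by 1/(1-p) so expected activations are unchanged."""
    if not training or p == 0.0:
        return x
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(x.shape) >= p   # True = keep this activation
    return x * mask / (1.0 - p)

x = np.ones((4, 8))
out = dropout(x, p=0.5)   # entries are either 0.0 (dropped) or 2.0 (kept, rescaled)
```

At inference time (`training=False`) the layer is a no-op, which is why the rescaling happens during training rather than at test time.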
Breaking documents into pieces for retrieval; chunk size/overlap strongly affect RAG quality.
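The chunk size/overlap interplay can be shown with a character-level splitter (a toy sketch; real pipelines usually split on tokens or sentences, and `chunk_text` is a hypothetical helper):

```python
def chunk_text(text, size=10, overlap=3):
    """Split text into fixed-size chunks where consecutive chunks share
    `overlap` characters, so no passage is cut off at a chunk boundary."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "abcdefghijklmnopqrstuvwxyz"
print(chunk_text(doc, size=10, overlap=3))
```

Larger chunks preserve more context per retrieval hit; more overlap reduces boundary loss at the cost of redundant storage.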
Training across many devices/silos without centralizing raw data; aggregates updates, not data.
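Aggregating updates rather than data can be sketched as size-weighted federated averaging (FedAvg-style; the client weights and dataset sizes below are made up):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine locally trained weights, each weighted by the
    size of that client's dataset. Only weights move; raw data stays on-device."""
    total = float(sum(client_sizes))
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients report locally trained weight vectors and their dataset sizes.
weights = [np.array([1.0, 0.0]), np.array([3.0, 2.0]), np.array([2.0, 1.0])]
sizes = [10, 30, 60]
print(fedavg(weights, sizes))
```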
Constraining model outputs into a schema used to call external APIs/tools safely and deterministically.
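One way to enforce such a constraint (a minimal sketch; the schema and field names are hypothetical, not from any real API) is to validate the model's raw output against a schema before any external call is made:

```python
import json

# Hypothetical schema for a tool call; field names are illustrative only.
TOOL_SCHEMA = {"name": str, "arguments": dict}

def validate_tool_call(raw):
    """Parse the model's raw output and reject anything that does not match
    the schema, so only well-formed calls reach an external API."""
    call = json.loads(raw)
    for field, ftype in TOOL_SCHEMA.items():
        if not isinstance(call.get(field), ftype):
            raise ValueError(f"invalid or missing field: {field}")
    return call

ok = validate_tool_call('{"name": "get_weather", "arguments": {"city": "Paris"}}')
print(ok["name"])
```

Rejecting malformed output at this boundary is what makes downstream tool use deterministic: the API layer only ever sees calls that parse and type-check.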
Systematic review of model/data processes to ensure performance, fairness, security, and policy compliance.
Logged record of model inputs, outputs, and decisions.
Central catalog of deployed and experimental models.
Inferring sensitive features of training data.
Average value under a distribution.
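For a discrete distribution this is the probability-weighted sum (a minimal sketch; the die example is standard):

```python
import numpy as np

def expectation(values, probs):
    """E[X] = sum over x of x * p(x) for a discrete random variable."""
    probs = np.asarray(probs, dtype=float)
    assert np.isclose(probs.sum(), 1.0), "probabilities must sum to 1"
    return float(np.dot(values, probs))

# Fair six-sided die: expected value is 3.5.
print(expectation([1, 2, 3, 4, 5, 6], [1 / 6] * 6))
```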
Review process before deployment.
Process for managing AI failures.
Governance of model changes.
AI used without governance approval.
Learning a mapping from states to actions directly from expert demonstrations.
Ensuring robots do not harm humans.
Systems where failure causes physical harm.
US approval process for medical AI devices.
Software regulated as a medical device.