Results for "sensitivity to data"
Ability of a diagnostic test to correctly detect disease; the true positive rate.
Small prompt changes cause large output changes.
A conceptual framework describing error as the sum of systematic error (bias) and sensitivity to data (variance).
Error due to sensitivity to fluctuations in the training dataset.
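The two entries above combine in the standard squared-error decomposition. A sketch of that identity, writing $f$ for the true function, $\hat{f}_{\mathcal{D}}$ for the model fit on training set $\mathcal{D}$, and $\sigma^2$ for irreducible noise:

```latex
\mathbb{E}_{\mathcal{D}}\!\left[(y - \hat{f}_{\mathcal{D}}(x))^2\right]
  = \underbrace{\left(\mathbb{E}_{\mathcal{D}}[\hat{f}_{\mathcal{D}}(x)] - f(x)\right)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_{\mathcal{D}}\!\left[\left(\hat{f}_{\mathcal{D}}(x) - \mathbb{E}_{\mathcal{D}}[\hat{f}_{\mathcal{D}}(x)]\right)^2\right]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{noise}}
```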
Sensitivity of a function to input perturbations.
Plots true positive rate vs false positive rate across thresholds; summarizes separability.
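A minimal sketch of the threshold sweep described above, using NumPy; the `roc_points` helper is a hypothetical name introduced here for illustration:

```python
import numpy as np

def roc_points(scores, labels):
    """Sweep decision thresholds over the scores and return (fpr, tpr) arrays."""
    order = np.argsort(-scores)          # sort descending by score
    labels = np.asarray(labels)[order]
    tps = np.cumsum(labels)              # true positives at each cutoff
    fps = np.cumsum(1 - labels)          # false positives at each cutoff
    tpr = tps / labels.sum()
    fpr = fps / (len(labels) - labels.sum())
    return fpr, tpr

# toy example: higher score should indicate the positive class
scores = np.array([0.9, 0.8, 0.6, 0.4, 0.3])
labels = np.array([1, 1, 0, 1, 0])
fpr, tpr = roc_points(scores, labels)
```

Plotting `tpr` against `fpr` traces the curve; the area under it summarizes separability across all thresholds.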
A narrow minimum often associated with poorer generalization.
AI that ranks patients by urgency.
Identifying suspicious transactions.
Categorizing AI applications by impact and regulatory risk.
Classifying models by impact level.
AI applied to X-rays, CT, MRI, ultrasound, pathology slides.
Automated assistance identifying disease indicators.
Testing AI under actual clinical conditions.
Of all actual positives, the fraction correctly identified; penalized by false negatives.
Often more informative than ROC on imbalanced datasets; focuses on positive class performance.
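A small sketch of how precision and recall are computed from binary predictions on an imbalanced toy set; the `precision_recall` helper is a name assumed here, not a library API:

```python
import numpy as np

def precision_recall(y_true, y_pred):
    """Precision and recall from binary predictions (1 = positive class)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# imbalanced toy set: 3 positives among 10 samples
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
p, r = precision_recall(y_true, y_pred)   # p = 2/3, r = 2/3
```

Because neither quantity depends on the count of true negatives, these metrics stay informative even when negatives dominate the dataset.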
Framework for identifying, measuring, and mitigating model risks.
Matrix of first-order derivatives for vector-valued functions.
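A minimal sketch of approximating that matrix numerically with forward differences; `jacobian_fd` is a hypothetical helper introduced here:

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-6):
    """Finite-difference Jacobian of vector-valued f at x: J[i, j] = df_i/dx_j."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (np.asarray(f(xp)) - fx) / eps
    return J

# f(x, y) = (x*y, x + y) has exact Jacobian [[y, x], [1, 1]]
J = jacobian_fd(lambda v: np.array([v[0] * v[1], v[0] + v[1]]), [2.0, 3.0])
```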
Alternative formulation providing bounds.
Processes and controls for data quality, access, lineage, retention, and compliance across the AI lifecycle.
Tracking where data came from and how it was transformed; key for debugging and compliance.
Artificially created data used to train/test models; helpful for privacy and coverage, risky if unrealistic.
Expanding training data via transformations (flips, noise, paraphrases) to improve robustness.
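A minimal sketch of the image-style transformations mentioned above, using NumPy arrays; the `augment` function and its parameters are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, noise_scale=0.05):
    """Return simple label-preserving variants of a 2-D image array."""
    return [
        np.fliplr(image),                                  # horizontal flip
        np.flipud(image),                                  # vertical flip
        image + rng.normal(0, noise_scale, image.shape),   # additive Gaussian noise
    ]

image = np.arange(9, dtype=float).reshape(3, 3)
views = augment(image)   # three augmented views of the same image
```

Each transformation must preserve the label's meaning; a vertical flip, for instance, would be inappropriate for digit images where orientation matters.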
Training across many devices/silos without centralizing raw data; aggregates updates, not data.
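The aggregation step described above can be sketched as a size-weighted average of client parameters (a FedAvg-style rule); the `fedavg` helper is a simplified illustration, not a library API:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted average of client model parameters.
    Only parameter vectors are shared with the server, never raw data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# two clients with different amounts of local data
w1 = np.array([1.0, 2.0])
w2 = np.array([3.0, 4.0])
avg = fedavg([w1, w2], [10, 30])   # → array([2.5, 3.5])
```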
Improving model performance by training on more data.
Protecting data during network transfer and while stored; essential for ML pipelines handling sensitive data.
When information from evaluation data improperly influences training, inflating reported performance.
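A common concrete case is fitting preprocessing statistics on the full dataset before splitting. A minimal sketch of the leak-free alternative, with a hypothetical `standardize_without_leakage` helper:

```python
import numpy as np

def standardize_without_leakage(X_train, X_test):
    """Fit normalization statistics on the training split only, then apply to both.
    Computing mean/std on the full dataset would leak test information into training."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0) + 1e-12   # guard against division by zero
    return (X_train - mu) / sigma, (X_test - mu) / sigma

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
X_tr, X_te = X[:80], X[80:]              # split first, then preprocess
Z_tr, Z_te = standardize_without_leakage(X_tr, X_te)
```

The same split-first discipline applies to feature selection, imputation, and any other step that learns from the data.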
Human or automated process of assigning targets; quality, consistency, and guidelines matter heavily.
Shift in feature distribution over time.
Maliciously inserting or altering training data to implant backdoors or degrade performance.