Results for "true positive rate"
Plots true positive rate vs false positive rate across thresholds; summarizes separability.
Adjusting the learning rate over the course of training to improve convergence.
Of actual positives, the fraction correctly identified; sensitive to false negatives.
Of predicted positives, the fraction that are truly positive; sensitive to false positives.
Often more informative than ROC on imbalanced datasets; focuses on positive class performance.
Ability to correctly detect disease.
Controls the size of parameter updates; too high diverges, too low trains slowly or gets stuck.
Gradually increasing the learning rate at the start of training to avoid early divergence.
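A minimal sketch of a linear warmup schedule; the function name, base rate, and step count are illustrative assumptions, not from the source.

```python
# Sketch: linear learning-rate warmup over the first `warmup_steps` updates,
# then a constant base rate thereafter. Values are arbitrary illustration choices.
def warmup_lr(step, base_lr=0.001, warmup_steps=1000):
    if step < warmup_steps:
        # Ramp linearly from near zero up to base_lr.
        return base_lr * (step + 1) / warmup_steps
    return base_lr

print(warmup_lr(0))     # tiny rate at the very start
print(warmup_lr(1000))  # full base rate once warmup ends
```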
A table summarizing classification outcomes, foundational for metrics like precision, recall, specificity.
Of actual negatives, the fraction correctly identified.
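The three metrics above all derive from confusion-matrix counts; a minimal sketch, with made-up counts for illustration:

```python
# Sketch: recall (true positive rate), precision, and specificity computed
# from confusion-matrix counts. The counts are made-up illustration values.
tp, fp, fn, tn = 40, 10, 5, 45

recall = tp / (tp + fn)        # of actual positives, fraction found
precision = tp / (tp + fp)     # of predicted positives, fraction correct
specificity = tn / (tn + fp)   # of actual negatives, fraction found

print(recall, precision, specificity)
```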
Model relies on irrelevant signals.
Rate at which AI capabilities improve.
Failure to detect present disease.
Measures divergence between true and predicted probability distributions.
Scalar summary of ROC; measures ranking ability, not calibration.
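AUC's "ranking ability" has a direct probabilistic reading: the chance that a randomly chosen positive example scores higher than a randomly chosen negative one. A minimal sketch, with made-up scores and labels:

```python
# Sketch: AUC as a ranking probability, computed over all positive/negative
# pairs. Ties count as half. Scores and labels are illustration values.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # perfect ranking -> 1.0
```

Note that the result depends only on score ordering, not on the scores' values, which is why AUC says nothing about calibration.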
Activation max(0, x); improves gradient flow and training speed in deep nets.
Optimal control for linear systems with quadratic cost.
Returns above benchmark.
Iterative method that updates parameters in the direction of negative gradient to minimize loss.
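A minimal sketch of the update rule on a one-dimensional quadratic; the learning rate and iteration count are arbitrary illustration choices.

```python
# Sketch: gradient descent on f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x, lr = 0.0, 0.1
for _ in range(100):
    grad = 2 * (x - 3)
    x -= lr * grad            # step in the direction of negative gradient
print(x)  # converges toward the minimum at x = 3
```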
Organizational uptake of AI technologies.
Methods like Adam adjusting learning rates dynamically.
Maximum system processing rate.
A trend present within subgroups that reverses or disappears when the data are aggregated.
The degree to which predicted probabilities match observed frequencies (e.g., predictions made with confidence 0.8 are correct ~80% of the time).
Systematic error introduced by simplifying assumptions in a learning algorithm.
Predicted probabilities fail to reflect true correctness rates.
Penalizes confident wrong predictions heavily; standard for classification and language modeling.
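The penalty for confident wrong predictions follows directly from the negative log: a minimal sketch for a single example, where the argument is the probability the model assigned to the true class.

```python
import math

# Sketch: cross-entropy loss for one example is -log of the probability
# assigned to the true class; the loss blows up as that probability -> 0.
def cross_entropy(p_true_class):
    return -math.log(p_true_class)

print(cross_entropy(0.9))    # small loss: confident and correct
print(cross_entropy(0.01))   # large loss: confident and wrong
```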
A point where gradient is zero but is neither a max nor min; common in deep nets.
Gradients shrink through layers, slowing learning in early layers; mitigated by ReLU, residuals, normalization.
Matrix of second derivatives describing local curvature of loss.