Results for "precision-recall"
Harmonic mean of precision and recall; useful when both false positives and false negatives matter.
Often more informative than ROC on imbalanced datasets; focuses on positive class performance.
Of actual positives, the fraction correctly identified; sensitive to false negatives.
Of predicted positives, the fraction that are truly positive; sensitive to false positives.
A table summarizing classification outcomes, foundational for metrics like precision, recall, specificity.
Plots true positive rate vs false positive rate across thresholds; summarizes separability.
How well a model performs on new data drawn from the same (or similar) distribution as training.
Fraction of correct predictions; can be misleading on imbalanced datasets.
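The metric definitions above (accuracy, precision, recall, F1) can be sketched directly from binary predictions; the function and variable names here are illustrative, not from the source:

```python
# Minimal sketch: classification metrics computed from the four
# confusion-matrix counts (TP, FP, FN, TN) of a binary classifier.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Imbalanced example: accuracy looks fine (0.8) while recall is poor (1/3),
# illustrating why accuracy alone can mislead on rare positive classes.
y_true = [1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
```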
Automated detection/prevention of disallowed outputs (toxicity, self-harm, illegal instructions, etc.).
When some classes are rare, requiring reweighting, resampling, or specialized metrics.
Assigning category labels to images.
Combining signals from multiple modalities.
Detection of trigger phrases in audio streams.
Reducing numeric precision of weights/activations to speed inference and reduce memory with acceptable accuracy loss.
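A minimal sketch of one common variant, symmetric int8 quantization of a weight vector; the helper names `quantize_int8`/`dequantize` are illustrative, not any particular library's API:

```python
# Symmetric int8 quantization: map floats to integers in [-127, 127]
# via a single scale factor, trading a small accuracy loss for
# smaller memory footprint and faster integer arithmetic.

def quantize_int8(weights):
    # Scale so the largest-magnitude weight maps to +/-127.
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the integers.
    return [v * scale for v in q]

w = [0.5, -1.27, 0.0, 0.9]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # close to w, within about scale/2 per element
```

The single shared scale keeps the scheme simple; per-channel scales are a common refinement when weight magnitudes vary widely.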
Measures how much information an observable random variable carries about unknown parameters.
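For a single unknown parameter $\theta$ and density $f(x;\theta)$, this quantity (Fisher information) is conventionally written as:

```latex
I(\theta) = \mathbb{E}\left[\left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^{2}\right]
```

The expectation is over $X \sim f(\cdot;\theta)$; larger $I(\theta)$ means the data constrain $\theta$ more tightly.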
Pixel-level separation of individual object instances.
Cost of model training.