Results for "area under ROC"
Plots true positive rate vs false positive rate across thresholds; summarizes separability.
Scalar summary of ROC; measures ranking ability, not calibration.
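The ranking interpretation above can be sketched directly: AUC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal illustration (function name and toy scores are made up here):

```python
# Minimal sketch: AUC as the probability that a random positive
# example is scored above a random negative one (ties count half).
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect ranking yields 1.0; a fully inverted one yields 0.0.
print(auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # → 1.0
```

Note that rescaling the scores monotonically leaves AUC unchanged, which is why it measures ranking ability rather than calibration.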
Often more informative than ROC on imbalanced datasets; focuses on positive class performance.
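A precision-recall curve traces precision and recall as the decision threshold sweeps; a single point of that curve can be sketched as follows (hypothetical helper, toy data):

```python
# Sketch: precision and recall at one threshold. Sweeping the
# threshold over all scores traces out the PR curve.
def precision_recall(scores, labels, threshold):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return tp / (tp + fp), tp / (tp + fn)

print(precision_recall([0.9, 0.6, 0.4, 0.2], [1, 0, 1, 0], 0.5))  # → (0.5, 0.5)
```

Because neither quantity involves true negatives, the curve is insensitive to a large majority negative class, which is what makes it informative under imbalance.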
A situation where some classes are rare, often requiring reweighting, resampling, or specialized metrics.
AI applied to X-rays, CT, MRI, ultrasound, pathology slides.
Grouping patients by predicted outcomes.
A dataset + metric suite for comparing models; can be gamed or misaligned with real-world goals.
Systematic review of model/data processes to ensure performance, fairness, security, and policy compliance.
Optimization with multiple local minima/saddle points; typical in neural networks.
A narrow minimum often associated with poorer generalization.
Restricting updates to safe regions.
Temporary reasoning space (often hidden).
Identifying suspicious transactions.
System-level design for general intelligence.
Internal representation of the agent itself.
What would have happened under different conditions.
Estimating parameters by maximizing likelihood of observed data.
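For a Gaussian, maximizing the log-likelihood has a closed form: the sample mean and the biased (1/n) sample variance. A minimal sketch with made-up data:

```python
# Sketch: maximum-likelihood estimates for a Gaussian. Setting the
# gradient of the log-likelihood to zero gives the sample mean and
# the biased (divide-by-n) sample variance.
def gaussian_mle(xs):
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return mu, var

mu, var = gaussian_mle([2.0, 4.0, 6.0])  # mean 4.0, variance 8/3
print(mu, var)
```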
Set of vectors closed under addition and scalar multiplication.
Formal framework for sequential decision-making under uncertainty.
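The framework is usually solved by iterating the Bellman optimality update. A toy sketch (the two states, actions, transitions, rewards, and discount below are all invented for illustration):

```python
# Sketch: value iteration on a toy two-state MDP.
# P[s][action] = list of (probability, next_state, reward) outcomes.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in P}
for _ in range(200):  # repeat the Bellman update until it converges
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in P[s].values())
         for s in P}
print(V)
```

Here state 1's optimal policy is to "stay" forever, so its value converges to 2 / (1 - gamma) = 20.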
Optimization under uncertainty.
Maintaining alignment under new conditions.
Motion of solid objects under forces.
Maximum expected loss under normal conditions.
Ability to replicate results given same code/data; harder in distributed training and nondeterministic ops.
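At minimum this means pinning every random seed you control; a sketch of the Python-level piece only (distributed training and nondeterministic GPU kernels can still break bit-exactness):

```python
import random

# Sketch: pin the seed so a run can be replayed exactly.
# This covers only Python's RNG; framework and hardware sources of
# nondeterminism need their own handling.
def seeded_run(seed=0):
    random.seed(seed)
    return [random.random() for _ in range(3)]

# Same seed, same stream of "random" numbers.
assert seeded_run(0) == seeded_run(0)
```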
Learning where data arrives sequentially and the model updates continuously, often under changing distributions.
A broader capability to infer internal system state from telemetry, crucial for AI services and agents.
Stress-testing models for failures, vulnerabilities, policy violations, and harmful behaviors before release.
A hypothesis class is PAC-learnable if an algorithm can, with high probability, output an approximately correct hypothesis from finitely many samples.
A theoretical framework analyzing what classes of functions can be learned, how efficiently, and with what guarantees.
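One standard guarantee in this framework, for a finite hypothesis class $\mathcal{H}$ in the realizable setting: achieving error at most $\varepsilon$ with probability at least $1 - \delta$ requires a sample of size

$$
m \;\ge\; \frac{1}{\varepsilon}\left(\ln|\mathcal{H}| + \ln\frac{1}{\delta}\right),
$$

so the sample complexity grows only logarithmically in the size of the class and in $1/\delta$.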
Neural networks with a single hidden layer and a nonpolynomial activation can approximate any continuous function on a compact set to arbitrary accuracy (the universal approximation theorem).