Results for "classification"
Use-Case Classification
Intermediate
Categorizing AI applications by impact and regulatory risk.
Use-case classification is like sorting different types of tools in a toolbox based on how dangerous or useful they are. For AI, this means looking at various applications and deciding how much risk they carry. Some AI tools might be very safe and helpful, while others could pose significant risk...
Assigning category labels to images.
A table summarizing classification outcomes, foundational for metrics like precision, recall, specificity.
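The entry above can be made concrete: the four cells of a binary confusion matrix yield precision, recall, specificity, and accuracy directly. A minimal sketch with made-up illustrative counts:

```python
# Four cells of a binary confusion matrix (illustrative counts).
tp, fp, fn, tn = 40, 10, 5, 45

precision = tp / (tp + fp)        # of predicted positives, fraction truly positive
recall = tp / (tp + fn)           # of true positives, fraction found
specificity = tn / (tn + fp)      # of true negatives, fraction found
accuracy = (tp + tn) / (tp + fp + fn + tn)
```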
Scalar summary of ROC; measures ranking ability, not calibration.
Penalizes confident wrong predictions heavily; standard for classification and language modeling.
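The heavy penalty on confident mistakes follows from the negative log: as the probability assigned to the true class approaches zero, the loss grows without bound. A minimal sketch on toy probability vectors:

```python
import math

def cross_entropy(true_label: int, probs: list) -> float:
    """Negative log-probability assigned to the true class."""
    return -math.log(probs[true_label])

# A confident wrong prediction is penalized far more than an unsure one.
confident_wrong = cross_entropy(0, [0.01, 0.99])
unsure = cross_entropy(0, [0.5, 0.5])
```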
Converts logits to probabilities by exponentiation and normalization; common in classification and LMs.
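The exponentiate-and-normalize step can be sketched in a few lines; subtracting the max logit first is the standard trick to avoid overflow (a minimal illustration, not tied to any particular framework):

```python
import math

def softmax(logits: list) -> list:
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # largest logit gets the largest probability
```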
Of true negatives, the fraction correctly identified.
AI focused on interpreting images/video: classification, detection, segmentation, tracking, and 3D understanding.
Plots true positive rate vs false positive rate across thresholds; summarizes separability.
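Sweeping a decision threshold over predicted scores and recording the two rates at each setting gives the points of the curve. A minimal sketch on a tiny made-up dataset (higher score = more positive):

```python
def roc_points(scores: list, labels: list, thresholds: list) -> list:
    """(FPR, TPR) pairs at each threshold; predict positive when score >= t."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        pts.append((fp / neg, tp / pos))
    return pts

scores = [0.9, 0.8, 0.4, 0.3]
labels = [1, 1, 0, 1]
pts = roc_points(scores, labels, [0.0, 0.5, 1.0])
```

At threshold 0 everything is predicted positive (top-right corner of the curve); at a threshold above every score nothing is (bottom-left corner).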
GNN framework where nodes iteratively exchange and aggregate messages from neighbors.
Extension of convolution to graph domains using adjacency structure.
Learning a function from input-output pairs (labeled data), optimizing performance on predicting outputs for unseen inputs.
A continuous vector encoding of an item (word, image, user) such that semantic similarity corresponds to geometric closeness.
A scalar measure optimized during training, typically expected loss over data, sometimes with regularization terms.
A function measuring prediction error (and sometimes calibration), guiding gradient-based optimization.
Fraction of correct predictions; can be misleading on imbalanced datasets.
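The imbalanced-data caveat is easy to demonstrate: with 95% negatives, a degenerate classifier that always predicts "negative" looks strong on accuracy while finding no positives at all (toy data for illustration):

```python
# 95 negatives, 5 positives; the classifier always predicts negative.
labels = [0] * 95 + [1] * 5
preds = [0] * 100

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
recall = sum(p == y == 1 for p, y in zip(preds, labels)) / sum(labels)
# 0.95 accuracy, 0.0 recall: accuracy alone hides the failure.
```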
Of predicted positives, the fraction that are truly positive; sensitive to false positives.
Of true positives, the fraction correctly identified; sensitive to false negatives.
Often more informative than ROC on imbalanced datasets; focuses on positive class performance.
The degree to which predicted probabilities match observed frequencies (e.g., predictions made at 0.8 confidence should be correct ~80% of the time).
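Calibration is typically checked by binning predictions by confidence and comparing each bin's average confidence to its empirical accuracy. A minimal single-bin sketch on toy data (the helper name and the numbers are illustrative):

```python
def bin_accuracy(confidences: list, correct: list, lo: float, hi: float):
    """Empirical accuracy of predictions whose confidence falls in [lo, hi)."""
    picked = [c for conf, c in zip(confidences, correct) if lo <= conf < hi]
    return sum(picked) / len(picked) if picked else None

# Well calibrated at ~0.8: of five predictions made at 0.8 confidence,
# four turn out correct (toy data).
confidences = [0.8, 0.8, 0.8, 0.8, 0.8]
correct = [1, 1, 1, 1, 0]
acc = bin_accuracy(confidences, correct, 0.75, 0.85)
```

Averaging the gap between confidence and accuracy across many such bins gives the common expected-calibration-error style summaries.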
Networks using convolution operations with weight sharing and locality, effective for images and signals.
Automated detection/prevention of disallowed outputs (toxicity, self-harm, illegal instructions, etc.).
When some classes are rare, requiring reweighting, resampling, or specialized metrics.
Information that can identify an individual (directly or indirectly); requires careful handling and compliance.
Raw model outputs before converting to probabilities; manipulated during decoding and calibration.
A dataset + metric suite for comparing models; can be gamed or misaligned with real-world goals.
Identifying and localizing objects in images, often with confidence scores and bounding boxes.
Measures divergence between true and predicted probability distributions.
Assigning labels per pixel (semantic) or per instance (instance segmentation) to map object boundaries.
Routes inputs to subsets of parameters for scalable capacity.