Results for "features"
Feature: A measurable property or attribute used as model input (raw or engineered), such as age, pixel intensity, or token ID.
Feature engineering: Designing input features to expose useful structure (e.g., ratios, lags, aggregations); often crucial outside deep learning.
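As a minimal sketch of the feature-engineering entry above, here is how ratio and lag features might be derived in plain Python (the column names `revenue` and `cost` are hypothetical, not from the source):

```python
# Sketch: derive a ratio feature and a lag feature from raw records.
# Column names ("revenue", "cost") are illustrative assumptions.
def engineer(rows):
    """rows: list of dicts with 'revenue' and 'cost' keys, in time order."""
    out = []
    prev_revenue = None
    for r in rows:
        out.append({
            # Ratio feature: exposes a relationship between two raw columns
            "margin_ratio": r["revenue"] / r["cost"] if r["cost"] else 0.0,
            # Lag feature: the previous period's value (None for the first row)
            "revenue_lag1": prev_revenue,
        })
        prev_revenue = r["revenue"]
    return out

rows = [{"revenue": 120.0, "cost": 100.0}, {"revenue": 150.0, "cost": 120.0}]
print(engineer(rows))
```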
Prosody: Temporal and pitch characteristics of speech.
Representation learning: Automatically learning useful internal features (latent variables) that capture salient structure for downstream tasks.
Object detection: Identifying and localizing objects in images, often with confidence scores and bounding boxes.
Attribute inference attack: Inferring sensitive attributes of training data from a trained model.
Graph neural networks (GNNs): Neural networks that operate on graph-structured data by propagating information along edges.
Multimodal fusion: Combining signals from multiple modalities (e.g., audio, vision, text).
Graph convolution: Extension of convolution to graph domains using adjacency structure.
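One simple instance of graph convolution is mean aggregation over a node's neighborhood; this is only one variant among many, sketched here with scalar node features and hypothetical names:

```python
# Sketch of one graph-convolution step: each node's new feature is the mean
# of its own feature and its neighbors' (mean aggregation, one common choice).
def graph_conv(features, adjacency):
    new = []
    for i, f in enumerate(features):
        neighborhood = [f] + [features[j] for j in adjacency[i]]
        new.append(sum(neighborhood) / len(neighborhood))
    return new

# Path graph 0 - 1 - 2 with scalar features
feats = [1.0, 2.0, 9.0]
adj = {0: [1], 1: [0, 2], 2: [1]}
print(graph_conv(feats, adj))  # each value smoothed toward its neighbors
```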
Voice conversion: Changing speaker characteristics while preserving linguistic content.
Graph attention network (GAT): A GNN that uses attention to weight neighbor contributions dynamically.
Feature store: A centralized repository for curated features.
Wake word detection: Detecting trigger phrases in audio streams.
Multi-task learning: Training one model on multiple tasks simultaneously to improve generalization through shared structure.
Deep learning: A branch of ML using multi-layer neural networks to learn hierarchical representations; often excels in vision, speech, and language.
Dataset: A structured collection of examples used to train and evaluate models; quality, bias, and coverage often dominate outcomes.
Latent space: The internal space where learned representations live; operations here often correlate with semantics or generative factors.
Model: A parameterized mapping from inputs to outputs; includes the architecture plus learned parameters.
Overfitting: When a model fits noise or idiosyncrasies of the training data and performs poorly on unseen data.
Normalization: Techniques that stabilize and speed up training by normalizing activations; LayerNorm is common in Transformers.
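The layer-normalization variant mentioned above can be sketched for a single activation vector (omitting the learned gain and bias for clarity):

```python
import math

# Sketch of layer normalization over one activation vector: subtract the
# mean and divide by the standard deviation (eps avoids division by zero).
def layer_norm(x, eps=1e-5):
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

out = layer_norm([1.0, 2.0, 3.0, 4.0])
print(out)  # roughly zero mean, unit variance
```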
Data leakage: When information from evaluation data improperly influences training, inflating reported performance.
Convolutional neural networks (CNNs): Networks using convolution operations with weight sharing and locality; effective for images and signals.
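Weight sharing and locality are easiest to see in one dimension: the same small kernel slides over every position of the signal. A minimal sketch:

```python
# Sketch of a 1-D convolution (valid mode): one shared kernel applied at
# every position, illustrating weight sharing and locality.
def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A [1, -1] kernel computes finite differences between neighbors
print(conv1d([1.0, 2.0, 3.0, 4.0], [1.0, -1.0]))
```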
Attention: A mechanism that computes context-aware mixtures of representations; scales well and captures long-range dependencies.
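The "context-aware mixture" can be made concrete with scaled dot-product attention for a single query (one head, no batching; all names here are illustrative):

```python
import math

# Sketch of scaled dot-product attention for one query over key/value
# vectors: softmax of scaled dot products gives the mixing weights.
def attention(q, keys, values):
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
              for k in keys]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Context vector: attention-weighted mixture of the value vectors
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]

q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, keys, values))  # weighted toward the first value vector
```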
Explainability: Techniques to understand model decisions (global or local); important in high-stakes and regulated settings.
Interpretability: Studying internal mechanisms or input influence on outputs (e.g., saliency maps, SHAP, attention analysis).
SHAP: A feature attribution method grounded in cooperative game theory for explaining predictions in tabular settings.
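The game-theoretic quantity SHAP approximates is the Shapley value: a feature's average marginal contribution over all orderings. For a toy two-feature model it can be computed exactly (this is a sketch of the underlying math, not the `shap` library's API):

```python
import itertools
import math

# Exact Shapley values for a tiny model: average each feature's marginal
# contribution over all orderings of the features.
def shapley(f, features):
    n = len(features)
    values = {}
    for target in features:
        total = 0.0
        for perm in itertools.permutations(features):
            before = set(perm[:perm.index(target)])
            total += f(before | {target}) - f(before)
        values[target] = total / math.factorial(n)
    return values

# Toy additive model: feature "a" contributes 3, "b" contributes 1
f = lambda present: 3.0 * ("a" in present) + 1.0 * ("b" in present)
print(shapley(f, ["a", "b"]))  # additive model: attributions recover 3 and 1
```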
Computer vision: AI focused on interpreting images and video: classification, detection, segmentation, tracking, and 3D understanding.
Information gain: Reduction in uncertainty achieved by observing a variable; used in decision trees and active learning.
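For the decision-tree case, information gain is the parent's entropy minus the size-weighted entropy of the children after a split. A minimal sketch for a binary split:

```python
import math

# Shannon entropy of a label list, in bits.
def entropy(labels):
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

# Information gain of splitting `labels` into `left` and `right`:
# parent entropy minus the size-weighted child entropies.
def information_gain(labels, left, right):
    n = len(labels)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - weighted

labels = [0, 0, 1, 1]
print(information_gain(labels, [0, 0], [1, 1]))  # a perfect split: 1.0 bit
```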
Text-to-speech (TTS): Generating speech audio from text, with control over prosody, speaker identity, and style.
Depth vs. width: Tradeoffs between networks with many layers and networks with many neurons per layer.