Results for "representation learning"
Representation Learning
Level: Intermediate
Automatically learning useful internal features (latent variables) that capture salient structure for downstream tasks.
Representation learning is like teaching a computer to understand the essence of data without needing someone to explain every detail. Imagine trying to recognize different animals in pictures. Instead of manually pointing out features like fur color or size, a representation learning model can a...
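The idea above can be made concrete with the simplest linear case: principal component analysis learns a low-dimensional basis from data and re-expresses each sample in it, which deep models generalize to nonlinear features. This is an illustrative sketch (the function names and the synthetic data are ours, not from this glossary):

```python
import numpy as np

def learn_representation(X, n_components=2):
    """Learn a linear encoder (PCA-style) from raw data X."""
    mean = X.mean(axis=0)
    # Eigenvectors of the covariance matrix give the directions of
    # greatest variance; eigh returns eigenvalues in ascending order.
    cov = np.cov(X - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]              # largest variance first
    components = eigvecs[:, order[:n_components]]
    return mean, components

def encode(X, mean, components):
    """Map raw inputs to their learned latent representation."""
    return (X - mean) @ components

rng = np.random.default_rng(0)
# 200 samples in 10 dimensions, but the signal lives on a 2-D subspace.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 10))

mean, components = learn_representation(X, n_components=2)
Z = encode(X, mean, components)
print(Z.shape)  # (200, 2)
```

No one labeled which dimensions mattered; the encoder recovered a compact 2-D representation directly from the data, which is the essence of the definition above.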
Related terms:
AlphaFold: Deep learning system for protein structure prediction.
Narrow AI: AI limited to specific domains.
VC Dimension: A measure of a model class's expressive capacity based on its ability to shatter datasets.
Feature: A measurable property or attribute used as model input (raw or engineered), such as age, pixel intensity, or token ID.
Loss Function: A function measuring prediction error (and sometimes calibration), guiding gradient-based optimization.
MLOps: Practices for operationalizing ML: versioning, CI/CD, monitoring, retraining, and reliable production management.
CI/CD for ML: Automated testing and deployment processes for models and data workflows, extending DevOps to ML artifacts.
Model Extraction: Reconstructing a model or its capabilities via API queries or leaked artifacts.
Backdoor Attack: Hidden behavior activated by specific triggers, causing targeted mispredictions or undesired outputs.
Information Gain: Reduction in uncertainty achieved by observing a variable; used in decision trees and active learning.
State Space: All possible configurations an agent may encounter.
Self-Refinement: Models evaluating and improving their own outputs.
Q-Value: Expected return of taking an action in a given state.
Boltzmann Machine: Probabilistic energy-based neural network with hidden variables.
Restricted Boltzmann Machine: Simplified Boltzmann Machine with a bipartite structure.
Data Scaling: Increasing model performance by training on more data.
Plateau: Flat high-dimensional regions of the loss landscape that slow training.
Stochastic Optimization: Optimization under uncertainty.
Model Collapse: Quality degradation that occurs when a model is trained on its own outputs.
Model-Based RL: Reinforcement learning that uses learned or known environment models.
Behavior Cloning: Learning an action mapping directly from demonstrations.
Fraud Detection: Identifying suspicious transactions.
AI for Science: AI applied to scientific problems.
Metacognition: Awareness and regulation of one's own internal processes.
Distribution Shift: A mismatch between training and deployment data distributions that can degrade model performance.
Concept Drift: A change over time in the relationship between inputs and outputs, requiring monitoring and model updates.
Model: A parameterized mapping from inputs to outputs; includes the architecture plus learned parameters.
Hyperparameters: Configuration choices not learned directly (or not typically learned) that govern training or architecture.
Stochastic Gradient Descent (SGD): A gradient method using random minibatches for efficient training on large datasets.
Adam: A popular optimizer combining momentum and per-parameter adaptive step sizes via first/second moment estimates.
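The last two definitions describe minibatch gradient descent and the Adam update rule. A minimal numpy sketch of the Adam step (the helper name and the quadratic test function are ours; the update equations follow the standard published algorithm):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: bias-corrected moment estimates drive the step."""
    m = b1 * m + (1 - b1) * grad          # first moment (momentum-like mean)
    v = b2 * v + (1 - b2) * grad**2       # second moment (per-parameter scale)
    m_hat = m / (1 - b1**t)               # bias correction for early steps
    v_hat = v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([5.0, -3.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 501):
    grad = 2 * w
    w, m, v = adam_step(w, grad, m, v, t)
print(np.round(w, 4))  # converges toward [0, 0]
```

Note how the step size adapts per parameter: dividing by sqrt(v_hat) normalizes each coordinate by its recent gradient magnitude, which is what distinguishes Adam from plain SGD.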