Domain: AI Economics & Strategy
A point where the gradient is zero but which is neither a maximum nor a minimum; such points are common in the loss landscapes of deep networks.
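In the usual non-degenerate case (a standard characterization, not part of the entry above), a stationary point x^* with \nabla f(x^*) = 0 is a saddle point when the Hessian \nabla^2 f(x^*) is indefinite, i.e. has both positive and negative eigenvalues, so the loss curves upward along some directions and downward along others.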
Empirical laws linking model size, dataset size, and compute to performance.
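A commonly reported functional form (e.g., Kaplan et al., 2020; the symbols here are the conventional ones, not taken from this entry) is a power law in parameter count N, with analogous laws in data and compute:

    L(N) \approx (N_c / N)^{\alpha_N}

where the constants N_c and \alpha_N are fit to measured losses.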
Optimization methods that use curvature (second-derivative) information; often prohibitively expensive at scale.
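The canonical example is the Newton step (a standard formula, shown for illustration):

    \theta_{t+1} = \theta_t - \eta \, [\nabla^2_\theta L(\theta_t)]^{-1} \, \nabla_\theta L(\theta_t)

Storing the Hessian costs O(d^2) memory and inverting it O(d^3) time in the number of parameters d, which is why large models rely on quasi-Newton, diagonal, or Kronecker-factored approximations instead.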
Models evaluating and improving their own outputs.
A narrow minimum of the loss surface, often associated with poorer generalization than flatter minima.
Attention mechanisms that reduce the quadratic cost of standard self-attention in sequence length.
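For example, sliding-window (local) attention lets each query attend only to its w nearest keys, cutting the cost in sequence length n from O(n^2) to O(n \cdot w); other variants use low-rank or hashing-based approximations.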
All possible configurations an agent may encounter.
Simulating adverse or extreme scenarios to probe how a system behaves under them.
Compromising AI systems through their dependencies, such as third-party libraries, pretrained models, or datasets.
The maximum rate at which a system can process requests or tokens.
Restricting how much inference a user or application may consume, e.g., requests or tokens per unit time.
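One common way to enforce such limits is a token bucket. A minimal sketch in Python (the class and parameter names are illustrative, not from any particular library):

    import time

    class TokenBucket:
        # Allow roughly `rate` requests per second, with bursts up to `capacity`.
        def __init__(self, rate: float, capacity: float):
            self.rate = rate          # tokens refilled per second
            self.capacity = capacity  # maximum burst size
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self, cost: float = 1.0) -> bool:
            now = time.monotonic()
            # Refill in proportion to elapsed time, capped at capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= cost:
                self.tokens -= cost
                return True
            return False

Each incoming request calls allow(); rejected requests can be queued or returned with a 429-style error.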
Models trained to decide when to call tools.
The cost of training a model, typically dominated by compute.
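A widely used back-of-the-envelope estimate for dense transformers (an approximation, not an exact accounting) is C \approx 6 N D training FLOPs for a model with N parameters trained on D tokens; dividing by sustained hardware FLOP/s and multiplying by the price per device-hour gives a rough dollar figure.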
Neural networks can approximate any continuous function under certain conditions.
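Stated more precisely (the standard theorem, paraphrased): for any continuous function f on a compact set K \subset R^n and any \epsilon > 0, there exists a single-hidden-layer network g with a suitable non-polynomial activation such that

    \sup_{x \in K} |f(x) - g(x)| < \epsilon

The result guarantees existence only; it says nothing about how large the network must be or whether training will actually find such a g.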
Categorizing AI applications by impact and regulatory risk.
The maximum loss expected over a given horizon at a given confidence level, under normal conditions.
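Formally, assuming this entry refers to the usual value-at-risk style measure: for a loss L and confidence level \alpha,

    VaR_\alpha(L) = \inf \{ \ell : P(L > \ell) \le 1 - \alpha \}

i.e., the loss threshold exceeded with probability at most 1 - \alpha.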
Expected cumulative (discounted) reward from a state or state-action pair under a given policy.
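In standard reinforcement-learning notation (the symbols are conventional, not taken from this entry), with policy \pi and discount factor \gamma:

    V^\pi(s) = E_\pi [ \sum_{t \ge 0} \gamma^t r_{t+1} \mid s_0 = s ],    Q^\pi(s, a) = E_\pi [ \sum_{t \ge 0} \gamma^t r_{t+1} \mid s_0 = s, a_0 = a ]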
The component of prediction error due to a model's sensitivity to fluctuations in the training dataset.
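For squared error this is the variance term of the standard bias-variance decomposition (a textbook result, stated for context):

    E[(y - \hat{f}(x))^2] = \mathrm{Bias}[\hat{f}(x)]^2 + \mathrm{Var}[\hat{f}(x)] + \sigma^2

where \sigma^2 is irreducible noise and the expectation is over training sets drawn from the data distribution.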
A measure of a model class's capacity, based on the largest set of points it can shatter (label in every possible way).
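A classic worked example: linear classifiers (halfspaces) in the plane can shatter some set of 3 points in general position but no set of 4, so their VC dimension is 3; more generally, halfspaces in R^d have VC dimension d + 1.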
Gradually increasing the learning rate at the start of training to avoid divergence.
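A minimal linear-warmup schedule in Python (the function name and defaults are illustrative):

    def warmup_lr(step: int, base_lr: float = 3e-4, warmup_steps: int = 1000) -> float:
        # Scale the learning rate linearly from ~0 up to base_lr over warmup_steps,
        # then hold it constant; real schedules usually decay it afterwards.
        if step < warmup_steps:
            return base_lr * (step + 1) / warmup_steps
        return base_lr

In practice this is combined with a decay schedule (cosine, inverse square root, etc.) once warmup ends.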