Results for "resource acquisition"
Tendency of an agent to seek control over resources.
Subgoals that are useful regardless of the final objective.
Capping or throttling inference usage to control cost.
Dynamic resource allocation.
Software pipeline converting raw sensor data into structured representations.
Learning without catastrophic forgetting.
AI selecting next experiments.
Hardware resources used for training/inference; constrained by memory bandwidth, FLOPs, and parallelism.
Techniques that fine-tune small additional components rather than all weights to reduce compute and storage.
PEFT method injecting trainable low-rank matrices into layers, enabling efficient fine-tuning.
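A minimal numpy sketch of the low-rank idea (not a full PEFT implementation): the frozen weight `W` is left untouched, and only the small matrices `A` and `B` would be trained; `alpha` is the conventional LoRA scaling hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                         # hidden size, low rank (r << d)
W = rng.normal(size=(d, d))         # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection, zero-initialized
alpha = 16.0                        # scaling hyperparameter

def lora_forward(x):
    # Base path plus low-rank update: W x + (alpha/r) * B (A x)
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d)
# With B zero-initialized, the adapter starts as a no-op on top of W:
assert np.allclose(lora_forward(x), W @ x)
```

Only `2*r*d` adapter parameters are trained versus `d*d` for full fine-tuning, which is the source of the compute and storage savings.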
Reducing numeric precision of weights/activations to speed inference and reduce memory with acceptable accuracy loss.
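As an illustration, a simple symmetric int8 post-training quantization scheme in numpy (real toolchains add per-channel scales, calibration, and zero-points):

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: map [-max|w|, max|w|] to [-127, 127]
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32; per-weight rounding error <= scale/2
assert q.nbytes * 4 == w.nbytes
assert np.abs(w - w_hat).max() <= scale / 2 + 1e-6
```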
Optimization problems in which any local minimum is also a global minimum.
Removing weights or neurons to shrink models and improve efficiency; can be structured or unstructured.
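A sketch of the unstructured variant, global magnitude pruning, where the smallest-magnitude weights are zeroed (structured pruning would instead drop whole neurons or channels):

```python
import numpy as np

def magnitude_prune(w, sparsity):
    # Zero out the `sparsity` fraction of weights with smallest magnitude
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > threshold
    return w * mask

w = np.random.default_rng(0).normal(size=(64, 64))
pruned = magnitude_prune(w, 0.9)
# Roughly 90% of the entries are now exactly zero
```

The resulting sparse tensor only saves compute in practice when paired with storage formats or kernels that exploit the zeros.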
Routes inputs to subsets of parameters for scalable capacity.
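The routing idea can be sketched with a toy top-1 mixture-of-experts in numpy (the expert and router shapes here are illustrative, not from any particular architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts = 4, 3
# Each "expert" is a small linear layer; the router picks one per input
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
router = rng.normal(size=(n_experts, d))

def moe_forward(x):
    logits = router @ x            # routing scores for each expert
    e = int(np.argmax(logits))     # top-1 routing: only one expert runs
    return experts[e] @ x, e

x = rng.normal(size=d)
y, chosen = moe_forward(x)
# Total capacity grows with n_experts, but per-input compute stays at one expert
```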
Empirical laws linking model size, data, compute to performance.
Predicting future values from past observations.
Classical statistical time-series model.
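Full ARIMA fitting is usually delegated to a library such as statsmodels; the autoregressive core can be shown with a hand-rolled AR(1) least-squares fit (the simulated series and `phi` are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) process: y[t] = phi * y[t-1] + noise
phi_true = 0.8
y = np.zeros(500)
for t in range(1, len(y)):
    y[t] = phi_true * y[t - 1] + rng.normal(scale=0.1)

# Estimate phi by least squares over (y[t-1], y[t]) pairs
X, target = y[:-1], y[1:]
phi_hat = (X @ target) / (X @ X)

# One-step-ahead forecast from the last observation
forecast = phi_hat * y[-1]
```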
Low-latency prediction per request.
Running predictions on large datasets periodically.
Cost of model training.
Using limited human feedback to guide large models.
Assigning AI costs to business units.
Maximum system processing rate.
Storing results to reduce compute.
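In Python this pattern is available out of the box via `functools.lru_cache`; the `expensive_feature` function below is a hypothetical stand-in for any costly, repeatable computation:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def expensive_feature(x: int) -> int:
    # Stand-in for a costly computation (e.g. a model call or DB query)
    return x * x

expensive_feature(10)   # computed
expensive_feature(10)   # served from cache, no recomputation
info = expensive_feature.cache_info()
# info.hits == 1, info.misses == 1
```

Caching only pays off when inputs repeat and results are safe to reuse; stale results must be handled via `maxsize`, `cache_clear()`, or time-based invalidation.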
Finding control policies minimizing cumulative cost.
Finding routes from start to goal.
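For unweighted grids the standard approach is breadth-first search, sketched below (weighted graphs would call for Dijkstra or A* instead):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS over a grid of 0 (free) / 1 (wall); returns path length or -1."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
# shortest_path(grid, (0, 0), (2, 0)) == 6
```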
AI that ranks patients by urgency.
Grouping patients by predicted outcomes.
No agent can improve without hurting another.
Truthful bidding is the optimal strategy.