Continuous cycle of observation, reasoning, action, and feedback.
Human review required for high-risk decisions.
Low-latency prediction per request.
Simple agent responding directly to inputs.
A state in which no agent can improve its outcome without making another worse off.
Learning structure from unlabeled data, such as discovering groups, compressing representations, or modeling data distributions.
Training with a small labeled dataset plus a larger unlabeled dataset, leveraging assumptions like smoothness/cluster structure.
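Learning structure from unlabeled data, as described above, can be illustrated with a minimal k-means clustering sketch; the toy 1-D data and the choice of two centroids are illustrative assumptions, not from the source:

```python
# Minimal k-means on 1-D points: alternate between assigning each point
# to its nearest centroid and moving each centroid to its cluster mean.
# Toy data and two centroids are illustrative assumptions.

def kmeans_1d(points, centroids, iters=10):
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: each centroid moves to its cluster's mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.8]
print(kmeans_1d(data, centroids=[0.0, 5.0]))  # converges near [1.0, 9.1]
```

No labels are used anywhere: the group structure is discovered from the data alone, which is exactly the unsupervised setting.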
A parameterized mapping from inputs to outputs; includes architecture + learned parameters.
Fraction of correct predictions; can be misleading on imbalanced datasets.
Of predicted positives, the fraction that are truly positive; sensitive to false positives.
Of true positives, the fraction correctly identified; sensitive to false negatives.
Of true negatives, the fraction correctly identified.
The degree to which predicted probabilities match true frequencies (e.g., predictions made with confidence 0.8 should be correct ~80% of the time).
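Accuracy, precision, recall, and specificity all derive from the four confusion-matrix counts; a minimal sketch (the counts are illustrative values, not real data):

```python
# Compute accuracy, precision, recall, and specificity from
# confusion-matrix counts. tp/fp/fn/tn values below are illustrative.

def metrics(tp, fp, fn, tn):
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": tp / (tp + fp),    # of predicted positives, truly positive
        "recall": tp / (tp + fn),       # of true positives, correctly identified
        "specificity": tn / (tn + fp),  # of true negatives, correctly identified
    }

# Imbalanced example: 90 negatives, 10 positives.
print(metrics(tp=8, fp=5, fn=2, tn=85))
```

Note how accuracy (0.93) looks strong here while precision is only 8/13, illustrating why accuracy can mislead on imbalanced data.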
Popular optimizer combining momentum and per-parameter adaptive step sizes via first/second moment estimates.
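The first/second moment estimates and per-parameter step sizes can be sketched for a single scalar parameter; hyperparameters below are the commonly used defaults, and the loss f(p) = p² is an illustrative choice:

```python
import math

# One Adam step for a scalar parameter: exponential moving averages of the
# gradient (first moment) and squared gradient (second moment), bias
# correction, then an adaptive update. Defaults are the usual ones.

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad        # first moment (momentum-like)
    v = b2 * v + (1 - b2) * grad ** 2   # second moment (adaptive scale)
    m_hat = m / (1 - b1 ** t)           # bias correction for zero init
    v_hat = v / (1 - b2 ** t)
    param -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):                   # three steps on the gradient of p**2
    p, m, v = adam_step(p, grad=2 * p, m=m, v=v, t=t)
print(p)                                # moves from 1.0 toward the minimum at 0
```

Because the update divides by the square root of the second moment, the effective step size is roughly lr regardless of the gradient's raw scale.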
Methods to set starting weights to preserve signal/gradient scales across layers.
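One widely used recipe for preserving signal scale is Glorot/Xavier uniform initialization; a sketch (the layer sizes are illustrative):

```python
import random

# Glorot/Xavier uniform initialization: weights drawn from U(-limit, limit)
# with limit = sqrt(6 / (fan_in + fan_out)), chosen so activation and
# gradient variances are roughly preserved across layers.

def glorot_uniform(fan_in, fan_out, seed=0):
    rng = random.Random(seed)
    limit = (6 / (fan_in + fan_out)) ** 0.5
    return [[rng.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]

W = glorot_uniform(fan_in=256, fan_out=128)
```

For ReLU layers, He initialization uses the larger limit sqrt(6 / fan_in) instead, compensating for the units that ReLU zeroes out.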
Networks using convolution operations with weight sharing and locality, effective for images and signals.
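Weight sharing and locality are easiest to see in one dimension: the same small kernel slides over every position of the input. A sketch with an illustrative edge-detector kernel:

```python
# 1-D convolution (valid padding): the same kernel weights are reused at
# every input position — weight sharing — and each output depends only on
# a small local window — locality. Toy values are illustrative.

def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

print(conv1d([1, 2, 3, 4, 5], [1, 0, -1]))  # -> [-2, -2, -2]
```

A 2-D image convolution works the same way, sliding a small weight patch over rows and columns.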
The set of tokens a model can represent; impacts efficiency, multilinguality, and handling of rare strings.
An RNN variant using gates to mitigate vanishing gradients and capture longer context.
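The gating described above can be sketched for scalar state; the shared weight value 0.5 is an illustrative stand-in for trained parameters:

```python
import math

# One gated recurrent cell step (LSTM-style) on scalar state. The forget
# gate f controls how much old cell state survives, giving an additive
# path that mitigates vanishing gradients; input gate i and output gate o
# control writes and reads. Weights are illustrative, not trained.

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def lstm_step(x, h, c, w):
    i = sigmoid(w["wi"] * x + w["ui"] * h)    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h)    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h)    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h)  # candidate cell update
    c = f * c + i * g                         # additive cell-state path
    h = o * math.tanh(c)                      # new hidden state
    return h, c

weights = {k: 0.5 for k in ("wi", "ui", "wf", "uf", "wo", "uo", "wg", "ug")}
h, c = 0.0, 0.0
for x in (1.0, 1.0, 1.0):                     # feed a short input sequence
    h, c = lstm_step(x, h, c, weights)
```

Because the cell state c is updated additively rather than repeatedly squashed, gradients can flow across more timesteps than in a plain RNN.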
Generates sequences one token at a time, conditioning on past tokens.
The text (and possibly other modalities) given to an LLM to condition its output behavior.
A high-priority instruction layer setting overarching behavior constraints for a chat model.
Stepwise reasoning patterns that can improve multi-step tasks; often handled implicitly or summarized for safety/privacy.
Retrieval based on embedding similarity rather than keyword overlap, capturing paraphrases and related concepts.
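Ranking by embedding similarity can be sketched with cosine similarity over tiny hand-made vectors; the 3-dimensional "embeddings" below are illustrative, not the output of a real encoder:

```python
import math

# Embedding-based retrieval: rank documents by cosine similarity to the
# query vector rather than by keyword overlap. The tiny vectors are
# illustrative embeddings, not produced by a real model.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "return an item": [0.7, 0.3, 0.1],   # paraphrase: close to "refund policy"
    "gpu benchmarks": [0.0, 0.1, 0.95],  # unrelated: far from the query
}
query = [0.85, 0.15, 0.05]               # e.g., "how do I get my money back"
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])
```

Note that "return an item" ranks near the top despite sharing no keywords with a refund-themed query, which is the point of semantic over lexical matching.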
Rules and controls around generation (filters, validators, structured outputs) to reduce unsafe or invalid behavior.
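A minimal validator-style guardrail can be sketched as follows; the JSON schema and the allowed action names are illustrative assumptions:

```python
import json

# A minimal output guardrail: require model output to be valid JSON with
# an allowed "action" field, rejecting anything else before it is acted
# on. The schema and action whitelist are illustrative assumptions.

ALLOWED_ACTIONS = {"search", "summarize", "answer"}

def validate(model_output):
    try:
        data = json.loads(model_output)
    except json.JSONDecodeError:
        return None                           # unstructured output: reject
    if data.get("action") not in ALLOWED_ACTIONS:
        return None                           # out-of-policy action: reject
    return data

print(validate('{"action": "search", "query": "llm safety"}'))
print(validate("sure, deleting all files now"))  # rejected -> None
```

Production guardrails layer several such checks (content filters, schema validation, allow-lists) between the model and any downstream effect.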
Techniques to understand model decisions (global or local), important in high-stakes and regulated settings.
Controlled experiment comparing variants by random assignment to estimate causal effects of changes.
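A standard analysis for a two-variant experiment with a binary outcome is the two-proportion z-test; a sketch (the conversion counts are illustrative, not real data):

```python
import math

# Two-proportion z-test for an A/B experiment: under random assignment,
# compare conversion rates between control (A) and treatment (B).
# The counts below are illustrative, not real data.

def ab_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_b - p_a, z        # estimated lift and test statistic

lift, z = ab_test(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(lift, z)                 # |z| > 1.96 is significant at the 5% level
```

Random assignment is what licenses the causal reading of the lift; the z-test only quantifies whether the observed difference is plausibly noise.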
Processes and controls for data quality, access, lineage, retention, and compliance across the AI lifecycle.
Standardized documentation describing intended use, performance, limitations, data, and ethical considerations.
A discipline ensuring AI systems are fair, safe, transparent, privacy-preserving, and accountable throughout the lifecycle.
Structured dataset documentation covering collection, composition, recommended uses, biases, and maintenance.