Results for "path generation"
Finding routes from start to goal.
Sampling-based motion planner.
Model execution path in production.
AI proposing scientific hypotheses.
Agent reasoning about future outcomes.
Field combining mechanics, control, perception, and AI to build autonomous machines.
Generates sequences one token at a time, conditioning on past tokens.
Architecture that retrieves relevant documents (e.g., from a vector DB) and conditions generation on them to reduce hallucinations.
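The retrieve-then-condition flow described in this entry can be sketched minimally. The document list, the bag-of-words "embedding", and the prompt template below are all illustrative stand-ins; a real system would use a learned embedding model and a vector DB:

```python
import math
from collections import Counter

# Hypothetical document store; a real system would hold these in a vector DB.
DOCS = [
    "Beam search keeps the k highest-scoring partial sequences during decoding.",
    "Nucleus sampling draws from the smallest token set whose mass reaches p.",
    "Chunk size and overlap strongly affect retrieval quality.",
]

def embed(text):
    """Toy bag-of-words 'embedding' used only for illustration."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve_and_prompt(question, k=1):
    """Retrieve the k most similar documents and condition generation on them."""
    q = embed(question)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    context = "\n".join(ranked[:k])
    # Grounding the answer in retrieved text is what reduces hallucinations.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = retrieve_and_prompt("what does nucleus sampling do?")
```

The returned prompt would then be passed to the generator; only the retrieval-and-conditioning step is sketched here.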
A high-capacity language model trained on massive corpora, exhibiting broad generalization and emergent behaviors.
Search algorithm for generation that keeps the k highest-scoring partial sequences (beams) at each step; can improve likelihood but reduce diversity.
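The entry above describes beam search. A toy sketch over a made-up transition table (the probabilities below are purely illustrative, standing in for a real model's next-token distribution):

```python
import math

# Toy next-token distribution: probabilities depend only on the last token.
# These values are invented for illustration.
TRANSITIONS = {
    "<s>": {"a": 0.6, "b": 0.4},
    "a":   {"a": 0.1, "b": 0.3, "</s>": 0.6},
    "b":   {"a": 0.5, "b": 0.2, "</s>": 0.3},
}

def beam_search(k=2, max_len=5):
    """Keep the k highest-scoring partial sequences at every step."""
    beams = [(["<s>"], 0.0)]           # (tokens, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            for tok, p in TRANSITIONS[tokens[-1]].items():
                cand = (tokens + [tok], score + math.log(p))
                (finished if tok == "</s>" else candidates).append(cand)
        if not candidates:
            break
        # prune: keep only the k best partial sequences
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    finished.extend(beams)  # fall back to unfinished beams at the length limit
    return max(finished, key=lambda c: c[1])

best, score = beam_search()
```

Because only cumulative log-probability is maximized, all k beams can converge on similar continuations, which is the diversity loss the entry mentions.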
Stochastic generation strategies that trade determinism for diversity; key knobs include temperature and nucleus sampling.
Models that learn to generate samples resembling training data.
Generating human-like speech from text.
The internal space where learned representations live; operations here often correlate with semantics or generative factors.
Attention where queries/keys/values come from the same sequence, enabling token-to-token interactions.
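A minimal single-head sketch of the mechanism this entry describes, in NumPy; the dimensions and random projection matrices are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(x, wq, wk, wv):
    """Single-head self-attention: Q, K, V all derive from the same sequence x."""
    q, k, v = x @ wq, x @ wk, x @ wv             # (seq, d_k) each
    scores = q @ k.T / np.sqrt(k.shape[-1])       # token-to-token interaction scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                            # mix value vectors per query token

seq_len, d_model, d_k = 4, 8, 8
x = rng.normal(size=(seq_len, d_model))
w = [rng.normal(size=(d_model, d_k)) for _ in range(3)]
out = self_attention(x, *w)
```

Each output row is a weighted mix of every token's value vector, which is what "token-to-token interactions" means concretely.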
Training objective where the model predicts the next token given previous tokens (causal modeling).
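The shift-by-one structure of this objective can be written out directly. A minimal NumPy sketch of the cross-entropy loss, with toy logits constructed for the test cases:

```python
import numpy as np

def next_token_loss(logits, tokens):
    """Average cross-entropy of predicting tokens[t+1] from position t.

    logits: (seq_len, vocab) scores emitted at each position;
    tokens: (seq_len,) observed token ids.
    """
    pred, target = logits[:-1], tokens[1:]   # the standard causal one-step shift
    m = pred.max(axis=-1, keepdims=True)
    log_z = m + np.log(np.exp(pred - m).sum(axis=-1, keepdims=True))
    log_probs = pred - log_z                 # numerically stable log-softmax
    # pick out the log-probability assigned to each observed next token
    return -log_probs[np.arange(len(target)), target].mean()

vocab = 3
tokens = np.array([0, 1, 2])
confident = np.full((3, vocab), -10.0)
confident[0, 1] = confident[1, 2] = 10.0     # put mass on the true next token
```

A model that puts nearly all mass on the correct next token drives this loss toward zero; uniform logits give loss log(vocab).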
A model that assigns probabilities to sequences of tokens; often trained by next-token prediction.
Predicts masked tokens in a sequence, enabling bidirectional context; often used for embeddings rather than generation.
Crafting prompts to elicit desired behavior, often using role, structure, constraints, and examples.
Breaking documents into pieces for retrieval; chunk size/overlap strongly affect RAG quality.
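A minimal character-level sketch of chunking with overlap; the size and overlap defaults are illustrative, and real pipelines often split on token or sentence boundaries instead:

```python
def chunk(text, size=200, overlap=50):
    """Split text into fixed-size character chunks with overlapping edges.

    Overlap preserves context that a hard boundary would cut; as the entry
    above notes, both knobs strongly affect retrieval quality.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

For example, `chunk("abcdefghij", size=4, overlap=1)` yields chunks whose edges share one character, so no boundary-spanning phrase is lost entirely.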
Constraining outputs to retrieved or provided sources, often with citation, to improve factual reliability.
Automated detection/prevention of disallowed outputs (toxicity, self-harm, instructions for illegal activity, etc.).
Rules and controls around generation (filters, validators, structured outputs) to reduce unsafe or invalid behavior.
Divides logits by a constant before sampling; higher values flatten the distribution (more randomness/diversity), lower values sharpen it (more determinism).
Samples from the k highest-probability tokens to limit unlikely outputs.
Samples from the smallest set of tokens whose probabilities sum to p, adapting set size by context.
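The three knobs in the entries above (temperature, top-k, top-p) all act on the same logits before a token is drawn. A minimal NumPy sketch combining them; the logit values and RNG seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(logits, temperature=1.0, top_k=None, top_p=None):
    """Draw one token id after temperature scaling and optional truncation."""
    logits = np.asarray(logits, dtype=float) / temperature  # scale logits
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    if top_k is not None:
        # zero out everything outside the k most probable tokens
        cutoff = np.sort(probs)[-top_k]
        probs = np.where(probs >= cutoff, probs, 0.0)
    if top_p is not None:
        # keep the smallest set of tokens whose cumulative mass reaches p
        order = np.argsort(probs)[::-1]
        cum = np.cumsum(probs[order])
        keep = order[: np.searchsorted(cum, top_p) + 1]
        mask = np.zeros_like(probs)
        mask[keep] = probs[keep]
        probs = mask
    probs /= probs.sum()        # renormalize over the surviving candidates
    return rng.choice(len(probs), p=probs)
```

With `top_k=1` or a very low temperature this degenerates to greedy decoding, which is why the entries frame these knobs as trading determinism for diversity.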
Raw model scores before the softmax converts them to probabilities; manipulated during decoding and calibration.
Stress-testing models for failures, vulnerabilities, policy violations, and harmful behaviors before release.
Coordinating tools, models, and steps (retrieval, calls, validation) to deliver reliable end-to-end behavior.
Forcing predictable formats for downstream systems; reduces parsing errors and supports validation/guardrails.
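A minimal validation sketch for the entry above: parse model output as JSON and reject anything a downstream consumer could not handle. The required fields are a hypothetical example; real systems often use a full JSON Schema validator or constrained decoding instead:

```python
import json

# Hypothetical required fields for an imagined downstream routing system.
REQUIRED = {"title": str, "priority": int}

def parse_structured(raw):
    """Return the parsed object if it matches the expected shape, else None."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(obj, dict):
        return None
    for field, typ in REQUIRED.items():
        # reject missing fields and wrong types before anything downstream runs
        if not isinstance(obj.get(field), typ):
            return None
    return obj
```

Returning `None` (rather than raising) lets an orchestration layer retry or fall back, which is how validators like this slot into guardrails.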