Results for "BERT-style"
Explicit output constraints (format, tone).
Fine-tuning: Updating a pretrained model’s weights on task-specific data to improve performance or adapt style/behavior.
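At its core, each fine-tuning update nudges the pretrained weights against the gradient of a task loss. A minimal sketch of one such step (plain SGD; the function name and learning rate are illustrative, not from any particular library):

```python
def sgd_step(weights, grads, lr=0.01):
    """One fine-tuning update: move each pretrained weight a small step
    against its task-loss gradient. Real fine-tuning repeats this over
    batches of task-specific data, often with an optimizer like AdamW."""
    return [w - lr * g for w, g in zip(weights, grads)]
```

In practice the same update rule is applied to millions of parameters at once, usually with a much smaller learning rate than was used in pretraining so the adapted model does not drift too far from its starting point.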
Transformer: Architecture based on self-attention and feedforward layers; the foundation of modern LLMs and many multimodal models.
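The self-attention at the heart of the transformer is scaled dot-product attention: each query position takes a softmax-weighted average of the value vectors, weighted by query-key similarity. A minimal pure-Python sketch (single head, no learned projections, lists standing in for matrices):

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    Q, K, V are lists of vectors; returns one output vector per query."""
    d = len(Q[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

A full transformer layer adds learned query/key/value projections, multiple heads, residual connections, layer normalization, and a feedforward sublayer around this core operation.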
Masked language model (BERT-style): Predicts masked tokens in a sequence, enabling bidirectional context; often used for embeddings rather than generation.
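The pretraining input for a masked language model is built by hiding a random fraction of tokens and keeping the originals as prediction targets. A sketch of that masking step (the 15% rate matches BERT's setup; the helper name and simplified scheme, which omits BERT's random-token and keep-original variants, are illustrative):

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", p=0.15, seed=0):
    """BERT-style masking: replace ~p of the tokens with [MASK].
    Returns (masked_input, labels); labels is None at unmasked
    positions (no loss) and the original token where masked."""
    rng = random.Random(seed)
    masked, labels = [], []
    for t in tokens:
        if rng.random() < p:
            masked.append(mask_token)
            labels.append(t)       # the model must recover this token
        else:
            masked.append(t)
            labels.append(None)    # no loss at this position
    return masked, labels
```

Because the model sees tokens on both sides of each `[MASK]`, it learns bidirectional representations, which is why these models are favored for embeddings and classification rather than left-to-right generation.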
Legal AI: AI supporting legal research, drafting, and analysis.
System prompt: A high-priority instruction layer setting overarching behavior constraints for a chat model.
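In chat-style APIs the system prompt is typically the first message in the conversation, with later user and assistant turns expected to stay within its constraints. A sketch of that layout (the role names follow the common system/user/assistant convention; the exact schema varies by provider):

```python
# Hypothetical chat transcript: the "system" message sits at the top and
# constrains every later turn; user and assistant messages follow it.
messages = [
    {"role": "system",
     "content": "You are a concise assistant. Answer in one sentence."},
    {"role": "user",
     "content": "What is a transformer?"},
]
```

Models are generally trained to weight the system message above conflicting user instructions, which is what makes it a useful place for behavior constraints.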
Reward model: Model trained to predict human preferences (or utility) for candidate outputs; used in RLHF-style pipelines.
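One common use of a reward model is best-of-n sampling: score several candidate completions and keep the highest-scoring one. A sketch, with a toy scoring function standing in for a learned reward model (both function names are illustrative):

```python
def best_of_n(prompt, candidates, reward_fn):
    """Rank candidate completions by the reward model's scalar score
    and return the highest-scoring one (best-of-n sampling)."""
    return max(candidates, key=lambda c: reward_fn(prompt, c))

def toy_reward(prompt, completion):
    # Toy stand-in for a learned reward model: prefers shorter,
    # polite-sounding answers. A real reward model is a neural network
    # trained on human preference comparisons.
    score = -len(completion)
    if "please" in completion.lower():
        score += 100
    return score
```

In RLHF pipelines the same scalar score serves as the training signal for a policy-gradient method such as PPO, rather than just a reranker.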
Text-to-speech (TTS): Generating speech audio from text, with control over prosody, speaker identity, and style.