Results for "staged access"
Restricting distribution of powerful models by widening access in stages, e.g., to vetted researchers before the general public.
Protecting data during network transfer and while stored; essential for ML pipelines handling sensitive data.
Regulating access to large-scale compute.
Detecting unauthorized model outputs or data leaks.
Models whose weights are publicly available.
Enables external computation or lookup.
Letting an LLM call external functions/APIs to fetch data, compute, or take actions, improving reliability.
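A minimal sketch of the function-calling loop described above, assuming the model emits a structured JSON call; the tool names and registry here are hypothetical, not any particular provider's API.

```python
import json

# Hypothetical tool registry: names the model may invoke, mapped to callables.
TOOLS = {
    "add": lambda a, b: a + b,
    "lookup_capital": lambda country: {"France": "Paris"}.get(country, "unknown"),
}

def dispatch_tool_call(call_json: str):
    """Parse a model-emitted call like {"tool": "add", "args": {...}},
    run the matching function, and return the result for the next turn."""
    call = json.loads(call_json)
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])

# Instead of guessing, the model emits a structured call and reads the result:
result = dispatch_tool_call('{"tool": "add", "args": {"a": 2, "b": 40}}')
print(result)  # 42
```

The key design point is that the model only chooses *which* tool to call and with what arguments; execution and result injection happen outside the model.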
Local surrogate explanation method approximating model behavior near a specific input.
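A rough sketch of the local-surrogate idea (in the spirit of LIME, not a faithful reimplementation of the library): sample perturbations near the input, query the black box, and fit a proximity-weighted linear model whose coefficients act as local feature weights.

```python
import numpy as np

def local_surrogate(black_box, x0, n_samples=500, scale=0.1, seed=0):
    """Fit a weighted linear surrogate around x0: perturb, query,
    then solve proximity-weighted least squares for local slopes."""
    rng = np.random.default_rng(seed)
    X = x0 + rng.normal(0.0, scale, size=(n_samples, x0.size))
    y = np.array([black_box(x) for x in X])
    # Proximity kernel: samples closer to x0 get more weight.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale ** 2))
    Xb = np.hstack([X, np.ones((n_samples, 1))])  # add intercept column
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(Xb * W, y * W.ravel(), rcond=None)
    return coef[:-1]  # per-feature local weights (intercept dropped)

# Example: near x0 = [1, 0], f(x) = x0^2 + 3*x1 has local slopes ~[2, 3].
f = lambda x: x[0] ** 2 + 3 * x[1]
weights = local_surrogate(f, np.array([1.0, 0.0]))
```

The surrogate is only trusted near the chosen input; globally the black box can be arbitrarily nonlinear.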
When information from evaluation data improperly influences training, inflating reported performance.
Information that can identify an individual (directly or indirectly); requires careful handling and compliance.
Processes and controls for data quality, access, lineage, retention, and compliance across the AI lifecycle.
Maliciously inserting or altering training data to implant backdoors or degrade performance.
Methods to protect models and data from operators or attackers during inference (e.g., trusted execution environments).
Mechanisms for retaining context across turns/sessions: scratchpads, vector memories, structured stores.
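A toy illustration of a structured memory store, assuming the simplest possible design: append notes per turn and retrieve by naive keyword overlap (real systems typically use vector similarity instead).

```python
class SimpleMemory:
    """Minimal turn-level memory: write notes, recall by word overlap."""

    def __init__(self):
        self.notes = []  # list of (turn, text)

    def write(self, turn, text):
        self.notes.append((turn, text))

    def recall(self, query, k=2):
        """Return the k notes sharing the most words with the query."""
        q = set(query.lower().split())
        scored = [
            (len(q & set(text.lower().split())), turn, text)
            for turn, text in self.notes
        ]
        scored.sort(reverse=True)
        return [text for _, _, text in scored[:k]]

mem = SimpleMemory()
mem.write(1, "user name is Ada")
mem.write(2, "prefers dark mode")
```

Swapping the overlap score for embedding cosine similarity turns this into the vector-memory variant mentioned above.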
Central catalog of deployed and experimental models.
Logged record of model inputs, outputs, and decisions.
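One common way to keep such a record is an append-only JSONL log; the field names below are illustrative assumptions, not a standard schema.

```python
import io
import json
import time

def log_inference(log_file, prompt, output, decision=None):
    """Append one JSON record per model interaction (hypothetical layout)."""
    record = {
        "ts": time.time(),       # when the call happened
        "prompt": prompt,        # model input
        "output": output,        # model output
        "decision": decision,    # any downstream decision taken
    }
    log_file.write(json.dumps(record) + "\n")

# Demo against an in-memory buffer; production code would use a real file.
buf = io.StringIO()
log_inference(buf, "2+2?", "4", decision="allowed")
```

Append-only files make later auditing and tamper detection simpler than mutable stores.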
Extracting system prompts or hidden instructions.
Agent calls external tools dynamically.
Compromising AI systems via libraries, models, or datasets.
Models accessible only via service APIs.
Storing results to reduce compute.
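A minimal memoization sketch of this idea using the standard-library `functools.lru_cache`; the expensive call here is a stand-in placeholder, not a real embedding API.

```python
from functools import lru_cache

calls = {"count": 0}  # tracks how many times the real computation runs

@lru_cache(maxsize=None)
def embed(text):
    """Stand-in for an expensive call (e.g., an embedding request).
    Repeated inputs are served from the cache without recomputing."""
    calls["count"] += 1
    return hash(text) % 1000  # placeholder "embedding"

embed("hello")
embed("hello")  # cache hit: no second computation
embed("world")
```

After the three calls above, only two underlying computations have run; the repeated `"hello"` was served from the cache.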
AI supporting legal research, drafting, and analysis.
Ensuring models comply with lending fairness laws.
Isolating AI systems from external networks and resources to contain failures or misuse.