Results for "unauthorized usage"
Detecting unauthorized model outputs or data leaks.
AI giving legal advice without authorization.
Protecting data during network transfer and while stored; essential for ML pipelines handling sensitive data.
AI used without governance approval.
Capping the rate or volume of inference requests (e.g., per user, key, or time window) to control cost and abuse.
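One common way to cap inference usage is a token-bucket rate limiter; this is a minimal sketch (the `TokenBucket` class and its parameters are illustrative, not from the source):

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start full
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)
results = [bucket.allow() for _ in range(4)]
# the first two calls (the burst) succeed; later calls fail until tokens refill
```

Each API key or user would typically get its own bucket; rejected requests are usually answered with HTTP 429.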
Information that can identify an individual (directly or indirectly); requires careful handling and compliance.
Reducing numeric precision of weights/activations to speed inference and reduce memory with acceptable accuracy loss.
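Reducing precision can be illustrated with symmetric int8 quantization, where floats are mapped to integers in [-127, 127] via a single scale factor. A minimal sketch (function names are illustrative):

```python
def quantize_int8(values):
    """Symmetric int8 quantization: map floats onto integers in [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # guard against all-zero input
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# each recovered value differs from the original by at most one quantization step
```

The "acceptable accuracy loss" in the definition corresponds to this rounding error, bounded by the scale (one quantization step) per value.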
Maliciously inserting or altering training data to implant backdoors or degrade performance.
Reconstructing a model or its capabilities via API queries or leaked artifacts.
Methods to protect model/data during inference (e.g., trusted execution environments) from operators/attackers.
Routing inputs to subsets of parameters (experts) so model capacity scales without proportional compute.
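The routing step can be sketched as top-k gating: a router scores all experts per token, keeps the k best, and normalizes their weights. A minimal illustration (the logits and function name are hypothetical):

```python
import math

def top_k_route(logits, k=2):
    """Select the top-k experts by router logit and softmax-normalize their gates."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = [math.exp(logits[i]) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

# hypothetical router logits for one token over 4 experts
routes = top_k_route([0.1, 2.0, -1.0, 1.5], k=2)
# the token is dispatched only to experts 1 and 3, with gate weights summing to 1
```

Only the selected experts' parameters run for this token, which is how capacity grows while per-token compute stays roughly constant.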
Extracting system prompts or hidden instructions.
Models trained to decide when to call tools.
Embedding signals to prove model ownership.
Compromising AI systems via libraries, models, or datasets.
Allocating compute resources dynamically, scaling capacity up or down with demand.
Running models on local hardware rather than through a remote hosted API.
Patient agreement to AI-assisted care.
Requirement to reveal AI usage in legal decisions.
Restricting distribution of powerful models.