Results for "deployment"
Shadow Deployment (Intermediate)
Running a new model alongside production without user impact.
This approach lets developers test a new version of a model alongside the current one without anyone noticing. Imagine a restaurant trying out a new recipe while still serving its regular menu: customers don’t see the new dish, but the kitchen can gather feedback on how it performs. In the technical setting, the new model receives a copy of live production traffic, its predictions are logged and compared against the current model’s, and only the current model’s responses are ever returned to users.
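As a concrete illustration, here is a minimal sketch of a shadow-mode request handler; the model objects, their `predict` method, and the logger name are hypothetical stand-ins. The key property is that the shadow path can neither change nor break the user-facing response.

```python
import logging

logger = logging.getLogger("shadow_eval")

def handle_request(features, prod_model, shadow_model):
    """Serve the production model; run the shadow model on the side."""
    prod_pred = prod_model.predict(features)

    # Shadow path: its output and any failure must never reach the user.
    try:
        shadow_pred = shadow_model.predict(features)
        logger.info("prod=%s shadow=%s agree=%s",
                    prod_pred, shadow_pred, prod_pred == shadow_pred)
    except Exception:
        logger.exception("shadow model failed; user request unaffected")

    return prod_pred  # only the production prediction is returned
```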
Maintaining two environments for instant rollback.
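A minimal sketch of that idea, assuming each environment is represented by a callable that serves requests (names and structure are illustrative): cutover and rollback are each a single pointer swap, which is what makes them effectively instant.

```python
class BlueGreenRouter:
    """Routes all traffic to one of two identical environments."""

    def __init__(self, blue, green):
        self.envs = {"blue": blue, "green": green}
        self.active = "blue"

    def idle(self):
        return "green" if self.active == "blue" else "blue"

    def deploy(self, new_version):
        self.envs[self.idle()] = new_version  # stage on the idle side

    def cut_over(self):
        self.active = self.idle()  # promote the idle side

    def rollback(self):
        self.active = self.idle()  # swap back just as fast

    def handle(self, request):
        return self.envs[self.active](request)
```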
Automated testing and deployment processes for models and data workflows, extending DevOps to ML artifacts.
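One small piece of such a pipeline is an automated quality gate that blocks promotion when a candidate model regresses. A sketch in pytest style, where the metrics file and threshold are made-up examples of what an earlier pipeline stage might produce:

```python
import json

ACCURACY_FLOOR = 0.90  # illustrative threshold; real gates come from policy

def test_candidate_meets_quality_gate():
    # metrics.json is assumed to be written by an earlier evaluation stage
    with open("metrics.json") as f:
        metrics = json.load(f)
    assert metrics["accuracy"] >= ACCURACY_FLOOR, (
        f"candidate accuracy {metrics['accuracy']:.3f} below gate "
        f"{ACCURACY_FLOOR:.2f}; refusing to promote"
    )
```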
Practices for operationalizing ML: versioning, CI/CD, monitoring, retraining, and reliable production management.
Central system to store model versions, metadata, approvals, and deployment state.
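At its core this is a versioned catalog with lifecycle state. A toy in-memory sketch (field names and stages are illustrative; real registries add artifact storage, lineage, and access control):

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    version: int
    metadata: dict
    stage: str = "registered"  # registered -> approved -> production -> archived

@dataclass
class ModelRegistry:
    records: dict = field(default_factory=dict)  # (name, version) -> ModelRecord

    def register(self, name: str, metadata: dict) -> int:
        version = 1 + max((v for (n, v) in self.records if n == name), default=0)
        self.records[(name, version)] = ModelRecord(name, version, metadata)
        return version

    def transition(self, name: str, version: int, stage: str) -> None:
        self.records[(name, version)].stage = stage

    def production_version(self, name: str):
        return next((r for r in self.records.values()
                     if r.name == name and r.stage == "production"), None)
```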
Incrementally deploying new models to reduce risk.
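A common mechanism for this is deterministic traffic splitting, so each user consistently sees the same variant while only a small fraction hits the new model. A sketch with an illustrative 5% split; hashing the user ID keeps assignment stable across requests:

```python
import hashlib

CANARY_FRACTION = 0.05  # start small, widen as confidence grows

def route(user_id: str) -> str:
    """Deterministically assign a user to 'canary' or 'stable'."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "canary" if bucket < CANARY_FRACTION else "stable"
```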
A mismatch between training and deployment data distributions that can degrade model performance.
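Such a mismatch can be flagged by comparing feature distributions between training data and live traffic. A sketch using a two-sample Kolmogorov-Smirnov test per numeric feature; the significance threshold is illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(train: np.ndarray, prod: np.ndarray, alpha: float = 0.01):
    """Return (column, statistic) pairs where train/prod distributions differ.

    train and prod are (n_samples, n_features) arrays of numeric features.
    """
    flagged = []
    for j in range(train.shape[1]):
        stat, p_value = ks_2samp(train[:, j], prod[:, j])
        if p_value < alpha:  # distributions significantly differ
            flagged.append((j, stat))
    return flagged
```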
Systematic differences in model outcomes across groups; arises from data, labels, and deployment context.
Reducing numeric precision of weights/activations to speed inference and reduce memory with acceptable accuracy loss.
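For instance, post-training quantization of a weight matrix to 8-bit integers fits in a few lines. A numpy sketch of symmetric uniform quantization (real toolchains also calibrate activations and use per-channel scales):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric uniform quantization of weights to int8, plus a scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, scale)).max())
```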
Review process before deployment.
Central catalog of deployed and experimental models.
AI used without governance approval.
Model behaves well during training but not deployment.
Achieving task performance by providing a small number of examples inside the prompt without weight updates.
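The mechanics are just prompt construction. A sketch that assembles a few labeled examples ahead of the query; the examples, task, and label set are made up:

```python
EXAMPLES = [
    ("The battery died after two days.", "negative"),
    ("Setup took thirty seconds, flawless.", "positive"),
    ("It works, but the manual is useless.", "mixed"),
]

def few_shot_prompt(query: str) -> str:
    """Build an in-context classification prompt; no weights are updated."""
    lines = ["Classify the sentiment of each review."]
    for text, label in EXAMPLES:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(few_shot_prompt("Great screen, terrible speakers."))
```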
Structured dataset documentation covering collection, composition, recommended uses, biases, and maintenance.
Techniques that fine-tune small additional components rather than all weights to reduce compute and storage.
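The best-known instance is LoRA, where a frozen weight matrix W is augmented with a trainable low-rank product BA. A numpy sketch of the forward pass (rank, scaling, and sizes are illustrative); only A and B would be updated during fine-tuning, about 2rd parameters instead of d²:

```python
import numpy as np

d, r = 512, 8                    # hidden size and low rank, r << d
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # pretrained weight: frozen
A = rng.standard_normal((r, d)) * 0.01   # trainable
B = np.zeros((d, r))                     # trainable; zero init => no-op at start
alpha = 16.0                             # scaling hyperparameter

def forward(x: np.ndarray) -> np.ndarray:
    """Effective weight is W + (alpha / r) * B @ A; only A and B train."""
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((4, d))
print(forward(x).shape)  # (4, 512)
```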
Systematic review of model/data processes to ensure performance, fairness, security, and policy compliance.
Stress-testing models for failures, vulnerabilities, policy violations, and harmful behaviors before release.
Removing weights or neurons to shrink models and improve efficiency; can be structured or unstructured.
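A minimal sketch of the unstructured (magnitude) variant, which zeroes the smallest-magnitude weights to reach a target sparsity; the 90% level is illustrative:

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero the smallest-magnitude weights; keep the largest (1 - sparsity)."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

w = np.random.randn(256, 256)
pruned = magnitude_prune(w, sparsity=0.9)
print("fraction zeroed:", np.mean(pruned == 0))  # ~0.9
```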
A discipline ensuring AI systems are fair, safe, transparent, privacy-preserving, and accountable throughout lifecycle.
Ensuring decisions can be explained and traced.
Categorizing AI applications by impact and regulatory risk.
Required human review for high-risk decisions.
Cost to run models in production.
Organizational uptake of AI technologies.
Maintaining alignment under new conditions.
Train/test environment mismatch.
US framework for AI risk governance.
Requirement to provide explanations.
Central log of AI-related risks.