Domain: MLOps & Infrastructure
Batch inference: Running predictions over a large dataset on a schedule, rather than per request.
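A scheduled batch job typically streams records through the model in fixed-size chunks. A minimal sketch, with a hypothetical one-feature model standing in for a real one:

```python
from typing import Callable, Iterable, Iterator

def batch_predict(
    records: Iterable[dict],
    model: Callable[[list[dict]], list[float]],
    chunk_size: int = 2,
) -> Iterator[tuple[dict, float]]:
    """Score records in fixed-size chunks, as a periodic batch job would."""
    chunk: list[dict] = []
    for rec in records:
        chunk.append(rec)
        if len(chunk) == chunk_size:
            yield from zip(chunk, model(chunk))
            chunk = []
    if chunk:  # flush the final partial chunk
        yield from zip(chunk, model(chunk))

# Hypothetical model: scores each record by a single feature.
model = lambda chunk: [r["x"] * 0.5 for r in chunk]
rows = [{"id": i, "x": float(i)} for i in range(5)]
scored = list(batch_predict(rows, model))
```

In a real pipeline the chunked iteration keeps memory bounded and lets each chunk's predictions be written out before the next is loaded.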
Blue-green deployment: Maintaining two identical production environments so traffic can be switched between them instantly for release or rollback.
Canary deployment: Incrementally rolling a new model out to a small slice of traffic first, limiting the blast radius of a bad release.
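Canary routing is often done by hashing a stable request attribute into a bucket, so each user consistently sees the same model version. A minimal sketch, assuming a string request ID:

```python
import hashlib

def route(request_id: str, canary_fraction: float = 0.05) -> str:
    """Deterministically send a fixed fraction of traffic to the canary model.

    Hashing the request ID (rather than sampling randomly) keeps routing
    sticky: the same ID always lands in the same bucket.
    """
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_fraction * 100 else "stable"
```

Raising `canary_fraction` in steps (5% → 25% → 100%) while watching error rates is the usual promotion path.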
CI/CD for ML: Automated testing and deployment pipelines that extend DevOps practices to ML artifacts such as models, data, and training code.
Data drift: A shift in the distribution of input features over time, relative to the data the model was trained on.
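One common way to quantify such a shift in a feature's distribution is the population stability index (PSI), which compares binned frequencies of a baseline sample against a live sample. A minimal pure-Python sketch:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.

    Rule of thumb often used in practice: < 0.1 stable, 0.1-0.25 moderate
    shift, > 0.25 significant shift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def proportions(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Smooth empty bins so the log stays defined.
        return [(c + 1e-4) / (len(xs) + bins * 1e-4) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Identical samples score near zero; a shifted sample scores well above the usual 0.1 alert threshold.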
Distribution shift: A mismatch between the data distributions seen in training and in deployment, which can silently degrade model performance.
Feature store: A centralized repository for curated, versioned features shared across training and serving.
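The core contract of a feature store is lookup by entity and feature name, with the same retrieval logic used at training and serving time. A deliberately tiny in-memory sketch (real systems like Feast add versioning, point-in-time joins, and offline/online storage):

```python
class FeatureStore:
    """Minimal in-memory sketch: features keyed by (entity_id, feature_name)."""

    def __init__(self) -> None:
        self._data: dict[tuple[str, str], float] = {}

    def put(self, entity_id: str, name: str, value: float) -> None:
        self._data[(entity_id, name)] = value

    def get_vector(self, entity_id: str, names: list[str]) -> list[float]:
        # Missing features default to 0.0 so training and serving
        # resolve absent values identically.
        return [self._data.get((entity_id, n), 0.0) for n in names]

store = FeatureStore()
store.put("user-1", "age", 30.0)
store.put("user-1", "clicks_7d", 5.0)
vec = store.get_vector("user-1", ["age", "clicks_7d", "account_age_days"])
```

Centralizing this lookup is what prevents training and serving code from computing the "same" feature two different ways.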
Feedback loop: Feeding production outcomes back into training to improve future model versions.
Inference pipeline: The path a request follows through preprocessing, model execution, and postprocessing in production.
MLOps: The practice of operationalizing ML, covering versioning, CI/CD, monitoring, retraining, and reliable production management.
Model monitoring: Observing model inputs, outputs, latency, cost, and quality over time to catch regressions and drift.
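Such monitoring is often implemented as rolling-window statistics over recent requests, so alerts reflect current behavior rather than all-time averages. A minimal sketch tracking tail latency and mean prediction:

```python
from collections import deque

class ModelMonitor:
    """Rolling-window stats over recent requests to surface regressions."""

    def __init__(self, window: int = 100) -> None:
        # deque(maxlen=...) silently drops the oldest entry when full.
        self.latencies: deque[float] = deque(maxlen=window)
        self.predictions: deque[float] = deque(maxlen=window)

    def record(self, latency_ms: float, prediction: float) -> None:
        self.latencies.append(latency_ms)
        self.predictions.append(prediction)

    def p95_latency(self) -> float:
        xs = sorted(self.latencies)
        return xs[int(0.95 * (len(xs) - 1))]

    def mean_prediction(self) -> float:
        return sum(self.predictions) / len(self.predictions)

monitor = ModelMonitor(window=100)
for i in range(1, 101):
    monitor.record(latency_ms=float(i), prediction=0.5)
```

A jump in `p95_latency()` or a sustained move in `mean_prediction()` relative to a baseline is the typical trigger for an alert or a drift investigation.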
Online inference: Serving a low-latency prediction per individual request.
Prediction drift: A shift in the distribution of the model's outputs over time, often an early symptom of input drift or upstream changes.
Shadow deployment: Running a new model alongside the production model on live traffic, logging its outputs for comparison without affecting user-facing responses.
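The defining property of shadowing is that the candidate model can never change, slow, or break the user's response; its output is only logged for offline comparison. A minimal sketch with hypothetical callables for the two models:

```python
def serve(request, primary, shadow, log: list) -> float:
    """Return the primary model's answer; run the shadow for comparison only."""
    answer = primary(request)
    try:
        shadow_answer = shadow(request)
        log.append({
            "request": request,
            "primary": answer,
            "shadow": shadow_answer,
            "diff": abs(answer - shadow_answer),
        })
    except Exception:
        # A shadow failure must never affect the user-facing response.
        pass
    return answer

log: list = []
primary = lambda r: r * 2          # hypothetical production model
shadow = lambda r: r * 2 + 1       # hypothetical candidate model
result = serve(3, primary, shadow, log)
```

Production systems usually run the shadow call asynchronously as well, so it cannot even add latency; the try/except here captures the isolation property in its simplest form.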
Training pipeline: The end-to-end automated process that takes raw data through preprocessing, training, and evaluation to a deployable model.