Scaling up your model operations? In this blog, we offer practical advice on how to build your MLOps roadmap.
Performance monitoring
Define performance monitors for your use cases and track them continuously, no matter how short or long your feedback loop is.
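To make this concrete, here is a minimal sketch of such a monitor, assuming a classification use case where ground truth arrives in batches; the metric choice and the alert floor are illustrative assumptions, not any particular product's API.

```python
from dataclasses import dataclass
from sklearn.metrics import accuracy_score

@dataclass
class PerformanceMonitor:
    """Compute a metric per feedback batch and flag values below a floor."""
    metric_floor: float = 0.85  # assumed acceptable accuracy for this use case

    def evaluate_batch(self, y_true, y_pred):
        score = accuracy_score(y_true, y_pred)
        return {"accuracy": score, "alert": score < self.metric_floor}

# Call evaluate_batch whenever a batch of ground truth arrives, whether that
# is minutes later (e.g. ad clicks) or months later (e.g. loan defaults).
monitor = PerformanceMonitor(metric_floor=0.85)
print(monitor.evaluate_batch([1, 0, 1, 1], [1, 0, 0, 1]))
```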
Prediction shift
Identify changes in your model's input-output relationship and pinpoint where assumed and real-world input-output pairs have diverged.
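As an illustration (not a specific vendor implementation), one common way to surface prediction shift is to compare the distribution of production model outputs against a reference window with a two-sample test; the p-value threshold below is an assumed value.

```python
import numpy as np
from scipy.stats import ks_2samp

def prediction_shift(reference_scores, production_scores, p_threshold=0.01):
    """Flag a shift when production output scores no longer resemble the
    reference (e.g. training or last-known-good) output distribution."""
    statistic, p_value = ks_2samp(reference_scores, production_scores)
    return {"ks_statistic": statistic, "p_value": p_value,
            "shift_detected": p_value < p_threshold}

# Example with synthetic scores: production outputs have drifted upward.
rng = np.random.default_rng(0)
reference = rng.normal(0.4, 0.1, size=5_000)
production = rng.normal(0.55, 0.1, size=5_000)
print(prediction_shift(reference, production))
```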
Performance degradation
Measure and monitor model performance continuously to detect sudden drops or gradual changes over time before they impact your business outcomes.
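A minimal sketch of degradation detection, assuming a rolling window over a single metric and an illustrative tolerance relative to an offline baseline:

```python
from collections import deque

class DegradationDetector:
    """Compare a rolling mean of a performance metric against a baseline
    and alert when it drops by more than a relative tolerance."""
    def __init__(self, baseline, window=30, tolerance=0.05):
        self.baseline = baseline          # e.g. validation AUC at deployment
        self.tolerance = tolerance        # assumed acceptable relative drop
        self.values = deque(maxlen=window)

    def update(self, metric_value):
        self.values.append(metric_value)
        rolling = sum(self.values) / len(self.values)
        degraded = rolling < self.baseline * (1 - self.tolerance)
        return {"rolling": rolling, "degraded": degraded}

detector = DegradationDetector(baseline=0.90)
for day, auc in enumerate([0.91, 0.89, 0.88, 0.84, 0.82]):
    print(day, detector.update(auc))
```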
Label shift
Monitor for changes between the true label distribution in your training data and the true label distribution in your collected ground truth.
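For example (a sketch that assumes categorical labels and batch-collected ground truth), label shift can be quantified by comparing the two label distributions directly; the distance threshold is an assumption:

```python
from collections import Counter

def label_shift(training_labels, ground_truth_labels, tvd_threshold=0.1):
    """Total variation distance between the training label distribution and
    the label distribution observed in collected ground truth."""
    classes = set(training_labels) | set(ground_truth_labels)
    train_counts = Counter(training_labels)
    truth_counts = Counter(ground_truth_labels)
    n_train, n_truth = len(training_labels), len(ground_truth_labels)
    tvd = 0.5 * sum(abs(train_counts[c] / n_train - truth_counts[c] / n_truth)
                    for c in classes)
    return {"tvd": tvd, "shift_detected": tvd > tvd_threshold}

# Training data was balanced; the collected ground truth is not.
print(label_shift(["churn"] * 500 + ["stay"] * 500,
                  ["churn"] * 200 + ["stay"] * 800))
```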
Label proportion
Track the distribution of labels or classes to ensure that poor performance isn't flying under the radar.
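One way to keep class proportions visible (an illustrative sketch; the minority-class floor is an assumed threshold) is to compute per-window class shares and alert when a class all but disappears:

```python
from collections import Counter

def class_proportions(labels, min_share=0.02):
    """Per-class share within a time window, flagging classes whose share
    drops below an assumed floor, where poor performance can hide."""
    counts = Counter(labels)
    total = len(labels)
    shares = {cls: count / total for cls, count in counts.items()}
    starved = [cls for cls, share in shares.items() if share < min_share]
    return {"shares": shares, "under_floor": starved}

# A window where the 'fraud' class has almost vanished from predictions.
window = ["legit"] * 990 + ["fraud"] * 10
print(class_proportions(window))
```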
Featured resources
Putting together a continuous ML stack
Due to the increased usage of ML-based products within organizations, a new CI/CD-like paradigm is on the rise. On top of testing your code, building a package, and continuously deploying it, we must now incorporate CT (continuous training), which can be triggered stochastically by events and data rather than depending solely on time-scheduled triggers.
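As an illustration of the CT idea (a sketch with hypothetical inputs and thresholds, not a reference to any particular orchestrator), retraining can be gated on data- and event-driven conditions rather than a cron schedule:

```python
def should_retrain(drift_score, days_since_training, new_labeled_rows,
                   drift_threshold=0.2, max_age_days=90, min_new_rows=10_000):
    """Event/data-driven continuous-training trigger: retrain when drift is
    high, enough fresh labels have accumulated, or the model is simply stale.
    All thresholds here are illustrative assumptions."""
    return (drift_score > drift_threshold
            or new_labeled_rows >= min_new_rows
            or days_since_training > max_age_days)

if should_retrain(drift_score=0.27, days_since_training=21, new_labeled_rows=3_200):
    print("Trigger the training pipeline")  # e.g. kick off the CI/CD/CT job
```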
Build or buy? Choosing the right strategy for your model observability
If you’re using machine learning and AI as part of your business, you need a tool that gives you visibility into the models in production: How are they performing? What data are they receiving? Are they behaving as expected? Is there bias? Is there data drift? Clearly, you can’t do machine learning…