Self-service ML monitoring
Easily create, customize, and automate your ML monitoring with an extensive out-of-the-box library of metrics, monitoring policies, and notification channels.
Measure anything
Model metric store
Hit the ground running with 100+ pre-built metrics covering data, drift, performance, bias, and explainability. All metrics are accessible, customizable, and consumable from the metric store, SDK, and API.
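To make the idea of a drift metric concrete, here is a minimal, generic sketch of one classic drift score, the Population Stability Index (PSI), computed over categorical values. This is an illustrative implementation only, not Superwise's SDK or metric store API:

```python
import math
from collections import Counter

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between a baseline sample and a live sample.

    PSI near 0 means the distributions match; values above ~0.2 are
    commonly treated as meaningful drift.
    """
    categories = set(expected) | set(actual)
    exp_counts, act_counts = Counter(expected), Counter(actual)
    score = 0.0
    for c in categories:
        # Replace zero proportions with a small epsilon to keep log() defined.
        e = exp_counts[c] / len(expected) or eps
        a = act_counts[c] / len(actual) or eps
        score += (a - e) * math.log(a / e)
    return score
```

A metric like this would typically be evaluated per feature on a schedule, with the resulting scores stored and queried over time.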
Streamline your monitoring
Monitoring policy templates
Select from dozens of pre-built monitoring policy templates ranging from data drift to equal opportunity, or customize policies to take into account your domain expertise.
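The template-plus-override pattern described above can be sketched as follows. The schema here (field names, values) is hypothetical and purely illustrative, not Superwise's actual policy format:

```python
# Hypothetical policy template: monitor all features for drift daily
# and alert a channel when the score crosses a static threshold.
DRIFT_POLICY_TEMPLATE = {
    "name": "feature-drift",
    "metric": "psi",
    "scope": {"features": "all"},
    "schedule": "daily",
    "threshold": {"type": "static", "value": 0.2},
    "notify": ["#ml-alerts"],
}

def instantiate(template, **overrides):
    """Create a concrete policy from a template, applying domain-specific
    overrides (e.g. a tighter threshold for a business-critical feature)."""
    return {**template, **overrides}
```

The point of templates is exactly this split: a sensible default you can adopt as-is, plus a small set of overrides where your domain expertise says the default is wrong.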
Focus only on what matters
Dynamic anomaly detection
Your ML. Your rules. Use Superwise’s dynamic anomaly detection engine that takes seasonality and temporality into account. Tune the monitor’s sensitivity and detection direction, or even configure a fixed threshold.
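A toy version of seasonality-aware thresholding, with tunable sensitivity and detection direction, might look like this. It is a deliberately simplified sketch (per-slot mean ± k·stdev), not the engine Superwise actually uses:

```python
import statistics

def seasonal_bounds(history, season_length, sensitivity=3.0):
    """Per-slot (mean - k*sd, mean + k*sd) bounds for a repeating season.

    history: values sampled at a fixed interval; index i belongs to
    seasonal slot i % season_length (e.g. hour of day).
    """
    slots = [[] for _ in range(season_length)]
    for i, value in enumerate(history):
        slots[i % season_length].append(value)
    return [
        (statistics.mean(s) - sensitivity * statistics.pstdev(s),
         statistics.mean(s) + sensitivity * statistics.pstdev(s))
        for s in slots
    ]

def is_anomaly(value, slot, bounds, direction="both"):
    """Flag a value against its seasonal slot; direction can be
    'up', 'down', or 'both' to control which excursions alert."""
    lo, hi = bounds[slot]
    if direction == "up":
        return value > hi
    if direction == "down":
        return value < lo
    return value < lo or value > hi
```

Raising `sensitivity` widens the bounds (fewer alerts); restricting `direction` ignores excursions you don't care about, such as latency dropping.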
Stop the alert fatigue
Alert management console
You have complete control over which alerts are sent to which notification channels to ensure that the right teams get the right alert at the right time.
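Routing alerts to channels usually boils down to an ordered rule table: first matching rule wins, with a default fallback. A minimal sketch of that idea, with hypothetical rules and channel names:

```python
# Hypothetical routing rules: (predicate, channel), evaluated in order.
ROUTES = [
    (lambda a: a["severity"] == "critical", "pagerduty:ml-oncall"),
    (lambda a: a["policy"].startswith("drift"), "slack:#data-drift"),
]
DEFAULT_CHANNEL = "email:ml-team"

def route(alert):
    """Return the notification channel for an alert; first match wins."""
    for predicate, channel in ROUTES:
        if predicate(alert):
            return channel
    return DEFAULT_CHANNEL
```

Keeping the rules ordered and explicit is what prevents alert fatigue: critical pages go to on-call, routine drift goes to a team channel, and everything else falls through to a low-urgency default.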
See what Superwise can do for you
Try Superwise out or contact us to learn more.
Featured resources
Monitoring NLP with Superwise & Elemeta
In this post, we show how to use Elemeta together with Superwise’s model observability community edition to gain visibility into, and monitoring of, your NLP model’s input text.
Putting together a continuous ML stack
As ML-based products spread through organizations, a new CI/CD-like paradigm is emerging. On top of testing your code, building a package, and continuously deploying it, we must now incorporate CT (continuous training) that can be triggered by events and data, not only by time-based schedules.
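The event- and data-driven trigger logic described above can be sketched as a simple decision function. The signals and thresholds here are illustrative assumptions, not a prescribed CT design:

```python
def should_retrain(drift_score, days_since_training, labeled_rows,
                   drift_threshold=0.2, max_staleness_days=30,
                   min_new_rows=1000):
    """Decide whether to kick off a continuous-training run.

    Retrains on a drift event or a new-data event, with a time-based
    staleness fallback so schedules become the last resort, not the default.
    """
    if drift_score >= drift_threshold:
        return True, "drift"
    if labeled_rows >= min_new_rows:
        return True, "new-data"
    if days_since_training >= max_staleness_days:
        return True, "staleness"
    return False, "none"
```

In a pipeline, this function would sit between the monitoring layer (which emits the drift scores and data counts) and the training orchestrator (which consumes the trigger).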