
Performance monitoring

Define performance monitors for your use cases and track them continuously, no matter how short or long your feedback loop is.


Prediction shift

Example monitor: "I want to monitor output drift for prediction in all LTV models across all segments."

Identify changes in your model's input-output relationship and pinpoint where assumed and real-world input-output pairs have diverged.

  • Get alerted when predictions shift on the segment level.
  • Identify increases and declines in prediction probability and confidence.
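One common way to quantify this kind of output drift is the Population Stability Index (PSI) between a baseline window of predictions and a current window. The sketch below is illustrative, not the product's implementation; the toy data, bin count, and the 0.2 alert threshold are all assumptions.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of model scores."""
    lo = min(expected + actual)
    hi = max(expected + actual)
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def bucket_frac(sample, i):
        left = lo + i * width
        right = lo + (i + 1) * width
        if i == bins - 1:  # last bin is right-inclusive
            count = sum(1 for x in sample if left <= x <= right)
        else:
            count = sum(1 for x in sample if left <= x < right)
        return max(count / len(sample), 1e-6)  # clip to avoid log(0)

    return sum(
        (bucket_frac(actual, i) - bucket_frac(expected, i))
        * math.log(bucket_frac(actual, i) / bucket_frac(expected, i))
        for i in range(bins)
    )

# Toy baseline vs. current LTV predictions for one segment (assumed data).
baseline = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5, 0.55, 0.6, 0.7]
current = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]

score = psi(baseline, current)
print(f"PSI = {score:.2f}")  # a common rule of thumb treats PSI > 0.2 as drift
```

Running the same comparison per segment, rather than only on the whole population, is what surfaces shifts that average out globally.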

Performance degradation

Example monitor: "I want to monitor distribution shift for prediction probability in risk model across all segments & entire set."

Measure and monitor model performance continuously and detect drops or gradual changes over time before they impact your business and outcomes.

  • Choose from a variety of out-of-the-box performance metrics such as recall, accuracy, and ROC AUC.
  • Create and monitor custom performance metrics that reflect your business KPIs.
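To make the distinction concrete, here is a minimal sketch of one built-in metric (recall) next to a custom, KPI-shaped one. The cost weights and toy labels are assumptions for illustration only.

```python
def recall(y_true, y_pred):
    """Standard out-of-the-box metric: share of true positives recovered."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if tp + fn else 0.0

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def cost_weighted_error(y_true, y_pred, fn_cost=10.0, fp_cost=1.0):
    """Custom KPI-style metric: a missed risk (FN) is assumed to cost 10x
    a false alarm (FP); the cost figures are hypothetical."""
    return sum(
        fn_cost if (t, p) == (1, 0) else fp_cost if (t, p) == (0, 1) else 0.0
        for t, p in zip(y_true, y_pred)
    )

# Toy ground truth and predictions for a binary risk model.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print(accuracy(y_true, y_pred))            # 0.75
print(recall(y_true, y_pred))              # 0.75
print(cost_weighted_error(y_true, y_pred)) # 11.0 (one FN + one FP)
```

Tracking the custom metric over time is what turns "model performance" into a number the business actually cares about.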

Label shift

Example monitor: "I want to monitor label shift for prediction in all CRO models across all segments."

Monitor for changes between the true label distribution in your training data and the true label distribution in your collected ground truth.

  • Pinpoint true label changes on the sub-population level.
  • Receive alerts if labels are not collected as expected downstream.
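A simple way to sketch this comparison is the total variation distance between the training label distribution and the distribution of newly collected ground truth. The labels, segment, and alerting threshold below are illustrative assumptions.

```python
from collections import Counter

def label_dist(labels):
    """Empirical label distribution of a sample."""
    counts = Counter(labels)
    return {k: v / len(labels) for k, v in counts.items()}

def total_variation(train_dist, live_dist):
    """Total variation distance between two label distributions (0 to 1)."""
    keys = set(train_dist) | set(live_dist)
    return 0.5 * sum(
        abs(train_dist.get(k, 0.0) - live_dist.get(k, 0.0)) for k in keys
    )

# Toy data: churn rate in collected ground truth doubled vs. training.
train_labels = ["churn"] * 20 + ["retain"] * 80
live_labels = ["churn"] * 40 + ["retain"] * 60

tv = total_variation(label_dist(train_labels), label_dist(live_labels))
print(f"label shift = {tv:.2f}")  # alert if above a chosen threshold
```

An empty `live_labels` window is itself a signal worth alerting on: it usually means ground truth is not being collected downstream as expected.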

Label proportion

Example monitor: "I want to monitor label proportion for prediction in 1 model across all segments."

Track the distribution of labels or classes to ensure that poor performance isn’t going under the radar.

  • Monitor binary & multi-class classification problems.
  • Distinguish accuracy issues from broader performance issues.
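A failure mode this catches: a model can report healthy accuracy while one class quietly disappears from its output. The sketch below tracks predicted class proportions over a window; the class names and the 1% floor are assumptions.

```python
from collections import Counter

def class_proportions(predictions, classes):
    """Share of each expected class in a window of predictions,
    including classes that never appear (share 0.0)."""
    counts = Counter(predictions)
    total = len(predictions)
    return {c: counts.get(c, 0) / total for c in classes}

# Toy multi-class risk tiers; "high" is never predicted in this window.
classes = ["low", "medium", "high"]
window = ["low"] * 55 + ["medium"] * 45

props = class_proportions(window, classes)
missing = [c for c, share in props.items() if share < 0.01]
print(props)    # {'low': 0.55, 'medium': 0.45, 'high': 0.0}
print(missing)  # ['high']
```

The same check works for binary problems, where a drifting positive-class proportion is often the earliest visible symptom of degradation.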

Try the community edition

No credit card required. FREE forever.