AI for marketing: how well is it working for you?

Superwise team

October 11th, 2020


While it is true that AI is everywhere, this is especially the case in marketing. Every leading marketing team today knows that machine learning can dramatically boost its effectiveness and impact. Whether it's identifying and engaging the users most likely to convert, ensuring that customer lifetime value (LTV) is realized within a short timeframe, or calibrating how much to spend on specific campaigns, the applications are endless when it comes to designing razor-sharp, ML-driven marketing programs.

Marketing environments are amongst the most dynamic and complex, defined as they are by an (almost) infinite number of data points, a wealth of offerings, and the volatility of customer behavior. As such, marketing applications of AI are amongst the most interesting case studies for robust AI assurance and monitoring solutions.

Who is babysitting your models?

In past posts, I referred to the “ownership gap”. In my conversations with prospects and customers, the question “Who owns your models in production?” is as common as it is thought-provoking. Once models are in production, it's tough for any organization to determine who is responsible for their health, and it's especially challenging for marketing use cases. Yet for most organizations, this grey area is left unaddressed: data science teams produce the models, but no one clearly defines whose role it is to attend to their performance.

There is friction here between the role of the data scientists who create the models and that of the marketing teams who actually use them. And this friction is not merely conceptual; it impacts the day-to-day processes of most organizations, which need to empower their marketing teams while reducing the maintenance and troubleshooting overhead on their data science teams.

While AI practitioners and organizations increasingly understand the need for a monitoring strategy, the ownership of models in production remains a yawning gap. For marketing use cases in particular, the actual predictions tend to be owned by marketing analysts, who need a clear understanding of what drives those predictions and their seasonality: was there a change in the data traffic, or are these the results of their own efforts? And how dependent are they on actual feedback from the predictions, which may take days or even weeks to arrive, to determine how well they are doing?

Granularity and timing: the holy grail of marketers and the strong suit of data scientists

The success of marketing campaigns lies in marketers' ability to understand their users at an almost intimate level; that is to say, below the surface. In this sense, granularity and an understanding of the statistical significance of specific sub-populations are paramount. Metrics that the data science team considers valid may not be valid for your marketing teams. Take accuracy: a 75% accuracy rate may be good enough for your overall model, but if that model is only 30% accurate for a specific sub-group of customers you're targeting, how good is it for your business? And how bad could that be for your next campaign?
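To make this concrete, here is a minimal sketch in pandas of how a team might compare overall accuracy against per-segment accuracy. The segment names and data are made up for illustration; nothing here is Superwise-specific:

```python
import pandas as pd

# Toy campaign data: each row is one prediction the marketing team
# acted on, with the customer's segment and the realized outcome.
# Segment names and values are hypothetical.
df = pd.DataFrame({
    "segment":   ["new", "new", "new", "returning", "returning", "vip", "vip", "vip"],
    "predicted": [1, 0, 1, 1, 1, 0, 1, 1],
    "actual":    [1, 0, 1, 0, 1, 1, 0, 1],
})

# Overall accuracy can look healthy...
overall = (df["predicted"] == df["actual"]).mean()
print(f"overall accuracy: {overall:.0%}")

# ...while a targeted sub-population quietly underperforms.
per_segment = (
    df.assign(correct=df["predicted"] == df["actual"])
      .groupby("segment")["correct"]
      .mean()
)
print(per_segment)
```

The aggregate number alone would never reveal that one of the segments you are actively targeting performs far below the model-level average.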

As such, it becomes clear that monitoring the health of your models is not only about ensuring their performance, and not only about looking at high-level business result metrics. It is also, and no less importantly, about giving both the data science and the business operations teams the right visibility and the right level of control.

More to the point, timing is of the essence. Very often, the realization that something went wrong with the predictions comes only once the business has already been impacted. Ultimately, this makes the marketing team less enthusiastic about relying on AI predictions, which translates into friction, frustrating manual exploration, and firefighting that leaves your data science teams out of breath. In other words, without the right visibility at the right time, your AI program will not have the impact it was designed for.

This is where AI assurance comes into play. It monitors the health of your models while supporting the practices that help all of your teams gain the right insights at the right time. Whether the issue is bias or concept drift, your data science teams need to know so they can optimize their models, and your marketing teams want to know so they can optimize their campaigns.


The model observability formula

At Superwise, we monitor the health of your models in production and alert you when something goes wrong. We also provide complete visibility into what's going on by creating a single common language across the enterprise, so that data science and marketing teams can each benefit:

For the business:

Do better marketing – more than 10% error reduction in campaign spend.

Marketing teams can now independently understand, in a timely manner, when the predictions they receive are not optimal, and gain insights into the data categories that influence the model and its decision-making before damage is done. By catching degradations in real time, whether they stem from data or infrastructure changes, drifts, or biases, teams can reduce investment leakage. In addition, the ability to analyze data at a low granularity enables teams to track specific behaviors for particular segments and optimize their campaigns.

For the data science teams:

Do less firefighting – 96% reduction in time to detect and fix anomalies.

With Superwise, data science teams benefit from a thorough understanding of their models' health, with metrics and performance tracked over time and across versions, and automatic prediction of performance levels to circumvent blind-spot periods. The days of waiting for the business to be impacted to get a sense of the health of the models are long gone!

Data science teams also receive alerts on data and concept drifts, biases, and performance issues, with correlated events grouped together to avoid too much noise, enabling them to be more proactive and prevent AI failures. Last but not least, they can derive better retraining strategies from key insights into the real-life behavior of their models.
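As a rough illustration of the kind of check behind a data-drift alert, here is a generic two-sample test sketch; this is not Superwise's actual implementation, and the feature values and alert threshold are made up:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical values of a single model feature: the training-time
# baseline vs. what the model is seeing in production this week.
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)
production = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted distribution

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
# production distribution no longer matches the training baseline.
statistic, p_value = ks_2samp(baseline, production)
if p_value < 0.01:
    print(f"drift suspected (KS={statistic:.3f}, p={p_value:.2e}) - alert the team")
else:
    print("no significant drift detected")
```

In practice, a monitoring platform runs checks like this continuously, per feature and per segment, and correlates the resulting events so that a single upstream change does not fire dozens of separate alerts.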


