Matt Shump

Superwise team

April 9th, 2021


I’m the VP of data at ChowNow, where we provide commission-free online food ordering products and platforms to local restaurants. I own data, analytics, and data science, and I’m fortunate to be a member of the executive team. We serve all the different functions: we help guide strategy, product development, and business improvement, and we partner with security and engineering to build all data and data science-related infrastructure and processes. When I joined 5 months ago, we had 2 people on the centralized team. Today we have 8, and we’re going to double the team in size again next year. We focus on impact, empowering data-informed decision making, and building data science tools, so we’re a mission-driven startup within a mission-driven startup, which is extremely fun.

So you’re really at a point where you’re scaling your use of AI or have you started that?

At ChowNow, we are in the nascency of focusing on ML and AI use cases: identifying what those are and what core problems need to be solved within our industry and for our product. There are really three steps in the way that I think about ML and AI. The first step is around decision intelligence. So the ad hoc process: “I built a predictive model that identifies the specific features that describe the variation in the response, and I can use that for decision making. I can talk to the business decision-makers or the product team, and we can inform our strategy that way.” Then the next two steps are the scaling steps: operationalizing and productizing. That’s where I think most people think ML starts to come in: how you automate to systematically impact your operational workflows and decision-making, or to integrate with your product and ultimately touch your end customers in a scalable, automated way. At ChowNow we are at that first step: decision intelligence, building towards the next two steps.

Do you have any best practices around scaling the use of AI?

I’d say the first one is to consider the process and method. When an organization is thinking about doing ML, it often misses the fact that AI and ML solutions require a scientific approach first. It’s not: “I am going to write a predictive model function around all of my data and ship it.”

There’s an actual empirical process that you have to go through:

  • What is the problem we’re solving?
  • How are we actually going to drive action or interact with our operational flow or customer experience?
  • Do we have the data that’s requisite for that?
  • How am I going to identify and predict patterns, establish that I can make an impact, and build confidence that it generalizes?

And the list goes on… That specific first step is the decision intelligence step that I was talking about before, which is very ad hoc and not automated yet. So that process always has to happen first.

You can’t jump straight to “flip the switch”; you have to work through that continuum first. These are key principles that I think about, but only as a result of the black-and-blue bruises and failures that came from not thinking about them.

What’s the second one?

Another one is around the infrastructure. Oftentimes there is actually net new infrastructure that you have to consider, beyond what you would normally have with a BI team and a product team. Even if you have a data science team that’s building predictive models in a Python notebook, it could be that none of the infrastructure in place actually allows the data to arrive from all these different sources with low enough latency to serve your product at the time that you need it. You might not have the right infrastructure to rescore, refit, or rebuild a model at the right time or in the right instances to actually serve those needs. So then you have to think about: “Can I build this internally? Can I make that investment, or can I buy it?”
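As a concrete illustration of that kind of net new plumbing, here is a minimal sketch of a freshness guard that refuses to rescore on stale inputs. The 6-hour threshold and the shape of the scoring job are assumptions for illustration, not ChowNow’s actual setup:

```python
from datetime import datetime, timedelta, timezone

# Assumed freshness threshold: only rescore when upstream data is recent enough.
MAX_STALENESS = timedelta(hours=6)

def is_fresh_enough(last_updated_at: datetime) -> bool:
    """True if the source data (tz-aware timestamp) is recent enough to score on."""
    return datetime.now(timezone.utc) - last_updated_at <= MAX_STALENESS

def rescore(last_updated_at: datetime, score_fn, rows: list) -> list:
    """Refuse to rescore on stale inputs rather than silently serving
    predictions built from data that missed its delivery window."""
    if not is_fresh_enough(last_updated_at):
        raise RuntimeError(
            f"Source data is stale (last update {last_updated_at}); skipping rescore"
        )
    return [score_fn(row) for row in rows]
```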

Also, there are a lot of smaller organizations that want to start this AI/ML journey. It is a very large upfront investment if you want to build everything yourself internally: a cross-functional team with specific skills, subject matter experts, net new infrastructure, operational excellence and software engineering best practices wrapped around the data flows and the performance of the model in production, etc. So the rise of MLOps is really interesting and a key consideration when you’re starting this journey.

Any more thoughts?

It goes without saying that data’s important, but data takes a different shape in the ML world.

I like to use this imperfect metaphor: if there’s a bug in your production code for your existing product today, your application goes down and there is direct customer impact. If there’s no value in a field that’s critical for creating a score for an ML model that’s in production and you don’t have the right controls in place, your model goes down and there is direct customer impact.

Unless you have the right processes in place to make sure the data doesn’t have any issues, and error catching and guardrails for when it does, a lack of perfectly accurate, on-time data is just as bad as a bug in your production code. It’s a whole other area of consideration that most people don’t even think about when they’re looking to automate the predictive models they’ve made.
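To make that concrete, here is a minimal sketch of such a guardrail, assuming a hypothetical feature schema; the feature names and fallback behavior are illustrative:

```python
REQUIRED_FEATURES = ["company_size", "industry", "web_visits_30d"]  # assumed schema

def missing_features(record: dict) -> list:
    """Return the required features that are absent or null in this record."""
    return [f for f in REQUIRED_FEATURES if record.get(f) is None]

def score_with_guardrail(record: dict, model_score, fallback_score: float = 0.5):
    """Catch bad inputs before they reach the model: fall back to a safe
    default score and leave a trail for monitoring to pick up."""
    missing = missing_features(record)
    if missing:
        print(f"WARN: missing features {missing}; serving fallback score")
        return fallback_score
    return model_score(record)
```

The point is not the specific fallback; it’s that a null field degrades gracefully and visibly instead of taking the model down.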


Two other quick considerations that are more high level: what is the team and what is the organizational structure?

I think this is another classic challenge: do I hire someone externally? Is this just a data scientist job? Is this a product that can be owned only by the product engineering teams? From my experience, if you think about the two places where you can automate an ML model, one is within your operations on the business side and the other one is in your products.

I have not met a sales leader or a marketing leader who’s willing to let me black-box automate a lead scoring model for them. They want to know what’s going on underneath the hood.

They want to know when it’s going to be delivered and how it’s being delivered. If it’s down, they need to know about it because it’s impacting revenue. They’re the ones who are on the hook for that metric. The data science team, even if they build the model and even if they could solve all of the technical considerations underneath, they have to be doing it in partnership with the operational owners who own those metrics they care about. The same thing applies to Product and Product Engineering.
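One common way to keep a lead scoring model out of black-box territory is to ship every score with its per-feature contributions. Here is a minimal sketch using a plain linear (logistic-style) model; the weights and feature names are illustrative assumptions, not ChowNow’s model:

```python
import math

# Illustrative weights; real values would come from a fitted model.
WEIGHTS = {"web_visits_30d": 0.08, "demo_requested": 1.4, "employee_count": 0.002}
BIAS = -2.0

def score_lead(features: dict):
    """Return the score plus per-feature contributions so the operational
    owner can see exactly what is driving each lead's score."""
    contributions = {
        name: weight * features.get(name, 0.0) for name, weight in WEIGHTS.items()
    }
    logit = BIAS + sum(contributions.values())
    return 1 / (1 + math.exp(-logit)), contributions

prob, why = score_lead({"web_visits_30d": 12, "demo_requested": 1, "employee_count": 40})
print(f"lead score: {prob:.2f}", why)  # the 'why' is what earns the sales team's trust
```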

It’s not one magic formula. It’s not one data scientist you could bring to the organization. And it’s also not just a product team that now can write a Python function around the data that’s available to them and then solve the ML problem. It really requires a cross-functional team to do this. ML is really a cross-functional problem to solve.

Who owns the models at ChowNow? What is the organizational structure?

The organizational structure and delineation of ownership have been different at every company that I’ve worked in, for better or worse. At ChowNow, there is a very progressive and modern way of looking at executive representation and ownership of data: the data, analytics, and data science team owns the scientific methods. We own the internal data and data science infrastructure, et cetera, but we don’t own the product, and we don’t own the operations; those are owned by other executives. When building out the decision intelligence phase and understanding how to operationalize or productize ML, we are still the subject matter experts. However, we don’t own the production backend of our application: we’re not the Product Engineering team or the DevOps team. Those are the teams we partner with, and it is a cross-functional problem to solve.

We’re just the subject matter experts of those scientific capabilities; we partner to build operational excellence and timeliness around the data, scoring, and performance. Do we have the right observability around that? Do we have alerting? What’s our on-call and incident response plan? These are examples of things that we uphold in partnership with these functional teams. But in the end, the marketing and sales teams are responsible for the conversion of our leads and the retention of our customers, and the product and engineering teams are responsible for the experience and the availability of our products. We have to make sure that we’re operating collaboratively with them, in their world and their ecosystem.

Once the models are in production, what is the ongoing interaction with the business functions?

In the ideal future state, we partner with the Engineering team to build the right monitoring, observability, error handling, and performance monitoring to know whether the models are working or not. If they’re not, there is an incident response plan to figure out what’s going on and resolve it, with a retrospective to learn, improve, and prevent recurrence. For ML models, though, there is a new set of problems you have to figure out: how to build those tools and capabilities, and whether that’s a build or a buy. This is what allows a data scientist to not be looking at it on a daily or hourly basis…
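As a sketch of what one such check could look like, here is a rolling health check on prediction error; the metric, baseline, and tolerance are assumptions for illustration:

```python
from statistics import mean

def model_is_healthy(recent_errors: list, baseline_mae: float,
                     tolerance: float = 1.5) -> bool:
    """Compare the rolling mean absolute error of recent predictions to a
    baseline; degradation beyond the tolerance should open an incident."""
    if not recent_errors:
        return False  # no predictions flowing is itself an incident
    rolling_mae = mean(abs(e) for e in recent_errors)
    return rolling_mae <= tolerance * baseline_mae

# Hypothetical usage inside a scheduled monitoring job:
if not model_is_healthy(recent_errors=[0.4, 0.9, 1.2], baseline_mae=0.5):
    print("ALERT: model degraded; trigger the incident response plan")
```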

With that said, there is still another classic challenge for organizations as they go from “Hey, I built this predictive model” to actually putting it into production and touching the operations: do you have the right cross-functional team and the right process and ownership in place to be paying attention to this?

Now if you have an ML model that’s in production, you’re probably going to have a data scientist dedicated to improving it or exploring new ones. But as you start to get more models touching your operational workflows, it becomes a scalability question: do you want data scientists only maintaining existing models, or do you want them exploring and building new ones? It just depends on how big an organization you are, and how many models you have in place versus the other important problems the organization needs Data Science’s help to solve.

What’s keeping you up at night in terms of the way the AI could go wrong?

The failure of AI for me is often direct: it ends up being directly impactful to revenue. I’ll go back to the lead score example: if lead scoring fails overnight, we have a hundred sales reps without the leads they need, and we could lose a full day’s worth of revenue. If you aren’t prepared for that, it’s potentially terrifying. It keeps me from being overly excited about being ready to flip the switch and automate our predictive models. I have to build this like it’s a product; otherwise, when this does go wrong, it goes really wrong. And it’s really hard to get off that treadmill afterwards or repair the trust of the Operations or Product owner.

How did COVID-19 impact ChowNow?

At ChowNow our mission is to help local restaurants thrive. COVID-19 was devastating to most local restaurants, forcing all diners to order food online, so we focused all of our energy on helping those restaurants survive the storm. All other problems pale in comparison, but from a data perspective, if you look at any of our historical data, there is a step function, as well as a secular change across all metrics, as of March 2020.

The first thing that anybody thinks about when they want to build a predictive model is to look at observational historical data. This creates a significant data science challenge: understanding seasonal trends, what this secular change actually means, and which historical trends you can and can’t use. Can we use any data for modeling purposes from the volatile start of the pandemic? Before March 2020, it was a totally different world with different market dynamics.

We’ve actually had to lean on intuition and consider experimentation more to get ourselves through this time where there is no such thing as a baseline. I can absolutely speculate that other organizations with predictive or machine learning models in production, in some way, shape, or form, had to think about the memory assumptions of their models. ChowNow is just starting our journey in ML, so we didn’t have any catastrophic fallout from COVID-19 on production ML models.
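One simple way to encode a break like that in training data is to drop pre-break history and flag the volatile early-pandemic months as their own regime. A minimal sketch, where the March 2020 cutoff comes from the interview and the volatility window is an assumption:

```python
from datetime import date

COVID_BREAK = date(2020, 3, 1)     # the step change described above
VOLATILE_UNTIL = date(2020, 9, 1)  # assumed end of peak volatility

def prepare_training_rows(rows: list) -> list:
    """Drop pre-break history that no longer reflects market dynamics and
    flag the volatile early-pandemic months as a separate regime."""
    prepared = []
    for row in rows:
        if row["date"] < COVID_BREAK:
            continue  # pre-pandemic baselines no longer apply
        prepared.append(dict(row, volatile_regime=row["date"] < VOLATILE_UNTIL))
    return prepared
```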
