So you want to be API-first?

Deciding to become an API-first product is not a trivial decision for a company. There needs to be deep alignment throughout the company, from R&D all the way to marketing, on why and how an API-first approach will accelerate development, go-to-market, and the business at large. But more importantly, just like you need product-market fit, you need product-market-API fit. There is a big difference between externalizing APIs and being API-first, and depending on your clients and their use cases, you’ll need to understand whether exposing APIs or going API-first is the right choice for you.

This post explores how APIs and API-first impact both the business and R&D through the evolution we at Superwise went through as we became an API-first product and business. 

APIs are not just about code

Luckily, we don’t need to go into depth here. APIs are so common at this point that even the most non-technical business persona knows that an API is an Application Programming Interface that standardizes communication so any two apps can send and receive data between each other. The problem is that APIs are so ubiquitous today that occasionally you’ll see businesses pushing for them without a strong product-API fit and/or product-dev maturity.

Should your APIs be first-class citizens?

You need to weigh a set of criteria before deciding what to do about APIs: go all-in and become API-first, expose a set of APIs, or say no to APIs entirely. There is no magic number of yeses or nos here; you might even say yes to everything listed below, and still, API-first will be wrong for your product or business.

Bigger picture fit

The first thing you need to figure out is where your API fits in the bigger picture and how integral it is to enhancing value. 

  • Is your solution part of a more extensive process? Yes, users tend to get annoyed with the overabundance of tools and platforms they need to use to do their jobs, but there is a big difference between a tool used monthly and a daily tool. 
  • Does consuming your solution via API generate more value for users? BI is an excellent example of higher value via API by making information accessible to all stakeholders in the organization.

Look at model observability, for example. It isn’t necessarily a day-to-day tool, but it is mission-critical, and when something goes wrong with ML in production, monitoring can trigger any set of processes to resolve the anomalies. Furthermore, almost always, you’ll also need to expose issues to other stakeholders in the organization so they can take preventive actions until the root cause is uncovered and the incident is resolved. 

Consistent reusability 

So you have a big-picture fit, and your API creates additional value for your users; fantastic. Now think about how your users use your product and whether this translates consistently, across your user base, to API usage.

  • Is your product-market fit ubiquitous? Will most of your users want to use the API more or less in the same way? Social login is an excellent example of API-first. It’s a product with consistent reusability across the user base.
  • Can any organization implement your API? This is about both endpoints, not just your API. If the system you typically integrate into is niche or requires specific domain knowledge, it could be that not all organizations will be receptive to your API because they don’t have the necessary resources to bake it into their processes. 
  • Do your users need an API or all the APIs? Is it worth your time and effort to go API-first, or will you get the same impact with one or two APIs in a non-API-first approach?  
  • Auth0/Frontegg – API-first. All authentication and authorization processes are done by API calls.
  • PayPal – Just API. All payments are done on the merchants’ website and sent to PayPal APIs.
  • Slack – Just API. Slack provides APIs, but its main value is in giving users an amazing organizational chat experience – which kinda needs a UI.

Superwise’s journey to API-first

In all honesty, when we first started exposing APIs, we didn’t have a robust process in place, much less an API-first mentality. We were exposing quite a few APIs, some for internal use in our web application and some for direct customer consumption. It was a headache to maintain both the APIs and their documentation, and we had no flow in place to handle the influx of customer requests to change/create APIs. In addition, and probably most importantly, because we didn’t have a well-defined process and mindset in place, there was a ton of miscommunication and ‘lost in translation’ moments between our backend and frontend teams that resulted, more often than I want to admit, in bugs and over-fetching.

All of these problems stemming from our APIs made us sit down and think about what is right for us when it comes to APIs and how to build processes that facilitate scale, both ours and our customers’, without the issues we had experienced until then. The result is evident from this post’s title; we decided to go API-first.

So what did we do?

APIs are a big deal for our customers, internal and external, and our product depends on the quality of our APIs and their ability to deliver value seamlessly. So for every API, we start by asking:

  • Who is the client? Internal? External?
  • Are the API requests and responses aligned with and fit for the client’s use case? You need to strike a balance between minimizing API calls and keeping invocations explicit. Too many API calls are inefficient, but confusing invocations are ineffective – both are detrimental to the user experience.

Once we figured all this out, we documented our APIs to create a “contract” for how the APIs will be consumed. This gives our frontend team the ability to mock data and continue developing front-end features without waiting on support from the backend (see the sketch below). Thinking about our APIs as an integral part of the product makes us examine every request to ensure that our APIs stay reusable and flexible.
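To make the “contract” idea concrete, here is a minimal sketch, assuming a Python/FastAPI stack (not necessarily the stack Superwise uses); the route and response model are hypothetical examples, not our actual API:

```python
# A minimal, illustrative API contract: the route, path parameter, and response
# schema are declared up front, before any real backend logic exists.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Models API (illustrative)")

class ModelSummary(BaseModel):
    id: str
    name: str
    open_incidents: int

@app.get("/v1/models/{model_id}", response_model=ModelSummary)
def get_model(model_id: str) -> ModelSummary:
    # A real implementation would query storage; what matters here is that
    # consumers can code (and mock) against the declared schema.
    return ModelSummary(id=model_id, name="churn-predictor", open_incidents=0)
```

Because FastAPI derives an OpenAPI document from declarations like these, the schema the frontend mocks against and the documentation stay in sync with the code.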

The advantages of going API-first

Going API-first, both technically and in terms of mindset, had a powerful impact on our ability to scale the application and integrate with external services rapidly. Before we started thinking about our APIs as first-class citizens, when a specific API came under load, it was impossible to scale just that API; we had to scale the whole application. With the switch to API-first, each API is designed around a microservice with a specific task. This enables us to scale each API according to its load and be efficient with our resources.

  • Minimize dependencies – An API-first mindset brings dependencies to the forefront and encourages us to decouple APIs by design so that updates and changes can be made at the API level rather than at the application level, where they would affect all APIs. This is not always attainable, but where it is, upgrading or changing an API becomes an easier, more independent task.
  • Parallelize development – Development teams can work in parallel by creating contracts (i.e., documenting your API’s routes, requests, and responses) between services on how the data will be exposed. This way, developers do not have to wait for updates to an API to be released before moving on to the next API, and teams can mock APIs and test API dependencies based on the established API definition (see the sketch after this list).
  • Speed up the development cycle – API-first means we design our APIs before coding. Early feedback on the design allows the team to adapt to new inputs while the cost of change is still relatively low, reducing overall cost over the project’s lifetime.
  • QA in design – Double down on the design phase; fixing issues once APIs are coded costs a lot more than fixing them during the design phase.
  • Design for reusability – You can also reduce development costs by reusing components across multiple API projects.
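To illustrate the parallel-development point, here is a hedged sketch of contract-based mocking, reusing the hypothetical ModelSummary contract from earlier; a consumer team can build and test against the stub before the real endpoint ships:

```python
# Contract-based mocking: the consumer codes against the agreed schema,
# not the backend implementation. All names here are illustrative.
from pydantic import BaseModel

class ModelSummary(BaseModel):
    id: str
    name: str
    open_incidents: int

class MockModelsClient:
    """Returns canned payloads that validate against the agreed contract."""

    def get_model(self, model_id: str) -> ModelSummary:
        return ModelSummary(id=model_id, name="churn-predictor", open_incidents=2)

def render_model_card(client, model_id: str) -> str:
    # Consumer-side code: it only depends on the contract's fields.
    m = client.get_model(model_id)
    return f"{m.name}: {m.open_incidents} open incidents"

assert render_model_card(MockModelsClient(), "m-123") == "churn-predictor: 2 open incidents"
```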

Key points to becoming API-first

Implementing an API-first development approach in your organization requires planning, cooperation, and enforcement. Here are some key points and concepts to bake into your API-first strategy to make sure it’s a success:

  • Get early feedback – Understanding who your API clients are, inside and outside of your organization, and getting early feedback on API designs helps you ensure API-use case fit. This will make APIs easier to use and shorten your development cycle.
  • Always design first – API (design)-first means you describe every API design in an iterative way that both humans and computers can understand – before you write any code. API consumption is part of the design process, and it’s important to remember that clients (in plural) will interact with the feature through an API, so you need to always keep everyone in mind and not focus too much on a specific client. Considering design first will also make it easier to understand all the dependencies in the task.
  • Document your APIs – API documentation is a must as it creates a contract between clients and developers. The documentation is critical to ensure that API consumption is effective and efficient. We want to be exact in the language and examples so the client gets maximum impact with minimum effort.
  • Automate your processes – Use tools like SwaggerHub to automate API documentation generation, style validation, API mocking, and versioning (a minimal example of exporting a spec for such tooling follows this list).
  • Make it easy to get started – Provide interactive documentation and sandboxes so that developers can try out API endpoints and get started building apps with your APIs right away. 
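As a small, hedged example of the automation point, assuming the FastAPI app sketched earlier (the `my_service.api` module name is hypothetical), a CI step could export the OpenAPI spec that documentation, linting, and mocking tools then consume:

```python
# Export the OpenAPI spec generated from the code so external tooling
# (e.g., SwaggerHub) can lint, mock, and version it automatically.
import json
from my_service.api import app  # hypothetical module exposing the FastAPI app

def export_openapi(path: str = "openapi.json") -> None:
    # FastAPI derives the OpenAPI document from the declared routes and models,
    # so the exported spec always matches the code.
    with open(path, "w") as f:
        json.dump(app.openapi(), f, indent=2)

if __name__ == "__main__":
    export_openapi()
```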

A lot has been said about going API-first, and there are many resources and best practices (for example, these articles from Auth0 and Swagger) that can help you through the transition. But going API-first doesn’t necessarily require refactoring your existing applications; it’s about embracing a different mindset. For us, it was, without a doubt, the right path to take: we see it in customer satisfaction and increased usage, in how we are scaling faster and more efficiently, and in how we are developing and deploying new capabilities faster for our customers.

Don’t forget to check out our careers page and join us!

Something is rotten in the holi-dates of models

Let’s get the obvious out of the way. First, ML models are built on the premise that the data observed in the past, on which we trained our models, accurately reflects production data. Second, “special” days like holidays such as Thanksgiving or, more specifically, the online shopping bonanza of the last decade have different patterns than the rest of the year. Obviously. Third, because we know about these peaks, we can prepare ourselves and our models for the accompanying mess of less accurate predictions and a lot more volume. Or can we?

To train or not to train?

Understanding your problem is half the solution, but as data scientists, it’s tempting to view these peaks as an issue that needs to be resolved by training our model to adapt to these times of the year and their fluctuations, or perhaps even going a step further and engineering a feature to address these specific events. While this may be the correct answer for some use cases and businesses, it will not always be the most efficient solution, or even an exclusive one.

There are a few holiday season model strategies and tactical mixes that can be used to ensure that model issues don’t impact business. 

Train 1: global

Assuming that this is a known fluctuation, theoretically you have enough time to ensure it is represented in your training data. The problem here is twofold: to accommodate this “noisy” data, the model will either settle on a suboptimal function that may underperform on normal days or end up with a function that fits but is significantly more complex.
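As a minimal sketch of the global route, assuming a tabular demand-forecasting setup with pandas and scikit-learn (the column names and holiday list are hypothetical), the peak behavior is folded into one model via calendar features:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical peak dates (e.g., Singles' Day, Thanksgiving, Black Friday).
HOLIDAYS = pd.to_datetime(["2022-11-11", "2022-11-24", "2022-11-25"])

def add_calendar_features(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["is_holiday_peak"] = df["date"].isin(HOLIDAYS).astype(int)
    df["day_of_week"] = df["date"].dt.dayofweek
    return df

def train_global(df: pd.DataFrame) -> GradientBoostingRegressor:
    # One model for the whole year; the calendar features carry the "noisy" peaks.
    df = add_calendar_features(df)
    features = df[["is_holiday_peak", "day_of_week", "price", "traffic"]]
    return GradientBoostingRegressor().fit(features, df["demand"])
```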

Train 2: custom

Another training route you can take is custom model training: looking at a particular fluctuation (or fluctuations) and building a custom model for those specific time frames only. The potential downsides here are that it may be challenging to gather enough data to train a model at such a specific resolution, and that if the real world changes (case in point: COVID), the historical data you trained on can’t predict the real world the next time around.
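A hedged sketch of the custom route, under the same hypothetical columns as above: train a dedicated model only on historical peak windows and route to it at inference time.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical historical peak windows used to train the dedicated model.
PEAK_WINDOWS = [("2021-11-20", "2021-11-30"), ("2022-11-20", "2022-11-30")]

def in_peak_window(ts: pd.Timestamp) -> bool:
    return any(pd.Timestamp(start) <= ts <= pd.Timestamp(end) for start, end in PEAK_WINDOWS)

def train_peak_model(df: pd.DataFrame) -> GradientBoostingRegressor:
    peak = df[df["date"].apply(in_peak_window)]  # beware: this slice may be very small
    return GradientBoostingRegressor().fit(peak[["price", "traffic"]], peak["demand"])

def predict(ts, features, global_model, peak_model):
    # Route to the dedicated model only inside known peak windows.
    model = peak_model if in_peak_window(pd.Timestamp(ts)) else global_model
    return model.predict(features)
```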

Models without safety nets never to heaven go 

Training aside, other tactics can be applied here. The key to them all is that they do not attempt to learn these events but accept them as inherently unpredictable anomalies. We know that they will manifest, but not precisely how or to what extent. Here, our goal will simply be to detect these anomalies and trigger manual action via the safety nets we set up.

Don’t train 1: monitor

Model monitoring is a pretty obvious must here, and it should be in place regardless of any specific event or day of the year. That said, the configurations that are realistic for “normal” days tend to go off the rails and blast everyone with alerts galore when they have to contend with events like Chinese Singles’ Day. So if alert fatigue is a no-go, then observability is the answer – we want to relax the known-issue monitoring of our models to a degree. This can be achieved, for example, by increasing segment resolution, changing feature sensitivity thresholds, or even ignoring specific time-coupled features altogether for a given period.
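Here is a minimal sketch of that idea in Python (not Superwise’s monitoring engine; the thresholds, segments, and feature names are hypothetical): during known peak windows, the configuration tolerates more drift, segments more finely, and mutes a time-coupled feature.

```python
from dataclasses import dataclass, field

@dataclass
class MonitoringConfig:
    drift_threshold: float = 0.1
    segment_by: list = field(default_factory=lambda: ["country"])
    muted_features: set = field(default_factory=set)

def config_for(date, base: MonitoringConfig, peak_dates) -> MonitoringConfig:
    if date not in peak_dates:
        return base
    return MonitoringConfig(
        drift_threshold=base.drift_threshold * 3,             # tolerate the expected shift
        segment_by=base.segment_by + ["device", "campaign"],  # increase segment resolution
        muted_features={"days_since_last_purchase"},          # ignore a time-coupled feature
    )
```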

Don’t train 2: go back to building rules

At its core, we use models because they provide a better representation of the real world, detect complex patterns hidden from the human eye, and come up with predictions much faster than we, as mere humans, would ever be able to. Unfortunately, the cost of all that is their black-box nature. Given that models are essentially the ones driving the business on “special” days, heuristics are not necessarily a dirty word. Yes, they are slower to reach optimal answers, but they are simpler to understand, and more importantly, they are far more flexible: they can be adjusted immediately to reflect specific domain knowledge you have about your business.
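For illustration, a hedged sketch of a rule-based fallback for peak days, using a hypothetical transaction-approval flow (the fields and thresholds are made-up domain knowledge, not recommendations):

```python
def heuristic_decision(txn: dict) -> str:
    # Transparent, immediately adjustable rules to use instead of (or alongside)
    # the model's score on "special" days.
    if txn["amount"] > 5000 and txn["is_new_customer"]:
        return "escalate_to_analyst"
    if txn["country"] != txn["card_country"]:
        return "step_up_authentication"
    return "approve"

print(heuristic_decision({"amount": 8200, "is_new_customer": True,
                          "country": "US", "card_country": "US"}))  # escalate_to_analyst
```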

Don’t train 3: it’s time for humans in the loop

All of the tactics listed up till now also call for humans in the loop, but as the next step in an escalation process. Here we’re going to discuss humans in the loop as a tactic in itself and cases where you may decide to double down on it.

Model confidence:

Even on normal days, certain cases will be escalated to analysts when prediction confidence levels are low enough to trigger an escalation. On non-normal days, two additional questions need to be considered. First, do you want to maintain the same criteria but with different thresholds? Second, do you now need to trigger escalations on specific criteria mixes that fall outside the norm or cluster in a different pattern than usual?
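A minimal sketch of both questions in Python; the thresholds and feature names are hypothetical:

```python
def should_escalate(confidence: float, is_peak_day: bool) -> bool:
    # Question one: same criterion, stricter threshold on non-normal days.
    threshold = 0.85 if is_peak_day else 0.70
    return confidence < threshold

def unusual_mix(features: dict, is_peak_day: bool) -> bool:
    # Question two: escalate on criteria mixes that are fine in isolation
    # but suspicious in combination during a peak event.
    return is_peak_day and features["discount_rate"] > 0.5 and features["is_new_customer"]
```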

Industry and/ or use case compliance sensitivity:

This is pretty clear-cut, at least in the current regulatory space in which AI operates (in the future, AI regulation will become more complex and nuanced as regulations advance). Some industries and use cases, such as banking and lending vs. AdTech and click-through-rate optimization, have more tangible compliance requirements, and even fines, associated with them. Because of their closer association with regulators, organizations in these industries should consider instituting additional lines of defense to self-audit ahead of any potential regulatory examination.

Adverse media and adversarial attacks:

Up until now, we’ve mainly talked about peak events in the context of holidays and e-commerce fiestas, so let’s take a sharp turn and talk about elections, social media, fake news, and influencing public opinion. That string of nouns alone illustrates a case where the sensitivity to adversarial attacks, and the potential adverse media fallout that can result, justifies humans in the loop.

To thine own self be true 

Your use case, industry, and internal policies are going to greatly influence what tactical mix you set up this holiday season, and in all honesty, there is no one set truth as to what you should choose from this list or even beyond it. What is true? One tactic alone is unlikely to protect your business during peak events, and strategically it’s never a good idea to put all your eggs in one basket. That said (and food for thought), holidays, Black Friday, Chinese Singles’ Day, and so forth are, to an extent, easy, because we know about them. It’s the peaks and dips, seasonality, and anomalies that you don’t know about that are the real challenge of model observability.

Want to see the Superwise way to observe model seasonality? Request a demo

Scaling model observability with Superwise & New Relic

Let’s skip the obvious: if you’re reading this, it’s a safe bet that you already know that ML monitoring is a must; data integrity, model drift, performance degradation, etc., are already the basic standard of any MLOps monitoring tool. But as any ML practitioner will attest, it’s one thing to monitor a single machine learning model and another altogether to achieve automated model observability for dozens of live models, all with immediate impact on daily predictions and business operations. Enter Superwise: high-scale model observability done right. What does that mean? Zeroing in on issues that lead to action, without alert fatigue and false alarms. The platform comes with built-in KPIs, automated issue detection and insights, and a self-service monitoring engine to deliver immediate value without sacrificing customization down the road.

Model observability is all about context, so it’s only natural for us to integrate our model KPIs and model insights into New Relic to take observability higher, further, faster. With the integration, Superwise and New Relic users will be able to explore model incidents within their New Relic workflow, as well as view Superwise’s model KPIs.

What do you get?

The Superwise model observability dashboard gives you out-of-the-box information regarding your active models, their activity status, drift levels, and any open incidents detected for specific time intervals or filters. But we don’t stop there; you can configure any custom metric and incident you need to monitor for your specific use cases and monitor them in New Relic.

The basics

  • The model activity overview gives you a quick view of your active models, their activity (predictions) over time, and the total number of predictions during the filtered timeframe.
  • With drift detection and the model input drift chart, users can identify what models are drifting and may require retraining.
  • Using incident widgets, users can easily see how many models currently have open incidents (violations of any monitoring policy configured), how incidents are being distributed among the different models, and drill down into the model incident details. 

The custom

Superwise’s flexible monitoring policy builder lets you configure various model monitoring policies and send detected incidents into one or more downstream channels including New Relic, PagerDuty, Slack, Email, and more. You have full control over what policies are sent to which channels to ensure that the right team gets the right alert at the right time. 
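To illustrate the policy-to-channel routing idea only, here is a purely illustrative Python sketch; it is not Superwise’s monitoring engine or New Relic’s API, and the policy types, channel names, and sender callables are hypothetical:

```python
# Hypothetical mapping from policy type to downstream channels.
ROUTING = {
    "drift": ["new_relic", "slack:#ml-alerts"],
    "data_integrity": ["pagerduty"],
    "performance": ["new_relic", "email:ml-leads@example.com"],
}

def route_incident(incident: dict, senders: dict) -> None:
    # `senders` maps each channel name to a callable wrapping one integration,
    # so the right team gets the right alert through the right channel.
    for channel in ROUTING.get(incident["policy_type"], []):
        senders[channel](incident)
```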

What do you need to do?

It takes only a few minutes to integrate Superwise and New Relic so you can access our model KPIs and incidents in New Relic One. Check out the integration documentation and we’ll walk you through it.  

Don’t have Superwise yet?

We can fix that. Request a demo

Stories from the ML trenches

What led us to create the #MLTalks initiative

Back in February when we were on our 3rd lockdown, my team and I regrouped to think about our next steps. As we are in a fortunate position to meet with dozens of leading DS teams every week to brainstorm and discuss their challenges with scaling ML, we realized there was a need to give a structure to these voices and to create a repository of best practices and “stories from the field.”

At Superwise, we see ourselves as a team of engineers and data scientists who bear the scars of putting ML models into the real world and learning from our mistakes. These scars have led us to create a solution that automates the assurance of ML models to help others scale their use of AI in a way that is safer and easier. The #MLTalks initiative is a continuation of those efforts, and while the data science and MLOps community is a very vocal one, with a wealth of information out there in the shape of blogs, Gits, and Slack channels, there is still a real need to consolidate the experience from the trenches: the real stories of the women and men who have been awake at 3 AM on a Saturday trying to understand what is really happening with their models.

So far we have interviewed 5 (and counting!) rockstars in the ML world, and have learned something new from every conversation.

Here are some of our key takeaways:

1 – It takes a village

Scaling AI is about making sure that everyone is on board with it. Each and every one of our interviewees mentioned the need to facilitate adoption by being transparent with downstream users. As Maya Bercovitch, Director of Engineering & Data Science at Anaplan, notes: “we create a glass-box, not a black-box”. Clearly, scaling AI is about making it accessible to all stakeholders. What’s more, in our discussion with Matt Shump, VP Data at ChowNow, he notes: “I have not met a sales leader or a marketing leader who’s willing for me to black-box automate a lead scoring model for them. They want to know what’s going on underneath the hood.” From data science and data engineering to operational users, each stakeholder in the organization needs to be aligned on how the models are doing to facilitate adoption and ROI.

2 – Visibility is paramount

In order to avoid delays and errors, the ability to understand how the data fluctuates and how the model behaves is paramount. Yet, in-house tools or solutions that are not dedicated to machine learning tasks often fail to deliver the right results – especially as the number of models grows. As Maxim Khalilov, Head of R&D at Glovo, notes: “The nearest priority in terms of time is the monitoring. Because we don’t have enough visibility into the technical characteristics of the models, but primarily on what happens with the data, how the data flows through our pipeline, and most importantly, how our model behaves, and how it reacts and changes in the data.”

3 – MLOps and automation are at the top of everyone’s mind

When asked about what was at the top of her mind, Nufar Gaspar, Head of Operational AI, Product & Strategy at Intel Corporation, answers: “A lot of MLOps, as everyone. […] The ability to have one MLOps across different verticals and different organizations and to ease the access to MLOps for teams without high proficiency in machine learning is key.”

One of the top best practices that Dino Bernicchi, Head of Data Science at Homechoice, notes is: “Develop your own AutoML pipelines and systems to deploy and manage solutions in production. This will allow you to rapidly test and deploy models.”

I hope you enjoy reading these as much as we enjoyed conducting them. I want to thank all those who participated. We are only just getting started. So please feel free to contact me if you want to take part in the ML Talks, recommend a co-worker, or share some questions that you would want us to investigate!

Thinking about building your own ML monitoring solution?

“We already have one!” That’s the first sentence most of our customers said when we met to discuss AI assurance solutions. Most AI-savvy organizations today have some form of monitoring. Yet, as they scale their activities, they find themselves at a crossroads: should they invest more in their homegrown solution or receive support from vendor solutions? And if they do choose to invest more, for how long will their DIY solution be “good enough”?

In this blog, we explore how far homegrown solutions can take you and what you need to think about when planning to scale your use of machine learning.

DIY tools are (only) a start when monitoring your AI

Data science teams spend months researching and training their best models. The production phase and the necessary MLOps/monitoring phase sometimes only come as an afterthought. In this context, many data science and engineering teams develop initial AI monitoring tools in-house. But while DIY tools may be a decent approach for businesses with a contained use of AI, when the time comes to expand the use of modeling, homegrown tools fall short of supporting the diversity and complexity of the models and the data used. Here is a shortlist of some of the lessons we have seen customers learn as they scale their AI.

As organizations grow, the number of models and use cases grows

Guess what? Homegrown solutions don’t scale in sync with the models and require more and more maintenance, tweaks, and attention… This is especially true as organizations adopt AI for various use cases: from marketing to core activities embedded in their product.

Model monitoring is not a one-off task. As organizations adopt new models, they need to create a new monitoring paradigm that caters to the different types of data – structured, text, image, video, etc. – all of which require different measures and techniques to analyze the incoming data. In other words, what works for a classification model probably won’t work for a regression or clustering one, and a new set of tools will need to be developed. And even for specific structured use cases, different features of the model require different KPIs to analyze the health of the process: numerical, categorical, time-based, and so on.
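To make the per-feature-type point concrete, here is a minimal sketch (one common approach, not the only one, and not Superwise’s implementation): numerical features can be compared with a Population Stability Index over bins, while categorical features need a frequency-based measure; the 10-bin choice is illustrative.

```python
import numpy as np
import pandas as pd

def psi_numerical(expected: pd.Series, actual: pd.Series, bins: int = 10) -> float:
    # Population Stability Index over bins derived from the reference (training) data.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def categorical_shift(expected: pd.Series, actual: pd.Series) -> float:
    # Total variation distance between category frequency distributions.
    e = expected.value_counts(normalize=True)
    a = actual.value_counts(normalize=True)
    cats = e.index.union(a.index)
    return float((e.reindex(cats, fill_value=0) - a.reindex(cats, fill_value=0)).abs().sum() / 2)
```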

Regardless of the sophistication of the models, monitoring is an ongoing task that requires 25%-40% of a data science team’s time. The inefficiency and frustration that come with this heavy investment in homegrown monitoring are among the first reasons that push organizations to turn to vendor solutions, along with the fact that they would much rather their teams focus on creating models that have an impact on the business.

You don’t know what you don’t know

This is perhaps the most critical point. Organizations that have already engineered a solution that computes specific KPIs for their models find themselves struggling to proactively understand when concept drift happens or when biases start to develop. More often than not, homegrown solutions tend to look at things that are already known and issues that were already anticipated, and thus realize too late when events occur beyond that scope. This is often the point where organizations recognize the limitations of their own solution, however sophisticated they engineered it to be, as it fails to bring value to the whole ML process.

In environments where data is extremely dynamic, assuring the health of models in production is about leveraging the expertise and best practices to be proactive: be alerted on issues that pertain to the health of the models, gain insights, and diagnose issues promptly.

Multiple stakeholders

As mentioned in a previous post, scaling AI poses the question of who owns it when it’s in production: data science teams? data engineering? business analysts? hybrid creatures? Ultimately, as AI use grows, the stakeholders involved also change, regardless of the number of models. Think about the fraud detection and cybersecurity space where analysts are the predominant users of the AI predictions and need to make sure the models are always tuned to a very dynamic data landscape.

For a monitoring solution to be useful, all the stakeholders involved need to derive insights and an understanding of the health of the predictions:

  • Data science teams need to understand if, when, and how they should retrain the model, and the cases in which the model doesn’t perform well.
  • Business analysts want to know what drives decisions and to get alerted as soon as there is high uncertainty regarding the model’s decision quality.
  • Data engineers need to know about the quality of the data streaming through the system, and whether it has outliers, missing values, or strange data distributions (a minimal sketch of such checks follows this list).
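As a small, hedged example of the data-engineering view, here is a sketch of basic integrity checks on a batch of incoming prediction data; the column names and valid ranges are hypothetical.

```python
import pandas as pd

def integrity_report(batch: pd.DataFrame) -> dict:
    # Basic per-batch data-quality signals: missing values, out-of-range values,
    # and duplicate prediction records.
    return {
        "missing_rate": batch.isna().mean().to_dict(),
        "age_out_of_range": int(((batch["age"] < 0) | (batch["age"] > 120)).sum()),
        "duplicate_ids": int(batch["prediction_id"].duplicated().sum()),
    }
```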

To do so, organizations need to create and maintain a view of the ML predictions that everyone involved can access and extract value from, without creating unnecessary noise. Beyond determining whether there are sufficient resources, there is also a matter of skill set, as the stakeholders often have different perceptions that need to be bridged under one enterprise-wide view. Ultimately, the complexity of these tasks is what drives AI practitioners who are scaling their activities to select a best-of-breed solution for assuring their models in production.

The amount of data grows exponentially!

In industries such as AdTech, where models process TBs of data each day, the velocity of the data makes it a challenge to obtain a clear picture. Do you have the time and tools necessary to continuously extract, compare, and analyze statistical metrics for your ML process, without impacting your core activities?

Scaling your AI? Here’s what you need to ask yourself

Here’s a quick list of considerations you may want to think over as you consider the best way to assure the health of your models in production. At the end of the day, it boils down to a question of resource management and efficiency: how much time should you invest in developing a set of tools to monitor your models in production today? And what will it cost you tomorrow as you add more and more models and use cases?

  • How much will a homegrown solution cost?
  • How efficient will this be in the long run?
  • Is it really what my team needs to focus on, or is it better to buy and use such a capability?
  • How can I foster an enterprise-wide understanding of the models’ health?
  • How can I make my monitoring solution a proactive one?

Don’t play around with your growth

At Superwise, we specialize in accompanying our customers as they transition from homegrown solutions – or even nothing! – to a rich model observability solution that helps them achieve business impact and grow their AI practice, enabling them to focus on what they do best: developing and deploying models that help their business grow.

Facing the challenges of day 2 with your models in production

AI is everywhere.

Businesses from all verticals are promptly adopting AI algorithms to automate some of their most important decisions: from approving transactions to diagnosing cancer to granting credit and so much more.

As the AI race booms, more organizations are stepping into “Day 2”, the day their models move out of the realms of research and training and into production. And this is when the picture starts to crack.

Once they move to production, maintaining the models is a whole new story: they become subject to drift, develop biases, or simply suffer from low-quality data. On “Day 2”, no one other than the data scientists who created the model really trusts it or understands how well it’s doing. And sometimes, even they feel they’ve lost control once it’s in production!

Operating ML models is essentially operating in the dark, without clear standards for what it takes to ensure models make the impact they were designed for: what metrics should you look at? At what granularity? And most importantly, with what resources, when your team needs to focus on creating future models and not troubleshooting the existing ones?

And this is the great paradox of AI in production: what it can do is great, but if the natural degradation of the models over time cannot be controlled, we remain blind to biases, drift, and compliance risks, leaving us with no way to really achieve the full business value of machine learning. In other words, we’re headed for trouble.

So what’s the deal? How can we scale AI efforts while fostering trust and without losing sight?

Mind the Gaps of Day 2!

The way we see it, there are two main gaps today that prevent organizations from stepping into “Day 2” with confidence:

Lack of Real-World Assurance – There is a lack of best practices and capabilities to help assure the health of models in production. As we evolve into a more mature use of AI, practitioners are starting to look at monitoring more seriously, but the field and the literature are still in their infancy. Data scientists across all verticals reach out to us as they find themselves turning away from homegrown solutions that lack an all-encompassing view and often drain the resources of teams that are already spread pretty thin. They need to find solutions that will enable them to get the right insights at the right time to help them become more efficient. They need to know if there is an issue before the business is impacted, when and whether to retrain the model, and how to decide what data should be used to do so. And all this should be accomplished without creating unnecessary noise.

Lack of Ownership – Models are created by data scientists, but their results/predictions are used by the operational teams.

These users are the ones most at risk of being impacted by wrong predictions. Take marketing analysts who use machine learning to predict users’ lifetime value, for example: these teams are measured by the success of the activities that depend on AI predictions… and when their activities don’t yield the expected results, they are the ones losing out – and so is the whole business.

Operational teams need to become independent and gain visibility into what makes their models tick. More than that, they should be able to put the models to work for them and get key insights into their business: are there biases? Are there missed sub-populations?

For our users, the ability to gain independence and access information about the health of the models that matter for their business is crucial. More than that, as they start understanding that the models should work for them, the models become their favorite resource!

AI Assurance as the necessary leap to success

At Superwise, we get it. With years of experience in building AI solutions and supporting organizations through their digitalization initiatives, we deeply understand the benefits and the blindspots of AI. We know that performant AI models can empower decision-makers, giving them the confidence to run free with their models, innovate and drive efficiency.

But as incredibly powerful as AI is, it requires a leap – one that is both technological and organizational: it needs Assurance.

AI Assurance gives you the visibility and control needed to create trust and confidence and enable you to scale the use of AI across the enterprise. With AI Assurance, you’ll be prepared for Day 2, when your models meet real life.

What every organization wants is to be in control of its models, even once they’ve been let out into the real world. AI Assurance not only delivers the practical tools to make this possible, but it also empowers you, the user, to use your AI models to their fullest extent with confidence. And this is what assurance is all about–providing the right metrics and the right insights to enable real-world success and independence with AI models.

To support this leap, we deliver an AI for AI solution. We learn from your models what their normal behavior can and should be and help you face the challenges of bias and concept drifts.

To illustrate, we recently helped one of our customers reduce their time to detect and fix concept drifts by 95%!

It’s not only the wealth of out-of-the-box metrics that make this possible – it’s our ability to give you the grid from which you can understand your models and get the tools to gain independence and control. Our solution grants you the right insights at the right time so you can know how your models are doing, get alerted when they go south, and take the right corrective action before there’s any business impact.

Want to take the leap? Schedule a demo today