Dino Bernicchi

Tell us about yourself

I head up Data Science at a South African home-shopping and financial services group called Homechoice, driving the group’s Data Science and AI strategy. This includes developing end-to-end AI products and the frameworks which support feature engineering, model training, production, and measuring AI ROI.

From a young age, I always had a keen interest in computers and technology, and therefore studied mathematics, statistics, and finance at university. My first (real) job was at Finchoice, a fintech company in the Homechoice group, where I was responsible for modeling customer behavior and the loan book.

I then joined Eighty20 Consulting, a niche analytics consulting firm. Consulting is a great place to learn: new technologies and methodologies, product development, sales and pitching, how to manage stress and 80+ hour weeks, and how to build a strong professional network. From consulting, I moved to Homechoice and founded the group’s Data Science and AI Team.

Even now, I still spend 80+ hour weeks working, self-studying, building AI or video game prototypes, and pursuing side projects. One project I’m very humbled to be a part of is consulting on the development of Andrew Ng’s deeplearning.ai online courses. When I do have downtime, I love spending it with my girlfriend, family, and friends, and playing video games.

Congrats! You are amongst the happy few who have scaled their use of AI. Please share your main operational challenges and best practices

I believe to truly scale AI you need competency in data, tech, and business buy-in.

The main challenge we had was business buy-in. This included getting execs on the AI journey as well as getting execution teams to take action on the predictions of AI solutions. We tackled this by developing an “uplift calculation engine”: a system that manages A/B testing and the uplift calculation of AI solutions in production. This drove operational reporting for the execution teams and an AI income statement and AI ROI for the execs. When a business believes the numbers and sees the impact, there is a great rally behind the solutions and the team.
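As an illustration only (not Homechoice’s actual engine), a minimal sketch of the core calculation behind such a system: compare an outcome metric between customers actioned on the model’s predictions and a business-as-usual control group. The function name and the conversion-flag data below are hypothetical.

```python
import numpy as np

def uplift(treatment_outcomes, control_outcomes):
    """Estimate absolute and relative uplift of a treated group
    over a business-as-usual control group.

    Both arguments are 1-D arrays of a per-customer outcome metric
    (e.g. revenue, conversion flag) -- hypothetical inputs.
    """
    t_mean = np.mean(treatment_outcomes)
    c_mean = np.mean(control_outcomes)
    absolute = t_mean - c_mean
    relative = absolute / c_mean if c_mean != 0 else np.nan
    return {
        "treatment_mean": t_mean,
        "control_mean": c_mean,
        "absolute_uplift": absolute,
        "relative_uplift": relative,
    }

# Example: conversion flags for customers in each arm of the test
treated = np.array([1, 0, 1, 1, 0, 1])
control = np.array([0, 0, 1, 0, 1, 0])
print(uplift(treated, control))
```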

On best practices. Data: develop a feature store to allow quicker feature engineering and the sharing of features between models. This will save massive amounts of time when developing new solutions and adding newly engineered features to existing models.
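The interview does not detail Homechoice’s feature store, but the underlying idea is a shared registry of engineered features keyed by a common entity, so any model can assemble a training frame from features that already exist. A toy, in-memory sketch with made-up feature names:

```python
import pandas as pd

class SimpleFeatureStore:
    """Toy in-memory feature store: register engineered features once,
    then reuse them across models by joining on the entity key."""

    def __init__(self, entity_key: str):
        self.entity_key = entity_key
        self._features: dict[str, pd.DataFrame] = {}

    def register(self, name: str, df: pd.DataFrame):
        # Each feature group must carry the entity key plus its feature columns
        assert self.entity_key in df.columns
        self._features[name] = df

    def get_training_frame(self, feature_names: list[str]) -> pd.DataFrame:
        frames = [self._features[n] for n in feature_names]
        out = frames[0]
        for f in frames[1:]:
            out = out.merge(f, on=self.entity_key, how="left")
        return out

# Hypothetical usage: two feature groups shared between models
store = SimpleFeatureStore(entity_key="customer_id")
store.register("spend", pd.DataFrame({"customer_id": [1, 2], "avg_spend_90d": [350.0, 120.0]}))
store.register("tenure", pd.DataFrame({"customer_id": [1, 2], "months_on_book": [24, 6]}))
print(store.get_training_frame(["spend", "tenure"]))
```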

Skills and tech: don’t make rash hiring decisions. Hire people with experience and a keen interest in continuous learning. Develop your own AutoML pipelines and systems to deploy and manage solutions in production; this will allow you to rapidly test and deploy models.
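Again purely as a sketch of the idea rather than Homechoice’s pipeline: at its simplest, an AutoML loop searches several model families and hyperparameter grids and keeps the best cross-validated candidate. The models, grids, and toy dataset below are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Toy data standing in for a real training set
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A tiny "AutoML" loop: try several model families and hyperparameters,
# keep the best cross-validated candidate.
candidates = {
    "logreg": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    "gbm": (GradientBoostingClassifier(), {"n_estimators": [50, 100]}),
}

best_name, best_model, best_score = None, None, -1.0
for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=3, scoring="roc_auc")
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_name, best_model, best_score = name, search.best_estimator_, search.best_score_

print(best_name, best_score, best_model.score(X_test, y_test))
```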

In your organization, who is in charge of assuring that the models in production behave as they should?

The data science team. It is our mission to drive AI strategy and therefore we are ultimately responsible. We do not take the deploy and forget approach.

The “Full Stack Data Scientist” is an approach claiming that the Data Scientist is the owner of the model from development to maintenance in production. What’s your view on that?

I believe that the data science team should be responsible for maintenance in production.

But this might not mean that the data scientist who developed the solution is the one who will also maintain it. Depending on the size and skills of the team, and the nature of the solution, these roles may be split out: someone like an MLOps engineer could specialize in solution maintenance, especially when your company starts to have hundreds of solutions in production. When you have the correct systems in place (feature store, AutoML pipelines, deployment systems, uplift calculations), being a “Full Stack Data Scientist” is very feasible.

Once models are in production, what is the ongoing interaction between the live predictions and the business owner?

Very much solution-dependent. Some business owners only care about solution performance. Others like to consume the predictions for their own analytics and business insights. Our deployment process makes this data easily available for business owners.

How do you assure the performance of your models? Do you use a monitoring solution? If in-house, what are you doing?

We developed proprietary tools which monitor performance at four key stages:

1. Feature engineering: a component in the Feature Store monitors data quality and data drift (a minimal drift-check sketch follows this list).

2. Model training and retraining: a Model Repository system monitors model performance metrics and training degradation.

3. Model inference: the Uplift Calculation Engine monitors prediction performance versus expectations set during model training on blind, out-of-period validation sets.

4. Execution on predictions or recommendations: through live A/B testing, the Uplift Calculation Engine monitors the lift on key metrics over business-as-usual or other competing processes.
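These tools are proprietary, so purely as an illustration of the first stage: a common way to quantify data drift for a single feature is the Population Stability Index (PSI) between a training-time baseline and recent production data. The data, bin count, and 0.2 alert threshold below are rule-of-thumb assumptions, not Homechoice’s.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a baseline feature distribution (e.g. at training time)
    and its current production distribution. Higher means more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions; a small epsilon avoids log(0) and division by zero
    eps = 1e-6
    base_pct = base_counts / max(base_counts.sum(), 1) + eps
    curr_pct = curr_counts / max(curr_counts.sum(), 1) + eps
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)      # feature as seen at training time
current = rng.normal(0.3, 1.1, 10_000)   # same feature in production
psi = population_stability_index(baseline, current)
print(psi, "drift" if psi > 0.2 else "stable")  # 0.2 is a common rule-of-thumb cut-off
```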

2020 was about COVID-19 and drift. How did it affect your models in production? How did you manage to stay on top of things?

It was actually more manageable than we first anticipated. All of our solutions in production were designed and deployed with automatic retraining in mind. This means that each month, each solution retrains with the latest data and, subject to passing performance and quality checks, is updated into production. COVID-19 resulted in us being more hands-on with manual checks and adjusting some of the desired outcomes of solutions to align with the new business strategy.
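A hedged sketch of the “retrain monthly, promote only if checks pass” pattern described above; the metric names, thresholds, and function are illustrative assumptions, not Homechoice’s actual checks.

```python
from dataclasses import dataclass

@dataclass
class RetrainResult:
    auc: float      # validation metric of the freshly retrained model
    psi_max: float  # worst-case feature drift versus the training baseline

def should_promote(new: RetrainResult, current_auc: float,
                   max_degradation: float = 0.02, psi_limit: float = 0.2) -> bool:
    """Promote the monthly retrain into production only if it passes
    performance and quality checks (thresholds are illustrative)."""
    performance_ok = new.auc >= current_auc - max_degradation
    quality_ok = new.psi_max <= psi_limit
    return performance_ok and quality_ok

# Example monthly run
print(should_promote(RetrainResult(auc=0.81, psi_max=0.12), current_auc=0.82))  # True
print(should_promote(RetrainResult(auc=0.74, psi_max=0.12), current_auc=0.82))  # False
```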

Are there any compliance requirements that will be impacting your AI/ML efforts soon? If yes – which?

Yes, in South Africa we need to be POPI (Protection of Personal Information) compliant this year. This means the way we store, access, and process data will be regulated. The impact on us is minimal as we have already been thoughtful in the way we manage our customer data.

What are the main improvements and investment areas planned for your AI/ML activities in the next two years?

We have large models in production, in terms of the quantity of data and the number of features used when training the models. A bottleneck is preparing that data for model ingestion. We are therefore keen to invest in GPU-accelerated data pipeline solutions such as NVIDIA RAPIDS and BlazingSQL.
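For context, RAPIDS exposes a pandas-like DataFrame API (cuDF) that runs data preparation on the GPU, which is what makes it attractive for this kind of bottleneck. A minimal sketch, assuming a hypothetical transactions.parquet file and a machine with a CUDA-capable GPU and the RAPIDS libraries installed:

```python
import cudf  # requires a CUDA-capable GPU and the RAPIDS libraries

# Hypothetical input: a large table of customer transactions
transactions = cudf.read_parquet("transactions.parquet")

# Typical feature-preparation step, executed on the GPU:
# aggregate per-customer spend statistics for model ingestion.
features = (
    transactions
    .groupby("customer_id")["amount"]
    .agg(["sum", "mean", "count"])
    .rename(columns={"sum": "amount_sum", "mean": "amount_mean", "count": "txn_count"})
)

# Hand off to the training pipeline (or keep on GPU for cuML models)
print(features.head())
```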

As we keep hearing about more and more “AI fails”, what’s keeping you up at night?

Model fairness is an important one for us. We do not want our models to treat people unfairly because of their ethnicity or gender.

It is a very difficult problem because it is not always easy to tell whether there is model bias, especially when the gender or ethnicity of a customer is not known, which makes checking for bias far from straightforward. Even if you can get past this, there are several ways to define fairness, and you cannot satisfy every definition at once. I believe we have to choose a definition for each specific use case and then ensure that the model meets that requirement. At the end of the day, when our models automatically make decisions that materially impact our customers (e.g. granting or denying credit), we want to make sure that no group of customers is treated unfairly.
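As a concrete example of choosing one definition per use case, a common choice is demographic parity: the rate of favorable decisions (e.g. credit granted) should be similar across groups. The data and tolerance below are made up purely to illustrate the check; other definitions, such as equalized odds, would be tested differently.

```python
import pandas as pd

# Made-up decisions: 1 = credit granted, 0 = credit denied
df = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B"],
    "granted": [1,    1,   0,   1,   0,   0,   0],
})

# Demographic parity: compare approval rates across groups
rates = df.groupby("group")["granted"].mean()
parity_gap = rates.max() - rates.min()
print(rates.to_dict(), "gap:", round(parity_gap, 3))

# A simple policy check: flag the model if the gap exceeds a chosen tolerance
TOLERANCE = 0.1  # illustrative threshold, set per use case
print("fairness check passed" if parity_gap <= TOLERANCE else "review required")
```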

Are you working on any interesting AI side projects at the moment?

Yes! Two friends and I developed an AI facial animation app called Fakerface, which allows you to animate any face from just one photo. You can use your own images and videos or choose from a growing in-app demo gallery. For more info, please see fakerface.ai/press
