Implementing Powerful ML Models with MLOps for Delivery Hero 

Client

Delivery Hero

Tech stack

Google Cloud

Solution

MLOps Platform

Service

AI + Machine Learning

Delivery Hero is a leading multinational online food-delivery service based in Berlin, Germany. The company operates in 70+ countries and partners with 1,500,000+ restaurants. Committed to continuously improving customer satisfaction, Delivery Hero worked with Datatonic to enable the delivery of impactful Machine Learning (ML) use cases, including Customer Lifetime Value (CLV) prediction. To support the productionisation of these ML models and enable cross-team collaboration, Datatonic implemented a scalable MLOps platform using Google Cloud’s Vertex AI.

Our impact

  • Implemented a scalable MLOps platform on Vertex AI.
  • Seamlessly leveraged MLOps best practices for batch prediction.
  • Facilitated increased productivity and easier team collaboration by providing a common tech stack and experimentation environment for Data Scientists.

 

The challenge

Delivery Hero has very strong Data Science capabilities and many potential Machine Learning use cases that could generate business value. However, the organisation's structure and size mean that many teams work on different use cases, and productionising each of these requires a unique approach. In addition, teams vary in size, skill sets, and maturity levels, making it difficult to scale AI and ML across the organisation. 

Without a solid MLOps platform, machine learning models quickly degrade and require manual retraining, which is time-consuming and difficult to do correctly. This often leads to model failure, meaning the models can no longer be used. Furthermore, without MLOps best practices in place, Data Science teams had to start each ML project from scratch, investing too much time in MLOps implementation and maintenance and leaving less time for other projects and new ideas.

“Since not all of the Data Science teams have ML engineers, some data scientists spent a lot of time implementing their MLOps routine without having a good model to follow.” – Thomas Nguyen, Staff Data Engineer, Delivery Hero

 

Our solution

Delivery Hero needed a stable end-to-end MLOps platform; this would enable them to develop Machine Learning models more quickly and efficiently, and to spend less time and fewer resources monitoring model degradation and failure. 

Datatonic joined forces with Delivery Hero to develop their desired Machine Learning models and implement their MLOps platform. This involved:

  • Upskilling data scientists through Vertex AI training sessions covering the most important aspects of MLOps for batch serving, giving them a better understanding of the work they would be doing. 
  • Providing Delivery Hero with tooling, as well as access to a clean, generalisable, production-ready repository structure by tailoring our MLOps Turbo Templates to meet their requirements.
  • Using these templates to optimise each model much more easily and spend less time on retraining. 
  • Setting up Vertex AI Pipelines so that Delivery Hero can automate, monitor, and govern their ML systems by orchestrating their ML workflow in a serverless manner.
  • Building a use case promotion workflow for ML model productionisation.
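To illustrate the shape of the batch workflows orchestrated above, the sketch below chains the typical stages (data preparation, training, batch prediction) as plain Python functions. On Vertex AI Pipelines each stage would be a containerised pipeline component rather than a local function; all function names, fields, and the toy "model" here are hypothetical, not Delivery Hero's actual implementation.

```python
# Minimal sketch of a batch ML workflow. On Vertex AI Pipelines each step
# below would be a pipeline component executed serverlessly; here the steps
# are modelled as plain functions so the overall flow is easy to follow.
# All names and logic are illustrative assumptions, not the real pipeline.

def prepare_data(raw_rows):
    # Filter out records that are missing the feature we need.
    return [r for r in raw_rows if r.get("orders") is not None]

def train_model(rows):
    # Toy "model": average order count used as a CLV baseline.
    avg = sum(r["orders"] for r in rows) / len(rows)
    return {"avg_orders": avg}

def batch_predict(model, rows):
    # Score every customer in one batch run against the trained model.
    return [
        {"id": r["id"], "clv_score": r["orders"] / model["avg_orders"]}
        for r in rows
    ]

def run_pipeline(raw_rows):
    # The orchestrated DAG: prepare -> train -> batch predict.
    rows = prepare_data(raw_rows)
    model = train_model(rows)
    return batch_predict(model, rows)
```

The value of expressing the workflow as an explicit pipeline is that each step can be retrained, monitored, and versioned independently, which is what makes automated retraining and the use case promotion workflow practical.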

This new MLOps platform offers Delivery Hero a way to accelerate Machine Learning models into production, reduce manual effort, and increase automation, freeing up their data scientists’ time to focus on Data Science. Furthermore, MLOps helps models, such as those for predicting CLV and Customer Churn, to remain more accurate and require less maintenance, reducing costs for Delivery Hero.