Eliminating In-Store Packaging with Image Recognition for LUSH

[Image: Lush bath bomb]

Client

LUSH

Tech stack

Google Cloud

Solution

Computer vision

Service

AI + Machine Learning

In their bid to become packaging-free, Lush is developing Lush Lens, a mobile app feature that allows customers to view product information (such as name, ingredients, and allergens) simply by taking a picture, completely eliminating the need for packaging and labels. To do this effectively, however, Lush needed to improve the accuracy of their image classification model and create a cross-platform solution available on both Android and iOS.

Our impact

  • Improved model performance from a 45% to a 98% F1-score
  • Developed a cross-platform solution to serve both Android & iOS
  • Reduced overhead with a fully automated ML pipeline on Google Cloud and a framework that enables Lush to re-train the model as new product images land on Cloud Storage

 

The challenge

Lush wanted to improve the performance of the Lush Lens app with a reliable, cross-platform, mobile-friendly, highly performant product identification model built specifically for their catalogue. Despite having no in-house Machine Learning expertise, Lush had successfully built and deployed a CoreML model for iOS trained on more than 400 products. Unfortunately, this initial model did not meet their requirements: it could not be served on Android devices, and its performance was neither reliable nor measured.

Furthermore, data quality and quantity had not been reviewed, and there was no automated solution for data acquisition, data augmentation, or ML model creation, training, and serving.

“We’re really pleased to be working with industry experts on Machine Learning at Datatonic, helping us with our ongoing initiative to remove packaging from our cosmetics and offering our customers a useful alternative that elegantly utilises modern technology on a device they have in their pocket.” – Adam Goswell, Head of Technology R&D, LUSH

 

The solution

At Datatonic, we have a wealth of experience developing ML pipelines in the retail industry and place a strong emphasis on sharing our knowledge with the teams we work with. Lush wanted to leverage our Google Cloud expertise and our focus on helping customers fully understand the subject at hand, so they could adopt ML best practices and keep developing the model in-house.

To help Lush achieve their goals, the Datatonic team developed an end-to-end, fully automated model training and serving pipeline orchestrated with Google Cloud Composer during a five-week project.

This framework enables the Lush team to re-train the model on Cloud ML Engine as new product images land on Cloud Storage, monitor the new model's performance on key metrics reported in a BigQuery evaluation table, and access the newly created TFLite model in a serving bucket.
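As an illustration only, the sketch below shows what such a Cloud Composer (Airflow) DAG might look like. The bucket, dataset, trainer, and job names are placeholders rather than Lush's actual configuration, and the training step is submitted via the gcloud CLI for brevity; it simply traces the shape of the flow described above: submit a training job, load evaluation metrics into BigQuery, and publish the exported TFLite model to a serving bucket.

```python
"""Illustrative Cloud Composer (Airflow) DAG: train on new product images,
load evaluation metrics into BigQuery, publish the TFLite model to a
serving bucket. All resource names below are placeholders."""
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="lens_retraining",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",  # could instead be triggered when new images land
    catchup=False,
) as dag:
    # Submit a managed training job (Cloud ML Engine / AI Platform).
    train = BashOperator(
        task_id="train_model",
        bash_command=(
            "gcloud ai-platform jobs submit training lens_{{ ds_nodash }} "
            "--region=europe-west1 "
            "--module-name=trainer.task --package-path=./trainer "
            "--runtime-version=2.1 --python-version=3.7 "
            "-- --data-dir=gs://example-product-images/"
        ),
    )

    # Load the metrics file written by the trainer into a BigQuery evaluation table.
    load_metrics = BashOperator(
        task_id="load_evaluation_metrics",
        bash_command=(
            "bq load --source_format=NEWLINE_DELIMITED_JSON "
            "ml_monitoring.model_evaluation "
            "gs://example-artifacts/eval/{{ ds_nodash }}/metrics.json"
        ),
    )

    # Publish the exported TFLite model to the serving bucket used by the apps.
    publish_model = BashOperator(
        task_id="publish_tflite_model",
        bash_command=(
            "gsutil cp gs://example-artifacts/export/{{ ds_nodash }}/model.tflite "
            "gs://example-serving-bucket/models/latest/model.tflite"
        ),
    )

    train >> load_metrics >> publish_model
```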

The key component of the solution is data quality: the Lush team now understands the range of data the model needs to handle the real-world scenario of pictures taken in store, and notebooks have been provided for them to visualise colour distributions and other important data characteristics so as to understand the predictive power of new data.
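As a minimal example of the kind of exploration those notebooks support, the snippet below plots per-channel colour histograms for a single product image; the function name and file path are placeholders, not the actual notebook code.

```python
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image


def plot_colour_distribution(image_path: str) -> None:
    """Plot per-channel colour histograms for one product image."""
    pixels = np.asarray(Image.open(image_path).convert("RGB"))
    for channel, colour in enumerate(["red", "green", "blue"]):
        counts, edges = np.histogram(pixels[..., channel], bins=64, range=(0, 256))
        plt.plot(edges[:-1], counts, color=colour, label=colour)
    plt.xlabel("Pixel intensity")
    plt.ylabel("Pixel count")
    plt.title("Colour distribution")
    plt.legend()
    plt.show()


plot_colour_distribution("images/bath_bomb_001.jpg")  # placeholder path
```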

A more powerful neural network architecture, combined with this new data acquisition process, drove the performance boost from a 45% F1-score with the CoreML model to a 98% F1-score with the bespoke TensorFlow Keras architecture. The Lush team has successfully integrated the TFLite model into both the iOS and Android apps, ready to launch with the new app release.
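For context, exporting a trained Keras model to TFLite for on-device inference follows the standard TensorFlow conversion pattern sketched below; the model path is a placeholder and the optional quantisation line is an assumption, not necessarily part of Lush's deployment.

```python
import tensorflow as tf

# Load the trained Keras classifier (placeholder path).
model = tf.keras.models.load_model("lens_classifier.h5")

# Convert to TensorFlow Lite for on-device inference on Android and iOS.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional: shrink the model for mobile
tflite_model = converter.convert()

# Write the converted model so it can be uploaded to the serving bucket.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```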