Detecting Consumer Emotion with ML for Realeyes



Tech stack

Google Cloud


Computer vision


AI + Machine Learning

Realeyes is a world-leading AI technology company that deploys state-of-the-art Emotion AI to understand and classify consumers’ emotions and attention. Continuous improvement of their AI technology, including optimising model serving for memory usage and latency and reducing model training times and associated costs, is a must for the business.

Datatonic helped Realeyes enhance their R&D capabilities following MLOps best practices: the AI automation methodology. Successful implementation improved Realeyes’ model performance (e.g. lower latency and memory usage) as well as cost-efficiency in developing, running and updating models. For this purpose, Datatonic implemented two of Realeyes’ Machine Learning pipelines on Kubeflow, the cutting-edge machine learning platform on Google Cloud.

Our impact

  • Enabled scalable, fully customisable and efficient R&D at Realeyes, including a user-friendly interface
  • Improved AI model performance (1.5x lower latency, 4x less memory on mobile and browser, reduced power consumption) and costs (75% lower model re-training costs)
  • Implemented two ML processes as Kubeflow pipelines, the cutting-edge machine learning platform, following MLOps best practices: the AI automation methodology


The challenge

Computer vision tasks have some of the most expensive and longest training regimes, so efficient experimentation can make the difference in the cost, time, and number and success of R&D iterations.

Realeyes aimed to scale up the efforts of their internal Machine Learning team to continuously improve their models and solutions by optimising how they create code, run experiments, and track and share results. The main challenges the team experienced while developing their models were around scalability, experiment tracking, duplication, automation, and robustly conducting R&D as a team.

Following a joint brainstorming session, it was agreed that a new MLOps set-up built on Kubeflow could lead to more efficient experimentation. Kubeflow would provide automated training and hyperparameter tuning, versioning, and shared experimentation, overcoming the team’s main R&D issues.

These issues include a propensity for duplicated work and a high dependency on individual researchers, with limited tracking of and visibility into each researcher’s experiments. Additionally, the inherent scalability of Kubernetes would allow workload sizes to grow at training time, while quantisation within TensorFlow would allow for optimal serving in the browser and at the edge.
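The quantisation mentioned above can be sketched with TensorFlow’s post-training conversion path. This is a minimal illustration on a stand-in Keras model — the actual Realeyes models and conversion settings are not public:

```python
import tensorflow as tf

# Tiny stand-in model -- the real Realeyes models are not public.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Post-training dynamic-range quantisation: weights are stored as 8-bit
# integers, shrinking the serialised model roughly 4x and reducing
# memory use on CPU-bound targets such as mobile and browser.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # returns the model as bytes

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `.tflite` file can be served with the TensorFlow Lite interpreter on mobile; an analogous conversion via the TensorFlow.js converter targets the browser.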

“Datatonic joined one of our R&D teams for a duration of five weeks with an objective of upgrading two of Realeyes’ most critical Machine Learning pipelines. The transformations that happened in that period beat every expectation. As a result, our model training pipelines have been upgraded to the latest version of TensorFlow. They have been rewired around Kubeflow and can now scale much more easily in the cloud. A blueprint for all future model training pipelines has been firmly laid down in our organisation. All engineers and scientists have significantly increased their experience with the latest technologies, and the new pipelines are already making their work multiple times easier and more efficient. Working with the Datatonic team was easy, positive and felt very natural. This was a hugely beneficial partnership from many angles and we will gladly collaborate together again.” – Elnar Hajiyev, Co-founder and CTO, Realeyes


Our solution

The solution was built in the spirit of knowledge sharing and co-development with the mindset of enabling Realeyes to easily extend the solution to all its workflows. Five main phases were defined, each culminating in a show and tell session to upskill the Realeyes team throughout the delivery:

  1. Creation of a custom Kubeflow cluster on Google Kubernetes Engine that can be automatically re-deployed in existing or new GCP projects in 15 minutes
  2. Restructuring of the first model as a Kubeflow pipeline made of reusable components, following best practices and maintaining existing data transformations, model design and processes
  3. Restructuring of the second model as a Kubeflow pipeline reusing pre-built components for the first model when possible
  4. Introduction of a hyperparameter tuning step shared by both models using Katib, which allows for parallelised trials, easy experimentation and result sharing via the Kubeflow UI
  5. Creation and embedding of reusable quantisation components for both TensorFlow 1.x and 2.x to optimise the models for latency and size at serving time, in both TensorFlow Lite and TensorFlow.js formats

Last but not least, recommendations for future R&D on all deliverables were shared, to improve upon the delivered solutions and to highlight new Kubeflow features of interest that are soon to be released.
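For context, the Katib step in phase 4 is driven by a declarative Experiment resource. A hypothetical minimal spec — the metric, parameter and resource names here are illustrative, not Realeyes’ actual configuration — might look like:

```yaml
apiVersion: kubeflow.org/v1beta1
kind: Experiment
metadata:
  name: model-tuning            # hypothetical name
spec:
  objective:
    type: maximize
    objectiveMetricName: validation-accuracy
  algorithm:
    algorithmName: random       # Katib also supports grid, Bayesian, etc.
  parallelTrialCount: 3         # trials run in parallel on the cluster
  maxTrialCount: 12
  parameters:
    - name: learning_rate
      parameterType: double
      feasibleSpace:
        min: "0.0001"
        max: "0.01"
  # trialTemplate (the training job each trial launches) omitted here
```

Katib runs the trials in parallel and surfaces their results in the Kubeflow UI, matching the parallelised, shareable experimentation described in phase 4.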

“This collaboration was a true highlight. Throughout the whole project – despite the challenges – I felt the common desire to deliver something sincerely useful, and to transfer all the knowledge necessary to take the deliverables forward. Datatonic provided full support not only by executing the previously set roadmap, but also throughout a month-long post-handover session, making sure that we’re comfortable with each and every component of the project. By kicking off the project with a shared architecture brainstorming session, we made sure that the yet-to-be-delivered solution aligned with our way of thinking. The tech personnel had all the necessary knowledge to make such a project succeed; they are truly knowledgeable. The last highlight goes to the Technical Report we received: it is very detailed and a true go-to doc if we have questions in the future. 5/5, genuinely recommended.” – Denes Boros, Specialist AI Engineer, Realeyes