
ZSL – Identifying Species in Camera Trap Images with Cloud AutoML Vision


This story about how AutoML can help with wildlife preservation was also featured on this Google Cloud blog.

Over the last 40 years, more than half of the world’s wildlife has disappeared, largely due to habitat loss, over-exploitation, and man-made climate change. Such extensive loss of biodiversity is both tragic and reckless, with potentially critical implications for us as a human species. We have an urgent responsibility to fight this trend.

Effective action relies upon innovation and collaboration, so we are partnering with the Zoological Society of London (ZSL) – an international conservation charity – and Google, to bring the power of machine learning to the forefront of conservation efforts.

Camera Traps Generating (Wild) Big Data

To track life in the wild, ZSL’s conservation team uses camera traps to capture wild animal behaviour, estimate population numbers and record human actions like hunting. Camera traps use a passive infrared sensor to detect passing animals and digitally photograph them.

BBC Wildlife Magazine Camera-trap Photo of the Year 2010

Pig-tailed macaques find a camera trap (Oliver Wearn, ZSL)

While it may not seem so bad sifting through these “animal selfies,” processing them becomes an expensive barrier for conservation charities when the volume of images is large. A single camera trap can record images for months, taking about 60 images a day, so it could hold over 10,000 images when retrieved after six months – and a survey area might have 100 cameras. Examining each individual image to identify the animals’ species requires months of effort.
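To put that in numbers, here is a quick back-of-envelope calculation using the rough figures above (illustrative values, not survey statistics):

```python
# Rough scale of one camera trap survey (illustrative figures only)
images_per_day = 60
days_deployed = 6 * 30        # roughly six months in the field
cameras_per_survey = 100

images_per_camera = images_per_day * days_deployed           # ~10,800
images_per_survey = images_per_camera * cameras_per_survey   # ~1,080,000

print(f"{images_per_camera:,} images per camera, {images_per_survey:,} per survey")
```

Over a million frames from a single survey, most of them blank or mundane, is simply too much for experts to review by hand.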

“Of the thousands of photographs our conservationists might analyse from a camera trap survey, often only a small percentage include a rare species,” says Sophie Maxwell, ZSL’s Conservation Technology Lead. “That’s a lot of time those experts spend looking at blank images.”

Machine Learning Approach

Image recognition has advanced enormously in recent years. On benchmarks such as ImageNet, deep learning models now exceed human-level performance, so the animal annotation task is technologically feasible. Skeptics have sometimes dismissed deep learning’s impact as little more than telling cats from dogs; we don’t share that narrow view of its potential, but hey, being able to recognise animals is exactly what we need to have a direct impact on saving wildlife!

Best ImageNet error rates since 2011 (human performance is roughly 5%)

Conservationists need models that can recognise not just cats and dogs but specific species such as jaguarundis and ocelots. To achieve this, we data scientists would propose transfer learning. This works by taking a deep model pre-trained on more general image data, freezing its early layers, and retraining the final layers on our labelled imagery. The result is a specialised custom model that might be excellent at, say, monitoring species that are illegally targeted by hunters.

TensorFlow, Caffe, Keras, anyone?
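For the curious, here is what that recipe can look like in Keras. This is a minimal sketch rather than the model ZSL trains: the MobileNetV2 backbone, the species count and the dataset variables are illustrative placeholders.

```python
import tensorflow as tf

# Start from a backbone pre-trained on general imagery (ImageNet),
# without its original classification head.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base_model.trainable = False  # freeze the early, general-purpose layers

# Add a new head that predicts our own species classes.
num_species = 12  # placeholder: jaguarundi, ocelot, ...
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(num_species, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# train_ds / val_ds would be tf.data.Datasets of labelled camera trap
# images, e.g. built with tf.keras.utils.image_dataset_from_directory().
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Simple enough for a data scientist, but still a lot to ask of a field researcher.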

Customised Deep Learning Model with a Few Clicks

Deep transfer learning is not easy, and certainly not something researchers and volunteers at conservation charities can be expected to know intimately. They are not all coders or deep learning gurus, and would rather spend their effort on conservation work than on data science tasks. How can we make cutting-edge machine learning more accessible?

Enter Google Cloud AutoML.  

“With AutoML, conservationists who are non-coders can create image recognition models that automatically identify species in camera trap images,” says Sophie. “Those models can then be used by any conservationist, so we can dramatically speed up conservation efficiency.”

The ZSL team uses camera trap images that have already been labelled to train an AutoML model. “Once you have the data in the cloud, it’s as simple as clicking ‘train new model’,” says Oliver Wearn, AXA Research Fellow at ZSL. “That’s really all there is to it. Google has taken all the hard work of optimising the algorithms out of the process. You literally press ‘Train’ and after a day it comes back to you with a model. As a researcher, AutoML has put some very clever machine learning algorithms, and Google’s cloud computing power, at my fingertips. For the first time, this has allowed us to really start investigating the power of this technology for clearing the backlog of wildlife imagery we have in conservation.”
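Once a model has trained, it can also be called programmatically rather than through the console. Below is a rough sketch using the google-cloud-automl Python client; the project ID, model ID and image path are placeholders, and the exact client calls may differ between library versions.

```python
from google.cloud import automl

# Placeholders: substitute your own project, region and model details.
project_id = "my-conservation-project"
model_id = "ICN1234567890"
file_path = "camera_trap_frame.jpg"

prediction_client = automl.PredictionServiceClient()
model_full_id = automl.AutoMlClient.model_path(project_id, "us-central1", model_id)

# Read the camera trap image and wrap it in an AutoML payload.
with open(file_path, "rb") as f:
    image = automl.Image(image_bytes=f.read())
payload = automl.ExamplePayload(image=image)

# Only return labels the model is reasonably confident about.
params = {"score_threshold": "0.6"}

response = prediction_client.predict(
    request=automl.PredictRequest(name=model_full_id, payload=payload, params=params)
)

for result in response.payload:
    print(f"{result.display_name}: {result.classification.score:.2f}")
```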

Data and Model Sharing Bring about the Best Models 

Model Factory built by Datatonic for ZSL and Google.

Having eliminated the technical barrier to deep learning, we now turn to the challenge of data quantity. While camera traps do capture a lot of images, typically only a small percentage of these contain the rare species that we want to track most. And what if a new animal pays a visit? Even a perfect model cannot identify species it has never seen before.

Addressing this requires sharing – of both data and models. Google’s Vision API owes its accuracy to training on images found all over the internet. If more data means better models, can we pool imagery from deserts all over the world to train the best model to recognise a desert fox?

In partnership with ZSL and Google Cloud, we are creating a platform tentatively called “model factory,” which allows conservationists to use Cloud AutoML to create image recognition models from their existing camera trap data. These models can then be shared with other conservation-focused organisations and applied to unlabelled datasets, saving hundreds of hours that would otherwise be spent on manual identification. We hope this model factory will be community-driven and open-source, allowing the conservation community to collectively refine the models and improve their accuracy. The models will also be available via APIs to be incorporated into applications such as Instant Detect, a proprietary camera trap monitoring system that won the Google Impact Award in 2014.

In the words of Charles Darwin, ZSL Fellow: “In the long history of humankind (and animal kind, too) those who learned to collaborate and improvise most effectively have prevailed.” This collaboration marks a new chapter in conservation, one where private and public sector organisations come together to apply modern technology to one of the world’s greatest challenges: preserving biodiversity in the Anthropocene.
