How many individual zebras are represented in this collage of 10 photos? If we were looking at human faces, most of us would have little trouble differentiating between multiple photos of the same person, and photos of different people. But when it comes to wildlife, people are easily stumped.
Not so for computers. If the differences between zebras, or other animals with distinctive markings, can be expressed in mathematical terms, computers can analyze those differences — and identify which animals appear in each photograph — at speeds and levels of accuracy that leave humans in the dust. And that is the idea behind the experimental Image-Based Ecological Information System, or IBEIS.
IBEIS can identify individual animals in photos and match them against images of known animals stored in a database. Intended as a tool for conservation efforts and for research in ecology and population biology, IBEIS has a key strength: it can ID animals from ordinary snapshots, such as the thousands of photos taken daily by visitors to wildlife conservation reserves such as Yellowstone National Park, Amboseli National Park in Kenya, or the Galapagos Islands off the coast of Ecuador.
To help test the system, Chuck Stewart, head of the Department of Computer Science at Rensselaer, and a lead researcher on the project, visited Mpala Research Centre in Kenya in January (and had a close encounter with a rhino). Here’s what he had to say about the system:
The old style of data collection is to tag an individual animal with a radio collar. But that’s just one data point in time; it doesn’t tell you much else. We know that visitors on safari each take hundreds if not thousands of pictures a day, but up until now, the only way to turn that into data would be to have a person look at each photo and manually identify the animals pictured, which is very difficult and impractical. With IBEIS, not only can the computer look at a picture and say ‘this is a zebra,’ it can also say ‘this is zebra number 126.’
IBEIS is being developed by researchers at Rensselaer, the University of Illinois at Chicago, Princeton University, and the wildlife conservation organization Wild Me. The developers have already applied the system to Grevy’s and plains zebras, giraffes, leopards, seals, rhinos, frogs, polar bears and even lionfish. (According to the developers, giraffes and leopards are easiest.)
The process starts with photos gathered not only from tourists, but also collected by field scientists, and automatic cameras installed in the field. Once uploaded, “regions of interest” — rectangular sections within each photo that contain the distinct features of each animal — are chosen within each picture. In the development stage, images are manually screened, to eliminate those with poor views of wildlife, and “regions of interest” are manually identified, but the developers are working to automate these steps.
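A “region of interest” in this sense is just a rectangular crop of the uploaded photo. As a minimal sketch (using NumPy array slicing; the function name and coordinates are illustrative, not part of IBEIS):

```python
import numpy as np

def crop_roi(image, x, y, w, h):
    """Return the rectangular region of interest from an H x W x 3 image array."""
    return image[y:y + h, x:x + w]

# Stand-in for an uploaded photo: a blank 640 x 480 RGB image.
photo = np.zeros((480, 640, 3), dtype=np.uint8)
roi = crop_roi(photo, x=100, y=50, w=200, h=150)
print(roi.shape)  # (150, 200, 3)
```

Only this cropped region, rather than the whole photo, is passed on to the matching algorithm.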
Next, the images are analyzed by HotSpotter, a fast, accurate algorithm developed by Jon Crall, a Rensselaer graduate student working with Stewart, which identifies individual animals against a labeled database. HotSpotter starts by finding a thousand or more “keypoints” in the image’s region of interest, and extracts 128-dimensional vectors that describe each keypoint. These descriptions become a form of “fingerprint” for that animal. HotSpotter then searches for matches to each keypoint descriptor in a large labeled database of descriptors taken from many previous images. A sampling of the vectors is shown in the collage below.
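The core idea — matching each 128-dimensional keypoint descriptor against a database of descriptors — can be sketched with a brute-force nearest-neighbor search. This is an illustrative toy in NumPy, not HotSpotter’s actual code (which uses much faster indexing over far larger databases); the ratio test shown is the standard Lowe-style check that keeps a match only when the nearest database descriptor is clearly closer than the runner-up:

```python
import numpy as np

def match_descriptors(query, database, ratio=0.8):
    """For each 128-D query descriptor, find its nearest neighbor in the
    database; keep the match only if it is clearly closer than the
    second-nearest descriptor (Lowe's ratio test)."""
    matches = []
    for i, q in enumerate(query):
        dists = np.linalg.norm(database - q, axis=1)  # distance to every db descriptor
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, nearest))
    return matches

# Toy data: 5 database descriptors; the query reuses rows 1 and 3 with small noise,
# mimicking the same keypoints seen in a new photo.
rng = np.random.default_rng(0)
db = rng.normal(size=(5, 128))
query = db[[1, 3]] + rng.normal(scale=0.01, size=(2, 128))
print(match_descriptors(query, db))  # [(0, 1), (1, 3)]
```

In the real system each matched database descriptor is labeled with the animal it came from, so these raw matches become votes for candidate identities.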
Database images that produce a lot of these descriptor matches become candidates for identification. HotSpotter then scores these candidates, rechecks them for consistency, and produces a similarity score. In most cases, the database image with the highest score is a picture of the same animal, seen at a different time. When all scores are low, the animal is likely to be new to HotSpotter and is added to the database as a new individual.
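The scoring step can be sketched as vote aggregation: sum the per-descriptor match scores for each candidate animal, take the highest total, and fall back to “new individual” when no total clears a threshold. This is a simplified illustration, not HotSpotter’s actual scoring; the animal labels and threshold value are made up for the example:

```python
def identify(match_labels, scores, new_threshold=3.0):
    """Aggregate per-descriptor match scores by database animal, then either
    return the best-scoring known individual or flag the animal as new (None)."""
    totals = {}
    for label, score in zip(match_labels, scores):
        totals[label] = totals.get(label, 0.0) + score
    best = max(totals, key=totals.get)
    if totals[best] < new_threshold:
        return None  # no convincing match: treat as a new individual
    return best

# Toy example: most descriptor matches vote for "zebra_126".
labels = ["zebra_126", "zebra_126", "zebra_007", "zebra_126"]
scores = [1.2, 0.9, 0.4, 1.5]
print(identify(labels, scores))  # zebra_126
```

A single weak match (say, one descriptor scoring 0.5) would fall below the threshold and return `None`, which is the “add to database as a new individual” branch described above.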
Once animals are individually identified, the images are entered into a growing “Wild Book” that can be used to answer questions about who is where and when, providing invaluable data on research regarding population, behavior, and stressors on animals.
Still in the early stages of development, the system is 90 percent accurate on test images, and requires five seconds to match an individual image against a database of more than one thousand images. But there is plenty of room for continued development. Poor lighting, an odd view of an animal, or an obstruction in front of an animal can all cause errors. In addition to overcoming those pitfalls, the developers are working on automated filtering, animal detection, species recognition, and identification.
By July, the researchers hope to complete “IBEIS-Lite,” which will be able to process more than 1,000 images of three to four species per day. Stewart and a small team of graduate students will install IBEIS-Lite in two conservancies during a two-week trip to Kenya in late July. The long-term goal is “IBEIS 2,” a version that will be able to automatically filter, detect, and identify more than 100,000 images of diverse species — including facial recognition of primates — per day. Developers hope to have that version in place within a few years.
So take aim, and prepare to become a “citizen scientist” with IBEIS!