Welcome to Animl

Animl is an open-source platform for managing camera trap data, built by The Nature Conservancy.


Last updated 1 year ago

Overview

Animl was designed to:

  1. accept camera trap data from a wide variety of camera types: integrate real-time data streams from wireless camera traps (VHF radio-based and cellular cameras), or upload images in bulk from traditional SD-card cameras

  2. allow for the rapid deployment and integration of multiple machine learning models that may be suited for different environments, different target species, or different business use cases

  3. empower users to configure custom machine learning pipelines that automatically predict what’s in their images and weed out empty images when nothing is detected

  4. send automated alerts if a species of concern is detected

  5. allow users to query, filter, and sort images, review and validate ML-predicted objects and labels, and manage users and their permissions for collaborative image review

  6. export images and labels for use in downstream data analysis, modeling, and machine learning training
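To make step 3 concrete, the sketch below shows the general shape of a detect-then-classify pipeline like the one described: a detector proposes objects in each image, images with no confident detections are weeded out as empty, and a classifier labels the rest. All names here (`fake_detector`, `fake_classifier`, `run_pipeline`) are hypothetical stand-ins for illustration, not Animl's actual API.

```python
# Conceptual sketch of a detect-then-classify camera trap pipeline.
# The detector and classifier below are toy stand-ins (hypothetical,
# not Animl's API) for real models such as an object detector plus a
# species classifier.

from dataclasses import dataclass, field

@dataclass
class Detection:
    box: tuple           # (x, y, w, h) in relative image coordinates
    conf: float          # detector confidence, 0..1
    label: str = ""      # species label filled in by the classifier

@dataclass
class Image:
    name: str
    detections: list = field(default_factory=list)

def fake_detector(image):
    """Toy detector: pretend images named '*fox*' contain one animal."""
    if "fox" in image.name:
        return [Detection(box=(0.2, 0.3, 0.4, 0.5), conf=0.95)]
    return []

def fake_classifier(detection):
    """Toy classifier: assign a species label to a detection."""
    detection.label = "fox"
    return detection

def run_pipeline(images, conf_threshold=0.8):
    """Detect, drop empty images, then classify the remaining objects."""
    reviewed = []
    for img in images:
        # Keep only detections above the confidence threshold.
        img.detections = [d for d in fake_detector(img)
                          if d.conf >= conf_threshold]
        if not img.detections:
            continue  # "empty" image: nothing detected, weed it out
        img.detections = [fake_classifier(d) for d in img.detections]
        reviewed.append(img)
    return reviewed

images = [Image("fox_001.jpg"), Image("empty_002.jpg")]
kept = run_pipeline(images)
print([(i.name, i.detections[0].label) for i in kept])
```

In a real deployment the detector and classifier would be trained models, and the confidence threshold would be tuned per project; the flow of filter-then-label is the part this sketch illustrates.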

Digging deeper

Curious about how AI works on camera trap data and what Animl might be able to do for you? We recommend reading through the pages below as a starting point:

Intro to AI for processing camera trap data
How AI works in Animl