Google Launches SpeciesNet for Wildlife Conservation

Google releases open-source AI model for wildlife monitoring

Google has released SpeciesNet, an artificial intelligence system that identifies animals in camera trap photographs. The model recognises more than 2,000 species and processes images orders of magnitude faster than manual review, changing how conservation teams handle field data. Released under the Apache 2.0 licence in early 2025, the system is available on GitHub for anyone to use.

Camera traps have become standard equipment for wildlife monitoring. These motion-activated cameras with infrared sensors sit in forests, grasslands, and protected areas, capturing images whenever animals pass by. The technology works well for data collection. However, researchers face a significant bottleneck when analysing what those cameras record.

Manual image review typically processes between 300 and 1,000 photographs per hour. Large conservation projects can generate millions of images annually. Consequently, valuable field data often sits unexamined for months while small teams work through backlogs. SpeciesNet addresses this constraint by automating the identification process at a speed of 3.6 million images per hour.

The system builds on Wildlife Insights, a platform Google launched around 2019 through its Earth Outreach programme. Initially, the platform helped research teams share and analyse camera trap data collaboratively. Early versions of SpeciesNet powered the identification features. Following several years of development and testing with conservation partners, Google released the complete model as open-source software in 2025.

Training data spans 65 million images from global partners

SpeciesNet was trained using more than 65 million photographs from public repositories and partner organisations. Major contributors include the Smithsonian Conservation Biology Institute, Wildlife Conservation Society, North Carolina Museum of Natural Sciences, and Zoological Society of London. This geographically diverse dataset enables the model to recognise animals across different habitats, lighting conditions, and camera angles.

The training process focused on accuracy across multiple metrics. For animal detection, the system achieves 99.4% accuracy in determining whether an image contains wildlife. Species-level identification reaches 83% accuracy, while the model correctly excludes 98.6% of empty images triggered by wind or vegetation movement. When the system makes a species prediction, it proves correct 94.5% of the time on test datasets not used during training.

These figures matter because conservation decisions depend on reliable data. Population counts, habitat use patterns, and threat assessments all start with knowing which animals appear in which locations. Misidentification can lead to wasted resources or missed conservation priorities. Therefore, the model includes features that improve reliability beyond raw classification.

SpeciesNet uses geofencing to filter results based on known species distributions. Each prediction can be constrained by country or region codes, preventing impossible identifications. For example, the system will not suggest kangaroos in Denmark or polar bears in Tanzania. This geographic filtering reduces false positives significantly.
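The filtering logic can be pictured as a simple allow-list lookup. The sketch below is illustrative only: the species names, country codes, and range data are invented for the example, not SpeciesNet's actual distribution tables.

```python
# Illustrative allow-list mapping species to ISO country codes where they occur.
# These names and ranges are invented for the example, not SpeciesNet's data.
SPECIES_RANGES = {
    "kangaroo": {"AUS"},
    "polar bear": {"CAN", "USA", "NOR", "RUS", "GRL"},
    "red fox": {"GBR", "DNK", "DEU", "FRA", "USA", "CAN"},
}

def geofence(prediction: str, country: str) -> str:
    """Return the prediction if plausible for the country, else 'unknown'."""
    allowed = SPECIES_RANGES.get(prediction)
    if allowed is not None and country not in allowed:
        return "unknown"  # geographically impossible: filtered out
    return prediction

print(geofence("kangaroo", "DNK"))    # filtered: no kangaroos in Denmark
print(geofence("polar bear", "CAN"))  # plausible: passes through
```

In practice the range data would come from curated species distribution databases rather than a hand-written dictionary, but the principle is the same: a cheap lookup that removes a whole class of false positives before results reach researchers.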

Two-stage architecture combines detection and classification

The model operates through a two-stage process. First, an object detection component called MegaDetector locates animals, people, or vehicles within each photograph. This stage produces bounding boxes around detected objects. Second, a classification network built on the EfficientNet V2 M architecture identifies species from those detected objects.

This separation proves valuable in practice. Many camera trap images contain multiple animals, partial views, or objects at different distances. By detecting first and classifying second, SpeciesNet can handle complex scenes more accurately than single-stage approaches. The system currently supports more than 2,500 categories, updated from the initial 2,000 at launch.
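The two-stage flow described above can be sketched as follows. The detector and classifier here are stand-in functions, not the real MegaDetector or EfficientNet V2 M models; the point is the structure, detect first, then classify each animal detection.

```python
# Minimal sketch of a two-stage pipeline: detect objects, then classify
# only the animal detections. Both stages are stand-ins for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # "animal", "person", or "vehicle"
    box: tuple          # (x, y, width, height) bounding box in pixels
    confidence: float

def detect(image) -> list[Detection]:
    """Stage 1 stand-in: locate objects and return bounding boxes."""
    return [Detection("animal", (120, 80, 200, 150), 0.97)]

def classify(image, det: Detection) -> str:
    """Stage 2 stand-in: identify the species inside one bounding box."""
    return "red deer"

def identify(image) -> list[str]:
    species = []
    for det in detect(image):
        if det.label == "animal":   # people and vehicles skip classification
            species.append(classify(image, det))
    return species

print(identify(object()))  # ['red deer']
```

Because each detection is classified independently, a single frame containing three animals yields three species predictions rather than one muddled label for the whole image.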

When confidence levels fall below reliable thresholds, the model employs label rollup. Instead of guessing between similar species, it provides a broader taxonomic classification. An uncertain identification between two deer species might roll up to “deer” or even “mammal” rather than forcing a specific choice. This approach maintains data quality while still providing useful information for many conservation questions.
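Label rollup can be illustrated with a toy taxonomy. The hierarchy, scores, and threshold below are invented for the example; the mechanism, aggregating confidence up the tree until a reliable level is reached, is what matters.

```python
# Toy taxonomy: child -> parent. Invented for the example,
# not SpeciesNet's actual taxonomy or threshold.
PARENT = {
    "red deer": "deer",
    "roe deer": "deer",
    "deer": "mammal",
    "mammal": "animal",
}

def rollup(scores: dict[str, float], threshold: float = 0.65) -> str:
    """Return the most specific label whose aggregated confidence clears the threshold."""
    # Aggregate each species score into every ancestor label.
    totals = dict(scores)
    for species, conf in scores.items():
        node = species
        while node in PARENT:
            node = PARENT[node]
            totals[node] = totals.get(node, 0.0) + conf
    # Start from the top-scoring species and roll up while confidence is low.
    label = max(scores, key=scores.get)
    while totals[label] < threshold and label in PARENT:
        label = PARENT[label]
    return label

print(rollup({"red deer": 0.40, "roe deer": 0.35}))  # "deer": broad but reliable
print(rollup({"red deer": 0.90}))                    # confident: stays species-level
```

The model is 75% sure it saw some deer in the first case even though it cannot choose between the two species, so reporting "deer" preserves that information instead of discarding it or guessing.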

Users run SpeciesNet through Python code available on GitHub. The system supports GPU acceleration, which means teams with appropriate hardware can process large image libraries quickly. Organisations without specialised computing resources can still use the model, although processing times will be longer without GPU support.
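A step many teams script themselves is filtering the model's prediction output before human review. The sketch below assumes a simple JSON structure of per-image predictions; the field names are illustrative, not the repository's exact schema.

```python
import json

# Hypothetical prediction output; field names are illustrative,
# not the GitHub repository's actual schema.
raw = json.dumps({
    "predictions": [
        {"filepath": "cam01/img_0001.jpg", "prediction": "red deer", "score": 0.91},
        {"filepath": "cam01/img_0002.jpg", "prediction": "blank", "score": 0.99},
        {"filepath": "cam01/img_0003.jpg", "prediction": "roe deer", "score": 0.42},
    ]
})

def confident_detections(payload: str, min_score: float = 0.65) -> list[dict]:
    """Keep non-blank predictions above a confidence threshold for human review."""
    data = json.loads(payload)
    return [p for p in data["predictions"]
            if p["prediction"] != "blank" and p["score"] >= min_score]

for p in confident_detections(raw):
    print(p["filepath"], p["prediction"])
```

Thresholds like this are a project-level choice: a rare-species survey might review every non-blank image, while a common-species count might accept only high-confidence predictions.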

Conservation agencies report faster monitoring and analysis

Adoption has grown rapidly since the 2025 release. The Idaho Department of Fish and Game uses SpeciesNet to monitor deer, elk, and bear populations across forested areas. Automated identification speeds up their annual surveys, although staff still verify results for management decisions. Previously, seasonal monitoring required months of manual review. Now, preliminary results appear within days of collecting camera data.

In Colombia, conservation teams track pumas and ocelots through protected corridors. These elusive cats rarely appear on camera, which means thousands of images must be reviewed to find relevant sightings. SpeciesNet filters out empty images and common species, letting researchers focus attention on the predators they are monitoring. This targeted approach has improved understanding of how these animals move through fragmented habitats.

Australian researchers apply the model to species found nowhere else. Cassowaries and rat-kangaroos present identification challenges because of their unique appearance and behaviours. Training datasets for these species were limited until Wildlife Insights partners contributed regional images. Now, SpeciesNet recognises them reliably, supporting conservation programmes in Queensland and other areas.

Tanzania provides another example. Serengeti monitoring programmes track lions, elephants, and other megafauna to understand population dynamics and human-wildlife conflict. Camera networks span vast areas, generating data volumes that previously overwhelmed analysis capacity. With automated identification, research teams can now examine seasonal movements, breeding patterns, and responses to environmental changes with much greater temporal resolution.

Anti-poaching applications have emerged in Southeast Asia. Real-time image analysis alerts rangers when rare species or suspicious human activity appears on camera. One protected area reported a 67% year-on-year reduction in illegal hunting after implementing automated monitoring with rapid response protocols. However, these results depend on many factors beyond the AI model itself, including enforcement capacity and community engagement.

Five essential facts about SpeciesNet implementation

  • Google released the complete model as open-source software in early 2025 after developing it through the Wildlife Insights platform since approximately 2019.
  • Training used more than 65 million images from global conservation partners, enabling recognition of over 2,500 animal species, broader taxonomic groups, and non-animal objects.
  • The system processes 3.6 million images per hour with 99.4% animal detection accuracy and 83% species-level identification accuracy on test datasets.
  • Geographic filtering prevents impossible species identifications by constraining predictions to known distributions within specified countries or regions.
  • Active deployments span government agencies, research institutions, and conservation organisations across six continents, with applications ranging from population monitoring to anti-poaching alerts.

Open-source approach enables broader conservation innovation

The Apache 2.0 licence permits both commercial and non-commercial use without fees. This licensing choice matters for conservation, where funding constraints often limit access to technology. Small organisations, academic researchers, and conservation startups can now incorporate species identification into their own tools and workflows.

Several groups have already built applications on top of SpeciesNet. Some focus on specific taxa, adding training data for local species not well represented in the original dataset. Others integrate the model into end-to-end platforms that handle everything from image upload to population reports. This ecosystem approach drives innovation faster than any single organisation could achieve alone.

Nevertheless, open access creates potential concerns. Poaching networks could theoretically use the same technology to locate valuable wildlife. Geographic filtering provides some protection by limiting where predictions work, but determined users might circumvent these safeguards. Conservation technologists continue debating how to balance accessibility with security.

Microsoft offers a competing tool called PyTorch Wildlife, which similarly automates camera trap analysis. The Wildlife Conservation Society and WWF work with multiple AI systems depending on project needs. This diversity proves healthy for the field. Different models excel in different contexts, and having multiple options encourages continuous improvement across all platforms.

Speed improvements free researchers for fieldwork and analysis

The primary business case for automated identification centres on staff time. Conservation organisations operate with limited budgets. Researchers spending weeks on image review cannot simultaneously conduct field surveys, analyse population trends, or engage with local communities. By reducing image processing from weeks to hours, SpeciesNet changes what conservation teams can accomplish with fixed resources.

For example, a typical camera trap survey might deploy 50 cameras for three months, capturing perhaps 100,000 images. Manual review at 500 images per hour requires 200 staff hours. At UK consultancy rates, this represents £6,000 to £10,000 in labour costs for a single survey. Automated processing reduces this to equipment time and verification work, cutting costs substantially while freeing skilled ecologists for higher-value tasks.
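That back-of-envelope calculation can be checked directly. The hourly rate range below (£30 to £50) is inferred from the text's cost figures, not a quoted market rate.

```python
# Back-of-envelope survey cost using the figures in the text above.
images = 100_000
review_rate = 500              # images reviewed per hour, manual
hourly_rate = (30, 50)         # GBP per hour; inferred from the stated totals

hours = images / review_rate   # staff hours of manual review
cost_low, cost_high = (hours * r for r in hourly_rate)
print(hours, cost_low, cost_high)  # 200.0 6000.0 10000.0
```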

Speed also enables questions that were previously impractical. Researchers can now examine how animal behaviour changes across seasons, how populations respond to weather events, or how species interact at fine temporal scales. One study used SpeciesNet to analyse whether mammals shift to nocturnal activity in areas with high human presence. The dataset included millions of images across dozens of sites. Manual analysis would have taken years. Automated identification made it feasible within months.

Real-time monitoring becomes possible at scale. Instead of waiting for quarterly data downloads and analysis, rangers can receive alerts within hours of significant detections. This matters for anti-poaching, human-wildlife conflict management, and documenting rare species. However, implementing real-time systems requires reliable power, connectivity, and response protocols, which remain challenging in many conservation contexts.

Accuracy limitations require verification for critical decisions

An 83% species-level accuracy rate sounds impressive, yet it means roughly one in six identifications may be incorrect. For many conservation questions, this error rate is acceptable. Population trends, habitat use, and activity patterns often remain clear even with some misidentifications. However, decisions involving protected species, legal compliance, or significant resource allocation typically require human verification.

The Idaho Department of Fish and Game, for instance, uses SpeciesNet to filter and sort images, but staff verify identifications before including them in official reports. This hybrid approach captures most of the efficiency gains while maintaining data quality for management decisions. Similarly, researchers studying rare species usually review all detections manually, using the AI to find candidate images rather than trusting classifications blindly.

Accuracy varies by species and context. Common, distinctive animals like elephants or zebras achieve near-perfect identification rates. Small, similar-looking species such as rodents or certain birds prove more challenging. Image quality, distance, and partial views all affect performance. Users must understand these limitations when designing monitoring programmes and interpreting results.

The model improves as more organisations contribute data. Each new deployment in a different ecosystem or with different camera equipment provides learning opportunities. Google and Wildlife Insights partners continue refining SpeciesNet through regular updates that incorporate feedback and additional training examples. This iterative improvement follows standard practice in machine learning, where models evolve through use.

Integration with UK biodiversity monitoring programmes

UK conservation organisations have begun exploring SpeciesNet for native species monitoring. Camera trap surveys track everything from badgers and deer to pine martens and wildcats. Automated identification could accelerate these programmes significantly. However, British species represent a small fraction of SpeciesNet’s training data, which emphasises globally significant and tropical wildlife.

Some UK users report mixed results. The model recognises common deer and foxes reliably, but struggles with less common species like polecats or mountain hares. Regional variations in appearance can affect accuracy. For example, Scottish wildcats look similar to domestic cats, and distinguishing them requires careful attention to markings that may not register clearly in night-time infrared images.

These limitations suggest opportunities for UK-specific model training. Organisations could build on SpeciesNet’s architecture while adding local species data. The open-source nature of the project makes this feasible. Alternatively, partnerships with Wildlife Insights might incorporate more British and European species into future versions of the global model.

For UK businesses supporting biodiversity net gain requirements, automated monitoring could provide cost-effective evidence of habitat quality and species presence. Development projects increasingly need to demonstrate no net loss of biodiversity. Camera traps with automated analysis offer scalable monitoring at lower cost than traditional survey methods. This application may drive UK adoption more than pure conservation research.

Technical requirements for deployment

Running SpeciesNet requires moderate technical capability. Users need Python programming skills and familiarity with machine learning libraries. The GitHub repository includes documentation and example code, but implementing the system for a specific monitoring programme involves customisation. Small conservation groups without technical staff may need external support.

Hardware requirements scale with image volumes. Processing thousands of images on a standard laptop is feasible, though slow. Projects handling hundreds of thousands or millions of images benefit significantly from GPU acceleration. Cloud computing services like Google Cloud or AWS provide GPU access on demand, allowing organisations to process large batches without investing in specialised hardware.

Data management becomes important at scale. Camera traps generate not just images but also metadata including timestamps, locations, camera IDs, and equipment settings. Effective monitoring programmes need systems that link SpeciesNet identifications back to this metadata, enable verification workflows, and export results for analysis. Some organisations build custom solutions, while others use existing platforms like Wildlife Insights that integrate SpeciesNet natively.
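The linking step can be as simple as a join on file path. Everything in the sketch below, field names, structures, and values, is invented for illustration; real deployments would read these from camera trap databases or EXIF metadata.

```python
# Illustrative join of model predictions to camera metadata by image path.
# All structures and values here are invented for the example.
predictions = {"cam07/0153.jpg": "puma"}
metadata = {"cam07/0153.jpg": {"camera_id": "cam07",
                               "timestamp": "2025-04-02T03:14:00",
                               "lat": 4.61, "lon": -74.08}}

def joined_records(preds: dict, meta: dict) -> list[dict]:
    """Merge each prediction with its camera metadata, ready for verification and export."""
    rows = []
    for path, species in preds.items():
        row = {"filepath": path, "species": species, "verified": False}
        row.update(meta.get(path, {}))  # missing metadata leaves the row partial
        rows.append(row)
    return rows

rows = joined_records(predictions, metadata)
print(rows[0]["species"], rows[0]["camera_id"])
```

The `verified` flag is the hook for the human-review workflow described earlier: records enter as unverified and are flipped only after an ecologist confirms the identification.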

For UK businesses considering camera trap monitoring, working with ecological consultancies that already have SpeciesNet workflows may prove more practical than in-house implementation. This mirrors the broader pattern in sustainability services, where specialist providers offer capabilities that would be inefficient for most organisations to develop internally.

Model performance compared to alternatives

SpeciesNet competes with several other automated identification systems. Microsoft’s PyTorch Wildlife offers similar functionality with different technical foundations. Some regional conservation programmes have developed specialised models for local species. Academic researchers continue publishing new approaches in computer vision and ecology journals.

Comparative studies suggest SpeciesNet performs well across diverse geographic contexts because of its large, global training dataset. Models trained on smaller or more focused datasets may achieve higher accuracy for specific species or regions, but generalise less effectively. For organisations working across multiple countries or ecosystems, a broadly capable model like SpeciesNet offers practical advantages.

The open-source release also matters competitively. Proprietary systems may perform well but limit what users can do with them. Open models enable customisation, integration, and extension in ways that closed systems prevent. This flexibility particularly benefits research and conservation contexts where standard solutions rarely fit perfectly.

Nevertheless, no single model suits all situations. Conservation technology increasingly involves multiple AI systems working together, each contributing particular strengths. SpeciesNet might handle initial species identification, while specialised models analyse behaviour, count individuals, or assess body condition. The field moves towards interoperable tools rather than universal solutions.

Government and regulatory resources for biodiversity monitoring

UK organisations implementing camera trap monitoring should consult guidance from the Department for Environment, Food and Rural Affairs, which oversees biodiversity policy and protected species regulations. The department provides frameworks for monitoring requirements under biodiversity net gain rules and environmental impact assessments.

Natural England offers technical guidance on survey methods and species identification standards for protected and priority species. Camera trap protocols should align with these standards to ensure data quality for regulatory purposes. The organisation also maintains lists of species for which monitoring evidence is particularly valuable.

For businesses working on projects with biodiversity requirements, our nature-positive investment guidance explains how monitoring and evidence collection support compliance with environmental obligations. Effective biodiversity monitoring increasingly underpins planning permissions, environmental permits, and corporate sustainability reporting.

The Mammal Society coordinates national recording schemes and provides species identification resources relevant to UK camera trap surveys. Their distribution data helps interpret results and identify significant records. Academic institutions including the Natural History Museum offer taxonomic expertise and maintain reference collections for species verification.

Wildlife Insights itself remains a valuable resource. The platform at wildlifeinsights.org provides not just the SpeciesNet model but also data management, collaboration tools, and access to global camera trap datasets for comparative analysis.
