WildTrax: a platform for the management, storage, processing, sharing and discovery of avian data

Authors

Alexander G. MacPhail

Corrina Copp

Erin M. Bayne

Michael Packer

Chad Klassen

Joan Fang

Hedwig E. Lankau

Monica Kohler

Tara Narwani

Steven L. Van Wilgenburg

Elly C. Knight

Kevin G. Kelly

Charles M. Francis

Published

August 23, 2025

Abstract

This is a draft manuscript for submission to Avian Conservation and Ecology Special Feature: A synthesis of data, tools, and resources for avian research and conservation planning in Canada (https://ace-eco.org/feature/8/). The text and content may change as results are finalized.

-French-

Au fur et à mesure que les capteurs environnementaux deviennent des outils indispensables pour surveiller et évaluer les oiseaux, leur efficacité repose sur des systèmes capables de gérer les ensembles de données volumineux et souvent complexes qu’ils génèrent. WildTrax (https://www.wildtrax.ca ) est une plateforme web conçue pour gérer, stocker, traiter, partager et découvrir ces données à différentes échelles. Grâce à ses améliorations et à son entretien continus, WildTrax permet aux chercheurs de répondre à des questions écologiques sur des échelles spatiales et temporelles, tout en renforçant le réseau de chercheurs, en favorisant la collaboration et en améliorant le partage de l’information pour soutenir la conservation des oiseaux au Canada et ailleurs.

-English-

As environmental sensors become indispensable tools for monitoring and assessing birds, their effectiveness depends on robust systems capable of managing the large, and often overwhelming, datasets they generate. WildTrax (https://www.wildtrax.ca) is a web-based platform built to manage, store, process, share, and discover environmental sensor data from local to international scales. Through its ongoing development and maintenance, WildTrax enables researchers to address ecological questions across multiple spatial and temporal scales, while strengthening the avian data network, fostering collaboration, and enhancing data sharing to support bird conservation in Canada and beyond.

Highlights

  • Environmental sensor data generate large and complex datasets, requiring structured systems for harmonization, quality control, and integration across studies.
  • WildTrax provides a framework for storing, managing, processing, sharing, and discovering ecological data, enabling broader accessibility and reproducibility.
  • Analytical pipelines, including thoughtfully applied artificial intelligence tools, can enhance data interpretation while maintaining transparency and user control.
  • Continued development of integrated data platforms fosters opportunities for researchers to synthesize diverse datasets, advancing both ecological understanding and conservation practice.

Keywords:
Environmental sensors, bird population monitoring, data management, online platform, ecological data sharing, multi-scale analysis, conservation technology

Introduction

Birds have long been recognized as ecological indicators (Temple and Wiens (1989), Bibby (1999), Canterbury et al. (2000), Gregory et al. (2003), Mekonen (2017)) due to their sensitivity to environmental changes (Furness and Greenwood (2013)), broad distribution across ecosystems (Orme et al. (2006), Jetz et al. (2012), Greenberg and Godin (2012), Aide et al. (2013), Lepage, Vaidya, and Guralnick (2014), Ahumada et al. (2020)), and measurable population dynamics (Reif (2013)). They can signal shifts in ecosystem health (Zakaria, Leong, and Yusuf (2005), Newman et al. (2007), Smits and Fernie (2013)) and function (Sekercioglu (2012), Sitters et al. (2016), Morante-Filho and Faria (2017)), providing integral metrics (Stanton et al. (2016), Michel et al. (2020)) for conservation and management. Birds are also valuable indicators because they occupy diverse niches, have well-studied life histories, and often correlate with the health of other taxa (Fleishman et al. (2005)). Their utility spans ecosystem monitoring, habitat quality assessment, and gauging the impact of environmental stressors such as land use change, climate change (Jiguet et al. (2007)), and pollution (Niemi et al. (1997), Mekonen (2017)). Advances in statistical modelling now allow for better integration of uncertainty, phylogenetic relationships, and temporal autocorrelation, enhancing the reliability of bird-based indicators (Fraixedas et al. (2020)). However, challenges remain in their application as indicators, including spatial, seasonal, and habitat biases, as well as insufficient consideration of statistical uncertainty and temporal autocorrelation in multi-species bird indicators (Gregory et al. (2003), Fraixedas et al. (2020)).

Environmental sensors, such as autonomous recording units (ARUs) and remote camera traps, are reshaping avian monitoring by providing continuous, high-resolution, and large-scale data collection (Hobson et al. (2002), Shonfield and Bayne (2017), Pollet et al. (2025)). The adoption of ARUs has allowed single-visit human surveys to be supplemented or replaced with archived acoustic recordings, enabling “big data” approaches that integrate multiple datasets for broader ecological inference (Hampton et al. (2013); Farley et al. (2018); Shin and Choi (2015); Nathan et al. (2022); Peters et al. (2014); Hallgren et al. (2016)) while still being statistically harmonized across methodologies (Sólymos et al. (2013)), with detectability depending on factors such as distance from the ARU, frequency range, and habitat structure (Yip et al. (2017)). Remote camera traps have likewise become valuable tools for avian research, particularly for documenting nesting behavior, predation events, and species presence in otherwise inaccessible habitats (Bolton et al. (2007); Randler and Kalb (2018); O’Brien and Kinnaird (2008)). When deployed alongside ARUs, cameras provide complementary visual evidence that strengthens ecological inference, highlighting the advantages of multimodal sensor programs (Buxton et al. (2018); Garland et al. (2020)). Both sensors generate large ecological media datasets, often characterized as ‘big data’ due to their volume, variety (e.g., different file types), veracity (uncertainty in data quality such as noise or misclassification), and velocity (rate of accumulation or acquisition; Hampton et al. (2013); Farley et al. (2018)). Centralized software platforms facilitate standardization, integration, and sharing of these data, enabling interdisciplinary collaboration, advancing biodiversity research, and helping to reduce bias (Peters et al. (2014)). Notably, machine learning is being combined with ecological models to meet computational demands, positioning sensors and big data systems to link raw ecological observations with actionable conservation strategies and sustainability goals (Nathan et al. (2022)) while minimizing data waste (Binley et al. (2023)).

To maximize their utility in an open framework, biodiversity data must align with FAIR principles: findable, accessible, interoperable, and reusable (Kush et al. (2020)). However, challenges persist around data quality, equitable access, and long-term preservation, highlighting the importance of strong and lasting socio-technical frameworks (Shin and Choi (2015)). Software platforms that implement these principles and harmonize diverse datasets while maintaining consistency, equity, and quality can unlock advanced species- and community-level analyses, ultimately transforming raw observations into actionable insights for conservation and policy (Stephenson and Stengel (2020); Fox et al. (2017)). By overcoming limitations of traditional methods and promoting open data-sharing, such platforms enhance understanding of ecological trends (Buxton et al. (2021)) and foster collaboration among stakeholders, aligning priorities across regional to global scales (Kartez and Casto (2008)). Here we introduce WildTrax, a platform for storing, managing, processing, sharing, and discovering avian environmental sensor data. This paper describes version 2.0 of WildTrax, corresponding with its 2025 release.

Methods

Database

Infrastructure

WildTrax uses PostgreSQL, a free and open-source relational database management system (RDBMS). PostgreSQL is well-suited for managing complex biodiversity datasets and the rich metadata that accompany ARU and remote camera data, offering optimized querying, relational structure, and robust storage capabilities (Douglas and Douglas (2003); Zhang, Gertz, and Gruenwald (2009); Kim et al. (2021)). The application infrastructure runs on a virtualized server environment configured for dedicated applications. This architecture supports both daily user interactions and more computationally intensive batch tasks, such as uploading large volumes of audio and image files or running artificial intelligence classifiers for species recognition or object detection. To ensure reliability and performance at scale, the system employs load balancing and redundancy measures, enabling near-continuous availability for a large and growing user base. The production server is hosted at the University of Alberta (Edmonton, Canada), with optional long-term storage services available either on the University of Alberta server or through Amazon Web Services (AWS), with nodes in Calgary, Alberta, Canada; Montreal, Quebec, Canada; and Oregon, USA. Off-site storage uses AWS S3 Glacier Deep Archive, supporting a recommended 3-2-1 data backup policy (Perkel (2019)). To optimize performance, a local M.2 drive paired directly with the CPU significantly reduces I/O bottlenecks, enabling read/write speeds several times faster than traditional SSD NAS configurations. In practice, these enhancements improve query response times and batch processing speeds, while the upgraded CPU delivers smoother responsiveness and improved multitasking during high-demand operations. Media files are indexed and linked to whichever storage option an Organization chooses.

Role-based access and data security

WildTrax uses Auth0, a third-party role-based access control (RBAC) system that provides secure login and token-based authentication and authorization. Data membership and access follow the principle of least privilege, with permissions managed at both the Organization and Project levels. At the Organization level, users are assigned either an Administrator role, with full read–write access, or a Read-Only role, which restricts modifications to protect data integrity. Organizations may also designate a Principal Investigator (PI), who serves as the primary account for handling access requests. If a PI is not assigned, requests are directed to Organization and Project Administrators. At the Project level, roles provide more granular control. Administrators have full management access, including user assignments and data syncing. Taggers can annotate media but do not have administrative privileges. Read-Only users have view-only access, enabling collaboration without altering project data.
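
These levels and roles can be summarized as follows (a minimal sketch in the style of the project-status table shown later in this paper; the permission wording is our paraphrase of the roles described above, not a WildTrax API definition):

Code
library(DT)

# Summary of WildTrax role-based access at the Organization and Project levels
roles <- data.frame(
  Level = c("Organization", "Organization", "Organization",
            "Project", "Project", "Project"),
  Role = c("Administrator", "Read-Only", "Principal Investigator (PI)",
           "Administrator", "Tagger", "Read-Only"),
  Permissions = c(
    "Full read–write access to Organization data.",
    "View-only access; cannot modify Organization data.",
    "Primary account for handling access requests.",
    "Full Project management, including user assignments and data syncing.",
    "Can annotate media but has no administrative privileges.",
    "View-only access, enabling collaboration without altering Project data."
  )
)

datatable(roles, rownames = FALSE, options = list(pageLength = 6))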

User assignments

Schemas

WildTrax employs a hierarchical structure to maintain data integrity, enforce standardization, and support interoperability of environmental sensors. The system is built around three primary data schemas, ARUs, cameras, and point counts, which are organized within Projects. Projects serve as centralized containers for all observations and sensor outputs linked to a specific study or research question. At a higher level, Organizations aggregate multiple Projects, bringing sensor data and media together under a unified framework. In addition, species schemas are managed across multiple taxonomic levels, enabling both standardized presets and the custom addition of species within individual Projects.

Entity relationship diagram of the common WildTrax schema

Organization locations tab

Media uploaded to WildTrax, whether recordings or images, must include a location name prefix and a date-time stamp. For recordings, this information is contained in the media file name (e.g., LOCATION_YYYYMMDD_HHMMSS); for images, it is taken from the parent folder name and the EXIF metadata of each file. This way, each media file is explicitly linked to its parent Location, the precise geographic point of deployment, ensuring that sensors and their outputs remain inseparable from their environmental context. The schema enforces referential integrity while streamlining front-end operations such as data retrieval, filtering, and visualization through optimized queries and external APIs. For example, temporal metadata is standardized across sensors (YYYY-MM-DD HH:MM:SS), while spatial coordinates are consistently defined at the Location level (longitude, latitude; WGS84). This framework enables multi-dimensional analyses that integrate raw sensor output with ecological, climatic, and biogeographic patterns. From the parent Location, metadata can be expanded through various Organizational tabs: Location Photos, for visual records of the landscape or context of the Location; Visits, for records of when an observer visited a Location; Equipment, an indexed inventory of devices used to collect media; and Deployments, the record of a specific piece of equipment deployed during a Visit. Together, these metadata provide users with precise information on the exact equipment used to survey each Location and collect media files. This detail strengthens data quality by enabling issues related to mechanical failure, physical damage, environmental conditions, or recording errors to be traced and validated later during processing tasks.
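
As a minimal illustration of the recording naming convention, the file name can be split into its Location prefix and date-time stamp (a base R sketch; the file name shown is hypothetical):

Code
# Hypothetical recording file name following the LOCATION_YYYYMMDD_HHMMSS convention
file_name <- "EXAMPLE-LOC-01_20230612_053000.wav"

# Drop the extension and split on underscores: location prefix, date, time
parts <- strsplit(tools::file_path_sans_ext(basename(file_name)), "_")[[1]]

location <- parts[1]
recording_date_time <- as.POSIXct(paste(parts[2], parts[3]),
                                  format = "%Y%m%d %H%M%S", tz = "UTC")

location             # "EXAMPLE-LOC-01"
recording_date_time  # "2023-06-12 05:30:00 UTC"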

Point count project dashboard

ARU project dashboard

Entity relationship diagram of the acoustic (ARU) WildTrax schema

User interface and user experience (UI/UX)

WildTrax’s user interface is a responsive web application built with Vue.js, leveraging its modular component-based architecture, including the Composition API, for scalability, maintainability, and efficient logic reuse. The user interface (UI) is styled and enhanced using PrimeVue, among other libraries, which delivers rich, customizable components such as data tables and dashboards for seamless data exploration and visualization. The application is deployed via an Apache HTTP Server, which serves the front-end and handles API routing through reverse-proxy configurations to back-end services. WildTrax exposes APIs for data exchange and provides export tools in standard scientific formats (e.g., CSV, JSON, text, and zip), supporting downstream integration with statistical and geospatial workflows.

ARU context menu for a public project

Dashboards are designed to give users a clear, intuitive overview of their sensors’ data, enabling both novice and expert users to navigate and interpret project information with minimal friction. Key features, including filters, sorting, clearly labeled column headers with hover information, and tooltips, are complemented by dropdown menus accessed through responsive, context-aware controls, allowing users to refine or manipulate large datasets quickly without navigating complex database relationship structures. Visual elements such as sortable data tables, progress indicators, and status icons support efficient scanning and pattern recognition, while maintaining consistency with the broader WildTrax design system. Attention to micro-interactions, such as inline feedback and notifications when data are updated or filters applied, reinforces a sense of responsiveness and reduces users’ cognitive load. The dashboard prioritizes transparency by linking each dataset directly back to its associated location and media, ensuring users can trace results from summary views down to the raw sensor files, from Tasks up to Organizations. WildTrax 2.0 incorporates a new content management system (CMS) that provides support for translation (currently English and French), internationalization (i18n), and content localization, ensuring that the platform can accommodate multilingual users and diverse regional requirements.

Content management system

ARU recording upload popup

Each sensor’s processing interface and data management capabilities are tailored to the unique characteristics of its respective data type while ensuring consistency in data harmonization, design, and usability. Within each sensor type (ARUs, cameras, and point counts), Projects serve as the core organizational unit, enabling users to upload media, manage processing tasks from recordings or image sets, integrate external data, assign species and user roles, verify species tags, attach ancillary metadata and files, and publish results for discovery and use by other members and the broader community.

Acoustic sensor

The acoustic sensor workflow is designed to require as few steps as possible, maximizing the use of the acoustic data for users at any level of experience. Supported audio file types are WAC and W4V (Wildlife Acoustics proprietary lossless compressed file types), FLAC (an open-source lossless compressed file type), MP3 (MPEG-1 Audio Layer III; lossy compressed audio), and WAV (uncompressed audio). All uploaded file types, except MP3, are converted to FLAC, a lossless audio codec that preserves bit-for-bit fidelity while reducing storage needs by approximately 30–70% compared to WAV. FLAC also allows researchers to uncompress the lossless data for use in other applications, avoiding any issues that arise with lossy data compression (MacPhail et al. (2024)).
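
For users preparing or re-using their own archives, the same lossless WAV-to-FLAC round trip can be approximated locally, for example with the seewave package (a sketch assuming the FLAC command-line encoder is installed; WildTrax performs the conversion server-side on upload, and the file name below is hypothetical):

Code
library(seewave)

# Losslessly compress a WAV recording to FLAC
wav2flac("EXAMPLE-LOC-01_20230612_053000.wav")

# Decompress back to WAV, bit-for-bit, for use in other applications
wav2flac("EXAMPLE-LOC-01_20230612_053000.flac", reverse = TRUE)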

Recordings can be uploaded to either Projects or Organizations, with all media ultimately owned and stored at the Organization level. Within an Organization, recordings are aggregated in the Recordings tab, where users can generate Tasks based on pre-selected criteria; over 40 filters are available, grouped into Weather, Location, Recording, Species, and Astronomical conditions. For example, users can select recordings with a BirdNET confidence threshold of 0.8, during a new moon, and between 6 AM and 9 AM. These selected recordings are placed in a cart and can then be generated into a Project. This workflow is unprecedented, as it allows users to manage all their media centrally while creating Projects only for the specific questions they want to investigate.

Acoustic task selection process with filters applied for recordings with Swainson’s Thrushes detected under partly cloudy conditions

At the Project level, uploaded recordings become Tasks, allowing users to assign a duration, processing method, and observer to each recording. This unique combination allows recordings to be re-processed with different durations, observers, or processing methods, supporting multi-observer data processing (MacPhail et al., unpublished) and flexibility in re-using the same audio data for different processing methodologies or questions. Once audio recordings are archived, they are dynamically converted into spectrograms, visual representations of the audio signals computed with short-time Fourier transforms (STFT) via SoX within the acoustic processing interface. Default project parameters (X-scale for duration, Y-scale for spectrogram height in pixels, and colour formatting) define the size and style of the spectrogram but can be changed dynamically using the Audio Settings panel to modify the dimensions and range of the spectral signatures in order to isolate frequency ranges. This is particularly useful for isolating species that vocalize in narrow frequency bands.
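
A comparable STFT-based spectrogram can be generated offline with standard R bioacoustics tools (a sketch using tuneR and seewave rather than the SoX pipeline WildTrax uses internally; the window length and frequency limits below are illustrative only):

Code
library(tuneR)
library(seewave)

# Read a recording and render a short-time Fourier transform spectrogram
wav <- readWave("EXAMPLE-LOC-01_20230612_053000.wav")

spectro(wav,
        f = wav@samp.rate,          # sampling frequency (Hz)
        wl = 512,                   # STFT window length (samples)
        ovlp = 50,                  # 50% window overlap
        flim = c(0, 10),            # limit display to 0-10 kHz
        collevels = seq(-60, 0, 2)) # amplitude range (dB) shown in colour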

Dynamic audio settings panel

Ruffed Grouse non-vocal drumming

The processing interface allows users to play back audio while simultaneously viewing the spectrogram, providing both auditory and visual interaction with the data. This setup enables users to isolate individuals across recordings by examining signal directionality (left vs. right channel), amplitude (e.g. intensity or loudness), timing or frequency of vocalizations, and distinct song types. By combining these features with visual and auditory interpretation, experts can curate recordings with high precision and accuracy for both species identification and count estimates.

Acoustic processing interface showcasing multiple Yellow Warblers singing with different intensities, directionality and timing

Various panels support and enhance the acoustic processing UX and accessibility to external data. The Weather panel provides environmental context, including temperature, wind speed, and precipitation at the hour nearest the recording, as well as sunrise, sunset, moonrise, moonset, and lunar phase. The AI-assisted classification panel displays species predictions at 1.5-second intervals along the spectrogram, with outputs filterable by classifier-specific score thresholds (BirdNET and HawkEars); access to the overlays is enabled by Project Administrators. The Noise panel allows users to annotate geophonic noise (e.g., wind, rain, ocean), anthropogenic noise (e.g., industry, traffic), or equipment malfunctions. Noise events can be further categorized by channel (left/right), amplitude (low to extreme), and frequency (intermittent, frequent, constant, infrequent). Access to Location Photos also aids species identification and helps assess landscape factors, such as vegetation density, forest type or human features, that may influence species detection or noise parameters.

Acoustic processing interface

Acoustic sensor layout with major components

Species detections are recorded as Tags, created when a human observer draws a bounding box around an acoustic signal on the spectrogram. Each Tag captures temporal and frequency information, including date and time of first detection, duration, and frequency range (Hz). The Tag Info Panel enables rapid entry of metadata such as individual identifiers, count estimates, vocalization type, confidence flags, and comments. This structure allows flexibility; for example, a flocking species may be represented by a single Tag with a count estimate greater than one. Tagging is further guided by the Task Method, which enforces consistency and reduces inappropriate annotations.
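
Conceptually, each Tag can be represented as one row of structured data (a hedged sketch using column names from the main report shown later in this paper; the frequency columns are illustrative only and are not actual report fields):

Code
library(tibble)

# One example Tag: a bounding box drawn around a single song
tag_example <- tibble(
  tag_id = 1L,
  species_code = "WTSP",          # White-throated Sparrow
  individual_count = 1,
  vocalization = "Song",
  detection_time = 12.4,          # seconds from the start of the task
  tag_duration = 2.1,             # seconds
  min_freq_hz = 2000,             # illustrative: lower bound of the bounding box
  max_freq_hz = 6500              # illustrative: upper bound of the bounding box
)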

Automated classification outputs (e.g., single-species recognizers or alternative models) can follow a parallel workflow. Recordings are uploaded and Tasks created, after which classifier-generated Tags are imported via the sync functionality, which supports CSV-based import and export. Users can then be assigned as validators to review, correct, or remove Tags and even rate them to assess classifier performance relative to score thresholds. Some classifiers may only provide limited detection information, such as a window or start time. In these cases, classifier Tags can be uploaded using just the signal start time, and WildTrax will generate the average frequency and duration of the species’ signal based on existing Tags in the database. Users can then adjust Tag dimensions manually if necessary. When recordings are selected using classifier outputs from the Task generator within an Organization, it is recommended that a human review the recording by drawing a Tag, providing a direct comparison of time-to-first-detection and score threshold between human and classifier annotations.

Acoustic species verification

As Tasks are marked Complete, Tags are aggregated by species in the Species Verification tab. Users are then assigned privileges in the Project as validators to review, confirm or edit key information (e.g., species ID, count, vocalization). Tags are first created or imported, then reviewed by species–vocalization type. Each Tag is either confirmed as Verified or reclassified as Transcribed. The Species tab provides summaries of verification status along with tools for managing validators and tracking workflow progress.

Camera sensor

The camera sensor in WildTrax is designed to support the ingestion, processing, and validation of image data from remote cameras. Images are uploaded at the Project level, with ownership and long-term storage again retained by the Organization and role-based access applied as for ARUs. Image Sets are created from the start and end dates of the deployed images and then become Tasks, unique combinations of Project, user, and Image Set that can be assigned to analysts for processing and validation. Upon upload, Project Administrators can automatically pre-process images using MegaDetector V6 (Beery (2023)), a convolutional neural network that detects animals, people, vehicles, and empty images. It performs especially well on medium- to large-bodied mammals, humans, and vehicles, and effectively filters out false triggers and blanks. However, its performance is limited, particularly in cases of partial occlusion, rapid movement, or low image resolution. For this reason, human tagging remains a required step to ensure accuracy and to add finer-scale annotations and metadata.

After pre-classification, images enter the human tagging stage, where analysts validate or correct automated tags and add additional metadata. Metadata are divided into two main categories, selectable in the camera Project settings: tag-level metadata (e.g., species, sex, age class, behavior) and image-level metadata. To address the limitations of automated avian classification, WildTrax allows analysts to assign species codes to birds not detected by MegaDetector and to supplement metadata across sequences where birds are observed. Multiple individuals and behaviors can be tracked flexibly.

The processing interface supports two primary analysis modes. Series Mode displays images in chronological order, preserving event structure and enabling interpretation of sequences such as repeated detections or interspecific interactions. Tagging Mode provides access to all images via a filterable interface, allowing users to isolate subsets based on pre-classified tags, metadata, or project-specific codes. Critical metadata, such as camera field of view (FOV), is recorded as a boolean, indicating whether the camera maintained its expected setup during deployment—a key factor for detection radius and species detectability. As tags accumulate throughout a camera project, they are aggregated under the same Species Verification framework used for ARU data, with tag- and image-level metadata filters facilitating rapid verification.

Point count sensor

The point count sensor functions as a central repository for the Boreal Avian Modelling (BAM) Centre’s point count data, consolidating observations from multiple surveys into a standardized framework and making them shareable and discoverable to the public. Point counts are harmonized into specific distance bands and durations, ensuring that methods are consistent with those used for Autonomous Recording Units (ARUs), which facilitates integrated analyses across the two avian data types. When exporting ARU data, users can configure it to match a point count format (for example, treating all detections within a single ARU task as belonging to a 0–INF distance band), allowing synthesis with traditional point count datasets and ensuring comparability of abundance estimates, species occurrence, and spatial patterns between ARU and human-observed point count data.
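
The harmonization described above can be sketched from the main ARU report by collapsing all detections in a task into a single 0–INF distance band per survey (a hedged illustration using dplyr and the report columns shown later; aru_main is an assumed object holding the main report, and WildTrax also offers this conversion directly at export time):

Code
library(dplyr)

# Collapse ARU detections into a point-count-style row per species per survey
# (individual_count is assumed numeric here; some records use codes such as "TMTT")
point_count_format <- aru_main |>
  group_by(location, recording_date_time, task_duration,
           species_code, species_common_name) |>
  summarise(abundance = sum(individual_count), .groups = "drop") |>
  mutate(distance_band = "0-INF",
         duration_band = paste0("0-", task_duration, "s"))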

Data publication and sharing

When a Project is complete and all Tasks processed, its status can be changed to one of the published statuses. Project publication allows WildTrax users who are not Project members to access the media, metadata, or species detections from the Project, either through the Download Report option in the Project context menu or through Data Discover. The publication status controls how data become visible across the system. Publication locks species detections from further editing and is considered the final version of the data. In addition, for every published Project, WildTrax supports the open avian data network by converting data, via a format flag on the download-report API, to the Bird Monitoring Data Exchange (BMDE) schema used by NatureCounts. Only Map + Report and Public Projects are sent to NatureCounts, with the media remaining accessible through WildTrax.

Code
library(DT)

# Project publication statuses and their visibility rules
statuses <- data.frame(
  Status = c(
    "Test",
    "Active",
    "Published – Private",
    "Published – Map Only",
    "Published – Map + Report Only",
    "Published – Public"
  ),
  Description = c(
    "For users just getting started with an environmental sensor program or testing WildTrax functionalities. Project data are not available for Download or in Data Discover except to project members.",
    "Default status when a project is created. Active projects are currently being processed, designed, or in early stages of data uploading. Use for any general active work.",
    "Project data are only available to project members. Other users must request access to view details such as species or locations.",
    "Project data are visible in Data Discover, but media and reports are not accessible to non-project members. Location buffering and visibility settings apply.",
    "Project data are available for Download and through Data Discover, but media are not accessible. Useful if data can be public but media must remain private. Does not exist for point counts.",
    "All project data and details are publicly available for Download and through Data Discover. Location buffering and visibility settings still apply for non-members."
  )
)

datatable(
  statuses,
  rownames = FALSE,
  options = list(pageLength = 10, autoWidth = TRUE)
)

ABMI Ecosystem Health Project on NatureCounts found at https://naturecounts.ca/nc/default/datasets.jsp?code=WILDTRAX1&sec=bmdr

Reports

Each sensor has a customized series of data reports. Reports are abstractions and summaries of the data collected by each sensor, and each report serves a specific purpose, allowing downstream users to analyze their data.

Code
# wildrtrax report download for the main ARU report
library(wildrtrax)

# Authenticate first with wt_auth() using the WT_USERNAME / WT_PASSWORD environment variables
report <- wt_download_report(620, 'ARU', reports = 'main')
names(report)
 [1] "organization"                "project_id"                 
 [3] "location"                    "location_id"                
 [5] "location_buffer_m"           "longitude"                  
 [7] "latitude"                    "equipment_make"             
 [9] "equipment_model"             "recording_id"               
[11] "recording_date_time"         "task_id"                    
[13] "aru_task_status"             "task_duration"              
[15] "task_method"                 "species_code"               
[17] "species_common_name"         "species_scientific_name"    
[19] "individual_order"            "tag_id"                     
[21] "individual_count"            "vocalization"               
[23] "detection_time"              "tag_duration"               
[25] "rms_peak_dbfs"               "tag_is_verified"            
[27] "tag_rating"                  "observer"                   
[29] "observer_id"                 "species_individual_comments"
[31] "task_comments"              

WildTrax structures outputs from each sensor into standardized reports containing the fields most relevant to its end users.

Data Discover

Data Discover is the central hub for exploring environmental sensor data in WildTrax. In Data Discover, users and the public can search for data from ARUs, cameras, and point counts using a variety of attribute filters, and create summary statistics within a dynamic mapping interface. Users can gain a comprehensive understanding of environmental sensor data in an area of interest, noting which organizations have published data on WildTrax and which species were detected and at what frequency, and exploring media elements such as images and sounds captured in the environment.

Analysis with wildrtrax

wildrtrax (pronounced wild-r-tracks) is the companion R package containing functions to help manage and analyze Report data. It also simplifies the entire data life cycle with WildTrax by offering tools for data pre-processing (file scanning, renaming), wrangling, and analysis, facilitating seamless data transfer to and from WildTrax. wildrtrax helps users establish end-to-end workflows and ensure reproducibility in their analyses. wildrtrax functions call the GET and POST APIs available throughout the system, and all function names begin with the wt_ prefix for ease of use. For example, users can query the Data Discover APIs using wt_dd_summary(), corresponding to the get-dd-map-and-projects and get-dd-long-lat-summary endpoints, to retrieve large-scale datasets or refine them to an area of interest.

Code
library(wildrtrax)
library(dplyr)

# Download the main and BirdNET reports for an ARU project
data <- wt_download_report(project_id = 1144,
                           sensor_id = "ARU",
                           reports = c("main", "birdnet"), 
                           weather_cols = FALSE)

eval_ccsp <- wt_evaluate_classifier(data,
                              resolution = "task",
                              remove_species = TRUE,
                              species = "CCSP",
                              thresholds = c(10, 99))

#Filter the detections to the best threshold
threshold_ccsp <- wt_classifier_threshold(eval_ccsp)

detections_ccsp <- data[[1]] |>
  filter(species_code == "CCSP", 
         confidence > threshold_ccsp)

#Calculate detections per second and mean confidence in each recording
rate_ccsp <- detections_ccsp |> 
  group_by(location_id, recording_date_time, recording_length) |>
  summarize(calls = n(),
            confidence = mean(confidence),
            .groups = "keep") |> 
  ungroup() |> 
  mutate(rate = calls/recording_length*60,
  recording_date_time = as.POSIXct(recording_date_time, format = "%Y-%m-%d %H:%M:%S"),
  yday = as.numeric(format(recording_date_time, "%j")),
  hour = as.numeric(format(recording_date_time, "%H")))

#Filter to the sites with most recordings with detections
occupied_ccsp <- rate_ccsp |> 
  group_by(location_id) |> 
  mutate(recordings = n()) |> 
  ungroup() |> 
  filter(recordings >= 4)
Code
library(ggplot2)

# Plot call rate by day of year
ggplot(occupied_ccsp) + 
  geom_point(aes(x = yday, y = rate)) +
  geom_smooth(aes(x = yday, y = rate)) +
  xlab("Day of year") +
  ylab("Rate of Clay-coloured Sparrow detections per minute") +
  theme_bw()
Figure 1: Clay-coloured Sparrow detection rates across the season in the ABMI Ecosystem Health 2022 dataset.

BirdNET and HawkEars output can be used to automatically classify species in Tasks. Classifier outputs are downloaded via wt_download_report() and combined with the main project report for evaluation. Classifier performance is quantified using precision, recall, and F-score across score thresholds (wt_evaluate_classifier()), and thresholds can be selected to maximize F-score (wt_classifier_threshold()). BirdNET and HawkEars can also identify species missed by human listeners, enhancing species richness estimates (wt_additional_species()). For behavioural analyses, individual call rates can be quantified using species-specific evaluation (wt_evaluate_classifier() for selected species) and filtering detections by optimal thresholds. Call rate trends can then be analyzed temporally or spatially, as demonstrated for Clay-coloured Sparrow (Spizella pallida) in Figure 1 modelling detection rates across the season (see also https://abbiodiversity.github.io/wildrtrax/index.html).

Results

The acoustic sensor was released within WildTrax in 2018 and the camera sensor in 2020. Figure 3 illustrates the cumulative growth of remote camera images and acoustic recordings managed through the ABMI and WildTrax frameworks since then. Since the launch of WildTrax 1.0 in 2018, contributions from ABMI internal programs and partner organizations have resulted in over 150 million images and 2.5 million acoustic recordings by 2025, reflecting substantial platform adoption. The rate of data accumulation has increased exponentially on an annual basis, indicating not only the expanding use of WildTrax but also the growing engagement of contributing organizations.

Code
library(readr)
library(dplyr)
library(ggplot2)
library(scales)

# Cumulative counts of acoustic tags over time for ten example species
species_acoustic_tags <- read_csv("assets/acoustictags.csv") |>
  filter(species_code %in% c("WTSP","SWTH","YEWA","REVI","OSFL","CONI","RUGR","FRGU","SPSA","RTHA")) |>
  group_by(species_code, date_added_on) |>
  summarise(count = sum(tag_count), .groups = "drop_last") |>
  arrange(species_code, date_added_on) |>
  mutate(cumulative_count = cumsum(count)) |>
  ungroup()

ggplot(species_acoustic_tags, aes(x = date_added_on, y = cumulative_count, colour = species_code)) +
  geom_point() +
  geom_smooth(fill = "#FFFFEE") +
  theme_bw() +
  scale_colour_viridis_d(name = "Species") +
  scale_y_log10(labels = trans_format("log10", math_format(10^.x))) +
  ylab("Log cumulative count of acoustic species tags over time") +
  xlab("Date")
Figure 2: All publicly accessible cumulative species detections in WildTrax for ten avian species across multiple taxonomic groups

Projects within the platform cover spatial scales ranging from local sites to provincial-level networks. A growing proportion of these projects are publicly accessible, demonstrating the scalability, interoperability, and broad applicability of the system for diverse monitoring initiatives. Currently, public datasets, primarily derived from point count sensors deployed through BAM collaborations, constitute approximately 10% of total Project holdings, covering an area of [insert area] km² and encompassing 263 projects (205 ARU and point count, 58 camera) across 31 organizations. Ecologically, the dataset enables high-resolution monitoring of species occurrence and abundance. For example, White-throated Sparrow (Zonotrichia albicollis) detections now span more than 5485 individual locations, providing an already large dataset for long-term population and distribution analyses. These accumulating records illustrate the capacity of WildTrax to support large-scale ecological research, facilitate data sharing, and generate comprehensive biodiversity insights across multiple taxa and monitoring modalities.

Figure 3: Accumulation of media since WildTrax (2018) and ABMI’s (2014) inception of environmental sensor monitoring programs
Code
# Apply an area of interest. Define a polygon or use a bbox from sf::st_bbox
my_aoi <- list(
  c(-113.96068, 56.23817),
  c(-117.06285, 54.87577),
  c(-112.88035, 54.90431),
  c(-113.96068, 56.23817)
)

ab_cd <- sf::read_sf(".../Alberta_Census_Boundaries_SHP/Data/AB_CD_2021.shp")

abbox <- ab_cd |>
  sf::st_transform(crs = 4326) |>
  sf::st_bbox()

my_data <- wildrtrax::wt_dd_summary(sensor = 'ARU', species = 'White-throated Sparrow', boundary = abbox)
Code
library(ggplot2)

# Map publicly available detections over the census-division boundaries
ggplot() +
  geom_sf(data = ab_cd, fill = "white", color = "black") +
  geom_point(data = my_data[[2]], aes(x = longitude, y = latitude, size = count), color = "red", alpha = 0.6) +
  coord_sf(crs = sf::st_crs(4326)) +
  theme_minimal() +
  labs(x = "Longitude", y = "Latitude",
       caption = "Source: WildTrax Data Discover")
Figure 4: All publicly accessible White-throated Sparrow (Zonotrichia albicollis) detections in Alberta in WildTrax’s Data Discover portal queried through the wildrtrax package function wt_dd_summary

Discussion and future prospects

The expansion of environmental sensor networks for avian monitoring has fundamentally altered how ecological data can be collected, managed, and interpreted. These trends in platform uptake emphasize the importance of continually upgrading data management strategies, optimizing storage solutions, and creating efficient analytical tools to ensure the continued accessibility and usability of ever-growing big data holdings. High-resolution, continuous recordings from ARUs and remote cameras generate unprecedented volumes of temporal and spatial data, yet the potential of these datasets is only realized through structured, scalable, and accessible management frameworks. Platforms such as WildTrax exemplify this need, offering centralized solutions that enable integration across sensor types, standardization of metadata, and harmonization of multi-source observations. An ongoing consideration in deploying these systems is the balance between data volume and interpretability.

While sensors and artificial intelligence can produce vast archives of audio and visual media, ensuring data quality and reducing noise or misclassification remains an ongoing management challenge. Analytical pipelines that incorporate deep learning, combined with human verification, provide a mechanism to efficiently extract meaningful ecological signals while maintaining transparency and user control, shepherding further human-computer collaboration. The potential of these platforms extends beyond single or multi-taxa monitoring; by linking fine-scale temporal datasets with spatially distributed sensors, researchers can begin to explore community-level patterns, seasonal dynamics, and multi-species interactions at scales that were previously unattainable. Furthermore, standardized, open-access repositories facilitate reproducibility and comparative studies, allowing new ecological questions to be addressed without the need to recollect data. This is particularly relevant for studies of ecosystem health and biodiversity trends, where longitudinal and multi-site datasets provide the statistical power required to detect subtle changes.

The adoption of integrated sensor platforms also raises considerations for infrastructure, cost, and workflow design. Effective long-term monitoring relies not only on hardware reliability and data storage solutions, but also on user-friendly interfaces that allow project administrators and contributors to manage complex datasets efficiently. By supporting automated workflows alongside human oversight, platforms can reduce observer bias, improve detection accuracy, and provide verifiable, permanent records for ecological studies. Overall, while technological and computational advances have enabled large-scale environmental monitoring, the realization of these benefits is contingent upon robust data management systems. The development and refinement of platforms like WildTrax illustrate how socio-technical frameworks can mediate between raw sensor outputs and actionable ecological insights, creating opportunities for broader collaboration, reproducibility, and adaptive conservation practices across multiple spatial and temporal scales.

Glossary

  • Organization: A collection of users who manage Locations, Equipment, Deployments, Recordings, and Image Sets.
  • Visits: Occasions when a user goes to a Location to collect data.
  • Deployments: The act of placing a piece of Equipment at a Location during a Visit.
  • Equipment: Environmental sensor devices, such as ARUs, remote camera traps, SD cards, or microphones.
  • Project: A collaborative effort by a group of users who design studies and process Tasks to answer scientific questions.
  • Location: A physical, geographic site where environmental sensors (such as ARUs or cameras) are placed.
  • ARU (Autonomous Recording Unit): A self-contained device that records environmental audio for research purposes.
  • Tasks: Unique combinations of a user, processing method, and media used to answer a scientific question.
  • Tags: Enclosed portions of a Recording or Image that contain a species detection.

Additional Information and Declarations

Competing Interests

The authors declare no competing interests.

Author Contributions

All authors contributed to manuscript edits and revisions.

Alexander G. MacPhail led the manuscript preparation and design, as well as user support and engagement for the acoustic sensor.

Corrina Copp led platform development, project and data management, and oversaw the initial design and implementation of the camera sensor.

Erin M. Bayne made formative and ongoing contributions to the conceptual development, study design, and overall scientific guidance of the project.

Michael Packer served as lead developer, overseeing server architecture, back-end development, system scalability, user support, and ensuring optimized data processing workflows.

Chad Klassen designed the user interface, provided front-end development, user support, and led user experience design to optimize accessibility and usability across devices.

Joan Fang helped with the initial concept and development.

Hedwig E. Lankau contributed to the development of the Bioacoustic Information System, which provided the foundational framework for the design of the acoustic processing system.

Monica Kohler provided support during initial funding and conceptual stages.

Tara Narwani provided support during initial funding and conceptual stages.

Steven L. Van Wilgenburg provided immeasurable feedback through the creation and use of the system at scale.

Elly C. Knight provided data standardization, integration and analytical support especially with acoustic classifiers.

Kevin G. Kelly provided design implementation and testing especially with acoustic classifiers, as well as user support.

Charles M. Francis provided support, inspiration, and advocacy, which helped secure the funding that made WildTrax possible. His guidance was instrumental in initiating WildTrax and in shaping earlier work with Avichorus, which served as an inspiration for the platform.

Acknowledgements

WildTrax was conceived in Edmonton, ᐊᒥᐢᑿᒌᐚᐢᑲᐦᐃᑲᐣ Amiskwaciwâskahikan, located within Treaty 6 Territory and within the Métis homelands and Métis Nation of Alberta Region 4. We acknowledge this land as the traditional territories of many First Nations such as the Nehiyaw (Cree), Denesuliné (Dene), Nakota Sioux (Stoney), Anishinaabe (Saulteaux) and Niitsitapi (Blackfoot). This project was supported by funding from Environment and Climate Change Canada, Alberta Environment and Parks, the Oil Sands Monitoring Program, and Canada’s Oil Sands Innovation Alliance. We also extend our gratitude to the following organizations for their partnership and support: U of A Sound Studies Institute, Joint Canada-Alberta Implementation Plan for Oil Sands Monitoring, InnoTech Alberta, University of Alberta, NSERC (Natural Sciences and Engineering Research Council of Canada), PTAC, Devon Energy, ConocoPhillips, Cenovus Energy, Nexen, Imperial Oil, Shell, Suncor Energy, Alberta Pacific Forest Industries Inc., Canadian Natural, Alberta Conservation Association, Parks Canada, University of Alberta, Government of Saskatchewan, and Ɂehdzo Got’ı̨nę Gots’ę́ Nákedı Sahtú Renewable Resources Board. We acknowledge that the data used in WildTrax were collected on lands that are, and have always been, the traditional territories of Indigenous peoples. We honour their enduring connection to these lands, waters, and communities, and recognize their rights, cultures, and contributions. We thank the WildTrax community for their ongoing contributions, which strengthen the platform and advance big data approaches to avian conservation.

References

Ahumada, Jorge A, Eric Fegraus, Tanya Birch, Nicole Flores, Roland Kays, Timothy G O’Brien, Jonathan Palmer, et al. 2020. “Wildlife Insights: A Platform to Maximize the Potential of Camera Trap and Other Passive Sensor Wildlife Data for the Planet.” Environmental Conservation 47 (1): 1–6.
Aide, T Mitchell, Carlos Corrada-Bravo, Marconi Campos-Cerqueira, Carlos Milan, Giovany Vega, and Rafael Alvarez. 2013. “Real-Time Bioacoustics Monitoring and Automated Species Identification.” PeerJ 1: e103.
Beery, Sara. 2023. “The MegaDetector: Large-Scale Deployment of Computer Vision for Conservation and Biodiversity Monitoring.” California Institute of Technology, Pasadena, CA, USA.
Bibby, Colin J. 1999. “Making the Most of Birds as Environmental Indicators.” Ostrich 70 (1): 81–88.
Binley, Allison D, Brandon PM Edwards, Gabriel Dansereau, Elly C Knight, and Iman Momeni-Dehaghi. 2023. “Minimizing Data Waste.” Bulletin of the Ecological Society of America 104 (2): 1–11.
Bolton, Mark, Nigel Butcher, Fiona Sharpe, Danaë Stevens, and Gareth Fisher. 2007. “Remote Monitoring of Nests Using Digital Camera Technology.” Journal of Field Ornithology 78 (2): 213–20.
Buxton, Rachel T, Joseph R Bennett, Andrea J Reid, Charles Shulman, Steven J Cooke, Charles M Francis, Elizabeth A Nyboer, et al. 2021. “Key Information Needs to Move from Knowledge to Action for Biodiversity Conservation in Canada.” Biological Conservation 256: 108983.
Buxton, Rachel T, Patrick E Lendrum, Kevin R Crooks, and George Wittemyer. 2018. “Pairing Camera Traps and Acoustic Recorders to Monitor the Ecological Impact of Human Disturbance.” Global Ecology and Conservation 16: e00493.
Canterbury, Grant E, Thomas E Martin, Daniel R Petit, Lisa J Petit, and David F Bradford. 2000. “Bird Communities and Habitat as Ecological Indicators of Forest Condition in Regional Monitoring.” Conservation Biology 14 (2): 544–58.
Douglas, Korry, and Susan Douglas. 2003. PostgreSQL: A Comprehensive Guide to Building, Programming, and Administering PostgreSQL Databases. Sams Publishing.
Farley, Scott S, Andria Dawson, Simon J Goring, and John W Williams. 2018. “Situating Ecology as a Big-Data Science: Current Advances, Challenges, and Solutions.” BioScience 68 (8): 563–76.
Fleishman, Erica, James R Thomson, Ralph Mac Nally, Dennis D Murphy, and John P Fay. 2005. “Using Indicator Species to Predict Species Richness of Multiple Taxonomic Groups.” Conservation Biology 19 (4): 1125–37.
Fox, Helen E, Megan D Barnes, Gabby N Ahmadia, Grace Kao, Louise Glew, Kelly Haisfield, Nur Ismu Hidayat, et al. 2017. “Generating Actionable Data for Evidence-Based Conservation: The Global Center of Marine Biodiversity as a Case Study.” Biological Conservation 210: 299–309.
Fraixedas, Sara, Andreas Lindén, Markus Piha, Mar Cabeza, Richard Gregory, and Aleksi Lehikoinen. 2020. “A State-of-the-Art Review on Birds as Indicators of Biodiversity: Advances, Challenges, and Future Directions.” Ecological Indicators 118: 106728.
Furness, Robert W, and Jeremy JD Greenwood. 2013. Birds as Monitors of Environmental Change. Springer Science & Business Media.
Garland, Laura, Andrew Crosby, Richard Hedley, Stan Boutin, and Erin Bayne. 2020. “Acoustic Vs. Photographic Monitoring of Gray Wolves (Canis Lupus): A Methodological Comparison of Two Passive Monitoring Techniques.” Canadian Journal of Zoology 98 (3): 219–28.
Greenberg, Saul, and Theresa Godin. 2012. “Timelapse Image Analysis Manual.”
Gregory, Richard D, David Noble, Rob Field, John Marchant, M Raven, and DW Gibbons. 2003. “Using Birds as Indicators of Biodiversity.” Ornis Hungarica 12 (13): 11–24.
Hallgren, Willow, Linda Beaumont, Andrew Bowness, Lynda Chambers, Erin Graham, Hamish Holewa, Shawn Laffan, et al. 2016. “The Biodiversity and Climate Change Virtual Laboratory: Where Ecology Meets Big Data.” Environmental Modelling & Software 76: 182–86.
Hampton, Stephanie E, Carly A Strasser, Joshua J Tewksbury, Wendy K Gram, Amber E Budden, Archer L Batcheller, Clifford S Duke, and John H Porter. 2013. “Big Data and the Future of Ecology.” Frontiers in Ecology and the Environment 11 (3): 156–62.
Hobson, Keith A, Robert S Rempel, Hamilton Greenwood, Brian Turnbull, and Steven L Van Wilgenburg. 2002. “Acoustic Surveys of Birds Using Electronic Recordings: New Potential from an Omnidirectional Microphone System.” Wildlife Society Bulletin, 709–20.
Jetz, Walter, Gavin H Thomas, Jeffery B Joy, Klaas Hartmann, and Arne O Mooers. 2012. “The Global Diversity of Birds in Space and Time.” Nature 491 (7424): 444–48.
Jiguet, Frédéric, Anne-Sophie Gadot, Romain Julliard, Stuart E Newson, and Denis Couvet. 2007. “Climate Envelope, Life History Traits and the Resilience of Birds Facing Global Change.” Global Change Biology 13 (8): 1672–84.
Kartez, Jack D, and Molly P Casto. 2008. “Information into Action: Biodiversity Data Outreach and Municipal Land Conservation.” Journal of the American Planning Association 74 (4): 467–80.
Kim, Hyun Woo, Sungsoo Yoon, Mokyoung Kim, Manseok Shin, Heenam Yoon, and Kidong Kim. 2021. “EcoBank: A Flexible Database Platform for Sharing Ecological Data.” Biodiversity Data Journal 9: e61866.
Kush, Rebecca Daniels, D Warzel, Maura A Kush, Alexander Sherman, Eileen A Navarro, R Fitzmartin, Frank Pétavy, et al. 2020. “FAIR Data Sharing: The Roles of Common Data Elements and Harmonization.” Journal of Biomedical Informatics 107: 103421.
Lepage, Denis, Gaurav Vaidya, and Robert Guralnick. 2014. “Avibase–a Database System for Managing and Organizing Taxonomic Concepts.” ZooKeys, no. 420: 117.
MacPhail, Alexander G, Daniel A Yip, Elly C Knight, Richard Hedley, Michelle Knaggs, Julia Shonfield, Emily Upham-Mills, and Erin M Bayne. 2024. “Audio Data Compression Affects Acoustic Indices and Reduces Detections of Birds by Human Listening and Automated Recognisers.” Bioacoustics 33 (1): 74–90.
Mekonen, Sefi. 2017. “Birds as Biodiversity and Environmental Indicator.” Indicator 7 (21).
Michel, Nicole L, Curtis Burkhalter, Chad B Wilsey, Matt Holloran, Alison Holloran, and Gary M Langham. 2020. “Metrics for Conservation Success: Using the ‘Bird-Friendliness Index’ to Evaluate Grassland and Aridland Bird Community Resilience Across the Northern Great Plains Ecosystem.” Diversity and Distributions 26 (12): 1687–1702.
Morante-Filho, José Carlos, and Deborah Faria. 2017. “An Appraisal of Bird-Mediated Ecological Functions in a Changing World.” Tropical Conservation Science 10: 1940082917703339.
Nathan, Ran, Christopher T Monk, Robert Arlinghaus, Timo Adam, Josep Alós, Michael Assaf, Henrik Baktoft, et al. 2022. “Big-Data Approaches Lead to an Increased Understanding of the Ecology of Animal Movement.” Science 375 (6582): eabg1780.
Newman, Scott H, Aleksei Chmura, Kathy Converse, A Marm Kilpatrick, Nikkita Patel, Emily Lammers, and Peter Daszak. 2007. “Aquatic Bird Disease and Mortality as an Indicator of Changing Ecosystem Health.” Marine Ecology Progress Series 352: 299–309.
Niemi, Gerald J, Joann M Hanowski, Ann R Lima, Tom Nicholls, and Norm Weiland. 1997. “A Critical Analysis on the Use of Indicator Species in Management.” The Journal of Wildlife Management, 1240–52.
O’Brien, Timothy G, and Margaret F Kinnaird. 2008. “A Picture Is Worth a Thousand Words: The Application of Camera Trapping to the Study of Birds.” Bird Conservation International 18 (S1): S144–62.
Orme, C David L, Richard G Davies, Valerie A Olson, Gavin H Thomas, Tzung-Su Ding, Pamela C Rasmussen, Robert S Ridgely, et al. 2006. “Global Patterns of Geographic Range Size in Birds.” PLoS Biology 4 (7): e208.
Perkel, Jeffrey M. 2019. “Ways to Avoid a Data-Storage Disaster.” Nature 568 (7750): 131–32.
Peters, Debra PC, Kris M Havstad, Judy Cushing, Craig Tweedie, Olac Fuentes, and Natalia Villanueva-Rosales. 2014. “Harnessing the Power of Big Data: Infusing the Scientific Method with Machine Learning to Transform Ecology.” Ecosphere 5 (6): 1–15.
Pollet, Ingrid L, Alexa Arnyek, Julia Ellen Baak, Rikki Clark, Jacob Comeau-Ouellette, Asha C Grewal, Sarah E Gutowsky, et al. 2025. “Technological Advancements: A Global Review of the Use of Camera Technology in Wildlife Research.” Environmental Reviews, no. ja.
Randler, Christoph, and Nadine Kalb. 2018. “Distance and Size Matters: A Comparison of Six Wildlife Camera Traps and Their Usefulness for Wild Birds.” Ecology and Evolution 8 (14): 7151–63.
Reif, Jiřı́. 2013. “Long-Term Trends in Bird Populations: A Review of Patterns and Potential Drivers in North America and Europe.” Acta Ornithologica 48 (1): 1–16.
Sekercioglu, Cagan H. 2012. “Bird Functional Diversity and Ecosystem Services in Tropical Forests, Agroforests and Agricultural Areas.” Journal of Ornithology 153 (Suppl 1): 153–61.
Shin, Dong-Hee, and Min Jae Choi. 2015. “Ecological Views of Big Data: Perspectives and Issues.” Telematics and Informatics 32 (2): 311–20.
Shonfield, Julia, and Erin M Bayne. 2017. “Autonomous Recording Units in Avian Ecological Research: Current Use and Future Applications.” Avian Conservation & Ecology 12 (1).
Sitters, Holly, Julian Di Stefano, Fiona Christie, Matthew Swan, and Alan York. 2016. “Bird Functional Diversity Decreases with Time Since Disturbance: Does Patchy Prescribed Fire Enhance Ecosystem Function?” Ecological Applications 26 (1): 115–27.
Smits, Judit EG, and Kimberly J Fernie. 2013. “Avian Wildlife as Sentinels of Ecosystem Health.” Comparative Immunology, Microbiology and Infectious Diseases 36 (3): 333–42.
Sólymos, Péter, Steven M Matsuoka, Erin M Bayne, Subhash R Lele, Patricia Fontaine, Steve G Cumming, Diana Stralberg, Fiona KA Schmiegelow, and Samantha J Song. 2013. “Calibrating Indices of Avian Density from Non-Standardized Survey Data: Making the Most of a Messy Situation.” Methods in Ecology and Evolution 4 (11): 1047–58.
Stanton, Jessica C, Brice X Semmens, Patrick C McKann, Tom Will, and Wayne E Thogmartin. 2016. “Flexible Risk Metrics for Identifying and Monitoring Conservation-Priority Species.” Ecological Indicators 61: 683–92.
Stephenson, PJ, and Carrie Stengel. 2020. “An Inventory of Biodiversity Data Sources for Conservation Monitoring.” PLoS One 15 (12): e0242923.
Temple, Stanley A, and John A Wiens. 1989. “Bird Populations and Environmental Changes: Can Birds Be Bio-Indicators?” American Birds 43 (2): 14.
Yip, Daniel A, Lionel Leston, Erin M Bayne, Péter Sólymos, and Alison Grover. 2017. “Experimentally Derived Detection Distances from Audio Recordings and Human Observers Enable Integrated Analysis of Point Count Data.” Avian Conservation and Ecology 12 (1): 11.
Zakaria, Mohamed, Puan Chong Leong, and Muhammad Ezhar Yusuf. 2005. “Comparison of Species Composition in Three Forest Types: Towards Using Bird as Indicator of Forest Ecosystem Health.” Journal of Biological Sciences 5 (6): 734–37.
Zhang, Jianting, Michael Gertz, and Le Gruenwald. 2009. “Efficiently Managing Large-Scale Raster Species Distribution Data in PostgreSQL.” In Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, 316–25.