# Changelog

## peskas.zanzibar.data.pipeline 4.0.0

### Major Changes
- Fleet Activity Analysis Pipeline: Introduced a comprehensive pipeline for estimating and analyzing fishing fleet activity using GPS-tracked boats and boat registry data. This includes new functions for preparing boat registries, processing trip data, calculating monthly trip statistics, estimating fleet-wide activity, and calculating district-level total catch and revenue.
- New modeling and summarization functions:
  - `prepare_boat_registry()`: Summarizes boat registry data by district.
  - `process_trip_data()`: Processes trip data with district information and filters outliers.
  - `calculate_monthly_trip_stats()`: Computes monthly fishing activity statistics by district.
  - `estimate_fleet_activity()`: Scales sample-based trip statistics up to fleet-wide estimates.
  - `calculate_district_totals()`: Combines fleet activity and catch data into district-level totals.
  - `generate_fleet_analysis()`: Orchestrates the full analysis pipeline and uploads results.
  - `summarize_data()`: Generates and uploads summary datasets (monthly, taxa, district, gear, grid) for WorldFish survey data.
- Enhanced data export and integration:
  - `export_wf_data()`: Exports summarized WorldFish survey data and modeled estimates to MongoDB, including new geographic regional summaries.
  - `create_geos()`: Generates geospatial regional summaries and exports them as GeoJSON for spatial visualization.
- Expanded documentation: New and updated Rd files for all major new functions, with improved examples and cross-references.
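The scaling step behind `estimate_fleet_activity()` can be illustrated with a minimal base-R sketch. The data, column names, and the proportional scaling rule below are assumptions for illustration only, not the package's actual implementation.

```r
# Minimal sketch of scaling sample trip statistics to a fleet-wide estimate.
# Assumes trips are observed for a GPS-tracked sample of boats, while the
# registry gives the total number of boats per district (hypothetical data).
sample_stats <- data.frame(
  district      = c("Mjini", "Kaskazini A"),
  tracked_boats = c(25, 40),    # boats carrying GPS trackers
  mean_trips    = c(18.2, 12.5) # mean monthly trips per tracked boat
)
registry <- data.frame(
  district         = c("Mjini", "Kaskazini A"),
  registered_boats = c(310, 520) # total boats in the registry
)

fleet <- merge(sample_stats, registry, by = "district")
# Scale up: assume untracked boats behave like tracked ones on average
fleet$est_fleet_trips <- fleet$mean_trips * fleet$registered_boats
fleet[, c("district", "est_fleet_trips")]
```

District totals for catch and revenue would then follow by multiplying the estimated trips by per-trip catch and revenue rates.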
### Improvements
- Consistent time series and grouping: All summary tables (taxa, districts, gear) now include a monthly `date` column and are grouped by month, with missing months filled as `NA` for consistent time series exports.
- Parallel processing: Improved use of parallelization (via `future` and `furrr`) for validation and summarization steps, enhancing performance on large datasets.
- Data quality and validation:
  - Enhanced filtering and validation of survey data before summarization and export.
  - Improved handling of flagged/invalid submissions.
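The `future`/`furrr` pattern used for parallel steps looks roughly like this; `validate_one()` is a hypothetical stand-in for a real per-record check, not a function from the package.

```r
# Sketch of the future/furrr parallelization pattern for validation steps.
library(future)
library(furrr)

plan(multisession, workers = 2)  # spin up parallel worker processes

validate_one <- function(x) x > 0  # hypothetical stand-in for a real check

submissions <- list(3.2, -1.0, 7.5)
flags <- future_map_lgl(submissions, validate_one)

plan(sequential)  # release the workers
flags  # TRUE FALSE TRUE
```

`future_map_lgl()` keeps the same interface as `purrr::map_lgl()`, so the parallel and sequential code paths stay interchangeable.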
## peskas.zanzibar.data.pipeline 3.3.0

### Major Changes

- All summary tables (taxa, districts, gear) now include a monthly `date` column and are grouped by month. Missing months are filled as `NA` for consistent time series exports.
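Filling missing months amounts to joining each table against a complete monthly sequence; a base-R sketch with hypothetical column names (the pipeline itself may use a different mechanism, such as tidyr's `complete()`).

```r
# Sketch: expand a summary table to one row per month, gaps becoming NA.
monthly <- data.frame(
  date  = as.Date(c("2024-01-01", "2024-04-01")),
  catch = c(120, 95)
)

# Full sequence of months spanning the observed range
all_months <- data.frame(
  date = seq(min(monthly$date), max(monthly$date), by = "month")
)

# Left join keeps every month; unmatched months get NA catch values
filled <- merge(all_months, monthly, by = "date", all.x = TRUE)
filled
```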
## peskas.zanzibar.data.pipeline 3.1.0

### New Features

- Added `create_geos()` function to generate geospatial regional summaries of fishery data.
- Added support for GPS track data visualization through new grid-based analytics.
- Added `generate_track_summaries()` function to process GPS tracks into 1 km grid cells.
## peskas.zanzibar.data.pipeline 2.6.0
## peskas.zanzibar.data.pipeline 2.5.0

### Major Changes
- Enhanced validation workflow with KoboToolbox integration:
  - Added `update_validation_status()` function to update submission status via the API.
  - Added `sync_validation_submissions()` for parallel processing of validation flags.
  - Updated the Kobo URL endpoint from kf.kobotoolbox.org to eu.kobotoolbox.org.
### New Features
- Implemented parallel processing for validation operations using the `future`/`furrr` packages
- Added progress reporting during validation operations via the `progressr` package
- Enhanced validation status synchronization between the local system and KoboToolbox
### Improvements
- Updated data preprocessing to handle flying fish estimates and taxa corrections (TUN→TUS, SKH→CVX)
- Updated export workflow to use validation status instead of flags for data filtering
- Added taxa information to catch export data
- Added Zanzibar SSF report template with visualization examples
- Improved package documentation structure with better categorization
## peskas.zanzibar.data.pipeline 2.4.0

### Major Changes
- Implemented support for multiple survey data sources:
  - Refactored `get_validated_surveys()` to handle WCS, WF, and BA sources.
  - Added a source parameter to specify which datasets to retrieve.
  - Improved handling of data sources with different column structures.
### New Features
- Added `export_wf_data()` function for WorldFish-specific data export.
- Enhanced validation with additional composite metrics:
  - Price-per-kg validation
  - CPUE (catch per unit effort) validation
  - RPUE (revenue per unit effort) validation
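The composite metrics are simple ratios over each trip record; a base-R sketch with hypothetical column names (the package's internal names may differ).

```r
# Sketch of the composite validation metrics on hypothetical trip records.
trips <- data.frame(
  catch_kg     = c(40, 10),
  catch_price  = c(200, 90), # total revenue for the catch
  fisher_hours = c(8, 5)     # effort, e.g. fishers x hours
)

trips$price_per_kg <- trips$catch_price / trips$catch_kg     # price check
trips$cpue <- trips$catch_kg / trips$fisher_hours            # catch per unit effort
trips$rpue <- trips$catch_price / trips$fisher_hours         # revenue per unit effort
trips
```

Each derived column can then be screened for outliers the same way as the raw weights and prices.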
### Improvements
- Added a `min_length` parameter for better length validation thresholds
- Updated LW coefficient filtering logic in `model-taxa.R`
- Enhanced alert flag handling with combined flags from different validation steps
- Improved catch price and catch weight handling for zero-catch outcomes
- Enhanced data preprocessing with better field type conversion
## peskas.zanzibar.data.pipeline 2.3.0

### Major Changes
- Enhanced KoboToolbox integration:
  - Implemented new validation status retrieval from the KoboToolbox API
  - Updated the validation workflow to incorporate submission validation status
  - Improved the data validation process through direct API integration
### New Features
- New KoboToolbox interaction functions:
  - `get_validation_status()`: Retrieves submission validation status from the KoboToolbox API.
## peskas.zanzibar.data.pipeline 2.2.0

### Major Changes
- Completely restructured taxonomic data processing:
  - Introduced new modular functions for taxa handling in `model-taxa.R`
  - Added efficient batch processing for species matching
  - Implemented an optimized FAO area retrieval system
  - Streamlined length-weight coefficient calculations
  - Enhanced integration with FishBase and SeaLifeBase
### New Features
- New taxonomic processing functions:
  - `load_taxa_databases()`: Unified database loading from FishBase and SeaLifeBase.
  - `process_species_list()`: Enhanced species list processing with taxonomic ranks.
  - `match_species_from_taxa()`: Improved species matching across databases.
  - `get_species_areas_batch()`: Efficient FAO area retrieval.
  - `get_length_weight_batch()`: Optimized length-weight parameter retrieval.
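The batching idea behind the `_batch()` helpers is to group species into chunks and issue one query per chunk instead of one per species, which is what reduces the number of external API calls. A base-R sketch, where `fetch_batch()` is a hypothetical stand-in for a real FishBase/SeaLifeBase query.

```r
# Sketch: query an external database in batches to reduce API calls.
species <- paste("Species", 1:7)

# Hypothetical stand-in for a real batched database query; returns one
# row of (dummy) length-weight parameters per species in the batch.
fetch_batch <- function(batch) data.frame(species = batch, a = 0.01, b = 3)

batch_size <- 3
batches <- split(species, ceiling(seq_along(species) / batch_size))

results <- do.call(rbind, lapply(batches, fetch_batch))  # 3 calls, not 7
nrow(results)
```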
### Improvements
- Enhanced performance through batch processing
- Reduced API calls to external databases
- Better error handling and input validation
- More comprehensive documentation
- Improved code organization and modularity
## peskas.zanzibar.data.pipeline 2.1.0

### Major Changes
- Enhanced taxonomic and catch data processing capabilities:
  - Added comprehensive functions for species and catch data processing
  - Implemented length-weight coefficient retrieval from FishBase and SeaLifeBase
  - Created functions for calculating catch weights using multiple methods
  - Added new data reshaping utilities for species and catch information
- Extended WorldFish (WF) survey validation with detailed quality checks
- Updated cloud storage and data download/upload functions
## peskas.zanzibar.data.pipeline 2.0.0

### Major Changes
- Complete overhaul of the data pipeline architecture
- Added PDS (Pelagic Data Systems) integration:
  - New trip ingestion and preprocessing functionality
  - GPS track data processing capabilities
- Implemented MongoDB export and storage functions
- Removed renv dependency management for improved reliability
- Updated Docker configuration for more robust builds
## peskas.zanzibar.data.pipeline 1.0.0

## peskas.zanzibar.data.pipeline 0.2.0
### New Features
- Added the validation step and updated the preprocessing step for WCS Kobo survey data; see the `preprocess_wcs_surveys()` and `validate_wcs_surveys()` functions. Validation of catch weight, length, and market values currently uses the median absolute deviation (MAD) method, leveraging the `k` parameter of the `univOutl::LocScaleB()` function. To spot outliers accurately, validation is performed by gear type and species.
- **N.B. Validation parameters are not yet tuned.**
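The MAD rule can be sketched in base R. This mirrors the logic rather than calling `univOutl::LocScaleB()` itself, and `k` plays the same role as the `k` parameter mentioned above; the data are hypothetical.

```r
# Sketch of MAD-based outlier flagging, applied per gear-type/species group.
# Values outside median +/- k * MAD are flagged; k is a tuning parameter.
flag_outliers <- function(x, k = 3) {
  m <- stats::median(x, na.rm = TRUE)
  s <- stats::mad(x, na.rm = TRUE)  # MAD scaled for consistency with the SD
  abs(x - m) > k * s
}

weights <- c(1.1, 0.9, 1.0, 1.2, 14.0)  # hypothetical catch weights (kg)
flag_outliers(weights)  # FALSE FALSE FALSE FALSE TRUE
```

Because the median and MAD are robust, a single extreme value does not inflate the bounds the way a mean and standard deviation would.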
## peskas.zanzibar.data.pipeline 0.1.0

- Dropped parent repository code (peskas.timor.pipeline); added infrastructure to download WCS survey data and upload it to cloud storage providers.