Pulse Brain · Growing Health Evidence Index
Peer-reviewed

HydroBench: Jupyter supported reproducible hydrological model benchmarking and diagnostic tool

Edom Moges, Benjamin L. Ruddell, Liang Zhang, Jessica M. Driscoll, Parker A. Norton, Fernando Pérez, Laurel G. Larsen

Frontiers in Earth Science · 2022


Summary

Evaluating whether hydrological models are right for the right reasons demands reproducible model benchmarking and diagnostics that assess not only statistical predictive performance but also the models' internal processes. Such benchmarking and diagnostic efforts benefit from standardized methods and ready-to-use toolkits. Built on the Jupyter platform, this work presents HydroBench, a model-agnostic benchmarking tool consisting of three sets of metrics: 1) common statistical predictive measures, 2) hydrological signature-based process metrics, including a new time-linked flow duration curve, and 3) information-theoretic diagnostics that measure the flow of information among model variables. As a test case, HydroBench was applied to compare two model products (calibrated and uncalibrated) …
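To make the first two metric categories concrete, here is a minimal Python sketch of two standard hydrological evaluation tools: the Nash–Sutcliffe efficiency (a common statistical predictive measure) and a conventional flow duration curve (the signature that HydroBench extends with its time-linked variant). This is an illustrative sketch of the textbook formulations, not HydroBench's own implementation; the function names are hypothetical.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    1.0 is a perfect fit; values <= 0 mean the model is no better
    than predicting the observed mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def flow_duration_curve(flows):
    """Standard (time-unlinked) flow duration curve: each flow value
    paired with its exceedance probability, using the Weibull
    plotting position rank / (n + 1)."""
    flows = np.sort(np.asarray(flows, dtype=float))[::-1]  # descending
    ranks = np.arange(1, flows.size + 1)
    exceedance = ranks / (flows.size + 1)
    return exceedance, flows

# Example: compare simulated against observed daily flows.
obs = [1.2, 3.4, 2.8, 0.9, 4.1]
sim = [1.0, 3.1, 3.0, 1.1, 3.9]
score = nse(obs, sim)
exceedance, sorted_flows = flow_duration_curve(obs)
```

The standard FDC above discards timing information by sorting; HydroBench's time-linked variant, as described in the paper, additionally retains when each flow magnitude occurred, which is what makes it a process diagnostic rather than a purely statistical one.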

Source type
Peer-reviewed study
DOI
10.3389/feart.2022.884766
Catalogue ID
SNmokbvyot-zvfqlj