Characterizing climate change impacts on water resources typically relies on Global Climate Model (GCM) outputs that are bias-corrected using observational datasets. In this process, two pivotal decisions are (i) the Bias Correction Method (BCM) and (ii) how to handle the historically observed time series, which can be used as a continuous whole (i.e., without dividing it into sub-periods), or partitioned into monthly, seasonal (e.g., three-month blocks), or any other temporal stratification (TS). Here, we examine how the interplay among the choice of BCM, TS, and the raw GCM seasonality may affect historical portrayals and projected changes. To this end, we use outputs from 29 GCMs from CMIP6 under the Shared Socioeconomic Pathway 5–8.5 scenario, using seven BCMs and three TSs (entire period, seasonal, and monthly). The results show that the effectiveness of BCMs in removing biases can vary depending on the TS and climate indices analyzed. Further, the choice of BCM and TS may yield different projected change signals and seasonality (especially for precipitation), even for climate models with low bias and a reasonable representation of precipitation seasonality during a reference period. Because some BCMs may be computationally expensive, we recommend using the linear scaling method as a diagnostic tool to assess how the choice of TS may affect the projected precipitation seasonality of a specific GCM. More generally, the results presented here unveil trade-offs in the way BCMs are applied, regardless of the climate regime, urging the hydroclimate community to implement these techniques with care.
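
For illustration, here is a minimal sketch of the linear-scaling diagnostic recommended above, applied under the three temporal stratifications. The multiplicative formulation, the DJF/MAM/JJA/SON season definition, and all variable names are assumptions for demonstration, not the exact procedure used in the study.

```python
import numpy as np
import pandas as pd

def _strata(index, ts):
    """Map each timestamp to a stratum key: whole period, season (DJF/MAM/JJA/SON), or month."""
    month = np.asarray(index.month)
    if ts == "entire":
        return np.zeros(len(index), dtype=int)
    if ts == "seasonal":
        return month % 12 // 3                 # 0=DJF, 1=MAM, 2=JJA, 3=SON
    return month                               # monthly

def linear_scaling_factors(obs, gcm_hist, ts="monthly"):
    """Multiplicative linear-scaling factors (observed mean / modelled mean) per stratum."""
    return obs.groupby(_strata(obs.index, ts)).mean() / gcm_hist.groupby(_strata(gcm_hist.index, ts)).mean()

def apply_linear_scaling(gcm_series, factors, ts="monthly"):
    """Scale a raw GCM precipitation series by the per-stratum factors."""
    return gcm_series * factors.reindex(_strata(gcm_series.index, ts)).to_numpy()

# Tiny synthetic demonstration (random data standing in for station and GCM series)
rng = np.random.default_rng(0)
idx = pd.date_range("1981-01-01", "2010-12-31", freq="D")
obs = pd.Series(rng.gamma(0.8, 2.0, len(idx)), index=idx)
gcm_hist = pd.Series(rng.gamma(0.6, 2.0, len(idx)), index=idx)
for ts in ("entire", "seasonal", "monthly"):
    corrected = apply_linear_scaling(gcm_hist, linear_scaling_factors(obs, gcm_hist, ts), ts)
    print(ts, round(corrected.mean(), 3))      # means match the observations, but seasonality differs by TS
```

Applying the same comparison to a GCM's future series reveals how strongly the projected precipitation seasonality depends on the chosen TS before committing to a more expensive BCM.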

Diogo Costa

and 5 more

This work advances the incorporation and cross-model deployment of multi-biogeochemistry and ecological simulations in existing process-based hydro-modelling tools. It aims to transform the current practice of water quality modelling from an isolated research effort into a more integrated and collaborative activity across science communities. Our approach, which we call “Open Water Quality” (OpenWQ), enables existing hydrological, hydrodynamic, and groundwater models to extend their capabilities to water quality simulations, which can be set up to examine a variety of water-related pollution problems. OpenWQ’s objective is to provide a flexible biogeochemical model representation that can be used to test different modelling hypotheses in a multi-disciplinary co-creative process. In this paper, we introduce the general approach used in OpenWQ. We detail aspects of its architecture that enable its coupling with existing models. This integration enables water quality models to benefit from advances made by hydrologic- and hydrodynamic-focused groups, strengthening collaboration among the hydrological, biogeochemistry, and soil science communities. We also detail innovative aspects of OpenWQ’s modules that enable biogeochemistry lab-like capabilities, where modellers can define the pollution problem(s) of interest, choose the appropriate complexity of the biogeochemistry routines, and test different modelling hypotheses. In a companion paper, we demonstrate how OpenWQ has been coupled to two hydrological models, the “Structure for Unifying Multiple Modelling Alternatives” (SUMMA) and the “Cold Regions Hydrological Model” (CRHM), showing the innovative aspects of OpenWQ, the flexibility of its couplers and internal spatiotemporal data structures, and the versatile eco-modelling lab capabilities that can be used to study different pollution problems.
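
To make the coupling idea concrete, the sketch below shows a deliberately simplified host-model time loop that delegates solute transport and reactions to an external water-quality engine. All class, method, and variable names are hypothetical illustrations and do not represent OpenWQ's actual C++ coupler interface.

```python
# Hypothetical, simplified coupling sketch; not OpenWQ's real API.
class WQEngine:
    """Tracks the mass of one solute in each host-model compartment."""
    def __init__(self, compartments):
        self.mass = {c: 0.0 for c in compartments}           # kg of solute

    def advance(self, dt, fluxes, decay_rate=1e-6):
        # Advective exchange driven by the host model's water fluxes (m3/s).
        for (src, dst), q in fluxes.items():
            transfer = min(self.mass[src], q * dt * 1e-3)    # toy mixing rule
            self.mass[src] -= transfer
            self.mass[dst] += transfer
        # First-order decay stands in for a user-configurable biogeochemistry module.
        for c in self.mass:
            self.mass[c] *= (1.0 - decay_rate * dt)

# The host hydrological model computes the water balance; the water-quality
# engine piggybacks on its fluxes at every time step.
wq = WQEngine(["soil", "groundwater", "stream"])
wq.mass["soil"] = 10.0                                       # initial nutrient load, kg
for step in range(24):
    fluxes = {("soil", "stream"): 0.5, ("groundwater", "stream"): 0.2}   # from the host model
    wq.advance(dt=3600.0, fluxes=fluxes)
print(wq.mass)
```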

S. Gharari

and 7 more

Lakes and reservoirs are an integral part of the terrestrial water cycle. In this work, we present the implementation of water balance models of lakes and reservoirs into mizuRoute, a vector-based routing model. The developments described here are termed mizuRoute-Lake. The capabilities of mizuRoute-Lake in simulating the water balance of lakes and reservoirs are demonstrated. The main advantage of mizuRoute-Lake is flexibility in testing alternative lake water balance models within a given river and lake network topology. Users can choose between various types of parametric models that are already implemented in mizuRoute-Lake or data-driven approaches in which time series of target volume and abstraction for a lake or reservoir are provided from an external source, such as historical observations or water management models. The parametric models for lake and reservoir water balance implemented in mizuRoute-Lake are the Hanasaki, HYPE, and Döll formulations. In general, the parametric models relate the outflow from a lake or reservoir to its storage and to other quantities such as inflow and demand. This flexibility also makes it easy to evaluate and compare the effect of different water balance models for a lake or reservoir without reconfiguring the routing model. We show the flexibility of mizuRoute-Lake by presenting global-, regional-, and local-scale applications. The development of mizuRoute-Lake paves the way for better integration of water management models with existing and future observations, such as those from the Surface Water and Ocean Topography (SWOT) mission, in the context of Earth system modeling.
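
As a rough illustration of the parametric formulations mentioned above, the sketch below uses a Döll-type relation in which outflow grows nonlinearly with active storage. The release coefficient, exponent, and function names are illustrative assumptions rather than mizuRoute-Lake defaults.

```python
def doell_outflow(storage, max_storage, release_coeff=0.01, exponent=1.5):
    """Outflow [m3/day] as a nonlinear function of active storage [m3] (Döll-type relation)."""
    return release_coeff * storage * (storage / max_storage) ** exponent

def step_lake(storage, inflow, max_storage, dt_days=1.0):
    """Explicit daily water-balance update: dS/dt = inflow - outflow (both in m3/day)."""
    outflow = min(doell_outflow(storage, max_storage), storage / dt_days + inflow)  # cannot release more than available
    storage = storage + (inflow - outflow) * dt_days
    return storage, outflow

# Example: a lake with 1 km3 capacity receiving a steady 100 m3/s inflow for one year
S, S_max = 2.0e8, 1.0e9
for day in range(365):
    S, Q = step_lake(S, inflow=100.0 * 86400.0, max_storage=S_max)
print(f"storage fraction: {S / S_max:.2f}, outflow: {Q / 86400.0:.1f} m3/s")
```
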
Despite the proliferation of computer-based research on hydrology and water resources, such research is typically poorly reproducible. Published studies have low reproducibility due to incomplete availability of data and computer code, and a lack of documentation of workflow processes. This leads to a lack of transparency and efficiency because existing code can neither be quality-controlled nor re-used. Given the commonalities between existing process-based hydrological models in terms of their required input data and preprocessing steps, open sharing of code can lead to large efficiency gains for the modeling community. Here we present a model configuration workflow that provides full reproducibility of the resulting model instantiations in a way that separates the model-agnostic preprocessing of specific datasets from the model-specific requirements that models impose on their input files. We use this workflow to create large-domain (global, continental) and local configurations of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model connected to the mizuRoute routing model. These examples show how a relatively complex model setup over a large domain can be organized in a reproducible and structured way that has the potential to accelerate advances in hydrologic modeling for the community as a whole. We provide a tentative blueprint of how community modeling initiatives can be built on top of workflows such as this. We term our workflow the “Community Workflows to Advance Reproducibility in Hydrologic Modeling” (CWARHM; pronounced “swarm”).
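
The sketch below illustrates the separation the workflow aims for: a model-agnostic preprocessing step run once per domain, followed by model-specific steps that translate the result into each model's input files. Function names, paths, and file contents are hypothetical placeholders rather than CWARHM's actual scripts.

```python
from pathlib import Path

def preprocess_forcing(raw_dir: Path, domain_dir: Path) -> Path:
    """Model-agnostic step: subset, remap, and unit-convert a forcing dataset onto the domain."""
    domain_dir.mkdir(parents=True, exist_ok=True)
    # ... subset raw_dir to the domain, remap to catchments, convert units ...
    return domain_dir / "forcing_remapped.nc"

def write_summa_inputs(forcing_file: Path, settings_dir: Path) -> None:
    """Model-specific step: rename variables and write the settings files SUMMA expects."""
    settings_dir.mkdir(parents=True, exist_ok=True)
    # ... write forcing file list, attribute and decision files ...

def write_mizuroute_inputs(forcing_file: Path, settings_dir: Path) -> None:
    """Model-specific step for the routing model: network topology and control file."""
    settings_dir.mkdir(parents=True, exist_ok=True)
    # ... write topology and control files ...

# The same preprocessed forcing feeds every model-specific step, so the expensive
# dataset handling is done once and each model only adds its own requirements.
forcing = preprocess_forcing(Path("raw/era5"), Path("domains/example_basin"))
write_summa_inputs(forcing, Path("domains/example_basin/settings/summa"))
write_mizuroute_inputs(forcing, Path("domains/example_basin/settings/mizuroute"))
```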

Andrew J Newman

and 3 more

Alaska and the Yukon are a challenging area to develop observationally based spatial estimates of meteorology. Complex topography, frozen precipitation undercatch, and extremely sparse observations all limit our capability to accurately estimate historical conditions. In this environment it is useful to develop probabilistic estimates of precipitation and temperature that explicitly incorporate spatiotemporally varying uncertainty and bias corrections. In this paper we exploit recently developed ensemble Climatologically Aided Interpolation (eCAI) systems to produce daily historical ensemble estimates of precipitation and temperature across Alaska and the Yukon territory at a 2 km grid spacing for the period 1980–2013. We extend the previous eCAI method to include an ensemble correction methodology to address precipitation gauge undercatch and wetting loss, which is of high importance for this region. Leave-one-out cross-validation shows our ensemble has little bias in daily precipitation and mean temperature at the station locations, with an overestimate in the daily standard deviation of precipitation. The ensemble has skillful reliability compared to climatology and significant discrimination of events across different precipitation thresholds. Comparing the ensemble mean climatology of precipitation and temperature to PRISM and Daymet v3 shows large inter-product differences, particularly in precipitation across the complex terrain of SE and northern Alaska. Finally, long-term mean loss-adjusted precipitation is up to 36% greater than the unadjusted estimate in windy areas that receive a large fraction of frozen precipitation.
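
The sketch below illustrates the general idea of a wind-dependent undercatch and wetting-loss adjustment. The catch-efficiency function, its coefficients, and the phase threshold are assumptions chosen for demonstration and are not the transfer functions or ensemble methodology used in this work.

```python
import numpy as np

def catch_efficiency(wind_speed, air_temp):
    """Fraction of true precipitation caught by the gauge (illustrative functional form)."""
    wind_speed = np.minimum(wind_speed, 8.0)          # cap gauge-height wind speed, m/s
    solid = air_temp < -2.0                           # crude precipitation-phase threshold, deg C
    ce = np.where(solid,
                  np.exp(-0.20 * wind_speed),         # snow: strong wind-induced undercatch
                  1.0 - 0.02 * wind_speed)            # rain: weak wind effect
    return np.clip(ce, 0.3, 1.0)

def adjust_precip(gauge_precip, wind_speed, air_temp, wetting_loss=0.1):
    """Loss-adjusted precipitation: add a fixed wetting loss per wet day, divide by catch efficiency."""
    wet = gauge_precip > 0.0
    adjusted = (gauge_precip + wetting_loss) / catch_efficiency(wind_speed, air_temp)
    return np.where(wet, adjusted, 0.0)

# A windy, cold day with 5 mm recorded is adjusted upward substantially
print(adjust_precip(np.array([5.0]), wind_speed=np.array([6.0]), air_temp=np.array([-10.0])))
```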

Lina Stein

and 4 more

Hydroclimatic flood generating processes, such as excess rain, short rain, long rain, snowmelt and rain-on-snow, underpin our understanding of flood behaviour. Knowledge about flood generating processes helps to improve modelling decisions, flood frequency analysis, estimation of climate change impacts on floods, etc. Yet, not much is known about how climate and catchment attributes influence the distribution of flood generating processes. With this study we aim to offer a comprehensive and structured approach to close this knowledge gap. We employ a large-sample approach (671 catchments in the conterminous United States) and test attribute influence on flood processes with two complementary approaches: firstly, a data-based approach that compares attribute probability distributions of different flood processes, and secondly, a random forest model in combination with an interpretable machine learning approach (accumulated local effects). This machine learning technique is new to hydrology, and it overcomes a significant obstacle of many statistical methods: the confounding effect of correlated catchment attributes. As expected, we find climate attributes (fraction of snow, aridity, precipitation seasonality and mean precipitation) to be most influential on flood process distribution. However, attribute influence varies both with process and climate type. We also find that flood processes can be predicted for ungauged catchments with relatively high accuracy (R2 between 0.45 and 0.9). The implication of these findings is that flood processes should be taken into account in future climate change impact studies, as impacts will vary between processes.
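
As a simplified illustration of the modelling approach, the sketch below fits a random forest to synthetic catchment attributes and computes a first-order accumulated-local-effects (ALE) curve for one attribute. The synthetic data, attribute names, and bare-bones ALE routine are stand-ins for the actual dataset and software used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0.0, 1.0, n),     # fraction of precipitation falling as snow
    rng.uniform(0.2, 3.0, n),     # aridity index
    rng.uniform(0.0, 1.0, n),     # precipitation seasonality (toy scale)
])
y = 0.8 * X[:, 0] - 0.1 * X[:, 1] + 0.05 * rng.normal(size=n)   # synthetic snowmelt-flood fraction

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def ale_1d(model, X, feature, n_bins=20):
    """First-order ALE curve for one feature (quantile bins, centred)."""
    edges = np.quantile(X[:, feature], np.linspace(0.0, 1.0, n_bins + 1))
    effects = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (X[:, feature] >= lo) & (X[:, feature] <= hi)
        if not in_bin.any():
            effects.append(0.0)
            continue
        X_lo, X_hi = X[in_bin].copy(), X[in_bin].copy()
        X_lo[:, feature], X_hi[:, feature] = lo, hi
        effects.append(np.mean(model.predict(X_hi) - model.predict(X_lo)))   # local effect in this bin
    ale = np.cumsum(effects)
    return edges[1:], ale - ale.mean()

_, ale_snow = ale_1d(model, X, feature=0)
print(ale_snow)   # roughly linear and increasing: more snow -> larger snowmelt-flood share
```

Because ALE accumulates effects from locally perturbed, observed data points rather than averaging over the full attribute space, it remains interpretable even when catchment attributes are strongly correlated.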

Raymond Spiteri

and 3 more

The next generation of Earth System models promises unprecedented predictive power through the application of improved physical representations, data collection, and high-performance computing. A key component of the accuracy, efficiency, and robustness of Earth System simulations is the time integration of the differential equations describing the physical processes. Many existing Earth System models are simulated using low-order, constant-stepsize time-integration methods with no error control, leaving them prone to inaccuracy, inefficiency, or an infeasible amount of manual tweaking when run over multiple heterogeneous domains or scales. We have implemented the variable-stepsize, variable-order differential equation solver SUNDIALS as the time integrator within the Structure for Unifying Multiple Modelling Alternatives (SUMMA) model framework. The model equations in SUMMA were modified and augmented to express conservation of mass and enthalpy. Water and energy balance errors were tracked and kept below a strict tolerance. The resulting SUMMA-SUNDIALS software was successfully run in a fully automated fashion to simulate hydrological processes on the North American continent, sub-divided into over 500,000 catchments. We compared the performance of SUMMA-SUNDIALS with a version (called SUMMA-BE) that used the backward Euler method with a fixed stepsize as the time-integration method. We find that SUMMA-BE required two orders of magnitude more CPU time to produce solutions of comparable accuracy to SUMMA-SUNDIALS. Solutions obtained with SUMMA-BE in a similar or shorter amount of CPU time than SUMMA-SUNDIALS often contained large discrepancies. We conclude that sufficient accuracy, efficiency, and robustness of next-generation Earth System model simulations can realistically only be obtained through the use of adaptive solvers. Furthermore, we suggest that simulations produced with low-order, constant-stepsize solvers deserve more scrutiny in terms of their accuracy.
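
The trade-off can be illustrated on a small stiff test problem, with SciPy's adaptive, error-controlled BDF solver standing in for SUNDIALS and a hand-rolled fixed-stepsize backward Euler scheme standing in for SUMMA-BE. The test equation, tolerances, and stepsizes are arbitrary choices for demonstration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k=1000.0):
    """Stiff linear relaxation towards cos(t)."""
    return -k * (y - np.cos(t))

def backward_euler(t0, t1, y0, dt, k=1000.0):
    """Fixed-stepsize backward Euler; the linear problem allows a closed-form implicit update."""
    t, y = t0, y0
    n_steps = 0
    while t < t1 - 1e-12:
        t += dt
        y = (y + dt * k * np.cos(t)) / (1.0 + dt * k)   # solve the implicit equation exactly
        n_steps += 1
    return y, n_steps

# Adaptive, variable-order implicit solver: few steps, accuracy set by tolerances
sol = solve_ivp(rhs, (0.0, 5.0), [1.0], method="BDF", rtol=1e-6, atol=1e-9)

# Fixed-stepsize backward Euler needs a very small step to reach comparable accuracy
y_be, n_be = backward_euler(0.0, 5.0, 1.0, dt=1e-4)

print("adaptive steps:", sol.t.size, " fixed steps:", n_be)
print("difference at t=5:", abs(sol.y[0, -1] - y_be))
```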

Razi Sheikholeslami

and 3 more

Global Sensitivity Analysis (GSA) has long been recognized as an indispensable tool for model analysis. GSA has been extensively used for model simplification, identifiability analysis, and diagnostic tests, among others. Nevertheless, computationally efficient methodologies are sorely needed for GSA, not only to reduce the computational overhead, but also to improve the quality and robustness of the results. This is especially the case for process-based hydrologic models, whose simulation times are often too long to permit a comprehensive GSA within the available computational budget. We overcome this computational barrier by developing an efficient variance-based sensitivity analysis using copulas. Our data-driven method, called VISCOUS, approximates the joint probability density function of the given set of input-output pairs using a Gaussian mixture copula to provide a given-data estimation of the sensitivity indices. This enables our method to identify dominant hydrologic factors by recycling a pre-computed set of model evaluations or existing input-output data, thus avoiding additional computational cost. We used two hydrologic models of increasing complexity (HBV and VIC) to assess the performance of the proposed method. Our results confirm that VISCOUS and the original variance-based method detect similar important and unimportant factors. However, our method can substantially reduce the computational cost while remaining robust. The results here are particularly significant for, though not limited to, process-based models with many uncertain parameters, large domain size, and high spatial and temporal resolution.
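
As a conceptual illustration of given-data sensitivity estimation, the sketch below recovers first-order indices from an existing set of input-output pairs using a simple binning estimator. This replaces the Gaussian mixture copula at the core of VISCOUS with a much cruder stand-in, and the analytic toy function stands in for a hydrologic model such as HBV or VIC.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
X = rng.uniform(size=(n, 3))                                   # pre-computed parameter samples
Y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=n)

def first_order_given_data(X, Y, n_bins=30):
    """S_i ~ Var_bin( E[Y | X_i in bin] ) / Var(Y), estimated from existing samples only."""
    var_y = Y.var()
    indices = []
    for i in range(X.shape[1]):
        edges = np.quantile(X[:, i], np.linspace(0.0, 1.0, n_bins + 1))
        bin_id = np.clip(np.searchsorted(edges, X[:, i], side="right") - 1, 0, n_bins - 1)
        cond_means = np.array([Y[bin_id == b].mean() for b in range(n_bins)])
        counts = np.array([(bin_id == b).sum() for b in range(n_bins)])
        indices.append(np.average((cond_means - Y.mean()) ** 2, weights=counts) / var_y)
    return np.array(indices)

print(first_order_given_data(X, Y))   # first factor dominant, second moderate, third near zero
```

No new model runs are needed: any archive of previous simulations can be recycled into sensitivity estimates, which is the key to the computational savings described above.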