Flooding is a frequent disaster with a widespread global footprint and significant financial and societal impacts. With Earth observation data available from private and public entities at varying spatial, temporal, and spectral resolutions, as well as data from crowdsourcing, there is no shortage of models; in fact, models and algorithms are abundant and proliferating. However, the question remains: where is a global flood model when we need one? The availability of models does not mean they are usable, accessible, or adequate for emergency managers, first responders, and other stakeholders who use model outputs for preparedness, response, and resource planning. Often the usability problem stems from the fact that the models are not reproducible or replicable. The accuracy and uncertainty associated with the models, and how these change with the scale of analysis and the resolution of input and output datasets, are often not communicated to stakeholders clearly enough to inform their decision-making. The proliferation of machine learning and data-driven models that rely on historical data adds to this problem. This paper discusses several important issues associated with global flood models and provides recommendations that could increase their usability.
Smart drainage management to limit summer drought damage in Nordic agriculture under the circular economy concept
Syed Md Touhidul Mustafa 1,*, Kedar Ghag 1, Anandharuban Panchanathan 2, Bishal Dahal 1, Amirhossein Ahrari 1, Toni Liedes 3, Hannu Marttila 1, Tamara Avellán 1, Mourad Oussalah 2, Björn Klöve 1, & Ali Torabi Haghighi 1
1 Water, Energy and Environmental Engineering Research Unit, University of Oulu, P.O. Box 4300, FIN-90014, Oulu, Finland
2 Center for Machine Vision and Signal Analysis, University of Oulu, Finland
3 Intelligent Machines and Systems, University of Oulu, P.O. Box 4200, 90014 Oulu, Finland
Process understanding of the interactions between streamflow, groundwater, and water use under drought is hampered by the limited number of gauging stations, especially in tributaries. Recent technological advances facilitate the use of non-commercial measurement devices for monitoring environmental systems. The Dreisam River in south-west Germany was affected by several hydrological drought events from 2015 to 2020, when parts of the main stream and tributaries fell dry. A flexible longitudinal water quality and quantity monitoring network was set up in 2018; among other measurements, it employs an image-based method with QR codes as fiducial markers. To assess under which conditions the QR-code-based water level loggers (WLL) deliver data meeting scientific standards, we present a comparison with conventional capacitance-based WLL. The results from 20 monitoring stations reveal that the riverbed was dry for >50% of the time at several locations, and for >70% at the most severely affected locations, during July and August 2020, with the north-western parts of the catchment especially affected. The highly variable longitudinal drying patterns of the stream reaches could thus be monitored. The image-based method proved a valuable asset for identifying confounding factors and validating zero-level occurrences. Nevertheless, a simple image processing approach (based on an automatic thresholding algorithm) did not compensate for errors due to natural conditions and the technical setup. Our findings highlight that the complexity of measurement environments is a major challenge when working with image-based methods.
We examined changes in catchment-scale annual and seasonal evapotranspiration after 50% strip thinning, using runoff data from headwater catchments. The short-term water balance (STWB) method, applied over periods of 8 to 100 days, was used for the treated (KT) and control (KC) catchments. The estimated evapotranspiration during the pre- and post-thinning periods was 840 and 910 mm/year and 780 and 860 mm/year in catchments KT and KC, respectively. According to a paired-catchment analysis of the estimated evapotranspiration, monthly evapotranspiration increased by 3 to 20 mm from June to December, while it decreased by 7 to 31 mm from January to May. The estimated annual and monthly evapotranspiration was compatible with the values monitored at the plot scale for interception, canopy transpiration, and ground surface evapotranspiration. Our findings showed that the decreases in evapotranspiration due to 50% thinning were similar to, or different from, those reported for thinning in other catchments around the world, depending on the measurement method. The STWB model can evaluate the effects of timber harvesting on changes in evapotranspiration (ET), including reproducing seasonal patterns of ET.
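The core of the short-term water balance idea above can be sketched in a few lines: over a period bounded by days with similar storage conditions, catchment ET is approximated as precipitation minus runoff, assuming the storage change is negligible. The daily values below are invented for illustration, not measured data from KT or KC.

```python
def stwb_et(precip_mm, runoff_mm):
    """Estimate period ET (mm) as sum(P) - sum(Q), assuming dS ~ 0."""
    if len(precip_mm) != len(runoff_mm):
        raise ValueError("series must have equal length")
    return sum(precip_mm) - sum(runoff_mm)

# Hypothetical 10-day period (mm/day):
p = [12.0, 0.0, 3.5, 0.0, 0.0, 8.0, 1.0, 0.0, 0.0, 2.5]
q = [4.0, 2.5, 2.0, 1.5, 1.2, 3.0, 2.0, 1.5, 1.2, 1.0]

print(f"Period ET estimate: {stwb_et(p, q):.1f} mm")
```

The choice of period endpoints (8 to 100 days in the study) matters precisely because the negligible-storage-change assumption only holds between hydrologically similar days.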
Accurate simulation of plant water use across agricultural ecosystems is essential for various applications, including precision agriculture, quantifying groundwater recharge, and optimizing irrigation rates. Previous approaches to integrating plant water use data into hydrologic models have relied on evapotranspiration (ET) observations. Recently, the flux variance similarity approach has been developed to partition ET into transpiration (T) and evaporation, providing an opportunity to use T data to parameterize models. To explore the value of T/ET data in improving hydrologic model performance, we examined multiple approaches to incorporating these observations for vegetation parameterization. We used ET observations from 5 eddy covariance towers in the San Joaquin Valley, California, to parameterize orchard crops in an integrated land surface–groundwater model. We find that a simple approach of selecting the best parameter sets based on ET and T performance metrics works best at these study sites. Selecting parameters based on performance relative to observed ET creates an uncertainty of 27% relative to the observed value; when parameters are selected using both T and ET data, this uncertainty drops to 24%. Similarly, the uncertainty in potential groundwater recharge drops from 63% to 58% when parameters are selected with ET alone or with T and ET data, respectively. Additionally, using crop-type parameters results in similar levels of simulated ET as using site-specific parameters. Different irrigation schemes introduce substantial uncertainty and highlight the need for accurate estimates of irrigation when performing water budget studies.
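The "simple approach" of screening parameter sets against ET and T metrics can be illustrated with a minimal sketch: score each candidate set by its RMSE against observed ET, optionally adding an RMSE term for partitioned T, and keep the best scorer. All names and numbers below are hypothetical, and the equal weighting of the ET and T terms is an assumption, not the study's stated scheme.

```python
import math

def rmse(sim, obs):
    """Root mean square error between simulated and observed series."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def select_best(candidates, obs_et, obs_t=None):
    """candidates: {name: (sim_et, sim_t)}; lowest combined RMSE wins."""
    def score(sims):
        sim_et, sim_t = sims
        s = rmse(sim_et, obs_et)
        if obs_t is not None:
            s += rmse(sim_t, obs_t)  # equal weight on T: an assumption
        return s
    return min(candidates, key=lambda k: score(candidates[k]))

obs_et = [3.0, 4.2, 5.1]   # hypothetical daily ET (mm)
obs_t = [2.1, 3.0, 3.8]    # hypothetical partitioned T (mm)
cands = {
    "set_a": ([3.1, 4.0, 5.3], [1.2, 1.8, 2.2]),  # good ET fit, poor T
    "set_b": ([3.3, 4.5, 4.8], [2.0, 3.1, 3.7]),  # decent ET, good T
}
print(select_best(cands, obs_et))          # selection with ET only
print(select_best(cands, obs_et, obs_t))   # selection with ET and T
```

The toy example shows why T data changes the outcome: a parameter set can match total ET well while partitioning it wrongly between transpiration and evaporation.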
Soil water repellency (SWR) increases surface runoff and preferential flow; quantitative evaluation of the SWR distribution is therefore necessary to understand water movement. Because the spatial variability of SWR makes its distribution difficult to measure directly, we developed a method for estimating an SWR distribution index, defined as the areal fraction of surface soil showing SWR (SWRarea). The theoretical basis of the method is as follows: (1) SWRarea is equivalent to the probability that a position on the soil surface is drier than the critical water content (CWC); SWR is present (droplets absorbed in >10 s) when the soil surface is drier than the CWC and absent when it is wetter. (2) CWC and soil moisture content (θ) are normally distributed independent variables. (3) Thus, based on probability theory, the cumulative normal distribution of θ – CWC (f(x)) can be obtained from the distributions of CWC and θ, and f(0), the cumulative probability that θ – CWC < 0, gives the SWRarea. To investigate whether the method gives reasonable results, we repeatedly measured θ at 0–5 cm depth and determined the water repellency of the soil surface at multiple points in fixed plots with different soils and topography in a humid-temperate forest. We then calculated the CWC from the observed θ–SWR relationship at each point. We tested the normality of the CWC and θ distributions and the correlation between CWC and θ, then determined f(x) from the CWC and θ distributions and estimated the SWRarea on each measurement day. Although CWC and θ were both normally distributed, in many cases they were correlated. Nevertheless, the CWC–θ dependency had little effect on the estimation error, and f(x) explained 69% of the SWRarea variability. Our findings show that a stochastic approach is useful for estimating SWRarea.
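The probabilistic estimate f(0) described above follows directly from the stated assumptions: if θ and CWC are independent normals, their difference is normal with mean μθ − μCWC and variance σθ² + σCWC², and SWRarea = P(θ − CWC < 0). A minimal sketch, with purely illustrative moments:

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative distribution function of N(mu, sigma^2)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def swr_area(mu_theta, sd_theta, mu_cwc, sd_cwc):
    """Areal fraction of water-repellent surface soil, f(0)."""
    mu_d = mu_theta - mu_cwc             # mean of theta - CWC
    sd_d = math.hypot(sd_theta, sd_cwc)  # variances add: independence assumed
    return normal_cdf(0.0, mu_d, sd_d)

# Dry surface (mean theta below mean CWC): repellency is widespread.
print(f"{swr_area(0.15, 0.04, 0.20, 0.03):.2f}")  # -> 0.84
# Wet surface (mean theta above mean CWC): repellency is rare.
print(f"{swr_area(0.30, 0.04, 0.20, 0.03):.2f}")  # -> 0.02
```

Note that the study found θ and CWC were often correlated, which formally violates the independence assumption used in the variance sum here, but it reports that this dependency had little effect on the estimation error.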
Lateral saturated soil hydraulic conductivity, Ks,l, is the soil property governing subsurface water transfer in hillslopes and a key parameter in many numerical models simulating hydrological processes at both the hillslope and catchment scales. Likewise, the hydrological connectivity of lateral flow paths plays a significant role in determining the intensity of subsurface flow at various spatial scales. The objective of this study is to investigate the relationship between Ks,l and hydraulic connectivity at the hillslope scale. Ks,l was determined from the subsurface flow rates intercepted by drains and from water table depths observed in a well network. Hydraulic connectivity of the lateral flow paths was evaluated by the synchronicity among piezometric peaks, and between the latter and the peaks of drained flow. Soil moisture and precipitation data were used to investigate the influence of transient hydrological soil conditions on connectivity and Ks,l. It was found that the higher the synchronicity of the water table response between wells, the lower the time lag between the peaks of water levels and those of the drained subsurface flow. Moreover, the most synchronous water table rises produced the highest drainage rates. The relationships between Ks,l and water table depths were highly non-linear, with a sharp increase in values for water table levels close to the soil surface. Estimated Ks,l values for fully saturated soil were on the order of thousands of mm h-1, suggesting the activation of macropores in the root zone. The Ks,l values determined at the peak of the drainage events were correlated with the indicators of synchronicity. The sum of antecedent soil moisture and precipitation was correlated with the indicators of connectivity and with Ks,l.
We suggest that, for simulating realistic processes at the hillslope scale, hydraulic connectivity could be considered implicitly in hydrological modelling through an evaluation of Ks,l at the same spatial scale.
Salt marshes are hotspots of nutrient processing en route to sensitive coastal environments. While our understanding of these systems has improved over the years, we still have limited knowledge of the spatiotemporal variability of critical biogeochemical processes within salt marshes. Sea-level rise will continue to force change on salt marsh functioning, highlighting the urgency of filling this knowledge gap. Our study was conducted in a central California estuary experiencing extensive marsh drowning and relative sea-level rise, making it a model system for such an investigation. Here we instrumented three marsh positions with different degrees of inundation (6.7%, 8.9%, and 11.2% of the time for the upper, middle, and lower marsh positions, respectively), providing locations with varied geochemical characteristics and hydrological interaction at the site. We continuously monitored redox potential (Eh) at depths of 0.1, 0.3, and 0.5 m, subsurface water levels (WL), and temperature at each marsh position to understand how drivers of subsurface biogeochemical processes fluctuate across tidal cycles, using wavelet analyses to explain the interactions between Eh and WL. We found that tidal forcing significantly affects biogeochemical processes by imparting controls on Eh variability, likely driving subsurface hydro-biogeochemistry of the salt marsh. Wavelet coherence showed that the Eh-WL relationship is non-linear, and their lead-lag relationship is variable. We found that precipitation events perturb Eh at depth over timescales of hours, even though WL show relatively minimal change during events. This work highlights the importance of high-frequency measurements, such as Eh, to help explain factors that govern subsurface geochemistry and hydrological processes in salt marshes.
Increased surface temperatures (0.7 °C per decade) in the Arctic affect polar ecosystems by reducing the extent and duration of annual snow cover. Monitoring these important ecosystems requires detailed information on snow cover properties (depth and density) at resolutions (<100 m) that influence ecological habitats and permafrost thaw. As Arctic snow is strongly influenced by vegetation, an ecotype map at 10 m resolution was added to a Random Forest (RF) method previously developed for alpine environments and applied here over an Arctic landscape for the first time. The topographic parameters used in the RF algorithm were the Topographic Position Index (TPI) and the up-wind slope index (Sx), estimated from the freely available Arctic DEM at 2 m resolution. Ecotypes with taller vegetation and moister soils were found to have deeper snow because of the trapping effect. Using feature importance with RF, snow depth distributions were predicted from topographic and ecosystem parameters with a root mean square error of 8 cm (23%) (R² = 0.79) at 10 m resolution for an Arctic watershed (1500 km²) in western Nunavut, Canada.
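One of the two topographic predictors named above, TPI, has a simple standard definition: a cell's elevation minus the mean elevation of its surrounding neighbourhood, so ridges score positive and hollows (which trap drifting snow) score negative. A hedged sketch on a tiny toy grid standing in for the 2 m DEM:

```python
def tpi(dem, row, col, radius=1):
    """TPI = z(cell) - mean(z of cells within `radius`), cell excluded."""
    neigh = []
    for r in range(max(0, row - radius), min(len(dem), row + radius + 1)):
        for c in range(max(0, col - radius), min(len(dem[0]), col + radius + 1)):
            if (r, c) != (row, col):
                neigh.append(dem[r][c])
    return dem[row][col] - sum(neigh) / len(neigh)

# Toy 4x4 elevation grid (m); values are invented for illustration.
dem = [
    [100.0, 101.0, 102.0, 103.0],
    [101.0, 105.0, 103.0, 104.0],
    [102.0, 103.0,  99.0, 105.0],
    [103.0, 104.0, 105.0, 106.0],
]

print(tpi(dem, 1, 1))  # local high -> positive TPI (exposed; shallower snow)
print(tpi(dem, 2, 2))  # local low  -> negative TPI (sheltered; deeper snow)
```

In practice the neighbourhood radius is a tuning choice: small radii capture micro-topography relevant to drifting, larger radii capture valley-versus-ridge position.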
The snowpack regime influences the timing of soil water available for transpiration and its synchrony with the evapotranspiration (ET) energy demand (air temperature, vapor pressure deficit, and shortwave radiation). Variability in snowmelt timing, soil water availability, and energy demand results in heterogeneous ET rates throughout a watershed. In this study, we assess how ET and growing season length vary across five sites on an elevational gradient in the Dry Creek Watershed, ID, USA. We compared trends of daily and annual ET between 2012 and 2017 to the environmental parameters of soil moisture, air temperature, vapor pressure deficit, snow cover, and precipitation, and evaluated how ET varies between sites and what influences annual ET at each site. We observed three trends in ET across the watershed. The first occurs at the low-elevation site, where snow cover is not continuous throughout the winter and rain is the dominant form of precipitation; the growing season and ET begin early in the year, when the energy demand is low and soil water is available. Annual ET at the low-elevation site reflects a balance, with spring precipitation providing soil water into the summer season while also limiting the ET energy demand. The second trend occurs at the middle-elevation site, located in the rain-snow transition. At this site, ET increases with snow depth and spring precipitation, which extend soil water availability into the summer season. At the higher-elevation sites, ET is aligned with the energy demand and limited by growing season length; there, decreasing snow depth and spring precipitation and increasing spring air temperatures result in greater annual ET rates. These observations highlight the influence of environmental parameters on ET and its potential sensitivity to climate change.
Soil–rock mixtures are widely encountered in geotechnical engineering projects, and the instability and failure mechanisms of gap-graded soil–rock mixtures under rainfall conditions have long been a focus of geological disaster research. To explore the mechanism of seepage deformation in soil–rock mixtures, an indoor physical permeability test considering soil–rock mixtures with different fine contents (FC) was conducted, and a particle-scale numerical simulation of the permeability evolution was carried out using a coupled PFC3D–ABAQUS model. The test results showed that the spatial distribution of fine particle loss along the height direction could be divided into three areas: a top loss area, a middle uniform area, and a bottom loss area. The "island" effect of coarse particles, which is caused by excessive fine content and makes the fine particles bear more of the load, was eliminated with the loss of fine particles. Under the preset coarse and fine particle diameters, setting the FC to 35% may best fill the voids between the coarse particles. Particle migration changes the load-bearing skeleton structure, thereby causing seepage deformation. The particle-scale numerical test method can therefore better reproduce the seepage deformation process of gap-graded soil–rock mixtures.
The climate of the Eurasian inland basin (EIB) is characterized by limited precipitation and high potential evapotranspiration; as such, water storage in the EIB is vulnerable to global warming and human activities. There is increasing evidence of varying trends in water storage across different regions; however, a consistent conclusion on the main drivers of these trends is lacking. Based on the hydrological budget of a closed inland basin, the main drivers of changes in actual evapotranspiration (AET) and terrestrial water storage (TWS) were identified for the EIB and for each closed basin. In the EIB and most of its closed basins, TWS showed a significantly decreasing trend and AET a non-significantly increasing trend. The primary cause of the significantly decreasing TWS in the EIB was increasing AET, approximately 70% of which has been supplied by increased irrigation diversions and glacial melt runoff. At the basin scale, as in the EIB, changes in AET were the predominant factor driving changes in TWS in most basins; the exceptions were the Balkhash Lake basin (BLB), Iran inland river basin (IIRB), Qaidam basin (QB), and Turgay River basin (TuRB), where changes in precipitation largely contributed to the TWS changes. AET consumption of other water resources was the main factor contributing to AET changes in seven of 16 basins, including the Aral Sea, Caspian Sea, Junggar, Mongolia Plateau, Qiangtang Plateau, and Tarim River basins. The increase in precipitation contributed more than 60% of the increase in AET in four of 16 basins, particularly the Helmand River basin and QB (>90%). Changes in precipitation and consumption of other water supply sources contributed approximately half of the AET changes in the other five basins: the Inner Mongolia Plateau, Issyk-Kul Sarysu, BLB, IIRB, and TuRB basins.
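The hydrological budget underpinning the attribution above is simple for a closed basin: with no river outflow to the ocean, the change in terrestrial water storage is precipitation minus actual evapotranspiration, dTWS = P − AET. A minimal sketch with invented annual values (not observed EIB data):

```python
def dtws(precip_mm, aet_mm):
    """Annual storage change (mm) for a closed inland basin: P - AET."""
    return precip_mm - aet_mm

# Hypothetical (P, AET) pairs in mm/year:
years = [
    (250.0, 240.0),  # wetter year: small storage gain
    (230.0, 255.0),  # AET exceeds P: storage drawdown
    (240.0, 260.0),  # sustained drawdown (e.g. irrigation, glacier melt)
]

cumulative = 0.0
for p, aet in years:
    cumulative += dtws(p, aet)
print(f"Cumulative TWS change: {cumulative:.1f} mm")
```

This is why, in a closed basin, AET exceeding P for several years necessarily shows up as a declining TWS trend, and why attributing AET changes (to precipitation versus withdrawals of other water sources) attributes the TWS trend as well.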
The main objective of this paper is to quantify the uncertainty linked to how well the spatio-temporal variability of catchment descriptors (CDs), and consequently of calibration parameters (CPs), is represented in distributed hydrological models, and its impact on the simulation of flooding events. We introduce a methodology based on ensemble principles to characterize the uncertainties of spatio-temporal variations, using two distributed hydrological models (WaSiM and Hydrotel) and six catchments of different sizes and characteristics located in southern Quebec. We calibrate the models across four spatial (100, 250, 500, and 1000 m) and two temporal (3 h and 24 h) resolutions. All combinations of CD-CP pairs are then fed to the hydrological models to create, for each catchment, an ensemble of simulations characterizing the uncertainty related to the spatial resolution of the modeling. The catchments are further grouped into large ($>1000 km^2$), medium (between 500 and 1000 $km^2$), and small ($<500 km^2$) to examine multiple hypotheses. The ensemble approach shows a significant degree of uncertainty (over $100\%$ error in the estimation of extreme streamflow) linked to the spatial discretization of the modeling. Regarding the role of catchment descriptors, the results show, first, that there is no meaningful link between spatial-discretization uncertainty and catchment size, as this uncertainty appears across different catchment sizes. Second, the temporal scale plays only a minor role in determining the uncertainty related to spatial discretization. Third, the more physically representative a model is, the more sensitive it is to changes in spatial resolution. Finally, the uncertainty related to model parameters is larger than that of catchment descriptors for most of the catchments.
Yet, there are exceptions for which a change in spatio-temporal resolution can alter the distribution of state and flux variables, change the hydrologic response of the catchments, and cause large uncertainties.
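The ensemble logic above can be sketched compactly: simulations from every catchment-descriptor/calibration-parameter (CD-CP) resolution pairing are pooled, and the spread in a target statistic (here peak flow) relative to the observation is read as discretization uncertainty. All values below are invented for illustration, not results from WaSiM or Hydrotel.

```python
observed_peak = 120.0  # m3/s, hypothetical observed flood peak

# Peak flows simulated with each CD-resolution x CP-resolution pair
# (hypothetical; a real ensemble would span all four resolutions):
ensemble = {
    ("CD 100 m", "CP 100 m"): 118.0,
    ("CD 100 m", "CP 1000 m"): 165.0,
    ("CD 1000 m", "CP 100 m"): 90.0,
    ("CD 1000 m", "CP 1000 m"): 240.0,
}

peaks = list(ensemble.values())
spread_pct = 100.0 * (max(peaks) - min(peaks)) / observed_peak
print(f"Ensemble spread: {spread_pct:.0f}% of observed peak")
```

A spread exceeding 100% of the observed peak, as in this toy case, is the kind of result the abstract reports for extreme streamflow estimation.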
Flow regimes are critical in determining physical and biological processes in rivers, and their classification and regionalization traditionally seek to link patterns of flow to physiographic, climatic, and other information. There are many approaches to, and rationales for, catchment classification; those focused on streamflow often seek to relate a particular response characteristic to a physical property or climatic driver. Rationales include Prediction in Ungauged Basins (PUB), supporting experimental approaches, and guiding model selection in poorly understood hydrological systems. While scale and time are important considerations for classification, the Annual Daily Hydrograph (ADH) is a first-order, easily visualized, integrated expression of catchment function, and over many years forms a distinct hydrological signature. In this study, we use t-SNE, a state-of-the-art dimensionality reduction technique, to classify 17,110 ADHs for 304 reference catchments in mountainous western North America. t-SNE is chosen over conventional dimensionality reduction methods (e.g. PCA) because it provides greater separability of ADHs, which are projected onto a 2D map where similarities are evaluated according to map distance. We then use a deep learning encoder to upgrade the non-parametric t-SNE to a parametric approach, enhancing its capability to handle 'unseen' samples. Results showed that t-SNE was an effective classifier, successfully clustering ADHs of similar flow regimes on the 2D map. In addition, the many compact clusters in the coastal Pacific Northwest suggest information redundancy in the local hydrometric network. The t-SNE map provides an intuitive way to visualize the similarity of high-dimensional ADH data, groups catchments with like characteristics, and avoids reliance on subjective hydrometric indicators.
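The embedding step described above can be illustrated with a small sketch: annual daily hydrographs (365-dimensional vectors) are fed to t-SNE, which places similar regimes near each other on a 2D map. The synthetic hydrographs below (a snowmelt-freshet shape versus a wet-winter rain shape) are stand-ins for the 17,110 real ADHs; the sklearn settings are illustrative, not the study's configuration.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
days = np.arange(365)

def snowmelt_adh():
    # Single freshet peak around day 150, plus noise.
    return np.exp(-((days - 150) ** 2) / (2 * 25.0 ** 2)) + 0.02 * rng.random(365)

def rain_adh():
    # Wet winter / dry summer shape, plus noise.
    return 0.5 + 0.4 * np.cos(2 * np.pi * days / 365) + 0.02 * rng.random(365)

adhs = np.array([snowmelt_adh() for _ in range(10)] +
                [rain_adh() for _ in range(10)])

# Embed the 365-dim hydrographs onto a 2D map; map distance ~ similarity.
emb = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(adhs)
print(emb.shape)  # one (x, y) map coordinate per hydrograph
```

On real data, compact clusters on this map are what the study interprets as groups of hydrologically similar (and potentially redundant) gauges.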
Snow density is one of the essential properties describing snowpack characteristics. Capturing the spatial variability of snow density and estimating it accurately in different periods of the snow season remain challenges, particularly in the mountains. This study analyzed the spatial variability of snow density using in-situ measurements in three periods (accumulation, stable, and melt) of the 2017/2018 and 2018/2019 snow seasons in the middle Tianshan Mountains, China. The performance of a multiple linear regression model (MLR) and three machine learning models (Random Forest (RF), Extreme Gradient Boosting (XGB), and Light Gradient Boosting Machine (LGBM)) in simulating snow density was evaluated. Snow density in the melt period (0.27 g cm-3) was generally greater than in the stable (0.20 g cm-3) and accumulation periods (0.18 g cm-3), and its spatial variability in the melt period was slightly smaller than in the other two periods. Snow density in mountainous areas was generally higher than in plain or valley areas, and it increased significantly (p < 0.05) with elevation in the accumulation and stable periods. Besides elevation, latitude and ground surface temperature also had critical impacts on the spatial variability of snow density in the middle Tianshan Mountains. The machine learning models, especially RF, outperformed MLR in all three periods: compared with MLR, the determination coefficients of RF improved from 0.50, 0.10, and 0.52 to 0.61, 0.51, and 0.58 in the accumulation, stable, and melt periods, respectively. This study provides a more accurate snow density simulation method for estimating regional snow mass and snow water equivalent, enabling a better understanding of regional snow resources.
The hydroclimatology of Northern South America responds to strongly coupled dynamics of oceanic and terrestrial surface-atmosphere exchange, as moisture evaporated from these sources interacts to produce continental rainfall. However, the relative contributions of these two source types through the annual cycle have been described only in modeling studies, with no observational tools used to corroborate the predictions. Isotopic techniques have commonly been used to study moisture sources and assess changes in the water cycle and climate dynamics, as isotopes allow tracking of the connection between evaporation, transpiration, and precipitation, as well as the influence of large-scale hydroclimatic phenomena such as the seasonal migration of the Intertropical Convergence Zone. We characterize the isotopic composition of moisture sources becoming precipitation in the Andes and Caribbean regions of Colombia, using stable isotope data (δ18O, δ2H) from the Global Network of Isotopes in Precipitation (1971-2016) and contrasting it with moisture trajectory tracking from the FLEXPART model, driven by ERA-Interim reanalysis, to compute the relative contributions of oceanic and terrestrial sources through the annual cycle. Our results indicate that most precipitation in the region comes from terrestrial sources, including recycling (>30% in all months), the Orinoco (up to 28% monthly, in April), and the northern Amazon (up to 17% monthly, in June, July, and August), followed by oceanic sources, including the Tropical South Pacific (up to 30% monthly in October, November, and December) and the Tropical North Atlantic (up to 30% monthly, in January). These outcomes highlight the utility of combining stable isotopes in precipitation with modeling techniques to discriminate terrestrial and oceanic sources of precipitation.
Further, our results highlight the need to assess the hydrological consequences of land cover change in South America, particularly in a country like Colombia where water, food, and energy security all depend directly on precipitation.
Aaron Smith 1, Doerthe Tetzlaff 1,2,3, Marco Maneta 4, Chris Soulsby 3,2
1 IGB Leibniz Institute of Freshwater Ecology and Inland Fisheries Berlin, Berlin, Germany
2 Humboldt University Berlin, Berlin, Germany
3 Northern Rivers Institute, School of Geosciences, University of Aberdeen, UK
4 Department of Geosciences, University of Montana, Missoula, Montana, USA
Correspondence to: Aaron Smith (email@example.com)
Midwestern cities require forecasts of surface nitrate loads to bring additional treatment processes online or to activate alternative water supplies. Concurrently, networks of nitrate monitoring stations are being deployed in river basins, co-locating water quality observations with established stream gauges. Here, we construct a synthetic data set of stream discharge and nitrate for the Wabash River Basin - one of the U.S.'s most nutrient-polluted basins - using the established Agro-IBIS model. While real-world observations are limited in space and time, particularly for nitrate, the synthetic data set provides records long enough to train machine learning models and assess their performance. Using the synthetic data, we established baseline 1-day forecasts for surface water nitrate at 12 cities in the basin using support vector machine regression (SVMR; RMSE 0.48-3.3 mg/L). Next, we used the SVMRs to evaluate the improvement in forecast performance associated with deploying additional sensors. Synthetic data enable us to quantitatively assess the expected value of an additional nitrate sensor, which is not possible when limited to the present observational network. We identified the optimal sensor placement to improve forecasts at each city, and the relative value of sensors at all possible locations. Finally, we assessed the co-benefit realized by other cities when a sensor is deployed to optimize a forecast at one city, finding significant positive externalities in all cases. Ultimately, our study explores the potential for AI to make short-term predictions and to provide an unbiased assessment of the marginal benefits and co-benefits of an expanded sensor network. While we use water quality in the Wabash River Basin as a case study, this approach could be readily applied to any problem where the future value of sensors and network design is being evaluated.
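The baseline forecasting setup described above can be sketched as follows: a support vector machine regression (SVMR) is trained to predict next-day nitrate from today's discharge and nitrate. The synthetic series below stands in for the Agro-IBIS output; the model settings, noise levels, and nitrate-discharge relationship are assumptions for illustration only, not Wabash data.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(42)
n = 300
# Synthetic discharge (smooth seasonal signal + noise) and a
# hypothetical nitrate series loosely tied to discharge:
q = 50 + 20 * np.sin(np.arange(n) / 20) + rng.normal(0, 2, n)
no3 = 0.05 * q + rng.normal(0, 0.3, n)  # mg/L

# Features: today's (Q, NO3); target: tomorrow's NO3 (1-day forecast).
X = np.column_stack([q[:-1], no3[:-1]])
y = no3[1:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X[:250], y[:250])
pred = model.predict(X[250:])
rmse = float(np.sqrt(np.mean((pred - y[250:]) ** 2)))
print(f"1-day forecast RMSE: {rmse:.2f} mg/L")
```

Sensor-placement value can then be assessed by repeating this fit with candidate upstream series appended as extra feature columns and comparing the resulting RMSE reductions.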