The era of "big data" promises to provide new hydrologic insights, and open web-based platforms are being developed and adopted by the hydrologic science community to harness these datasets and data services. This shift accompanies advances in hydrology education and the growth of web-based hydrology learning modules, but their capacity to use emerging open platforms and data services to enhance student learning through data-driven activities remains largely untapped. Because generic equations may not easily translate into local or regional solutions, it is essential to teach students to explore, using real data, how well models or equations work in particular settings or answer specific problems. This paper introduces an open web-based learning module developed to advance data-driven hydrologic process learning, targeting upper-level undergraduate and early graduate students in hydrology and engineering. The module was developed and deployed on the HydroLearn open educational platform, which provides a formal pedagogical structure for developing effective problem-based learning activities. We found that data-driven learning activities using collaborative open web platforms such as HydroShare and CUAHSI JupyterHub computational notebooks allowed students to access and work with datasets for systems of personal interest and promoted critical evaluation of results and assumptions. Initial student feedback was generally positive, but it also highlights challenges, including difficulties with troubleshooting and future-proofing and some resistance to open-source software and programming. Opportunities to further enhance hydrology learning include better articulating the myriad benefits of open web platforms upfront, incorporating additional user-support tools, and focusing methods and questions on implementing and adapting notebooks to explore fundamental processes rather than on tools and syntax.
The profound shift in the field of hydrology toward big data, open data services and reproducible research practices requires hydrology instructors to rethink traditional content delivery and focus instruction on harnessing these datasets and practices in the preparation of future hydrologists and engineers.
Artificial subsurface (tile) drainage is used to increase trafficability and crop yield in much of the Midwest due to soils with naturally poor drainage. Tile drainage has been researched extensively at the field scale, but knowledge gaps remain on how it influences the streamflow response at the watershed scale. The purpose of this study is to analyze the effect of tile drainage on the streamflow response for 59 Ohio watersheds with varying percentages of tile drainage and to explore relationships between the Western Lake Erie Bloom Severity Index and streamflow response in heavily tile-drained watersheds. Daily streamflow records from 2010-2019 were used to calculate mean annual peak daily runoff, mean annual runoff ratio, the percent of observations in which daily runoff exceeded mean annual runoff (TQmean), baseflow versus stormflow percentages, and the streamflow recession constant. Heavily drained watersheds (>40% of watershed area) consistently exhibited flashier streamflow behavior than watersheds with low percentages of tile drainage (<15% of watershed area), as indicated by significantly lower baseflow percentages, TQmean, and streamflow recession constants. The mean baseflow percent for watersheds with high percentages of tile drainage was 20.9%, compared to 40.3% for watersheds with low percentages of tile drainage. These results contrast with similar regional research indicating greater baseflow proportions and less flashy hydrographs (higher TQmean) for heavily drained watersheds. Stormflow runoff metrics in heavily drained watersheds were significantly positively correlated with western Lake Erie algal bloom severity. Given the recent trend toward more frequent large rain events and warmer temperatures in the Midwest, increased harmful algal bloom severity will continue to be an ecological and economic problem for the region if management efforts do not address the problem at its source.
Management practices that attenuate the streamflow response to storm events, such as buffer strips, wetland restoration, or drainage water management, are likely to improve the aquatic health conditions of downstream communities by limiting the transport of nutrients following storm events.
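The flashiness metrics used in this abstract (TQmean, baseflow percent, and the recession constant) can be sketched in a few lines of Python. This is a minimal illustration assuming a NumPy array of daily discharge; the Lyne-Hollick filter parameter and pass count are common defaults from the literature, not values taken from this study:

```python
import numpy as np

def tq_mean(q):
    # TQmean: fraction of days on which daily runoff exceeds the mean runoff
    q = np.asarray(q, dtype=float)
    return float(np.mean(q > q.mean()))

def baseflow_percent(q, alpha=0.925, passes=3):
    # Baseflow share of total flow from a one-parameter Lyne-Hollick
    # digital filter (alpha and pass count are common defaults)
    q = np.asarray(q, dtype=float)
    total = q.sum()
    b = q.copy()
    for _ in range(passes):
        qf = np.zeros_like(b)  # filtered quickflow
        for t in range(1, len(b)):
            qf[t] = alpha * qf[t - 1] + 0.5 * (1 + alpha) * (b[t] - b[t - 1])
        b = np.clip(b - np.clip(qf, 0.0, None), 0.0, b)
    return float(100.0 * b.sum() / total)

def recession_constant(q):
    # Median day-to-day ratio Q[t+1]/Q[t] over receding days
    q = np.asarray(q, dtype=float)
    ratios = q[1:] / q[:-1]
    return float(np.median(ratios[ratios < 1.0]))
```

Lower TQmean, lower baseflow percent, and a lower recession constant all indicate flashier behavior, which is the pattern reported above for heavily drained watersheds.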
Submersion and exposure from the operation of the Three Gorges Reservoir (TGR) can alter soil properties and plant characteristics at different elevations of the water level fluctuation zone (WLFZ), possibly influencing soil detachment capacity (Dc), but the vertical heterogeneity of this effect is uncertain. Soil samples were taken from 6 segments (5 m elevation per segment) along a slope profile in the WLFZ of the TGR to clarify the vertical heterogeneity of Dc. Scouring experiments were conducted at 5 slope gradients (17.63%, 26.79%, 36.40%, 46.63%, and 57.74%) and 5 flow rates (10, 15, 20, 25, and 30 L min-1) to determine Dc. The results indicate that the soil properties and biomass parameters of the WLFZ are strongly affected by elevation. Dc fluctuates with increasing elevation, with maximum and minimum average values at elevations of 145-150 m and 165-170 m, respectively. Linear equations accurately describe the relationships between Dc and the hydrodynamic parameters; τ, ω, and E perform much better than U as predictors. Furthermore, a clear improvement is seen when using the general index of flow intensity to estimate Dc. Dc is significantly negatively correlated with MWD (p < 0.05) and organic matter (p < 0.01) but not significantly correlated with other soil properties (p > 0.05). At elevations of 145-150 m and 170-175 m, rill erodibility was greater than at other elevations. The critical hydraulic parameters were highest in the 165-170 m segment; both rill erodibility and the critical hydraulic parameters fluctuated markedly along the vertical direction of the slope surface. This research highlights the vertical heterogeneity of soil detachment and helps clarify the mechanisms of soil detachment processes in the WLFZ of the TGR.
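Statements such as "τ, ω, and E perform much better than U" are typically judged by the coefficient of determination of a least-squares linear fit of Dc against each candidate predictor. A generic sketch (not the study's code; the arrays passed in would be the measured predictor and Dc values):

```python
import numpy as np

def linear_r2(x, y):
    # Least-squares linear fit y = a*x + b and its coefficient of
    # determination (R^2); a higher R^2 means the predictor performs better
    a, b = np.polyfit(x, y, 1)
    residuals = y - (a * x + b)
    return 1.0 - residuals.var() / y.var()
```

Computing `linear_r2` for each hydrodynamic parameter in turn and ranking the results mirrors the comparison reported above.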
When formulating a hydrologic model, scientists rely on parameterizations of multiple processes based on field data, but a literature review suggests that modelers more frequently select parameterizations that were included in pre-existing models rather than re-evaluating the underlying field experiments. Problems arise when limited field data exist, when “trusted” approaches do not get re-evaluated, and when processes fundamentally change in different environments. The physics and dynamics of snow interception by conifers, including both loading and unloading of snow, are just such a case. The most commonly used interception parameterization is based on data from four trees at one site, but field study results are not directly transferable between environments. The process varies dramatically between locations with relatively warmer versus colder winters. Here, we combine a comprehensive literature review with a model to demonstrate essential improvements to model representations of snow interception. We recommend that, as a first and essential step, all models include increased loading due to increased adhesion and cohesion as temperatures rise from -3 to 0°C. The commonly used parameters of a fixed maximum value for loading and an e-folding time for unloading are not supported by observations or physical understanding and are not necessary to reproduce observations. In addition to unloading based on physical processes, such as wind or canopy warming, all models must represent melting of in-canopy snow so that it can be unloaded in liquid form. As a second step, we propose field experiments across climates and forest types to investigate: a) a representation of the force balance between adhesion and cohesion versus gravity, for both interception efficiency and rates of unloading; b) wind effects during and between storms; and c) lubrication when snow melts. For greatest impact, this framework requires dedicated field measurements.
These processes are essential for models to accurately represent the impacts of dynamically changing forest cover and snow cover on both global albedo and water supplies.
Based on historical records and crop harvest scores extracted from historical documents, this study reconstructed the spatiotemporal distribution and severity of floods in the Yangtze-Huai River valley in 1823 and 1849. We also summarized the effects of the floods on society and identified government measures taken to cope with the floods in the context of the economic recession of 1801-1850. The 1823 flood, which was caused by heavy precipitation during the Meiyu period and typhoons, severely affected areas in the lower reaches of the Yangtze River. Meanwhile, the 1849 flood, triggered by long-term, high-intensity Meiyu precipitation in the middle and lower reaches of the Yangtze River, mainly affected areas along the Yangtze River. The 1849 disaster was more serious than the one in 1823. In the lower reaches of the Yangtze River, the 1849 flood caused the worst agricultural failure of the period 1730-1852. To deal with the disasters, the Qing government took relief measures, such as exempting taxes in the affected areas, distributing grain stored in warehouses, and transferring grain to severely afflicted areas. These relief measures were supplemented by auxiliary measures, such as exempting commodity taxes on grain shipped to disaster areas and punishing officials who failed to provide adequate disaster relief. The flood disasters disrupted the water system of the Grand Canal and forced the Qing government to transport Cao (tribute) rice by sea beginning in 1826. This laid the groundwork for the rise of coastal shipping in modern China. With the economic recession of the 19th century, Chinese society was not as resilient to floods as it had been in the 18th century. Compared to droughts, floods are more difficult to deal with and pose greater threats to infrastructure and to normal life and work in the cities.
Climate change, in terms of regional warming and modifications in precipitation regimes, has large impacts on streamflow in regions where both rainfall and snowmelt are important runoff-generating processes, as in Norway. Hydrological impacts of recent changes in climate are usually investigated by trend analyses applied to annual, seasonal, or monthly time series. However, none of these can detect sub-seasonal changes and their underlying causes. Based on high-resolution trend analyses (i.e., applying the Mann-Kendall test to 10-day-moving-averaged daily time series), this study investigated sub-seasonal changes in daily streamflow, rainfall, and snowmelt in 61 and 51 catchments in Western and Eastern Norway (Vestlandet and Østlandet), respectively, over the period 1983-2012. The relative contributions of rainfall versus snowmelt to daily streamflow, and the changes therein, were also estimated to identify the changing relevance of these driving processes over the same period. Detected changes in daily streamflow were finally attributed to changes in the most important hydro-meteorological drivers using multiple-regression models of increasing complexity. Results reveal a coherent picture of earlier spring flow timing in both regions due to earlier snowmelt. Other streamflow trend patterns differ between the two regions: Østlandet shows increased summer streamflow in catchments up to ~1100 m a.s.l. and slightly increased winter streamflow in about 50% of the catchments, while trend patterns in Vestlandet are less coherent. The importance of rainfall for streamflow contribution has increased in both regions, and the trend attribution reveals that changes in rainfall and snowmelt can explain streamflow changes to some degree in the periods and regions where they are dominant (snowmelt: spring and Østlandet; rainfall: autumn and Vestlandet).
However, detected streamflow changes are best explained by adding temperature as an additional predictor, which indicates the relevance of further driving processes for streamflow change, such as increased glacier melt and evapotranspiration.
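The high-resolution trend test described in this abstract (a Mann-Kendall test applied to 10-day-moving-averaged daily series) can be sketched as follows. This simplified version omits tie corrections and significance testing, which a full analysis would include:

```python
import numpy as np

def moving_average(x, window=10):
    # 10-day moving average of a daily series ('valid' mode trims the edges)
    return np.convolve(x, np.ones(window) / window, mode="valid")

def mann_kendall_s(x):
    # Mann-Kendall S statistic: net count of increasing minus decreasing
    # pairs; a strongly positive S indicates an upward trend
    x = np.asarray(x, dtype=float)
    s = 0.0
    for i in range(len(x) - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    return s
```

In the sub-seasonal setting, the smoothed value for a given day of year would be extracted from each year and the S statistic computed across years, one test per calendar day.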
While 1992 marked the first major dam – Manwan – on the main stem of the Mekong River, the post-2010 era has seen the construction and operationalisation of mega-dams such as Xiaowan (operational in 2010) and Nuozhadu (operational in 2014) that are much larger than any dams built before. The scale of these projects implies that their operations will likely have significant ecological and hydrological impacts from the Upper Mekong Basin to the Vietnamese Delta and beyond. Historical water level and water discharge data from 1960 to 2020 were analysed to examine changes in streamflow conditions across three periods: 1960-1991 (pre-dam), 1992-2009 (growth) and 2010-2020 (mega-dam). At Chiang Saen, the nearest station to the China border, monthly water discharge in the mega-dam period has increased by up to 98% during the dry season and decreased by as much as 35% during the wet season compared to pre-dam records. Similarly, monthly water levels rose by up to 1.16 m during the dry season and dropped by up to 1.55 m during the wet season. This pattern of hydrological alteration is observed downstream to at least Stung Treng (Cambodia) in our study, showing that Mekong streamflow characteristics have shifted substantially in the post-2010 era. In light of such changes, the 2019-2020 drought – the most severe in recent history in the Lower Mekong Basin – was a consequence of constructed dams reducing the amount of water during the wet season. This reduction was exacerbated by decreased monsoon precipitation in 2019. Concurrently, the untimely operationalisation of the newly opened Xayaburi dam in Laos coincided with the peak of the 2019-2020 drought and could have aggravated the dry conditions downstream. Thus, the mega-dam era (post-2010) may signal the start of a new normal of wet-season droughts.
1. Introduction
Tropical mountainous ecosystems are recognized as providers of valuable ecological and hydrological services (Viviroli et al., 2007). In Central America, the Páramo, a high-elevation tropical grassland ecosystem, extends over ~200 km2 in Costa Rica and Panama, with ~50% of this area located within the Chirripó National Park between 3,100 and 3,820 m asl (-83.49°, 9.46°). Vegetation mostly consists of 0.5 to 2.5 m tall bamboo-dominated (Chusquea subtessellata) grasslands, covering up to 60% of the total Páramo area in Costa Rica (Fig. 1a). The climate is controlled by the northeast trade winds, the latitudinal migration of the Intertropical Convergence Zone (ITCZ), cold continental outbreaks (i.e., northerly winds), and the seasonal influence of Caribbean cyclones. These circulation patterns produce two rainfall maxima on the Pacific slope, one in June and one in September, interrupted by a relative minimum between July and August, known as the Mid-Summer Drought, caused by intensification of the trade winds over the Caribbean Sea (Magaña et al., 1999; Waylen, 1996). The wettest season extends from May to November (contributing up to 89% of the annual precipitation), whereas the driest season is from December to April (Fig. 2a; Esquivel-Hernández et al., 2018). The surface water system of Chirripó is characterized by a lake district comprising approximately 30 lakes of glacial origin and streams flowing down the Caribbean and Pacific slopes (Fig. 1b). Lake catchments are characterized by steep slopes that promote rapid hydrological responses such as fast water-level changes. Input of water to these glacial lakes is mostly controlled by seasonal rainfall, which mixes with stream and subsurface waters.
In April 2015, the Chirripó Hydrological Research Site (CHRS) was installed with the goal of advancing the understanding of the hydrological functioning in the Central American Páramo using environmental tracers (i.e., water stable isotopes) in combination with hydrometric data. A detailed map of CHRS is available in Esquivel-Hernández et al. (2019).
Irrigation activities are a major control on water movement and storage in irrigated river valleys in the Intermountain West, USA. Particularly in dry years, surface water diversions can deplete streams over the summer irrigation season, leading to more variable stream temperatures and increased risk for resident aquatic species. Cooler lateral inflows derived from irrigation activities can mitigate the impacts of depletion by buffering main-channel stream temperatures. Given the increasing susceptibility of depleted streams to climate and land use changes, understanding stream temperature patterns and controls in these systems is critical. We used intensive field monitoring over three summers and thermal aerial imagery to characterize stream temperature patterns and irrigation influences in a 2.5 km reach of a small agricultural stream in northern Utah. Considering variable hydrology, weather, channel morphology, diversions, and lateral inflows, we found stream temperatures to be relatively insensitive to flow depletion or lateral inflows in a wet year but very sensitive in drier years. Irrigation-related lateral inflows reduced longitudinal warming and diel variability during drier years and at times prevented temperatures from reaching stressful or lethal limits. Reaches with substantial lateral inflow contributions also had a greater areal proportion of low temperatures and greater spatial temperature diversity. These trends were enhanced by differences in channel morphology, with greater spatial and temporal variability in multi-thread than single-thread reaches. Study results highlight the critical flow and weather conditions driving increased temperature variability, which will likely become more extreme with further climate-change-related reductions in baseflow.
Regardless of the cause, this study highlights that decreased instream flows increase the importance of identifying, quantifying, and maintaining lateral inflows to moderate instream temperatures, and the preservation of these inflows should be considered in future water management decisions.
The active rock glacier “Innere Ölgrube” and its catchment area (Ötztal Alps, Austria) are assessed using various hydro(geo)logical tools to provide a thorough catchment characterization and to quantify temporal variations in recharge and discharge components. For the period from June 2014 to July 2018, a rainfall-runoff model yields average contributions from snowmelt, ice melt and rainfall of 35.8%, 27.6% and 36.6%, respectively, for the catchment. Discharge components of the rock glacier springs are distinguished using isotopic data as well as other natural and artificial tracer data, considering rainfall, snowmelt, ice melt and groundwater as potential sources. Seasonal as well as diurnal variations in runoff are quantified, and the importance of shallow groundwater within this rock glacier-influenced catchment is emphasized. Water derived from ice melt is suggested to be provided mainly by melting of two small cirque glaciers within the catchment and subordinately by melting of permafrost ice of the rock glacier. The active rock glacier is characterized by a layered internal structure: an unfrozen base layer responsible for groundwater storage and retarded runoff, a main permafrost body currently contributing little to discharge via permafrost thaw, and an active layer responsible for fast lateral flow on top of the permafrost body. Snowmelt contributes at least one-third of the annual recharge. During droughts, meltwater from the two cirque glaciers provides runoff with diurnal variations; however, this discharge pattern will change as these cirque glaciers ultimately disappear.
The storage-discharge characteristics of the investigated active rock glacier catchment exemplify the shallow groundwater aquifers that ought to be considered when analysing (future) river runoff characteristics in alpine catchments, as these aquifers provide retarded runoff during periods with little or no recharge.
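Distinguishing discharge components from tracer data, as done for the rock glacier springs above, usually amounts to solving a tracer mass balance. A minimal two-tracer, three-end-member sketch is shown below; all tracer signatures are hypothetical numbers for illustration, not the study's data:

```python
import numpy as np

# Hypothetical end-member signatures (delta-18O in permil, EC in uS/cm);
# these numbers are illustrative, not measurements from the study.
endmembers = np.array([
    [-14.0, 15.0],   # snowmelt
    [-16.5, 10.0],   # ice melt
    [-11.0, 90.0],   # groundwater
])

def mixing_fractions(sample):
    # Solve the tracer mass balance: the fractions must reproduce both
    # tracer values of the spring sample and sum to one
    A = np.vstack([endmembers.T, np.ones(len(endmembers))])
    b = np.append(np.asarray(sample, dtype=float), 1.0)
    return np.linalg.solve(A, b)
```

With n tracers, up to n+1 end-members can be separated this way; negative solved fractions signal that the chosen end-members cannot explain the sample.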
Rock glaciers are increasingly influencing the hydrology and water chemistry of Alpine catchments, with important implications for drinking water quality and ecosystem health under a changing climate. During the summers of 2017-2019, we monitored the physical and chemical conditions of springs emerging from two active rock glaciers (ZRG and SRG) with distinct geomorphological settings in the Eastern Italian Alps (Solda/Sulden catchment). Both springs had consistently cold waters (1.4 ± 0.1 °C), and their ionic composition was dominated by SO₄²⁻, HCO₃⁻, Ca²⁺ and Mg²⁺. Concentrations of major ions and trace elements, and values of water isotopes (δ18O, δ2H), increased towards autumn, with an asymptotic trend at SRG and a positive unimodal pattern at ZRG, where concentrations peaked 60-80 days after the end of snowmelt. Wavelet analysis of electrical conductivity (EC) and water temperature records revealed daily cycles only at SRG, and significant weekly/biweekly fluctuations at both springs attributable to oscillations in meteorological conditions. Several rainfall events triggered a transient (0.5-2 h) EC drop and water temperature rise (dilution and warming) at SRG, whereas only intense rainfall events occasionally increased EC at ZRG (solute enrichment and thermal buffering), with a long-lasting effect (6-48 h). Our results, supported by a limited but emerging literature, suggest that: i) the distinctive composition of the bedrock drives different concentrations of major ions and trace elements in rock glacier springs; ii) pond-like and stream-like springs have distinct fluctuations of water parameters at different timescales; iii) peaks of EC/solute concentrations indicate a seasonal window of major permafrost thaw for rock glaciers feeding pond-like springs.
These results provide a first quantitative description of the hydrological seasonality of rock glacier outflows and their hydrochemical response to precipitation events, offering relevant information for water management in the European Alps under climate change.
Aquatic vegetation, hydraulics and sediment transport have complex interactions that are not yet well understood. These interactions are important for sediment conveyance, sediment sequestration, phasing of sediment delivery from runoff events, and management of ecosystem health in lowland streams. To address this knowledge gap, detailed field measurements of sediment transport through natural flexible aquatic vegetation are required to supplement and validate laboratory results. This paper contributes a field study of suspended sediment transport through aquatic vegetation, including mechanical removal of aquatic vegetation with a weed-cutting boat. It also provides methods to quantify vegetation cover through remote sensing with Unmanned Aerial Vehicles (UAVs) and to estimate biomass from ground-truth sampling. Suspended sediment concentrations (SSC) were highly dependent on aquatic vegetation abundance and on the distance upstream that had been cleared of aquatic vegetation. When the study reach was fully vegetated (i.e., cover >80%), the maximum recorded SSC was 14.6 g/m3 (during a fresh with a discharge of 2.47 m3/s); during weed-cutting operations (cutting boat 0.5-1 km upstream of the study reach), SSC was 76.8 g/m3 at 0.84 m3/s; and following weed-cutting operations (4.6 km cleared upstream), SSC was 139.0 g/m3 at a discharge of 1.52 m3/s. The data indicate that fine sediment was being sequestered by aquatic vegetation and was likely remobilised after vegetation removal. Investigation of suspended sediment spatial dynamics illustrated changes in particle size distribution due to preferential settling of coarse particles within aquatic vegetation. Hydraulic resistance in the study reach (parameterized by Manning’s n) dropped by over 70% following vegetation cutting. Prior to cutting, hydraulic resistance was discharge-dependent; post cutting, it was approximately invariant with discharge.
Aerial surveying captured notable changes in aquatic vegetation cover, where some very dense regions of aquatic vegetation were naturally removed, leaving behind unvegetated riverbed and fine sediment.
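The hydraulic resistance result above can in principle be reproduced by rearranging Manning's equation for n given measured discharge, cross-section geometry, and slope. A generic sketch (the numeric values in the test below are illustrative, not the study's measurements):

```python
def mannings_n(Q, area, wetted_perimeter, slope):
    # Manning's equation (SI units) rearranged for roughness n:
    #   Q = (1/n) * A * R**(2/3) * sqrt(S), with hydraulic radius R = A / P
    R = area / wetted_perimeter
    return area * R ** (2.0 / 3.0) * slope ** 0.5 / Q
```

Computing n this way before and after vegetation cutting, across a range of discharges, is how a discharge-dependent versus discharge-invariant resistance would show up.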
Peatlands are globally important long-term sinks of carbon; however, there is concern that enhanced moss moisture stress due to climate-change-mediated drought will reduce moss productivity, making these ecosystems vulnerable to carbon loss and associated long-term degradation. Peatlands are resilient to summer moss drought stress because of negative ecohydrological feedbacks that generally maintain a wet peat surface, but these feedbacks may be contingent on peat depth. We tested this ‘survival of the deepest’ hypothesis by examining water table (WT) position, near-surface moisture content, and soil water tension in peatlands that differ in size, peat depth, and catchment area during a summer drought. All shallow sites lost their WT (i.e., the groundwater well was dry) for considerable time during the drought period. Near-surface soil water tension increased dramatically at shallow sites following WT loss, reaching values ~5-7.5× greater than at deep sites. During an intensive field survey in a mid-summer drought, we found that 60%-67% of plots at shallow sites exceeded a 100 mb tension threshold used to infer moss water stress. Unlike at the shallow sites, tension typically did not exceed this 100 mb threshold at the deep sites. Using species-dependent water content-chlorophyll fluorescence thresholds and relations between volumetric water content and water table depth, Monte Carlo simulations suggest that moss had nearly twice the likelihood of being stressed at shallow sites (0.38 ± 0.24) compared to deep sites (0.22 ± 0.18). This study provides evidence that mosses in shallow peatlands may be particularly vulnerable to warmer and drier climates in the future, although species composition may play an important role. We argue that a critical ‘threshold’ peat depth, specific to different hydrogeological and hydroclimatic regions, can be used to assess which peatlands are especially vulnerable to climate-change-mediated drought.
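The Monte Carlo stress estimate described above can be sketched as repeated sampling of near-surface moisture against a species-dependent threshold. The normal distribution and all numbers here are illustrative assumptions, not the study's actual setup:

```python
import numpy as np

def stress_probability(vwc_mean, vwc_sd, threshold, n=100_000, seed=0):
    # Monte Carlo estimate of the probability that near-surface volumetric
    # water content falls below a species-dependent stress threshold;
    # the normal distribution is an illustrative assumption
    rng = np.random.default_rng(seed)
    vwc = rng.normal(vwc_mean, vwc_sd, n)
    return float(np.mean(vwc < threshold))
```

Running such an estimate per site with site-specific moisture distributions, and comparing the resulting probabilities, mirrors the shallow-versus-deep comparison (0.38 vs. 0.22) reported above.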
Extreme precipitation can have profound consequences for communities, resulting in natural hazards such as rainfall-triggered landslides that cause casualties and extensive property damage. A key challenge to understanding and predicting rainfall-triggered landslides comes from observational uncertainties in the depth and intensity of precipitation preceding the event. Practitioners and researchers must select among a wide range of precipitation products, often with little guidance. Here we evaluate the degree of precipitation uncertainty across multiple precipitation products for a large set of landslide-triggering storm events and investigate the impact of these uncertainties on predicted landslide probability using published intensity-duration thresholds. The average intensity, peak intensity, duration, and NOAA Atlas return periods are compared for storms preceding 228 reported landslides across the continental US and Canada. Precipitation data are taken from four products covering disparate measurement methods: near-real-time and post-processed satellite (IMERG), radar (MRMS), and gauge-based (NLDAS-2). Landslide-triggering precipitation was found to vary widely across precipitation products, with the depth of individual storm events diverging by as much as 296 mm and an average range of 51 mm. Peak intensity measurements, which are typically influential in triggering landslides, were also highly variable, with an average range of 7.8 mm/hr and differences of as much as 57 mm/hr. The two products more reliant upon ground-based observations (MRMS and NLDAS-2) performed better at identifying landslides according to published intensity-duration storm thresholds, but all products exhibited hit ratios greater than 0.56. A greater proportion of landslides were predicted when including only manually verified landslide locations.
We recommend practitioners consider low-latency products like MRMS for investigating landslides, given their near-real-time data availability and good performance in detecting landslides. Practitioners would also be well served by considering more than one product as a way to confirm intense storm signals and minimize the influence of noise and false alarms.
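Applying a published intensity-duration threshold, as in the evaluation above, reduces to a one-line power-law test. The default coefficients below follow Caine's widely cited 1980 global threshold; any regional threshold's coefficients can be substituted:

```python
def exceeds_id_threshold(intensity_mm_hr, duration_hr, a=14.82, b=-0.39):
    # Power-law intensity-duration threshold I = a * D**b; a storm whose
    # mean intensity sits above the curve is flagged as landslide-triggering.
    # Defaults follow Caine's (1980) global threshold; substitute any
    # published regional coefficients.
    return intensity_mm_hr >= a * duration_hr ** b
```

Running this test on the same storm as seen by each precipitation product is how the product-to-product differences in hit ratio arise: products that report lower intensities for the same event fall below the curve and miss the landslide.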
Groundwater age is often used to estimate groundwater recharge through a simplified analytical approach. The estimated recharge is thought to be representative of the mean recharge between the point of entry and the sampling point. However, given the complexity of actual recharge, whether this mean recharge is reasonable remains unclear. This study examined the validity of the method for estimating long-term average groundwater recharge and the possibility of obtaining a reasonable spatial recharge pattern. We first validated our model's ability to produce reasonable age distributions using a constant-flux boundary condition. We then generated different flow fields and age patterns using spatially varying flux boundary conditions with different magnitudes and wavelengths. Groundwater recharge was then estimated from the simulated ages and analyzed spatially using the analytical method. Finally, we illustrate the main findings with a field example. Our results suggest that long-term average groundwater recharge can be estimated to within 10% error in many parts of an aquifer. The size of these areas decreases as both the amplitude and the wavelength of the flux variation increase. The chance of obtaining a reasonable groundwater recharge estimate is higher if an age sample is collected from the middle of an aquifer and in downstream areas. Our study also indicates that the method can be used to estimate local groundwater recharge if age samples are collected close to the water table. However, care must be taken in determining groundwater age regardless of conditions.
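One common form of the simplified analytical approach evaluated here is the exponential (Vogel-type) age model for an idealized unconfined aquifer, which can be inverted for recharge. This is a sketch of the general method, not necessarily the study's exact formulation:

```python
import numpy as np

def recharge_from_age(age_yr, porosity, thickness_m, depth_m):
    # Exponential (Vogel-type) age model for an idealized unconfined aquifer
    # of thickness H: age(z) = (porosity * H / R) * ln(H / (H - z)),
    # solved here for recharge R; z is sample depth below the water table
    H, z = thickness_m, depth_m
    return porosity * H / age_yr * np.log(H / (H - z))
```

The model assumes spatially uniform recharge, which is exactly the assumption the study tests by imposing spatially varying flux boundary conditions.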
Uncovering the hillslope scale flow and transport dynamics in an experimental hydrologic system

Minseok Kim1, Till H. M. Volkmann1,2, Aaron Bugaj1, Yadi Wang3, Antônio A. Meira Neto4, Katarena Matos4, Ciaran J. Harman5,6, Peter A. Troch1,4

1Biosphere 2, University of Arizona, Tucson, AZ, USA; 2Applied Intelligence, Accenture, Kronberg im Taunus, Germany; 3Department of Environmental Science, University of Arizona, Tucson, AZ, USA; 4Department of Hydrology and Atmospheric Sciences, University of Arizona, Tucson, AZ, USA; 5Department of Environmental Health and Engineering, Johns Hopkins University, Baltimore, MD, USA; 6Department of Earth and Planetary Sciences, Johns Hopkins University, Baltimore, MD, USA

Hillslope scale water flow and transport dynamics have been extensively studied (Burt & McDonnell, 2015; Hewlett & Hibbert, 1963), but observing those internal dynamics at high spatial and temporal resolution remains challenging. In this study, we uncover internal water flow and transport dynamics in an artificial hillslope in the Landscape Evolution Observatory (LEO), Biosphere 2, University of Arizona, Tucson, USA, using an experimental dataset collected in December 2016. Complete information about the hillslope and the experiment can be found elsewhere (Pangle et al., 2015; Volkmann et al., 2018); here, we only summarize the most relevant information.

The first part of the animation describes the experimental system and setup (time 00:12-04:14 in Animation S1). The LEO hillslope is a 330 m3 (30 m long, 11 m wide, and 1 m deep) sloping soil lysimeter. The hillslope is primarily made up of loamy-sand-textured basaltic tephra, and the most downslope 5.5 m3 is filled with gravel-textured basaltic tephra. A custom irrigation system supplies reverse-osmosis-filtered water onto the LEO surface. The downslope boundary is exposed to atmospheric pressure, creating a seepage-face boundary condition.
The sensor networks (including pressure transducers and volumetric water content sensors) and the water isotope sampling locations and intervals (7 hr to 101 hr) are illustrated in Animation S1 (time 02:09–03:01). The isotope composition of subsurface water was obtained from laser-based online measurements of vapor extracted via custom gas probes, using an equilibrium calculation (Volkmann & Weiler, 2014). The irrigation sequence of this experiment was designed to generate a periodic steady state, which allows application of the PERiodic Tracer Hierarchy method (Harman & Kim, 2014) for observing the time-variable transit time distributions and the StorAge Selection (SAS) functions. Deuterium-labeled water was irrigated during the first two irrigation events. The second part of the animation shows the dynamics of the perched water table and soil water content (time 04:15–06:53). The extent of the saturated zone was estimated from the pressure transducer data using Delaunay triangulation (Delaunay, 1934). The experimental data show saturation from below (wetting up from the bedrock surface into the soil profile; McDonnell, 1997) and saturation progressing from downslope to upslope. The water table profile forms a wedge-like shape, a characteristic of hillslopes with a high hillslope (Peclet) number (Berne et al., 2005; Brutsaert, 1994); the hillslope Peclet number of the LEO hillslope during the experiment was high (> 10) (Kim et al., 2020). Significant time delays in the water table dynamics are observed at some upslope locations (e.g., at 13 m upslope), mostly due to the delayed water supply from the convergent upslope area. The water content data indicate that the convergent upslope water content began to decrease around the time of the water table peak at 13 m upslope. The third part of the animation shows the tracer dynamics (from time 06:43).
The animated experimental data reveal two notable water transport dynamics. First, vertical tracer movement is faster at the upslope. This is, in a sense, counter-intuitive because the upslope region is drier than the downslope. It results from lateral flow in the saturated zone and the tension-saturated zone, which are thicker at the downslope: while water velocity is higher at the downslope, its direction there is not vertical but rotated toward the downslope. Second, the animated data illustrate that old water is present only at the downslope. This observation is a characteristic of hillslopes with a high hillslope Peclet number, in which old water is preferentially discharged (Kim et al., 2020). Indeed, the observed StorAge Selection (SAS) function in this hillslope is concave (Kim et al., 2020), indicating that the hillslope preferentially discharges the old water stored at the downslope.
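The abstract mentions that the saturated-zone extent was estimated from point pressure-transducer data via Delaunay triangulation, without giving details. One simple interpretation (an illustrative sketch, not the authors' implementation; the function name and all-vertices-saturated rule are assumptions) is to triangulate the sensor locations and sum the areas of triangles whose vertices all indicate saturation:

```python
import numpy as np
from scipy.spatial import Delaunay

def saturated_area(points, saturated):
    """Estimate saturated-zone extent from point sensors.

    points   : (N, 2) array of sensor (x, z) coordinates
    saturated: (N,) boolean array, True where the pressure head
               indicates saturation

    Triangulates all sensor locations and sums the areas of the
    triangles whose three vertices are all saturated -- a simple
    interpolation of the saturated region between sensors.
    """
    tri = Delaunay(points)
    area = 0.0
    for simplex in tri.simplices:
        if saturated[simplex].all():
            a, b, c = points[simplex]
            # triangle area via the 2-D cross-product determinant
            area += 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                              - (b[1] - a[1]) * (c[0] - a[0]))
    return area
```

For example, four sensors at the corners of a unit square, all saturated, yield a saturated area of 1.0; marking corners unsaturated shrinks the estimate triangle by triangle.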
Clay aquitards, semipermeable membranes that allow groundwater flow while retarding solute migration, have been researched extensively but remain subject to debate. In this study, we collected clay samples from drilling cores (30–90 m) in the Hengshui area of the Hebei Plain, then extracted pore water using a high-pressure squeezing device. Vertical trends in the hydrochemical and isotopic profiles of the pore water were revealed using hydrochemical measurements (Cl−, Na+, Ca2+, K+, Mg2+, and SO42−) and stable isotopic measurements of H, O, and Cl. The results showed that the pore water of the clay interlayer in the saline aquifer is of the Cl•SO4-Na•Mg type, with average total dissolved solids (TDS) of 10.17 g/L. The clay aquitard pore water is of the Cl•SO4-Na•Ca type, with an average TDS of 1.9 g/L. The pore water of the clay interlayer in aquifer II is of the Cl-Na•Ca type, with an average TDS of 1.1 g/L. Our results showed that the water quality of aquifer II is not affected by the overlying saline aquifer; the clay aquitard thus acts as a significant barrier to salt movement. A concentration polarization layer, enriched in ions, formed between the upper part of the saline aquifer and the clay aquitard, and this layer increases the salt-inhibition effect. Isotopic H, O, and Cl results showed significant fractionation. The pore water of aquifer II lacked heavy isotopes (D, 18O, 37Cl), whereas the concentration polarization layer showed significant heavy-isotope enrichment (δD of −76‰, δ18O of −8.4‰, and δ37Cl of 1.59‰). Hyperfiltration thus played a significant role in the isotope fractionation.
Soil moisture plays a significant role in land-atmosphere interactions. Changes in the partitioning of latent and sensible heat fluxes caused by soil moisture variations can affect near-surface air temperature, thus influencing the cooling effect of oases in arid regions. In this study, a framework based on the dependence of the evaporative fraction (EF) on soil moisture is used to analyze the impacts of soil moisture variation on near-surface air temperature and the oasis effect. The results showed that the sensitivity of EF to soil moisture was significantly higher than that of air temperature to EF. Under the combined effect of the temperature sensitivity to EF and the EF sensitivity to soil moisture, ∂T/∂θ showed spatiotemporal variation similar to both: it was strongest in oasis areas during summer (−1.676) and weakest in plain desert areas during autumn (−0.071). In the study region, soil moisture variation can alter air temperature by 0.018–0.242 K across different land-cover types in summer, and the maximum summer soil moisture variation can alter air temperature by up to 0.386 K. The difference in temperature variability between oasis and desert areas promotes the formation of the oasis effect. Across different oases, the multi-year average oasis cold-effect index (OCI) ranged from −1.36 K to −0.26 K, while the average summer OCI ranged from −1.38 K to −0.29 K. The lower bound of the oasis cooling effect ranged from −4.97 K to −1.69 K. The analysis framework and results of this study provide a new perspective for further research on the evolution of the oasis effect and the water-heat balance in arid areas.
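The sensitivity chain described above can be written as ∂T/∂θ = (∂T/∂EF)(∂EF/∂θ), with a first-order temperature change ΔT ≈ (∂T/∂θ)Δθ. A minimal sketch of this arithmetic (the function names and the Δθ value are illustrative assumptions, not from the study):

```python
def temperature_sensitivity(dT_dEF, dEF_dtheta):
    """Chain-rule sensitivity of air temperature T to soil moisture theta:
        dT/dtheta = (dT/dEF) * (dEF/dtheta)
    """
    return dT_dEF * dEF_dtheta

def temperature_change(dT_dtheta, delta_theta):
    """First-order estimate of the air-temperature change driven by a
    soil-moisture change delta_theta:
        dT ~= (dT/dtheta) * delta_theta
    """
    return dT_dtheta * delta_theta

# Illustrative only: with the summer oasis sensitivity of -1.676 reported
# above and a hypothetical soil-moisture increase of 0.1 m3/m3, air
# temperature would drop by about 0.168 K.
```

The sign convention makes the cooling mechanism explicit: wetter soil raises EF, a higher EF lowers air temperature, so ∂T/∂θ is negative over the oasis.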