The Hunga Tonga Volcano eruption launched a myriad of atmospheric waves that have been observed to travel around the world several times. These waves generated Traveling Ionospheric Disturbances (TIDs) in the ionosphere, which are known to adversely impact radio applications such as Global Navigation Satellite Systems (GNSS). One such GNSS application is Precise Point Positioning (PPP), which can achieve cm-level accuracy using a single receiver, following a typical convergence time of 30 minutes to 1 hour. A network of ionosondes located throughout the Australian region was used in combination with GNSS receivers to explore the impacts of the Hunga Tonga Volcano eruption on the ionosphere and the subsequent impacts on PPP. It is shown that PPP accuracy was not significantly impacted by the arrival of the TIDs and spread-F, provided that PPP convergence had already been achieved. However, when the PPP algorithm was initiated from a cold start either shortly before or after the TID arrivals, the convergence times were significantly longer. GNSS stations in northeastern Australia experienced increases in convergence time of more than 5 hours. Further analysis reveals the increased convergence times to be caused by a super equatorial plasma bubble (EPB), the largest observed over Australia to date. The EPB structure was found to be ~42 TECU deep and ~300 km across, traveling eastward at 30 m/s. The Hunga Tonga Volcano eruption serves as an excellent example of how ionospheric variability can impact real-world applications, and of the challenges associated with modeling the ionosphere to support GNSS.
Recently, Zhou (2022) reported a temporal change of seismic velocity in the Earth's outer core based on relative travel time differences of the SKS phase between several "doublets". The study further suggested the existence of a possible 2-3% density deficit in the outer core and a localized transient flow with a speed of ~40 km/year. We examine the seismic data of the best-quality "doublet" (event pair 19970503-20180910) reported in the study. We relocate the "doublet" using a master-event relocation method (Wen, 2006) and the seismic data of compressional waves that travel outside the outer core, including P or Pdiff, pP or pPdiff, pPn, PP or PdiffPdiff, and PcP waves recorded at the global seismographic network. The later event (20180910) is found to be located 14.20 km from the earlier event (19970503), at an azimuth of 204.33°, with a source depth 1.45 km deeper. After correction for the effects of relative source location and origin time, the SKS signals exhibit no discernible relative travel time differences between the two events in the frequency band ≥0.2 Hz at all four of the most anomalous stations (COLA, INK, ULN, YAK) reported in Zhou (2022). However, the SPdKS-SKPdS phases, which begin to bifurcate from the SKS phases at the distance range of those four reported anomalous stations, exhibit clear changes in waveform and travel time between the events. The "SKS signals" used in Zhou (2022), which had a 50-s time window and were band-pass filtered between 0.01 and 0.05 Hz, contain signals of both the SKS and SPdKS-SKPdS phases. It is the changes in the SPdKS-SKPdS phases, not those in the SKS phases, that generate the apparent time shift in the low-frequency filtered "SKS signals" reported in Zhou (2022). The SPdKS-SKPdS phases of those reported anomalous stations sample a lowermost mantle region populated with ultra-low velocity zones (ULVZs). The separation of the two events is large, and the SPdKS-SKPdS phases would sample the ULVZs along slightly different paths for the two events, yielding different waveforms and travel times (Wen & Helmberger, 1998). We conclude that there is no observable temporal change of seismic properties in the Earth's outer core in the seismic data used in Zhou (2022), and that the reported relative travel time difference in the "SKS signals" of Zhou (2022) is caused by waveform and relative travel time changes in the SPdKS-SKPdS phases due to slightly different sampling paths to the ULVZs at the bottom of the mantle between the events.
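For illustration, a minimal sketch of how such a relative travel-time shift can be measured by waveform cross-correlation is given below, using the open-source ObsPy toolkit; the file names, station, time window, and predicted arrival time are hypothetical placeholders, not the actual data or processing of this study.

```python
# Minimal sketch: measuring a relative travel-time shift between two
# "doublet" waveforms by cross-correlation (ObsPy). File names and the
# predicted SKS arrival time are hypothetical placeholders.
import obspy
from obspy.signal.cross_correlation import correlate, xcorr_max

st1 = obspy.read("19970503.COLA.BHZ.sac")  # earlier event (hypothetical file)
st2 = obspy.read("20180910.COLA.BHZ.sac")  # later event (hypothetical file)

for st in (st1, st2):
    st.detrend("demean")
    # Same low-frequency band as the "SKS signals" in Zhou (2022)
    st.filter("bandpass", freqmin=0.01, freqmax=0.05, corners=4, zerophase=True)

tr1, tr2 = st1[0], st2[0]
# Trim both traces to a 50-s window around the predicted SKS arrival
# (in practice predicted from a 1-D model such as ak135; placeholder here).
t_sks = 1450.0  # seconds after trace start, hypothetical
tr1.trim(tr1.stats.starttime + t_sks - 10, tr1.stats.starttime + t_sks + 40)
tr2.trim(tr2.stats.starttime + t_sks - 10, tr2.stats.starttime + t_sks + 40)

# Cross-correlate with up to +/-5 s of allowed shift (equal-length traces)
max_shift = int(5 * tr1.stats.sampling_rate)
cc = correlate(tr1.data, tr2.data, max_shift)
shift_samples, cc_value = xcorr_max(cc)
dt = shift_samples / tr1.stats.sampling_rate
print(f"relative shift: {dt:.2f} s (cc = {cc_value:.2f})")
```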
The Planetary Data System (PDS) maintains archives of data collected by NASA missions that explore our solar system. The PDS Cartography and Imaging Sciences Node (Imaging Node) provides access to millions of images of planets, moons, and other bodies. Given the large and continually growing volume of data, there is a need for tools that enable users to quickly search for images of interest. Each image archived at the PDS Imaging Node is described by a rich set of searchable metadata properties, such as the time it was collected and the instrument used. However, users often wish to search on the content of the image to find those images most relevant to their scientific investigation or individual curiosity. To enable content-based search of the large image archives, we utilized machine learning techniques to create convolutional neural network (CNN) classification models. The initial CNN classification results for rover missions (i.e., Mars Science Laboratory and Mars Exploration Rover) and orbiter missions (i.e., Mars Reconnaissance Orbiter, Cassini, and Galileo) were deployed at the PDS Image Atlas (https://pds-imaging.jpl.nasa.gov/search) in 2017. With the content-based search capability, users of the PDS Image Atlas can search using a list of pre-defined classes and quickly find relevant images. For example, users can search "Impact ejecta" and find the images containing impact ejecta from the archive of the Mars Reconnaissance Orbiter mission. All of the CNN classification models were trained using the transfer learning approach, in which we adapted a CNN model pretrained on Earth images to classify planetary images. Over the past several years, we employed the following three techniques to improve the efficiency of collecting labeled data sets, the accuracy of the models, and the interpretability of the classification results:
· First, we used the marginal-probability based active learning (MP-AL) algorithm to improve the efficiency of collecting labeled data sets.
· Second, we used the classifier chain and ensemble approaches to improve the accuracy of the classification results.
· Third, we incorporated the prototypical part network (ProtoPNet) architecture to improve the interpretability of the classification results.
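As an illustration of the transfer learning approach described above, the following minimal sketch adapts an ImageNet-pretrained CNN to a planetary classification task using PyTorch; the backbone, class count, and training choices are illustrative assumptions, not the deployed models.

```python
# Minimal sketch of transfer learning: adapt a CNN pretrained on Earth
# images (ImageNet) to classify planetary images. The class count and
# layer choices are illustrative, not the deployed PDS models.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 25  # e.g., "impact ejecta", "crater", ... (hypothetical count)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the pretrained feature extractor ...
for param in model.parameters():
    param.requires_grad = False

# ... and replace the final classification layer with one sized for the
# planetary classes; only this layer is trained initially.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a labeled mini-batch."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```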
A major challenge to community code development and management is the testing and validation of public contributions. The community-developed GFDL Finite-Volume Cubed-Sphere Dynamical Core (FV3) is no exception: automated testing of contributions made to the FV3 public repository is paramount for ensuring the integrity of the many Earth system models and forecasting applications using FV3 as a dynamical core. A build and test system for the FV3 dynamical core was developed for internal testing on NOAA Research and Development High Performance Computing Systems (RDHPCS). We have designed a continuous integration (CI) approach for the FV3 dynamical core GitHub repository that uses a cloud-based platform to perform automated compilation and reproducibility testing to validate community code contributions. A combination of NOAA RDHPCS Parallel Works virtual machines and containers developed at GFDL is used to compile and test code efficiently on the cloud. We will also discuss how we adapted the FV3 tests for automated CI.
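To illustrate the kind of reproducibility test such a CI pipeline can run, the following conceptual sketch checksums a candidate run's output files against a trusted baseline; the directory layout and NetCDF file naming are hypothetical, and the actual FV3 test harness is not shown.

```python
# Conceptual sketch of a bitwise-reproducibility check of the kind a CI
# pipeline can run after compiling and executing a test case: output files
# from the candidate run are checksummed against a trusted baseline.
# Paths and file names are hypothetical.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 in 1-MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def check_reproducibility(baseline_dir: str, candidate_dir: str) -> bool:
    """Return True if every baseline output file is bitwise identical."""
    ok = True
    for ref in Path(baseline_dir).glob("*.nc"):
        cand = Path(candidate_dir) / ref.name
        if not cand.exists() or sha256(ref) != sha256(cand):
            print(f"FAIL: {ref.name}")
            ok = False
    return ok

if __name__ == "__main__":
    assert check_reproducibility("baseline/", "candidate/")
```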
Geomagnetic storms are primarily driven by stream interaction regions (SIRs) and coronal mass ejections (CMEs). Since SIR and CME storms have different solar wind and magnetic field characteristics, the magnetospheric response may vary accordingly. Using FAST/TEAMS data, we investigate the variation of ionospheric O+ and H+ outflow as a function of geomagnetic storm phase during SIR and CME magnetic storms. The effects of storm size and solar EUV flux, including solar cycle and seasonal effects, on storm-time ionospheric outflow are also investigated. The results show that for both CME and SIR storms, the O+ and H+ fluences peak during the main phase and then decline in the recovery phase. However, for CME storms, there is also a significant increase during the initial phase. Because the outflow starts during the initial phase in CME storms, there is time for the O+ to reach the plasma sheet before the start of the main phase. Since plasma is convected into the ring current from the plasma sheet during the main phase, this may explain why more O+ is observed in the ring current during CME storms than during SIR storms. We also find that outflow fluence is higher for large storms than moderate storms and is higher during solar maximum than solar minimum.
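For illustration, the fluence calculation implied above (the time integral of outflow flux over each storm phase) can be sketched as follows; the time series and phase boundaries are synthetic placeholders, not FAST/TEAMS data.

```python
# Sketch of a per-phase fluence calculation: integrate an outflow flux
# time series over each storm phase. Data and phase boundaries are
# synthetic placeholders, not FAST/TEAMS observations.
import numpy as np

t = np.arange(0.0, 48.0, 0.5)                # hours from storm onset (synthetic)
flux = np.random.lognormal(18, 1, t.size)    # O+ number flux [ions/(cm^2 s)], fake

phases = {"initial": (0, 6), "main": (6, 14), "recovery": (14, 48)}  # hours

for name, (t0, t1) in phases.items():
    m = (t >= t0) & (t < t1)
    # fluence = integral of flux dt (convert hours -> seconds)
    fluence = np.trapz(flux[m], t[m] * 3600.0)
    print(f"{name:9s} fluence: {fluence:.2e} ions/cm^2")
```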
Key Points:
- The Lagrangian-based HYSPLIT modelling system is used to estimate volcanic ash particle trajectories.
- HYSPLIT simulations were performed for the periods before and after the massive eruption on 15 January 2022 (termed pre-caldera and post-caldera, respectively, in Section 5).
- Volcanic ash particle deposition and volcanic ash particle position are simulated using HYSPLIT for the HTHH submarine volcano massive eruption event.

Abstract
Volcano-seismic signals such as long-period (LP) events and tremors are important indicators of volcanic activity and unrest. Explosive volcanic eruptions are stunning phenomena that influence the Earth's natural systems and climate in a variety of ways. This paper discusses the January 2022 eruption of the Hunga Tonga-Hunga Ha'apai (HTHH) submarine volcano, especially the 15 January event, which had many impacts in the region (dynamic, chemical, climate breakdown). Given the potential for a volcanic eruption to affect climate, the oceanic system, or climate variability, consistent and understandable modelling of these exceptional events is critical. The main objective was to determine the volcanic effects in the atmospheric boundary layer (ABL) during the multiple eruptive events that occurred at HTHH in January 2022. Our discussion also contributes to understanding the underlying Earth system dynamics triggered by cataclysmic volcanic eruptions. The Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) modelling system developed by the National Oceanic and Atmospheric Administration's (NOAA) Air Resources Laboratory was used to investigate the effects caused by the multiple eruptions of HTHH in mid-January 2022. Our modelling results include model trajectories at different levels, volcanic ash deposition, and ash particle positions from the series of multiple eruption events of the submarine volcano HTHH in mid-January 2022.
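To illustrate the Lagrangian principle underlying HYSPLIT (without reproducing the HYSPLIT code itself), the following toy sketch advects a parcel forward through a wind field; the wind function is a stand-in for interpolated meteorological analyses, and the start point approximates the HTHH vent.

```python
# Conceptual sketch of the Lagrangian idea underlying HYSPLIT (not the
# HYSPLIT code itself): advect a parcel forward by integrating the wind
# at the parcel position. The wind function is a stand-in for gridded
# meteorology (e.g., GDAS/ERA5) interpolated in space and time.
import numpy as np

def wind(lon, lat, alt, t):
    """Placeholder horizontal wind (u, v in m/s)."""
    return 10.0, 2.0

def trajectory(lon0, lat0, alt, hours, dt_s=600.0):
    """Forward trajectory from (lon0, lat0) using simple Euler steps."""
    R = 6.371e6  # Earth radius, m
    lon, lat, t = lon0, lat0, 0.0
    path = [(lon, lat)]
    while t < hours * 3600.0:
        u, v = wind(lon, lat, alt, t)
        lat += np.degrees(v * dt_s / R)
        lon += np.degrees(u * dt_s / (R * np.cos(np.radians(lat))))
        t += dt_s
        path.append((lon, lat))
    return path

# 24-h trajectory started above the HTHH vent (~20.5 S, 175.4 W)
path = trajectory(-175.4, -20.5, alt=15000.0, hours=24)
print(f"end point after 24 h: {path[-1]}")
```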
Though precise, most LiDARs are vulnerable to position and pointing errors, as deviations from the expected principal axis lead to projection errors on the target. While the fidelity of location and pointing solutions can be high, determination of their uncertainty remains relatively limited. As a result, NASA's 2021 Surface Topography and Vegetation (STV) Incubation Study Report lists vertical accuracy as an associated parameter for all identified Science and Application Knowledge Gaps (and horizontal and geolocation accuracy for most), and identifies maturation of Uncertainty Quantification (UQ) methodologies on the STV Roadmap for this decade. The generalized Polynomial Chaos Expansion (gPCE) based method presented here has wide-ranging applicability for improving positioning and geolocation uncertainty estimates across all STV disciplines, and is extended from the bare-earth to the bathymetric LiDAR use case, adding the complexity introduced by entry angle, wave structure, and sub-surface roughness. This research addresses knowledge gaps in bathymetric LiDAR measurement uncertainty through a more complete description of total aggregated uncertainties, from the system level to geolocation, by applying a gPCE-UQ approach. Currently, the standard approach is the calculation of the Total Propagated Uncertainty, which is often plagued by simplifying approximations (e.g., strictly Gaussian uncertainty sources) and ignored covariances. gPCE intrinsically accounts for covariance between variables when determining uncertainty in a measurement, without manually constructing a covariance matrix, through a surrogate model of the system response. Additionally, gPCE allows arbitrarily high-order uncertainty estimates (limited only by the one-time computational cost of computing the gPCE coefficients), accurate representation of non-Gaussian sources of error (e.g., wave height energy distributions), and direct integration of measurement requirements into the design of LiDAR systems, by making global sensitivity analysis computationally trivial.
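As a minimal illustration of a gPCE-UQ workflow of the kind described above, the following sketch uses the open-source chaospy library with a toy two-input geolocation model; the distributions, parameters, and forward model are illustrative assumptions, not the presented system model.

```python
# Minimal sketch of a gPCE-based UQ workflow using the open-source
# chaospy library. The forward model, distributions, and parameters are
# illustrative stand-ins for a real bathymetric-lidar geolocation model.
import numpy as np
import chaospy

# Non-Gaussian inputs are handled natively: pointing error (Gaussian)
# and wave height (skewed; log-normal used as a stand-in here).
pointing = chaospy.Normal(0.0, 1e-4)   # rad (hypothetical)
wave_h = chaospy.LogNormal(0.0, 0.4)   # m (hypothetical)
joint = chaospy.J(pointing, wave_h)

def geolocation_error(theta, h, altitude=500.0):
    """Toy forward model: horizontal error from pointing + wave effects."""
    return altitude * np.tan(theta) + 0.05 * h  # m, illustrative only

# Build the polynomial basis and fit a surrogate model by regression.
expansion = chaospy.generate_expansion(3, joint)
samples = joint.sample(500, rule="sobol")
evals = [geolocation_error(th, h) for th, h in samples.T]
surrogate = chaospy.fit_regression(expansion, samples, evals)

# Moments and variance-based (Sobol) global sensitivities follow directly.
print("mean :", chaospy.E(surrogate, joint))
print("std  :", chaospy.Std(surrogate, joint))
print("Sobol:", chaospy.Sens_m(surrogate, joint))
```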
The identification of prospective groundwater recharge zones is crucial for supplementing groundwater resources. It is especially critical in hard rock regions, where groundwater is the principal source of potable water and is fast disappearing due to uncontrolled mining. The present study used a combination of modern methodologies and technologies to analyze the occurrence of groundwater potential zones, including geographic information systems (GIS), remote sensing (RS), electrical resistivity surveys, i.e., vertical electrical sounding (VES), and multi-criteria decision analysis (MCDA). Several thematic layers were prepared, including geomorphology, lineament density, drainage density, soil type, geology, rainfall, soil texture, elevation, and land use/land cover (LULC), which were weighted according to their impact on the groundwater prospect zone. The analytic hierarchy process (AHP) was used to normalize the relative weights assigned to these layers. Vertical electrical sounding was used to locate water-bearing formations and fracture zones at various points throughout the selected region. The five prospective groundwater prospect zones delineated using these methods were classified as very low, low, moderate, high, and very high. The delineated high groundwater potential zones were found in the northeastern part and just below the central region, while low to moderate zones were distributed almost evenly over the rest of the region. The result was validated using well yield data, which showed 72 percent agreement with the delineated groundwater potential zones. Hence, the AHP model in the current work performed well in terms of prediction accuracy.
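For reference, the AHP weighting step can be sketched as follows: layer weights are taken from the principal eigenvector of a pairwise comparison matrix, and the consistency ratio (CR < 0.1 is the usual acceptance threshold) screens the judgments; the 4x4 matrix below is a hypothetical example, not the study's matrix.

```python
# Sketch of the AHP weighting step: derive thematic-layer weights from a
# pairwise comparison matrix via its principal eigenvector and check the
# consistency ratio. The 4x4 matrix is a hypothetical example.
import numpy as np

# Saaty 1-9 scale comparisons for, e.g., geomorphology, lineament
# density, drainage density, rainfall (illustrative values only).
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                  # normalized layer weights

n = A.shape[0]
lambda_max = eigvals[k].real
CI = (lambda_max - n) / (n - 1)           # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random index
CR = CI / RI                              # accept judgments if CR < 0.1
print("weights:", np.round(weights, 3), " CR =", round(CR, 3))
```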
V. S. Gokul1,2,3, K. M. Sreejith4, G. Srinivasa Rao5, M. Radhakrishna1*, P. G. Betts2

1 Department of Earth Sciences, Indian Institute of Technology Bombay, Mumbai 400 076, India.
2 School of Earth, Atmosphere and Environment, Monash University, Clayton, Victoria 3800, Australia.
3 IITB-Monash Research Academy, Indian Institute of Technology Bombay, Mumbai 400 076, India.
4 Geosciences Division, Space Applications Centre, Ahmedabad 380 015, India.
5 Department of Applied Geophysics, Indian Institute of Technology (Indian School of Mines) Dhanbad, Dhanbad 826 004, India.
(*Corresponding author: [email protected])

Abstract
The Indian Ocean hosts the largest geoid anomaly on Earth, known as the Indian Ocean Geoid Low (IOGL). This long-wavelength geoid depression has a magnitude of −106 m and is centered south of India. The nature and depth of the sources causing this characteristic low are poorly constrained and have been the subject of debate. In this abstract, we focus on understanding the density contributions to the geoid low from the crust and upper mantle using a joint analysis of geoid and gravity data along with published tomographic models of the region. Decomposition of the geoid anomaly in the spectral domain indicates that mass anomalies below the upper mantle (>700 km) contribute ~90% of the total geoid anomaly. To compute the upper mantle contribution to the IOGL, we used the Moho geometry and crustal density structure from 3-D gravity inversion, and the SL2013sv tomography model for the upper mantle density structure. Comparison of the crustal and upper mantle (up to 700 km) geoid response beneath the IOGL with the n = 10 residual geoid anomaly confirms the presence of density sources within the sub-lithospheric mantle that were not resolved in the modeling. Integrated gravity-geoid 2-D modeling of the geometries of the anomalous sources, located at the base of the lithosphere (LAB) and at a depth of 320-340 km, respectively, confirms that the contribution of density structures up to 700 km explains only ten percent of the IOGL, which matches well with the spectral decomposition results. This suggests that lower mantle sources, such as paleo-subducted slabs or plume sources from the core-mantle boundary, contribute significantly to the IOGL.
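For reference, the spectral decomposition referred to above follows the standard spherical-harmonic expansion of the geoid; the sketch below uses conventional notation (R is the mean Earth radius, C̄nm and S̄nm the fully normalized Stokes coefficients, P̄nm the fully normalized associated Legendre functions), and the exact degree band retained in the n = 10 residual follows the study's convention.

```latex
% Spherical-harmonic expansion of the geoid undulation N (standard notation):
N(\theta,\lambda) = R \sum_{n=2}^{n_{\max}} \sum_{m=0}^{n}
  \left( \bar{C}_{nm}\cos m\lambda + \bar{S}_{nm}\sin m\lambda \right)
  \bar{P}_{nm}(\cos\theta)

% Band-limited field retaining degrees n_1..n_2; a low-degree field such as
% the "n = 10" residual isolates the long-wavelength signal attributable to
% deep (lower-mantle) mass anomalies:
N_{n_1..n_2}(\theta,\lambda) = R \sum_{n=n_1}^{n_2} \sum_{m=0}^{n}
  \left( \bar{C}_{nm}\cos m\lambda + \bar{S}_{nm}\sin m\lambda \right)
  \bar{P}_{nm}(\cos\theta)
```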
Abstract ID and Title: 1147876 - Role of soil in regulating runoff processes in Pine- and Oak-dominated headwater catchments of the Western Himalayas
Final Paper Number: H44H-02
Presentation Type: Online Poster Discussion
Session Number and Title: H44H - Runoff Generation Processes: Exploring Thresholds, Sources, and Pathways from Plot to Continental Scales II Online Poster Discussion
Session Date and Time: Thursday, 15 December 2022; 13:45 - 14:45
Location: Online Only
Climate change and a growing global population pose ongoing threats to critical resources. As the resources required by the agriculture sector continue to diminish, it is critical to leverage emerging technologies and new solutions within the sector. New cultivation practices have emerged over the years, allowing food to be grown within urban areas. Greenhouses are versatile in the resources needed for their operation, as well as in the foods that can be grown. While greenhouses offer the potential for a more constant food supply, there is a lack of optimization between their components. There are benefits to having modular greenhouse components, allowing for adjustments or repairs to individual pieces. However, the system as a whole is inefficient, since each component functions without considering the others. To improve greenhouse efficiency, a closed-loop system can be introduced. A greenhouse is a closed system, and by repurposing, reusing, and recirculating resources, it can evolve to operate as a closed loop. This enables the components of the system to share resources more effectively, communicate any required system changes, and minimize waste outputs. This research explores current technology at the intersection of agriculture and computer science to create a fully closed-loop system. The most notable system components are food waste, nutrient systems, water systems, growing media, and heating and energy. Based on existing findings, not all components within a greenhouse can leverage the same artificial intelligence methods and techniques. Methods are in place that allow individual components to interpret data gathered from the greenhouse and alter their operational patterns, but this information is generally not communicated to the other parts of the system so that they, too, can make informed, data-driven decisions. One can optimize individual components to reduce resource reliance, but only up to a threshold beyond which the plant's development and yield are affected. When the resource needs and outputs of all system components are considered together, the system can be optimized to use resources with higher efficiency. Results indicate that existing research explores closed-loop systems within greenhouses in a very siloed and isolated manner, without leveraging their full capabilities.
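As a conceptual sketch of the closed-loop coordination described above (component names and quantities are illustrative, not measured data), a simple coordinator can pool the reusable outputs of each component and redistribute them against other components' needs:

```python
# Conceptual sketch of the closed-loop idea: greenhouse components report
# their resource outputs to a shared pool, and a simple coordinator routes
# reusable outputs (e.g., condensate water, composted nutrients) back to
# components that need them. Names and quantities are illustrative.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    needs: dict = field(default_factory=dict)    # resource -> amount needed
    outputs: dict = field(default_factory=dict)  # resource -> amount produced

def close_the_loop(components):
    """Match each component's outputs to other components' needs."""
    pool = {}
    for c in components:                         # collect reusable outputs
        for res, amt in c.outputs.items():
            pool[res] = pool.get(res, 0) + amt
    for c in components:                         # redistribute from the pool
        for res, need in c.needs.items():
            used = min(need, pool.get(res, 0))
            pool[res] = pool.get(res, 0) - used
            print(f"{c.name}: {used:.1f} of {need:.1f} {res} recirculated")
    print("unused (waste) outputs:",
          {k: round(v, 1) for k, v in pool.items() if v > 0})

close_the_loop([
    Component("hydroponic beds", needs={"water_L": 120, "nutrient_g": 300}),
    Component("dehumidifier", outputs={"water_L": 80}),
    Component("composter", outputs={"nutrient_g": 250}),
])
```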