
Inferring Airmass Properties from GOES-R ABI Observations
Kyle Hilburn
Cooperative Institute for Research in the Atmosphere/Colorado State University

Corresponding Author: [email protected]


Abstract

Radiosonde observations are the gold standard for quantifying vertical profiles of atmospheric state variables, knowledge of which is critical for quantifying moisture and instability, two main ingredients for severe weather. Unfortunately, radiosondes are very sparse, averaging just one observation per 500 x 500 km area over CONUS, and most locations have only two observations per day. This creates uncertainty in the representation of short-wavelength and rapidly evolving synoptic and mesoscale features in numerical weather prediction (NWP) and provides few points of comparison for human forecasters interpreting NWP when making forecasts. To fill this gap in our knowledge of the atmospheric state, human forecasters use satellite imagery to estimate airmass properties for incrementing NWP outputs. Data from geostationary satellites have been especially useful because of their high temporal resolution (5 minutes) and high spatial resolution (2 km). While the Advanced Baseline Imager (ABI) was not designed as a sounding sensor, its three water vapor bands and three infrared window bands do provide some sounding capability. Satellite data are particularly useful in assessing position and timing errors, the representation of short waves, and humidity. The key question addressed by this work is whether the mental process used by human forecasters can be translated into a machine learning (ML) algorithm that provides automated and objective estimates of airmass properties from ABI. Experiments with convolutional neural networks (CNNs) show that ML can indeed be used. Related research efforts, such as the NOAA Unique Combined Atmospheric Processing System (NUCAPS), have explored the use of dense neural networks (DNNs), which essentially replace a radiative transfer model with an ML model. However, we find that more skill can be achieved by exploiting the spatial information captured by CNNs. This more closely mimics the human imagery interpretation process: it is the spatial patterns in the features (as much as the pixel-wise values themselves) that carry the useful information content. We will present our latest results, focusing especially on relative humidity, compare against radiosondes, and discuss whether the skill is sufficient to make a positive impact on NWP analyses.
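
To make the CNN-versus-DNN distinction concrete, the sketch below shows one way a convolutional model could map a patch of ABI brightness temperatures (the three water vapor bands plus the three infrared window bands) to a vertical relative humidity profile. This is a minimal, hypothetical illustration only: the layer sizes, channel counts, patch size, number of output levels, and class name are assumptions for demonstration and are not the configuration used in the study.

```python
# Hypothetical sketch: a CNN that regresses a relative-humidity profile from a
# 6-band ABI image patch. All architectural choices here are illustrative
# assumptions, not the authors' actual model.
import torch
import torch.nn as nn


class ABItoRHProfile(nn.Module):
    def __init__(self, n_bands: int = 6, n_levels: int = 50):
        super().__init__()
        # Convolutional layers exploit spatial patterns in the imagery,
        # in contrast to a pixel-wise dense (fully connected) network.
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse the spatial dimensions
        )
        # Regress one relative-humidity value per assumed vertical level.
        self.head = nn.Linear(128, n_levels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_bands, height, width) brightness temperatures
        z = self.features(x).flatten(1)
        return self.head(z)


if __name__ == "__main__":
    model = ABItoRHProfile()
    patch = torch.randn(4, 6, 32, 32)  # four synthetic 32x32-pixel ABI patches
    print(model(patch).shape)          # -> torch.Size([4, 50])
```

A pixel-wise DNN of the kind used in NUCAPS-style retrievals would instead take only the six band values at a single pixel as input; the convolutional layers above are what allow the spatial context of the imagery to contribute to the retrieval.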