Detection and monitoring of tropical forest degradation is crucial to climate change mitigation and biodiversity conservation efforts. Several algorithms have recently been developed to monitor forest degradation and disturbance using remote sensing. However, these algorithms differ in their local predictions because of variation in the biogeophysical parameters used as degradation proxies. Assessing their relative performance and shortcomings is therefore crucial to developing a clear understanding of the conditions under which each algorithm will detect a disturbance. In this study, we used GEDI lidar data on forest structure to examine the sensitivity of widely used forest disturbance and degradation products in a frontier tropical forest landscape in the Peruvian Amazon. We compared a leading spectral-based degradation algorithm (Continuous Degradation Detection, CODED) with a radar-based algorithm (the ALOS-2 PALSAR-2 based Radar Forest Degradation Index, RFDI). Given the sensitivity of radar to canopy cover and volume, we hypothesized that a single radar observation may detect degradation better than a long spectral time series. We first identified stable forests for reference structure in two ways: using disturbance stratification data from CODED, and using Peruvian protected areas. Our analysis showed that CODED performed below expectations in detecting forest degradation, often including patches that were regrowing after clear-felling in its “degraded” class. Because CODED classifies spectral changes over time rather than capturing structural variability, it classified 82% of the palm plantation area as “degraded.” CODED also failed to detect degradation in forest areas that were likely partially disturbed (i.e., with low height and high cover). By contrast, the PALSAR-2 RFDI showed a significant relationship with forest height (detecting low height in degraded forests), although its predictive ability was limited by high variability and signal saturation.
Our study supports the conclusion that radar-based observations can detect forest degradation that spectral time-series observations fail to detect. Given the limited correspondence between the radar and spectral algorithms, we suggest that integrating spectral and radar data may be beneficial for mapping forest degradation.
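The RFDI compared above is commonly computed from dual-polarization backscatter as (HH − HV)/(HH + HV) in linear power units. The following is a minimal sketch of that computation; the function names and the illustrative backscatter values are ours, not from the study, and real inputs would be calibrated PALSAR-2 HH and HV backscatter.

```python
import numpy as np

def rfdi(hh, hv):
    """Radar Forest Degradation Index from linear-power backscatter.

    RFDI = (HH - HV) / (HH + HV). Values approach 1 where volume
    scattering (HV) is weak, e.g. over degraded or cleared forest,
    and are lower over intact, structurally complex canopy.
    """
    hh = np.asarray(hh, dtype=float)
    hv = np.asarray(hv, dtype=float)
    return (hh - hv) / (hh + hv)

def db_to_power(db):
    """Convert backscatter from decibels to linear power."""
    return 10.0 ** (np.asarray(db, dtype=float) / 10.0)

# Illustrative values only (not from the study): intact forest tends to
# have stronger cross-pol (HV) returns than degraded forest, so its RFDI
# is lower.
intact   = rfdi(db_to_power(-7.0), db_to_power(-12.0))
degraded = rfdi(db_to_power(-8.0), db_to_power(-17.0))
```

In practice the index would be computed per pixel over calibrated backscatter rasters; the saturation noted in the abstract arises because HH and HV both flatten out over dense, high-biomass canopy, compressing RFDI's dynamic range exactly where structural differences matter most.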
The end-Permian mass extinction event resulted in the loss of approximately 80% to 90% of marine animal species due to drastic changes in climate. Because warming was a major factor in the extinction, it has been theorized that the organisms that survived did so by moving to higher latitudes, a hypothesis consistent with tetrapod data. We hypothesized that this relationship also holds for marine mollusks and arthropods. Using Changhsingian (Late Permian) and Induan (Early Triassic) data from the Paleobiology Database, we extracted occurrences of the classes Bivalvia, Cephalopoda, Gastropoda, and Ostracoda, with 2433, 395, 379, and 1717 genus occurrences, respectively. We then used the paleolatitude of each genus occurrence to characterize the latitudinal distribution of each class before and after the Permian/Triassic transition. Comparing the paleolatitude medians before and after the mass extinction yielded the latitudinal shift of each class: 23.18° for Bivalvia, 37.45° for Cephalopoda, 29.82° for Gastropoda, and 6.29° for Ostracoda. This finding indicates that each class had a different latitudinal shift, with all classes shifting northward. We also conducted Welch t-tests to compare the latitudinal distributions and found the differences to be significant (Bivalvia: p < 2.2e-16; Cephalopoda: p = 3.83e-6; Gastropoda: p < 2.2e-16; Ostracoda: p = 0.0030). In addition, we ran multiple randomized models to compare against our observed results and, via the Kolmogorov-Smirnov test, found a significant difference between them, which suggests that the northward migration could be a biological response. Moreover, our results show that the overall latitudinal range of most classes contracted after the extinction event, with the exception of Cephalopoda.
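The statistical comparison described above can be sketched as follows. This is an illustrative outline only: synthetic normal samples stand in for the actual Paleobiology Database paleolatitudes, and all variable names and distribution parameters are our assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-occurrence paleolatitudes (degrees) of one
# class before (Changhsingian) and after (Induan) the extinction; real
# values would come from Paleobiology Database occurrence records.
pre_lat  = rng.normal(loc=-5.0, scale=20.0, size=400)
post_lat = rng.normal(loc=25.0, scale=15.0, size=300)

# Shift in the median paleolatitude, as used to quantify poleward movement.
median_shift = np.median(post_lat) - np.median(pre_lat)

# Welch t-test (unequal variances) comparing the two latitude samples.
t_stat, t_p = stats.ttest_ind(pre_lat, post_lat, equal_var=False)

# Kolmogorov-Smirnov test comparing the observed post-extinction sample
# against one randomized replicate built by reshuffling occurrences
# between the two intervals (the study ran many such replicates).
pooled = np.concatenate([pre_lat, post_lat])
shuffled = rng.permutation(pooled)[: post_lat.size]
ks_stat, ks_p = stats.ks_2samp(post_lat, shuffled)
```

A significant K-S difference between the observed and reshuffled distributions is what distinguishes a directed biological response from a pattern reproducible by random relabeling of occurrences.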
There is no longer any doubt that Earth Observation (EO) is contributing toward meeting the Sustainable Development Goals and addressing environmental challenges. Digital Earth Africa’s objective is to make freely available an EO data cube for all of Africa that democratises the capacity to process and analyse satellite data. It allows users to track changes across Africa in unprecedented detail and will provide data on a vast number of issues, including soil and coastal erosion, agriculture, forest and desert development, water quality, and changes to human settlements. To realise the full benefits of an advanced platform like Digital Earth Africa, its training program has been co-designed and co-developed with five institutions: the Regional Centre for Mapping of Resources for Development (RCMRD, Kenya), Centre de Suivi Écologique (Senegal), l’Observatoire du Sahara et du Sahel (Tunisia), AFRIGIST (Nigeria), and AGRHYMET (Niger). This co-design ensures the program meets end-users’ needs: it has been developed by the future deliverers of the program. From the trainers’ perspective, the program is built to reflect recent changes in teaching approaches and methodologies, including pedagogy that emerged during and after the Covid-19 pandemic. On the end-user side, the curriculum covers a wide spectrum of topics, from understanding satellite images and Python scripting in the JupyterLab environment to identifying solutions to SDG challenges through use cases, and is available in English and French. Digital Earth Africa’s Gender Equity, Diversity and Social Inclusion (GEDSI) strategy runs as a watermark across the whole program, prioritising gender equality, diversity, and social inclusion so that women, people with disabilities, and marginalised individuals and communities have the same opportunities to benefit from EO data.
In addition, Digital Earth Africa has started live virtual sessions to stay connected with end users, who have developed impactful stories in their communities. Digital Earth Africa seeks to support the capacity development of individuals, academic and governmental institutions, and private-sector organisations to empower the present and next generations of decision makers to drive toward a sustainable future, leaving no one and no place behind.
NASA’s Planetary Data System (PDS)* contains data collected by missions to explore our solar system. This includes the Lunar Reconnaissance Orbiter (LRO), which has collected as much data as all other planetary missions combined. Currently, PDS offers no way to search lunar images based on content. Working with the PDS Cartography and Imaging Sciences Node (IMG), we developed LROCNet, a deep learning (DL) classifier for imagery from LRO’s Narrow Angle Cameras (NACs). NAC images are 5 km-wide swaths at nominal orbit, so we perform a saliency detection step to find surface features of interest. A detector developed for Mars HiRISE (Wagstaff et al., 2021) worked well for our purposes after we updated it for LROC image resolution. We use this detector to create a set of image chipouts (small cutouts) from each larger image, sampling the lunar globe. The chipouts are used to train LROCNet. We selected classes of interest based on what is visible at NAC resolution, consulting with scientists and performing a literature review. Initially, we had 7 classes: fresh crater, old crater, overlapping craters, irregular mare patches, rockfalls and landslides, “of scientific interest,” and none. Using the Zooniverse platform, we set up a labeling tool and labeled 5,000 images. We found that fresh crater made up 11% of the data and old crater 18%, with the vast majority labeled none. Due to limited examples of the other classes, we reduced our initial class set to: fresh crater (with impact ejecta), old crater, and none. We divided the images into train/validation/test sets, making sure no image swath spans multiple sets, and fine-tuned pre-trained DL models. VGG-11, a standard DL model, gave the best performance on the validation set, with an overall accuracy of 82% on the test set. We had 83% label agreement in our human labeling study; labeling was difficult because there are no clear class boundaries. Our DL model’s accuracy is therefore similar to that of human labelers.
64% of fresh craters, 80% of old craters, and 86% of the none class are classified correctly. Predictions from this model will be integrated with IMG’s Atlas, allowing users to interactively search for classes of interest. *https://pds-imaging.jpl.nasa.gov Copyright © 2022, California Institute of Technology. U.S. Government sponsorship acknowledged.
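The abstract above notes that the train/validation/test split was built so that no image swath spans multiple sets. A minimal sketch of such a group-aware split is below; the function and identifier names are hypothetical, not from the LROCNet code.

```python
import random
from collections import defaultdict

def split_by_swath(chipouts, swath_of, fracs=(0.7, 0.15, 0.15), seed=0):
    """Split chipouts into train/val/test so that every chipout cut from
    a given image swath lands in the same set (no swath spans sets).

    chipouts: list of chipout identifiers (hypothetical).
    swath_of: dict mapping chipout id -> parent swath id (hypothetical).
    """
    # Group chipouts by their parent swath, then shuffle and split the
    # swaths themselves, so related chipouts stay together.
    groups = defaultdict(list)
    for c in chipouts:
        groups[swath_of[c]].append(c)

    swaths = sorted(groups)
    random.Random(seed).shuffle(swaths)

    n_train = round(fracs[0] * len(swaths))
    n_val = round(fracs[1] * len(swaths))

    train = [c for s in swaths[:n_train] for c in groups[s]]
    val   = [c for s in swaths[n_train:n_train + n_val] for c in groups[s]]
    test  = [c for s in swaths[n_train + n_val:] for c in groups[s]]
    return train, val, test

# Toy example: 6 chipouts cut from 3 swaths.
swath_of = {"c1": "s1", "c2": "s1", "c3": "s2",
            "c4": "s2", "c5": "s3", "c6": "s3"}
train, val, test = split_by_swath(list(swath_of), swath_of,
                                  fracs=(0.34, 0.33, 0.33))
```

Splitting by swath rather than by chipout prevents leakage: chipouts from the same swath share lighting and terrain, so letting them straddle sets would inflate test accuracy.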
This poem imagines the “essence” of water as it circulates through the Earth system, synergistically supporting all environments and living ecosystems, forming and shaping land and life. The poem links key elements of the interactive global water cycle and international programs to sustainably manage natural and socioeconomic resources, given the challenge of climate change. It is written in awareness of: the Essential Water Variables (EWVs) of the Group on Earth Observations (GEO) Global Water Sustainability (GEOGLOWS) initiative; the Earth Observations (EO) for the Water-Energy-Food Nexus (EO4WEF) community activity; the UN Sustainable Development Goals (UN SDGs); and the UNFCCC on climate change. The poem hopes to bring water to the forefront of consciousness. Readers are invited to comment on the intangible “feelings” evoked by the poem.
The various extreme weather events that occurred globally in 2021, from Europe to China to North America, served as yet another reminder that robust strategies for climate adaptation are crucial at a time of rapid global warming. Building resilient communities and lessening the impact that natural disasters have on vulnerable infrastructure can be aided by automated systems driven by machine learning algorithms trained on Earth observation data. When deployed, computer vision models can analyze satellite imagery in real time and inform decision makers and nongovernmental organizations about the timely and targeted allocation of resources and humanitarian aid personnel to affected areas. Here, we review several specific 2021 extreme events and the factors that caused the loss of life, damage to infrastructure, and economic loss. The events surveyed include flooding in Germany, wildfires in Greece, and Hurricane Ida in the Eastern United States. Taking this information into account, we further discuss barriers to the large-scale deployment of current machine learning technologies, especially models trained on Earth observation data. We examine the limitations of satellite imagery and big data applications in detecting damage and building collapse, and how Interferometric Synthetic Aperture Radar (InSAR) can be a tool to resolve existing issues. The aim of this work is to understand why many state-of-the-art models being developed have not yet been successfully and extensively deployed in the real world and to foster discussion about optimizing the use of deep learning technology to save lives and lead effective disaster management efforts.