Large earthquakes rupture faults over hundreds of kilometers within minutes. Finite-fault models elucidate these rupture processes and provide observational constraints for understanding earthquake physics. However, finite-fault inversions are subject to non-uniqueness and substantial uncertainties. The diverse range of published models for the well-recorded 2011 M_w 9.0 Tohoku-Oki earthquake aptly illustrates this issue, and the details of its rupture process remain under debate. Here, we comprehensively compare 32 finite-fault models of the Tohoku-Oki earthquake and analyze the sensitivity of three commonly used observational data types (geodetic, seismic, and tsunami) to the slip features they identify. We first project all models onto a realistic megathrust geometry with a 1-km subfault size. At this scale, we observe poor correlation among the models, irrespective of the data type used. However, model agreement improves significantly as subfault size is increased, implying that the differences primarily stem from small-scale features. We then forward-compute geodetic and teleseismic synthetics and compare them with observations. We find that seismic observations are sensitive to rupture propagation characteristics, such as the peak-slip rise time. However, neither teleseismic nor geodetic observations are sensitive to spatial slip features smaller than 64 km. In contrast, the synthesized seafloor deformation of all models correlates poorly, indicating sensitivity to small-scale slip features. Our findings suggest that fine-scale slip features cannot be unambiguously resolved by remote or sparse observations, such as the three data types tested in this study. Better resolution may, however, become achievable with dense, uniformly gridded offshore instrumentation.
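To make the comparison procedure concrete, the following is a minimal sketch, not the authors' code, of how slip models projected onto a common 1-km grid might be compared at progressively coarser subfault sizes: each model is block-averaged and the mean pairwise correlation is recomputed. The array dimensions, the random stand-in models, and the use of Pearson correlation are illustrative assumptions.

```python
# Hedged sketch (not the study's code): coarsen 2-D slip distributions that share
# a common along-dip x along-strike grid with 1-km cells, then measure how
# inter-model agreement changes with effective subfault size.
import numpy as np
from itertools import combinations


def coarsen(slip, block):
    """Block-average a 2-D slip distribution (1-km cells) into block x block km cells."""
    ny, nx = slip.shape
    ny_c, nx_c = ny // block, nx // block
    trimmed = slip[: ny_c * block, : nx_c * block]
    return trimmed.reshape(ny_c, block, nx_c, block).mean(axis=(1, 3))


def mean_pairwise_correlation(models, block):
    """Mean Pearson correlation over all model pairs at a given subfault size."""
    coarse = [coarsen(m, block).ravel() for m in models]
    corrs = [np.corrcoef(a, b)[0, 1] for a, b in combinations(coarse, 2)]
    return float(np.mean(corrs))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Random stand-ins for published models on a hypothetical 512 km x 256 km fault.
    models = [rng.random((256, 512)) for _ in range(4)]
    for block in (1, 8, 16, 32, 64):
        corr = mean_pairwise_correlation(models, block)
        print(f"{block:3d} km subfaults: mean inter-model correlation = {corr:.2f}")
```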

Seismology focuses on the study of earthquakes and associated phenomena to characterize seismic sources and Earth structure, both of which are of immediate relevance to society. This article is composed of two independent views on the state of the ICON principles (Goldman et al., 2021) in seismology and reflects, from different angles, on the opportunities and challenges of adopting them. Each perspective focuses on a different topic. Section 1 deals with the integration of multiscale and multidisciplinary observations, focusing on the integrated and open principles, whereas Section 2 discusses computing and open-source algorithms, reflecting the coordinated, networked, and open principles. Over the past century, seismology has benefited from two co-existing technological advancements: the emergence of new, more capable sensing systems and of affordable, distributed computing infrastructure. Integrating multiple observations is a crucial strategy for improving the understanding of earthquake hazards. However, current efforts to make big datasets available and manageable lack coherence, which makes it challenging to implement initiatives that span different communities. Building on ongoing advancements in computing, machine learning algorithms have been revolutionizing the way seismic data are processed and interpreted. A community-driven approach to code management offers open and networked opportunities for young scholars to learn and to contribute to a more sustainable approach to seismology. Investing in new sensors, more capable computing infrastructure, and open-source algorithms following the ICON principles will enable new discoveries across the Earth sciences.