This review paper develops an understanding of how recent advances in machine learning are applied in geological modelling to predict the distribution of subsurface resources, and how they address related challenges of heterogeneous pattern modelling in computer science.
Modern geoscience has become a data-rich field, where many problems involve identifying or generating realistic patterns from data to describe and predict the performance of natural systems. New learning-based technologies enable more effective integration of the large volumes of data that are becoming available, increasing the value of that information for decisions about the natural system at stake. Machine learning is often seen as an ideal remedy compared with other modelling approaches, yet it may not be when it is decoupled from the domain context of the modelled data and fails to meet the constraining assumptions about the modelled system. Nevertheless, it remains probably the most promising and most dynamically developing approach to reproducing complex spatial patterns in natural geoscience systems, although it has its own drawbacks and limitations. Recent advances in learning-based algorithms offer new opportunities for geoscience modelling to overcome some limitations of spatial statistics and to tackle the challenge of model calibration and prediction with large volumes of data and domain knowledge to account for.
This review focuses on four specific questions inherent to basic geological modelling:
- how to represent the geological realism of the modelled patterns;
- how to identify and preserve explicit and implicit dependencies between geological properties;
- what are the principal model controls that drive the performance of the geological system, and how to parameterise them;
- how to establish the right level of model diversity to provide the necessary predictive power in uncertainty forecasts of geological system behaviour.