Bootstrapping Machine-Learning Based Seismic Fault Interpretation
Seismic interpretation presents a unique problem: seismic datasets offer a great density of data (2D, 3D, prestack, multicomponent, broadband), a wealth of measurements. However, we have very little in terms of interpretation on which to train machine-learning models. This "labelled" data, as the term goes, is scant and insufficient for training a general-purpose seismic interpretation model, and this is unlikely to change.
So we need a different approach. Applying existing machine-learning architectures to seismic images directly is very attractive, and results from using CNNs to detect salt bodies [Waldeland, 2017] are encouraging, although they are unlikely to be as successful away from the well-defined textural image contrasts that we see in and out of salt. A very different approach is pursued by [Araya-Polo, 2017], who are exploring how to teach a learning system how geological faulting works.
This is a very interesting approach and likely an important part of some larger, probably adversarial, multi-network system that balances what networks can 'see' in seismic versus what other networks 'understand' about geology. In this work we focus on the 'seeing'.
Our approach is to have a machine-learning system interpret faults from seismic data in a supervised manner, where the only supervised inputs to the system are algorithm-generated labels (seismic attributes) and some constraints on how we expect faults to be represented in seismic.
We are focussing on faults although the broader goal is to determine geological structure from seismic data. Our primary fault-attribute inputs will be derived from “fault likelihood” analysis [Hale, 2012], where we can generate dense fault-probability and orientation volumes over a dataset. We use this data to train a supervised system such as a CNN or DCGAN.
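To make the label-generation step concrete, here is a minimal sketch of how dense algorithm-generated fault-likelihood values might be converted into patch/label pairs for supervised training. This is an illustrative assumption, not the authors' implementation: the patch size, the thresholds, and the decision to discard ambiguous patches are all hypothetical choices, and the "fault likelihood" here is a toy stand-in for the output of a real fault-likelihood scan [Hale, 2012].

```python
import numpy as np

def extract_patches(seismic, likelihood, patch=8, pos_thresh=0.8, neg_thresh=0.2):
    """Slide a non-overlapping window over a 2D seismic section; label a patch
    'fault' (1) if its peak fault likelihood is high, 'non-fault' (0) if low,
    and skip ambiguous patches entirely (they hold no trusted label)."""
    X, y = [], []
    ny, nx = seismic.shape
    for i in range(0, ny - patch + 1, patch):
        for j in range(0, nx - patch + 1, patch):
            peak = likelihood[i:i + patch, j:j + patch].max()
            if peak >= pos_thresh:
                X.append(seismic[i:i + patch, j:j + patch]); y.append(1)
            elif peak <= neg_thresh:
                X.append(seismic[i:i + patch, j:j + patch]); y.append(0)
            # else: ambiguous likelihood -> no label; left for a
            # semi-supervised stage to resolve
    return np.stack(X), np.array(y)

# Toy section: random reflectivity with a synthetic high-likelihood
# "fault" stripe down one column block (hypothetical data).
rng = np.random.default_rng(0)
seismic = rng.normal(size=(32, 32))
likelihood = np.full((32, 32), 0.05)
likelihood[:, 12:14] = 0.95  # algorithm-generated fault indication

X, y = extract_patches(seismic, likelihood)
```

The resulting `(X, y)` pairs could then feed a conventional supervised learner such as a CNN; the orientation volumes mentioned above could be handled analogously, as additional label channels.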
Immediate questions arise: by training on seismic attributes, will the learner simply learn to reproduce the same seismic attributes? How do we enable a system to generalise beyond the input machine-generated data? How do we introduce additional model constraints in order to prevent this?
The goal of this work is to explore concepts from the field of semi-supervised learning and how they might be applied to "bootstrap" training schemes for seismic with other measurements such as attributes. In this work we attempt to answer these questions and hope to drive further study in the community.
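One standard semi-supervised idea that fits this bootstrapping framing is self-training: fit a model on the seed (attribute-derived) labels, pseudo-label only the unlabelled samples it is confident about, fold those into the training set, and repeat. The sketch below is a deliberately toy illustration of that loop, not the authors' method: a scalar feature per sample, a one-parameter threshold classifier, and confidence measured as distance from the decision boundary are all hypothetical stand-ins (in the seismic setting the feature would be a patch and the classifier a CNN or DCGAN discriminator).

```python
import numpy as np

def fit_threshold(x, y):
    # Toy classifier: place the decision boundary midway between class means.
    return 0.5 * (x[y == 1].mean() + x[y == 0].mean())

def self_train(x_lab, y_lab, x_unlab, rounds=3, conf=0.5):
    """Iteratively pseudo-label confident unlabelled samples and refit."""
    for _ in range(rounds):
        t = fit_threshold(x_lab, y_lab)
        margin = np.abs(x_unlab - t)
        confident = margin > conf            # pseudo-label only sure cases
        if not confident.any():
            break
        pseudo = (x_unlab[confident] > t).astype(int)
        x_lab = np.concatenate([x_lab, x_unlab[confident]])
        y_lab = np.concatenate([y_lab, pseudo])
        x_unlab = x_unlab[~confident]        # shrink the unlabelled pool
    return fit_threshold(x_lab, y_lab)

# Seed labels (standing in for attribute-generated ones) plus a larger
# unlabelled pool drawn from two hypothetical classes.
rng = np.random.default_rng(1)
x_lab = np.array([0.0, 0.2, 2.0, 2.2])
y_lab = np.array([0, 0, 1, 1])
x_unlab = np.concatenate([rng.normal(0, 0.3, 50), rng.normal(2, 0.3, 50)])

t = self_train(x_lab, y_lab, x_unlab)
```

The open questions above map directly onto this loop's failure modes: if the confidence rule merely echoes the attribute that generated the seed labels, the model never generalises beyond it, which is why additional constraints (e.g. on expected fault geometry) are needed.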
AAPG Datapages/Search and Discovery Article #90323 ©2018 AAPG Annual Convention and Exhibition, Salt Lake City, Utah, May 20-23, 2018