Acceleration of Geostatistical Seismic Inversion Using TensorFlow: A Heterogeneous Distributed Machine Learning Framework

AAPG ACE 2018



Abstract

Geostatistical seismic inversion is an emerging technology for reservoir characterization with uncertainty quantification in geophysics. However, its computationally demanding nature often limits its application in practice. In this work, we propose a method to accelerate the inversion using TensorFlow, an open-source heterogeneous distributed machine learning framework. With TensorFlow, it is straightforward to implement our algorithms on distributed parallel systems: we no longer need to manage data sending/receiving or the communication between different computational devices. Moreover, by means of the data flow graph and the backpropagation algorithm, both of which lay the foundation of machine learning frameworks, the prior reservoir realizations can be updated automatically and iteratively until they converge toward the true models.
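The following is a minimal sketch, not the actual inversion code, of how this automatic update can be expressed with the TensorFlow 1.x graph API: a prior impedance realization is held as a trainable variable, a simplified 1-D convolutional forward model produces a synthetic trace, and backpropagation through the data flow graph iteratively reduces the misfit. The trace length, wavelet, observed data, and variable names are all placeholder assumptions.

```python
import numpy as np
import tensorflow as tf  # written against the TensorFlow 1.x graph API

n_samples = 300                                        # trace length (assumed)
wavelet = np.hanning(31).astype(np.float32)            # stand-in source wavelet
d_obs = np.random.randn(n_samples).astype(np.float32)  # placeholder observed trace

# Prior impedance realization, treated as a trainable variable in the graph.
m = tf.Variable(tf.random_normal([n_samples], mean=5000.0, stddev=500.0),
                name="impedance")

# Simplified convolutional forward model: reflectivity from impedance,
# then convolution with the wavelet.
refl = (m[1:] - m[:-1]) / (m[1:] + m[:-1])
refl = tf.pad(refl, [[0, 1]])
d_syn = tf.nn.conv1d(tf.reshape(refl, [1, -1, 1]),
                     tf.reshape(tf.constant(wavelet), [-1, 1, 1]),
                     stride=1, padding="SAME")
misfit = tf.reduce_mean(tf.square(tf.reshape(d_syn, [-1]) - d_obs))

# Backpropagation through the data flow graph updates the realization.
train_op = tf.train.AdamOptimizer(learning_rate=0.01).minimize(misfit)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(500):
        _, loss = sess.run([train_op, misfit])
```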

In general, we can accelerate geostatistical seismic inversion on three levels. First, geostatistical simulation and seismic inversion typically involve a huge number of large matrix multiplications and other operations, which can be massively parallelized and thus sped up on GPUs; although individual GPU cores are slower than CPU cores, a CPU usually has only two or four cores while a single GPU may have thousands, and this large number of cores, combined with faster memory, makes GPUs much faster for such computations. Second, geostatistical seismic inversion generally involves an ensemble of reservoir realizations to quantify model uncertainty, so we can simulate and invert them on different computational devices simultaneously (model parallelization). Third, because of the huge volume of seismic data, we can divide it into several sub-datasets and send them to different devices for computation (data parallelization).
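The sketch below illustrates, under the assumption of a machine with two GPUs and with purely illustrative stand-in computations, how explicit device placement in TensorFlow expresses both parallelization patterns: independent realizations pinned to different GPUs (model parallelization), and one seismic cube split into chunks processed on separate devices (data parallelization). TensorFlow handles the communication between devices.

```python
import tensorflow as tf  # TensorFlow 1.x graph API

def forward_misfit(realization):
    """Stand-in for the per-realization forward simulation and misfit."""
    return tf.reduce_sum(tf.square(realization))

# Model parallelization: independent realizations placed on different GPUs.
misfits = []
for i, device in enumerate(["/gpu:0", "/gpu:1"]):   # assumed two-GPU machine
    with tf.device(device):
        realization = tf.Variable(tf.random_normal([300, 200, 300]),
                                  name="realization_%d" % i)
        misfits.append(forward_misfit(realization))

# Data parallelization: split one seismic cube along the first axis, evaluate
# each chunk on its own device, then combine the partial results.
seismic = tf.random_normal([300, 200, 300])          # placeholder seismic cube
chunks = tf.split(seismic, num_or_size_splits=2, axis=0)
partial = []
for chunk, device in zip(chunks, ["/gpu:0", "/gpu:1"]):
    with tf.device(device):
        partial.append(tf.reduce_sum(tf.square(chunk)))
total_misfit = tf.add_n(partial)

# allow_soft_placement falls back to the CPU if a GPU is unavailable.
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([misfits, total_misfit]))
```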

To evaluate the performance of the proposed parallel algorithm using TensorFlow, we used a sub-dataset of a real 3D seismic survey with 300 × 200 × 300 grid cells to compare execution times on a CPU and a GPU. In this experiment, the computation time was reduced by more than a factor of 10 on a single NVIDIA Tesla K80 GPU compared to a dual-core Intel i5 CPU. Next, we will test the inversion algorithm on the entire seismic survey using multiple computational devices to further improve computational efficiency, with an expected speed-up of over 100×. Our future goal is to develop a general parallel geophysical inversion library based on TensorFlow that is user-friendly and more computationally efficient.
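As an illustration of how such a comparison can be set up, the sketch below pins the same stand-in computation to /cpu:0 and /gpu:0 and times each run; the grid size matches the sub-dataset above, but the operation itself is only a placeholder for the full inversion workload.

```python
import time
import tensorflow as tf  # TensorFlow 1.x graph API

def timed_run(device):
    """Build a stand-in computation on the given device and time one run."""
    tf.reset_default_graph()
    with tf.device(device):
        cube = tf.random_normal([300, 200, 300])     # size of the tested sub-dataset
        # Representative heavy operation; the real workload is the full inversion.
        result = tf.reduce_sum(tf.matmul(tf.reshape(cube, [300, -1]),
                                         tf.reshape(cube, [-1, 300])))
    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
        start = time.time()
        sess.run(result)
        return time.time() - start

print("CPU: %.3f s" % timed_run("/cpu:0"))
print("GPU: %.3f s" % timed_run("/gpu:0"))
```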