Abstract: Innovations in Visualization Open New Possibilities for Interpreters, by Bruno de Ribet; #90206 (2014)


Innovations in Visualization Open New Possibilities for Interpreters

Bruno de Ribet
Paradigm, Houston, TX, USA

Abstract

Visualization technology is continuously evolving, with the growing availability of rendering and transparency capabilities scalable to common computer architectures. To date, this evolution has followed two main trends. The first has the objective of handling ever-increasing amounts of data, while the second aims to enrich the visualization process with a greater number of data types and the rendering of specific objects.

In the past several years, data storage technology has expanded dramatically to allow the storage and processing of huge amounts of data on disk rather than in computer memory, making it possible to view data sets which are much larger than the computer's physical memory. This is most relevant for the tens or hundreds of gigabytes of seismic data often required for a single survey. It has also opened the door to the use of 3D display and interpretation tools within routine workflows, while leaving high-end capabilities such as automated sub-volume detection to the dedicated visualization tools which work only on data in live computer memory.
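To make the out-of-core idea concrete, the following is a minimal sketch assuming a seismic cube stored on disk as a plain float32 binary file; the file name and survey dimensions are hypothetical, and a real implementation would read the geometry from the survey headers. Memory mapping lets slices of a volume far larger than physical RAM be paged in on demand.

```python
import numpy as np

# Hypothetical survey dimensions (inlines, crosslines, time samples) and file
# name; a real loader would take these from the SEG-Y or project headers.
N_IL, N_XL, N_T = 2000, 1500, 1250          # roughly 15 GB of float32 samples
volume = np.memmap("survey_amplitudes.f32", dtype=np.float32,
                   mode="r", shape=(N_IL, N_XL, N_T))

# Only the pages touched by a slice are read from disk, so a time slice or a
# small sub-volume can be displayed without loading the cube into memory.
time_slice = np.array(volume[:, :, 600])            # one time sample everywhere
sub_volume = np.array(volume[800:900, 400:500, :])  # a 100 x 100 trace brick
print(time_slice.shape, sub_volume.shape)
```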

Additionally, dramatic improvements in the performance of input/output devices, as well as in computer power and data handling, are enabling many capabilities, once available only in computer memory, to also become feasible for data residing on disk. One of these is optical stacking, or the application of transparency capabilities to multi-line 2D seismic surveys. This technique, which lets the user line up a succession of 2D profiles and move the display around in three dimensions, eliminates the lengthy and tedious back-and-forth workflows sometimes needed to verify assumptions regarding a fault's direction and development. This is particularly relevant to 2D seismic profiles, where ambiguities are more frequent due to the nature and limitations of 2D sections as opposed to 3D migrated volumes.
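As a rough illustration of the optical stacking concept (not any vendor's implementation), the sketch below alpha-composites a set of parallel 2D sections so that events persisting from line to line reinforce while incoherent energy fades; the synthetic data and opacity value are illustrative assumptions.

```python
import numpy as np

def optical_stack(sections, alpha=0.25):
    """Composite a list of parallel 2D sections (x, t) back to front with a
    constant per-section opacity, so events that persist from line to line
    reinforce while incoherent energy fades."""
    out = np.zeros_like(sections[0], dtype=np.float64)
    for section in sections:                     # back of the stack first
        out = (1.0 - alpha) * out + alpha * np.abs(section)
    return out

# Three synthetic 2D lines that share a gently dipping reflector plus noise.
rng = np.random.default_rng(0)
nx, nt = 300, 400
lines = []
for k in range(3):
    sec = 0.2 * rng.standard_normal((nx, nt))
    for ix in range(nx):
        sec[ix, 100 + ix // 3] += 1.0            # the common dipping event
    lines.append(sec)

stacked = optical_stack(lines)
print(stacked.shape, float(stacked.max()))
```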

A holistic look at data
The huge increase in data size, resolution and fold is further augmented by the accessibility of many types of data, including seismic, wells, microseismic, 4D, wellbore, video and others, allowing a more holistic look at the regions, basins, formations and reservoirs under investigation. In addition to data for the particular area of interest, the interpreter is now able to access all regional data, including project and vintage data from previous surveys.

From single survey, single volume to multi-survey, multi-volume capabilities
Until recently, all advanced interpretation work was done using a single-survey, single-volume approach. Interpreters would load a 3D seismic data volume into their system, and then work on that particular volume. Today, technology enables interpreters to load multiple 3D volumes from multiple surveys, and all the 2D profile data as well, enhanced by many other data types available to the company within and around the area of interest. This provides them with new insight into the geology of the entire region, and leads to better and more informed decisions down the line.

New visualization perspectives
With so much data now available, and with massive data integration now possible at the desktop level, interpreters are in danger of being overwhelmed. Indeed, one of the greatest challenges facing the interpreter today is to come up with new visualization technologies and interpretation workflows that fully utilize the new data within tight project deadlines.

One of the most dramatic recent developments has been the release to the market of new, highly parallelized compute capabilities in the form of GPUs (Graphics Processing Units), which perform computation traditionally handled by a CPU.

This technology is being adopted to improve the visualization process. The use of GPU-based 3D rendering improves the refresh speed of visual displays by a factor of at least eight, while also enhancing the visual integrity of the display and avoiding rendering artifacts such as distortion in the deeper parts of the scene. A direct consequence for the interpreter is the ability to compute seismic attributes on the GPU almost instantaneously, improving productivity when working interactively with large amounts of such data.
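As one hedged illustration of the kind of attribute computation that maps well to GPUs, the sketch below computes a running RMS amplitude along the time axis of a cube using CuPy, whose array API mirrors NumPy; the fallback import keeps the example runnable on machines without a CUDA device, and the window length and cube size are arbitrary.

```python
# GPU arrays via CuPy if a CUDA device is available; otherwise fall back to
# NumPy so the sketch still runs (much more slowly) on the CPU.
try:
    import cupy as xp
except ImportError:
    import numpy as xp

import numpy as np

def rms_amplitude(volume, window=11):
    """Running RMS amplitude along the time (last) axis of an (il, xl, t) cube."""
    sq = volume.astype(xp.float32) ** 2
    csum = xp.cumsum(sq, axis=-1)
    # Moving sums over `window` samples via differences of the cumulative sum;
    # the first `window` samples use a shorter, growing window.
    moving = xp.concatenate([csum[..., :window],
                             csum[..., window:] - csum[..., :-window]], axis=-1)
    counts = xp.minimum(xp.arange(1, sq.shape[-1] + 1), window)
    return xp.sqrt(moving / counts)

# A small synthetic cube; with CuPy present the whole computation runs on the GPU.
cube = xp.asarray(np.random.default_rng(1)
                  .standard_normal((100, 100, 500)).astype(np.float32))
attr = rms_amplitude(cube)
print(attr.shape)
```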

Automation speeds up the process
The increase in automation in every aspect of the interpretation process has been another game changer. Some of the most notable developments include the advent of waveform-based, automatic horizon propagation, which has resulted in a major acceleration of the interpretation workflow, due to its speed and accuracy. Automatic fault extraction techniques enable seismic interpreters to quickly pick hundreds of faults at a time, allowing them to concentrate on geologic history rather than on the tedious picking process. A powerful tool for interactive cross-plotting within a 3D visualization environment is now commonly available, enabling advanced multi-attribute interpretation. Furthermore, automated switching between full-resolution and decimated datasets enables the interpreter to switch back and forth between the regional, local and reservoir detail scales of a project with a click of the mouse.
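The following is a toy sketch of the idea behind waveform-based horizon propagation, assuming a 2D amplitude section held as a NumPy array; the function name, window sizes and correlation measure are illustrative and do not represent any particular commercial algorithm.

```python
import numpy as np

def autotrack_horizon(section, seed_trace, seed_time, half_win=8, search=5):
    """Propagate a horizon pick across a 2D section (n_traces x n_samples) by
    cross-correlating the waveform around the current pick with a small search
    window on the neighbouring trace. Edge handling is kept minimal, so the
    seed should sit well inside the section."""
    n_traces, n_samples = section.shape
    picks = np.full(n_traces, -1, dtype=int)
    picks[seed_trace] = seed_time

    def step(src, dst):
        ref = section[src, picks[src] - half_win: picks[src] + half_win + 1]
        best, best_score = picks[src], -np.inf
        for shift in range(-search, search + 1):
            t = picks[src] + shift
            if t - half_win < 0 or t + half_win + 1 > n_samples:
                continue
            cand = section[dst, t - half_win: t + half_win + 1]
            score = float(np.dot(ref, cand))   # unnormalised cross-correlation
            if score > best_score:
                best, best_score = t, score
        picks[dst] = best

    for tr in range(seed_trace + 1, n_traces):   # propagate right of the seed
        step(tr - 1, tr)
    for tr in range(seed_trace - 1, -1, -1):     # then left of the seed
        step(tr + 1, tr)
    return picks

# Synthetic section with a gently dipping event and a single seed pick on trace 0.
rng = np.random.default_rng(3)
nx, nt = 200, 400
section = 0.1 * rng.standard_normal((nx, nt))
for ix in range(nx):
    section[ix, 150 + ix // 4] += 1.0
picks = autotrack_horizon(section, seed_trace=0, seed_time=150)
print(picks[:10], picks[-10:])
```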

The age of pre-stack seismic data
Traditionally, pre-stack processing and interpretation were performed by experts using specialized software for tasks such as AVO inversion and velocity analysis. Pre-stack interpretation was not traditionally part of mainstream interpretation workflows. But while interpreting pre-stack data may be challenging to the interpreter working at the desktop, the information it offers is highly valuable. Pre-stack data, which are required for transformation to the elastic-properties domain, provide dynamic and kinematic information related to the quality of the processing. This information helps validate interpretations, provides improved calibration to well data, and is suitable for detailed analysis such as cross-plotting.
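As a simple example of the kind of pre-stack analysis and cross-plotting mentioned above, the sketch below fits the two-term Shuey approximation R(theta) ~ A + B*sin^2(theta) to each sample of an angle gather and returns intercept and gradient traces; the synthetic gather and noise level are illustrative assumptions, not real data.

```python
import numpy as np

def avo_intercept_gradient(gather, angles_deg):
    """Least-squares fit of the two-term Shuey approximation to each time
    sample of an angle gather (n_angles x n_samples), returning intercept (A)
    and gradient (B) traces suitable for an A-B cross-plot."""
    s2 = np.sin(np.radians(angles_deg)) ** 2          # sin^2(theta) per angle
    G = np.column_stack([np.ones_like(s2), s2])       # design matrix [1, sin^2]
    coeffs, *_ = np.linalg.lstsq(G, gather, rcond=None)
    intercept, gradient = coeffs
    return intercept, gradient

# Synthetic angle gather with known A and B plus noise, just to exercise the fit.
rng = np.random.default_rng(2)
angles = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
true_A, true_B = 0.08, -0.22
gather = (true_A + true_B * np.sin(np.radians(angles))[:, None] ** 2
          + 0.01 * rng.standard_normal((angles.size, 400)))
A, B = avo_intercept_gradient(gather, angles)
print(A.mean(), B.mean())   # intercept/gradient pairs feed the cross-plot
```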

The growth of interest in pre-stack seismic data has been putting pressure on visualization tools to help interpreters manipulate not only large amounts of data, but also data objects that are extensions to the post-stack x-y-time or x-y-depth coordinate system. The demand for visual access to pre-stack data stems from the challenges of investigating very detailed features in the process of qualifying and characterizing the subsurface. And indeed, the ability to directly access and display pre-stack data is one of the new capabilities that have recently come onto the market.

The emergence of pre-stack data as a component of many workflows requires more than just an ability to see trace data, however. The inherent complexity of ray paths through the overburden, the sheer volume of data that needs to be accessed, and the effects of anisotropy on waveform propagation are only some of the challenges facing deadline-constrained geoscientists who are often required to make sense of 30 to 50 times more data than before. To manage pre-stack data and to locate a specific point in space, one needs not only new data access and display capabilities, but also new ways to interpret the results. Visualization tools, new display methods, and processes that make visible the illumination that produced a given reflector or event are all part of the answer, and will become increasingly accessible to users.

The technology frontier
All of the powerful new tools described in this abstract are at today's technology frontier, and are already being used by early adopters in the industry. This presentation will illustrate these available technologies and workflows as a ubiquitous solution for the routine activities performed by a new generation of geoscientists.

AAPG Search and Discovery Article #90206 © AAPG Hedberg Conference, Interpretation Visualization in the Petroleum Industry, Houston, Texas, June 1-4, 2014