Generating Missing Logs -- Techniques and Pitfalls*
By
Michael Holmes1, Dominic Holmes1, and Antony Holmes1
Search and Discovery Article #40107 (2003)
*Adapted from “extended abstract” for presentation at the AAPG Annual Meeting, Salt Lake City, Utah, May 11-14, 2003.
1Digital Formation, Inc., 6000 E Evans Ave, Ste 1-400, Denver, CO, 80222 ([email protected])
Outline
In most fields, log data are incomplete or unreliable for some intervals or entire wells. Neural networks are becoming a fashionable method to fill in missing data, and they are powerful. The basic methodology is to train the system over intervals where the log of interest exists, and apply the training over missing log intervals. However, there are limitations, and the approach can be easily abused. Inherent in the application is the assumption that reservoir characteristics remain similar over intervals where missing data are generated. For example, if training is established in hydrocarbon-bearing levels and the application is in wet rocks, results might be unreliable.
A better approach is to use rigorous methodology to ensure data integrity and consistency:
• Despike porosity logs to eliminate bad hole data. Proprietary algorithms are applied, followed by hand editing as required (a simple despiking sketch follows this list).
• For extensive intervals of bad hole, pseudo logs are created using neural net training on intervals with reliable log traces and with similar petrophysical properties.
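As an illustration of the despiking step, the sketch below replaces samples that depart from a rolling median by more than a robust threshold. It is a generic stand-in for the proprietary algorithms mentioned above; the window length, threshold, and curve names are assumptions, not the authors' parameters.

```python
import numpy as np
from scipy.signal import medfilt

def despike(curve, window=11, threshold=3.0):
    """Replace spikes in a log curve with the local median value.

    A sample is flagged as a spike when it departs from the rolling
    median by more than `threshold` robust standard deviations
    (estimated from the median absolute deviation of the residual).
    Assumes `curve` is a finite, regularly sampled 1-D array.
    """
    baseline = medfilt(curve, kernel_size=window)
    residual = curve - baseline
    mad = np.median(np.abs(residual - np.median(residual)))
    sigma = 1.4826 * mad if mad > 0 else np.std(residual)
    spikes = np.abs(residual) > threshold * sigma
    cleaned = curve.copy()
    cleaned[spikes] = baseline[spikes]
    return cleaned, spikes

# Hypothetical usage on a density curve (RHOB):
# rhob_clean, spike_flags = despike(rhob)
```

The flagged samples would still be reviewed by hand, as noted above.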
In wells with missing logs of crucial importance, pseudo logs are generated in several ways:
• Using neural networks (a minimal training sketch follows this list).
• Deterministic petrophysical modeling, using shale, matrix, and fluid properties from other existing curves.
• Stochastic modeling, where an approximate curve (perhaps from neural networks) is used as input, and the reconstructed curve is output.
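As a sketch of the first option, a small multilayer-perceptron regressor can be trained to predict a sonic curve from density, neutron, and gamma ray inputs over intervals where the sonic exists, then applied where it is missing. The array names, network size, and scikit-learn choice are assumptions for illustration, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_pseudo_sonic(X, dt, train_mask):
    """Fit a small neural network mapping input curves to sonic slowness.

    X          : array of shape (n_samples, n_inputs), e.g. RHOB, NPHI, GR
    dt         : measured sonic (us/ft), NaN where missing
    train_mask : True where the sonic exists and the data are reliable
    """
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
    )
    model.fit(X[train_mask], dt[train_mask])
    return model

# Hypothetical usage: predict a pseudo sonic over the whole well.
# model = train_pseudo_sonic(X, dt, train_mask)
# pseudo_dt = model.predict(X)
```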
The different pseudo logs can then be compared, and reasons for curve divergence (if any) can be examined. This approach can highlight where pseudo curves are reliable and where they are not.
Examples
The examples are from a well in the Wamsutter area of southwest Wyoming.
1) The Importance of Data Preparation
Two different neural networks are used to create a sonic log from density, neutron, and gamma ray logs:
a) Using unedited (raw) data -- the system "learns" intervals of bad hole and faithfully reproduces sonic "spikes."
b) Using edited data -- corrected for bad hole -- the system reproduces the edited data and is much more reliable.
In Figure 1, track #1 shows the comparison of the original sonic log with the despiked log, highlighting the spikes removed. The example demonstrates the importance of preparing the data prior to using any neural network technique. The neural network will reproduce whatever the input data demonstrate. If the input includes bad data, the neural network will "learn" to predict bad data. Thus it is crucial to take measures to ensure the input data are valid and consistent.
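One way to encode this data-preparation step is to build the training mask so that bad-hole samples never reach the network. The caliper/bit-size washout test below is a hypothetical rule for illustration; the actual editing combines proprietary despiking with hand editing, as described earlier.

```python
import numpy as np

def build_training_mask(dt, cali, bit_size, max_washout_in=1.0):
    """Keep samples where the sonic exists and the hole is near gauge.

    Samples whose washout (caliper minus bit size, inches) exceeds
    `max_washout_in` are excluded, so the network is never trained on
    bad-hole spikes it would otherwise 'learn' to reproduce.
    """
    has_sonic = np.isfinite(dt)
    in_gauge = (cali - bit_size) < max_washout_in
    return has_sonic & in_gauge
```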
2) Using Appropriate Amounts of Data for the Training Intervals
Different neural networks are used to predict a sonic log from density, neutron, and gamma ray logs. In each case, different training intervals are used. The differences can be quite large if the training points are not chosen extremely wisely (a very subjective exercise). A final case demonstrates using far more of the data to create a better model that more appropriately represents the reservoir.
In Figure 2, the pseudo sonic logs produced by the different training intervals are compared.
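A simple safeguard on the choice of training interval is to hold out an interval with known sonic and measure the misfit there before trusting the pseudo log elsewhere. The sketch below assumes the model object from the earlier neural network sketch; the acceptance criterion is left to the interpreter.

```python
import numpy as np

def holdout_misfit(model, X, dt, holdout_mask):
    """RMS misfit (us/ft) between predicted and measured sonic on a
    held-out interval. A large misfit suggests the training interval
    does not represent the rocks where the pseudo log will be used."""
    pred = model.predict(X[holdout_mask])
    return np.sqrt(np.mean((pred - dt[holdout_mask]) ** 2))

# Hypothetical usage: train on one sand interval, check against another.
# rms = holdout_misfit(model, X, dt, holdout_mask)
```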
3) Differences using Fluid Substitution
Pseudo sonic logs are calculated deterministically, including the effects of gas substitution:
a) Liquid-filled
b) Residual gas
c) Gas remote from the wellbore
It is clear in Figure 3 that in the gas-bearing sand, the sonic "sees" no gas. Density/neutron and resistivity responses clearly indicate gas, at depths of investigation beyond those measured by the sonic log.
Additionally, if a missing log is a shallow-reading device (such as a sonic log) but is generated from deeper-reading devices (such as the deep resistivity, neutron, or density), then the resulting pseudo log has questionable value.
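The deterministic fluid-substitution calculation can be sketched with Gassmann's equation, substituting brine or gas moduli into the dry-rock frame and converting the resulting velocity to slowness. The moduli and densities in the comments are generic values for illustration, not measured Wamsutter properties.

```python
import numpy as np

def gassmann_dt(k_dry, mu, k_min, k_fl, phi, rho_b):
    """Pseudo sonic slowness (us/ft) for a chosen pore fluid.

    k_dry, mu : dry-rock bulk and shear moduli (GPa)
    k_min     : mineral (matrix) bulk modulus (GPa)
    k_fl      : fluid bulk modulus (GPa)
    phi       : fractional porosity
    rho_b     : bulk density with the substituted fluid (g/cc)
    """
    k_sat = k_dry + (1.0 - k_dry / k_min) ** 2 / (
        phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min**2
    )
    vp = np.sqrt((k_sat + 4.0 / 3.0 * mu) * 1e9 / (rho_b * 1000.0))  # m/s
    return 304800.0 / vp  # convert m/s to us/ft

# Illustrative liquid-filled vs. gas-filled cases (quartz matrix assumed):
# dt_wet = gassmann_dt(k_dry=9.0, mu=8.0, k_min=37.0, k_fl=2.8,  phi=0.15, rho_b=2.40)
# dt_gas = gassmann_dt(k_dry=9.0, mu=8.0, k_min=37.0, k_fl=0.05, phi=0.15, rho_b=2.28)
```

Comparing such deterministic curves with a neural network prediction helps show whether the pseudo sonic should, or should not, respond to gas.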
Summary
In each case, missing data can be generated using different methods. However, care must be used to ensure that the generated information has integrity and is appropriate for the reservoir.
1) Steps must be taken to clean up and validate all input data to any neural network or other modeling technique.
2) It is important for the interpreter to have an understanding of what the correct answer might be. When generating pseudo curves, results from the different methods should be compared and any divergence examined before the curves are used.


