
Generating Missing Logs -- Techniques and Pitfalls*

By

Michael Holmes1, Dominic Holmes1, and Antony Holmes1

 Search and Discovery Article #40107 (2003)

 

*Adapted from “extended abstract” for presentation at the AAPG Annual Meeting, Salt Lake City, Utah, May 11-14, 2003.

 

1Digital Formation, Inc., 6000 E Evans Ave, Ste 1-400, Denver, CO, 80222 ([email protected])

 

Outline

In most fields, log data are incomplete or unreliable for some intervals or entire wells. Neural networks are a powerful and increasingly fashionable method for filling in missing data. The basic methodology is to train the system over intervals where the log of interest exists, then apply the trained model over intervals where it is missing. However, the approach has limitations and is easily abused. Inherent in the application is the assumption that reservoir characteristics remain similar over the intervals where missing data are generated. For example, if training is established in hydrocarbon-bearing levels and the model is applied in wet rocks, results might be unreliable.
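The train-then-apply workflow described above can be sketched as follows, using scikit-learn's MLPRegressor as a stand-in for whatever network is actually used. All curves here are synthetic, and the relationship between them is invented purely for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical inputs: density, neutron, and gamma ray curves (columns of X)
# and a sonic curve with a gap where the log was never run.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
sonic = X @ np.array([40.0, -15.0, 5.0]) + 90.0 + rng.normal(scale=2.0, size=500)
sonic[200:300] = np.nan                      # interval where the sonic log is missing

present = ~np.isnan(sonic)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X[present], sonic[present])        # train where the log exists
sonic_filled = sonic.copy()
sonic_filled[~present] = model.predict(X[~present])  # apply over the gap
```

The pitfall the abstract warns about lives in the `fit` call: the model only knows the rock types and fluids represented in the training rows.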

 

 A better approach is to use a rigorous methodology to ensure data integrity and consistency:

•  Despike porosity logs to eliminate bad-hole data. Proprietary algorithms are applied, followed by hand editing as required.

•  For extensive intervals of bad hole, pseudo logs are created by training a neural network on intervals with reliable log traces and similar petrophysical properties.
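The despiking algorithms mentioned above are proprietary, but a simple running-median despike conveys the idea; the window and threshold below are arbitrary illustrative choices, not the authors' parameters:

```python
import numpy as np
from scipy.signal import medfilt

def despike(curve, window=7, threshold=6.0):
    """Flag samples deviating from a running median by more than `threshold`
    and replace them with the median value. Illustrative stand-in only."""
    baseline = medfilt(curve, kernel_size=window)
    spikes = np.abs(curve - baseline) > threshold
    cleaned = curve.copy()
    cleaned[spikes] = baseline[spikes]
    return cleaned, spikes

rhob = np.full(200, 2.45)        # hypothetical density log, g/cc
rhob[50] = 1.90                  # washout spike (reads too low)
rhob[120] = 3.10                 # cycle-skip-style spike (reads too high)
cleaned, spikes = despike(rhob, threshold=0.2)
```

In practice, as the text notes, automated passes like this are followed by hand editing of whatever the algorithm misses.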

 

 In wells with missing logs of crucial importance, pseudo logs are generated in several ways:

•  Using neural networks.

•  Deterministic petrophysical modeling, using shale, matrix, and fluid properties from other existing curves.

•  Stochastic modeling, where an approximate curve (perhaps from a neural network) is used as input and the reconstructed curve is output.
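As one hypothetical instance of the deterministic option, the classic Wyllie time-average relation can generate a pseudo sonic from porosity and shale-volume curves. The abstract does not specify which petrophysical model the authors use, and the transit-time constants below are illustrative assumptions:

```python
def wyllie_sonic(phi, vsh, dt_matrix=55.5, dt_fluid=189.0, dt_shale=120.0):
    """Pseudo sonic transit time (us/ft) via a shale-corrected Wyllie
    time average. phi = porosity (frac), vsh = shale volume (frac).
    Default transit times (sandstone matrix, water, shale) are assumed."""
    return (1.0 - phi - vsh) * dt_matrix + phi * dt_fluid + vsh * dt_shale

# e.g. a clean 20-p.u. sand vs. the same porosity with 30% shale:
dt_clean = wyllie_sonic(0.20, 0.0)
dt_shaly = wyllie_sonic(0.20, 0.3)
```

Because each method rests on different assumptions, running two or three of them and comparing the outputs (as the text recommends) is a useful consistency check.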

 

The different pseudo logs can then be compared, and reasons for curve divergence (if any) can be examined. This approach can highlight where pseudo curves are reliable and where they are not.

 

 

•  Outline
•  Figure captions
•  Examples
     •  Data preparation
     •  Data for training intervals
     •  Fluid substitution
•  Summary
Figure Captions

Figure 1: Comparison of a neural network model built from raw data with a model built from despiked data.

 

 

Figure 2: Comparing synthetic sonics created using specific regions as the training intervals for a neural network, with a model using the entire well as the training interval.

 

 

Figure 3: Comparisons of three pseudo sonic logs with the original log.

 

 

 

Figure 4: Comparing synthetic seismograms using four different synthetic sonic approaches.

 

Examples 

The examples are from a well in the Wamsutter area of southwest Wyoming.

 

1) The Importance of Data Preparation 

Two different neural networks are used to create a sonic log from density, neutron, and gamma ray logs:

a) Using unedited (raw) data -- the system "learns" intervals of bad hole and faithfully reproduces sonic "spikes."

b) Using edited data, corrected for bad hole -- the system reproduces the edited data and is much more reliable.

 

In Figure 1, track #1 shows the comparison of the original sonic log with the despiked log, highlighting the spikes removed. The Synthetic Sonic #1 log was created using the original density, neutron, and gamma ray logs to predict the original sonic. Note that the spikes in the original sonic are faithfully reproduced by the neural network. The Synthetic Sonic #2 log was generated using the gamma ray and despiked density and neutron logs to model the despiked sonic. The result is a more appropriate model. The red bars highlight regions where spikes were reproduced in the first model but are corrected in the second. The blue bar indicates a region where the logs used to model the sonic do not exhibit sufficient character to model the sonic in either case.

The example demonstrates the importance of preparing the data before applying any neural network technique. The neural network will reproduce whatever the input data contain: if the input includes bad data, the network will "learn" to predict bad data. It is therefore crucial to ensure the input data are valid and consistent.

 

2) Using Appropriate Amounts of Data for the Training Intervals 

Different neural networks are used to predict a sonic log from density, neutron, and gamma ray logs, each with a different training interval. The differences can be quite large if the training points are not chosen wisely, and that choice is very subjective. A final case demonstrates using far more of the data to create a better model that more appropriately represents the reservoir.

In Figure 2, Synthetic Sonic #1 was created using only the training regions highlighted in yellow, whereas Synthetic Sonic #2 used the entire well as the training region. As expected, the first model is extremely accurate over the training regions. The problems arise over the other regions, where it becomes obvious that the training intervals did not fully represent the data being modeled. As highlighted in red, there are regions where the first model generated erroneous spikes in the sonic because insufficient data were provided initially. This type of selective-interval approach can lead to many such problems. Although selecting more training intervals can help resolve specific issues, it produces a very subjective model. A better approach is to begin with as much data as possible and let the neural network incorporate the maximum amount of valid data.
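The effect shown in Figure 2 can be reproduced on synthetic data: a network trained on a biased subset (standing in for a hand-picked training interval that does not represent the whole well) performs worse well-wide than the same network trained on all the data. The curves and network settings below are invented for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic "logs": three input curves and a nonlinear target curve.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 2]

# Biased training subset: only samples with low X[:, 0], mimicking a training
# interval confined to one facies or fluid regime.
biased = X[:, 0] < -0.5

def fit_rmse(train_mask):
    """Train on the masked rows, report RMSE over the whole 'well'."""
    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
    net.fit(X[train_mask], y[train_mask])
    return float(np.sqrt(np.mean((net.predict(X) - y) ** 2)))

rmse_biased = fit_rmse(biased)
rmse_full = fit_rmse(np.ones(len(X), dtype=bool))
```

The biased model fits its own interval well but extrapolates poorly elsewhere, which is exactly the failure mode the selective-interval approach invites.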

 

3) Differences using Fluid Substitution 

Pseudo sonic logs, calculated deterministically and including the effects of gas substitution:

a) Liquid-filled

b) Residual gas

c) Gas remote from the wellbore

 

It is clear in Figure 3 that in the gas-bearing sand the sonic "sees" no gas. The density/neutron and resistivity responses clearly indicate gas at depths of investigation beyond those measured by the sonic log.

The synthetic seismograms in Figure 4 show significant differences depending on the pseudo sonic used. The lesson of the example is that if you do not know what fluid the sonic log is measuring (and it is not always residual gas), then any synthetic seismogram built from that sonic is also of questionable meaning.

Additionally, if a missing log is from a shallow-reading device (such as a sonic log) but is generated from deeper-reading devices (such as the deep resistivity, neutron, or density), then the resulting pseudo log has questionable value.
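Gassmann's equation is the standard way to recompute an acoustic response for a different pore fluid, although the abstract does not state which fluid-substitution model was used here. A minimal sketch, with all moduli (GPa), densities (g/cc), and porosity as illustrative assumptions rather than values from the paper:

```python
def gassmann_ksat(k_dry, k_mineral, k_fluid, phi):
    """Saturated bulk modulus (GPa) from dry-rock, mineral, and fluid moduli
    via Gassmann's equation."""
    b = 1.0 - k_dry / k_mineral
    return k_dry + b ** 2 / (
        phi / k_fluid + (1.0 - phi) / k_mineral - k_dry / k_mineral ** 2
    )

def p_sonic_us_per_ft(k_sat, mu, rho):
    """Compressional transit time (us/ft) from moduli (GPa) and density (g/cc)."""
    vp_ms = ((k_sat + 4.0 * mu / 3.0) * 1e9 / (rho * 1000.0)) ** 0.5  # m/s
    return 304800.0 / vp_ms  # 1 ft = 0.3048 m

# Assumed rock frame: dry bulk modulus, mineral modulus, shear modulus, porosity.
k_dry, k_min, mu, phi = 12.0, 37.0, 10.0, 0.20
dt_brine = p_sonic_us_per_ft(gassmann_ksat(k_dry, k_min, 2.8, phi), mu, 2.25)
dt_gas = p_sonic_us_per_ft(gassmann_ksat(k_dry, k_min, 0.04, phi), mu, 2.10)
```

Here the gas-substituted case yields a slower (larger) transit time than the liquid-filled case, which illustrates why the choice of assumed fluid changes the pseudo sonic and any seismogram built from it.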

 

Summary 

In each case, missing data can be generated using different methods. However, care must be used to ensure that the generated information has integrity and is appropriate for the reservoir. 

1) Steps must be taken to clean up and validate all input data to any synthetic generation method. This rather obvious step is often one of the easiest to overlook. 

2) It is important for the interpreter to have an understanding of what the correct answer might be. When generating synthetic data, it is possible to produce nearly any answer. It is up to the interpreter to ensure the answer used makes geological and geophysical sense.
