

Abstract/Excerpt

Automatic Calibration of Stratigraphic Forward Models for Reservoir Presence De-Risking

Oriol Falivene¹, Jim Pickens¹,², Alessandro Frascati¹, Stephane Gesbert¹, Tobias Garwood²,³, Yu-Lan Hsu², Mantez Mc Donald², Gary Steffens², Alban Rovira², and Brad Prather²
¹Shell Global Solutions International (Rijswijk (NL) and Houston (US))
²Shell Exploration & Production Company (The Hague (NL) and Houston (US))
³Presently at Hess, (London, UK)

Introduction: hydrocarbon exploration and stratigraphic forward models
Conventional hydrocarbon exploration is increasingly challenged, with a shift towards deeper, more difficult plays in poorly imaged areas, in some cases obscured by salt canopies. Within this context, understanding and predicting reservoir characteristics at regional to basin scales is a key element of quantifying reservoir risk and uncertainty. One method to assess these issues is stratigraphic forward modelling. Stratigraphic forward models (SFM) are numerical, process-based algorithms that simulate the sedimentary and tectonic processes controlling basin-fill architecture (Paola, 2000). SFM have been widely developed and used in the past to understand and illustrate general conceptual controls on stratigraphy in different sedimentary environments.

Applying SFM to understand and predict reservoir distribution within a basin enables the integration of available prior information with the fundamental geologic processes and relationships simulated by the numerical model, yielding a coherent source-to-sink simulation of the basin fill. In subsurface exploration, moreover, some of the controls on reservoir distribution, and therefore some of the process parameters in the SFM, are difficult to constrain or unknown. Given such uncertainties, building an SFM provides a consistent and rigorous framework for testing multiple scenarios and assumptions (Burgess et al., 2006), and for understanding and identifying critical geological controls and their implications for the resulting gross depositional facies and reservoir-presence predictions in areas that cannot be resolved by available data (Lawrence et al., 1990).

Problem: calibration to available data
SFM predictions can be significantly improved if the models are calibrated to independent constraints in the studied basin, such as thicknesses from seismic or information from well data that are not directly used as input parameters for the model. Calibration is critical for reducing uncertainty in the parameter space, as it limits the number of combinations of input parameters that can yield valid models. Ideally, calibrated models allow prediction of facies distribution away from control points and provide a more rigorous understanding of stratigraphic controls than might be gained from uncalibrated modeling.

Calibration to available data is a difficult non-linear, non-unique inverse problem, leading to complex multi-modal objective functions to be optimized. This is mainly due to poorly constrained initial conditions, inherent uncertainty in the controlling factors (even in data-rich exploration settings) and their complex influence on stratigraphic architecture, as well as the simplifications invoked by the SFM to represent real processes.

In the past, calibration of SFM has mostly been achieved manually (by trial and error) and validated by a qualitative assessment of the resulting match. Because of the long computing times incurred for each simulated model, only a limited number of scenarios could be attempted. As a result, manual calibration suffered from subjective user bias and sub-optimal calibration. It is unlikely that such solutions properly sample the solution space, leading to biased solutions and an incomplete assessment of uncertainty. This has hindered widespread acceptance and application of SFM within the hydrocarbon exploration community. Previous attempts to automatically calibrate basin-scale SFM were carried out by Lessenger and Cross (1996), Cross and Lessenger (1999), Bornholdt et al. (1999), and Imhof and Sharma (2006 and 2007), but none addressed three-dimensional, subsurface-exploration-scale problems.

Method: automatic calibration workflow
We present a workflow for automatic calibration of SFM to available data, such as seismic and wells. The workflow is tailored to the challenges in hydrocarbon exploration: a) complex error functions with multiple local minima (due to the non-linearity of the simulated geologic processes), b) numerous unknowns and significant computing resources required for each simulation (due to the large size of the models and the extended timescales), c) non-trivial assessment of the sensitivity of geological models to unknown parameters, and d) solutions fraught with uncertainty because of sparse or imperfect data and approximations in the processes simulated by the SFM.

We use Dionisos as the SFM (Granjeon and Joseph, 1999). Dionisos allows 3D simulation of a variety of geological processes controlling stratigraphy and depositional environments in most sedimentary settings (continental, deltaic, deep-water, carbonates and evaporites). Dionisos uses simplified deterministic transport and sedimentation laws based on gravity-driven and water-driven diffusion. These laws simulate the average net effect of many individual events, and are therefore only suitable for broad-scale models with large time-steps.
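To make the diffusion-based transport idea concrete, the sketch below advances a 1D topographic profile by one large time-step, with sediment flux proportional to local slope. The function name, the single combined diffusivity, and the proximal-cell sediment input are simplifying assumptions for illustration only, not Dionisos's actual formulation.

```python
import numpy as np

def diffusion_step(z, sediment_flux_in, k_water, k_gravity, dx, dt):
    """One explicit time-step of diffusion-based sediment transport on a
    1D topographic profile z (m). Flux is proportional to local slope,
    with separate water-driven and gravity-driven diffusivities (m^2/yr).
    Illustrative sketch only, not Dionisos's transport law."""
    slope = np.gradient(z, dx)           # local slope (dimensionless)
    k = k_water + k_gravity              # combined diffusivity (m^2/yr)
    flux = -k * slope                    # downslope sediment flux (m^2/yr)
    dz = -np.gradient(flux, dx) * dt     # erosion/deposition from flux divergence
    dz[0] += sediment_flux_in * dt / dx  # clastic input at the proximal cell
    return z + dz
```

Repeated over many such time-steps (with deposition recorded per step), this kind of rule builds up a layered stratigraphic fill; averaging over large time-steps is what makes the approach suitable only for broad-scale models.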

The automatic calibration workflow comprises five main steps:
Step 1) Data analysis: analysis of available data in the studied basin, leading to a selection of key input parameters and processes to be included in the SFM models.
Step 2) Uncertain parameters: identification of parameters that can significantly influence the stratigraphic model predictions but are poorly or indirectly constrained by the data (commonly initial bathymetry, position of clastic sediment input sources into the basin, sediment partitioning between sources, etc.), together with a prior range of geologically realistic values for each. These ranges are commonly estimated from prior knowledge based on available data, literature reviews or analogues.
Step 3) Calibration constraints: specification of hard data (e.g. average sand percent, average thickness, thickness from seismic, sand percent from wells, sand percent variability, etc.) that need to be reproduced by the inversion-based models. Error functions quantitatively define how well a model honors these constraints. Individual errors are defined for specific model characteristics, locations and time intervals, either for parts of the model or for the entire model. These are combined using tolerance thresholds that account for different calibration-data weights, imperfect exploration data and imperfect SFM, preventing overfitting of the models to the calibration constraints. A model is admissible when all individual error thresholds are met.
Step 4) Inversion algorithm: a variation of the neighborhood algorithm (NA, Sambridge 1999) is used to seek admissible models (i.e. models that are consistent with the calibration constraints). In NA, several models are evaluated per iteration (computed in parallel on a computer cluster). In the first iteration, the parameters of the evaluated models are randomly selected within the prior ranges defined in Step 2. In successive iterations, NA spawns new models by sampling the parameter-space neighborhoods of the best models obtained up to that point. Model ranking uses a combination of the individual error functions defined in Step 3. Due to its parallel nature, NA tends to be strongly attracted to a single solution, which is problematic in inversion problems with multi-modal solutions. This behavior has been mitigated by upgrading NA with mechanisms that prevent over-exploitation of local minima and admissible models, thereby shifting the search to new promising areas of the solution space. We usually run the inversion algorithm until tens of admissible stratigraphic forward models are obtained.
Step 5) Analysis of admissible models: these represent the parameter combinations that yield modeled stratigraphy compatible with the calibration data. This step aims to understand which geologic controls are required to obtain admissible models, as well as the resulting reservoir-presence predictions. Besides visual inspection of individual models, we leverage: 1) histograms of input parameters to display the relative tightening of parameter uncertainty ranges after calibration, 2) correlations of input parameters and cluster analysis in the parameter space to identify a manageable number of discrete representative scenarios, 3) facies classifications for easier visualization of the properties output by the stratigraphic model, 4) summary maps for specific models or model sets and properties, and 5) conditional frequency maps (Burgess et al., 2006) to collapse the predictions from a set of models into a single map and assess uncertainty.
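Steps 3 and 4 can be sketched in miniature as follows. The parameter names, prior ranges, error thresholds and the crude neighborhood-style resampling are all illustrative stand-ins: the actual workflow runs Dionisos simulations in parallel on a cluster and uses Sambridge's Voronoi-cell sampling, neither of which is reproduced here.

```python
import random

# Hypothetical uncertain-parameter priors (Step 2) and error thresholds
# (Step 3); names and values are illustrative, not from the study.
PRIOR = {"bathymetry_m": (200.0, 2000.0), "supply_km3_my": (5.0, 50.0)}
THRESHOLDS = {"thickness_err": 0.25, "sand_pct_err": 0.30}

def sample_prior():
    """Draw one parameter set uniformly from the prior ranges."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PRIOR.items()}

def perturb(model, frac=0.1):
    """Resample near a good model: a crude stand-in for the Voronoi-cell
    neighborhood sampling of the real NA."""
    out = {}
    for k, (lo, hi) in PRIOR.items():
        step = frac * (hi - lo)
        out[k] = min(hi, max(lo, model[k] + random.uniform(-step, step)))
    return out

def is_admissible(errors):
    # A model is admissible only when every individual error meets its threshold.
    return all(errors[k] <= t for k, t in THRESHOLDS.items())

def calibrate(run_sfm, n_iter=20, per_iter=8, n_keep=2):
    """run_sfm(params) -> dict of individual errors; it stands in for a
    full SFM simulation plus error evaluation against calibration data."""
    evaluated, admissible = [], []
    for it in range(n_iter):
        if it == 0:
            batch = [sample_prior() for _ in range(per_iter)]
        else:
            # Rank by combined error and spawn new models near the best ones.
            best = sorted(evaluated, key=lambda m: sum(m[1].values()))[:n_keep]
            batch = [perturb(p) for p, _ in best for _ in range(per_iter // n_keep)]
        for params in batch:
            errs = run_sfm(params)
            evaluated.append((params, errs))
            if is_admissible(errs):
                admissible.append(params)
    return admissible
```

In this toy form the search over-exploits the current best models, which is exactly the single-solution attraction the upgraded NA mitigates; the real workflow adds mechanisms to push the search away from already-found minima.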

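The conditional frequency maps used in Step 5 (Burgess et al., 2006) reduce to a simple per-cell frequency count across the admissible-model set; the minimal sketch below assumes facies grids are available as integer arrays, and the function and argument names are hypothetical.

```python
import numpy as np

def conditional_frequency_map(facies_grids, reservoir_code):
    """Collapse a set of admissible-model facies grids into a single map
    giving, per cell, the fraction of models predicting reservoir facies
    there. facies_grids: list of 2D integer arrays of equal shape."""
    stack = np.stack(facies_grids)               # shape (n_models, ny, nx)
    return (stack == reservoir_code).mean(axis=0)
```

Cells where the frequency is high are consistently predicted as reservoir across admissible scenarios, while intermediate values flag where the calibrated models disagree and uncertainty remains.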
Examples and results: Tertiary of the Gulf of Mexico and Mesozoic of West Siberia
The workflow for automatic calibration is illustrated for two different exploration applications:
a) The first example is deepwater exploration in the Miocene of the Gulf of Mexico, with some subsalt areas and difficult-to-image reservoirs. The models cover a 400 x 460 km area, from the delta to the abyssal plain, with a grid cell size of 10 x 10 km, and simulate almost 6 my of geological time. Models were calibrated to available well data and seismic thicknesses. More than 70000 models were evaluated, of which 317 were considered admissible.
b) The second example is onshore exploration for slope stratigraphic traps in the Neocomian of the West Siberia basin. Most of the area is covered only by 2D seismic, and stratigraphic traps are difficult to image and identify. The models cover a 380 x 520 km area, from the delta topsets to the slope and basin-floor settings, with a grid cell size of 10 x 10 km, and simulate 12.6 my of geological time. The models were constructed sequentially for six units, resulting in more than 200000 scenarios evaluated, of which 1000 were considered admissible.
In both cases, calibrated modelling tested a wide range of parameters and scenarios in terms of accommodation, clastic supply, and transport systems. The results show the potential of combining SFM and automatic calibration using inversion algorithms with conventional play-based exploration and seismic interpretation to delineate prospective areas for reservoir presence, allowing “sweet spotting” for further detailed exploration and filling in “white space” where data support fails. Furthermore, the methodology allows explorers to test a range of hypotheses that might or might not be compatible with available data, to quantify geological processes, to identify uncertainties impacting the distribution of gross depositional environments and reservoir presence, and to relate their effects on reservoir-presence predictions to geological controls.

References
Bornholdt, S., U. Nordlund, and H. Westphal, 1999, Inverse stratigraphic modeling using genetic algorithms, in J.W. Harbaugh, et al. (eds), Numerical Experiments in Stratigraphy: Recent Advances in Stratigraphic and Sedimentologic Computer Simulation, v. 62: Tulsa, SEPM Special Publications, p. 85-90.
Burgess, P.M., H. Lammers, C. van Oosterhout, and D. Granjeon, 2006, Multivariate sequence stratigraphy: Tackling complexity and uncertainty with stratigraphic forward modeling, multiple scenarios, and conditional frequency maps: AAPG Bulletin, v. 90, p. 1-19.
Cross, T.A., and M.A. Lessenger, 1999, Construction and application of a stratigraphic inverse model, in J.W. Harbaugh, et al. (eds), Numerical Experiments in Stratigraphy: Recent Advances in Stratigraphic and Sedimentologic Computer Simulation, v. 62: Tulsa, SEPM Special Publications, p. 70-83.
Granjeon, D., and P. Joseph, 1999, Concepts and applications of a 3-D multiple lithology, diffusive model in stratigraphic modeling, in J.W. Harbaugh, et al. (eds), Numerical Experiments in Stratigraphy: Recent Advances in Stratigraphic and Sedimentologic Computer Simulation, v. 62: Tulsa, SEPM Special Publications, p. 197-210.
Imhof, M., and A.K. Sharma, 2006, Quantitative seismostratigraphic inversion of a prograding delta from seismic data: Marine and Petroleum Geology, v. 23, p. 735-744.
Lawrence, D.T., M. Doyle, and T. Aigner, 1990, Stratigraphic simulation of sedimentary basins - concepts and calibrations: AAPG Bulletin, v. 74, p. 273-295.
Lessenger, M., and T.A. Cross, 1996, An inverse stratigraphic simulation model - is stratigraphic inversion possible?: Energy Exploration & Exploitation, v. 14, p. 627-637.
Paola, C., 2000, Quantitative models of sedimentary basin filling: Sedimentology, v. 47, p. 121-178.
Sambridge, M., 1999, Geophysical inversion with a neighborhood algorithm - searching a parameter space: Geophysical Journal International, v. 138, p. 479-494.

AAPG Search and Discovery Article #120098 © 2013 AAPG Hedberg Conference Petroleum Systems: Modeling the Past, Planning the Future, Nice, France, October 1-5, 2012