Reservoir Geochemistry: The Changing Landscape From the 1950s to the Present

AAPG Hedberg Conference, The Evolution of Petroleum Systems Analysis


In the late 1950s, R. Milhone (Chevron, production department) was using constant-temperature gas chromatography (GC) to separate gases. Observing this work, L. W. Slentz and J. Andersen (Chevron) conceived of extending the approach to oils, and perhaps identifying a "fingerprint" of one oil for comparison with others. In the late 1950s, Slentz and Andersen applied heat tape to a GC column and periodically increased its temperature (the first use of temperature programming in the oil industry), resolving C6-C9 compounds. In February 1960, they conducted the first technical-service application of this technology, differentiating Summerland, CA beach tar from oil produced from a nearby well. During the 1960s-70s, Perkin-Elmer and Hewlett-Packard developed temperature-programmable gas chromatographs, and Slentz and Andersen developed packed columns to generate reproducible C6-C18 chromatograms. They went on to install ~10 of these instruments in Chevron US laboratories and overseas operating companies.

In the 1970s, petroleum geochemists at Chevron coined the term "reservoir geochemistry" to distinguish new, reservoir-management applications from the more established, exploration-related applications of petroleum geochemistry. Chevron then published a suite of studies showcasing different applications of reservoir geochemistry in the oil and gas industry: Slentz (1981) proposed that the composition of an oil or water could be used as a "fingerprint" characteristic of a reservoir; Kaufman et al. (1990) showed that tubing-string leaks can be identified and quantified using this fingerprint method; Hwang and Baskin (1995) showed that oil fingerprints (and other bulk properties) did not change in a large-scale reservoir over more than 20 years; and McCaffrey et al. (1996) showed that matrix algebra applied to GC and GCMS peak heights can be used to allocate production among discrete reservoirs, and that GCMS data can be used to predict fluid-viscosity variations with depth in heavy-oil reservoirs.
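The matrix-algebra allocation idea attributed above to McCaffrey et al. (1996) can be illustrated with a minimal least-squares sketch. All numbers, peak sets, and the `allocate` helper below are hypothetical, for illustration only; published workflows use carefully selected peaks, constrained solvers, and extensive quality control.

```python
import numpy as np

# Hypothetical peak-height "fingerprints" for two end-member oils
# (columns) measured on five GC peaks (rows). Values are illustrative.
E = np.array([
    [10.0, 2.0],
    [ 5.0, 8.0],
    [ 1.0, 6.0],
    [ 7.0, 3.0],
    [ 4.0, 9.0],
])

def allocate(mixture, end_members):
    """Least-squares estimate of end-member fractions from the peak
    heights of a commingled sample, clipped to non-negative values and
    normalized to sum to 1 (a simple stand-in for the matrix-algebra
    allocation approach)."""
    f, *_ = np.linalg.lstsq(end_members, mixture, rcond=None)
    f = np.clip(f, 0.0, None)
    return f / f.sum()

# A synthetic 60/40 commingled sample built from the two end members.
mix = E @ np.array([0.6, 0.4])
print(allocate(mix, E))  # approximately [0.6, 0.4]
```

With more end members than can be resolved, or with noisy peaks, the same system is solved in an overdetermined sense, which is why many diagnostic peaks are carried rather than a handful.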
In the early-to-mid 1990s, geochemists at BP, Shell, Total, Statoil, and the University of Newcastle improved geochemical assessments of reservoir continuity by integrating geochemical data with geological and engineering data, and by modeling the rates of different fluid-mixing mechanisms in the reservoir (e.g., England, 2007). Shell geochemists developed two independent methods of measuring the similarity of oil samples to evaluate reservoir connectivity: (i) multi-dimensional gas chromatography (MDGC), which measures the abundance of 12 gasoline-range alkyl-benzene compounds; and (ii) a pair-wise comparison of a large number of HRGC peaks in the two oil samples. Fuex et al. (2003) described a centrifuge experiment on a live oil sample demonstrating that gravity segregation can explain the origin of a large compositional gradient.

In the 2000s, workers at the University of Calgary and elsewhere dramatically improved our understanding of in-reservoir oil biodegradation, allowing petroleum geochemistry to be used to understand and predict viscosity variations in biodegraded oil accumulations. The advent of mud-gas isotope logging in the early 2000s provided new high-resolution natural tracers for characterizing gases in reservoirs. Integrated time-lapse studies, such as Chouparova et al. (2010), illustrated dynamic changes in communication between conventional reservoirs. Recently, with the advent of unconventional reservoirs, time-lapse petroleum geochemistry has become key to assessing changes in drained rock volume over time and to optimizing development strategies for stacked pay in plays such as the Permian Basin, Eagle Ford, and Bakken (e.g., Laughland and Baskin, 2015; Kornacki, 2017; Jweda et al., 2017).
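The pair-wise comparison of many HRGC peaks mentioned above can be sketched with a simple correlation score. The `fingerprint_similarity` function and the peak-height vectors below are hypothetical stand-ins; the actual Shell method uses curated peak sets and statistical thresholds not reproduced here.

```python
import numpy as np

def fingerprint_similarity(peaks_a, peaks_b):
    """Pearson correlation of log-transformed peak heights: one simple
    way to score how alike two oil fingerprints are (illustrative only;
    real pair-wise methods use selected peak ratios and calibrated
    cutoffs). Returns a value in [-1, 1], with 1 = identical shape."""
    a = np.log(np.asarray(peaks_a, dtype=float))
    b = np.log(np.asarray(peaks_b, dtype=float))
    return float(np.corrcoef(a, b)[0, 1])

oil_1 = [120.0, 45.0, 80.0, 33.0, 60.0]
oil_2 = [118.0, 47.0, 78.0, 35.0, 58.0]   # near-identical fingerprint
oil_3 = [40.0, 110.0, 25.0, 90.0, 15.0]   # distinct fingerprint

# Oils from a connected reservoir should score much higher than oils
# from separate compartments.
print(fingerprint_similarity(oil_1, oil_2) > fingerprint_similarity(oil_1, oil_3))  # True
```

In connectivity studies such scores are computed for every sample pair and clustered, so that samples grouping together are interpreted as candidates for a shared, connected compartment.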