
Sunday, March 16, 2008

Trek


I've never owned a Trek, but this guy started a revolution in lightweight yet strong carbon-framed bikes. And his Wisconsin-based company supported Lance Armstrong in his quest for multiple Tour de France victories.

Richard A. Burke's college yearbook photo (Marquette '56):

Friday, March 07, 2008

Street Lamp Understanding of the Shock Model

Q:"why look for your lost keys only under the streetlamp?"
A:"cuz that's where the light is"

I found a mathematical convenience that can perhaps add some clarity to the understanding of the Oil Shock Model. In the past I have used a few other devices to get the point across, including an electrical circuit analogy and the use of gamma distributions. The new construction I present came up in a previous comment at TOD.

Consider a hypothetical situation in which the discovery profile fits a Gaussian density in time. Then consider that each phase shift in the production history also follows a Gaussian, with its own mean and variance. It follows (in keeping with the formulation of the Shock Model) that the convolution of one Gaussian density with another results in a third Gaussian (proof left to the reader¹). The resultant widths add in quadrature and the peak shifts by the sum of the two offsets. This essentially means that after N repeated convolutions the peak shifts by the sum of the N means and the width broadens by the square root of the sum of the squares of the N standard deviations. Both the Shock Model and this hypothetical analysis use mathematical convolution to demonstrate how the oil production curve shifts in time from the initial discovery profile. In the global situation this shift often manifests itself as a latency of decades (the current shift runs at about +40 years).
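For reference, here is the identity written out (a standard result, not specific to this post), with N_{μ,σ²} denoting the Gaussian density of mean μ and variance σ²:

```latex
% Standard identity: the convolution of two Gaussian densities is a
% Gaussian whose mean and variance are the sums of the inputs'.
\left(\mathcal{N}_{\mu_1,\sigma_1^2} * \mathcal{N}_{\mu_2,\sigma_2^2}\right)(t)
  = \int_{-\infty}^{\infty} \mathcal{N}_{\mu_1,\sigma_1^2}(\tau)\,
      \mathcal{N}_{\mu_2,\sigma_2^2}(t-\tau)\, d\tau
  = \mathcal{N}_{\mu_1+\mu_2,\;\sigma_1^2+\sigma_2^2}(t)
```

Applying this N times gives a Gaussian with mean μ1+...+μN and variance σ1²+...+σN², which is exactly the shift-and-broaden behavior described above.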


The repeated convolutions also cause the initial discovery profile to broaden due to probabilistic considerations, essentially explainable by the uncertainties in when production and the other preceding phases actually start on a given discovered region. The only way the profile can sharpen after discovery is through increases in the extraction rate, driven by demand, technological advancements, or various other production shocks. These non-stochastic properties alone can counteract the relentless advance of entropy.

The curves above correspond to the equations below, where w = 1 and dt = 5 years:
P0 = exp(-t^2/2/w^2)/sqrt(w^2)
-- Discovery phase centered at 0, width = w

P1 = exp(-(t-dt)^2/2/(w^2+w^2))/sqrt(w^2+w^2)
-- Fallow phase, mean latency = dt, variance = w^2

P2 = exp(-(t-2*dt)^2/2/(w^2+w^2+w^2))/sqrt(w^2+w^2+w^2)
-- Construction phase, mean latency = dt, variance = w^2

P3 = exp(-(t-3*dt)^2/2/(w^2+w^2+w^2+w^2))/sqrt(w^2+w^2+w^2+w^2)
-- Maturation phase, mean latency = dt, variance = w^2

P4 = exp(-(t-4*dt)^2/2/(w^2+w^2+w^2+w^2+w^2))/sqrt(w^2+w^2+w^2+w^2+w^2)
-- Production phase, draw-down time = dt, variance = w^2
The use of Gaussians to describe the convolutions allows a closed-form solution at each stage. For background on how the central limit theorem also plays into this, see here. Note, however, that the closed-form solution only holds if the Gaussians extend into negative time, which unfortunately breaks the time causality of discovery and production. But like I said, if we use this simply as a means to a better understanding of how the continuous model manifests itself, by truncating the negative tails and eyeballing the curves, we can essentially live with the approximation. It gives us an intuitive shorthand for understanding how the latencies add up and how the resultant peak can broaden given the underlying density functions.
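As a sanity check on those closed forms, here is a minimal numpy sketch (my own, not from the original post; it uses normalized Gaussians, unlike the unnormalized curves above) that convolves the phases numerically and confirms that the means add and the variances add in quadrature:

```python
import numpy as np

w, dt = 1.0, 5.0                  # width and per-phase latency from the text
step = 0.01
t = np.arange(-15.0, 45.0, step)  # time grid wide enough to hold the tails

def gaussian(t, mu, var):
    """Normalized Gaussian density with mean mu and variance var."""
    return np.exp(-(t - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

# Closed form: after n convolutions the peak sits at n*dt and the
# variance grows to (n+1)*w**2 -- the widths add in quadrature.
analytic = [gaussian(t, n * dt, (n + 1) * w ** 2) for n in range(5)]

# Brute force: convolve the discovery profile with the phase kernel four
# times, tracking the time origin of the growing convolution grid.
kernel = gaussian(t, dt, w ** 2)
numeric, t0 = analytic[0], t[0]
for n in range(1, 5):
    numeric = np.convolve(numeric, kernel) * step   # full convolution
    t0 += t[0]                                      # time origins add under convolution
    tn = t0 + step * np.arange(len(numeric))
    mean = np.sum(tn * numeric) * step
    var = np.sum((tn - mean) ** 2 * numeric) * step
    print(f"P{n}: mean {mean:.2f} (expect {n * dt:.1f}), "
          f"variance {var:.2f} (expect {(n + 1) * w ** 2:.1f})")
```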



¹ For an alternative explanation of how to derive the Gaussian convolution identity, look up the concept of Fourier transforms here. The derivation works out simply if we use the identities that a convolution in the time domain corresponds to a multiplication in the frequency domain, and that the Fourier transform of a Gaussian results in another Gaussian.
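Written out under the convention F{g}(f) = ∫ g(t) e^(-2πift) dt, the two identities combine as follows (my sketch, not from the original footnote):

```latex
% Fourier transform of a zero-mean Gaussian is again a Gaussian:
\mathcal{F}\left\{ e^{-t^2/2\sigma^2} \right\}(f)
   = \sigma\sqrt{2\pi}\; e^{-2\pi^2 \sigma^2 f^2}
% Convolution theorem: multiply in the frequency domain, so the
% exponents (and hence the variances) simply add:
\mathcal{F}\{g_1 * g_2\}(f)
   \propto e^{-2\pi^2 \sigma_1^2 f^2}\, e^{-2\pi^2 \sigma_2^2 f^2}
   = e^{-2\pi^2 (\sigma_1^2 + \sigma_2^2) f^2}
```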

Tuesday, March 04, 2008

Creaming Curves and Dispersive Discovery

Try looking up information on the oil industry term "creaming curve" via a Google search. In relative Google terms, you don't find a heck of a lot. Of the top hits, this paper gives an Exxon perspective with regard to a practical definition of the distinctively shaped curve:
Conventional wisdom holds that for any given basin or play, a plot of cumulative discovered hydrocarbon volumes versus time or number of wells drilled usually show a steep curve (rapidly increasing volumes) early in the play history and a later plateau or terrace (slowly increasing volumes). Such a plot is called a creaming curve, as early success in a play is thought to inevitably give way to later failure as the play or basin is drilled-up. It is commonly thought that the "cream of the crop" of any play or basin is found early in the drilling history.
This seems a simple enough description, and so you would expect a bit of basic theory to back up how the curve gets derived, perhaps via elementary physical and statistical processes. Alas, I don't see much explanation on a cursory level besides a bit of empirical hand-waving and statistically insignificant observations such as the Exxon paper describes. And the #3 Google hit brings you back to the site of yours truly, who basically plays amateur sleuth on fossil fuel matters. I definitely don't have any particular hands-on experience with regard to creaming curves, and won't pretend to, but I can try to add a statistical flavor to the rather empirical explanations that dot the conventional-wisdom landscape. The paucity of fundamental theory, combined with the fact that my own tepid observations rank high on a naive internet search, tells me that we have a ripe and fertile field to explore with regard to creaming data.

The fresh idea I want to bring to the table regards how the Dispersive Discovery model fits into the dynamics of creaming curves. As the Exxon definition describes the x-axis in terms of time or number of wells drilled, one could make the connection that this corresponds to the probe metric that Dispersive Discovery uses as the independent variable. The probe in general describes a swept volume of the search space. If the number of wells drilled corresponds linearly to a swept volume, then the dispersive curve maps the independent variable to the discovered volume via two scaling parameters, D0 and k:
D(x) = D0*x*(1-exp(-k/x))
and then we map the variable x to the number of wells drilled. Changing the x parameter to time requires a mapping of time to a rate of increase in x:
x=f(t)
I assert that this mapping has to be at least monotonically increasing; it could accelerate if technology gets added to the mix (faster and faster search techniques over time), and it could decelerate if physical processes such as diffusion play a role (Fick's law of parabolic growth):

diffusion =>
x = A*sqrt(t)


accelerating growth =>
x = B*t^N

steady growth =>
x = C*t

The last relation essentially says that the number of wildcats or the number of wells drilled accumulates linearly with time. If we can justify this equivalence, then an elementary creaming curve has the same appearance as a reserve growth curve for a limited reservoir area. The concavity of the reserve growth curve or creaming curve has everything to do with how the dispersive swept volume increases with time:
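To make the mapping concrete, here is a minimal Python sketch that runs the three growth laws through the dispersive discovery function; the constants A, B, C, N, D0, and k are purely illustrative assumptions, not fitted values:

```python
import numpy as np

D0, k = 1.0, 10.0   # illustrative scaling parameters (not fitted values)

def dispersive_discovery(x):
    """Cumulative discovery D(x) = D0*x*(1 - exp(-k/x)) over swept volume x."""
    return D0 * x * (1.0 - np.exp(-k / x))

t = np.linspace(0.5, 100.0, 200)   # elapsed exploration time

# Three candidate mappings from time t to swept search volume x = f(t).
mappings = {
    "diffusion x=A*sqrt(t)": 0.5 * np.sqrt(t),
    "accelerating x=B*t^N ": 0.01 * t ** 2,     # N = 2 here
    "steady       x=C*t   ": 0.2 * t,
}

for name, x in mappings.items():
    D = dispersive_discovery(x)
    # Every mapping climbs toward the same asymptote D0*k; only the
    # concavity of the climb differs, which sets the creaming-curve shape.
    print(f"{name}  D(t=100) = {D[-1]:.2f}  (asymptote = {D0 * k:.1f})")
```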

Regarding the historical "theoretical" justifications for creaming curves, I found a few references to modeling the dynamics of the curve with a hyperbola, i.e. an x^(1/N) shape. This has some disturbing characteristics, principal among them the lack of a finite asymptote. So we know that this wouldn't fit the bill for a realistic model. On the other hand, the Dispersive Discovery model has (1) a statistical basis for its derivation, (2) a quasi-hyperbolic climb, and (3) a definite asymptotic behavior which aligns with the reservoir limit.

For the curious, the Dispersive Discovery model also has a nice property that allows quick-and-dirty curve fitting. Because it basically follows affine transformations, one parameter governs the asymptote while the other stretches the orthogonal axis. This means that we can draw a single curve and distort its shape along independent axes, thereby generating an eyeball fit fairly rapidly. (Unfortunately a curve such as the Logistic used in peak modeling does not have the affine transformation property, making curve fitting not eyeball-friendly.)
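To make that affine claim concrete (my own restatement, not from the original post), substitute u = x/k to collapse D onto a single master shape:

```latex
g(u) = u\left(1 - e^{-1/u}\right), \qquad
D(x) = D_0\, k\; g\!\left(x/k\right), \qquad
\lim_{x \to \infty} D(x) = D_0\, k
```

So k stretches the horizontal axis, the product D0·k sets the asymptote, and fitting reduces to sliding one fixed shape along two independent axes; a hyperbolic x^(1/N), by contrast, never levels off.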

We can look at a few examples of creaming curves and their similarity to dispersive discovery.

This site, http://www.hubbertpeak.com/blanchard/, referenced in PolicyPete, analyzes creaming curves for Norwegian oil:

At this point I overlaid a dispersive discovery curve over the "Theory" curve that PolicyPete alludes to:

PolicyPete does not come close to specifying his "theory" in any detail, but the simple dispersive discovery model lies closely on top of it with a definite asymptote.

Another creaming curve analysis, from "Wolf at the Door", results in this curve:

The red curve above references a "hyperbolic" curve fit, while the figure below includes the Dispersive Discovery fit.

The fit here arguably proves better than the hyperbolic and gives a definite asymptote, which the hyperbolic curve would gradually and eventually overtake.

One can apply the same model fitting to natural gas. From discussions at TOD, the following chart shows the continuously updated creaming curve for USA NG.



I originally did some non-creaming analysis using Hubbert's 1970s data and arrived at an asymptote of 1130 Tcf using Dispersive Discovery. An updated curve, using new-field wildcats instead of Hubbert's cumulative depth drilled, yields an asymptote of 1260 Tcf from a least-squares fit.
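For anyone wanting to reproduce this kind of least-squares fit, here is a minimal sketch using scipy; the data points are hypothetical placeholders standing in for the actual wildcat/Tcf series, which you would substitute in:

```python
import numpy as np
from scipy.optimize import curve_fit

def dispersive_discovery(x, D0, k):
    """Cumulative discovery vs. cumulative number of new-field wildcats."""
    return D0 * x * (1.0 - np.exp(-k / x))

# Hypothetical placeholder data: cumulative wildcats (thousands) vs.
# cumulative discovered gas (Tcf). Replace with the real series.
wildcats = np.array([20., 50., 100., 150., 200., 250., 300.])
tcf      = np.array([300., 550., 800., 950., 1020., 1080., 1120.])

(D0, k), _ = curve_fit(dispersive_discovery, wildcats, tcf, p0=(10.0, 100.0))
print(f"D0 = {D0:.2f}, k = {k:.1f}")
print(f"asymptote (ultimate discoverable) = D0*k = {D0 * k:.0f} Tcf")
```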


Hubbert's plot from the 1970s indicates the correspondence between cumulative footage and the number of new wildcats; we assume that every new wildcat adds a fixed additional amount of cumulative footage. This allows a first-order approximation for dates well beyond what Hubbert had collected.


As a caveat to the analysis, I would caution that the number of wildcats drilled may not correspond to the equivalent swept-volume search space. It may turn out that every new wildcat drilled results from a correspondingly deeper and wider search net. If this turns out to be a more realistic depiction of the actual dynamics, we can easily apply a transform from cumulative number of wildcats to cumulative swept volume, in a manner analogous to the way we mapped time to swept volume in the case of reserve growth.

I like how this all fits together like a jigsaw puzzle, giving a workable unification of the concepts behind technology-assisted discovery, creaming curves, and the "enigmatic" reserve growth. It also has the huge potential of giving quantitative estimates for the ultimate "cream level", thanks to the well-behaved asymptotic properties of the dispersive discovery model. And it basically resolves the issue of why no one has ever tried to predict the levels for the "hyperbolic" theory: no clear asymptote results from any hyperbolic curve without adding a great deal of complexity (both in understanding and computation).



Update:
Laherrere provides a post to TOD on Arctic creaming curves.
ME: Yes, good to see someone here who essentially produces half of the referenced and cited (and high-quality) graphs concerning oil depletion.

The one pressing question I always have is how the interpolated and extrapolated smooth lines get drawn on these figures. We all know that the oil production curves tend to use the Logistic as a fitting function, but we don't have a good handle on what most analysts use for discovery curves and creaming curves. In particular, I have seen several references to creaming curves being modeled as "hyperbolic" curves, yet find little in the way of fundamental analysis to make any kind of connection.

Based on statistical considerations I am convinced that the discovery and creaming curves result from a relatively simple model that I have outlined on TOD. I have a recent post where I make the connection from dispersive discovery to creaming curves here:
http://mobjectivist.blogspot.com/2008/03/creaming-curves-and-dispersive....

In the following figure I apply the Dispersive Discovery function to one of the data sets on your graph. This function is simple to formulate and it produces a finite asymptote which you can use to estimate the "ultimate discoverable" (150 GBoe for NG in the following).

Response from Laherrere.
Every time that I plot a creaming curve, I am amazed to see how easy it is to model with several hyperbolas, but this doesn't explain why, except that on earth everything is curved. Linear is just a local effect (horizontal with the bubble, vertical with the mead) being the tangent of a curve. I found the same thing with fractals: it is a curve, so I took the simplest second degree curve : the parabola.

For creaming, hyperbola is the simplest with an asymptote. But the most important is to use several curves because exploration is cyclical. But another important point is to define the boundaries of the area. If the area is too big, it may combine apples and oranges making it difficult to find a natural trend. If the area is too small it will have too little data to find a trend. The best is to select a large Petroleum System which is a natural domain. The Arctic area is an artificial boundary and not a geological one.
I agree that the bigger the better, as the statistics improve and local geological variations play less of a factor.

Sunday, March 02, 2008

Natural Gas Analysis

This chart from TOD poster Jon Friese brings up some interesting issues:

First of all, one rarely finds this kind of data for crude oil, where each year gets tallied separately. I figure the reason it doesn't show up very often is that the maturation of oil wells does not correspond to a given year very concisely. A variable maturation time essentially pushes the actual production into the next few years, so the chart wouldn't have the same columnar contrast. On the other hand, the draw-down of a natural gas reservoir occurs immediately, and so the yearly data shows up very strikingly.

The Oil Shock Model handles the analysis fairly well. I basically set the maturation level to zero years and tried to emulate the chart's look.

Overlay below

Not having the actual discovery data available, I used empirical fits to generate the individual curves. The base curve for the years prior to 1980 follows the relationship:

Base = K * (0.55*exp(-T/2) + 0.45*exp(-0.15*T/2))

This essentially gives a fast slope and a slower slope, which approximates a reserve growth component that I have reported on before. The individual yearly production curves also show a similar behavior (here small t measures the time elapsed from the start of each year's production):

Yearly = Gain * (0.9*exp(-t/2) + 0.1*exp(-0.1*t/2))

The slow portion contributes only 10% of the bulk of the growth, so the reserve growth doesn't amount to much.

The other piece of the fit involves the contribution of the Gain, which basically generates the envelope of the curve, starting from the initial point in the data collection at 1980.

Gain = 1 - 0.7*exp(-0.02*T)

This gain function basically demonstrates that continually greater amounts of natural gas get extracted per year, but the trend does not show that a peak will arrive any time soon. It instead suggests that new wells get constructed to meet the demand in Texas. The time constant for each year's output remains a pretty quick 2 years, so that when a drop-off in production occurs, it will happen fairly quickly.
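Pulling the three empirical pieces together, here is a minimal Python sketch of how the chart emulation can be reconstructed; the year range, time grid, and the constant K are illustrative assumptions, not values from the original fit:

```python
import numpy as np

def base(T, K=1.0):
    """Contribution of the pre-1980 wells, decaying over T years after 1980."""
    return K * (0.55 * np.exp(-T / 2) + 0.45 * np.exp(-0.15 * T / 2))

def gain(T):
    """Envelope governing how much each new year's wells contribute."""
    return 1.0 - 0.7 * np.exp(-0.02 * T)

def yearly(t):
    """Single-year production profile, t = years since that year's start."""
    return 0.9 * np.exp(-t / 2) + 0.1 * np.exp(-0.1 * t / 2)

years = np.arange(0, 25)          # yearly cohorts starting in 1980 (illustrative)
T = np.arange(0.0, 25.0, 0.1)     # years elapsed since 1980

# With the maturation time set to zero, each year's cohort switches on
# immediately; stack the cohorts on top of the decaying base.
total = base(T)
for start in years:
    t = T - start
    total += np.where(t >= 0, gain(start) * yearly(np.clip(t, 0, None)), 0.0)
# 'total' now approximates the envelope of the stacked columnar chart.
```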