
Friday, September 16, 2005

Oil Shock Model

Update: I added an appendix at the bottom that provides the algorithm for the oil shock model. Per Bubba's suggestion, I am thinking of how to redisplay the URR plot to better guide the eye.

Following up on my critique of Stuart Staniford's estimate of global cumulative endpoint oil production (aka the URR estimate), I found enough additional misinformation and plain mistakes in SS's model to refine my macro peak oil model (1, 2) further.

The mistake that Staniford made in applying the logistic model to BP's data came from using the wrong initial (pre-1965) cumulative production. Look at the following plot from TOD and you will see a telling "cusp" at the top of the curve:

That cusp disappears when one applies the right initial cumulative data; Staniford added about 100 Billion Barrels (BBls) too much to this data. This becomes obvious when you notice that his cumulative as of 2004 sits at about 1050 BBls -- 10% over the generally accepted value of 952 BBls from the USGS.

This time I downloaded the data from BP's World Energy 2005 page and made a more serious attempt at a model fit. Based on comments at The Oil Drum, two issues that I consider very important, (a) using historical data and (b) using realistic yet simple models, apparently rub some people the wrong way. I think I have good arguments covering both concerns.

For some reason, the use of historical data causes certain oil watchers to scoff at us "backward" thinkers; they don't realize that we can learn a lot, both in a Bayesian sense and by avoiding repeating the same mistakes over and over again (how shocking). Otherwise, I can only buy the argument up to a point; for example, until we hit the peak, we can't predict the peak with 100% certainty. Having a real inflection point or an obvious concavity does wonders for this class of analytical model.

I also find the simplicity of the model important -- something Staniford agrees with. Unfortunately, he simplifies to such an extent, by using the logistic curve, that he only has a single parameter to adjust. I find that much like having to make a prediction given only the mean of some data set. I would at least like to have a couple of parameters, say the mean and variance, to improve my chances. Here, Staniford throws away valuable information that has a good physical basis in order to achieve a simple empirical fit. I side with a good physical model that will give good intuition and insight into the problem at hand.

With that in mind I used my convolution-based Macro Peak Oil model to fit the BP data along with the USGS cumulative value of 952 BBls. The following uses a symmetric Triangular Discovery1 curve starting in 1944 with a width of 87 years and normalized to an URR value of 2400 BBls.
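To make the discovery input concrete, here is a quick sketch (in Python rather than the Ada of the appendix, purely for illustration) of that normalized triangular window, using the width and URR values just quoted:

```python
import numpy as np

WIDTH = 87      # years, from the post
URR = 2400.0    # BBls, from the post

# Symmetric triangular window peaking at the midpoint (1987 for a 1944 start)
t = np.arange(WIDTH)
half = WIDTH / 2.0
triangle = np.where(t < half, t / half, (WIDTH - t) / half)

# Scale so the discovery curve integrates to the URR
discovery = triangle * (URR / triangle.sum())
print(discovery.sum())   # total discovered volume equals the URR
```

Any other discovery profile could be dropped in here; the triangle is just the simplest shape that matches the data shown in the footnote.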

The early years (pre-1970) of my model feature an oil extraction rate of 6% of total content per year. For an exponential drop-off, this gives a 1/e point of 1/0.06, or approximately 16.7 years.
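As a sanity check on that number (a trivial calculation, not part of the original post):

```python
alpha = 0.06              # extraction rate: fraction of content per year
tau = 1.0 / alpha         # time for exp(-alpha * t) to decay to 1/e
print(round(tau, 1))      # -> 16.7 years
```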

I add three shocks in the years 1973-1974, 1979-1980, and 1991-1992, whereby the oil extraction rate changed dramatically. Because of the stationary properties of the Markov model, I can simply adjust the rates of the normalized exponential term in the middle of the convolution without violating the stochastic nature of oil extraction over time. The shocks tend to momentarily suppress and flatten the production/consumption rate. (As a technical aside, the suppressive dip in the output comes about mathematically from the exponential normalization)
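One way to picture the shock mechanism (a hedged Python sketch, not the Ada convolution algorithm in the appendix; the function name and the discrete bookkeeping here are my own): each year extract a fixed fraction of whatever reserve remains, and step that fraction down at each shock:

```python
import numpy as np

def shocked_extraction(rates, spans):
    """Discrete extraction kernel: each year a fraction of the remaining
    reserve is produced; the fraction switches at each shock.
    rates/spans: per-segment extraction rate and duration in years."""
    out = []
    remaining = 1.0                     # fraction of reserve still in ground
    for rate, span in zip(rates, spans):
        for _ in range(span):
            produced = rate * remaining
            out.append(produced)
            remaining -= produced
    return np.array(out)

# Rates loosely following the post: 6% before 1974, then stepped down
k = shocked_extraction([0.06, 0.048, 0.035, 0.026], [30, 6, 11, 50])
# Each rate drop shows up as a momentary dip in output:
print(k[29], k[30])   # production falls abruptly at the first shock
```

The dip at each boundary is the discrete analog of the suppressive dip that the exponential renormalization produces in the convolution model.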

The three shocks correspond to the OPEC embargo, Iranian crises coupled with a deep recession, and the first Gulf war.
  Year    Extraction Rate    % change
  1974         4.8%            -20%
  1980         3.5%            -27%
  1983         2.9%            -17%
  1991         2.6%            -10%

(starting rate: 6% in 1944; rates as listed in the code appendix below)

No doubt we will get more shocks in the future. The crucial finding, to my mind: the shocks significantly delay the onset of peak oil. Before the '70s, we used oil as if it came straight out of the tap; we have since made significant corrections, both in the extraction rate and in our more conservative use of oil. We will likely get another shock fairly soon -- this will not invalidate the model results, and it will not disprove the peak oil hypothesis. It will simply serve to delay the peak a bit more. I really believe (and the model shows) that we would have hit the peak several years ago without these shocks in place.

Recession is good? How about that for back-handed insight?

The final URR fit looks like the following curve. Of course the asymptote hits 2400 BBls because of the triangular distribution I started with. I mainly want to demonstrate how the annoying cusp disappears when plotted correctly.

1 I chose the triangular curve because of the proliferation of the following type of data.

Note that I use the term "discovery" a bit loosely; in my mind it actually indicates when pumping activity actually started on the previously discovered reserves.

This code compiles with the GCC gnatmake compiler provided with most recent vintages of Linux (or available for Windows here). To build, save it as the file "shock_model.adb", then run gnatmake shock_model to produce the executable. The output comes out in a CSV format that spreadsheets can read directly: "year", "BBls/day", "cumulative BBls"

with Text_IO;
with Ada.Numerics.Elementary_Functions;

procedure Shock_Model is

   subtype Years is Natural range 1944 .. Natural'Last;

   type Shock is record
      Year : Years; -- Year of shock
      Rate : Float; -- Prod. or depletion rate (fraction / year)
   end record;
   type Shocks is array (Years range <>) of Shock;

   Width  : constant := 87;     -- Triangle window
   Volume : constant := 2400.0; -- In BBls

   S : constant Shocks :=
     ((Years'First, 0.06),  -- Starting Year
      (1974, 0.048),        -- OPEC
      (1980, 0.035),        -- Iran +
      (1983, 0.029),        -- Recession
      (1991, 0.026),        -- Gulf I
      (2150, 0.0));         -- Ending Year

   --##### You don't have to look beyond this point unless you
   --##### want to mess with the algorithm or output.

   -- General purpose array of floats indexed by year
   type Flt is array (Natural range <>) of Float;

   -- Safe array get function, truncates values out of range
   function Get (Arr : Flt; I : Integer) return Float is
   begin
      if I in Arr'Range then
         return Arr (I);
      else
         return 0.0;
      end if;
   end Get;

   -- Takes 2 arrays and returns the convolution array
   function Convolve (A, B : in Flt) return Flt is
      Total : constant Natural := A'Length + B'Length;
      C     : Flt (0 .. Total);
      V     : Float;
   begin
      for J in 0 .. Total loop
         V := 0.0;
         for I in 0 .. J loop
            V := V + Get (A, I) * Get (B, J - I);
         end loop;
         C (J) := V;
      end loop;
      return C;
   end Convolve;

   -- Creates a symmetric window, truncated left by Start
   -- The prior accumulated data is provided by Impulse
   function Triangle_Window
     (Width   : in Natural;
      Start   : in Natural := 0;
      Impulse : Float := 0.0)
      return Flt
   is
      R      : Flt (0 .. Width - 1 - Start);
      Half   : constant Natural := Width / 2;
      Offset : Natural;
   begin
      for I in R'Range loop
         Offset := I + Start;
         if Offset < Half then
            R (I) := Float (Offset) / Float (Half);
         else
            R (I) := Float (Width - Offset) / Float (Half);
         end if;
      end loop;
      R (0) := R (0) + Impulse;
      return R;
   end Triangle_Window;

   -- An exponential array populated to length
   function Exponential (Length : in Natural;
                         Alpha  : in Float) return Flt is
      use Ada.Numerics.Elementary_Functions;
      R : Flt (0 .. Length - 1);
   begin
      for I in 0 .. Length - 1 loop
         R (I) := Alpha * Exp (-Alpha * Float (I));
      end loop;
      return R;
   end Exponential;

   -- The summed difference between two arrays up to Width
   -- This is used to renormalize when an Alpha rate changes
   function Leftover (A, B : in Flt;
                      Width : in Natural) return Float is
      V1 : Float := 0.0;
      V2 : Float := 0.0;
   begin
      for J in 0 .. Width - 1 loop
         V1 := V1 + Get (A, J);
         V2 := V2 + Get (B, J);
      end loop;
      V1 := V1 + 0.5 * Get (A, Width);
      V2 := V2 + 0.5 * Get (B, Width);
      return (V1 - V2);
   end Leftover;

   -- Recursive formula to re-convolute on successive rate shocks
   -- Returns production curve in array
   function Gen
     (S     : in Shocks;  -- Input stimulus
      Left  : in Float;   -- Pre-history reserve level
      Curve : in Flt;     -- The starting curve
      TS    : in Natural) -- TimeSpan of results
      return Flt
   is
   begin
      if S'Length = 1 then -- returns on completion
         return Curve;
      else
         declare
            W1 : constant Natural := S (S'First).Year - Years'First;
            W2 : constant Natural := S (S'First + 1).Year - Years'First;
            T  : constant Flt := Triangle_Window (Width, W1, Left);
            R  : constant Flt :=
              Convolve (T, Exponential (TS, S (S'First).Rate));
         begin
            return Gen
              (S     => S (S'First + 1 .. S'Last),
               Left  => Leftover (T, R, W2 - W1),
               Curve => Curve & R (1 .. W2 - W1),
               TS    => TS);
         end;
      end if;
   end Gen;

   Time_Span : constant Natural := S (S'Last).Year - S (S'First).Year;
   R         : constant Flt := Gen (S, 0.0, (0 => 0.0), Time_Span);

   Total : Float := 0.0;
   K     : constant Float := Volume / Float (Width) * 2.0;
begin
   for I in R'Range loop
      Total := Total + K * R (I);
      Text_IO.Put_Line
        (Integer'Image (Years'First + I) & "," &   -- Year
         Float'Image (K * R (I) / 365.0) & "," &   -- Per Day
         Float'Image (Total));
   end loop;
end Shock_Model;

As a check, the red curve displays the output of the above code. The yellow curve shows what would have happened without the oil shocks. Through the roof, baby.
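For readers without an Ada toolchain, here is a rough Python rendition of the same idea (a sketch of the shock mechanism, not a line-for-line port of the convolution code; the forward-march bookkeeping is my own simplification):

```python
import numpy as np

# Parameters mirroring the Ada listing
WIDTH, VOLUME = 87, 2400.0
SHOCKS = [(1944, 0.06), (1974, 0.048), (1980, 0.035),
          (1983, 0.029), (1991, 0.026), (2150, 0.0)]

# Triangular discovery input normalized to the URR
i = np.arange(WIDTH)
half = WIDTH / 2.0
disc = np.where(i < half, i / half, (WIDTH - i) / half)
disc *= VOLUME / disc.sum()

# Per-year extraction rate, stepped down at each shock year
years = np.arange(1944, 2150)
rate = np.zeros(len(years))
for (y1, r), (y2, _) in zip(SHOCKS, SHOCKS[1:]):
    rate[(years >= y1) & (years < y2)] = r

# March forward: discoveries feed the reserve, the shock-dependent
# rate drains it
reserve = 0.0
prod = np.zeros(len(years))
for j, y in enumerate(years):
    if y - 1944 < WIDTH:
        reserve += disc[y - 1944]
    prod[j] = rate[j] * reserve
    reserve -= prod[j]

print("peak year:", years[prod.argmax()])
print("cumulative BBls: %.0f" % prod.sum())
```

The cumulative production stays below the 2400 BBls URR by construction, and the shocks push the peak out in time, which is the qualitative point of the model.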


Professor Anonymous Anonymous said...

Very Nice. I am really impressed with the great blog you have here! I was wondering how long you have been doing this?

I have a Nano-Technology
Energy Patch
site/blog. It has some simply amazing testimonials regarding the unique
technology that is good for those from 10-110. Just see if you can believe this Nano-Technology
Energy Patch related stuff. Imagine a 12 year old girl weighing less than 100 lbs establishing the
World Squat record of 275 lbs not once but 3 times !

How about an 83 Yr Old Lady that could not stand and had to be carried back and forth to the bathroom and on a Thursday evening in 15 minutes she was not only able to stand, but also walked across the room just holding onto her daughter's arm. The next night, she and her daughter were arm wrestling and for several minutes were locked in the center position. Her daughter the next day said "That was my mom's WEAK arm" :-)

Come and check out our LifeWave Energy
site/blog if you are interested in getting more energy and or sleeping better than you have in years, and I think you will be very impressed also :-)

9:11 PM  
Professor Blogger @whut said...

Weird comment above. I know it is spam, but talk about prescience!

9:18 PM  
Professor Blogger Bubba said...

I thought I posted a comment here yesterday, but now I don't see it. I must be losing my mind.

What I said was that although your curve fits the data quite nicely, the fact that the curve is turning sub-parallel to the abscissa makes it very difficult to use as a predictive tool. Moreover, it would be better to convert the X-axis units to time, after fitting the curve, for the purpose of display. This would steepen the curve where it is turning parallel to the abscissa. It would also put things in terms that people care about - i.e. how much play there is in the time to the peak and to when everything will actually end for good.

8:36 AM  
Professor Blogger @whut said...

Yes, I noticed that about the URR curve. Everyone likes to plot it that way to see how it intercepts the y-axis.

6:33 PM  
Professor Blogger JMS said...

Love that last chart. Aligns to Hubbert's prediction of 1995 (and his prediction of 2001) made before the shocks.

11:43 PM  
Professor Blogger @whut said...

A bit of emergent behavior there.

7:00 AM  
Professor Anonymous Anonymous said...

Jean Laherrère made a similar fit for the total world energy production (picture here). The fit is also poor when the cumulative production is small. However, the resulting production curve seems correct (picture here).

8:06 PM  
Professor Blogger Prof. Goose said...

damn it WHT, why don't you tell me about this stuff so we can post a link to it! yeesh! :) (good stuff, I'll try to sneak it between Rita crap tomorrow).

9:54 PM  
Professor Blogger @whut said...

Khebab, I saw this when you posted it to TOD the other day. It seems as if he uses all energy sources and pre-20th-century data for his curve, whereas BP and USGS tend to think of cumulative oil use as starting in the 1930's-40's. Otherwise the global cumulative of 950 BBls as of 2004 does not make sense.

pg, I've got one more idea up my sleeve. I have resisted doing the full numerical integration, opting instead for the convolution approach to gain insight, but the differences from the logistic equation become a bit more apparent if I show and numerically solve the differential equations. Stay tuned, I have some code I will post tonight.

8:47 AM  
Professor Anonymous Anonymous said...

Fine job, thanks! I think we can interpret the trend break at the beginning of the '70s as a result of overproduction. We know that the increase came mostly from the Middle East and mainly from Saudi Arabia. The Saudis (and others) clearly had problems with overproduction, and OPEC was a reaction to that. It was modelled after the Texas Railroad Commission oil production control system, and for the same reasons.

The '50s-'60s growth trend was clearly unsustainable and had to break. The first oil crisis did that. I interpret the oil crises of the '70s and '80s as real production crises. Production was simply bumping up against real constraints. The hypothetical curve you have drawn shows this. It was impossible to attain. More and more of the production increase now comes from non-conventional (in the '70s sense) production that was not technologically possible before and behaves differently.

We could think of a hypothetical "natural" or "optimal" production curve. The real production "broke out" from it upwards by overproduction in the '50s and '60s and later the shocks brought the production violently back to the curve. Now we have been more or less on that "basic" production curve that coincides with your data based model.

This helps us see OPEC's role more clearly. Basically it has done the right and unavoidable thing. Besides, we might guess that we are now again a little over the optimal curve, and there is a suspicion of overproduction. And we know that Russia (Yukos) has probably overproduced, possibly recklessly. Offshore production typically has a steeper production curve, and there it is easy to push production up a little too quickly. So curves with a good fit like these can point to possible problems.

These deviations can be explained as purely stochastic or otherwise. But one of the uses of this kind of modelling is to find the points that command attention. Now we can make a tentative forecast that we will have a sharper break downwards from the present. According to BP, oil production growth was 3.5% in 2003 and 4.2% in 2004. The last time we saw this kind of sustained growth (more than 2 years in a row) was before 1974. The growth even reached this level temporarily for the last time in 1988 (3.9%; in 2000 it was 3.6%). This year the growth will be markedly lower (Katrina, Rita and all) and probably next year too.

6:00 AM  
Professor Blogger Matthew said...

The oil shocks are slowing the initial ramping up of production, which is correct, but by fitting the curve that way it also has the effect of slowing the ultimate rate of decline. It doesn't make conceptual sense to me why a delay in reaching the peak would lead to a lower rate of decline post-peak.

7:47 AM  
Professor Blogger Matthew said...

Actually, I take that back - since you didn't reach the same level of production, you might have fewer total wells, etc. built, and so not able to deplete as fast. Potentially makes sense, depending on the circumstances of the shocks.

7:53 AM  

