Unification
I envision the modeling of global oil depletion as the intersection of three salient behaviors: the dynamics of oil extraction, the dynamics of oil discovery, and the effects of reserve growth. I figure that I have a good handle on the first two, governed by the Oil Shock Model and the Cubic Growth Discovery Model; the mathematical convolution of the two gives, in my opinion, a great first-order view of historical oil production. However, I have not figured out exactly how to fold reserve growth into the framework to gain some additional predictive utility from the model, which leads to my ongoing quest for a suitable unifying factor to bridge the three dynamics.
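To make the convolution idea concrete, the fragment below convolves a cubic discovery curve with a single first-order extraction lag. All parameter values are hypothetical, and the single exponential kernel is only a stand-in for the chain of lagged phases in the full shock model:

```python
# A minimal sketch of the shock-model idea: production as the convolution
# of a discovery curve with an extraction-response kernel.  All parameter
# values here are hypothetical and chosen purely for illustration.
import numpy as np

years = np.arange(120)               # years since the start of discovery
k = 1e-4                             # hypothetical cubic-growth constant
discoveries = k * years**3           # yearly discoveries growing cubically

# A single first-order lag stands in for the chain of lagged phases in
# the full model; rate is a hypothetical 10%/yr extraction rate.
rate = 0.1
kernel = rate * np.exp(-rate * years)
kernel /= kernel.sum()               # normalize so no oil is double-counted

# Yearly production = discoveries convolved with the response kernel
production = np.convolve(discoveries, kernel)[:len(years)]
```

The production curve lags the discovery curve and never exceeds it, which is the first-order behavior described above; the real model replaces the single lag with several convolved phase delays.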
Khebab has done yeoman's work in trying to retrofit a reserve growth term to the Oil Shock Model. He essentially first pointed out the limited predictive utility of relying on discoveries alone, without a reserve-like growth term serving to "soften the landing". However, I understand Khebab's modification to the OSM only up to a point. My personal impasse occurs at deciding the point in time at which we introduce reserve growth into the dynamics, and in particular whether it should occur at discovery (i.e. a form of backdating) or during the field maturation phase. Both alternatives stem from a basic physical consideration -- namely, that it takes some mean time from the moment of discovery to the generation of a mature producing field. But that may in fact prove a bit too concrete an assumption, and I have come to realize that I should rethink how discovery estimates actually figure into the life-cycle. I have a feeling it really has to do with appearances and our perception of a discoverable quantity. After all, this perception provides the only basis for how much an oil company plans to drill in a promising region at any given moment in time.
To fuzzify the discovery term I refer back to a recent post on how reserve growth estimates on a single region may actually play out, paying careful attention to the limits of human perception. In that micro-model, we take the simple premise that our estimate of the reserve remains proportional to the depth of knowledge of the reserve, and that the variance of this estimate equals the estimate itself (a maximum entropy estimator). I referred to this as a "depth of confidence" random variable. The fuzziness of the random variable allows it to bleed into the fixed/finite depth of the actual reservoir. So in a statistical sense, fuzzy perception mixes together an estimate that includes a concrete notion of how far we can measure into the pool (i.e. the mean) with a fluctuating variance that could include probes beyond the fixed size of the reservoir. Although I would not go so far as to say that it follows any kind of quantum mechanical uncertainty principle, the analogy certainly applies.
I reproduce the basic estimation equations below, with h denoting the current "depth of confidence" and L0 the fixed depth of the pool:

h = mean (conservative) depth estimate

P(probe beyond the pool) = exp(-L0/h)

E = h - h*exp(-L0/h) = h*(1 - exp(-L0/h))
The third line basically splits out the current conservative measure with a probability that we have extended beyond the depth of the finite pool. The reason for the slow uptake in reserve growth lies in the fact that as the "depth of confidence" increases, our variance also increases. So we have two competing effects: as we get closer to the fixed volume in our depth variable, the fuzziness in our estimate continues to increase, leading to a slow asymptotic glide to the final ultimate reserve limit. This essentially puts a mathematical framework on how we improve our perception of a largely unmeasurable quantity -- i.e. the fixed volume of a largely hidden reservoir buried beneath the earth. As a caveat, if you personally don't believe in the
sqrt(variance) ~ mean
premise, you won't get the exponential term in the preceding equation; the reserve growth then rises linearly until it hits the finite limit and table-tops, i.e. no gradual asymptotic reserve growth. So in essence, we have a model that reflects perception of what we know about a discovered amount rather than the reality of the discovery -- something in fact largely unknowable or, at the very least, speculative within bounds. The unifying connection between reserve growth on a single reservoir and estimates of discovery on a larger, or even global, scale involves only substituting a suitable expression for the "depth of confidence" growth term with time. For a single reservoir, we can assume that the depth term grows at least linearly with time, since the estimates start improving immediately after discovery. However, for the globe's largely unexplored regions, the equivalent depth grows geometrically over time (say, as the cumulative of cubic growth), as ever-increasing swaths of volume are evaluated for oil content, starting from the historical beginning of modern oil exploration in 1858.
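To make the perceptual estimator concrete, here is a small sketch of the single-reservoir case. The form E = h*(1 - exp(-L0/h)) is the one implied by the global discovery equation that follows; the names L0 (true pool size), c (linear growth rate of the depth of confidence), and both function names are my own hypothetical choices:

```python
# Sketch of the "depth of confidence" reserve-growth estimator.
# L0 is the hidden, finite reservoir size; h(t) is the depth of
# confidence, assumed to grow linearly after discovery for a single
# reservoir.  All values are hypothetical, for illustration only.
import math

L0 = 100.0   # true (finite) reservoir size
c = 5.0      # hypothetical linear growth rate of the depth of confidence

def fuzzy_estimate(t):
    """Estimate under sqrt(variance) ~ mean: slow asymptotic growth."""
    h = c * t
    return h * (1.0 - math.exp(-L0 / h)) if h > 0 else 0.0

def hard_estimate(t):
    """No-variance alternative: linear rise, then a flat 'table-top'."""
    return min(c * t, L0)
```

The fuzzy curve creeps asymptotically toward L0 without ever reaching it, while the no-variance alternative rises linearly and table-tops exactly at t = L0/c.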
This leads to the following equation for cumulative discovery (asymptotic value = D0):

D = k*t^4 * (1 - exp(-D0/(k*t^4)))

and the derivative of this for instantaneous discoveries (e.g. yearly discoveries):

dD/dt = 4*k*t^3 * (1 - exp(-D0/(k*t^4)) * (1 + D0/(k*t^4)))
We show the discovery profile for the dimensionless dD/dt below. Note that the reserve component leaps out, as the profile has a pronounced tail. This basically provides the missing link between the previous cubic model's two physical regimes:
It took me a while to merge the rather stark decline that occurs from hitting the discovery volume limit (which I had modeled as a negative feedback term) with the gradual downslope that I figured had to occur from diminishing returns caused by the perception of apparent reserve growth.
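Lacking the original plot, the two regimes can be checked numerically from the equations above. In the sketch below, k and D0 are hypothetical dimensionless values; at small t the exponential term vanishes and dD/dt reduces to the pure cubic-growth rate 4*k*t^3, while at large t the profile decays gradually, producing the pronounced tail:

```python
# Sketch of the fuzzy discovery profile dD/dt and its cumulative.
# k and D0 are hypothetical, dimensionless values for illustration.
import math

k, D0 = 1.0, 1.0

def dDdt(t):
    """Instantaneous discoveries under the perceptual (fuzzy) limit."""
    if t == 0:
        return 0.0
    u = D0 / (k * t**4)
    return 4 * k * t**3 * (1.0 - math.exp(-u) * (1.0 + u))

def cumulative(t):
    """Cumulative discoveries; approaches the asymptote D0."""
    return k * t**4 * (1.0 - math.exp(-D0 / (k * t**4))) if t > 0 else 0.0
```

Expanding the exponential for large t shows the tail falls off like 2*D0^2/(k*t^5), a gradual downslope rather than the stark cutoff of the hard-limit cubic model.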
Now, I don't personally know the name of the specific mathematical formalism that this model likely follows (the converse of Simulated Annealing?). I know it exists somewhere, but I don't feel bad that I don't know it at the moment. After all, the geniuses at Shell, Exxon, BP, and countless geologists haven't yet placed it either, or even stumbled across it, as far as I can tell.
As a bottom line, I finally have my Holy Grail (I'm not dead yet) of a unified Discovery+Reserve+Shock model. This should allow some fairly robust predictive capability. Stay tuned.
3 Comments:
Nice result! If I understand correctly, your model is now completely parametric, where new discovery additions (and reserve growth?) depend only on the k and D0 parameters. I can't wait to see a real-life application of this approach.
I agree that the application of reserve growth is tricky, and the choice of the starting point of reserve growth (at discovery time or at maturation time) generally has a great impact on the final result.
I agree that the tricky part with this model is that we can't use any kind of backdated discovery data for calibration, since the (perceptual) reserve growth gets applied during maturation. For comparison against historical discovery data, it is best to use the initial discovery estimate, which is actually harder to come by because backdating routinely gets applied to the public data.
Backdating works against us because this is a perceptual causal model, whereas to backdate means to go backward in time modifying our perception in a non-causal fashion.
I hope to elaborate a bit more on that soon.
And one other thing, Khebab: this also means that the latency factors in the shock model have to be reduced; in particular, the maturation time shrinks significantly, since the apparent reserve growth is now taken care of by this model.