[[ Check out my Wordpress blog Context/Earth for environmental and energy topics tied together in a semantic web framework ]]

Monday, January 17, 2011

The Oil ConunDRUM

I synthesized the last several years of blog content into a book tentatively called The Oil ConunDRUM (ultimately titled Mathematical Geoenergy, published by Wiley/AGU in 2019). This document turned into a treatise on topics relating to the role of disorder and entropy in the applied sciences. Volume 1 mainly covers the analysis of the decline in global oil production, while Volume 2 applies related analysis to renewable sources of energy and to how entropy plays a role in our environment and everyday life.





This is a list of the novel areas of research, in what I consider a ranked order of originality:
  1. The Oil Shock Model.
    A data flow model of oil extraction and production which allows for perturbations.

  2. The Dispersive Discovery Model.
    A probabilistic model of resource discovery which accounts for technological advancement and a finite search volume.

  3. The Reservoir Size Dispersive Aggregation Model.
    A first-principles model that explains and describes the size distribution of oil reservoirs and fields around the world.

  4. Solving the Reserve Growth "enigma".
    An application of dispersive discovery on a localized level which models the hyperbolic reserve growth characteristics observed.

  5. Shocklets.
    A kernel approach to characterizing production from individual fields.

  6. Reserve Growth, Creaming Curve, and Size Distribution Linearization.
    An obvious linearization of this family of curves, related to Hubbert Linearization (HL) but more useful since it stems from first principles.

  7. The Hubbert Peak Logistic Curve explained.
    The Logistic curve is trivially explained by dispersive discovery with exponential technology advancement.

  8. Laplace Transform Analysis of Dispersive Discovery.
    Dispersion curves are solved by looking up the Laplace transform of the spatial uncertainty profile.

  9. The Maximum Entropy Principle and the Entropic Dispersion Framework.
    The generalized math framework applied to many models of disorder, natural or man-made. Explains the origin of the entroplet.

  10. Gompertz Decline Model.
    Exponentially increasing extraction rates lead to steep production decline.

  11. Anomalous Behavior in Dispersive Transport explained.
    Photovoltaic (PV) material made from disordered and amorphous semiconductors shows poor photoresponse characteristics. Solutions to simple entropic dispersion relations, or to the more general Fokker-Planck equation, give good agreement with the data over orders of magnitude in current and response times.

  12. Framework for understanding Breakthrough Curves and Solute Transport in Porous Materials.
    The same disordered Fokker-Planck construction explains the dispersive transport of solute in groundwater or liquids flowing in porous materials.

  13. The Dynamics of Atmospheric CO2 buildup and extrapolation.
    Used the oil shock model to convolve a fat-tailed CO2 residence time impulse response function with a fossil-fuel stimulus. This shows the long latency of CO2 buildup very straightforwardly.

  14. Terrain Slope Distribution Analysis.
    Explanation and derivation of the topographic slope distribution across the USA. This uses mean energy and maximum entropy principle.

  15. Reliability Analysis and understanding the "bathtub curve".
    Using a dispersion in failure rates to generate the characteristic bathtub curves of failure occurrences in parts and components.

  16. Wind Energy Analysis.
    Universality of the wind energy probability distribution, found by applying maximum entropy to the mean energy observed. Data from Canada and Germany. (A short sketch of the argument appears just after this list.)

  17. Dispersion Analysis of Human Transportation Statistics.
    Alternate take on the empirical distribution of travel times between geographical points. This uses a maximum entropy approximation to the mean speed and mean distance across all the data points.

  18. The Overshoot Point (TOP) and the Oil Production Plateau.
    How increases in extraction rate can maintain production levels.

  19. Analysis of Relative Species Abundance.
    Dispersive evolution of species according to Maximum Entropy Principle leads to characteristic distribution of species abundance.

  20. Lake Size Distribution.
    Analogous to explaining reservoir size distribution, this uses similar arguments to derive the distribution of freshwater lake sizes. This provides a good feel for how often super-giant reservoirs and Great Lakes occur, by comparison.

  21. Labor Productivity Learning Curve Model.
    A simple relative productivity model based on uncertainty of a diminishing return learning curve gradient over a large labor pool (in this case Japan).

  22. Project Scheduling and Bottlenecking.
    Explanation of how uncertainty in meeting project deadlines or task durations caused by a spread of productivity rates leads to probabilistic schedule slips with fat-tails. Answers why projects don't complete on time.

  23. The Stochastic Model of Popcorn Popping.
    The novel explanation of why popcorn popping follows the same bell-shaped curve of the Hubbert Peak in oil production.

  24. The Quandary of Infinite Reserves due to Fat-Tail Statistics.
    Demonstrated that even infinite reserves can lead to limited resource production in the face of maximum extraction constraints.

  25. Oil Recovery Factor Model.
    A model of oil recovery which takes into account reservoir size.

  26. Network Transit Time Statistics.
    Dispersion in TCP/IP transport rates leads to the measured fat-tails in round-trip time statistics on loaded networks.

  27. Language Evolution Model.
    Model for relative language adoption which depends on critical mass of acceptance.

  28. Web Link Growth Model.
    Model for relative popularity of web sites which follows a diminishing return learning curve model.

  29. Scientific Citation Growth Model.
    Same model used for explaining scientific citation indexing growth.

  30. Particle and Crystal Growth Statistics.
    Detailed model of ice crystal size distribution in high-altitude cirrus clouds.

  31. Rainfall Amount Dispersion.
    Explanation of rainfall variation based on dispersion in rate of cloud build-up along with dispersion in critical size.

  32. Earthquake Magnitude Distribution.
    Distribution of earthquake magnitudes based on dispersion of energy buildup and critical threshold.

  33. Income Disparity Distribution.
    Relative income distribution which includes an inflection point due to compounding interest growth on investments.

  34. Insurance Payout Analysis, and Hyperbolic Discounting.
    Fat-tail analysis of risk and estimation.

  35. Thermal Entropic Dispersion Analysis.
    Solving the Fokker-Planck equation or Fourier's Law for thermal diffusion in a disordered environment. A subtle effect.

  36. GPS Acquisition Time Analysis.
    Engineering analysis of GPS cold-start acquisition times.
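
As one illustration of how these maximum entropy arguments work in practice, here is a minimal sketch of the wind energy case from item 16, under the assumption that only the mean kinetic energy is constrained: maximum entropy then gives an exponential distribution in energy, and since energy scales as the square of wind speed, the speed itself follows a Rayleigh distribution. The numbers below are illustrative placeholders, not the Canada or Germany data.

# MaxEnt sketch for wind speed: constrain the mean energy only.
mean_energy = 1.0                                               # arbitrary units (illustrative)
p_energy = ->(e) { Math.exp(-e / mean_energy) / mean_energy }   # exponential in energy
# Change of variables E = v^2 (constants absorbed): p(v) = p_E(v^2) * 2v,
# which is a Rayleigh distribution in wind speed.
p_speed = ->(v) { p_energy.call(v * v) * 2.0 * v }
(0..8).each { |i| v = 0.5 * i; puts "v=#{v}  p(v)=#{p_speed.call(v).round(4)}" }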

You can refer back to details in the blog, but The Oil ConunDRUM cleans everything up. It features quality mathematical markup, references to scholarly work, a full subject index, a hypertext table of contents, several hundred captioned figures, footnotes and sidebars with editorial commentary, embedded historical documents, source code appendices, and a table of nomenclature and a glossary.



EDIT (1/21/11): Here is a critique from TOD. I can only assume the commenter doesn't understand the concept of convolution or doesn't realize that such a useful technique exists (see the short sketch after the quoted comment):
Your methods are fundamentally flawed you cannot aggregate across producing basins like you do. Its simply wrong.
To add multiple producing basins together you must adjust the time variable such that all of them start production at the same time or if they have peaked all the peaks are aligned.
The time that a basin was discovered and put into production is an irrelevant random variable and has no influence on the ultimate URR.
If you don't correctly normalize the time variable across basins your work is simply garbage. There is no coupling between basins and no reason to average them based on real time. Its junk math. No simple function exists in real time to describe the aggregate production profile.

The US simply happened to have its larger basins developed about the same time in real time. Hubbert's original analysis worked simply because the error in the normalized time and real time was small.
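
For reference, the convolution in question just spreads each year's discoveries forward in time with a response kernel and sums the shifted copies. A minimal sketch follows, with made-up discovery numbers and an assumed first-order extraction kernel (illustration only, not the model's data):

# Discrete convolution: spread each input year forward by the kernel and sum.
def convolve(signal, kernel)
  out = Array.new(signal.length + kernel.length - 1, 0.0)
  signal.each_with_index do |s, i|
    kernel.each_with_index { |k, j| out[i + j] += s * k }
  end
  out
end

discoveries = [10.0, 40.0, 25.0, 5.0]                  # made-up yearly discoveries
rate = 0.07
kernel = (0..60).map { |n| rate * (1.0 - rate)**n }    # first-order (exponential) response
puts convolve(discoveries, kernel).take(10).map { |x| x.round(2) }.inspect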

One of the mysteries of science and mathematics is the role of entropy. The mathematician Gian-Carlo Rota of MIT had this to say about the Maximum Entropy Principle: "Among all mathematical recipes, this is to the best of my knowledge the one that has found the most striking applications in engineering practice." Yet the principle retains a sense of mystery, in that no one can really prove it -- entropy just IS, and by its existence you have to deal with it the best you can.

EDIT (1/31/11): The last prediction of global crude production that I made in the book was a while ago. Here is an update:


The chart above is the best-guess model from 2007, using the combined Dispersive Discovery + Oil Shock Model for crude. Apart from a conversion from barrels/year to barrels/day, this is the same model I used in a 2007 TOD post and documented in The Oil ConunDRUM. The recent data from the EIA are shown as green dots, going back to 1980. I always find it interesting to take the 10,000-foot view: what may look like a plateau up close may actually be part of the curve at a distance.

EDIT (2/22/2011): An additional USA Shock Model not included in the book. I included Alaska in this model.

Discovery data transcribed from this figure; the discoveries seem to end in 1985, so I extended the data with a dispersive discovery model. I added in the Alaska North Slope at 22 billion barrels in 1968 and a small 300-million-barrel starter discovery in 1858.
The blue line is the Dispersive Discovery Model, given by this equation, which is essentially a scaled version of the world model:
DD(t) = (1 - exp(-URR/(B*(t-t')^6))) * B*(t-t')^6, with URR = 240,000 million barrels, B = 2E-7, and t' = 1835.
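
A minimal sketch that evaluates this curve directly with the stated parameters (the sampling years are arbitrary):

URR = 240_000.0   # million barrels
B   = 2.0e-7
T0  = 1835.0      # t' in the equation above

# Cumulative dispersive discovery as a function of year t.
def dd(t)
  g = B * (t - T0)**6
  (1.0 - Math.exp(-URR / g)) * g
end

(1900..2050).step(25) { |yr| puts "#{yr}  #{dd(yr.to_f).round} million barrels" }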

I did not include any perturbation shocks, to keep it simple. Apart from the data, the following is the entirety of the Ruby code; the discovery.txt file contains the yearly discovery data from the first graph. The second graph shows reserve.out and production.out.


# three successive first-order lag stages at 7%/year, then extraction at 8%/year
cat discovery.txt | ruby exp.rb 0.07 | ruby exp.rb 0.07 | ruby exp.rb 0.07 > reserve.out
cat reserve.out | ruby exp.rb 0.08 > production.out

$ cat exp.rb

def exp(a, b)
  rate = b
  temp = 0.0                      # carry-over held from previous years
  for i in 0...a.length           # three dots: stop at the last valid index
    level = a[i].to_f + temp      # what has accumulated in this stage so far
    puts level * rate             # fraction that moves on to the next stage this year
    temp = level * (1.0 - rate)   # remainder carried forward to next year
  end
end

exp(STDIN.readlines, ARGV[0].to_f)

25 Comments:

Professor Blogger Joshua Stults said...

Bravo.

I downloaded the pdf, but it will take a while to digest. I hadn't seen your GPS acquisition time analysis before, interesting...

6:13 AM  
Professor Blogger @whut said...

Thanks, the GPS example was the concrete engineering application in the bunch.

8:53 AM  
Professor Anonymous Anonymous said...

On the first page of chapter 32 you use the word "weny". What does that word mean?

3:54 PM  
Professor Blogger Unknown Human said...

It is full of tremendous insight. You have done a great service to the world.

9:06 PM  
Professor Blogger @whut said...

Thanks for the comments.

Bratticus, I think the word "weny" somehow slipped through the spell-check -- it should be "went", as in "went beyond mere speculation".

11:40 PM  
Professor Blogger Unknown said...

Have you given any thought to a book? I've used lulu.com to publish my own book. The barrier to entry in terms of cost is as low as it gets.

4:31 PM  
Professor Blogger @whut said...

Advice taken, but as far as I am concerned, it's in a book form now. It has small form if you want to print it out, and it actually reads fine on a Kindle. Options are still open so I will see how it goes, thanks.

7:39 PM  
Professor Blogger Unknown said...

Book = wider audience.

Lulu isn't the only publish-on-demand service around; it is just the one I know. Essentially, for the price of purchasing one draft at cost, you get free webstore space, an ISBN, and a listing on Amazon.com. Lulu wants the manuscript in PDF format. If you are lucky, your manuscript will already match one of their sizes; design a cover and you are done. If not, I can help with the LaTeX-to-PDF setup that I used.

6:03 AM  
Professor Blogger Unknown said...

First reaction: this is the only detailed discussion of a logically and mathematically coherent modelling approach to the timing and shape of the decline in the rate of global oil production that I have seen (other than perhaps Peter Wells of Neftex http://www.7gen.com/blog/20081026/24961-aspousa-2008). Plus fascinating applications of the underlying math and statistical logic to all sorts of apparently different areas of analysis. Awe inspiring.

Second reaction: took 700+ pages to convolute, disperse and otherwise smear out a bunch of input data a few times until it turned into a smooth curve that looked like the right shape, then added perturbations tailored to make it look like it fits the real world; the chart of the Shock model on page 363 with oil + NGPL less than 80 mbpd in 2010 (when it was more) is not proving accurate, everyone should ignore this.

6:11 AM  
Professor Blogger Remco Bouckaert said...

Equation 4.2: p(r|g) should be p(x|g,t), otherwise the constant 1/gt is inconsistent with Equation 4.1

10:22 AM  
Professor Blogger @whut said...

Remco, thanks for the eagle eye.

Scarmani, one of the big challenges is trying to do the accounting for "all liquids" properly. This is a huge analysis problem because various other energy sources get mixed in with the mainly crude component (and some of this obviously gets double-counted if oil gets used to manufacture biofuels, for example). All I can say is that the crude-only predictions are coming true, and all that is left to do is follow the perturbations down the slope.

10:43 AM  
Professor Blogger JMS said...

Well done, sir. It is curious to me that people think 10 minutes and a paragraph of criticism will stand up to 5 years of work and thought done for its own sake. That kind of ego-driven mania should be left to the experts, like me.
Cheers. I hope you get the proper feedback this effort deserves and an audience that appreciates it and can help you improve it in a broader venue.
-Jon

9:57 AM  
Professor Blogger @whut said...

Yeah, it's been a long (monkey)grind. Thanks for supplying the humor over the stretch.

10:35 AM  
Professor Blogger Unknown said...

All this is very novel and interesting research. I'm a chemical engineer. I'd encourage you to publish several papers in various journals demonstrating the maximum entropy principle. I think you'd reach a lot more people who are able to make changes with articles rather than web pages.

Reading your stuff on the Terrain Slope Distribution Analysis, "bathtub curve" and Wind Energy Analysis makes everything seem simple, but it's not. I've been to engineering talks where the emeritus professor of chemical engineering at UQ bemoaned not being able to solve crystal growth DEs analytically. And yet, you've given a solution here!

I'd encourage you to publish in a journal, but don't compromise or dumb down your research.

(PS. I know how frustrating writing papers for journals is).

8:12 PM  
Professor Blogger @whut said...

Thanks for the advice.

About simplicity, I would say to first-order, much of the analysis boils down simply. The higher-order details are indeed complicated, yet I believe that the disorder is great enough to obscure all that complexity. That is really what is at the bottom of terrain slope, failure rates, and wind speed dispersion.

The question is whether journal editors will appreciate the simplification down to what is observable, instead of isolating some specific physical phenomena, which is where all the sunk costs of research lie.

BTW, I have published materials science research in the Journal of Crystal Growth, so I understand what you are saying, but I am also thinking in terms of a paradigm shift relating to how information will best get disseminated in the future. It's really a gamble on my part, thinking that "crowd-sourcing" of science may work instead of publishing in obscure journals.

8:39 PM  
Professor Blogger Steve From Virginia said...

I came over here because some analyst had an article discussing Monte Carlo method and I knew you had some good stuff that I could compare and contrast ...

and I notice in your spare time you've put together this massive quantitative analysis of energy depletion 'business'. My hat's off to you! What a tremendous job.

I have only started reading through it. Have you talked to anyone @ ASPOUSA or Energy Bulletin about this?

2:09 PM  
Professor Blogger @whut said...

Steve, I will get some info to them, thanks.

8:07 AM  
Professor Blogger Unknown said...

Well, I'd still encourage you to publish some of your stuff in journals. Even obscure journals are given more weight than a book published on the net, for people in arguments with Lynch, Yergin et al. The shock model is one of the best models I've seen. Reminds me of pharmacokinetic models for drug ADME (absorption, distribution, metabolism, elimination).

2:28 AM  
Professor Blogger Joshua Stults said...

Thought you'd be interested in this short discussion on rough estimates of wind energy availability.

7:17 AM  
Professor Blogger @whut said...

Thanks, I was just looking for that ref. It is quite an interesting thesis that extracting wind energy has these calculable limits. They do say that wind energy is a MaxEnt condition.

9:39 PM  
Professor Blogger @whut said...

Note to self: The figures on p. 609 on scientific citation distributions are flipped. The first is the PDF and the second is the CDF.

5:31 PM  
Professor Blogger Steve Short said...

Fantastic piece of work. My congratulations. I have been reading it over the last weekend, after recently lurking on your comments over at Judith Curry's blog. I am a geochemist and chemothermodynamicist who sometimes designs hydrometallurgical processes. With respect to MEP and MAXENT etc., it is clear you know Shannon and Dewar's work well. Have you been following the more recent work of Axel Kleidon and his associates at the Max Planck Institute for Biogeochemistry? Here's a nice recent one in the online accessible journal Earth System Dynamics:
Quantifying the thermodynamic entropy budget of the land surface: is this useful?
N. A. Brunsell, S. J. Schymanski, and A. Kleidon.

5:57 PM  
Professor Blogger @whut said...

Yes, I have been reading Kleidon's work recently. He is very good at working global energy arguments and some of his findings are eye-opening, like how much potential renewable energy can be extracted by humans. (not as much as we might think, according to my interpretation of his writings)

3:48 AM  
Professor Blogger D Coyne said...

Hi WHT,

You seem to be interested in diverse subjects. If you have any interest in econometrics (essentially applying statistics to economics) and how the maxent principles might be applied in that area the following book may be of interest: "Maximum Entropy Econometrics: Robust Estimation with Limited Data" by A. Golan, G. Judge, and D. Miller. I came across it when looking for texts to review the econometrics I have forgotten after reading the recent (May 11, 2012) IMF paper discussed at the Oil Drum.

D Coyne

4:31 AM  
Professor Blogger iron said...

THANKS FOR SHARING SUCH A AMAZING WORK
NICE WORK
Energy Analysis in USA

9:19 PM  



"Like strange bulldogs sniffing each other's butts, you could sense wariness from both sides"