MOBJECTIVIST
re: Facts on the inevitable world-wide energy transition, see https://GeoEnergyMath.com

Mathematical Geoenergy (2020-03-18)
<div style="background-color: white; border: 0px; box-sizing: border-box; color: #444444; font-family: Lato, sans-serif; font-size: 18px; margin-bottom: 1.7em; outline: 0px; padding: 0px; vertical-align: baseline;">
Our book <a href="https://www.wiley.com/en-us/Mathematical+Geoenergy%3A+Discovery%2C+Depletion%2C+and+Renewal-p-9781119434290" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Mathematical Geoenergy</a> presents a number of novel approaches, each of which deserves a research paper of its own. Here is the list, ordered roughly by importance (IMHO):</div>
<ol style="background-color: white; border: 0px; box-sizing: border-box; color: #444444; font-family: Lato, sans-serif; font-size: 18px; list-style-image: initial; list-style-position: initial; margin: 0px 0px 1.7em 3em; outline: 0px; padding: 0px; vertical-align: baseline;">
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Laplace’s Tidal Equation Analytic Solution</span>.<br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch11" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 11</a>, <a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch12" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">12</a>)</span> A solution of a Navier-Stokes variant along the equator. Laplace’s Tidal Equations are a simplified version of Navier-Stokes, and the equatorial topology allows an exact closed-form analytic solution. This could qualify for the Clay Institute Millennium Prize if the practical implications are considered, but it’s a lower-dimensional solution than a complete 3-D Navier-Stokes formulation requires.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Model of El Niño/Southern Oscillation (ENSO)</span>.<br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch12" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 12</a>)</span> A tidally forced model of the equatorial Pacific’s thermocline sloshing (the ENSO dipole), which assumes a strong annual interaction. Not surprisingly, this uses the Laplace’s Tidal Equation solution described above; otherwise the tidal pattern connection would have been discovered long ago.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Model of Quasi-Biennial Oscillation (QBO)</span>.<br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch11" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 11</a>)</span> A model of the equatorial stratospheric winds, which cycle by reversing direction roughly every 28 months. This incorporates the idea of an amplified cycling of the sun and moon nodal declination pattern acting on the atmosphere’s tidal response.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Origin of the Chandler Wobble</span>.<br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch13" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 13</a>)</span> An explanation for the ~433 day cycle of the Earth’s Chandler wobble. Finding this is a fairly obvious consequence of modeling the QBO.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">The Oil Shock Model.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch5" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 5</a>)</span> A data flow model of oil extraction and production which allows for perturbations. We are seeing this in action with the recession caused by oil supply perturbations due to the coronavirus pandemic.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">The Dispersive Discovery Model.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch4" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 4</a>)</span> A probabilistic model of resource discovery which accounts for technological advancement and a finite search volume.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Ornstein-Uhlenbeck Diffusion Model</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch6" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 6</a>)</span> Applying Ornstein-Uhlenbeck diffusion to describe the decline and asymptotic limiting of flow from volumes such as those found in fracked shale oil reservoirs.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">The Reservoir Size Dispersive Aggregation Model.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch4" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 4</a>)</span> A first-principles model that explains and describes the size distribution of oil reservoirs and fields around the world.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Origin of Tropical Instability Waves (TIW)</span>.<br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch12" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 12</a>)</span> As the ENSO model was developed, a higher harmonic component was found which matches the TIW.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Characterization of Battery Charging and Discharging</span>.<br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch18" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 18</a>)</span> Simplified expressions for modeling Li-ion battery charging and discharging profiles by applying dispersion on the diffusion equation, which reflects the disorder within the ion matrix.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Anomalous Behavior in Dispersive Transport explained.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch18" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 18</a>)</span> Photovoltaic (PV) devices made from disordered and amorphous semiconductor material show poor photoresponse characteristics. Solving simple entropic dispersion relations, or the more general Fokker-Planck equation, leads to good agreement with the data over orders of magnitude in current and response time.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Framework for understanding Breakthrough Curves and Solute Transport in Porous Materials.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch20" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 20</a>)</span> The same disordered Fokker-Planck construction explains the dispersive transport of solute in groundwater or liquids flowing in porous materials.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Wind Energy Analysis</span>.<br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch11" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 11</a>)</span> Universality of the wind energy probability distribution, found by applying maximum entropy to the observed mean energy. Fits to data from Canada and Germany yield a universal BesselK distribution, which improves on the conventional Rayleigh distribution.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Terrain Slope Distribution Analysis.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch16" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 16</a>)</span> Explanation and derivation of the topographic slope distribution across the USA. This uses the mean energy and the maximum entropy principle.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Thermal Entropic Dispersion Analysis</span>.<br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch14" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 14</a>)</span> Solving the Fokker-Planck equation or Fourier’s Law for thermal diffusion in a disordered environment. A subtle effect, but the result is a simplified expression not involving the <em style="border: 0px; box-sizing: border-box; font-family: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">erf</em> transcendental function. Useful in ocean heat content (OHC) studies.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">The Maximum Entropy Principle and the Entropic Dispersion Framework.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch10" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 10</a>)</span> The generalized math framework applied to many models of disorder, natural or man-made. Explains the origin of the entroplet.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Solving the Reserve Growth “enigma”.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch6" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 6</a>)</span> An application of dispersive discovery on a localized level which models the hyperbolic reserve growth characteristics observed.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Shocklets.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch7" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 7</a>)</span> A kernel approach to characterizing production from individual oil fields.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Reserve Growth, Creaming Curve, and Size Distribution Linearization.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch6" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 6</a>)</span> An obvious linearization of this family of curves, related to Hubbert Linearization but more useful since it stems from first principles.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">The Hubbert Peak Logistic Curve explained.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch7" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 7</a>)</span> The Logistic curve is trivially explained by dispersive discovery with exponential technology advancement.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Laplace Transform Analysis of Dispersive Discovery.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch7" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 7</a>)</span> Dispersion curves are solved by looking up the Laplace transform of the spatial uncertainty profile.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Gompertz Decline Model.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch7" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 7</a>)</span> Exponentially increasing extraction rates lead to steep production decline.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">The Dynamics of Atmospheric CO2 buildup and Extrapolation.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch9" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 9</a>)</span> Convolving a fat-tailed CO2 residence time impulse response function with a fossil-fuel emissions stimulus. This shows the long latency of CO2 buildup very straightforwardly.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Reliability Analysis and Understanding the “Bathtub Curve”.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch19" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 19</a>)</span> Using a dispersion in failure rates to generate the characteristic bathtub curves of failure occurrences in parts and components.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">The Overshoot Point (TOP) and the Oil Production Plateau.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch8" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 8</a>)</span> How increases in extraction rate can maintain production levels.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Lake Size Distribution.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch15" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 15</a>)</span> Analogous to explaining reservoir size distribution, uses similar arguments to derive the distribution of freshwater lake sizes. This provides a good feel for how often super-giant reservoirs and Great Lakes occur (by comparison).</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">The Quandary of Infinite Reserves due to Fat-Tail Statistics.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch9" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 9</a>)</span> Demonstrated that even infinite reserves can lead to limited resource production in the face of maximum extraction constraints.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Oil Recovery Factor Model.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch6" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 6</a>)</span> A model of oil recovery which takes into account reservoir size.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Network Transit Time Statistics.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch21" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 21</a>)</span> Dispersion in TCP/IP transport rates leads to the measured fat-tails in round-trip time statistics on loaded networks.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Particle and Crystal Growth Statistics.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch20" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 20</a>)</span> Detailed model of ice crystal size distribution in high-altitude cirrus clouds.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Rainfall Amount Dispersion.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch15" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 15</a>)</span> Explanation of rainfall variation based on dispersion in rate of cloud build-up along with dispersion in critical size.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Earthquake Magnitude Distribution.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch13" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 13</a>)</span> Distribution of earthquake magnitudes based on dispersion of energy buildup and critical threshold.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">IceBox Earth Setpoint Calculation.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch17" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 17</a>)</span> Simple model for determining the earth’s setpoint temperature extremes — current and low-CO2 icebox earth.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Global Temperature Multiple Linear Regression Model</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch17" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 17</a>)</span> The global surface temperature records show variability that is largely due to the GHG rise along with fluctuating changes due to ocean dipoles such as ENSO (via the SOI measure and also AAM) and sporadic volcanic eruptions impacting the atmospheric aerosol concentrations.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">GPS Acquisition Time Analysis</span>.<br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch21" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 21</a>)</span> Engineering analysis of GPS cold-start acquisition times, using Maximum Entropy in the EMI clutter statistics.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">1/f Noise</span> <span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Model</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch21" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 21</a>)</span> Deriving a random noise spectrum from maximum entropy statistics.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Stochastic Aquatic Waves</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch12" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Ch 12</a>)</span> Maximum Entropy Analysis of wave height distribution of surface gravity waves.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">The Stochastic Model of Popcorn Popping.</span><br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.app3" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Appx C</a>)</span> The novel explanation of why popcorn popping follows the same bell-shaped curve of the Hubbert Peak in oil production. Can use this to model epidemics, etc.</li>
<li style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Dispersion Analysis of Human Transportation Statistics</span>.<br style="box-sizing: border-box;" /><span style="border: 0px; box-sizing: border-box; font-family: inherit; font-style: inherit; font-weight: 700; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">(<a href="https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.app3" style="border: 0px; box-sizing: border-box; color: #1abc9c; font-family: inherit; font-style: inherit; font-weight: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration-line: none; transition: all 0.2s ease-in-out 0s; vertical-align: baseline;">Appx C</a>)</span> Alternate take on the empirical distribution of travel times between geographical points. This uses a maximum entropy approximation to the mean speed and mean distance across all the data points.</li>
</ol>
@whuthttp://www.blogger.com/profile/18297101284358849575noreply@blogger.com1tag:blogger.com,1999:blog-7002040.post-84903442812108913002011-01-17T07:01:00.000-08:002019-11-16T15:56:15.280-08:00The Oil ConunDRUMI synthesized the last several years of blog content and placed it into a book tentatively called The Oil ConunDRUM (<a href="https://geoenergymath.com/2018/11/02/mathematical-geoenergy-update/" target="_blank"><span style="font-size: large;">ultimately titled Mathematical Geoenergy published by Wiley/AGU in 2019</span></a>). This document turned into a treatise of topics relating to the role of disorder and entropy in the applied sciences. Volume 1 is mainly on the analysis of the decline in global oil production, while Volume 2 applies related analysis to the study of renewable energy sources and to the role entropy plays in our environment and everyday life.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<img border="0" height="157" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgy1C8Yz_lMpLQr660CjxIgxu8oH4CQ9NrCKy0ukPlWpqBzj6FtRWXlMDugyU59_lvE4FreNO-L_jLXYO4ok7DFaKjTzYOCDRwYdlrleBEvungFkg-sGpq6hwrQjO6xuvlMjPqV/s320/TheOilConundrum.GIF" width="320" /></div>
<br />
<br />
<br />
<br />
This is a list of the novel areas of research, listed in what I consider a ranked order of originality:<br />
<ol>
<li><span style="font-family: "georgia"; font-weight: bold;">The Oil Shock Model.</span><br />
A data flow model of oil extraction and production which allows for perturbations.<br />
<br />
</li>
<li><span style="font-weight: bold;">The Dispersive Discovery Model.</span><br />
A probabilistic model of resource discovery which accounts for technological advancement and a finite search volume.<br />
<br />
</li>
<li><span style="font-weight: bold;">The Reservoir Size Dispersive Aggregation Model.</span><br />
A first-principles model that explains and describes the size distribution of oil reservoirs and fields around the world.<br />
<br />
</li>
<li><span style="font-weight: bold;">Solving the Reserve Growth "enigma".</span><br />
An application of dispersive discovery on a localized level which models the hyperbolic reserve growth characteristics observed.<br />
<br />
</li>
<li><span style="font-weight: bold;">Shocklets.</span><br />
A kernel approach to characterizing production from individual fields.<br />
<br />
</li>
<li><span style="font-weight: bold;">Reserve Growth, Creaming Curve, and Size Distribution Linearization.</span><br />
An obvious linearization of this family of curves, related to HL but more useful since it stems from first principles.<br />
<br />
</li>
<li><span style="font-weight: bold;">The Hubbert Peak Logistic Curve explained.</span><br />
The Logistic curve is trivially explained by dispersive discovery with exponential technology advancement.<br />
<br />
</li>
<li><span style="font-weight: bold;">Laplace Transform Analysis of Dispersive Discovery.</span><br />
Dispersion curves are solved by looking up the Laplace transform of the spatial uncertainty profile.<br />
<br />
</li>
<li><span style="font-weight: bold;">The Maximum Entropy Principle and the Entropic Dispersion Framework. </span><br />
The generalized math framework applied to many models of disorder, natural or man-made. Explains the origin of the entroplet.<br />
<br />
</li>
<li><span style="font-weight: bold;">Gompertz Decline Model.</span><br />
Exponentially increasing extraction rates lead to steep production decline.<br />
<br />
</li>
<li><span style="font-weight: bold;">Anomalous Behavior in Dispersive Transport explained.</span><br />
Photovoltaic (PV) material made from disordered and amorphous semiconductor material shows poor photoresponse characteristics. Solution to simple entropic dispersion relations or the more general Fokker-Planck leads to good agreement with the data over orders of magnitude in current and response times.<br />
<br />
</li>
<li><span style="font-weight: bold;">Framework for understanding Breakthrough Curves and Solute Transport in Porous Materials.</span><br />
The same disordered Fokker-Planck construction explains the dispersive transport of solute in groundwater or liquids flowing in porous materials.<br />
<br />
</li>
<li><span style="font-weight: bold;">The Dynamics of Atmospheric CO2 buildup and extrapolation.</span><br />
Used the oil shock model to convolve a fat-tailed CO2 residence time impulse response function with a fossil-fuel stimulus. This shows the long latency of CO2 buildup very straightforwardly.<br />
<br />
</li>
<li><span style="font-weight: bold;">Terrain Slope Distribution Analysis.</span><br />
Explanation and derivation of the topographic slope distribution across the USA. This uses mean energy and maximum entropy principle.<br />
<br />
</li>
<li><span style="font-weight: bold;">Reliability Analysis and understanding the "bathtub curve".</span><br />
Using a dispersion in failure rates to generate the characteristic bathtub curves of failure occurrences in parts and components.<br />
<br />
</li>
<li><span style="font-weight: bold;">Wind Energy Analysis</span>.<br />
Universality of wind energy probability distribution by applying maximum entropy to the mean energy observed. Data from Canada and Germany.<br />
<br />
</li>
<li><span style="font-weight: bold;">Dispersion Analysis of Human Transportation Statistics.</span><br />
Alternate take on the empirical distribution of travel times between geographical points. This uses a maximum entropy approximation to the mean speed and mean distance across all the data points.<br />
<br />
</li>
<li><span style="font-weight: bold;">The Overshoot Point (TOP) and the Oil Production Plateau.</span><br />
How increases in extraction rate can maintain production levels.<br />
<br />
</li>
<li><span style="font-weight: bold;">Analysis of Relative Species Abundance.</span><br />
Dispersive evolution of species according to Maximum Entropy Principle leads to characteristic distribution of species abundance.<br />
<br />
</li>
<li><span style="font-weight: bold;">Lake Size Distribution.</span><br />
Analogous to explaining reservoir size distribution, uses similar arguments to derive the distribution of freshwater lake sizes. This provides a good feel for how often super-giant reservoirs and Great Lakes occur (by comparison).<br />
<br />
</li>
<li><span style="font-weight: bold;">Labor Productivity Learning Curve Model.</span><br />
A simple relative productivity model based on uncertainty of a diminishing return learning curve gradient over a large labor pool (in this case Japan).<br />
<br />
</li>
<li><span style="font-weight: bold;">Project Scheduling and Bottlenecking.</span><br />
Explanation of how uncertainty in meeting project deadlines or task durations caused by a spread of productivity rates leads to probabilistic schedule slips with fat-tails. Answers why projects don't complete on time.<br />
<br />
</li>
<li><span style="font-weight: bold;">The Stochastic Model of Popcorn Popping.</span><br />
The novel explanation of why popcorn popping follows the same bell-shaped curve of the Hubbert Peak in oil production.<br />
<br />
</li>
<li><span style="font-weight: bold;">The Quandary of Infinite Reserves due to Fat-Tail Statistics.</span><br />
Demonstrated that even infinite reserves can lead to limited resource production in the face of maximum extraction constraints.<br />
<br />
</li>
<li><span style="font-weight: bold;">Oil Recovery Factor Model.</span><br />
A model of oil recovery which takes into account reservoir size.<br />
<br />
</li>
<li><span style="font-weight: bold;">Network Transit Time Statistics.</span><br />
Dispersion in TCP/IP transport rates leads to the measured fat-tails in round-trip time statistics on loaded networks.<br />
<br />
</li>
<li><span style="font-weight: bold;">Language Evolution Model.</span><br />
Model for relative language adoption which depends on critical mass of acceptance.<br />
<br />
</li>
<li><span style="font-weight: bold;">Web Link Growth Model.</span><br />
Model for relative popularity of web sites which follows a diminishing return learning curve model.<br />
<br />
</li>
<li><span style="font-weight: bold;">Scientific Citation Growth Model.</span><br />
Same model used for explaining scientific citation indexing growth.<br />
<br />
</li>
<li><span style="font-weight: bold;">Particle and Crystal Growth Statistics.</span><br />
Detailed model of ice crystal size distribution in high-altitude cirrus clouds.<br />
<br />
</li>
<li><span style="font-weight: bold;">Rainfall Amount Dispersion.</span><br />
Explanation of rainfall variation based on dispersion in rate of cloud build-up along with dispersion in critical size.<br />
<br />
</li>
<li><span style="font-weight: bold;">Earthquake Magnitude Distribution.</span><br />
Distribution of earthquake magnitudes based on dispersion of energy buildup and critical threshold.<br />
<br />
</li>
<li><span style="font-weight: bold;">Income Disparity Distribution.</span><br />
Relative income distribution which includes an inflection point due to compounding interest growth on investments.<br />
<br />
</li>
<li><span style="font-weight: bold;">Insurance Payout Analysis, and Hyperbolic Discounting.</span><br />
Fat-tail analysis of risk and estimation.<br />
<br />
</li>
<li><span style="font-weight: bold;">Thermal Entropic Dispersion Analysis</span>.<br />
Solving the Fokker-Planck equation or Fourier's Law for thermal diffusion in a disordered environment. A subtle effect.<br />
<br />
</li>
<li><span style="font-weight: bold;">GPS Acquisition Time Analysis</span>.<br />
Engineering analysis of GPS cold-start acquisition times.<br />
</li>
</ol>
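Several of the models in this list (terrain slopes, wind energy, rainfall, reliability) share one superstatistical recipe: a Maximum Entropy exponential conditioned on a mean energy that is itself exponentially uncertain, which fattens the tail of the marginal distribution. A minimal Monte Carlo sketch of that recipe (toy parameters, not values from the book):

```ruby
# Superstatistics sketch: draw a local mean energy E from an exponential
# prior, then draw the observed value S from an exponential with mean E.
# The marginal of S is fatter-tailed than any single exponential.
def exp_sample(mean, rng)
  -mean * Math.log(1.0 - rng.rand)   # inverse-CDF sampling
end

rng = Random.new(42)
prior_mean = 1.0                     # toy value for the energy prior
samples = Array.new(100_000) do
  e = exp_sample(prior_mean, rng)    # local mean energy
  exp_sample(e, rng)                 # value drawn at that energy
end

frac_over_5 = samples.count { |s| s > 5.0 }.to_f / samples.size
pure_exp    = Math.exp(-5.0)         # tail of a plain Exp(mean 1) at 5
puts format("P(S > 5): superstatistical %.4f vs single exponential %.4f",
            frac_over_5, pure_exp)
```

Carrying the marginalization out analytically gives the Bessel K0 form that reappears in the terrain-slope post.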
You can refer back to details in the blog, but <a href="https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B-ycoDmNCe6wODQ5Mjc4ZGUtZTU5ZC00NjY1LWIwNTItMTY0YTJjYTg1Zjgz&hl=en&authkey=CIKqxfsE">The Oil ConunDRUM</a> cleans everything up. It features quality mathematical markup, references to scholarly work, a full subject index, hypertext table of contents, several hundred figures with captions, footnotes and sidebars with editorial commentary, embedded historical documents, source code appendices, and tables of nomenclature and glossary.<br />
<br />
<hr />
<br />
<span style="font-weight: bold;">EDIT (1/21/11)</span>: Here is a critique from TOD. I can only assume the commenter doesn't understand the concept of convolution or doesn't realize that such a useful technique exists:<br />
<blockquote>
<span style="font-size: 78%;">Your methods are fundamentally flawed you cannot aggregate across producing basins like you do. Its simply wrong.</span><br />
<span style="font-size: 78%;">To add multiple producing basins together you must adjust the time variable such that all of them start production at the same time or if they have peaked all the peaks are aligned.</span><br />
<span style="font-size: 78%;">The time that a basin was discovered and put into production is an irrelevant random variable and has no influence on the ultimate URR.<br />
If you don't correctly normalize the time variable across basins your work is simply garbage. There is no coupling between basins and no reason to average them based on real time. Its junk math. No simple function exists in real time to describe the aggregate production profile.</span><br />
<span style="font-size: 78%;">The US simply happened to have its larger basins developed about the same time in real time. Hubbert's original analysis worked simply because the error in the normalized time and real time was small.</span></blockquote>
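For what it's worth, the linearity of convolution is exactly why aggregation in real time works: summing the basins' discovery inputs and then convolving with a shared extraction response is identical to convolving each basin on its own timeline and summing the results. A toy numeric check (hypothetical discovery series and response, not real basin data):

```ruby
# Discrete convolution of an input series with an impulse response.
def convolve(x, h)
  (0...(x.size + h.size - 1)).map do |k|
    (0..k).sum { |i| (x[i] || 0.0) * (h[k - i] || 0.0) }
  end
end

h = (0..30).map { |n| 0.1 * 0.9**n }     # shared extraction response
basin_a = [5.0, 8.0, 3.0, 0.0, 0.0]      # toy discoveries, offset in time
basin_b = [0.0, 0.0, 6.0, 9.0, 2.0]

total = basin_a.zip(basin_b).map { |a, b| a + b }
sum_then_convolve = convolve(total, h)
convolve_then_sum = convolve(basin_a, h).zip(convolve(basin_b, h)).map { |a, b| a + b }

diff = sum_then_convolve.zip(convolve_then_sum).map { |a, b| (a - b).abs }.max
puts "max |difference| = #{diff}"
```

The two orderings agree to floating-point precision, so no normalization of basin start times is required.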
<br />
One of the mysteries of science and mathematics is the role of entropy. The mathematician Gian-Carlo Rota from MIT had this to say just a few years ago:<br />
<a href="http://img208.imageshack.us/img208/7238/rota.gif"><img alt="" border="0" src="https://img208.imageshack.us/img208/7238/rota.gif" style="cursor: pointer; display: block; height: 358px; margin: 0px auto 10px; text-align: center; width: 566px;"></a>The take on this is that as Rota says about the Maximum Entropy Principle <span style="font-style: italic;">"Among all mathematical recipes, this is to the best of my knowledge the one that has found the most striking applications in engineering practice"</span>, yet it retains this sense of mystery in that no one can really prove it -- entropy just <span style="font-style: italic;">IS</span> and by its existence, you have to deal with it the best you can.<br />
<br />
<b>EDIT (1/31/11</b>): In the book, the last prediction of global crude production I made was a while ago. Here is an update:<br />
<br />
<img src="https://img189.imageshack.us/img189/3640/ddos2010.gif"><br />
The chart above is the best guess model from 2007 using the combined Dispersive Discovery+Oil Shock Model for crude. Apart from a conversion from barrels/year to barrels/day, this is the same model as I used in a <a href="http://www.theoildrum.com/node/3287" rel="nofollow">2007 TOD post</a> and documented in <a href="http://theoilconundrum.com/" rel="nofollow">The Oil ConunDRUM</a>. The recent data from <a href="http://tonto.eia.doe.gov/cfapps/ipdbproject/iedindex3.cfm?tid=5&pid=57&aid=1&cid=regions&syid=1980&eyid=2010&unit=TBPD">EIA</a> is shown as the green dots back to 1980. I always find it interesting to take the 10,000 foot view. What may look like a plateau up close, may actually be part of the curve at a distance.<br />
<br />
<span style="font-weight: bold;">EDIT (2/22/2011): </span>An additional USA Shock Model not included in the book. I included Alaska in this model.<br />
<br />
Discovery data transcribed from this figure; the discoveries seem to end in 1985, so I extended the data with a dispersive discovery model. I added in Alaska North Slope at 22 billion barrels in 1968 and a small 300 million barrel starter discovery in 1858.<span style="font-size: 85%;"><br />
<br />
</span><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwvmbtEHhHb89SjYwcslZ9hTbdd5EMwiRWQsGFCYtW3p7Wl5wQddKA4MMe3FyP7u8dT4rZ4UJ0RLqF-zKxXdPbNRvak_cPoy5_hkzssYBr4u4CArAjIGxG8AzRAExsFjQmrahn/s1600/saupload_e11.jpg"><img alt="" border="0" id="BLOGGER_PHOTO_ID_5576793052508230306" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwvmbtEHhHb89SjYwcslZ9hTbdd5EMwiRWQsGFCYtW3p7Wl5wQddKA4MMe3FyP7u8dT4rZ4UJ0RLqF-zKxXdPbNRvak_cPoy5_hkzssYBr4u4CArAjIGxG8AzRAExsFjQmrahn/s320/saupload_e11.jpg" style="cursor: pointer; display: block; height: 198px; margin: 0px auto 10px; text-align: center; width: 320px;" /></a>The blue line in the Dispersive Discovery Model is this equation, which is essentially a scaled version of the world model:<br />
<span style="font-size: 85%;">DD(t)=(1-exp(-URR/(B*((t-t')^6))))*B*((t-t')^6), URR=240,000 million barrels, B=2E-7, t'=1835.<br />
</span><br />
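The DD(t) expression can be evaluated directly; a minimal Ruby sketch with the parameters quoted above:

```ruby
# Dispersive Discovery cumulative, scaled US version:
#   DD(t) = (1 - exp(-URR / (B*(t - T0)**6))) * B*(t - T0)**6
# with URR = 240,000 million barrels, B = 2e-7, t' = 1835 (T0 here).
URR = 240_000.0   # million barrels
B   = 2e-7
T0  = 1835.0

def dd(t)
  growth = B * (t - T0)**6   # accelerating cumulative search volume
  (1.0 - Math.exp(-URR / growth)) * growth
end

[1900, 1950, 2000].each { |t| puts "#{t}: #{dd(t).round} million barrels" }
```

Early on the cumulative tracks the t^6 search-growth term; at long times it saturates at the URR.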
<a href="http://img338.imageshack.us/img338/9950/usaprod.gif"><img alt="" border="0" src="https://img338.imageshack.us/img338/9950/usaprod.gif" style="display: block; height: 384px; margin: 0px auto 10px; text-align: center; width: 889px;"></a>I did not include any perturbation shocks to keep it simple. Apart from the data, the following is the entirety of the Ruby code; the<span style="font-size: 85%;"><span style="font-family: "courier new";"> discovery.txt</span></span> file is yearly discovery data, which is from the first graph. The second graph shows <span style="font-size: 85%;"><span style="font-family: "courier new";">reserve.out</span></span> and <span style="font-size: 85%;"><span style="font-family: "courier new";">production.out</span></span>.<br />
<br />
<blockquote>
<span style="font-size: 78%;"><span style="font-family: "courier new";">cat discovery.txt | ruby exp.rb 0.07 | ruby exp.rb 0.07 | ruby exp.rb 0.07 > reserve.out<br />
cat reserve.out | ruby exp.rb 0.08 >production.out<br />
</span></span><br />
<span style="font-family: "courier new"; font-size: 78%;">$ cat exp.rb<br />
</span><br />
<pre><span style="font-family: "courier new"; font-size: 78%;"># First-order stage: each year's input adds to a reservoir that is
# drained at the given rate; the output is the amount drained.
def exp(a, rate)
  temp = 0.0
  a.each do |line|
    output = (line.to_f + temp) * rate
    temp = (line.to_f + temp) * (1.0 - rate)
    puts output
  end
end
exp(STDIN.readlines, ARGV[0].to_f)
</span></pre>
</blockquote>
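The shell pipeline chains three identical first-order stages (the latency phases between discovery and producible reserve) and a final extraction stage. The same cascade can be run in-process on a toy discovery impulse, using the rates from the commands above; this is a sketch, not the model run from the book:

```ruby
# One first-order stage of the shock model: input accumulates in a
# reservoir that is drained at `rate` per year.
def stage(series, rate)
  temp = 0.0
  series.map do |x|
    out  = (x + temp) * rate
    temp = (x + temp) * (1.0 - rate)
    out
  end
end

# Toy impulse: 100 units discovered in year 0, nothing afterwards.
discovery  = [100.0] + [0.0] * 59
reserve    = [0.07, 0.07, 0.07].inject(discovery) { |s, r| stage(s, r) }
production = stage(reserve, 0.08)

peak = production.index(production.max)
puts "production peaks #{peak} years after the discovery impulse"
```

Each stage is the discrete equivalent of convolving with an exponential; cascading them is what turns a spike of discovery into a broad, delayed production peak.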
@whuthttp://www.blogger.com/profile/18297101284358849575noreply@blogger.com25tag:blogger.com,1999:blog-7002040.post-77814539258515731502011-01-08T06:00:00.000-08:002011-01-08T03:08:07.370-08:00Terrain SlopesEntropy makes its mark everywhere. Take the case of modeling topography. How can we model and thus characterize disorder in the earth's terrain? Can we actually understand the extreme variability we see?<br /><br />If we consider that immense forces cause upheaval in the crust, then we can reason that the energy can also vary all over the map, so to speak. To first order, the process that transfers potential energy into kinetic energy has to contain elements of randomness. To the huge internal forces within the earth, generating relief textures equates to a kind of Brownian motion in relative terms -- over geological time, the terrain amounts to nothing more than inconsequential particles to the earth's powerful internal engine.<br /><br />In a related sense the process also resembles the pressure distribution in the earth's atmosphere, a classic application of maximum entropy that we can re-apply in the case of modeling terrain slope distributions.<br /><br /><span style="font-weight: bold;">Premise.</span> We take the terrain slope <span style="font-weight: bold; font-style: italic;">S</span> as our random variable (defined as rise/run). 
The higher<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEsAMOs1L1Q40sNGOTb_17YnMNXlYb9E7yVLPY8GK_tHb0wQGsh0Ot4RL9hGBGgt81U5Hv63t3YTZXa8do0VzHmSQ6ZyemTqQyaVvQhAqX1GLV8PKCjWGgz3kdylrpXk_wBa5k/s1600/rise-run.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 251px; height: 98px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEsAMOs1L1Q40sNGOTb_17YnMNXlYb9E7yVLPY8GK_tHb0wQGsh0Ot4RL9hGBGgt81U5Hv63t3YTZXa8do0VzHmSQ6ZyemTqQyaVvQhAqX1GLV8PKCjWGgz3kdylrpXk_wBa5k/s320/rise-run.gif" alt="" id="BLOGGER_PHOTO_ID_5559223670498981554" border="0" /></a> the slope, the more energetic the terrain. Applying Maximum Entropy to a section of terrain, we can approximate the local variations as a MaxEnt conditional probability density function:<br /><blockquote>p(<span style="font-weight: bold; font-style: italic;">S</span>|<span style="font-weight: bold; font-style: italic;">E</span>) = (1/<span style="font-weight: bold; font-style: italic;">cE</span>) * exp(-<span style="font-weight: bold; font-style: italic;">S</span>/<span style="font-weight: bold; font-style: italic;">cE</span>)</blockquote>where <span style="font-weight: bold; font-style: italic;">E</span> is the local mean energy and <span style="font-weight: bold; font-style: italic;">c</span> is a constant of proportionality. 
But we also assume that the mean <span style="font-weight: bold; font-style: italic;">E</span> varies over a larger area that we are interested in, as in the superstatistical sense of applying a prior distribution.<br /><blockquote>p(<span style="font-weight: bold; font-style: italic;">E</span>) = <span style="font-weight: bold; font-style: italic;">k</span>*exp(-<span style="font-weight: bold; font-style: italic;">k</span>*<span style="font-weight: bold; font-style: italic;">E</span>)<br /></blockquote>where <span style="font-weight: bold; font-style: italic;">k</span> is another MaxEnt measure of our uncertainty in the energy spread over a larger area.<br /><br />The final probability is an integral over the marginal distribution consisting of the conditional multiplied by the prior:<br /><blockquote>p(<span style="font-weight: bold; font-style: italic;">S</span>) = integral p(<span style="font-weight: bold; font-style: italic;">S</span>|<span style="font-weight: bold; font-style: italic;">E</span>) *p(<span style="font-weight: bold; font-style: italic;">E</span>) d<span style="font-weight: bold; font-style: italic;">E</span> from <span style="font-weight: bold; font-style: italic;">E</span>=0 to infinity</blockquote>This integrates as a BesselK function of the zero order, <span style="font-weight: bold;">K</span><sub style="font-weight: bold;">0</sub>, available on any spreadsheet program (see <a href="http://mobjectivist.blogspot.com/2010/10/stock-market-as-econophysics-toy.html">here</a> for a similar derivation in an unrelated field).<br /><blockquote>p(<span style="font-weight: bold; font-style: italic;">S</span>) = 2/<span style="font-weight: bold; font-style: italic;">S</span><sub style="font-weight: bold; font-style: italic;">0</sub> * <span style="font-weight: bold;">K</span><sub style="font-weight: bold;">0</sub>(2*sqrt(<span style="font-weight: bold; font-style: italic;">S</span>/<span style="font-weight: bold; font-style: italic;">S</span><sub 
style="font-weight: bold; font-style: italic;">0</sub>))</blockquote>The average value of the terrain slope for this distribution is simply the value <span style="font-weight: bold; font-style: italic;">S</span><sub style="font-weight: bold; font-style: italic;">0</sub>.<br /><br />Now we can try it on a large set of data. I downloaded all the DEM data for the 1 degree quadrangles (aka blocks/tiles) in the USA from the USGS web site.<a href="http://dds.cr.usgs.gov/pub/data/DEM/250/"> http://dds.cr.usgs.gov/pub/data/DEM/250/</a><br /><br />This consists of post data at approximately 92 meter intervals (i.e. a fixed value of <span style="font-style: italic;">run</span>) at 1:<em style="font-style: italic;"></em>250,000 <span style="font-style: italic;"></span>scale for the entire USA. I concentrated on the lower 48 and some spillover into Canada. I used <span style="font-weight: bold;">curl</span> to iteratively download each of the nearly 1000 quadrangle files on the server.<br /><br />I then wrote a program to read the data from individual DEM files and calculate the slopes between adjacent posts and came up with an average slope (rise/run) of 0.039, approximately a 4% grade or 2.2 degrees pitch. I take the absolute values of all slopes so that the average is not zero.<br /><br />The cumulative plot of terrain slopes for all 5 billion calculated slope points appears on the following chart (<span style="font-weight: bold;">Figure 1</span>). 
I also added the cumulative probability distribution of the BesselK model with the calculated average slope as the single adjustable parameter.<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhERcShglk2eQPnhrNU0FuCakUsNf7o_KEzvuCLAxfSqUae8Xl0yJBChcI4go8RsBg9S74Dg-dbbHhdKaX0AZTqBGY-uTXPVnEaoHervNYugyvXgFfiE0pJXOXExgjWf3QyEl79/s1600/dem_cumulative_k0.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 291px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhERcShglk2eQPnhrNU0FuCakUsNf7o_KEzvuCLAxfSqUae8Xl0yJBChcI4go8RsBg9S74Dg-dbbHhdKaX0AZTqBGY-uTXPVnEaoHervNYugyvXgFfiE0pJXOXExgjWf3QyEl79/s400/dem_cumulative_k0.gif" alt="" id="BLOGGER_PHOTO_ID_5558912484015346514" border="0" /></a><span style="font-weight: bold;">Figure 1</span>: CDF of USA DEM data and the BesselK model with a small variation in <span style="font-weight: bold; font-style: italic;">S</span><sub style="font-weight: bold; font-style: italic;">0</sub> (<span style="font-size:85%;">+/-</span>4% about the average 0.037 rise/run) demonstrating sensitivity to the fit.<br /><br /></div>This kind of agreement does not just happen because of coincidence. It occurs because random forces contribute to maximizing the entropy of the topography. 
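The closed form can also be sanity-checked numerically: the marginal integral evaluated by brute-force quadrature should agree with the (2/S0)*K0(2*sqrt(S/S0)) expression. A sketch, computing K0 from its integral representation K0(x) = integral of exp(-x*cosh(t)) dt from t = 0 to infinity (the step sizes and cutoffs here are arbitrary choices, not tuned values):

```ruby
# Check that integrating the conditional p(S|E) against the prior p(E)
# reproduces p(S) = (2/S0) * K0(2*sqrt(S/S0)), where S0 = c/k.

# K0 via its integral representation (simple rectangle rule).
def bessel_k0(x)
  dt = 1.0e-3
  0.0.step(20.0, dt).sum { |t| Math.exp(-x * Math.cosh(t)) * dt }
end

# Marginal: p(S) = integral (1/(c*E)) * exp(-S/(c*E)) * k*exp(-k*E) dE
def p_slope(s, c, k)
  de = 1.0e-3
  de.step(50.0, de).sum do |e|
    (1.0 / (c * e)) * Math.exp(-s / (c * e)) * k * Math.exp(-k * e) * de
  end
end

s0 = 0.039   # average slope; with c = s0 and k = 1, S0 = c/k = s0
s  = 0.05
closed_form = (2.0 / s0) * bessel_k0(2.0 * Math.sqrt(s / s0))
numerical   = p_slope(s, s0, 1.0)
# closed_form and numerical agree to within the quadrature error
```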
Enough variability exists for the terrain to reach an ergodic limit in filling the energy-constrained state space.<br /><br />As supporting evidence, it turns out that we can generate a distribution that maps well to the prior by estimating the average slope from the conditional PDF of each of the 922 quadrangle blocks and then plotting this aggregate data set as another histogram (see <span style="font-weight: bold;">Figure 2</span>).<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCeiRWoNLbgUUZHt9YKN05HzH1auBgThN1khFgWXTzfLoq1ZEaCJa-tVsXCwTbLp0-DrGs9srycZTjqITrytppDxlMEnwGlglz8pozN9Gy4Kxp-KOzW1t-icYg6io6p45-IExp/s1600/dem_aggreg_k0.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 305px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCeiRWoNLbgUUZHt9YKN05HzH1auBgThN1khFgWXTzfLoq1ZEaCJa-tVsXCwTbLp0-DrGs9srycZTjqITrytppDxlMEnwGlglz8pozN9Gy4Kxp-KOzW1t-icYg6io6p45-IExp/s400/dem_aggreg_k0.gif" alt="" id="BLOGGER_PHOTO_ID_5559227284989631618" border="0" /></a><span style="font-weight: bold;">Figure 2</span>: Generation of the prior distribution by taking the average slope of each of the nearly 1000 quadrangles . The best fit generates a value of <span style="font-weight: bold; font-style: italic;">S</span><sub style="font-weight: bold; font-style: italic;">0 </sub>(1/27=0.037) close to that used in Figure 1.<br /></div><br />Practically speaking, we see the variability in slopes expressed at the two different levels. The entire USA at the integrated (BesselK model) level and the aggregated regions at the localized (exponential prior) level. 
These remain consistent as they agree on the single adjustable parameter <span style="font-weight: bold; font-style: italic;">S</span><sub style="font-weight: bold; font-style: italic;">0</sub>.<br /><br />The modeled distribution has many practical uses for analysis, including transportation studies and planning. Obviously, vehicles traveling up slopes use a significant amount of energy, and you might like to have a model to base an analysis on without having to rely on the data by itself. (As a caveat, I did not include any of the spatial correlations that must also exist and might prove useful as well.)<br /><br />Perusing the recent research, I couldn't find anyone who had previously discovered this simple model. Not that they haven't tried; coming up with a good slope distribution model seems to amount to a mini Holy Grail among geophysicists. I went as far as dropping $10 to download the first paper, which turned out to be a bust.<br /><ol><li><span style="font-weight: bold;">Probabilistic description of topographic slope and aspect.</span><br />G. Vico and A. Porporato, <span style="font-style: italic;">Journal of Geophysical Research</span>, Vol. 114, F01011, doi:10.1029/2008JF001038, 2009<br /><a href="http://www.agu.org/journals/jf/jf0901/2008JF001038/2008JF001038.pdf">http://www.agu.org/journals/jf/jf0901/2008JF001038/2008JF001038.pdf</a><br /></li><li><span style="font-weight: bold;">Multifractal earth topography.</span><br />J.-S. Gagnon, S. Lovejoy, and D. Schertzer, Nonlin.
<span style="font-style: italic;">Processes Geophys.</span>, 13, 541–570, 2006<br /><a href="http://hal.archives-ouvertes.fr/docs/00/33/10/93/PDF/npg-13-541-2006.pdf">http://hal.archives-ouvertes.fr/docs/00/33/10/93/PDF/npg-13-541-2006.pdf</a><br /></li><li><span style="font-weight: bold;">PROPAGATION OF DEM UNCERTAINTY: AN INTERVAL ARITHMETIC APPROACH.</span><br />G Gonçalves, <span style="font-style: italic;">XXII International Cartographic Conference</span>, 2005<br /><a href="http://www.cartesia.org/geodoc/icc2005/pdf/oral/TEMA7/Session%202/GIL%20GON%C7ALVES.pdf">http://www.cartesia.org/geodoc/icc2005/pdf/oral/TEMA7/Session%202/GIL%20GON%C7ALVES.pdf</a></li><li><span style="font-weight: bold;">SAR interferometry and statistical topography</span>.<br />Guarnieri, A.M. <span style="font-style: italic;">IEEE Transactions on Geoscience and Remote Sensing</span>, Dec 2002<br /><a href="http://home.dei.polimi.it/monti/papers/montiguarnieri02.pdf">http://home.dei.polimi.it/monti/papers/montiguarnieri02.pdf</a></li></ol>If someone wants to generate Monte Carlo statistics for the BesselK model without having to do the probability inversion, the algorithm turns out surprisingly simple. Draw two independent random samples from a uniform [0.0 .. 1.0] interval, apply the natural log to each, multiply them together, and then multiply by the<span style="font-weight: bold; font-style: italic;"> S</span><sub style="font-weight: bold; font-style: italic;">0</sub> scaling constant. 
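A minimal Ruby sketch of this two-draw sampling recipe (using -log(1-U) so each draw is a positive unit-rate exponential; the seed and sample count are arbitrary):

```ruby
# Draw from the BesselK slope model: the product of two independent
# unit-rate exponential draws, scaled by the average slope S0.
def slope_sample(s0, rng)
  e1 = -Math.log(1.0 - rng.rand)   # unit-rate exponential draw
  e2 = -Math.log(1.0 - rng.rand)
  s0 * e1 * e2
end

rng = Random.new(12345)
s0  = 0.039                        # average slope from the DEM data
samples = Array.new(200_000) { slope_sample(s0, rng) }
mean = samples.sum / samples.size  # converges to s0
```

Since each exponential draw has mean 1, the product has mean 1 as well, and the sample mean converges to S0 exactly as the derivation requires.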
That will give the following cumulative if done 5 billion times, which is the same size as my USA DEM data sample.<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcdN1IHDbuU_rZcxzs-aAxaL1NCn9OPJnHTl5M0kyk7ZOAHPf9M46Q6a2TNqxP4gqNv-IWLXYymdLHPgTkZPxky4C-f23Zq4WThJvfsFZR-wi-QuyghOvWG4HDiw-CmSGVi5a9/s1600/bessel-verif.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 301px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcdN1IHDbuU_rZcxzs-aAxaL1NCn9OPJnHTl5M0kyk7ZOAHPf9M46Q6a2TNqxP4gqNv-IWLXYymdLHPgTkZPxky4C-f23Zq4WThJvfsFZR-wi-QuyghOvWG4HDiw-CmSGVi5a9/s400/bessel-verif.gif" alt="" id="BLOGGER_PHOTO_ID_5559228254969257346" border="0" /></a><span style="font-weight: bold;">Figure 3</span>: Generation of the BesselK model via Monte Carlo.<br /></div><br />The only statistical noise is at the 1e-9 level, same as in the DEM data.<br /><br />Examples of some random-walk realizations drawing from a two-level model follow. 
The flatter regions occur more often, reflecting the regional data.<br /><div style="text-align: left;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQgvWZDLfy8Z37vauPlYpcb6bXvD0ukH_Rvzh1eXlirgZVjg8dPWzRok9w-IUqxHYy4Mw7Ev77p19IWKcKuTOH4Eabw8ZxO_SOUR8f4YslcHCIH64w7nLJKTRpow8cX4ga9VLT/s1600/random_terrain-2.gif"><img style="margin: 0px auto 10px; display: block; text-align: left; cursor: pointer; width: 320px; height: 195px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQgvWZDLfy8Z37vauPlYpcb6bXvD0ukH_Rvzh1eXlirgZVjg8dPWzRok9w-IUqxHYy4Mw7Ev77p19IWKcKuTOH4Eabw8ZxO_SOUR8f4YslcHCIH64w7nLJKTRpow8cX4ga9VLT/s320/random_terrain-2.gif" alt="" id="BLOGGER_PHOTO_ID_5559768507150473746" border="0" /></a><br /></div><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgps8lI6s7RnATqyD3MXmD3tCQLgdYh4t7AYQ1pubJJ9nqEe4Et0x_YURFu1XvmaK-bmKckAjql90QIkBWnflYRGSXWsPqUwcKzJ07Xbs4LtRVBTU2qhlvTdiaWLhMqFjz9dq08/s1600/random_terrain-1.gif"><img style="margin: 0px auto 10px; display: block; text-align: right; cursor: pointer; width: 320px; height: 194px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgps8lI6s7RnATqyD3MXmD3tCQLgdYh4t7AYQ1pubJJ9nqEe4Et0x_YURFu1XvmaK-bmKckAjql90QIkBWnflYRGSXWsPqUwcKzJ07Xbs4LtRVBTU2qhlvTdiaWLhMqFjz9dq08/s320/random_terrain-1.gif" alt="" id="BLOGGER_PHOTO_ID_5559768506458703922" border="0" /></a>@whuthttp://www.blogger.com/profile/18297101284358849575noreply@blogger.com0tag:blogger.com,1999:blog-7002040.post-13769127852498015992010-10-23T10:17:00.000-07:002010-10-23T11:36:58.889-07:00Understanding Recovery FactorsA <a href="http://europe.theoildrum.com/node/7063#comments_top">recent TOD post on reserve growth by Rembrandt Kopelaar</a> motivated this analysis.<br /><br />The recovery factor indicates how much oil one can recover from the original
estimate. This has important implications for the ultimately recoverable resources, and increases in recovery rate have implications for reserve growth.<br /><br />First of all, we should acknowledge that we still have uncertainty as to the amount of original oil in place, so that the recovery factor carries two sources of uncertainty.<br /><br />The cumulative distribution of reservoir recovery factor typically looks like the following S-shaped curve. The fastest upslope indicates the region closest to the average recovery factor.<br /><br /><div style="text-align: center;"><a href="http://www.theoildrum.com/uploads/434/Jean_rec_factor.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 429px; height: 318px;" src="http://www.theoildrum.com/uploads/434/Jean_rec_factor.png" alt="" border="0" /></a><span style="font-weight: bold; font-style: italic;">Figure 1</span>: Recovery Factor cumulative distribution function (<a href="http://europe.theoildrum.com/node/7063">from</a>)<br /><br /></div>To understand the spread in the recovery factors, one has to first realize that all reservoirs have different characteristics. Some are more difficult to extract from and others have easier recovery factors.
One of the principal first-order effects has to do with the size of the reservoir: bigger reservoirs typically have better recovery factors, and as <a href="http://www.theoildrum.com/node/7060/734698">one reservoir engineer mentioned on TOD</a>:<blockquote><span style="font-style: italic;">"Reserve growth tends to happen in bigger fields because thats where you get the most bang for your buck"</span></blockquote>So if we make the simple assumption that cumulative recovery factors (RF) have <a href="http://mobjectivist.blogspot.com/2010/09/hydrogeology-for-dummies.html">Maximum Entropy uncertainty or dispersion</a> for a given <span style="font-weight: bold;">Size</span>:<br /><blockquote><span style="font-weight: bold; font-style: italic;">P</span>(<span style="font-weight: bold; font-style: italic;">RF</span>) = 1 - exp(-<span style="font-weight: bold; font-style: italic;">k</span>*<span style="font-weight: bold; font-style: italic;">RF</span>/<span style="font-weight: bold; font-style: italic;">Size</span>)</blockquote>this makes sense, as the recovery factor will extend for larger fields.<br /><br />Add to the mix that reservoir Sizes go approximately as (see <a href="http://mobjectivist.blogspot.com/2008/10/estimating-urr-from-dispersive-field.html">here</a>):<br /><blockquote><span style="font-weight: bold; font-style: italic;">Pr</span>(<span style="font-weight: bold; font-style: italic;">Size</span>) = 1/(1+<span style="font-weight: bold; font-style: italic;">Median</span>/<span style="font-weight: bold; font-style: italic;">Size</span>)</blockquote>Then a simple reduction in these sets of equations (with the key insight that <span style="font-weight: bold; font-style: italic;">RF </span>ranges between 0 and 1, i.e.
between 0 and 100%) gives us<br /><blockquote><span style="font-weight: bold; font-style: italic;">P</span>(<span style="font-weight: bold; font-style: italic;">RF</span>) = 1 - exp(-<span style="font-weight: bold; font-style: italic;">k</span>*<span style="font-weight: bold; font-style: italic;">RF</span>*<span style="font-weight: bold; font-style: italic;">RF</span>/(1-<span style="font-weight: bold; font-style: italic;">RF</span>)/<span style="font-weight: bold; font-style: italic;">Median</span>)</blockquote>the ratio <span style="font-weight: bold; font-style: italic;">Median</span>/<span style="font-weight: bold; font-style: italic;">k </span>indicates the fractional average recovery factor relative to the <span style="font-weight: bold; font-style: italic;">median</span> field size.<br /><br />A set of curves for various<span style="font-weight: bold; font-style: italic;"> k/Median</span> values below:<br /><br /><br /><div style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1BBnqRDyzKO9YgbEv4youtyATeQ08Tg1SYWBZn6udSLGo8aR8JT-mtq7CAW0aUna8SP-NKKTkObRWQJ8JhjVDwtcHIqbFlf7DDQ7EF5UCqJawpFgxJvX0uvbg5powek9lVpvf/s1600/recoveryFactorsTrend.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 215px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1BBnqRDyzKO9YgbEv4youtyATeQ08Tg1SYWBZn6udSLGo8aR8JT-mtq7CAW0aUna8SP-NKKTkObRWQJ8JhjVDwtcHIqbFlf7DDQ7EF5UCqJawpFgxJvX0uvbg5powek9lVpvf/s400/recoveryFactorsTrend.gif" alt="" id="BLOGGER_PHOTO_ID_5531292893587615506" border="0" /></a><span style="font-weight: bold; font-style: italic;">Figure 2:</span> Recovery Factor distribution functions assuming maximum entropy<br /><div style="text-align: left;"><br />Rembrandt provided some recovery factor curves originally supplied by Laherrere, and I fit these to the <span style="font-weight: bold; font-style: italic;">Median</span>/<span style="font-weight: 
bold; font-style: italic;">k</span> fractions below.<br /><br /></div><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCxIiuyaU4gq7m-yAYyZF1jN04lBO1f7iiF0XzNdqN2qoB3PiNV_KC0zKvy5Rc9zBF87Ie3xjTcFn3je_aIzgBJX21a8CX1ZKlr45g8dFfM-dPjHoxL2c1_uloBX4nx914B2X2/s1600/RecoveryFactorOil.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 299px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCxIiuyaU4gq7m-yAYyZF1jN04lBO1f7iiF0XzNdqN2qoB3PiNV_KC0zKvy5Rc9zBF87Ie3xjTcFn3je_aIzgBJX21a8CX1ZKlr45g8dFfM-dPjHoxL2c1_uloBX4nx914B2X2/s400/RecoveryFactorOil.gif" alt="" id="BLOGGER_PHOTO_ID_5531292480901917474" border="0" /></a><span style="font-weight: bold; font-style: italic;">Figure 3:</span> Recovery factor curves from <a href="http://europe.theoildrum.com/node/7063">Rembrandt's TOD post</a>,<br />alongside the recovery factor model described here.<br /></div><div style="text-align: left;"><br /></div>Laherrere also <a href="http://www.oilcrisis.com/laherrere/groningen.pdf">provided curves for natural gas</a>, where recovery factors turn out much higher.<br /><br /><div style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVaxjP8CiTWqGQUBY-UpQ2lwvVTsRnrGqP0sI07EdIoAo63ZHZ8RUf70XbESiRMsTmIIVWskSHNyeovyKg7AgZPNwjMjBEzYlB0NlbSGwxOlNZW1L6jqxEpVNEd7Ew1bUovEXn/s1600/ng_recoveryFactor.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 294px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVaxjP8CiTWqGQUBY-UpQ2lwvVTsRnrGqP0sI07EdIoAo63ZHZ8RUf70XbESiRMsTmIIVWskSHNyeovyKg7AgZPNwjMjBEzYlB0NlbSGwxOlNZW1L6jqxEpVNEd7Ew1bUovEXn/s400/ng_recoveryFactor.gif" alt="" id="BLOGGER_PHOTO_ID_5531292895318935890" border="0" /></a><span style="font-weight: bold; font-style: italic;">Figure 4:</span> Recovery Factor distribution functions for natural gas.<br />Note that the recovery 
factor is much higher than for oil.<br />(Note: I had to fix the typo in the graph x-axis naming)<br /></div><br />It looks like this derivation has strong universality underlying it. This remains a very simple and parsimonious model as it has<span style="font-weight: bold; font-style: italic;"> only one sliding parameter</span>. The parameter <span style="font-weight: bold; font-style: italic;">Median</span>/<span style="font-weight: bold; font-style: italic;">k </span>works in a scale-free fashion because both numerator and denominator have dimensions of size. This means that one can't muck with it that much -- as recovery factors increase, the underlying uncertainty will remain and the curves in <span style="font-weight: bold; font-style: italic;">Figure 2</span> will simply slide to the right over time while adjusting their shape. This will essentially describe the future reserve growth we can expect; the uncertainty in the underlying recovery factors will remain and thus we should see the limitations in the smearing of the cumulative distributions. <br /><br />To reverse the entropic dispersion of nature and thus to overcome the recovery factor inefficiency, we will certainly have to expend extra energy.<br /><br /><div style="text-align: left;"><br /></div>@whuthttp://www.blogger.com/profile/18297101284358849575noreply@blogger.com10tag:blogger.com,1999:blog-7002040.post-60019241633112932232010-10-20T18:23:00.000-07:002010-10-21T23:30:16.647-07:00Bird SurveysThis post either points out something pretty obvious or else it reveals something of practical benefit. You can judge for now.<br /><br />I briefly made a reference to bird survey statistics when I <a href="http://mobjectivist.blogspot.com/2010/03/econophysics-and-sunk-costs.html">wrote this post </a>on econophysics and income modeling. 
I took a typical rank histogram of bird species abundance and fit it the best I could to a dispersive growth model, further described <a href="http://www.theoildrum.com/node/6255">here</a>. The generally observed trend follows that many species exist in the middle of abundance and relatively small numbers of species exist at each end of the spectrum -- few species exceedingly common (i.e. starling) and few species exceedingly rare (i.e. whooping crane). Since the bird data comes from a large area in North America, the best fit followed a meta-community growth model. The meta-community adjustment impacts the knee of the histogram curve and broadens the Preston plot, effectively smearing over geological ages that different species have had to adapt.<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoFac36Vez4VlshfqkmhzQfGLDs59HEFhHI8L2AQcv0ohSV3tziDSAa4dYOAwvy-EbZ2RN2uBM8m7TAy0KyfLoC2q5Ur4aXc8Ko3SDM5ef_R0CQ-n-VtPP76LWhUsV01ZWN0Yg/s1600/birds.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 248px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoFac36Vez4VlshfqkmhzQfGLDs59HEFhHI8L2AQcv0ohSV3tziDSAa4dYOAwvy-EbZ2RN2uBM8m7TAy0KyfLoC2q5Ur4aXc8Ko3SDM5ef_R0CQ-n-VtPP76LWhUsV01ZWN0Yg/s400/birds.gif" alt="" id="BLOGGER_PHOTO_ID_5530328616187718482" border="0" /></a><span style="font-weight: bold; font-style: italic;">Figure 1:</span> Preston plot (top) and<br />rank histogram (bottom) of relative bird species abundance<br /><br /></div>If we assume that the relative species abundance has a underlying model related to steady-state growth according to<span style="font-weight: bold; font-style: italic;"> p</span>(<span style="font-weight: bold; font-style: italic;">rate</span>), where <span style="font-weight: bold; font-style: italic;">rate </span>is the relative 
advantage for species reproduction and survival, then this <span style="font-style: italic;">might </span>transitively apply to disturbances to growth as well. Recently, I ran into a paper that actually tried to discern some universality in diverse growth processes, and it coincidentally used the bird survey data along with two economic measures of firm size and mutual fund size.<br /><ul><li><span style="color: rgb(0, 0, 0); line-height: 13.3px; opacity: 1;font-family:'Helvetica','Arial','sans-serif';font-size:85%;" >Schwarzkopf, Yonathan, and J. Doyne Farmer, <a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1597504">“The Cause of Universality in Growth Fluctuations.”</a> Santa Fe Institute, </span><span style="color: rgb(0, 0, 0); line-height: 13.3px; opacity: 1;font-family:'Helvetica','Arial','sans-serif';font-size:85%;" >(April 2010)</span><span style="color: rgb(0, 0, 0); line-height: 13.3px; opacity: 1;font-family:'Helvetica','Arial','sans-serif';font-size:85%;" >.</span></li><li><span style="color: rgb(0, 0, 0); line-height: 13.3px; opacity: 1;font-family:'Helvetica','Arial','sans-serif';font-size:85%;" >Schwarzkopf, Yonathan, and J. Doyne Farmer.
<a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1597505">“Supporting Information — The Cause of Universality in Growth Fluctuations.”</a> Santa Fe Institute.</span></li></ul>I did the best I could with the figures in the paper but eventually went to the source, <a href="ftp://ftpext.usgs.gov/pub/er/md/laurel/BBS/DataFiles/">ftp://ftpext.usgs.gov/pub/er/md/laurel/BBS/DataFiles/</a>, and used data from 1997 to 2009.<br /><br />I applied the same abundance distribution as before and came up with the fit below (see <span style="color: rgb(0, 0, 153); font-weight: bold;">blue </span>and <span style="color: rgb(255, 0, 0); font-weight: bold;">red </span>curves below, data and model respectively). That provided a sanity check, but Schwarzkopf and Farmer indicated that the year-to-year relative growth fluctuations should also obey some fundamental behavior through the distribution of this metric:<br /><blockquote><span style="font-weight: bold; font-style: italic;">RelativeGrowth</span>(<span style="font-weight: bold; font-style: italic;">Year</span>) = <span style="font-weight: bold; font-style: italic;">n</span>(<span style="font-weight: bold; font-style: italic;">Year</span>+1) / <span style="font-weight: bold; font-style: italic;">n</span>(<span style="font-weight: bold; font-style: italic;">Year</span>)</blockquote>Sure enough, and for whatever reason, the "growth" in the surveyed data does show as much richness as the steady state averaged abundance distribution. The relative growth in terms of a fractional yearly change sits alongside the relative abundance curve below (in <span style="color: rgb(0, 102, 0); font-weight: bold;">green</span>). 
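The year-over-year metric above can be computed directly from paired survey counts; a sketch (the species counts here are invented for illustration):

```ruby
# RelativeGrowth(Year) = n(Year+1) / n(Year), computed per species.
# Counts are hypothetical; species absent in either year are skipped,
# since their ratios would come out zero or infinite.
counts = {
  "starling"       => [8040, 8212],
  "robin"          => [5120, 4988],
  "whooping crane" => [3, 4],
}

growth = counts.filter_map do |_species, (n0, n1)|
  n1.to_f / n0 if n0 > 0 && n1 > 0
end
```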
Notice right off the bat that the distribution of fractional changes drops off much more rapidly.<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6moRuJFVxc9-Kk7vIg1htpiRlKTCtOy5YpBuk17rNj-JDdAx81jUiw8_4eibM0qTb8bHBmQ2_mEFq7A1oVmhPPGYupxkFUqMAD-5BOhEM7TQAU4e_H5C5kagXLZ7E6c2sfmkL/s1600/BirdGrowthCDF1997.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 288px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6moRuJFVxc9-Kk7vIg1htpiRlKTCtOy5YpBuk17rNj-JDdAx81jUiw8_4eibM0qTb8bHBmQ2_mEFq7A1oVmhPPGYupxkFUqMAD-5BOhEM7TQAU4e_H5C5kagXLZ7E6c2sfmkL/s400/BirdGrowthCDF1997.gif" alt="" id="BLOGGER_PHOTO_ID_5530699507817140514" border="0" /></a><span style="font-weight: bold; font-style: italic;">Figure 2:</span> The red meta-model curve smears the median from 200 to 60000<br /></div><br />I believe that this has a simple explanation having to do with Poisson counting statistics. When estimating fractional yearly growth, we consider that the rarer bird species having the lowest abundance will contribute most strongly to fluctuation noise on year-to-year survey data. Values flipping from 1 to 2 will lead to 100% growth rates, for example. (We have to ignore movements from 1 to 0 and 0 to 1, as these growth rates become zero or infinite.)<br /><br />I devised a simple algorithm that takes two extreme values (<span style="font-weight: bold; font-style: italic;">R</span> greater than 1 and <span style="font-weight: bold; font-style: italic;">R</span> less than 1) and the steady state abundance <span style="font-weight: bold; font-style: italic;">N</span> for each species.
The lower bound of:<br /><blockquote><span style="font-weight: bold; font-style: italic;">R1</span> = <span style="font-weight: bold; font-style: italic;">R </span>* (1-sqrt(2/<span style="font-weight: bold; font-style: italic;">N</span>))/(1+sqrt(2/<span style="font-weight: bold; font-style: italic;">N</span>))</blockquote>and the upper bound becomes:<br /><blockquote><span style="font-weight: bold; font-style: italic;">R2</span> = <span style="font-weight: bold; font-style: italic;">R </span>* (1+sqrt(2/<span style="font-weight: bold; font-style: italic;">N</span>))/(1-sqrt(2/<span style="font-weight: bold; font-style: italic;">N</span>))</blockquote>The term 1.4/sqrt(<span style="font-weight: bold; font-style: italic;">N</span>) derives from Poisson counting statistics in that the relative changes become inversely related to the size of the sample. We double count in this case because we don't know whether the direction will go up or down, relative to <span style="font-weight: bold; font-style: italic;">R</span>, a number close to unity.<br /><br />(This has much similarity to the model <a href="http://mobjectivist.blogspot.com/2010/10/tower-of-babel-how-languages-diversify.html">I just used in understanding language adoption</a>. 
Small numbers of adopters experience suppressing fluctuations as 1/sqrt(<span style="font-weight: bold; font-style: italic;">N</span>))<br /><br />Expanding on the scale, the results of this algorithm are shown in <span style="font-weight: bold; font-style: italic;">Figure 3</span>.<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvQdeN6SWPKmFeAJVF2e-bnV0vEc8U0_AcePPkKdlMO1v8q4if0Kmd7LDNWWMG8npatQOLloJIXtNzwWzUwrX-hhVgtDqaKojhSZQOavXVAtze2s7dM3bh-V2SNYKrhgrUoDFF/s1600/BirdGrowthCDF1997local.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 309px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvQdeN6SWPKmFeAJVF2e-bnV0vEc8U0_AcePPkKdlMO1v8q4if0Kmd7LDNWWMG8npatQOLloJIXtNzwWzUwrX-hhVgtDqaKojhSZQOavXVAtze2s7dM3bh-V2SNYKrhgrUoDFF/s400/BirdGrowthCDF1997local.gif" alt="" id="BLOGGER_PHOTO_ID_5530699521432333810" border="0" /></a><span style="font-weight: bold; font-style: italic;">Figure 3 :</span> Model of yearly growth fluctuation in terms of a cumulative distribution function<br /></div><br />Placing it in terms of a binned probability density function, the results look like the following plot. Note the high counts match closely the data simply because the 1/sqrt(<span style="font-weight: bold; font-style: italic;">N</span>) is relatively small. 
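The R1/R2 counting bounds above reduce to a small helper; a sketch:

```ruby
# Poisson counting bounds on a growth ratio R for a species with
# steady-state abundance n:
#   R1 = R * (1 - sqrt(2/n)) / (1 + sqrt(2/n))   lower bound
#   R2 = R * (1 + sqrt(2/n)) / (1 - sqrt(2/n))   upper bound
def growth_bounds(r, n)
  eps = Math.sqrt(2.0 / n)
  [r * (1.0 - eps) / (1.0 + eps), r * (1.0 + eps) / (1.0 - eps)]
end

r1, r2 = growth_bounds(1.0, 200.0)   # bounds tighten as 1/sqrt(n)
```

For abundant species the interval collapses toward R itself, which is why the high-count species track the data so closely.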
Away from these points, you can see the general trend develop even though the data is (understandably) obscured by the same counting noise.<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5qBHFT-KsQ8ekltpvLTnAghsu78djjKSBPkdo4hOz676GomMYijGPEqqax_CfdTvCreVqxWM4QzsZQZ2Q4IErZMheDISS2N4AalKMFd6dHmbamb9aGfRrDwR3w2ddzIKWfYkW/s1600/BirdGrowthPDF1997.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 383px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5qBHFT-KsQ8ekltpvLTnAghsu78djjKSBPkdo4hOz676GomMYijGPEqqax_CfdTvCreVqxWM4QzsZQZ2Q4IErZMheDISS2N4AalKMFd6dHmbamb9aGfRrDwR3w2ddzIKWfYkW/s400/BirdGrowthPDF1997.gif" alt="" id="BLOGGER_PHOTO_ID_5530699517805491538" border="0" /></a><span style="font-weight: bold; font-style: italic;">Figure 4</span> : The probability density function of yearly growth fluctuations.<br /></div><br />The essential take-home point: counting statistics alone probably accounts for the observed yearly growth fluctuations. Before making any other assertions, you likely have to remove this source of noise. Looking at <span style="font-weight: bold; font-style: italic;">Figures 3</span> and <span style="font-weight: bold; font-style: italic;">4</span>, you can potentially see a slight bias toward positive growth for certain lower-abundance species, offset by weaker declines elsewhere, apart from some strong declines in several other low-abundance species. This may indicate the natural ebb and flow of attrition and recovery in species populations. I haven't done it, but it would make sense to identify the species or sets of species associated with these fluctuations.<br /><br />Two puzzling points also stick out. 
For one, I don't understand why Schwarzkopf and Farmer didn't immediately discern this effect. Their underlying rationale may have some of the same elements, but it gets obscured by their complicated explanation. They do use a resampling technique (on 40+ years' worth of data), but I didn't see much of a reference to conventional counting statistics, only the usual hand-wavy Levy flight arguments. They did find a power law of around -0.3 instead of the -0.5 we used for Poisson, so they may generate something equivalent to Poisson by drawing from a similar Levy distribution. Overall I find this violates Occam's razor, at least for this set of bird data.<br /><br />Secondly, it seems that these differential growth curves have real significance in <a href="http://mobjectivist.blogspot.com/2010/10/stock-market-as-econophysics-toy.html">financial applications</a>. Many of the automated transactions look for short-duration movements, and I would think that ignoring counting statistics could lead the computers astray.<br /><br /><hr width="100" /><br /><span style="font-weight: bold;">Epilogue</span><br /><br />As an aside, when I first pulled the data off the USGS server, I didn't look closely at the data sets. It turns out that the years 1994, 1995, and 1996 were included in the data but appeared to have much poorer sampling statistics. 
From 1994 to 1996, the samples got progressively larger but I didn't realize this when I first collected and processed the data.<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3EISmqkno8lBB5RRzxd3ng57NAArAs3F2tNpDDmf0B8z4AwTF2s3hrXstWFINYQ8eWzsqs1GaXsybeSVM-28YOpVci6aqNJLedqs189MFUVmSmlphTrPjivJKPe2ctCpRDktf/s1600/BirdGrowthCDF.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 378px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3EISmqkno8lBB5RRzxd3ng57NAArAs3F2tNpDDmf0B8z4AwTF2s3hrXstWFINYQ8eWzsqs1GaXsybeSVM-28YOpVci6aqNJLedqs189MFUVmSmlphTrPjivJKPe2ctCpRDktf/s400/BirdGrowthCDF.gif" alt="" id="BLOGGER_PHOTO_ID_5530699502506294386" border="0" /></a><span style="font-weight: bold; font-style: italic;">Figure 6 :</span> CDF of larger data sample.<br />Note the strange hitch in the data growth fluctuation curve.<br /></div><br />At the time, I figured that the slope had a simple explanation related to uncertainties in the surveying practice. 
I also saw some similarities to the uncertainties in stock market returns that I blogged about recently in an <a href="http://mobjectivist.blogspot.com/2010/10/stock-market-as-econophysics-toy.html">econophysics posting</a>.<br /><br />Say the survey delta <span style="font-weight: bold; font-style: italic;">time</span> has a probability distribution with mean <span style="font-weight: bold; font-style: italic;">T</span>, where <span style="font-weight: bold; font-style: italic;">T</span> most likely relates to the time between surveys:<br /><blockquote><span style="font-weight: bold; font-style: italic;">p<span style="font-size:78%;">t</span></span>(<span style="font-weight: bold; font-style: italic;">time</span>) = (1/<span style="font-weight: bold; font-style: italic;">T</span>)*exp(-<span style="font-weight: bold; font-style: italic;">time</span>/<span style="font-weight: bold; font-style: italic;">T</span>)</blockquote>Then we also assume that a surveyor tries to collect a certain amount of data, <span style="font-weight: bold; font-style: italic;">x</span>, over the duration of the survey. We could characterize this as a mean, <span style="font-weight: bold; font-style: italic;">X</span>, or some uniform interval. We don't have any knowledge of higher-order moments, so we just apply the Maximum Entropy Principle:<br /><blockquote><span style="font-weight: bold; font-style: italic;">p<span style="font-size:78%;">x</span></span>(<span style="font-weight: bold; font-style: italic;">x</span>) = (1/<span style="font-weight: bold; font-style: italic;">X</span>)*exp(<span style="font-weight: bold; font-style: italic;">-x</span>/<span style="font-weight: bold; font-style: italic;">X</span>)</blockquote>The ratio between these two establishes the relative rate of growth, <span style="font-weight: bold; font-style: italic;">rate </span>= <span style="font-weight: bold; font-style: italic;">X</span>/<span style="font-weight: bold; font-style: italic;">T</span>. 
We can derive the following cumulative quite easily:<br /><blockquote><span style="font-weight: bold; font-style: italic;">P</span>(<span style="font-weight: bold; font-style: italic;">rate</span>) = <span style="font-weight: bold; font-style: italic;">T</span>*<span style="font-weight: bold; font-style: italic;">rate</span>/(<span style="font-weight: bold; font-style: italic;">T</span>*<span style="font-weight: bold; font-style: italic;">rate </span>+ <span style="font-weight: bold; font-style: italic;">X</span>)</blockquote>The probability density is the first derivative of this cumulative, and the distribution of yearly growth-rate fluctuations turns out as its second derivative. Taking that derivative gives, in magnitude:<br /><blockquote>d<span style="font-weight: bold; font-style: italic;">p</span>(<span style="font-weight: bold; font-style: italic;">rate</span>)/d<span style="font-weight: bold; font-style: italic;">rate</span> = 2*(<span style="font-weight: bold; font-style: italic;">T</span>/<span style="font-weight: bold; font-style: italic;">X</span>)<sup>2</sup>/(1+<span style="font-weight: bold; font-style: italic;">rate</span>*<span style="font-weight: bold; font-style: italic;">T</span>/<span style="font-weight: bold; font-style: italic;">X</span>)<sup>3</sup></blockquote>On a cumulative plot as in <span style="font-weight: bold; font-style: italic;">Figure 6</span>, this shows a power-law of order 2 (see the <span style="color: rgb(255, 153, 102); font-weight: bold;">orange </span>curve). Near the knee of the curve, it looks a bit sharper. 
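This cumulative can be checked with a quick Monte Carlo draw from the two exponential distributions; a sketch, with illustrative X and T values (not the fitted survey values):

```python
import random

def rate_cdf(rate, X, T):
    """Closed-form cumulative: P(rate) = T*rate/(T*rate + X)."""
    return T * rate / (T * rate + X)

def simulated_rate_cdf(rate, X, T, n=200_000, seed=1):
    """Estimate P(x/t < rate) with x ~ Exp(mean X), t ~ Exp(mean T)."""
    rng = random.Random(seed)
    hits = sum(
        rng.expovariate(1.0 / X) / rng.expovariate(1.0 / T) < rate
        for _ in range(n)
    )
    return hits / n

X, T = 60.0, 1.0
for r in (10.0, 60.0, 200.0):
    print(r, rate_cdf(r, X, T), simulated_rate_cdf(r, X, T))
```

At rate = X/T the closed form gives exactly 1/2, which the simulation reproduces to within counting noise.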
If we use a uniform distribution of <span style="font-weight: bold; font-style: italic;">p<span style="font-size:78%;">x</span></span>(<span style="font-weight: bold; font-style: italic;">x</span>) up to some maximum sample interval, then it matches the knee better (see the dashed curve).<br /><br />So the simple theory says that much of the observed yearly fluctuation may arise simply due to sampling variations during the surveying interval. Plotting as a binned probability density function, the contrast shows up more clearly in <span style="font-weight: bold; font-style: italic;">Figure 7</span>. In both cases the fit uses <span style="font-weight: bold; font-style: italic;">X/T</span> = 60. This number is bigger than unity because the number of samples appears to increase every year (I also did not divide by 15, the number of years in the survey).<br /><br />But of course, the reason this maximum entropy model works as well as it does is that <span style="font-style: italic;">real variation</span> existed in the sampling techniques. 
Those years from 1994 to 1996 placed enough uncertainty, and thus variance, in the growth rates to completely smear the yearly growth fluctuation distribution.<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcRCaaWn0rNN1s-CsK2JK5I5hduReFDO1ezm9iVnDyQfTMk3-08QuiwG6G2wnU0Ok99J8120TDt28IirskLL1jyyaZkX26diGNNUnURCHZCfKOZcP4r5wGIopQyPJ_-CxGiEMT/s1600/BirdGrowthPDF.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 390px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcRCaaWn0rNN1s-CsK2JK5I5hduReFDO1ezm9iVnDyQfTMk3-08QuiwG6G2wnU0Ok99J8120TDt28IirskLL1jyyaZkX26diGNNUnURCHZCfKOZcP4r5wGIopQyPJ_-CxGiEMT/s400/BirdGrowthPDF.gif" alt="" id="BLOGGER_PHOTO_ID_5530699506551117250" border="0" /></a><span style="font-weight: bold; font-style: italic;">Figure 7 :</span> PDF of larger sample which had sampling variations.<br />Note that this has a much greater width than <span style="font-style: italic;">Figure 4.</span><br /></div><br />Only in retrospect, when I was trying to rationalize why a sampling variation this large would occur in a seemingly standardized yearly survey, did I find the real source of this variation. Clearly, the use of the Maximum Entropy Principle explains a lot, but you still may have to dig out the sources of the uncertainty.<br /><br />Can we understand the statistics of something as straightforward as a bird survey? Probably, but as you can see, we have to go at it from a different angle than the one typically recommended. 
I will keep an eye out for whether it has more widespread applicability; for now it obviously requires countable discrete entities.<br /><br /><hr /><br /><span style="font-weight: bold;">Tower of Babel, How Languages Diversify</span> (2010-10-16)<br /><br />One pattern that has evaded linguists and cognitive scientists for some time relates to the quantitative distribution of human language diversity. Much as plant and animal species <a href="http://www.theoildrum.com/node/6255">diversify in a specific pattern</a>, with very few species dominating within an ecosystem and relatively few exceedingly rare, the same thing happens with natural languages. You find a few languages spoken by many people, very few spoken only rarely, and the largest number occupying the middle.<br /><br />Consider a simple model of language growth whereby adoption of languages occurs over time by dispersion. The cumulative probability distribution for the number of languages is<br /><blockquote><span style="font-style: italic; font-weight: bold;">P</span>(<span style="font-weight: bold; font-style: italic;">n</span>) = 1/(1+1/<span style="font-weight: bold; font-style: italic;">g</span>(<span style="font-weight: bold; font-style: italic;">n</span>))</blockquote>This form derives from applying the maximum entropy principle to any random variate where one knows only the mean of the growth rate and an assumed mean of the saturation level. 
I refer to this as <a href="http://www.energybulletin.net/node/51768">entropic dispersion</a> and have used it in <a href="http://mobjectivist.blogspot.com/2010/06/mentaculus.html">many applications before</a>, so I no longer feel a need to rederive the term every time I bring it up.<br /><br />The key to applying entropic dispersion lies in understanding the growth term <span style="font-weight: bold; font-style: italic;">g</span>(<span style="font-weight: bold; font-style: italic;">n</span>). In many cases <span style="font-weight: bold; font-style: italic;">n</span> will grow linearly with time, so the result will assume a <a href="http://mobjectivist.blogspot.com/2008/07/solving-enigma-of-reserve-growth.html">hyperbolic shape</a>. In another case, exponential growth driven by technology advances will result in a <a href="http://mobjectivist.blogspot.com/2008/08/general-dispersive-discovery-laplace.html">logistic sigmoid distribution</a>. Neither of these likely explains the language adoption growth curve.<br /><br />Intuitively, one imagines that language adoption occurs in fits and starts. Initially a small group of people (at least two, for argument's sake) have to convince other people of the utility of the language. But a natural fluctuation arises with small numbers, as key proponents of the language will leave the picture, and the growth of the language will only sustain itself when enough adopters come along and the law of large numbers starts to take hold. A real driving force to adoption doesn't exist, as ordinary people have no real clue as to what constitutes a "good" language, so this random walk or Brownian motion has to play an important role in the early stages of adoption.<br /><br />So with that as a premise, we have to determine how to model this effect mathematically. Incrementally, we wish to show that the growth term gets suppressed by the potential for fluctuation in the early number of adopters. 
A weaker steady growth term will take over once a sufficiently large crowd joins the bandwagon.<br /><blockquote><span style="font-weight: bold; font-style: italic;">dn</span> = <span style="font-weight: bold; font-style: italic;">dt</span> / (<span style="font-weight: bold; font-style: italic;">C</span>/sqrt(<span style="font-weight: bold; font-style: italic;">n</span>) + <span style="font-weight: bold; font-style: italic;">K</span>)</blockquote>In this differential formulation, you can see how the fluctuation term, which goes as 1/sqrt(<span style="font-weight: bold; font-style: italic;">n</span>), suppresses the initial growth until the <span style="font-weight: bold; font-style: italic;">K</span> term becomes more important and growth settles into a steady rate. Integrating once, we get the implicit equation:<br /><span style="font-weight: bold; font-style: italic;"></span><blockquote>2*<span style="font-weight: bold; font-style: italic;">C</span>*sqrt(<span style="font-weight: bold; font-style: italic;">n</span>) + <span style="font-weight: bold; font-style: italic;">K</span>*<span style="font-weight: bold; font-style: italic;">n</span> = <span style="font-weight: bold; font-style: italic;">t</span><br /></blockquote>Plotting this for <span style="font-weight: bold; font-style: italic;">C</span>=0.007 and <span style="font-weight: bold; font-style: italic;">K</span>=0.000004, we get the following growth function.<br /><br /><div style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGx6A0P8W1HsMVldKNQ_IJqGxjuKkzhsvjukgpWhkkivqLCSKGgvRP1izZENPwh8HYGUH02F4Idq-UWejKLYsZpr-usRfgX6Ly0A3CfXeKO8WHnLyks8Bv-5T3hZ0WZnsjm59V/s1600/languageGrowthFunction.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 360px; height: 231px;" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGx6A0P8W1HsMVldKNQ_IJqGxjuKkzhsvjukgpWhkkivqLCSKGgvRP1izZENPwh8HYGUH02F4Idq-UWejKLYsZpr-usRfgX6Ly0A3CfXeKO8WHnLyks8Bv-5T3hZ0WZnsjm59V/s400/languageGrowthFunction.gif" alt="" id="BLOGGER_PHOTO_ID_5529052087201833074" border="0" /></a><span style="font-weight: bold; font-style: italic;">Figure 1 </span>: Growth function assuming suppression during early fluctuations<br /><div style="text-align: left;"><br /></div></div>This makes a lot of sense as you can see that growth occurs very slowly until an accumulated time at which the linear term takes over. That becomes the saturation level for an expanding population base as the language has taken root.<br /><br />To put this in stochastic terms assuming that the actual growth terms disperse across boundaries, we get the following cumulative dispersion (plugging the last equation into the first equation to simulate an ergodic steady state):<br /><blockquote><span style="font-style: italic; font-weight: bold;">P</span>(<span style="font-weight: bold; font-style: italic;">n</span>) = 1/(1+1/<span style="font-weight: bold; font-style: italic;">g</span>(<span style="font-weight: bold; font-style: italic;">n</span>)) = 1/(1+1/<span style="font-weight: bold; font-style: italic;"></span>(2*<span style="font-weight: bold; font-style: italic;">C</span>*sqrt(<span style="font-weight: bold; font-style: italic;">n</span>) + <span style="font-weight: bold; font-style: italic;">K</span>*<span style="font-weight: bold; font-style: italic;">n</span>))</blockquote>I took two sets of the distribution of population sizes of languages (DPL) of the Earth’s actually spoken languages from the references below and plotted the entropic dispersion alongside the data. The first reference provides the DPL in terms of a probability density function (i.e. 
the first derivative of <span style="font-weight: bold; font-style: italic;">P</span>(<span style="font-weight: bold; font-style: italic;">n</span>)) and the second as a cumulative distribution function. The values for <span style="font-weight: bold;">C</span> and <span style="font-weight: bold;">K</span> were as used above. The fit works parsimoniously well and it makes much more sense than the complicated explanations offered up previously for language distribution.<br /><br /><br /><div style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSb40AzGsLJqsoHw5F9CNbfJJnfNsGuL1ula-qE-yYQhXW2Opt08sH1puTCUa7o8i8oMs3KTPknXyOnhW7S_yaby44bbZwpg0DzxdQkCVNsFDhvhbCSeS73Iyf0Ei4J_aiFZsG/s1600/LanguageDispersion.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 261px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSb40AzGsLJqsoHw5F9CNbfJJnfNsGuL1ula-qE-yYQhXW2Opt08sH1puTCUa7o8i8oMs3KTPknXyOnhW7S_yaby44bbZwpg0DzxdQkCVNsFDhvhbCSeS73Iyf0Ei4J_aiFZsG/s400/LanguageDispersion.gif" alt="" id="BLOGGER_PHOTO_ID_5529036886484040402" border="0" /></a><span style="font-weight: bold; font-style: italic;">Figure 2 </span>: Language diversity (top) probability density function (below) cumulative. The entropic dispersion model in green.<br /></div><br />In summary, the two pieces to the puzzle are assuming dispersion according to the maximum entropy principle, and a suppressed growth rate due to fluctuations during the early adoption. 
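These two pieces can be sketched numerically, using the C and K values quoted above; the local log-log slope of the model moves from about 1/2 in the early fluctuation-suppressed regime toward 1 once the linear term dominates:

```python
import math

C, K = 0.007, 0.000004   # values quoted in the text

def g(n):
    """Integrated growth function: g(n) = 2*C*sqrt(n) + K*n (= t)."""
    return 2 * C * math.sqrt(n) + K * n

def P(n):
    """Entropic dispersion cumulative: P(n) = 1/(1 + 1/g(n))."""
    return 1.0 / (1.0 + 1.0 / g(n))

def loglog_slope(f, n, step=1.01):
    """Local slope of f on a log-log plot, by finite difference."""
    return (math.log(f(n * step)) - math.log(f(n))) / math.log(step)

# Early adoption: sqrt(n) fluctuation suppression dominates, slope ~ 1/2.
print(loglog_slope(P, 10.0))
# Late growth: the linear K*n term dominates g(n), slope ~ 1.
print(loglog_slope(g, 1e9))
```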
This gives two power-law slopes in the cumulative: 1/2 in the lower part of the curve and 1 in the upper part.<br /><br /><span style="font-weight: bold;">References</span><br /><ol><li><a href="http://arxiv.org/PS_cache/physics/pdf/0504/0504196v1.pdf">Scaling Relations for Diversity of Languages</a> (2005)</li><li><a href="http://iopscience.iop.org/1367-2630/11/9/093006/pdf/1367-2630_11_9_093006.pdf">Competition and fragmentation: a simple model generating lognormal-like distributions</a> (2009)</li><li><a href="http://www.pnas.org/content/106/31/12640.full.pdf+html">Scaling laws of human interaction activity</a> (2009)<br />Discusses the fluctuation term.<br /></li></ol><br /><br /><hr /><br /><div class="content"><a href="http://www.huffingtonpost.com/2010/10/14/classroom-heroes-ny-math-_n_762196.html"><span style="font-style: italic;font-size:100%;" >NY Math Teacher Howard A. Stern Uses Ingenuity To Overcome Failure Statistics</span></a><p>The public school teacher highlighted in the linked article has this to say:</p><blockquote style="font-style: italic;"><p>"So much of math is about noticing patterns," says Stern, who should know. 
Before becoming a teacher, he was a finance analyst and a quality engineer.</p></blockquote>I always try to seek interesting patterns in the data, but more to the point, I try to actually understand the behavior from a fundamental perspective.<br /><blockquote style="font-style: italic;"><p>One way Stern uses technology is by helping his students visualize his lessons through the use of graphing calculators.</p></blockquote><p>Stern has it exactly right: if we treat knowledge seeking as a game, like a sudoku puzzle, we can attract more people to science in general.<br /></p><p>I think that the pattern in language distribution has similarities to that of innovation adoption as well, similar to what Rogers describes in his book "Diffusion of Innovations". I will try to look into this further, as I think the dispersive argument holds some promise as an analytical approach.<br /></p><a href="http://farm4.static.flickr.com/3284/2363312417_6173828b01.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 477px; height: 314px;" src="http://farm4.static.flickr.com/3284/2363312417_6173828b01.jpg" alt="" border="0" /></a><br /></div><br /><hr width="50%" /><br /><br /><span style="font-weight: bold;">Stock Market as Econophysics Toy Problem</span> (2010-10-12)<br /><br />Consider a typical stock market. It consists of a number of stocks that show various rates of growth, <span style="font-weight: bold; font-style: italic;">R</span>. Say that these have an average growth rate, <span style="font-weight: bold; font-style: italic;">r</span>. 
Then by the Maximum Entropy Principle, the probability distribution function is:<br /><span style="font-weight: bold; font-style: italic;"></span><blockquote><span style="font-weight: bold; font-style: italic;">pr</span>(<span style="font-weight: bold; font-style: italic;">R</span>) = 1/<span style="font-weight: bold; font-style: italic;">r</span>*exp(-<span style="font-weight: bold; font-style: italic;">R</span>/<span style="font-weight: bold; font-style: italic;">r</span>)</blockquote>We can solve this for an expected valuation, <span style="font-weight: bold; font-style: italic;">x</span>, of some arbitrary stock after time, <span style="font-weight: bold;">t</span>.<br /><blockquote><span style="font-style: italic; font-weight: bold;">n</span>(<span style="font-style: italic; font-weight: bold;">x</span>|<span style="font-style: italic; font-weight: bold;">t</span>) = ∫ <span style="font-weight: bold; font-style: italic;">pr</span>(<span style="font-weight: bold; font-style: italic;">R</span>) <span style="font-weight: bold; font-style: italic;">δ</span>(<span style="font-weight: bold; font-style: italic;">x</span><span style="font-style: italic;">-</span><span style="font-weight: bold; font-style: italic;">Rt</span>) <span style="font-style: italic;">d</span><span style="font-weight: bold; font-style: italic;">R</span></blockquote>This reduces to the marginal distribution:<br /><span style="font-weight: bold; font-style: italic;"></span><blockquote><span style="font-weight: bold; font-style: italic;">n</span>(<span style="font-style: italic; font-weight: bold;">x</span>|<span style="font-style: italic; font-weight: bold;">t</span>) = 1/<span style="font-style: italic;"><span style="font-weight: bold;">(rt)</span></span> * exp(-<span style="font-weight: bold; font-style: italic;">x</span><span style="font-style: italic;"><span style="font-weight: bold;">/</span></span><span style="font-style: italic;"><span style="font-weight: 
bold;">(rt)</span></span>)</blockquote>In general, the growth of a stock only occurs over some average time, <span style="font-weight: bold;">τ</span>, which has its own Maximum Entropy probability distribution:<br /><span style="font-weight: bold; font-style: italic;"></span><blockquote><span style="font-weight: bold; font-style: italic;">p</span>(<span style="font-weight: bold; font-style: italic;">t</span>) = 1/<span style="font-weight: bold;">τ</span> *exp(-<span style="font-style: italic;"><span style="font-weight: bold;">t</span></span>/<span style="font-weight: bold;">τ</span>)<br /></blockquote>So when the expected growth is averaged over expected times we get this integral:<br /><blockquote><span style="font-style: italic; font-weight: bold;">n</span>(<span style="font-style: italic; font-weight: bold;">x</span>) = ∫ <span style="font-style: italic;"><span style="font-weight: bold;">n</span></span>(<span style="font-style: italic;"><span style="font-weight: bold;">x|t</span></span>) <span style="font-weight: bold; font-style: italic;">p</span>(<span style="font-weight: bold; font-style: italic;">t</span>) <span style="font-style: italic;">d</span><span style="font-weight: bold; font-style: italic;">t</span></blockquote>We have almost solved our problem, but this integration reduces to an ugly transcendental function <span style="font-weight: bold;">K</span><sub><span style="font-weight: bold;">0</span> </sub>otherwise known as a modified Bessel function of the second kind and order 0.<br /><blockquote><span style="font-weight: bold; font-style: italic;">n</span>(<span style="font-weight: bold; font-style: italic;">x</span>) = 2/(<span style="font-weight: bold; font-style: italic;">r</span><span style="font-weight: bold; font-style: italic;">τ</span>) * K<sub>0</sub>(2*sqrt(<span style="font-weight: bold;">x</span>/(<span style="font-weight: bold; font-style: italic;">r</span><span style="font-weight: bold; font-style: italic;">τ</span>) ))<br 
/></blockquote>Fortunately, the K<sub>0 </sub>function is available on any spreadsheet program (Excel, OpenOffice, etc) as the function <span style="font-weight: bold;font-family:courier new;" >BESSELK(X;0)</span>.<br /><br />Let us try it out. I took <a href="http://www.crossingwallstreet.com/archives/2009/04/more-on-the-distribution-of-stock-returns.html">3500 stocks over the last decade (since 1999</a>), and plotted the histogram of all rates of return below.<br /><br /><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiH2mQgk1D9uplaLZHPYADwxHetmbtHAM4E6BCDvzBfprlfm2jDcjHPG0oDTtXYP1rm60IiQcEIUDK6PZepuwjmIXDtKgNvkZOHxluS-GFo80zVCLNRm53DmJYlT_IcbVyO6Kq/s1600/stock_returns_since_1999.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 362px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiH2mQgk1D9uplaLZHPYADwxHetmbtHAM4E6BCDvzBfprlfm2jDcjHPG0oDTtXYP1rm60IiQcEIUDK6PZepuwjmIXDtKgNvkZOHxluS-GFo80zVCLNRm53DmJYlT_IcbVyO6Kq/s400/stock_returns_since_1999.gif" alt="" id="BLOGGER_PHOTO_ID_5527775686609188706" border="0" /></a>The <span style="color: rgb(255, 0, 0);">red line</span> is the Maximum Entropy model for the expected rate of return, <span style="font-weight: bold; font-style: italic;">n</span>(<span style="font-weight: bold; font-style: italic;">x</span>) where <span style="font-weight: bold; font-style: italic;">x</span> is the rate of return. This has <span style="font-weight: bold;">only a single adjustable parameter</span>, the aggregate value <span style="font-weight: bold; font-style: italic;">r</span><span style="font-weight: bold; font-style: italic;">τ.</span> We line this up with the peak which also happens to coincide with the mean return value. 
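The reduction to K<sub>0</sub> can also be cross-checked numerically without a spreadsheet. A sketch using only the Python standard library, with an illustrative rτ = 2 (the k0 here is a simple trapezoid evaluation of the integral representation of K<sub>0</sub>, not a library routine):

```python
import math

def k0(z, steps=2000, umax=20.0):
    """Modified Bessel function K0 via its integral representation
    K0(z) = integral_0^inf exp(-z*cosh(u)) du (trapezoid rule)."""
    du = umax / steps
    total = 0.5 * (math.exp(-z) + math.exp(-z * math.cosh(umax)))
    for i in range(1, steps):
        total += math.exp(-z * math.cosh(i * du))
    return total * du

def n_closed(x, rt):
    """Closed-form density: n(x) = 2/(r*tau) * K0(2*sqrt(x/(r*tau)))."""
    return 2.0 / rt * k0(2.0 * math.sqrt(x / rt))

def n_direct(x, rt, steps=4000, tmax_factor=60.0):
    """Direct marginalization of n(x|t) = (1/(r*t))*exp(-x/(r*t)) over
    p(t) = (1/tau)*exp(-t/tau); only the product r*tau matters."""
    r = tau = math.sqrt(rt)
    dt = tmax_factor * tau / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt       # midpoint rule, skips t = 0
        total += (1.0 / (r * t)) * math.exp(-x / (r * t)) \
                 * (1.0 / tau) * math.exp(-t / tau) * dt
    return total

# With rt = 2 the two routes agree:
for x in (0.5, 2.0, 8.0):
    print(x, n_closed(x, 2.0), n_direct(x, 2.0))
```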
For the 10 year period, <span style="font-weight: bold; font-style: italic;">r</span><span style="font-weight: bold; font-style: italic;">τ =</span> 2, essentially indicating an average doubling in the valuation of the average stock. This doesn't say anything about the stock market as a whole, which turned out pretty flat over the decade, only that certain high-rate-of-return stocks upped the average (much like the story of Bill Gates entering a room of average wage earners).<br /><br />The following figure shows a Monte Carlo simulation where I draw 3500 samples from a <span style="font-weight: bold; font-style: italic;">r</span><span style="font-weight: bold; font-style: italic;">τ </span>value of 1. This gives an idea of the amount of counting noise we might see.<br /><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiO_jH8r1OA6IXjxwZnh-CRLvEsY8kBB44iilt0hlX7v-bGrqrTXYJswFX81tHXTjqiqiLCAL46f1DCpOqPvmA9IaprQiFkIfErANJuOnLJgri25uyAmx29U1zU5WYN82jGg_na/s1600/stock_returns_sim.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 358px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiO_jH8r1OA6IXjxwZnh-CRLvEsY8kBB44iilt0hlX7v-bGrqrTXYJswFX81tHXTjqiqiLCAL46f1DCpOqPvmA9IaprQiFkIfErANJuOnLJgri25uyAmx29U1zU5WYN82jGg_na/s400/stock_returns_sim.gif" alt="" id="BLOGGER_PHOTO_ID_5527774963972358514" border="0" /></a>I should point out that the MaxEnt model shows very little by way of excessively fat tails at high returns. A stock has to both survive a long time and grow at a rapid enough rate to get too far out in the tail. You see that in the data as only a couple of the stocks have returns greater than 100x. 
I don't rule out the possibility of high-return tails, but we would need to put even more disorder in the <span style="font-weight: bold; font-style: italic;">pr</span>(<span style="font-weight: bold; font-style: italic;">R</span>) distribution than the MaxEnt provides for a mean return rate. The actual data seems a bit sharper and has more outliers than the Monte Carlo simulation, indicating some subtlety that I have probably missed. Yet this demonstrates how to use the Maximum Entropy Principle most effectively -- you should only include the parameters that you can defend. From this minimal set of constraints, you observe how far it can take you. In this case, I could only defend some concept of a mean in <span style="font-weight: bold; font-style: italic;">r</span><span style="font-weight: bold; font-style: italic;">τ</span>, and then you get a distribution that reflects the uncertainty you have in the rest of the parameter space.<br /><br />The stock market, with its myriad players, follows an entropic model to first order. All the agents seem to fill up the state space, so we can get a parsimonious fit to the data with an almost laughably simple econophysics model. For this model, the distribution curve on a log-log plot will always take on exactly that skewed shape (excepting statistical noise, of course) -- it will only shift laterally depending on the general direction of the market.<br /><br />The stock market becomes essentially a toy problem, no different from the explanation of statistical mechanics you may encounter in a physics course.<br /><br />Has anyone else figured this out?<br /><br /><span style="font-size:130%;"><span style="font-weight: bold;">[EDIT]</span></span><br />Besides the slight fat tail, which may be due to compounding growth similar to that found in <a href="http://mobjectivist.blogspot.com/2010/03/econophysics-and-sunk-costs.html">individual incomes</a>, the sharper peak may also have a second-order basis. 
This could result from a behavior called <a href="http://www.theoptionsinsider.com/tradingtechnology/?id=5536">implied correlation</a>, which measures the synchronized behavior among stocks in the market. According to recent measurements, the correlation has hit all-time highs (most recently around October 5). Qualitatively, a high correlation would imply that the average growth rate <span style="font-weight: bold; font-style: italic;">r</span> would show much less dispersion in that variate, and the dispersion would only apply to the length of time, <span style="font-weight: bold; font-style: italic;">t</span>, that a stock rides the crest. Correlation essentially removes one of the parameters of variability from the model and the distribution sharpens up. The stock distribution then becomes the following simple damped exponential instead of the Bessel form.<br /><blockquote><span style="font-weight: bold; font-style: italic;">n</span>(<span style="font-weight: bold; font-style: italic;">x</span>) = 1/(<span style="font-weight: bold; font-style: italic;">r</span><span style="font-weight: bold; font-style: italic;">τ</span>) * exp(-<span style="font-weight: bold; font-style: italic;">x</span>/(<span style="font-weight: bold; font-style: italic;">r</span><span style="font-weight: bold; font-style: italic;">τ</span>))</blockquote>The figure below shows what happens when about 40% of the stocks show this correlation (<span style="color: rgb(0, 153, 0); font-weight: bold;">in green</span>).
The other 60% show independent variability or dispersion in the rates as per the original model.<br /><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxOPNGZcmS9POzytE3qvvShU4sSdPt-NUXCsnVePx8_YeR4AXhmVXcLm3T0RLia1_ZTttuUEbSdRF25qiEeBPXazj9DLwXO4BsVR6K2htShAhJlgKuyvzMMa86hYrDbvNY6jpI/s1600/stock_returns_implied_correlation.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 365px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxOPNGZcmS9POzytE3qvvShU4sSdPt-NUXCsnVePx8_YeR4AXhmVXcLm3T0RLia1_ZTttuUEbSdRF25qiEeBPXazj9DLwXO4BsVR6K2htShAhJlgKuyvzMMa86hYrDbvNY6jpI/s400/stock_returns_implied_correlation.gif" alt="" id="BLOGGER_PHOTO_ID_5529234241623860834" border="0" /></a>I don't think this makes the collective stock behavior any more complex. In fact, I think it makes it simpler. Implied correlation actually points to the future of the stock market. Dispersion in stock returns will narrow as all stocks move in unison.
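The 40/60 mixture just described can be sketched the same way (a sketch with illustrative names; the correlated fraction keeps only the time dispersion, so its gains follow the damped exponential, while the rest keep the full rate-and-time dispersion):

```python
import math
import random

def mixed_gains(n=3500, r_tau=1.0, corr_frac=0.4, seed=7):
    """Mixture model: a corr_frac share of stocks moves in unison
    (rate dispersion removed, gains follow exp(-x/r_tau)/r_tau),
    while the rest disperse independently in both rate and time,
    giving the Bessel shape."""
    rng = random.Random(seed)
    mu = math.sqrt(r_tau)
    gains = []
    for _ in range(n):
        if rng.random() < corr_frac:
            gains.append(rng.expovariate(1.0 / r_tau))   # correlated
        else:
            gains.append(rng.expovariate(1.0 / mu) *
                         rng.expovariate(1.0 / mu))      # dispersed
    return gains
```

Both components keep the same mean gain, so only the shape of the distribution changes: a sharper peak with the same lateral position.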
It makes it even more of a toy, with computers potentially dictating all movements.<br /><br /><div style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhl3T7uzxDrKCdGIgRuY4bbaNtZZj6571j_T9sV8OBVBpbyiB5LYxOejd9ljIDBbHejjbigBS6tysrVwMcwaAea1RyVe5NFOaA531rJv-KXppGaK1mLIGIy3oINSW3ZdkgbXiKl/s1600/implied_correlation.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 194px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhl3T7uzxDrKCdGIgRuY4bbaNtZZj6571j_T9sV8OBVBpbyiB5LYxOejd9ljIDBbHejjbigBS6tysrVwMcwaAea1RyVe5NFOaA531rJv-KXppGaK1mLIGIy3oINSW3ZdkgbXiKl/s400/implied_correlation.gif" alt="" id="BLOGGER_PHOTO_ID_5529236276949347042" border="0" /></a>Implied correlation has risen in the last few years (from <a href="http://www.theoptionsinsider.com/tradingtechnology/?id=5536">here</a>)<br /></div><br /><br /><hr width="75%"><br /><span style="font-weight: bold;">References<br /></span> I personally don't deal with the stock market, preferring to watch it from afar. I found a few papers that try to understand this effect, but most just try to brute-force fit it to various distributions.<br /><ol><li>Analysis of the same data from <a href="http://seekingalpha.com/article/132215-long-term-stock-return-distributions-getting-the-whole-picture">Seeking Alpha</a><br /></li><li>This paper is close but no cigar. It looks like they "detrend" the data to get rid of the skew, which I think misses the point:<br />"<a href="http://arxiv.org/PS_cache/arxiv/pdf/0705/0705.4112v2.pdf"><span style="font-style: italic;">Microscopic origin of non-Gaussian distributions of financial returns</span></a>" (2007)</li><li>This book has info on the Bessel distribution:<br />"<span style="font-style: italic;">Return distributions in finance</span>", J. Knight and S. Satchell</li><li>Interesting from an <a href="http://www2.physics.umd.edu/%7Eyakovenk/papers/QuantFinance-2-443-2002.pdf">econophysics perspective</a>.</li><li>This book appears worthless:<br />"<span style="font-style: italic;">Fat-Tailed and Skewed Asset Return Distributions</span>", S.T. Rachev, F.J. Fabozzi, C. Menn</li></ol><br /><hr width="75%"><br /><span style="font-size:130%;"><span style="font-weight: bold;">Black-Scholes</span></span> (posted 2010-10-07 by @whut)<br /><br />Games for suits. This post has no relevance in the greater scheme of things.<br /><br />As a premise, consider that the financial industry needs instruments of wealth creation that work opposite to that of stocks. For example, when stock prices remain low, then something else should take up the slack -- otherwise important people won't make money. Wall Street invented derivatives, options, and other hedging methods to serve as an investment vehicle under these conditions.<br /><br />We can try to show how this works.<br /><br />If <span style="font-weight: bold; font-style: italic;">S</span> is the stock price, then <span style="font-weight: bold; font-style: italic;">V</span> ~ 1/<span style="font-weight: bold; font-style: italic;">S</span> is an example "derivative" that works as a reciprocal to price.
This becomes the normative description and defines the basic objective as to what the investment class wants to achieve -- an alternate form of income that balances swings in stock price, potentially reducing risk.<br /><br />Further, we assume that the derivative will grow or decline exponentially over time.<br /><br />So we get:<br /><span style="font-weight: bold; font-style: italic;">V</span>(<span style="font-weight: bold; font-style: italic;">S</span>,<span style="font-weight: bold; font-style: italic;">t</span>) = <span style="font-weight: bold; font-style: italic;">K</span>/<span style="font-weight: bold; font-style: italic;">S</span> * exp(<span style="font-weight: bold; font-style: italic;">a</span>*<span style="font-weight: bold; font-style: italic;">t</span>)<br /><br />If <span style="font-weight: bold; font-style: italic;">a</span> &gt; 0, the derivative will grow; if <span style="font-weight: bold; font-style: italic;">a</span> is less than zero, then the derivative will damp out over time. The term <span style="font-weight: bold; font-style: italic;">K</span> is a constant of proportionality.<br /><br />The <a href="http://en.wikipedia.org/wiki/Black%E2%80%93Scholes">infamous Black-Scholes equation</a> supposedly governs the behaviour of derivatives with respect to stock prices (and time) according to this invariant:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://upload.wikimedia.org/math/0/a/7/0a73eeb3a0a4e975cf629fe206d780be.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 314px; height: 44px;" src="http://upload.wikimedia.org/math/0/a/7/0a73eeb3a0a4e975cf629fe206d780be.png" alt="" border="0" /></a><br />The particulars may change but this formulation describes <span style="font-style: italic;">THE</span> equation that Merton, Black, and Scholes devised to aid investors in making hedged investments using options and other derivatives.
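Before wading through any algebra, the claim that a reciprocal-in-price solution can satisfy this equation is easy to sanity-check numerically: plug the trial solution into the equation with finite differences and see which value of <span style="font-weight: bold; font-style: italic;">a</span> makes the residual vanish (a minimal sketch; the parameter values are arbitrary):

```python
import math

def bs_residual(a, S=1.3, t=0.7, K=1.0, r=0.05, sigma=0.3):
    """Finite-difference residual of the Black-Scholes equation
    V_t + (1/2) sigma^2 S^2 V_SS + r S V_S - r V
    for the trial solution V(S, t) = K/S * exp(a*t)."""
    V = lambda s, u: K / s * math.exp(a * u)
    h = 1e-5
    V_t = (V(S, t + h) - V(S, t - h)) / (2 * h)
    V_S = (V(S + h, t) - V(S - h, t)) / (2 * h)
    V_SS = (V(S + h, t) - 2 * V(S, t) + V(S - h, t)) / h ** 2
    return V_t + 0.5 * sigma ** 2 * S ** 2 * V_SS + r * S * V_S - r * V(S, t)

# the residual vanishes (to finite-difference accuracy) only
# when a = 2r - sigma^2, which is 0.01 for these parameters
residual = bs_residual(2 * 0.05 - 0.3 ** 2)
```

The residual is near zero at that one value of <span style="font-weight: bold; font-style: italic;">a</span> and order-one elsewhere, matching what the substitution works out symbolically.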
The way to read this equation is to note that the derivative drifts and diffuses into the space of the stock price, with each term scaled by a power of the stock price itself. The drift term occurs due to the interest rate <span style="font-weight: bold; font-style: italic;">r</span> providing a kind of forcing function. The derivative, <span style="font-weight: bold; font-style: italic;">V</span>, can also grow due to pure interest rate compounding, as seen in the last term. Whether this actually holds or not, I don't really care as I don't participate in these schemes.<br /><br />So if you look at it from a very neutral perspective you come up with some interesting observations. For one, you can trivially solve this partial differential equation for a generally disordered set of initial conditions. And the solution appears exactly the same as my first expression above:<br /><span style="font-weight: bold; font-style: italic;">V</span>(<span style="font-weight: bold; font-style: italic;">S</span>,<span style="font-weight: bold; font-style: italic;">t</span>) = <span style="font-weight: bold; font-style: italic;">K</span>/<span style="font-weight: bold; font-style: italic;">S</span> * exp(<span style="font-weight: bold; font-style: italic;">a</span>*<span style="font-weight: bold; font-style: italic;">t</span>)<br /><br />To verify this assertion, we substitute the partial derivatives into the B-S equation. With ∂<span style="font-weight: bold; font-style: italic;">V</span>/∂<span style="font-weight: bold; font-style: italic;">t</span> = <span style="font-weight: bold; font-style: italic;">a</span>*<span style="font-weight: bold; font-style: italic;">V</span>, ∂<span style="font-weight: bold; font-style: italic;">V</span>/∂<span style="font-weight: bold; font-style: italic;">S</span> = -<span style="font-weight: bold; font-style: italic;">K</span>/<span style="font-weight: bold; font-style: italic;">S</span><sup>2</sup> * exp(<span style="font-weight: bold; font-style: italic;">a</span>*<span style="font-weight: bold; font-style: italic;">t</span>), and ∂<sup>2</sup><span style="font-weight: bold; font-style: italic;">V</span>/∂<span style="font-weight: bold; font-style: italic;">S</span><sup>2</sup> = 2*<span style="font-weight: bold; font-style: italic;">K</span>/<span style="font-weight: bold; font-style: italic;">S</span><sup>3</sup> * exp(<span style="font-weight: bold; font-style: italic;">a</span>*<span style="font-weight: bold; font-style: italic;">t</span>), the equation becomes:<br /><br /><span style="font-weight: bold; font-style: italic;">a</span>*<span style="font-weight: bold; font-style: italic;">K</span>/<span style="font-weight: bold; font-style: italic;">S</span>*exp(<span style="font-weight: bold; font-style: italic;">a</span>*<span style="font-weight: bold; font-style: italic;">t</span>) + 1/2(<span style="font-weight: bold; font-style: italic;">σS</span>)<sup>2</sup>*2*<span style="font-weight: bold; font-style: italic;">K</span>/<span style="font-weight: bold; font-style: italic;">S</span><sup>3</sup>*exp(<span style="font-weight: bold; font-style: italic;">a</span>*<span style="font-weight: bold; font-style: italic;">t</span>) - <span style="font-weight: bold; font-style: italic;">rS</span>*<span style="font-weight: bold; font-style: italic;">K</span>/<span style="font-weight: bold; font-style: italic;">S</span><sup>2</sup>*exp(<span style="font-weight: bold; font-style: italic;">a</span>*<span style="font-weight: bold; font-style: italic;">t</span>) - <span style="font-weight: bold; font-style: italic;">r</span>*<span style="font-weight: bold; font-style: italic;">K</span>/<span style="font-weight: bold; font-style: italic;">S</span>*exp(<span style="font-weight: bold; font-style: italic;">a</span>*<span style="font-weight: bold; font-style: italic;">t</span>) = 0<br /><br />Cancelling out the common factor <span style="font-weight: bold; font-style: italic;">K</span>/<span style="font-weight: bold; font-style: italic;">S</span>*exp(<span style="font-weight: bold; font-style: italic;">a</span>*<span style="font-weight: bold; font-style: italic;">t</span>):<br /><br /><span style="font-weight: bold; font-style: italic;">a</span> + 1/2(<span style="font-weight: bold; font-style: italic;">σ</span>)<sup>2</sup>*2 - <span style="font-weight: bold; font-style: italic;">r</span> - <span style="font-weight: bold; font-style: italic;">r</span> = 0<br /><br />which gets us to:<br /><br /><span style="font-weight: bold; font-style: italic;">a</span> = 2*<span style="font-weight: bold; font-style: italic;">r</span> - <span style="font-weight: bold; font-style: italic;">σ</span><sup>2</sup><br /><br />The term <span style="font-weight: bold; font-style: italic;">r</span> is proportional to interest, and <span style="font-weight: bold; font-style: italic;">σ</span> is the volatility, or standard deviation, of the stock price.<br /><br />So this simple expression that I just cooked up will obey Black-Scholes as long as we choose the constant <span style="font-weight: bold; font-style: italic;">a</span> to correspond to the interest and volatility as shown above, and we get:<br /><span style="font-weight: bold; font-style: italic;">V</span>(<span style="font-weight: bold; font-style: italic;">S</span>,<span style="font-style: italic; font-weight: bold;">t</span>) = <span style="font-weight: bold; font-style: italic;">K</span>/<span style="font-weight: bold; font-style: italic;">S</span> * exp((2*<span style="font-weight: bold; font-style: italic;">r</span> - <span style="font-weight: bold; font-style: italic;">σ</span><sup>2</sup>)*<span style="font-weight: bold; font-style: italic;">t</span>)<br /><br />Note that if the volatility (i.e. diffusion) stays high relative to interest, the exponential will damp out with time. If interest (i.e.
drift) goes higher than volatility, the exponential will accelerate, creating a huge amount of paper gains.<br /><br /><span style="font-style: italic;">At this point someone will argue that this solution does not reflect reality. I beg to differ. When you make your bed of mathematical box-springs, you have to lie in it. This solution to Black-Scholes is perfectly fine as it gives a steady-state picture of the partial differential equation. The diffusional and drift components cancel with the right mix of production vs. destruction in derivative wealth. If you don't like it, then come up with something different than that specific B-S equation.</span><br /><br />I have a feeling that all the seeming complexity of financial quantitative analysis with its Ito calculus and Wiener processes acts as a shiny facade to a simple reality. The math exists to model the inverse relationship of stocks to derivatives. If this didn't happen -- and the lords of high finance absolutely require this relationship to make money -- the math as formulated would vanish from their toolbox. In other words, the math only exists to justify what the financial operatives want to see happen. Everyone appears to implicitly buy this mathematical artifice hook, line, and sinker.<br /><br />Quantitative analysis and the "quants" who work it have created a fantasy land, where they do not want you to know how easily their quaint ornate universe reduces to a simple function. If they admitted to the charade, the mystery would all disappear and they would no longer have jobs.<br /><br />Economics and finance do not constitute a science. In science you may need to use partial differential equations.
For example, the Fokker-Planck equation shows up quite often -- which, incidentally, the Black-Scholes equation resembles, a similarity that the quant proponents of B-S certainly like to play up -- but it typically applies to <a href="http://mobjectivist.blogspot.com/2010/05/fokker-planck-for-disordered-systems.html">real</a>, <a href="http://mobjectivist.blogspot.com/2010/05/word-on-dispersion.html">physical</a> systems, where you use it to try to understand nature, not to model some artificial game-like behavior.<br /><!-- I will give anyone a billion dollars if they can show that my expression is not a valid solution to the Black-Scholes equation. I also demand a Nobel prize for pointing out that the emperor has no clothes. --><br />I could edit my solution into the Wikipedia page for <a href="http://en.wikipedia.org/wiki/Black%E2%80%93Scholes">Black-Scholes</a> and I will bet that someone would immediately remove it. I harbor no illusions. The financial industry depends on the absence of real knowledge to achieve its objectives.<br /><br />That explains why economics and finance do not classify as sciences; absolute truth does not matter to economists and financiers, only the art of deconstructing profit and the craft of phantom wealth creation does.<br /><br /><h5 style="font-family: courier new;">Please address editorial comments to: Postings, Main Incinerator, Department of Sanitation, North River Piers, New York, N.Y. 10019. </h5><br /><hr width="75%"><br /><span style="font-size:130%;"><span style="font-weight: bold;">Lake Size Distributions</span></span> (posted 2010-10-02 by @whut)<br /><br />Our environment shows great diversity in the size and abundance of natural structures. Since we extract oil from our environment, it stands to reason that many of the same mechanisms leading to oil formation could also reveal themselves in more familiar natural phenomena.
Take the size distribution of lakes as an example.<br /><br />Freshwater lakes accumulate their volume in a manner analogous to the way that an underground reservoir accumulates oil. Over geologic time, water drifts into a basin at various rates and over a range in collecting regions. In the context of oil reservoirs, I have talked about this <a href="http://mobjectivist.blogspot.com/2008/10/estimating-urr-from-dispersive-field.html">behavior before</a> and the Maximum Entropy prediction of the size distribution leads to the following expression:<br /><br />P(Size) = 1/(1+Median/Size)<br /><br />Surveys of lake size show the same reciprocal power law dependence, with the exponent usually appearing arbitrarily close to one. In Figure 1 below, the data plotted on a ranked plot clearly shows this dependence over several orders of magnitude.<br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJTLs2htYBjwKHKexy5TwqYzNqqetukAXuuwU2Aw6XA34Z4BhxzzT35WWgGxiB6SiMvJ3X_fx_Gylw6H62qg_MCRc2ifd3Ks4wB0r5VJS-EwXn_OiMlYbdv21vVoM3Z5KNFLcK/s1600/northern_quebec_lakes.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 363px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJTLs2htYBjwKHKexy5TwqYzNqqetukAXuuwU2Aw6XA34Z4BhxzzT35WWgGxiB6SiMvJ3X_fx_Gylw6H62qg_MCRc2ifd3Ks4wB0r5VJS-EwXn_OiMlYbdv21vVoM3Z5KNFLcK/s400/northern_quebec_lakes.gif" alt="" id="BLOGGER_PHOTO_ID_5523503081675068546" border="0" /></a><br /><div style="text-align: center;"><span style="font-weight: bold;">Figure 1: </span>Northern Quebec lakes [1]<br /><br /></div>More revealing, in Figure 2 we can observe the bend in the curve that limits the number of small lakes in exact accordance to the equation. The agreement with such a simple model suggests that a universal behavior links the statistics between environmental phenomena as seemingly distinct as those of lakes and oil reservoirs. 
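The expression above doubles as a quick computational check (a minimal sketch; the median size is the only parameter, in whatever units the survey uses):

```python
def p_size(size, median):
    """MaxEnt cumulative fraction of lakes (or reservoirs) smaller
    than `size`: P(Size) = 1/(1 + Median/Size)."""
    return 1.0 / (1.0 + median / size)

# the ranked (survivor) count falls off as ~Median/Size for large
# sizes, which is the reciprocal power law with exponent one seen
# in Figure 1; the bend at small sizes comes from the same formula
survivor = lambda size, median: 1.0 - p_size(size, median)
```

At the median size exactly half the lakes are smaller, and a decade increase in size cuts the survivor count by roughly a factor of ten in the power-law regime.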
<br /><br /><div style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivAELvu9na2zkcKIwKEFW37dtTklJk-tNoI9bXuE2mlQjxmFgSY9C0aU8cr3j0ECVjJaNQuDethRY-l1K6E-FGXLAw4GavOGCwdYsa1di2LORSfv56A5SXrmgF-adWkO_9Aack/s1600/amazon-lake-size-model.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 315px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivAELvu9na2zkcKIwKEFW37dtTklJk-tNoI9bXuE2mlQjxmFgSY9C0aU8cr3j0ECVjJaNQuDethRY-l1K6E-FGXLAw4GavOGCwdYsa1di2LORSfv56A5SXrmgF-adWkO_9Aack/s400/amazon-lake-size-model.gif" alt="" id="BLOGGER_PHOTO_ID_5523503180110900178" border="0" /></a><br /><span style="font-weight: bold;">Figure 2: </span>Amazon lakes [2]<br /></div><br />This provides other intuitive clues as to how to think about reservoir sizing. Consider the fact that very few freshwater lakes reach gigantic proportions, the Great Lakes serving as a prime example. Similarly, the rare occurrence of “super-giant” reservoirs follows from the same principles. We clearly won’t find any new huge freshwater lakes, and the future occurrence of super-giant oil reservoirs remains very doubtful just from the statistics of oil reservoirs found so far.
Finding substantial numbers of new super-giant reservoirs would force deviations from the size distribution plot, which makes that outcome very unlikely.<br /><br /><span style="font-weight: bold;">References</span><br />[1] <a href="http://www.eorc.jaxa.jp/ALOS/en/kyoto/phase_1/KC-Phase1-report_Telmer.pdf">K&C Science Report – Phase 1 Global Lake Census</a><br /><br />[2] <a href="http://cires.colorado.edu/limnology/pubs/pdfs/Pub116.pdf">Estimation of the fractal dimension of terrain from Lake Size Distributions</a><br /><br /><hr width="75%"><br /><span style="font-size:130%;"><span style="font-weight: bold;">Hydrogeology for Dummies</span></span> (posted 2010-09-06 by @whut)<br /><br />A running theme of this blog involves the reduction of seemingly complex behaviors into simple mathematical formulations. It remains a bit of a mystery to me why in many situations no one has either (a) done this work on their own or (b) uncovered the work of someone else who did the simplifying analysis years ago.<br /><br />The majority of scientists practicing mainstream research have furthered the cause by following the lead of others who go down blind alleys and over-complicate the analysis. I suspect that a few complicate matters intentionally, as it demonstrates to other scientists their intellectual prowess. In certain cases, creating a private world of intricate analysis acts as a kind of moat behind which they can fortify their specialty discipline.<br /><br />Of course, this doesn't happen universally. Certainly we run across many scientific and engineering subdisciplines that have gone through years of scrubbing. In these cases, the most salient and simple analyses have emerged and stood the test of time. They often share the same traits of elegance and crystalline transparency, so that we can use their patterns to understand the world without a lot of extra effort.
To me, that seems a reasonable goal to strive for.<br /><br />In this post, I will go through the derivation of what I consider a very overlooked and simple argument having to do with the transport of materials in porous media -- much as what you would find in tracing a contaminant through a groundwater basin. Or what may happen if you frac for natural gas and open up new pathways to a drinking water aquifer. Or how oil will migrate to a reservoir over time, feeding the production output of a stripper well for years. Or what happens if you spill oil in a waterway.<br /><br />Unfortunately, when you pose this kind of problem to a research geologist or hydrologist, you will have to prepare for an onslaught of ornate misdirection. They will either derive some hideous numerical model or possibly run a piece of commercial software. Apparently, they will never resort to plain logic and elementary first-principles considerations.<br /><br /><span style="font-weight: bold;">The Problem</span><br /><br />1. Consider a contaminant that enters an aquifer in a single dose<br />2. Predict how long it will take to pass by a downstream location<br />3. How do you solve this problem?<br /><br />A large scale experiment typically looks like this scenario:<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.gue.com/files/page_images/conservation/groundwater_tracing/fig3_sm.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 300px; height: 448px;" src="http://www.gue.com/files/page_images/conservation/groundwater_tracing/fig3_sm.jpg" alt="" border="0" /></a>from <a href="http://www.gue.com/?q=en/node/798">Groundwater Tracing in the Woodville Karst Plain</a></div><div style="text-align: justify;"><br /></div>And you get a result that looks like the following figure.
Intuitively, one would expect that the concentrated dose will disperse as it travels downstream and that the original concentration will spread out in time. The red curve that goes through the data gives you a feel for what I will derive via a simple model.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtVd45Nbhy5DK0_DfGdRCnyrTQyDIcgavEB5fpcw180M_iH_gojOqT97oTHrDWU1I1JkRd4omBBWx7Rw34Pan6fgdI3WdXPHucMGjGS6sZCiQRabOttHl4w4eGMVjDywYH9Sp5/s1600/stream-tracer_htm_m18f853af.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 251px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtVd45Nbhy5DK0_DfGdRCnyrTQyDIcgavEB5fpcw180M_iH_gojOqT97oTHrDWU1I1JkRd4omBBWx7Rw34Pan6fgdI3WdXPHucMGjGS6sZCiQRabOttHl4w4eGMVjDywYH9Sp5/s400/stream-tracer_htm_m18f853af.jpg" alt="" id="BLOGGER_PHOTO_ID_5509539113463994082" border="0" /></a>As a main premise, I assume that disorder plays a big role in providing a variety of pathways from source to sink. One can imagine that some paths might occur on the main waterway, providing a maximum speed or path of least resistance. 
Other paths may follow obstructions or diversions which will either slow down or speed up the flow from the main path.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://docs.google.com/File?id=dctrrzxh_40hgnhffd3_b"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 300px; height: 310px;" src="http://docs.google.com/File?id=dctrrzxh_40hgnhffd3_b" alt="" border="0" /></a>The main path has a mean velocity <span style="font-weight: bold; font-style: italic;">v</span><sub>0</sub> and the other paths have probabilities that range below this, with some mean deviation <span style="font-weight: bold; font-style: italic;">v<sub>m</sub></span> from <span style="font-weight: bold; font-style: italic;">v</span><sub>0</sub>. A distribution that <span style="font-weight: bold;">maximizes entropy </span>while holding to these two minimal constraints looks like the following graph.<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5UkypXQPa_TwjPTL8AWA_-XfNpBDLVzAlGytK6aAvU3vR3w8ixTP-lGbVoYw14saNYOGOQBnF5ZcTAp0lFqBCTimAUg86KtKYRy564BfVkw7_n51GPWCAgoh8uf721hXytd04/s1600/maxent-velocity-peak.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 300px; height: 190px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5UkypXQPa_TwjPTL8AWA_-XfNpBDLVzAlGytK6aAvU3vR3w8ixTP-lGbVoYw14saNYOGOQBnF5ZcTAp0lFqBCTimAUg86KtKYRy564BfVkw7_n51GPWCAgoh8uf721hXytd04/s400/maxent-velocity-peak.gif" alt="" id="BLOGGER_PHOTO_ID_5514377389431560530" border="0" /></a> <span style="font-weight: bold;">Figure 1</span>: MaxEnt velocity distribution for absolute mean deviation<br /><br /></div> This illustrates simple dispersion. 
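The velocity distribution in Figure 1 is straightforward to draw from (a minimal sketch; the two-sided exponential spread is built from an exponential deviation with a random sign, and the rare negative-velocity tail stays below a fraction of a percent for vm well under v0):

```python
import random

def sample_velocities(n=10000, v0=1.0, vm=0.18, seed=3):
    """Draw path velocities from the MaxEnt (Laplace) distribution:
    mean v0 and mean absolute deviation vm, i.e. a two-sided
    exponential spread about the main-path velocity."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        dev = rng.expovariate(1.0 / vm)  # |v - v0| is exponential
        out.append(v0 + (dev if rng.random() < 0.5 else -dev))
    return out

vs = sample_velocities()
mean_abs_dev = sum(abs(v - 1.0) for v in vs) / len(vs)  # close to vm
```

The recovered mean absolute deviation lands on vm, which is the only dispersion constraint the model needs.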
For this post we won't even consider diffusion, which although important may in fact act as only a second-order effect depending on the speed of the main flow.<br /><br />The calculation of downstream concentration, <span style="font-style: italic; font-weight: bold;">n</span>(<span style="font-style: italic; font-weight: bold;">x</span>,<span style="font-style: italic; font-weight: bold;">t</span>), drops out of the Fokker-Planck equation if we ignore diffusion. Note the delta function, <span style="font-weight: bold; font-style: italic;">δ</span>(<span style="font-weight: bold; font-style: italic;">x</span><span style="font-style: italic;">-</span><span style="font-weight: bold; font-style: italic;">vt</span>), which describes a traveling pulse for each velocity component.<br /><span style="font-style: italic; font-weight: bold;"></span><blockquote><span style="font-style: italic; font-weight: bold;">n</span>(<span style="font-style: italic; font-weight: bold;">x</span>,<span style="font-style: italic; font-weight: bold;">t</span>) = ∫ <span style="font-weight: bold; font-style: italic;">p</span>(<span style="font-weight: bold; font-style: italic;">v</span>) <span style="font-weight: bold; font-style: italic;">δ</span>(<span style="font-weight: bold; font-style: italic;">x</span><span style="font-style: italic;">-</span><span style="font-weight: bold; font-style: italic;">vt</span>) <span style="font-style: italic;">d</span><span style="font-weight: bold; font-style: italic;">v</span></blockquote>Next we apply the Maximum Entropy Principle to generate a velocity distribution as shown in the <span style="font-weight: bold;">Figure 1</span>:<br /><blockquote><span style="font-weight: bold; font-style: italic;">p</span>(<span style="font-weight: bold; font-style: italic;">v</span>) = 1/<span style="font-weight: bold; font-style: italic;">v</span><sub style="font-weight: bold; font-style: italic;">m</sub> exp(-<span style="font-weight: bold; font-style: 
|v-v">
italic;">|v-v<sub>0</sub>|</span>/<span style="font-weight: bold; font-style: italic;">v</span><sub style="font-weight: bold;">m</sub>)</blockquote>No other distribution has a higher entropy given that mean and an absolute deviation from the mean, so it ranks as the least biased estimator for that set of constraints. (Note that this does not describe the <span style="font-style: italic;">normal</span> or Gaussian distribution, as that requires a second-moment, i.e. variance, constraint. It turns out that the mean deviation distribution, also known as the Laplace, is actually a smeared Gaussian where we have MaxEnt uncertainty in σ-squared, so the Laplace entropy is higher than the Gaussian entropy.)<br /><br />We can trivially solve the integral to generate a concentration at some downstream location <span style="font-weight: bold; font-style: italic;">x</span> (forget about adding extra dimensions as a one-dimensional result should suffice).<br /><blockquote><span style="font-style: italic; font-weight: bold;">n</span>(<span style="font-style: italic; font-weight: bold;">x</span>,<span style="font-style: italic; font-weight: bold;">t</span>) = 1/(<span style="font-weight: bold; font-style: italic;">v</span><sub><span style="font-style: italic; font-weight: bold;">m</span></sub><span style="font-weight: bold; font-style: italic;">t</span>) exp(-<span style="font-weight: bold; font-style: italic;">|x</span>/(<span style="font-weight: bold; font-style: italic;">v</span><sub><span style="font-weight: bold; font-style: italic;">m</span></sub><span style="font-weight: bold; font-style: italic;">t</span>)-<span style="font-weight: bold; font-style: italic;">v<sub>0</sub></span>/<span style="font-weight: bold; font-style: italic;">v</span><sub><span style="font-weight: bold; font-style: italic;">m</span></sub>|)</blockquote>Let's see how this works in practice.<br /><br />I pulled data from a pair of papers from 2008, <span style="font-weight: bold; font-style: 
&quot;Non-Fickian">
italic;">"Non-Fickian dispersion in porous media"</span>, T. Le Borgne, P. Gouze, et al. The scientists created a carefully controlled experiment, which relied on a customized apparatus for making precise measurements of the contaminant, a fluorescent dye called <span style="font-style: italic;">uranine</span>. The value of this particular experiment lies in the large dynamic range of the resultant data. The concentration runs over four orders of magnitude and the time scale over two. Their own model, although generating a good fit to the data, needed a numerical calculation to solve, violating my assertion that we can model via simpler mechanisms.<br /><br />The following figure allows for the wide dynamic range by plotting the concentration (also known as a <span style="font-style: italic;">breakthrough curve</span>) on a log-log scale. The red markers (<span style="color: rgb(255, 0, 0);">◊</span>) fit the Maximum Entropy dispersion model, <span style="font-style: italic; font-weight: bold;">n</span>(<span style="font-style: italic; font-weight: bold;">x</span>,<span style="font-style: italic; font-weight: bold;">t</span>), for a fixed value of <span style="font-weight: bold; font-style: italic;">x</span> and a value of <span style="font-weight: bold; font-style: italic;">v</span><sub style="font-weight: bold;">m</sub>/<span style="font-weight: bold; font-style: italic;">v</span><sub>0</sub> = 0.18. By inverting the concentration we can get the probability distribution of velocities in the bottom figure; on a semi-log plot a symmetric two-sided exponential looks like a perfect isosceles triangle. Based on the outstanding fit and the symmetric distribution I find it blatantly obvious that entropic mechanisms generate the dispersion observed.
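&quot;Non-Fickian">
The fit just described is cheap to reproduce from the closed form (a minimal sketch; the station position and main-path velocity are normalized to one, with vm/v0 = 0.18 as quoted):

```python
import math

def breakthrough(t, x=1.0, v0=1.0, ratio=0.18):
    """MaxEnt dispersion breakthrough curve at a fixed station x:
    n(x, t) = 1/(vm*t) * exp(-|x/(vm*t) - v0/vm|)."""
    vm = ratio * v0
    return 1.0 / (vm * t) * math.exp(-abs(x / (vm * t) - v0 / vm))

# scanning a time grid shows the concentration peaking at the
# mean arrival time t = x/v0, with a slow power-law decay after
times = [i / 100.0 for i in range(1, 501)]
peak_time = max(times, key=breakthrough)
```

Plotting this on log-log axes against the tracer data reproduces the skewed breakthrough curve, with the 1/t tail discussed below.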
You won't get this parsimonious a fit from such a simple model -- with essentially a single parameter <span style="font-weight: bold; font-style: italic;">v</span><sub style="font-weight: bold;">m</sub>/<span style="font-weight: bold; font-style: italic;">v</span><sub>0</sub> -- unless it has some real merit.<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://a.imageshack.us/img818/5708/uraninetracerplots.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 523px; height: 612px;" src="http://a.imageshack.us/img818/5708/uraninetracerplots.gif" alt="" border="0" /></a><span style="font-weight: bold;">Figure 2</span>: Breakthrough curve (top) and<br />measured velocity distribution (bottom)<br />for fluorescent dye tracer experiment.<br /><br /></div>I would suggest that any further modeling of these kinds of porous structures makes little sense since we have essentially proved that the multitude of pathways maximizes entropy and thus maximizes the disorder of the system. In other words, you could not model a more complex system given those constraints if you tried. Nature will always win out with entropy in its back pocket.<br /><br />The simplicity of the model also points out how readily fat-tail effects emerge from entropic disorder. The power law drop-off obeys a 1/time behavior that certainly has consequences in terms of how long a contaminant will remain in a groundwater basin. 
Velocity dispersion with a mean MaxEnt constraint will always lead to a power-law drop-off in time (see <a href="http://mobjectivist.blogspot.com/2010/06/mentaculus.html">more here</a>).<br /><br />See also these posts:<br /><ol><li><span style="font-size:85%;"><a href="http://mobjectivist.blogspot.com/2009/06/dispersive-transport.html">http://mobjectivist.blogspot.com/2009/06/dispersive-transport.html</a></span></li><li><span style="font-size:85%;"><a href="http://mobjectivist.blogspot.com/2010/05/characterizing-mobility-in-disordered.html">http://mobjectivist.blogspot.com/2010/05/characterizing-mobility-in-disordered.html</a><br /> </span></li><li><span style="font-size:85%;"><a href="http://mobjectivist.blogspot.com/2010/05/fokker-planck-for-disordered-systems.html">http://mobjectivist.blogspot.com/2010/05/fokker-planck-for-disordered-systems.html</a></span></li></ol>The hydrologists and geologists who ignore entropy in favor of some other fancy model do so based on their own stubbornness or ignorance. I have observed that the practice of making things too complicated runs rampant among geologists, and it really strikes me as kind of sad. We have hydrogeologist hacks like <a href="http://mobjectivist.blogspot.com/2010/05/worst-book-on-oil-crisis-written-yet.html">Steven Gorelick</a> writing cornucopian books diminishing the significance of peak oil, when they can't even do the science of their own discipline correctly.@whuthttp://www.blogger.com/profile/18297101284358849575noreply@blogger.com6tag:blogger.com,1999:blog-7002040.post-22789076698410121112010-08-20T06:49:00.000-07:002010-08-20T07:06:05.505-07:00Tasseography<a href="http://europe.theoildrum.com/node/6863">Oil Watch Monthly</a><br /><br />Because of the magnified nature of the production scale, I find it interesting to place the data on the real scale, which shows the zeros and the full temporal range. 
See the short black segment in the following figure, which signifies the range reported on TOD.<br /><div class="content"><p> <img src="http://a.imageshack.us/img228/4163/oilwatchmonthly.gif" /></p>I don't really understand this infatuation with what I consider noise riding on top of the more important overall scaled profile. Readers must feel a need to see this magnified view, a need which I don't quite grasp.<br /></div><br />Is it because people have become accustomed to using the information for futures trading or anticipating the stock market? I presume that every little glitch provides a chance to make some money.<br /><br />Or do we suffer from climate change envy, where temperature trends get studied to death? That works in a different context because temperatures normally occupy a narrow range and the important signal can get buried in the measurement noise. <br /><br />Or do people want to anticipate seeing that sudden, precipitous drop that will signal us going over the cliff?<br /><br />More likely the answer is that we continue to plot the magnified view because we can and it gives us a strawman to argue back and forth over. The term tasseography describes this behavior.<br /><br />Noise can tell us something, but to first order it really only tells us what we already know. 
The fewer the number of independent measurements or actors in the market, the greater the noise and fluctuations.@whuthttp://www.blogger.com/profile/18297101284358849575noreply@blogger.com2tag:blogger.com,1999:blog-7002040.post-40815401968770466472010-07-01T20:35:00.000-07:002010-07-02T06:21:53.565-07:00GOM Maximum Production Rate and MacondoI did some analysis based on Berman's post from a few days ago:<br />(<a href="http://www.theoildrum.com/node/6644">Estimated Oil Flow Rates From the BP Mississippi Canyon Block 252 “Macondo” Well</a>)<br /><br />I think he messed up the statistics because he used a truncated data set from the MMS and a log-normal distribution.<br /><br />I wasn't sure exactly how he got his data, but I essentially had to screen scrape the data off of about 18 PDF files giving the Maximum Production Rate (MPR) going back to 1975: http://www.gomr.mms.gov/homepg/pubinfo/repcat/product/MPR.html<br /><br />I plotted the results histogram against a <a href="http://mobjectivist.blogspot.com/2010/06/gom-reservoir-size-distributions.html">model of dispersive aggregation for reservoir sizes</a>. The maximum rate is then a simple proportional draw-down from the reservoir size. Bigger reservoirs have a higher rate and smaller reservoirs have a smaller rate -- nothing to argue about here as it is a pretty safe approximation. The way you read this histogram is that the flat regions have the highest frequency.<br /><img src="http://img413.imageshack.us/img413/2488/gommms.gif" /><br /><br />The integral underneath the two curves is equal, about 16.5 million barrels per day at peak. Don't confuse this with any rate attainable from the GOM; it is high because it sums up the peaks from a span of years. The median value is 200 barrels per day.<br /><br />The interesting point in the curve is that the model predicts a higher peak rate for the largest reservoirs; the curve goes off the graph to above 400,000 barrels per day. 
Now, I would think that the operators would never try to have that throughput from a single well. So what do they do? Of course they split it into several wells to extract the maximum amount from that reservoir and essentially throttle that from an individual well.<br /><br />Since the total amount is conserved between the two curves, the bulge that you see in the data is the extra wells drilled to make up for the excess. My model is totally based on the principle of Maximum Entropy applied to reservoir sizing, and the reordering of the rank histogram is caused by artificial constraints set by human intervention. Notice that all the small reservoirs effectively require no throttling.<br /><br />The point of this comment is that working wells are likely throttled, but the Macondo flow could conceivably be higher than the maximum of 50,000 barrels per day that Berman suggested. The operators have no way of throttling it until the relief wells are put in place. Of course this kind of throughput is very rare, as at the most a couple of dozen out of 10,000 reservoirs will get this big and generate this potential, but this is the way that nature operates: a big fat-tail effect.@whuthttp://www.blogger.com/profile/18297101284358849575noreply@blogger.com4tag:blogger.com,1999:blog-7002040.post-34975468852725588482010-06-19T18:11:00.000-07:002010-06-19T18:40:02.708-07:00Petroleum EngineeringWith all the discussion on the Gulf Oil disaster going on, lots of petroleum engineers and others from the oil industry have pitched in with their opinions. In the process, we can see exactly what they think of their profession.<br /><br />One commenter, apparently an authority on reservoir engineering, had this to say <a href="http://www.theoildrum.com/node/6597#comment-655123">about Peak Oil</a>:<blockquote>We understand how our business works, certainly. Guys like us, (those IN THE KNOW) have been declaring the end of oil since at least 1886. In Pittsburgh to be specific. 
Can't say we didn't give the rest of you noobs plenty of warning.</blockquote>So let me understand this statement. Oil industry types apparently have known since day one that the end of oil would occur. I wonder why no one thought to just ask them? How did we miss that one?<br /><br />This same fellow has huge problems with my analysis, because he thinks that what I do amounts to "curve fitting".<br /><blockquote><a href="http://www.theoildrum.com/node/6597#comment-654990">I mean seriously, who else would confuse curve fitting with knowledge?</a></blockquote>In truth, most of the forecasters who point to continually increasing oil production well into the future base their projections on very little real knowledge. They actually practice curve fitting, i.e. fitting a curve to the production level that we need, because they have no other justification for a realistic outlook.<br /><br />Bayesian analysis works by using past knowledge to predict future outcomes. We have so much knowledge about previous discoveries, reserve growth mechanisms, and extraction rates that our ability to predict should work very effectively ... if we would just start universally using this kind of approach. The other benefit is that the analysis keeps on getting better and better with time due to the Bayesian updating process. The mathematician Laplace first applied this powerful mode of probabilistic reasoning in the late 1700s to real problems, but we still have holdouts in various disciplines. To top it off, if you have a real model underneath the knowledge, it makes the forecasting that much more powerful.<br /><blockquote>Let them get through diffy-q, I suppose the only other gang besides engineers forced through that one are the more mathematically inclined....and they are mostly jealous because their theoretical skills don't translate into income very well. 
</blockquote>It was common knowledge in college that students who went into geology, civil, and petroleum engineering didn't want to get stuck in a desk job. Lots of them could not imagine being sedentary for 8 hours a day.@whuthttp://www.blogger.com/profile/18297101284358849575noreply@blogger.com0tag:blogger.com,1999:blog-7002040.post-2405078152574804362010-06-16T16:50:00.000-07:002010-06-16T19:04:22.212-07:00Hubbert peak in Five Easy PiecesBased on the increase in spill rate from the leaking Gulf of Mexico oil well, HO at TheOilDrum.com suggested a <a href="http://www.theoildrum.com/node/6611">potential explanation</a>. His post essentially argued that sand particles, acting as a strong abrasive driven along by the already high-velocity stream of escaping oil, lead to increased channeling and thus an even faster leak rate.<br /><br />HO described a process known as CHOPS (Cold Heavy Oil Production with Sand), which can enlarge a well's streaming throughput by promoting the formation of heavily eroded channels. The TOD post provided the following picture of the possible outcome of the behavior.<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.theoildrum.com/files/2%20BC%20CHOPS%20production.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 432px; height: 293px;" src="http://www.theoildrum.com/files/2%20BC%20CHOPS%20production.jpg" alt="" border="0" /></a>Note that the lower curve shows the typical output from a throttled flow. Above that curve, the modulated line shows the results of an accelerated extraction -- note that a peak actually appears, which pinpoints the maximum flow rate. In terms of the oil spill, we don't want this behavior because it gives us less time to fix or relieve the problem well. Yet, ordinarily we want this same behavior -- that of fast extraction -- in practical situations because we want and need the oil right now! 
(so that oil companies can make money, of course)<br /><br />This leads me to formulate the following very simple but physically correct model of Hubbert's Peak. You won't find this anywhere else, because this derivation does not jibe with how geologists think about oil extraction. They get many of the pieces but they never put them all together.<br /><br />I will offer up a derivation for this behavior leading to a Hubbert Peak in 5 easy pieces.<br /><br /><span style="font-weight: bold;">Piece 1. </span>The standard assumption of draw-down from a reservoir results in an exponential decline over time. You can consider that the exponential shape results from a law of diminishing returns, in that an amount proportional to the remainder draws down per unit time. Or you can say that a maximum entropy range of extraction rates gets applied to the volume. A proportional extraction rate that we call <span style="font-weight: bold; font-style: italic;">R </span>defines the mean and <span style="font-weight: bold; font-style: italic;">U<sub>0</sub></span> is the reservoir size. 
<span style="font-weight: bold; font-style: italic;">U</span>(<span style="font-weight: bold; font-style: italic;">t</span>) gives us the remaining reserve.<br /><blockquote><span style="font-weight: bold; font-style: italic;">U</span>(<span style="font-weight: bold; font-style: italic;">t</span>) = <span style="font-weight: bold; font-style: italic;">U<sub>0</sub></span>*exp(-<span style="font-weight: bold; font-style: italic;">R</span>*<span style="font-weight: bold; font-style: italic;">t</span>)</blockquote><div style="text-align: center;"><img src="http://img197.imageshack.us/img197/4061/exponentialdecline.gif" /><br /></div><br /><span style="font-weight: bold;">Piece 2.</span> Next, we realize that we have uncertainty over the size of the reservoir; the <span style="font-weight: bold; font-style: italic;">U<sub>0</sub></span> we have defined actually only serves as an estimate of the size. This means we have an uncertainty over the rate of proportional extraction as well. This turns into a form of hyperbolic discounting and the cumulative draw-down actually looks like this.<br /><blockquote><span style="font-weight: bold; font-style: italic;">U</span>(<span style="font-weight: bold; font-style: italic;">t</span>) = <span style="font-weight: bold; font-style: italic;">U<sub>0</sub></span> / (1+<span style="font-weight: bold; font-style: italic;">R</span>*<span style="font-weight: bold; font-style: italic;">t</span>)</blockquote><br /><div style="text-align: center;"><img src="http://img88.imageshack.us/img88/3682/hyperbolicdecline.gif" /><br /></div>Note the <a href="http://mobjectivist.blogspot.com/2010/05/hyperbolic-decline-fat-tail-effect.html">fat-tail</a>.<br /><br /><span style="font-weight: bold;">Piece 3.</span> Next we assert that the constant but uncertain proportional extraction rate undergoes an acceleration starting from the original value, <span style="font-weight: bold; font-style: italic;">R</span>(<span style="font-weight: bold; font-style: 
italic;">t</span>) = <span style="font-weight: bold; font-style: italic;">R<sub>0</sub></span> + <span style="font-weight: bold; font-style: italic;">k</span>*<span style="font-weight: bold; font-style: italic;">t</span>. This acceleration equates to Newton's law, first-order with time. Then the instantaneous absolute rate of extraction from the remaining reservoir looks like:<br /><blockquote><span style="font-weight: bold; font-style: italic;">RateOfExtraction</span>(<span style="font-weight: bold; font-style: italic;">t</span>) = -d<span style="font-weight: bold; font-style: italic;">U</span>(t)/d<span style="font-weight: bold; font-style: italic;">t</span> = <span style="font-weight: bold; font-style: italic;">U</span><sub>0</sub>*(<span style="font-weight: bold; font-style: italic;">R<sub>0</sub></span> + <span style="font-weight: bold; font-style: italic;">k</span>*<span style="font-weight: bold; font-style: italic;">t</span>)/(1+<span style="font-weight: bold; font-style: italic;">R<sub>0</sub></span>*<span style="font-weight: bold; font-style: italic;">t</span>+<span style="font-weight: bold; font-style: italic;">k</span>*<span style="font-weight: bold; font-style: italic;">t</span><sup>2</sup>/2)<sup>2</sup><br /></blockquote>For <span style="font-weight: bold; font-style: italic;">R<sub>0</sub></span>=0.5 and <span style="font-weight: bold; font-style: italic;">k</span>=2, it results in this shape<br /><br /><div style="text-align: center;"><img src="http://img18.imageshack.us/img18/8584/acceleratedecline.gif" /><br /></div><br />This curve we can scale and overlay on top of the CHOPS curve to validate our thought process.<br /><br /><div style="text-align: center;"><img src="http://img507.imageshack.us/img507/6761/accelerateddecline.gif" /><br /></div><br /><span style="font-weight: bold;">Piece 4</span>. 
Over a larger set of reservoirs that experience a technical improvement over time, we can assume that the proportional extraction rate can accelerate even more strongly over time, <span style="font-weight: bold; font-style: italic;">R</span>(<span style="font-weight: bold; font-style: italic;">t</span>)=<span style="font-weight: bold; font-style: italic;">C</span>*exp(<span style="font-weight: bold; font-style: italic;">k</span>*<span style="font-style: italic; font-weight: bold;">t</span>). This gives us a Moore's law form of acceleration, doubling every set number of years. Then<br /><blockquote><span style="font-weight: bold; font-style: italic;">RateOfExtraction</span>(<span style="font-weight: bold;">t</span>) = -d<span style="font-weight: bold; font-style: italic;">U</span>(<span style="font-weight: bold; font-style: italic;">t</span>)/d<span style="font-weight: bold; font-style: italic;">t</span> = <span style="font-weight: bold; font-style: italic;">U<sub>0</sub></span> * <span style="font-weight: bold; font-style: italic;">R</span>(<span style="font-weight: bold; font-style: italic;">t</span>) / (1+<span>integral</span><span>(</span><span style="font-weight: bold; font-style: italic;">R</span>(<span style="font-weight: bold; font-style: italic;">t</span>)<span style="font-style: italic;"><span style="font-weight: bold;">d</span></span><span style="font-weight: bold; font-style: italic;">t</span><span>)</span>)<sup>2</sup><br /><br />= <span style="font-weight: bold; font-style: italic;">U<sub>0</sub></span>*<span style="font-weight: bold; font-style: italic;">C</span>*exp(<span style="font-weight: bold; font-style: italic;">k</span>*<span style="font-style: italic; font-weight: bold;">t</span>)/(1+<span style="font-weight: bold; font-style: italic;">C</span>/<span style="font-weight: bold; font-style: italic;">k</span>*(exp(<span style="font-weight: bold; font-style: italic;">k</span>*<span style="font-style: italic; font-weight: 
bold;">t</span>)-1))<sup>2</sup></blockquote>For a small starting rate, the acceleration further accentuates the subtle peak that we observe in piece 3, and it turns into a full-fledged symmetric peak as shown in the next figure:<br /><br /><div style="text-align: center;"><img src="http://img408.imageshack.us/img408/1599/hubbertpeak.gif" /><br /></div><br /><span style="font-weight: bold;">Piece 5</span>. Congratulations. <a href="http://www.imdb.com/title/tt0065724/quotes">You haven't broken any rules</a> and you have just derived the famed Hubbert Peak, the time derivative of the Logistic Sigmoid function.<br /><br /><br /><span style="font-style: italic;">Some Backstory</span><br />An alternate derivation exists for the corresponding <span style="font-style: italic;">discovery</span> peak, which I call <a href="http://mobjectivist.blogspot.com/2007/11/sometimes-i-get-bit-freaked-out-by.html">Dispersive Discovery</a>. There, the uncertainty involves how much volume gets explored and at what rate; otherwise the math turns out <a href="http://mobjectivist.blogspot.com/2010/06/oil-discovery-simulation-reality.html">exactly the same</a>. Both derivations result from an assumed finite constraint but uncertainty in both rates and subvolumes. The only problem with using the Hubbert peak derivation for extraction is that it presumes that each extraction rate started at the same time (globally this would be 1858). We know that this has not happened for global production, as extraction can only start after a discovery, and then some variable hold time. By using dispersive discovery, we get a larger spread in start years, and then <a href="http://mobjectivist.blogspot.com/2008/08/pipes-and-oil-shock-model.html">The Oil Shock model</a> generates the extraction curve. 
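The piece-4 closed form can be sanity-checked numerically. The sketch below uses arbitrary illustrative constants (U0, C, and k are my own free choices, not fitted to any field) and verifies two properties claimed above: the extraction-rate curve is symmetric about its peak, and the cumulative extraction tends to U0:

```python
import math

U0, C, k = 1.0, 0.001, 1.0   # illustrative constants only

def rate(t):
    # Piece 4: -dU/dt = U0*C*exp(k*t) / (1 + (C/k)*(exp(k*t) - 1))**2
    cum = (C / k) * (math.exp(k * t) - 1.0)   # integral of R(t) dt
    return U0 * C * math.exp(k * t) / (1.0 + cum) ** 2

# The peak sits at the logistic midpoint, where (C/k)*exp(k*t) = 1 - C/k
t_peak = math.log((1.0 - C / k) / (C / k)) / k

# rate(t_peak + d) equals rate(t_peak - d) for any offset d,
# and the area under the whole curve approaches U0
```

A small starting rate C pushes t_peak out and makes the rise and fall mirror each other exactly, which is the symmetric peak shown in the figure above.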
In general, if the discovery peak precedes the oil production peak by a number of years, I would use Dispersive Discovery, but if the two coincide, then extraction tracks discovery and it doesn't really matter how you interpret the rates. This explains why this particular derivation works well for more localized production areas that have seen significant technology changes. In contrast, the technology of discovery has undergone tremendous changes over the years, so that dispersive discovery works very well in terms of global modeling. This is actually not much of a caveat, as the <a href="http://mobjectivist.blogspot.com/2010/06/mentaculus.html">more ways that you can find the same result</a>, the more confidence you have that you have remained on the right track.<br /><br />The current derivation also points out the huge hole in the technique known as Hubbert Linearization (HL). As defined, HL derives from the observation that<br /><blockquote>-d<span style="font-weight: bold; font-style: italic;">U</span>(<span style="font-weight: bold; font-style: italic;">t</span>)/d<span style="font-weight: bold; font-style: italic;">t</span> ∝ <span style="font-weight: bold; font-style: italic;">U</span>(<span style="font-weight: bold; font-style: italic;">t</span>)*(<span style="font-weight: bold; font-style: italic;">U<sub>0</sub></span>-<span style="font-weight: bold; font-style: italic;">U</span>(<span style="font-weight: bold; font-style: italic;">t</span>))<br /></blockquote>Yet this only works for the one case where we can define <span style="font-weight: bold; font-style: italic;">R</span>(<span style="font-weight: bold; font-style: italic;">t</span>) as an exponential function, that of piece 4. The formula does not work for pieces 1, 2, or 3. 
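The hole can be demonstrated numerically: form the ratio of -dU/dt to U*(U0-U) and check whether it stays constant over time. A sketch with illustrative constants of my own choosing (not fitted values):

```python
import math

U0, R0, C, k = 1.0, 0.5, 0.001, 1.0   # illustrative constants only

def hl_ratio_piece4(t):
    # Piece 4 (exponential R): the ratio settles toward k/U0, so the
    # HL proportionality holds, at least approximately for a small
    # starting rate C
    cum = (C / k) * (math.exp(k * t) - 1.0)
    U = U0 / (1.0 + cum)
    dUdt = U0 * C * math.exp(k * t) / (1.0 + cum) ** 2
    return dUdt / (U * (U0 - U))

def hl_ratio_piece2(t):
    # Piece 2 (hyperbolic decline): the ratio works out to 1/(U0*t),
    # which never flattens, so HL fails here
    U = U0 / (1.0 + R0 * t)
    dUdt = U0 * R0 / (1.0 + R0 * t) ** 2
    return dUdt / (U * (U0 - U))
```

If HL held, either ratio would plot as a horizontal line; only the exponential-rate case comes close.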
Therefore, HL only serves as a curious mathematical identity for that one exponential case, which we know does not always occur.<br /><br />The actual "WebHub" Linearization takes the following form:<br /><blockquote>d<span style="font-weight: bold; font-style: italic;">U</span>(<span style="font-weight: bold; font-style: italic;">t</span>)/d<span style="font-weight: bold; font-style: italic;">t</span> = -<span style="font-weight: bold; font-style: italic;">U<sub>0</sub></span> * <span style="font-weight: bold; font-style: italic;">R</span>(<span style="font-weight: bold; font-style: italic;">t</span>) / (1+integral(<span style="font-weight: bold; font-style: italic;">R</span>(<span style="font-weight: bold; font-style: italic;">t</span>)d<span style="font-weight: bold; font-style: italic;">t</span>))<sup>2</sup></blockquote>This may not prove as handy as HL, but it has the benefit of correctness, and it <a href="http://mobjectivist.blogspot.com/2008/10/significant-no-hyperbole.html">works well for certain cases</a>.<br /><br />Like me, <a href="http://www.theoildrum.com/node/2389">Robert Rapier has railed against the inadequacy of HL</a> and this may take up the slack.@whuthttp://www.blogger.com/profile/18297101284358849575noreply@blogger.com0tag:blogger.com,1999:blog-7002040.post-28038882946237382102010-06-14T18:58:00.000-07:002010-06-14T19:19:19.367-07:00GOM Reservoir Size Distributions<div class="content"><p>Question:</p><p></p><blockquote><span class="byline"> <span class="username"><a href="http://www.theoildrum.com/user/bigmoose" title="View user profile.">BigMoose</a></span> on June 14, 2010 - 6:26pm </span> <span class="toplinks"> <a href="http://www.theoildrum.com/node/6573#comment-650477" title="Permalink">Permalink</a></span> <div class="content"><p>I have heard many unofficial estimates of the magnitude of oil in this formation... 2nd largest in America, 2nd largest in the world...</p> <p>Does anyone have a credible estimate on the formation reserves?</p> </div></blockquote>Some historical data is available from the MMS.<br /><a href="http://www.gomr.mms.gov/PDFs/2009/2009-064.pdf" title="http://www.gomr.mms.gov/PDFs/2009/2009-064.pdf" rel="nofollow">http://www.gomr.mms.gov/PDFs/2009/2009-064.pdf</a><br /><blockquote>On the basis of proved oil, for 8,014 proved undersaturated oil reservoirs, the median is 0.3 MMbbl, the mean is 1.8 MMbbl.</blockquote><p></p> <p>Peak Oil theory (<a href="http://www.energybulletin.net/node/51768">Entropic Dispersive Aggregation</a>) says the cumulative size distribution of reservoirs (ranked small to large) goes as P(Size)=1/(1+0.3/Size) if we assume a median of 0.3. It doesn't follow this exactly because infinite-sized reservoirs cannot exist. 
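The quoted distribution is easy to explore numerically. A minimal sketch (the function name is mine; the 0.3 MMbbl median is the value stated above):

```python
def p_cumulative(size_mmbbl, median=0.3):
    # Fraction of reservoirs at or below `size_mmbbl`:
    #   P(Size) = 1 / (1 + median/Size)
    return 1.0 / (1.0 + median / size_mmbbl)

# Half of all reservoirs sit at or below the 0.3 MMbbl median, yet
# roughly 1% still exceed 30 MMbbl -- the fat tail that only a finite
# physical cutoff trims
tail_fraction = 1.0 - p_cumulative(30.0)
```

Note that the mean of this distribution diverges without an upper cutoff, which is consistent with the remark that infinite-sized reservoirs cannot exist.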
</p> <p>If you want the raw data, it is here:<br /><a href="http://g/RE/Shared/EOGR%20Report/2008-034%20Estimated%20Oil%20and%20Gas%20Reserves/excel/97-RANGE.xls" title="///G:/RE/Shared/EOGR%20Report/2008-034%20Estimated%20Oil%20and%20Gas%20Reserves/excel/97-RANGE.xls" rel="nofollow">file:///G:/RE/Shared/EOGR%20Report/2008-034%20Estimated%20Oil%20and%20Ga...</a></p> <p>Sorry, that was a joke: the MMS puts the information on a public web server, yet the data link resolves to a local filesystem URL?<br /><a href="http://www.gomr.mms.gov/homepg/pubinfo/freeasci/geologic/estimated2006.html" title="http://www.gomr.mms.gov/homepg/pubinfo/freeasci/geologic/estimated2006.html" rel="nofollow">http://www.gomr.mms.gov/homepg/pubinfo/freeasci/geologic/estimated2006.html</a></p><p>I placed whatever data I could get into Google Docs, and placed theory next to it.<br /></p></div><iframe src="http://spreadsheets.google.com/pub?key=0AuycoDmNCe6wdDhyVE1xWUpSaWpFc25vek5MM1RhZHc&hl=en&single=true&gid=0&output=html&widget=true" frameborder="0" height="300" width="500"></iframe><br /><img src="http://spreadsheets.google.com/oimg?key=0AuycoDmNCe6wdDhyVE1xWUpSaWpFc25vek5MM1RhZHc&oid=1&zx=hc9kzgu69lea" /><br /><br />The MMS is apparently to be split into three agencies. Throughout its history, it failed to do any kind of useful depletion analysis in the GOM. Anyone can collect data; interpreting it is the challenging part.@whuthttp://www.blogger.com/profile/18297101284358849575noreply@blogger.com0tag:blogger.com,1999:blog-7002040.post-9658038175684753902010-06-12T15:00:00.000-07:002017-10-20T23:15:14.126-07:00The Mentaculus<a href="http://img408.imageshack.us/i/aseriousman.jpg/" target="_blank"><img align="right" border="0" src="http://img408.imageshack.us/img408/4549/aseriousman.th.jpg" /></a><br />
<ul>
<li><b><i>Update</i></b>: <a href="https://tallbloke.wordpress.com/2017/08/01/suns-core-rotates-four-times-faster-than-its-surface/comment-page-1/#comment-131201" target="_blank">Here is a real-life crazy Mentaculus man</a> ⇦ Paul Vaughan is Arthur.</li>
</ul>
<br />
I saw the Coen brothers movie <span style="font-style: italic; font-weight: bold;">"A Serious Man"</span> a few months ago. A definite period piece from the 1960s, it contrasted two scientists, one an academic and one a hapless amateur. The main protagonist, Larry Gopnik, a physics professor at what looks like a small liberal arts school in the Twin Cities (Macalester, Hamline maybe?), spends time teaching his students what look like elaborate mathematical derivations on a huge chalkboard. He has trouble dealing with some of his students on occasion:<br />
<b><a href="http://www.imdb.com/name/nm3176222/"></a></b><br />
<blockquote>
<b><a href="http://www.imdb.com/name/nm3176222/">Clive Park</a></b>: Yes, but this is not just. I was unaware to be examined on the mathematics.<br />
<b><a href="http://www.imdb.com/name/nm0836121/">Larry Gopnik</a></b>: Well, you can't do physics without mathematics, really, can you?<br />
<b><a href="http://www.imdb.com/name/nm3176222/">Clive Park</a></b>: If I receive failing grade I lose my scholarship, and feel shame. I understand the physics. I understand the dead cat.<br />
<b><a href="http://www.imdb.com/name/nm0836121/">Larry Gopnik</a></b>: You understand the dead cat? But... you... you can't really understand the physics without understanding the math. The math tells how it really works. That's the real thing; the stories I give you in class are just illustrative; they're like, fables, say, to help give you a picture. An imperfect model. I mean - even I don't understand the dead cat. The math is how it really works.</blockquote>
His academic colleagues want Professor Gopnik to publish articles at some point (with the implicit threat of not getting tenure). Gopnik's main problem lies in his rationality:<br />
<a href="http://explodingkinetoscope.blogspot.com/2009/10/secret-test-serious-man-2009.html"></a><br />
<blockquote>
<a href="http://explodingkinetoscope.blogspot.com/2009/10/secret-test-serious-man-2009.html">But his rigid framing of a cause-and-effect universe makes him indignant about lack of apparent cause ...</a></blockquote>
Gopnik's brother, the minor character of Uncle Arthur, takes the role of an almost-savant numerologist, busy at work on a treatise called <span style="font-weight: bold;">The Mentaculus</span>. Filled with <a href="http://etctatic.com/post/398651818/the-mentaculus">dense illustrations and symbology</a>, it apparently functions as a "probability map" that appears to spell out a Theory of Everything. It also apparently works to some extent:<br />
<a href="http://explodingkinetoscope.blogspot.com/2009/10/secret-test-serious-man-2009.html"></a><br />
<blockquote>
<a href="http://explodingkinetoscope.blogspot.com/2009/10/secret-test-serious-man-2009.html">We might guess that it makes no sense, but Arthur's "system" apparently "works" as intended, and he applies it to winning at back room card games.</a></blockquote>
Based on the events that eventually transpire, the theme of the movie essentially says that if you seek rationality, you will ultimately only land on random chance.<br />
<br />
I consider myself a "serious man" as well. But do I have a variation of The Mentaculus buried in the contents of this blog?<br />
<br />
I tried to make a probability map of all the applications and blog links that I have worked on relating to what I call <a href="http://www.energybulletin.net/node/51768">entropic dispersion</a> in the following table [full <a href="http://spreadsheets.google.com/pub?key=0AuycoDmNCe6wdGQ4MFpkVXJWeHVlVUtJYllXaHdLRFE&hl=en&output=html">HTML</a>]:<br />
<br />
<iframe frameborder="0" height="300" src="http://spreadsheets.google.com/pub?key=0AuycoDmNCe6wdGQ4MFpkVXJWeHVlVUtJYllXaHdLRFE&hl=en&output=html&widget=true" width="650"></iframe><br />
<br />
The math is how it really works. Perhaps I should publish. Yet <a href="http://www.theoildrum.com/node/6589#comment-647516">blogging is too much fun</a>. Perhaps I need to take a canoe trip.<br />
<br />
<hr />
<br />
Good reads describing The Mentaculus of probability and statistics<br />
<ol>
<li><a href="http://www.dam.brown.edu/people/mumford/Papers/OverviewPapers/DawningAgeStoch.pdf">"Dawning of the Age of Stochasticity"</a>, David Mumford<blockquote style="font-family: times new roman;">
<span style="font-size: 85%;">From its shady beginnings devising gambling strategies and counting corpses in medieval London, probability theory and statistical inference now emerge as better foundations for scientific models, especially those of the process of thinking and as essential ingredients of theoretical mathematics, even the foundations of mathematics itself.</span></blockquote>
</li>
<li><a href="http://omega.albany.edu:8008/JaynesBook.html">"Probability Theory: The Logic of Science"</a>, Edwin T. Jaynes<br /><blockquote>
<div class="quote">
<span style="font-size: 85%;">Our theme is simply: <span style="font-style: italic;">probability theory as extended logic.</span> The ‘new’ perception amounts to the recognition that the mathematical rules of probability theory are not merely rules for calculating frequencies of ‘random variables’; they are also the unique consistent rules for conducting inference (i.e. plausible reasoning) of any kind, and we shall apply them in full generality to that end.</span></div>
<!-- quote --> </blockquote>
</li>
<li><a href="http://www.atm.damtp.cam.ac.uk/mcintyre/mcintyre-thinking-probabilistically.pdf">"On Thinking Probabilistically"</a>, M.E. McIntyre</li>
<li>"The Black Swan" and "Fooled by Randomness", N.N. Taleb</li>
</ol>
@whuthttp://www.blogger.com/profile/18297101284358849575noreply@blogger.com9tag:blogger.com,1999:blog-7002040.post-18033474238369210332010-06-11T06:58:00.000-07:002010-06-12T16:54:31.499-07:00Worst Book on Oil Crisis Written YetFormer USGS staffer Steven Gorelick has written a book called <a href="http://www.wiley.com/WileyCDA/WileyTitle/productCd-1405195487.html">"</a><span style="font-size:100%;"><a href="http://www.wiley.com/WileyCDA/WileyTitle/productCd-1405195487.html">Oil Panic and the Global Crisis: Predictions and Myths"</a>. It has to rank as the worst of the neo-cornucopian books out there simply because it actually spreads myths instead of deigning to correct them, as the title implies.<br /><br />The author plays the role of a somewhat neutral bystander and balanced pseudo-journalist, never giving the appearance of a rabid oil cornucopian, yet slipping in so many groaners that he basically gives away his not-so-hidden agenda. From a scientific context, providing both sides of the story makes no sense when the objective is truth rather than balanced reporting. Excerpts of the book would fit right into a Fox News piece.<br /><br />To give a taste of how little original research Gorelick has actually performed and how much he relies on other cornucopians, consider the passage wherein he references geology professor Larry Cathles. On page 128, Gorelick quotes Cathles as saying that we may find as much as <span style="font-style: italic;">"1 trillion barrels of oil and gas in just a portion of the gulf oil sediments". 
</span><br /><br />I found the original statement by Cathles <a href="http://www.geotimes.org/june03/NN_gulf.html">here</a>:<br /><blockquote><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.geotimes.org/june03/Gulf_map.jpg"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 268px; height: 209px;" src="http://www.geotimes.org/june03/Gulf_map.jpg" alt="" border="0" /></a>Cathles and his team estimate that in a study area of about 9,600 square miles off the coast of Louisiana, source rocks a dozen kilometers down have generated as much as 184 billion tons of oil and gas — about 1,000 billion barrels of oil and gas equivalent. "That's 30 percent more than we humans have consumed over the entire petroleum era," Cathles says. "And that's just this one little postage stamp area; if this is going on worldwide, then there's a lot of hydrocarbons venting out."<br /></blockquote>Although not directly implicated as an <a href="http://mobjectivist.blogspot.com/2005/03/thy-name-is-mud.html">abiotic oil advocate</a> (unlike his late Cornell University colleague <a href="http://en.wikipedia.org/wiki/Abiogenic_petroleum_origin">Thomas Gold</a>), former Chevron employee Cathles has close ties to the largely mythical <a href="http://aapgbull.geoscienceworld.org/cgi/content/abstract/86/8/1463">Eugene Island story</a>. Several years ago, new discoveries from the previously tapped-out Eugene area raised people's hopes that somehow oil reservoirs could go through a near real-time "replenishment".<br /><blockquote>"We're dealing with this giant flow-through system where the hydrocarbons are generating now, moving through the overlying strata now, building the reservoirs now and spilling out into the ocean now," Cathles says. </blockquote>Well, the Eugene Island secondary production turned out to be just a blip on the radar screen, yet Cathles still gets a mention as a credible source? 
</span><span style="font-size:100%;"><span style="font-size:85%;">(<span style="font-style: italic;">Think about it: if this were true, then the recent Gulf Oil spill could allow a never-ending release of hydrocarbons from beneath the waters, <a href="http://www.marketoracle.co.uk/Article20207.html">as this urban legend still gets repeated</a></span>. <span style="font-style: italic;">How embarrassingly timely for Gorelick.</span>). </span></span><br /><span style="font-size:100%;"><br /></span>Elsewhere, the book becomes safe pablum for a narrowly defined audience. Note the limited depth of Gorelick's analysis and the intentional dumbing down in his writing: <blockquote style="font-style: italic;"><p>Hubbert used a straightforward formula that yields the curve as illustrated in Figure 1.2. The logistic-curve formula is a simple expression with three adjustable parameters (mathematical knobs) that control the slope, peak, height and time of peak</p></blockquote> <p>Now you see what happens when an author keeps it too simple. He ends up never explaining anything about the logistic, apart from providing the functional form in a footnote, and makes it worse by calling the parameters "mathematical knobs". That essentially gives a flavor of the depth of the mathematics.<br /></p><span style="font-size:100%;">Gorelick has an entire chapter called "Counter-Arguments to Imminent Oil Depletion". Notwithstanding that oil depletion is imminent <span style="font-style: italic;">by definition</span> (contrary to the book's implications, oil certainly does not regenerate), this chapter contains some of the most unscientific assertions that I have come across. 
Consider this bullet point coming from Gorelick</span><span style="font-size:100%;">:</span><br /><blockquote style="font-style: italic;"><span style="font-weight: bold;">-</span> The world has never run out of any significant globally traded, non-renewable Earth resource.</blockquote><span style="font-size:100%;">This </span><span style="font-size:100%;">false equivalence </span><span style="font-size:100%;">comes straight from the list of <a href="http://www.nizkor.org/features/fallacies/">logical fallacies</a>. I find it bizarre that a reputable scientist would appeal to this kind of argument. Further, he bullet-points:<br /><blockquote style="font-style: italic;"><span style="font-weight: bold;">-</span> The trends in production of global oil and natural gas have not declined as predicted.</blockquote>I call this a strawman fallacy, as no one has really come up with a formal theory of depletion. Instead, every oil prediction that I have seen has relied on some sort of <span style="font-style: italic;">ad hoc</span> analysis via heuristics. So to imply that something has not followed as predicted does not prove anything. As I have said before, heuristics do not substitute for theory, and Gorelick unfortunately has not contributed any research of his own.<br /><br />I listed only 2 of the 21 bullet-pointed counter-arguments that Gorelick concludes the chapter with. I can understand the need for these bullet points if he wanted to act like an objective journalist telling both sides of the story. Yet we have all learned from <a href="http://en.wikiquote.org/wiki/Paul_Krugman">Krugman</a> that real science does not scream headlines that say <span style="font-weight: bold; font-style: italic;">"</span><span style="font-weight: bold; font-style: italic;">Shape of Earth--Views Differ"</span><span style="font-size:100%;">. A scientist should dig deep and try to come up with a model or theory that would confirm or rebut the empirical evidence. 
You can't just rely on tired, worn-out assertions (the world has never run out of a resource, predictions have not come true, etc.) from the cornucopian right, put them in a book, and call it an advancement of knowledge.<br /><br />The book industry likely published Oil Panic because it does not even remotely challenge business as usual and actually condones the cornucopian viewpoint.</span><br /><br />End of book review.<br /><br /><span style="font-size:100%;"><hr /><br /><span style="font-weight: bold;">Musings</span><br /><br />Since Gorelick has propagated half-truths and not resolved any myths at all in the oil depletion realm, I figured I would return the favor in his own research area. </span><span style="font-size:100%;">From his CV, the "honored and awarded" Gorelick moved on from the USGS and became a <a href="http://earthsciences.stanford.edu/people/cv_printable.php?personnel_id=189">professor of hydrogeology</a> and part of the </span>Environmental Earth System Science<span style="font-size:100%;"> department at Stanford University</span><span style="font-size:100%;">. 
If he can write a book on peak oil and </span><span style="font-size:100%;">turn back progress on understanding oil depletion</span><span style="font-size:100%;">, I can opine on hydrogeology.<br /><br /></span><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJTC9xc02eZxZJ6bMlBhjh0AnfIn9pYa9qcckwPxHDjxxKkQiTBM3PhNV_S04X2ZthZPBjetXjC4CS4LOdJ4Vca-9msrXp9PgBfJgMKVUq-e3KfGHbc8BWuf0u45UBWG4lq8f7/s1600/breakthrough.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 200px; height: 154px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJTC9xc02eZxZJ6bMlBhjh0AnfIn9pYa9qcckwPxHDjxxKkQiTBM3PhNV_S04X2ZthZPBjetXjC4CS4LOdJ4Vca-9msrXp9PgBfJgMKVUq-e3KfGHbc8BWuf0u45UBWG4lq8f7/s200/breakthrough.gif" alt="" id="BLOGGER_PHOTO_ID_5476350759392653682" border="0" /></a><span style="font-size:100%;">From his research papers, Gorelick claims to understand how to model principles of hydrogeology and presumably knows about breakthrough curves. It turns out that most of the dispersive transport involved in hydrology applications hinges on some very simple overriding principles. These principles are so obvious to me that I don't understand why the brilliant scientific minds in geology have not figured this out.</span> Consider that Gorelick has expertise in "multiple-rate mass transfer", which I associate with the simple idea of <a href="http://mobjectivist.blogspot.com/2010/05/word-on-dispersion.html">dispersion applied to material transport</a>. I actually ran across Gorelick's work prior to reviewing his book because of my studies of generalized dispersive transport.<br /><br />As Gorelick should know, not all processes proceed at the same rate, and this includes variations in oil discovery rates around the world. 
This leads directly to the fat-tail effects that I see in oil reserves <span style="font-weight: bold;">and</span> to the fat tails that Gorelick observes in solute transport in his groundwater contamination studies. Not all solute diffuses and drifts at the same rate, which is why scientists see these long tails. How Gorelick can publish research on groundwater rates, yet see no analogy to the larger issue of oil extraction, seems like such a waste of intellectual potential.<br /><span style="font-size:100%;"><br /></span><span style="font-size:100%;">Should Gorelick ever read this review, I challenge him to read my work on dispersion and the math behind depletion of oil. These models come from solid math and probability underpinnings and simple physical first principles, and lead to the kind of insight that we all need to make sense of our fossil fuel energy situation.<br /><br /></span>@whuthttp://www.blogger.com/profile/18297101284358849575noreply@blogger.com2tag:blogger.com,1999:blog-7002040.post-74767210774048939442010-06-09T16:38:00.000-07:002010-06-10T07:48:26.890-07:00Oil Discovery Simulation RealityI should have run this particular simulation long ago. In this exercise, I essentially partitioned the <a href="http://www.theoildrum.com/node/3287">Dispersive Discovery model</a> into a bunch of subvolumes. Each subvolume belongs to a specific prospecting entity, which I have given a short alias. The simulation assigns each entity a random search rate and each subvolume a random size. The physical analogy is a prospector (the entity: an owner, leaser, company, nation, etc.) given their own subvolume (a geographic location) to explore for oil. When they exhaustively search that subvolume, they end up with a cumulative amount of oil. The subvolume abstraction allows the random size to translate directly into a proportional amount of oil. 
In general, bigger subvolumes equate to more oil, but this does not have to hold, since the random rates blur this distinction.<br /><br />Removing the technical mumbo-jumbo, the previous paragraph describes quite simply the context for the dispersive discovery model. Nothing about this description can possibly get misinterpreted, as it essentially describes the process of a bunch of people systematically <a href="http://www.theoildrum.com/node/2712">searching through a haystack for needles</a>. Each person has a varying ability and a varying volume to search through, which essentially describes the process of dispersion.<br /><br />The random number distributions derive from a mean search rate and a mean subvolume based on the <a href="http://en.wikipedia.org/wiki/Principle_of_maximum_entropy">principle of maximum entropy</a> (MaxEnt). The number of subvolumes multiplied by the mean subvolume generates an ultimately recoverable resource (URR) total. By building a Monte Carlo simulation of this model, we can see how the discovery process plays out for randomly chosen configurations.<br /><br />When the simulation executes, the search rates accelerate in unison so that the variance remains the same, maintaining MaxEnt of the aggregate. If I choose an exponential acceleration, the result turns precisely into the <a href="http://mobjectivist.blogspot.com/2005/11/derivation-of-logistic-function.html">Logistic sigmoid</a>, also known as the classic Hubbert Curve.<br /><br />The entire simulation exists on a Google spreadsheet. Each row corresponds to a prospecting entity/subvolume pairing. The first two cells provide a random starting rate and a randomly assigned subvolume. As you move left to right across the row, you see the fraction of the subvolume searched increase in an accelerating fashion with respect to time. The exponential growth factor resides in cell A2. 
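The same Monte Carlo recipe can be sketched outside the spreadsheet: draw rates and subvolume sizes from exponential (MaxEnt) distributions, accelerate all rates in unison, and cap each subvolume once exhausted. The parameter values below are illustrative assumptions of mine, not the numbers in the actual sheet:

```python
import math
import random

random.seed(42)

N = 250            # prospecting entities, as in the spreadsheet
MEAN_RATE = 0.001  # assumed mean initial search rate (volume units per year)
MEAN_VOL = 1.0     # assumed mean subvolume size
GROWTH = 0.06      # assumed exponential acceleration of the search rates
YEARS = 150

# MaxEnt with only a mean constrained -> exponential distributions
rates = [random.expovariate(1.0 / MEAN_RATE) for _ in range(N)]
vols = [random.expovariate(1.0 / MEAN_VOL) for _ in range(N)]

def cumulative_discovery(t):
    """Total volume searched by year t; each subvolume saturates at its size."""
    total = 0.0
    for r, v in zip(rates, vols):
        # integral of r*exp(GROWTH*t') dt' from 0 to t
        searched = r * (math.exp(GROWTH * t) - 1.0) / GROWTH
        total += min(searched, v)  # search stops once the subvolume is exhausted
    return total

# yearly discovery = discrete time derivative of cumulative discovery
yearly = [cumulative_discovery(t + 1) - cumulative_discovery(t) for t in range(YEARS)]
peak_year = yearly.index(max(yearly))
print(f"URR sampled: {sum(vols):.1f}, peak discovery year: {peak_year}")
```

Plotting `yearly` against time gives the single-peaked discovery curve; the sum of the exhausted subvolumes converges on the sampled URR, and the noise around the smooth analytical shape shrinks as `N` grows.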
At some point in time, the accelerating search volume meets the fixed volume constraint and the number stops increasing. At that moment, the prospector has effectively finished his search. That subvolume has essentially ceased to yield newly discovered oil.<br /><br />I reserve the 4th row for the summed values, while the 3rd row generates the time derivative, which plots out as yearly discovery. The simulation "runs" one Monte Carlo frame at a time. We essentially see a full snapshot of one sample for about 150 years of dispersive search.<br /><br /><span style="font-size:130%;"><a href="http://spreadsheets.google.com/pub?key=0AuycoDmNCe6wdFVxQ3VoRG1ZdWNjem1VLTR5bUdDemc&hl=en&output=html">View Google Spreadsheet</a></span><br /><br />I associated short names with each of the prospecting entities[1]. As I did not want to make the spreadsheet too large, I limited it to 250 entities (which pushes Google to the limit for data). This of course introduces some noise fluctuations. The non-noisy solid line displays the analytical solution to the dispersive discovery model, which happens to match the derivative of the Logistic sigmoid.<br /><br />The most important insight that we get from this exercise has to do with generating a <span style="font-weight: bold;">BLINDINGLY SIMPLE</span> explanation for deriving the Logistic behavior that most oil depletion analysts assume to exist, yet have no basis for. For crying out loud, I have seen children's board games with more complicated instructions than what I have given in the above paragraphs. Honestly, if you find someone who can't understand what is going on from what I have written, don't ask them to play <span style="font-style: italic;">Chutes & Ladders</span> either. Common-sense Peak Oil theory ultimately reduces to this <a href="http://www.theoildrum.com/node/4171">basic argument</a>.<br /><br />Contrast the elegance of the dispersive model with the most common alternative derivation for the logistic peak shape. 
This involves a completely misguided deterministic model that, not surprisingly, makes <span style="font-weight: bold;">ABSOLUTELY NO SENSE</span>. Whoever originally dreamed up the <a href="http://en.wikipedia.org/wiki/Logistic_function">Verhulst derivation for ecological modeling</a> and decided to apply it to Peak Oil must have consumed large quantities of mind-altering drugs prior to putting pencil to paper.<br /><br />I also want to point out that what I did has nothing to do with <a href="http://dieoff.org/page191.htm">multi-cycle Hubbert modeling</a>, which adds even less insight into the fundamental process.<br /><br />I hope that this exercise helps in understanding the mechanism behind dispersive discovery. Seriously, the big intuitive sticking point that people have with the model is the lack of any feedback mechanism in dispersive discovery. I imagine that engineers and most scientists get so used to seeing the Logistic derived from the feedback-based Verhulst and Lotka-Volterra (LV) equations that they can't believe a simple and correct formulation actually exists!<br /><br />In real terms, at some point the oil companies will cease to discover much of anything as they exhaust search possibilities. I suggest that they might want to consider making up for lost profit by licensing the oil discovery board game. This would help explain to their customers the reality of the situation.<br /><br /><span style="font-weight: bold;">UPDATE</span>:<br />Occasionally Google does an underflow or overflow on some calculations so that the aggregate curve won't plot. The following animated GIF shows a succession of curves:<br /><a target='_blank' href='http://img130.imageshack.us/img130/8448/ddsim.gif'><img src='http://img130.imageshack.us/img130/8448/ddsim.th.gif' border='0'/></a><br /><br /><br /><hr /><br />[1] I used shortened versions of TOD commenter names in the spreadsheet to make it a little more entertaining. 
I probably spent more time on writing the names down and battling the sluggishness of Google spreadsheet than I did on the simulation.@whuthttp://www.blogger.com/profile/18297101284358849575noreply@blogger.com0tag:blogger.com,1999:blog-7002040.post-34221657727700137462010-06-08T21:07:00.000-07:002010-06-08T23:14:41.268-07:00Predictably Unreliable<div class="Section1"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">I wrote about the <a href="http://mobjectivist.blogspot.com/2010/05/wind-energy-dispersion-analysis.html">unpredictably predictable</a> nature of wind power in a <a href="http://mobjectivist.blogspot.com/2010/06/wind-variability-in-germany.html">few recent posts</a>. <o:p></o:p></span></span><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><br /><br />And of course we have watched the unexpected and unpredicted blow-out of the Deepwater Horizon oil well (the ultra-rare 1-out-of-30,000 failure <a href="http://www.theoildrum.com/node/6496">according to conventional wisdom</a>) and are now hoping for the <a href="http://mobjectivist.blogspot.com/2010/06/reliability-of-relief-wells.html">successful deployment of relief wells</a>.<o:p></o:p></span></span> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">In the wind situation we know that it will work at least part of the time </span></span><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">(given sufficient wind power, that is) </span></span><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">without knowing precisely when, while in the second case we can only guess when a catastrophe with such safety-critical implications will occur.<o:p></o:p></span></span></p> <p class="MsoNormal"><span 
style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">We also have the unnerving situation of knowing that something <span style="font-style: italic;">will eventually</span> blow-out, but with uncertain knowledge of exactly when. Take the unpredictability of <a href="http://mobjectivist.blogspot.com/2009/10/popcorn-popping-as-discovery.html">popcorn popping</a> as a trivial example. We can never predict the time of any particular kernel but we know the vast majority will pop.<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">In a recent episode that I went through, the specific failure also did not come as a surprise. I had an inkling that an Internet radio that I frequently use would eventually stop working. From everything I had read on-line, my Soundbridge model had a power-supply flaw that would eventually reveal itself as a dead radio. Previous customers had reported the unit would go bad anywhere from immediately after purchase to a few years later. After about 3 years it finally happened to my radio and the failure mode turned out exactly the same as everyone else's -- a blown electrolytic capacitor and a possible burned out diode.<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">The part obviously blew out because of some heat stress and power dissipation problem, yet like the popcorn popping, my interest lies in the wide range in failure times. 
The Soundbridge failure in fact looks like the classic Markov process of a constant failure rate per unit time. In a Markov failure process, the expected number of defects reported per day is proportional to the number of units remaining operational. This turns into a flat line when graphed as failure rate versus time. Customers who have purchased Soundbridges will continue to <a href="http://forums.rokulabs.com/viewtopic.php?f=16&t=18007&sid=4ccf8801cfb7eef1bf6c3db12a553f13">routinely report the failures</a> for the next few years, with fewer and fewer reports as that model becomes obsolete.<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">Because of the randomness of the failure time, we know that any failures should follow some stochastic principle, and likely that entropic effects play into the behavior as well. When the component goes bad, the unit's particular physical state and the state of the environment govern the actual process; engineers call this the <a href="http://www.calce.umd.edu/general/education/physics_of_failure_and_reliabili.htm">physics of failure</a>. Yet, however specific the failure circumstance, the variability in the component's parameter space ultimately sets the variability in the failure time.<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">So I see another way to look at failure modes. We can either interpret the randomness from the perspective of the component or from the perspective of the user. 
If the latter, we might expect that someone would abuse the machine more than another customer, and therefore effectively speed up its failure rate. Except for some occasional power-cycling, this likely didn't happen with my radio, as the clock </span></span><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">stays powered </span></span><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">in standby most of the time. Further, many people will treat their machine gingerly. So we have a spread in both dimensions of component and environment.<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">If we look at the randomness from a component quality-control perspective, certainly manufacturing variations and manual assembly play a role. Upon internal inspection, I noticed the Soundbridge needed lots of manual labor to construct. Someone posting to the <a href="http://forums.rokulabs.com/viewtopic.php?f=16&t=18007&sid=4ccf8801cfb7eef1bf6c3db12a553f13"> online Roku radio forum</a> noticed a manually extended lead connected to a diode on their unit -- not good from a reliability perspective. <o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">So I have a different way of thinking about failures which doesn't always match the conventional wisdom in reliability circles. 
In certain cases the result derives as expected, but in other cases the result diverges from the textbook solution.<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style="font-size:100%;"><b><span style="font-family:Arial;"><span style="font-weight: bold;font-family:Arial;" >Fixed wear rate, variable critical point:</span></span></b></span><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"> To model this to first-order, we assume a critical-point (<span style="font-style: italic; font-weight: bold;">cp</span>) in the component that fails and then assume a distribution of the <span style="font-weight: bold; font-style: italic;">cp</span> value about a mean. Maximum entropy would say that this distribution would approximate an exponential:<o:p></o:p></span></span></p> <p class="MsoNormal"></p><blockquote><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"> <span style="font-style: italic; font-weight: bold;">p</span>(<span style="font-weight: bold; font-style: italic;">x</span>) = 1/<span style="font-weight: bold; font-style: italic;">cp </span>* exp(-<span style="font-style: italic; font-weight: bold;">x</span>/<span style="font-weight: bold; font-style: italic;">cp</span>)<o:p></o:p></span></span></blockquote><p></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">The rate at which we approach the variable <span style="font-weight: bold; font-style: italic;">cp</span> remains constant at <span style="font-weight: bold; font-style: italic;">R </span>(everyone uses/abuses it at the same rate). 
Then the cumulative probability of failure is <o:p></o:p></span></span></p> <p style="text-indent: 0.5in;" class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><span style="font-style: italic; font-weight: bold;">P</span>(<span style="font-weight: bold; font-style: italic;">t</span>) = integral of <span style="font-style: italic; font-weight: bold;">p</span>(<span style="font-weight: bold; font-style: italic;">x</span>) from <span style="font-weight: bold; font-style: italic;">x</span>=0 to <span style="font-weight: bold; font-style: italic;">x</span>=<span style="font-weight: bold; font-style: italic;">R</span><span style="font-size:85%;">*</span><span style="font-weight: bold; font-style: italic;">t</span><o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">This invokes the monotonic nature of failures by capturing all the points on the shortest critical path, and anything "longer" than the <span style="font-weight: bold; font-style: italic;">R</span>*<span style="font-weight: bold; font-style: italic;">t</span> threshold won't get counted until it fails later on. 
The solution to this integral becomes the expected rising damped exponential.<o:p></o:p></span></span></p> <p class="MsoNormal"></p><blockquote><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"> </span></span><span style=";font-family:Arial;font-size:100%;" ><span lang="FR" style="font-family:Arial;"><span style="font-weight: bold; font-style: italic;">P</span>(<span style="font-weight: bold; font-style: italic;">t</span>) = 1 - exp(-<span style="font-weight: bold; font-style: italic;">R</span>*<span style="font-weight: bold; font-style: italic;">t</span>/<span style="font-weight: bold; font-style: italic;">cp</span>)<o:p></o:p></span></span></blockquote><p></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span lang="FR" style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">Most people will substitute a value of </span></span><span style="font-weight: bold; font-style: italic;">τ</span> <span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">for <span style="font-weight: bold; font-style: italic;">cp</span>/<span style="font-style: italic; font-weight: bold;">R</span> to make it look like a lifetime. 
This is the generally accepted form for the expected lifetime of a component to first-order.<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"></p><blockquote><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"> </span></span><span style=";font-family:Arial;font-size:100%;" ><span lang="FR" style="font-family:Arial;"><span style="font-weight: bold; font-style: italic;">P</span>(<span style="font-style: italic; font-weight: bold;">t</span>) = 1 - exp(-<span style="font-weight: bold; font-style: italic;">t</span> / </span></span><span style="font-weight: bold; font-style: italic;">τ</span><span style=";font-family:Arial;font-size:100%;" ><span lang="FR" style="font-family:Arial;">)<o:p></o:p></span></span></blockquote><p></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span lang="FR" style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">So even though it looks as if we have a distribution of lifetimes, in this situation we actually have as a foundation a distribution in critical points. 
In other words, I get the correct result but I approach it from a non-conventional angle.<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLtIAIjKW-A9zXbQRRfvTlNEyfkl-PmBn2gAfohRk0LPT5pj1tw0Bck2EFaDw0gh7PwRmG6PIqfPH6nhqV1DsRxfL4DMwh7Qs4GVV2bYMwwzxO91TRsm_rI1axsBuW1rQLyiLG/s1600/velocity_reliability.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 210px; height: 208px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLtIAIjKW-A9zXbQRRfvTlNEyfkl-PmBn2gAfohRk0LPT5pj1tw0Bck2EFaDw0gh7PwRmG6PIqfPH6nhqV1DsRxfL4DMwh7Qs4GVV2bYMwwzxO91TRsm_rI1axsBuW1rQLyiLG/s1600/velocity_reliability.gif" alt="" border="0" /></a><span style="font-size:100%;"><b><span style="font-family:Arial;"><span style="font-weight: bold;font-family:Arial;" >Fixed critical point, variable rate</span></span></b></span><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">: Now turn this case on its head and say that we have a fixed critical point and we have a maximum entropy variation in rate assuming some mean value, <span style="font-weight: bold; font-style: italic;">R</span>. 
<o:p></o:p></span></span></p> <p class="MsoNormal"></p><blockquote><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"> <span style="font-weight: bold; font-style: italic;">p</span>(<span style="font-weight: bold; font-style: italic;">r</span>) = 1/<span style="font-weight: bold; font-style: italic;">R</span> * exp(-<span style="font-weight: bold; font-style: italic;">r</span>/<span style="font-weight: bold; font-style: italic;">R</span>)<o:p></o:p></span></span></blockquote><p></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">Then the cumulative integral looks like:<o:p></o:p></span></span></p> <p style="text-indent: 0.5in;" class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><span style="font-weight: bold; font-style: italic;">P</span>(<span style="font-weight: bold; font-style: italic;">t</span>) = integral of <span style="font-weight: bold; font-style: italic;">p</span>(<span style="font-weight: bold; font-style: italic;">r</span>) from <span style="font-weight: bold; font-style: italic;">r</span>=<span style="font-weight: bold; font-style: italic;">cp</span>/<span style="font-weight: bold; font-style: italic;">t </span> to <span style="font-weight: bold; font-style: italic;">r</span>=</span></span><span style="font-weight: bold;">∞</span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">Note carefully that the critical path in this case captures only the fastest rates and anything slower than the <span style="font-weight: bold; font-style: italic;">cp</span>/<span style="font-weight: bold; font-style: italic;">t</span> threshold won't get counted until later. 
<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">The result evaluates to<o:p></o:p></span></span></p> <p class="MsoNormal"></p><blockquote><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"> <span style="font-weight: bold; font-style: italic;">P</span>(<span style="font-weight: bold; font-style: italic;">t</span>) = exp(-<span style="font-weight: bold; font-style: italic;">cp</span>/(<span style="font-style: italic; font-weight: bold;">R</span>*<span style="font-weight: bold; font-style: italic;">t</span>))<o:p></o:p></span></span></blockquote><p></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">This has the characteristics of a fat-tail distribution because time goes into the denominator of the exponent, instead of the numerator. Physically, this means that we have very few instantaneously fast rates and many rates proceed slower than the mean. <o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style="font-size:100%;"><b><span style="font-family:Arial;"><span style="font-weight: bold;font-family:Arial;" >Variable wear rate, variable critical point: </span></span></b></span><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">In a sense, the two preceding behaviors are complementary to each other. 
So we can also derive <span style="font-weight: bold; font-style: italic;">P</span>(<span style="font-weight: bold; font-style: italic;">t</span>) for the situation whereby <i><span style="font-style: italic;">both the rate and critical point</span></i> vary.<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p style="text-indent: 0.5in;" class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><span style="font-weight: bold; font-style: italic;">P</span>(<span style="font-weight: bold; font-style: italic;">t</span>) = integral of <span style="font-weight: bold; font-style: italic;">P</span>(<span style="font-weight: bold; font-style: italic;">t</span> | <span style="font-weight: bold; font-style: italic;">r</span>)*<span style="font-weight: bold; font-style: italic;">p</span>(<span style="font-weight: bold; font-style: italic;">r</span>) over all <span style="font-weight: bold; font-style: italic;">r</span><o:p></o:p></span></span></p> <p style="text-indent: 0.5in;" class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">This results in the exponential-free cumulative, which has the form of an <a href="http://mobjectivist.blogspot.com/2010/04/entroplet-species-area-relationships.html">entroplet</a>.<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p style="text-indent: 0.5in;" class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span lang="FR" style="font-family:Arial;"><span style="font-weight: bold; font-style: italic;">P</span>(<span style="font-weight: bold; font-style: italic;">t</span>) = <span 
style="font-weight: bold; font-style: italic;">R</span>*<span style="font-weight: bold; font-style: italic;">t</span>/<span style="font-weight: bold; font-style: italic;">cp </span>/ (1+ <span style="font-weight: bold; font-style: italic;">R</span>*<span style="font-weight: bold; font-style: italic;">t</span>/<span style="font-weight: bold; font-style: italic;">cp</span>) = <span style="font-weight: bold; font-style: italic;">t</span>/</span></span><span style="font-weight: bold; font-style: italic;">τ</span><span style=";font-family:Arial;font-size:100%;" ><span lang="FR" style="font-family:Arial;">/(1+<span style="font-weight: bold; font-style: italic;">t</span>/</span></span><span style="font-weight: bold; font-style: italic;">τ</span><span style=";font-family:Arial;font-size:100%;" ><span lang="FR" style="font-family:Arial;">)<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span lang="FR" style="font-family:Arial;"> <o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">Plotting the three variations side-by-side and assuming that </span></span><span style="font-weight: bold; font-style: italic;">τ</span><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">=1, we get the following set of cumulative failure distributions. 
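These three cumulative forms can also be tabulated side-by-side with a few lines of Python (a sketch I'm adding, using the same <i>τ</i>=1 scaling as the plot):

```python
import math

tau = 1.0

def P_fixed_rate(t):   # variable critical point, fixed rate: thin-tail exponential
    return 1 - math.exp(-t / tau)

def P_fixed_cp(t):     # variable rate, fixed critical point: fat tail
    return math.exp(-tau / t) if t > 0 else 0.0

def P_both(t):         # both rate and critical point vary: the entroplet form
    return (t / tau) / (1 + t / tau)

print(f"{'t':>6} {'fixed rate':>11} {'both vary':>10} {'fixed cp':>9}")
for t in (0.1, 0.5, 1.0, 2.0, 5.0, 20.0):
    print(f"{t:6.1f} {P_fixed_rate(t):11.3f} {P_both(t):10.3f} {P_fixed_cp(t):9.3f}")
```

At every time shown, the full variant sits between the other two, which is the nestling behavior described next.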
The full variant nestles in between the two other exponential variants, so it retains the character of more early failures (ala the <a href="http://mobjectivist.blogspot.com/2009/10/failure-is-complement-of-success.html">bathtub curve</a>) yet it also shows a fat-tail so that failure-free operation can extend for longer periods of time.<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style="font-size:100%;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi432YRa-TE9Nj16IIjfjnLyo79ntY1ICnuIlN1Y7jkVt98pO6NmBtvysJhxotvUYB7TdoQJZbHlYBnZoiFzmwBvs2_cD5XtIH8UE5oGvLxttn8yNwXeWmJfBLTQnbnXm4u1l5N/s1600/rel-curves1.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 364px; height: 198px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi432YRa-TE9Nj16IIjfjnLyo79ntY1ICnuIlN1Y7jkVt98pO6NmBtvysJhxotvUYB7TdoQJZbHlYBnZoiFzmwBvs2_cD5XtIH8UE5oGvLxttn8yNwXeWmJfBLTQnbnXm4u1l5N/s400/rel-curves1.gif" alt="" id="BLOGGER_PHOTO_ID_5480622135652375570" border="0" /></a></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">To understand what happens at a more intuitive level we define the fractional failure rate as<br /></span></span></p><p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><blockquote><span style="font-weight: bold; font-style: italic;">F</span>(<span style="font-weight: bold; font-style: italic;">t</span>) 
=<span style="font-weight: bold;"> d</span><span style="font-weight: bold; font-style: italic;">P</span>/<span style="font-weight: bold;">d</span><span style="font-weight: bold; font-style: italic;">t</span> / (1-<span style="font-weight: bold; font-style: italic;">P</span>(<span style="font-weight: bold; font-style: italic;">t</span>)) </blockquote>Analysts use this form since it makes it more amenable to predicting failures on populations of parts. The rate then applies only to how many remain in the population, and the ones that have failed drop out of the count.<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">Only the first case above gives a failure rate that approaches the Markov ideal of constant rate over time. The other two dip below the constant rate of the Markov simply because the fat-tail cumulative requires a finite integrability over the time scale, and so the rates will necessarily stay lower.<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3pI-xWayRmEWTiSBgf-XMUlBY19pox05MGt9f0V0hjy36n6Lb5YKBC3D4DoRs9lxEJcdy_iJq5osMg7Iw3grGQO-boglUDBx7e7kYQJLIj6bTai4KHtzOMN9i75GDbqtr7ExS/s1600/rel-curves2.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 393px; height: 197px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3pI-xWayRmEWTiSBgf-XMUlBY19pox05MGt9f0V0hjy36n6Lb5YKBC3D4DoRs9lxEJcdy_iJq5osMg7Iw3grGQO-boglUDBx7e7kYQJLIj6bTai4KHtzOMN9i75GDbqtr7ExS/s400/rel-curves2.gif" alt="" 
id="BLOGGER_PHOTO_ID_5480622633146630450" border="0" /></a></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><a href="http://mobjectivist.blogspot.com/2009/10/creep-failure.html">Another post</a> gives a full account of what happens when we generalize the first-order linear growth on the rate term, letting <span style="font-weight: bold; font-style: italic;">R</span>=<span style="font-weight: bold; font-style: italic;">g</span>(<span style="font-weight: bold; font-style: italic;">t</span>). The full variant ultimately gives <span style="font-weight: bold; font-style: italic;">dg</span>/<span style="font-weight: bold; font-style: italic;">dt</span> / (1+<span style="font-weight: bold; font-style: italic;">g</span>(<span style="font-weight: bold; font-style: italic;">t</span>)), so that if <span style="font-weight: bold; font-style: italic;">g</span>(<span style="font-weight: bold; font-style: italic;">t</span>) starts rising we get the complete <a href="http://mobjectivist.blogspot.com/2009/10/failure-is-complement-of-success.html">bathtub curve</a>.<o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">If we don't invoke other time dependencies on the rate function <span style="font-weight: bold; font-style: italic;">g</span>(<span style="font-weight: bold; font-style: italic;">t</span>), we see how certain systems never show failures after an initial period. 
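A quick numerical check of the fractional failure rates (my own sketch; the finite-difference `hazard` function approximates d<i>P</i>/d<i>t</i> / (1-<i>P</i>)) shows the constant Markov rate alongside the declining rates of the fat-tail variants:

```python
import math

tau = 1.0

def hazard(P, t, dt=1e-6):
    """Fractional failure rate F(t) = dP/dt / (1 - P(t)), via central difference."""
    dPdt = (P(t + dt) - P(t - dt)) / (2 * dt)
    return dPdt / (1 - P(t))

P_fixed_rate = lambda t: 1 - math.exp(-t / tau)      # constant-hazard (Markov) case
P_fixed_cp   = lambda t: math.exp(-tau / t)          # fat-tail case
P_both       = lambda t: (t / tau) / (1 + t / tau)   # entroplet case

for t in (0.5, 1.0, 2.0, 5.0):
    print(f"t={t:4.1f}  F_markov={hazard(P_fixed_rate, t):.3f}  "
          f"F_entroplet={hazard(P_both, t):.3f}  F_fat={hazard(P_fixed_cp, t):.3f}")
```

The Markov case stays pinned at 1/<i>τ</i>, while the entroplet hazard decays as 1/(<i>τ</i>+<i>t</i>), matching the dg/dt / (1+g(t)) expression with g(t)=t/<i>τ</i>.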
Think about it for a moment -- the fat-tails of the variable rate cases push the effective threshold for failure further and further into the future. <o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p> </o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">In effect, normalizing the failures in this way explains why some components have predictable unreliability, while other components can settle down and seemingly last forever after the initial transient.</span></span></p><p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">I discovered that <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.77.4826&rep=rep1&type=pdf">this paper by Pandey</a> jibes with the way I think about the general problem.</span></span><br /><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;"><o:p></o:p></span></span></p> <p class="MsoNormal"><span style=";font-family:Arial;font-size:100%;" ><span style="font-family:Arial;">Enjoy your popcorn; it should have popped by now.<o:p></o:p></span></span></p></div><hr /><b>Reliability of Relief Wells</b> (posted by whut, 2010-06-06)<br /><br />I have seen much discussion on <a href="http://theoildrum.com/">TOD</a> and elsewhere of the effectiveness of adding 
relief wells to take the pressure off the failed well in the Gulf. Occasionally I have noticed questions on how one would make a kind of reliability prediction given estimated success/failure probability numbers. This turns into the classic <a href="http://en.wikipedia.org/wiki/Redundancy_%28engineering%29">redundant configuration</a> reliability prediction problem.<br /><p>Initially, for pure success probabilities I wouldn't add time to the equation. In the steady-state we just work with basic probability multiplications. If the probabilities of success rates remain independent of each other, then they form a pattern. Say we have three tries for relief wells, each one having a value between 0 and 1. If all three fail then the whole attempt failed:<br /></p><blockquote>P(failure) = P1(failure)*P2(failure)*P3(failure)</blockquote>and<br /><blockquote>P(success)=1-P(failure)</blockquote><p></p>so if P1=P2=P3=1-0.7=0.3<br /><p>then P(failure)=0.027<br /><br />and P(success)=0.973</p>With time you need to work from the notion of a deadline, i.e. that no failures occur in a certain amount of time. Otherwise you end up using the fixed probabilities above because you have essentially infinite time to work with.<br /><p>Apart from end-state failure analysis, you can also do a time-averaged effectiveness, where the rates help you do a trade-off analysis between how long it takes before you fix the problem and how much oil gets released in the meantime. Unfortunately, when you look at the optimization criteria, the only valid optimum in most people's minds is to stop the oil leak as quickly as possible. Otherwise it looks like we play dictator rolling dice (at least IMO that is the political response I predict to get).</p>Given that political issue, you can create a set of criteria with weights on the probabilities of success, the cost, and on the amount of oil leaked (the first and third as Markov models as a function of time). 
When you combine the three and look for an optimum, you might get a result that gives you a number of relief wells somewhere between 1 and infinity. The hard part remains establishing the weighting criteria. Place a lower weight on cost and you will definitely lower the number of wells. And that's where the politics plays in again, as many people will suggest that cost does not form a limitation. We also have the possibility of a massive blow-out by adding a botched relief well, but that risk may turn out acceptable.<br /><p>Below I show a state diagram from a Markov-based reliability model. With the Markov process premise you can specify rates of probability flow between various states and then execute it without having to resort to Monte Carlo.</p><p></p><div style="text-align: center;"><img style="width: 376px; height: 635px;" src="http://img204.imageshack.us/img204/9025/reliefwell.gif" /><br /></div><p>I made this diagram for 3 relief wells drilled in succession, when one doesn't work, then we start the next. The term <span style="font-weight: bold; font-style: italic;">B1</span> is the rate for a failure specified as 0.01 (or 1 in 100 days). <span style="font-weight: bold; font-style: italic;">B2</span> is a success rate of 0.02 (or 1 in 50 days). The start state is <span style="font-weight: bold; font-style: italic;">P1</span>, the success state is <span style="font-weight: bold; font-style: italic;">P3</span>, and the end failure state is <span style="font-weight: bold; font-style: italic;">P5</span>.</p>When I execute this for 200 days, the probability of entering state <span style="font-weight: bold; font-style: italic;">P5</span> is 3.5% and it will rise to 3.7% after 1000 days. <span style="font-weight: bold; font-style: italic;">P3</span> is 95% after 200 days. The sanity check on this gives a success ratio of about 0.02/(0.01+0.02)=0.666 and from the formula this gives a probability of failure at the end state of (1/3)^3 = 0.037 = 3.7%. 
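A minimal explicit time-stepping of this serial three-well state diagram reproduces those numbers (a sketch of the model described above, assuming exponential transitions with no build delay; the Euler step size and state bookkeeping are my own choices):

```python
# Serial relief-well Markov model: three drilling states plus absorbing
# success and failure states, with B1 = failure rate, B2 = success rate.
b1, b2 = 0.01, 0.02   # per-day rates, as in the text
dt, t_end = 0.01, 200.0

p = [1.0, 0.0, 0.0, 0.0, 0.0]   # [well1, well2, well3, success, failure]
t = 0.0
while t < t_end:
    # probability leaving each drilling state this step
    flow = [(b1 + b2) * p[i] * dt for i in range(3)]
    p[3] += (b2 / (b1 + b2)) * sum(flow)    # success possible from any drilling state
    p[1] += (b1 / (b1 + b2)) * flow[0]      # well 1 fails -> start well 2
    p[2] += (b1 / (b1 + b2)) * flow[1]      # well 2 fails -> start well 3
    p[4] += (b1 / (b1 + b2)) * flow[2]      # well 3 fails -> end failure state
    for i in range(3):
        p[i] -= flow[i]
    t += dt

print(f"P(success) after {t_end:.0f} days: {p[3]:.3f}")
print(f"P(failure) after {t_end:.0f} days: {p[4]:.3f}")
print(f"Steady-state failure probability: {(b1 / (b1 + b2)) ** 3:.3f}")
```

The integration lands near 95% success and 3.5% failure at 200 days, with the failure state creeping toward the (1/3)^3 = 3.7% steady-state bound.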
This sanity checks with the output after 1000 days.<br /><p>The Markov model allows you to predict the time dependence of success and failure based on the assumptions of the individual non-redundant failure rates. You can thus work the model as a straightforward reliability prediction. Change the success probabilities to a 50% individual success rate and we still only need three relief wells if we want to get to 87.5%. Contrast that with the roughly 97% overall success rate for 3 wells when each individual well sits on the optimistic side at a 70% success rate. So you can see that our confidence grows with the confidence in the success of the individual wells, which makes intuitive sense.<br /></p>This particular model assumes a serial succession of relief wells. You can also model relief wells constructed in parallel, which I believe remains the current strategy in the Gulf. Or you can model the initial delay a little better. With the model as described, we have success rates that can occur earlier than perhaps expected. An exponential on the success rate per time provides a distribution where the standard deviation equals the mean, which is the most conservative estimator should you have no idea what the standard deviation is. To generate a model with about half the standard deviation, we can turn the exponential into a gamma. Each relief well spends about half its time in a "build" stage where it experiences neither success nor failure. Then the next stage of its life-cycle gets spent in testing for success. 
See the following chart:<br /><p style="text-align: center;"><img src="http://img32.imageshack.us/img32/4593/reliefwellgamma.gif" width="350" /></p>The overall result doesn't differ much from the previous model but you do see a much diminished success rate early on -- which makes the model match reality better.<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtB3l_9wOZ4vCOvS4rfhaCPDk0-sGG5hlAKn8yLIGuAcwrmS-iNKDRpM2BWNVPOhHTC6LgWsifCYucH7hciSfbplSoKRbXwuJRBntv_Yhx3XZ_ItpfH3dSInTMgPZRuwhHUiAH/s1600/fb.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 317px; height: 235px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtB3l_9wOZ4vCOvS4rfhaCPDk0-sGG5hlAKn8yLIGuAcwrmS-iNKDRpM2BWNVPOhHTC6LgWsifCYucH7hciSfbplSoKRbXwuJRBntv_Yhx3XZ_ItpfH3dSInTMgPZRuwhHUiAH/s400/fb.gif" alt="" id="BLOGGER_PHOTO_ID_5479770419579408018" border="0" /></a><br /><br />As another possibility, we can repeat an individual relief well several times, backing up and retrying if the last one doesn't work. We can model that as a state that feeds back on itself, with a rate <span style="font-weight: bold; font-style: italic;">B4</span>. I won't run this one because I don't know the rates of retries, but the general shape of the failure/success curve looks similar.<br /><br /><br /><p>I'm sure some group of analysts somewhere has worked a similar kind of calculation. Whether it pays off or not for a single case, I can't really say. However, this kind of model effectively describes how the probabilities work out and how you can use a state diagram to keep track of failure and success transitions.</p>By the way, this same math goes into the <a href="http://mobjectivist.blogspot.com/2008/08/pipes-and-oil-shock-model.html">Oil Shock Model</a> which I use for oil production prediction. 
In the oil shock model, transitions describe the rates between oil production life-cycle states, such as construction and maturation and extraction. So both the reliability model and the Oil Shock model derive from probability-based data flow models. This kind of model works very well for oil production because we have a huge number of independently producing regions around the world and the law of large numbers makes the probability projections that much more accurate. As a result, I would put more trust in relying on the results of the oil shock model than predicting the success of the recovery of a single failed deep-water production well. Yet, the relief well redundancy model does help to estimate how many extra relief wells to add and adds some quantitative confidence to one's intuition.<br /><br /><hr /><br /><br />Based on the post by Joules Burn (JB) on TOD <a href="http://www.theoildrum.com/node/6573">BP's Deepwater Oil Spill: A Statistical Analysis of How Many Relief Wells Are Needed</a>, I added a few comments:<br /><br />JB did everything perfectly correctly given the premises. Another way to look at it is that you need to accomplish a sequence of steps, each with a probability rate of entering into the next state. This would simulate the construction of the relief well itself (a sequence of steps). Then you would have a rate into a state where you start testing the well for success. This goes into a state that results in either a success, retry, or failure (the utter failure in JB lingo). The convenient thing is that you can draw the retry as a feedback loop, so the result looks like the following for a single well:<br /><br /><img src="http://img808.imageshack.us/img808/2462/retry.gif" /><br />I picked some of the numbers from intuition, but the results have the general shape that JB showed. 
When you look at a rate like 0.1, inverting it gives a mean transition of 10 days.<br /><br />This is a state diagram simulation like that used in the Oil Shock model, which I use to project worldwide oil production. I find it interesting to see how well accepted the failure rate approach is for failure analysis, but few seem to accept it for oil depletion analysis. I presume oil depletion is not as mission critical a problem as the Gulf spill is :)<br /><br /><hr /><b>Thermal Entropic Dispersion</b> (posted by whut, 2010-06-05)<br /><br />As we learn how to extract energy from disordered, entropic systems such as <a href="http://mobjectivist.blogspot.com/2010/05/characterizing-mobility-in-disordered.html">amorphous photovoltaics</a> and <a href="http://mobjectivist.blogspot.com/2010/05/wind-energy-dispersion-analysis.html">wind power</a>, we can really start thinking creatively in terms of our analysis. Most of the conventional thinking goes out the window, as considerations of the impact of disorder require a different mindset.<br /><br /><a href="http://mobjectivist.blogspot.com/2010/05/word-on-dispersion.html">In a recent post</a>, I solved the Fokker-Planck diffusion/convection equation for disordered systems and demonstrated how well it applied to transport equations; I gave examples for both amorphous silicon photocurrent response and for the breakthrough curve of a solute. Both these systems feature some measurable particle, either a charged particle for a photovoltaic or a traced particle for a dispersing solute.<br /><br />Similarly, the conduction of heat also follows the Fokker-Planck equation at its most elemental level. In this case, we can monitor the temperature as the heat flows from regions of high temperature to regions of low temperature. In contrast to the particle systems, we do not see a drift component. 
In a static medium, not abetted by currents (as an example, mobile ground water) or re-radiation, heat energy will only move around by a diffusion-like mechanism.<br /><br />No one can dispute that the flow of heat shows the characteristics of an entropic system -- after all, temperature serves as a measure of entropy. However, the way that heat flows in a homogeneous environment suggests more order than you may realize in a practical situation. In a perfectly uniform medium, we can propose a single diffusion coefficient, <span style="font-weight: bold; font-style: italic;">D</span>, to describe the flow or flux. A change of units translates this to a thermal conductivity. This value inversely relates to the <a href="http://en.wikipedia.org/wiki/Thermal_conductivity">R-value</a> that most people have familiarity with when it comes to insulation.<br /><br />For particles in the steady state, we think of Fick's First Law of Diffusion. For heat conduction, the analogy is <a href="http://en.wikipedia.org/wiki/Heat_conduction">Fourier's Law</a>. These both rely on the concept of a concentration gradient, and functionally appear the same; only the physical dimensions of the parameters change. 
Adding the concept of time, you can generalize to the Fokker-Planck equation (i.e. Fick's Second Law or the <a href="http://wapedia.mobi/en/Heat_equation">Heat Equation</a>, respectively).<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlyNJHF28dSZsQwDudQrww8XHxkiWc7Ra-jk140oeUuzxpMDiZ9iG06uWpSyaM_Q9G8dIqIo7Bfxff-W5fN6msD9jx50xPVKdxKwOIwQQ5kpieFeaPHyELyeA5hX5lz7xsn5FQ/s1600/fpe-gaussian.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 296px; height: 202px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlyNJHF28dSZsQwDudQrww8XHxkiWc7Ra-jk140oeUuzxpMDiZ9iG06uWpSyaM_Q9G8dIqIo7Bfxff-W5fN6msD9jx50xPVKdxKwOIwQQ5kpieFeaPHyELyeA5hX5lz7xsn5FQ/s400/fpe-gaussian.gif" alt="" id="BLOGGER_PHOTO_ID_5479694138003481250" border="0" /></a>Much as with a particle system, solving the one-dimensional Fokker-Planck equation for a thermal impulse yields a Gaussian packet that widens from the origin as it diffuses outward. See the picture to the right for progressively larger values of time. The cumulative amount collected at some point, <span style="font-weight: bold; font-style: italic;">x</span>, away from the origin results in a sigmoid-like curve known as a<span style="font-style: italic;"> complementary error function </span><span style="font-family:courier new;">or <span style="font-weight: bold;">erfc</span></span>.<br /><br />Yet in practice we find that a particular medium may show a strong amount of non-uniformity. For example, earth may contain large rocks or pockets which can radically alter the local diffusivity. The same thing occurs with the insulation in a dwelling; doors and windows will have different thermal conductivity than the walls. 
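Before layering on the disorder, the uniform-medium baseline (the widening Gaussian packet described above) can be verified with a minimal explicit finite-difference integration of the 1-D heat equation (a sketch I'm adding; the grid parameters are my own choices):

```python
# Explicit finite-difference solution of dT/dt = D * d2T/dx2 for a unit heat
# impulse at the origin; the packet should stay Gaussian with variance 2*D*t.
D = 1.0
dx = 0.1
dt = 0.4 * dx * dx / D    # satisfies the explicit-stability limit dt <= dx^2/(2D)
n = 801                   # grid spanning x in [-40, 40]
T = [0.0] * n
T[n // 2] = 1.0 / dx      # unit impulse at x = 0 (as a density)

def step(T):
    r = D * dt / (dx * dx)
    return [T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1]) if 0 < i < n - 1 else T[i]
            for i in range(n)]

t = 0.0
while t < 2.0:
    T = step(T)
    t += dt

xs = [(i - n // 2) * dx for i in range(n)]
mass = sum(T) * dx
var = sum(x * x * Ti for x, Ti in zip(xs, T)) * dx / mass
print(f"t={t:.3f}  variance={var:.3f}  2*D*t={2 * D * t:.3f}")
```

The measured variance of the packet tracks 2<i>Dt</i>, the diffusive spreading that underlies the erfc cumulative.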
The fact that reflecting barriers exist means that the <span style="font-style: italic;">effective</span> thermal conductivity can vary (similarly this arises in variations due to Rayleigh scattering in <a href="http://mobjectivist.blogspot.com/2010/06/wind-variability-in-germany.html">wind</a> and <a href="http://mobjectivist.blogspot.com/2010/04/rayleigh-fading-wireless-gadgets-and.html">wireless</a> observations). I see nothing radical about the overall non-uniformity concept, just an acknowledgment that we will quite often see a heterogeneous environment and we should know how to deal with it.<br /><br />Previously, I solved the FPE for a disordered system assuming both diffusive and drift components. <a href="http://mobjectivist.blogspot.com/2010/05/fokker-planck-for-disordered-systems.html">In that solution</a> I assumed a maximum entropy (MaxEnt) distribution for mobilities and then tied diffusivity to mobility via the Einstein relation. The solution simplifies if we remove the mobility drift term and rely only on diffusivity. 
The cumulative impulse response to a delta-function heat energy flux stimulus then reduces to:<br /><blockquote><span style="font-style: italic; font-weight: bold;">T</span>(<span style="font-weight: bold; font-style: italic;">x</span>,<span style="font-weight: bold; font-style: italic;">t</span>) = <span style="font-weight: bold; font-style: italic;">T</span>1* exp(-<span style="font-weight: bold; font-style: italic;">x</span>/sqrt(<span style="font-weight: bold; font-style: italic;">D</span>*<span style="font-weight: bold; font-style: italic;">t</span>)) + <span style="font-weight: bold; font-style: italic;">T</span>0</blockquote><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1ybQzLoGMRbo3cVhHxwAorEf0vN65kCCDHU8kRwmZUX-jp1HBZVmVVjEf0mSq1bfh6iT1TxVWRsEHNjXsa_UWpqrpUPARqu_QGNzN-d5E6TuE7Tdflh-WceMdMs6iSDYReCRa/s1600/erf.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 316px; height: 205px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1ybQzLoGMRbo3cVhHxwAorEf0vN65kCCDHU8kRwmZUX-jp1HBZVmVVjEf0mSq1bfh6iT1TxVWRsEHNjXsa_UWpqrpUPARqu_QGNzN-d5E6TuE7Tdflh-WceMdMs6iSDYReCRa/s400/erf.gif" alt="" id="BLOGGER_PHOTO_ID_5479704987247396274" border="0" /></a>No <span style="font-family:courier new;"> erfc </span>in this equation (which by the way makes it useful for quick analysis). I show the difference between the two solutions in the graph to the right (for a one-dimensional distance <span style="font-weight: bold; font-style: italic;">x</span>=1 and a scaled diffusivity of <span style="font-weight: bold; font-style: italic;">D</span>=1). The uniform diffusivity form (<span style="color: rgb(153, 0, 0);">red </span>curve) shows a slightly more pronounced knee as the cumulative increases than the disordered form (<span style="color: rgb(51, 51, 255);">blue </span>curve) does. 
The fixed <span style="font-weight: bold; font-style: italic;">D</span> also settles to an asymptote more quickly than the MaxEnt disordered <span style="font-weight: bold; font-style: italic;">D</span> does, which continues to creep upward gradually. In practical terms, this says that things will heat up or cool down more gradually when a variable medium exists between yourself and the external heat source.<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2nRSuluGWkKTtjYQs7_KbjMdVlNj1SQEKt8qlK2VQS_t9Ye26Kk7-ysQZ6xg07YZj1iG-YJDSFrTLZfbM6D5pIa5PL5A1HPlP9lGZOSQ6NjTxI2XBKSsNX9auXAh4kYJyGs52/s1600/erf-small.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 320px; height: 199px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2nRSuluGWkKTtjYQs7_KbjMdVlNj1SQEKt8qlK2VQS_t9Ye26Kk7-ysQZ6xg07YZj1iG-YJDSFrTLZfbM6D5pIa5PL5A1HPlP9lGZOSQ6NjTxI2XBKSsNX9auXAh4kYJyGs52/s400/erf-small.gif" alt="" id="BLOGGER_PHOTO_ID_5479706857000099634" border="0" /></a><br />Because of the variations in diffusivity, some of the heat will also arrive a bit more quickly than if we had a uniform diffusivity. See the figure to the right for small times. Overall the differences appear a bit subtle. This is largely because diffusion already implies disorder; the MaxEnt formulation simply makes the fat tails fatter. 
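The difference between the two cumulative responses is easy to check numerically. A minimal sketch (the function names are my own) comparing the classical uniform-diffusivity solution, which goes as erfc, against the MaxEnt-disordered exponential form above, at <i>x</i>=1 and <i>D</i>=1:

```python
# Compare the two cumulative impulse responses discussed above, for x = 1
# and D = 1 (scaled units, as in the graphs). The uniform-diffusivity case
# is the classical erfc solution; the MaxEnt-disordered case is the simple
# exponential form exp(-x/sqrt(D*t)).
import math

def uniform_response(x, t, D=1.0):
    """Cumulative response for a single fixed diffusivity (classical)."""
    return math.erfc(x / (2.0 * math.sqrt(D * t)))

def disordered_response(x, t, D=1.0):
    """Cumulative response with a MaxEnt spread of diffusivities."""
    return math.exp(-x / math.sqrt(D * t))

for t in (0.05, 0.5, 5.0, 100.0):
    u, d = uniform_response(1.0, t), disordered_response(1.0, t)
    print(f"t={t:6.2f}  uniform={u:.4f}  disordered={d:.4f}")
```

At small times the disordered form sits above the uniform one (early arrivals), while at long times the uniform form sits closer to its asymptote, matching the two graphs.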
Again it essentially disperses the heat -- some gets to its destination faster and a sizable fraction later.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.comsol.com/stories/nasa_life_support/full/"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 220px; height: 174px;" src="http://static1.comsol.com/shared/images/stories/nasa_life_support/html/picture3.gif" alt="" border="0" /></a>This brings up the question of how we can get some direct evidence of this behavior from empirical data. With drift, the dispersion becomes much more obvious, as systems with uniform mobility and little disorder show very distinct knees (a la photocurrent time-of-flight measurements or solute breakthrough curves for uniform materials). Adding the MaxEnt variation makes the fat-tail behavior unmistakable, as you would observe from the anomalous transport behavior in amorphous semiconductors. With diffusion alone, the knee automatically smears, as you can see from the figure to the right for a typical thermal response measurement.<br /><br /><span style="font-weight: bold;font-size:130%;" >Evidence</span><br />Much of the interesting engineering and scientific work in characterizing thermal systems comes out of Europe. <a href="http://www.groenholland.com/nl/consultancy/site_testing_and_characterisation/trial_borehole_and_trt.php">This paper investigating earth-based heat exchangers</a> contains an interesting experiment. As a premise, they wrote the following, where incidentally they acknowledge the wide variation in thermal conductivities of soil:<br /><blockquote>The thermal properties can be estimated using available literature values, but the range of values found in literature for a specific soil type is very wide. Also, the values specific for a certain soil type need to be translated to a value that is representative of the soil profile at the location. 
The best method is therefore to measure directly the thermal soil properties as well as the properties of the installed heat exchanger.<br /><p align="justify">This test is used to measure with high accuracy:</p> <ul><li>The temperature response of the ground to an energy pulse, used to calculate: <ul><li>the effective thermal conductivity of the ground </li><li>the borehole resistance, depending on factors as the backfill quality and heat exchanger construction</li></ul> </li><li>The average ground temperature and temperature - depth profile. </li><li>Pressure loss of the heat exchanger, at different flows. </li></ul></blockquote>The authors of this study show a measurement for the temperature response to a thermal impulse, with the results shown over the course of a couple of days. I placed a solid red and blue line indicating the fit to an entropic model of diffusivity in the figure below. The mean diffusivity comes out to <span style="font-style: italic; font-weight: bold;">D</span>=1.5/hr (with the red and blue curves +/- 0.1 from this value) assuming an arbitrary measurement point of one unit from the source. 
This fit works arguably better than a fixed diffusivity as the variable diffusivity shows a quicker rise and a more gradual asymptotic tail to match the data.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSu3ZFofKvgNzovbSvwvRnlHD00qR78gMseTcF2ypy5i_aCqj5hV4fQdI6jZv-o6biXcOr-ehmutZJo2DloxoQ47YesQpM1zpVQr1dphvs8E7ZY7mGBdbljSI4mS0yb2aW7nL7/s1600/borehole.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 253px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSu3ZFofKvgNzovbSvwvRnlHD00qR78gMseTcF2ypy5i_aCqj5hV4fQdI6jZv-o6biXcOr-ehmutZJo2DloxoQ47YesQpM1zpVQr1dphvs8E7ZY7mGBdbljSI4mS0yb2aW7nL7/s400/borehole.gif" alt="" id="BLOGGER_PHOTO_ID_5479518469735980914" border="0" /></a><br />The transient thermal response tells us a lot about how fast a natural heat exchanger can react to changing conditions. One of the practical questions concerning their utility arises from how quickly the heat exchange works. Ultimately this has to do with extracting heat from a material showing a natural diffusivity and we have to learn how to deal with that law of nature. 
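As a sketch of how such a fit can be generated: the entropic response takes the form T(t) = T0 + T1*exp(-x/sqrt(D*t)), with the text's values D = 1.5 per hour (+/- 0.1 for the band) and x = 1 unit from the source. The offset T0 and amplitude T1 below are hypothetical placeholders, not values from the paper:

```python
# Sketch of the entropic (MaxEnt-disordered) response used for the borehole
# fit: T(t) = T0 + T1*exp(-x/sqrt(D*t)). D = 1.5/hr and x = 1 come from the
# text; T0 and T1 are hypothetical scale/offset parameters for illustration.
import math

def entropic_T(t_hours, D=1.5, x=1.0, T0=10.0, T1=8.0):
    return T0 + T1 * math.exp(-x / math.sqrt(D * t_hours))

# Band spanned by D = 1.4 .. 1.6 over a two-day test window
for t in (1, 6, 12, 24, 48):
    lo = entropic_T(t, D=1.4)
    hi = entropic_T(t, D=1.6)
    print(f"t={t:3d} h  T in [{lo:.2f}, {hi:.2f}]")
```

The quick early rise and the slow creep toward the T0+T1 asymptote are exactly the features that distinguish this form from a fixed-diffusivity fit.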
Much like we have to acknowledge the <a href="http://mobjectivist.blogspot.com/2010/04/wind-dispersion-and-renewable-hubbert.html">entropic variations in wind</a> or cope with variations in <a href="http://mobjectivist.blogspot.com/2010/04/fat-tail-in-co2-persistence.html">CO2 uptake</a>, we have to deal with the variability in the earth if we want to take advantage of our renewable geothermal resources.<br /><br /><span style="font-weight: bold;font-size:130%;" >Wind Variability in Germany</span> (2010-06-01)<br /><br />By adding more data to the post on <a href="http://mobjectivist.blogspot.com/2010/05/wind-energy-dispersion-analysis.html">wind dispersion</a>, we can observe how dispersion in wind speeds has a universal character. I picked up the previous data set from several years' worth of output from Ontario. This new set hails from northwest Germany and <a href="http://www.transpower.de/pages/tso_de/Transparenz/Veroeffentlichungen/Netzkennzahlen/Tatsaechliche_und_prognostizierte_Windenergieeinspeisung/index.htm">this site</a> (thanks to globi for the link). 
The data consists of wind power collected at 15 minute intervals.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7lTV4HpOCkjVmEoCCM20tScsTiXLNBXMwJqE9GriGDLoKROKgj9-IfT34YL_deM8abPWqrvaIF3YegfMWDaDOLq2F8gNcLcu8E2QSRRBRNY6dTm4KzNk70pL52h2CZqfhNj7t/s1600/nw-german.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 395px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7lTV4HpOCkjVmEoCCM20tScsTiXLNBXMwJqE9GriGDLoKROKgj9-IfT34YL_deM8abPWqrvaIF3YegfMWDaDOLq2F8gNcLcu8E2QSRRBRNY6dTm4KzNk70pL52h2CZqfhNj7t/s400/nw-german.gif" alt="" id="BLOGGER_PHOTO_ID_5477980256491972866" border="0" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEis8PRlmTAADtliuDl6X0LP1pccTB4IwYQnAxFHYKNuCRrks4301KxyrgEbJ4Yu36jns6riBYRuRD5395dEIK_DaeRYZDOP8LWe18MtFP3J4mErFOpzPhyphenhyphenNdQnbZ9CFgfVejrFO/s1600/wind-energy.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 293px; height: 198px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEis8PRlmTAADtliuDl6X0LP1pccTB4IwYQnAxFHYKNuCRrks4301KxyrgEbJ4Yu36jns6riBYRuRD5395dEIK_DaeRYZDOP8LWe18MtFP3J4mErFOpzPhyphenhyphenNdQnbZ9CFgfVejrFO/s1600/wind-energy.gif" alt="" border="0" /></a>Note that the same entropic dispersion holds as for Ontario (see graph to the right). Both curves display the same damped exponential probability distribution function for frequency of wind power (derived from wind speed). We also see the same qualitative cut-out above a certain power or wind energy level. 
As I said previously, we don't gain much by drawing from these higher power levels as they occur more sporadically than the nominally rated wind speeds at the upper reaches of the curve.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://aventa.ch/Bilder%20englisch/Leistungskurve%20AV-7%20englisch-Dateien/image001.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 409px; height: 273px;" src="http://aventa.ch/Bilder%20englisch/Leistungskurve%20AV-7%20englisch-Dateien/image001.gif" alt="" border="0" /></a>The figure to the right gives an explanation for the cut-out above the "max" wind speed. Globi also provided this <a href="http://alturl.com/8qn3">PDF</a> from Vestas, a maker of wind turbines. The end of the document has the complete spec.<br /><blockquote>Power regulation : pitch regulated with variable speed<br /><span style="font-weight: bold;">Operating data</span> <br />Rated power : 3,000 kW<br />Cut-in wind speed : 3 m/s<br />Rated wind speed : 12 m/s<br />Cut-out wind speed : 25 m/s<br /></blockquote>Too many people get the idea that the sporadic nature of wind confronts us with some kind of "problem". We will have to get used to a different way of thinking about wind. The entropic dispersion of wind acts much like a variation of the <a href="http://en.wikipedia.org/wiki/Carnot_cycle">Carnot cycle</a>. In the Carnot cycle of engine efficiency, we have to live with a maximum level of energy conversion based on temperature differences of the input and output reservoirs. With wind, the earth's environment and atmosphere provide the temperature differences, which leads directly to the variability over time.<br /><br />This leads to the fact that <span style="font-size:78%;">WITH WIND POWER, WE CAN ACHIEVE VERY HIGH USAGE EFFICIENCY GIVEN THE ENTROPIC CHARACTERISTICS OF THE WIND</span>. I put this in upper case because it amounts to a law of nature. 
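To put a number on how little the cut-out sacrifices: if available wind power really follows the damped-exponential (MaxEnt) distribution, the fraction of total energy above a level P* works out to (1 + P*/mean)*exp(-P*/mean). A sketch with illustrative numbers (not the German or Ontario data):

```python
# If wind power follows a damped-exponential (MaxEnt) distribution, the
# fraction of total energy available above a power level P* is
# (1 + P*/mean) * exp(-P*/mean). A cut-out at several times the mean power
# therefore sacrifices little energy -- those levels occur too sporadically.
import math, random

def tail_energy_fraction(p_star, p_mean):
    """Closed form for the exponential power distribution."""
    r = p_star / p_mean
    return (1.0 + r) * math.exp(-r)

# Monte Carlo cross-check against the closed form
random.seed(1)
p_mean = 1.0
samples = [random.expovariate(1.0 / p_mean) for _ in range(200_000)]
p_star = 4.0 * p_mean
mc = sum(p for p in samples if p > p_star) / sum(samples)
print(f"closed form: {tail_energy_fraction(p_star, p_mean):.4f}  MC: {mc:.4f}")
```

Cutting out above four times the mean power leaves less than a tenth of the available energy on the table, which is the quantitative version of "we don't gain much by drawing from these higher power levels."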
We need to talk about efficiencies within the constraints of the physical laws just as with the Carnot cycle. We will observe intermittency as a result of entropic dispersion and we have to get used to it. We should not call it a fundamental "problem", as we cannot change the characteristics of entropy (apart from adding energy, and that just moves us back to square one).<br /><br />Other people would suggest that the fundamental problem with farming derives from the intermittent nature of the rain. With farming, we adapt -- likewise with wind energy. Instead of a problem, we need to call it an opportunity.<br /><br /><br /><hr /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://img76.exs.cx/img76/8427/windsurfing-animate.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 114px; height: 87px;" src="http://img76.exs.cx/img76/8427/windsurfing-animate.gif" alt="" border="0" /></a>As a blast from the past, check out <a href="http://mobjectivist.blogspot.com/2004/09/forgery-exposed.html">my exposé</a> of the forged video editing by the George Bush marketing team against John Kerry. Wind energy advocates will have to watch out for these tactics as the right-wingers will project and frame any way they can to make wind look like a wimpy exercise designed by the elite for the elite.<br /><br /><span style="font-weight: bold;font-size:130%;" >The Word on Dispersion</span> (2010-05-29)<br /><br />Credit the Gulf oil disaster with allowing the words dispersion and dispersants to enter our common vocabulary. In the context of the spill, the use of dispersants on the oil causes the potentially sticky coagulating oil to split apart into finer granularity drops and somehow become more amenable to breaking down. 
Dispersion, in terms of its chemical definition, simply means spreading out particles in the medium, in this case seawater. So a dispersant breaks it up and dispersion scatters it about.<br /><br />The BP team apparently wanted to break the oil up so that it could easily migrate and essentially dilute its strength within a larger volume. So instead of allowing a highly concentrated dose of oil to impact a seashore or the ocean surface, the dispersants would force the oil to remain in the ocean volume, and let the vast expanse of nature take its course. Somebody in the bureaucratic hierarchy made the calculated decision to apply dispersants as a judgment call. I can't comment on the correctness of that decision but I can expound on the topic of dispersion, which no one seems to fully understand, even in a scientific context.<br /><br />As the media has forced us to listen to <a href="http://news.blogs.cnn.com/2010/05/25/gulf-coast-oil-spill-demystified-a-glossary/">made-up technical terms</a> such as "top kill", "junk shot", and "top hat" which describe all sorts of wild engineering fixes, I will take a turn toward the more fundamental notions of disorder, randomness, and entropy to explain that which we cannot necessarily control. I always think that if we can understand concepts such as dispersion from first principles, we actually have a good chance of understanding how to apply it to a range of processes besides oil spill dispersal. In other words, well beyond this rather specific interpretation, we can apply the fundamentals to other topics such as greenhouse gases, financial market fluctuations, and oil discovery and production, amongst a host of other natural or man-made processes. 
Really, it is this fundamental a concept.<br /><br /><span style="font-weight: bold;">Background</span><br /><br />If by the process of dispersion we want the particles to dilute as rapidly as possible, we need to somehow accelerate the rate or <span style="font-style: italic;">kinetics</span> of the interactions. This becomes a challenge of changing the fundamental nature of the process, via a homogeneous change, or by introducing heterogeneous pathways that provide alternate routes to faster kinetics. From this perspective, dispersion describes a mechanism to divergently spread out the rates and dilute the material from its originally concentrated form. One can analogize in terms of a marathon race; the initial concentration of runners at the starting line rapidly disperses or spreads out as the faster runners move to the front and the slower runners drop to the rear. In a typical race, you see nothing homogeneous about the makeup of the runners (apart from their human qualities); the elites, competitive amateurs, and spur-of-the-moment entrants cause the dispersion. Whether we want to achieve a homogeneous dispersion or not, we have to account for the heterogeneous nature of the material. In other words, we rarely deal with pure environments so we have to solve for much more than the limited variability we originally imagined. Generalizing from the rather artificial constraints of a marathon race, dispersion in other contexts (such as <a href="http://mobjectivist.blogspot.com/2010/04/dispersive-and-non-dispersive-growth-in.html">crystal growth or reservoir growth</a>) results from an increase of disorder as a direct consequence of entropy and the second law of thermodynamics.<br /><br />In terms of the spread in dispersion, we might often observe a tight bunching or a wide span in the results. 
The wider dispersion usually indicates a larger disorder, variability, or uncertainty in the characteristics -- a <a href="http://mobjectivist.blogspot.com/2010/04/power-laws-and-entrophic-systems.html">"fat-tail"</a> to the statistics so to speak. So when we introduce a dispersant into the system, we add another pathway and basically remove order (or introduce disorder) into the system. Dispersion may thus not accelerate a process in a uniform manner, but instead accelerates the differences in the characteristic properties of the material. This again describes an entropic process, and we have to add energy or find exothermic pathways to fight the tide of increasing disorder.<br /><br />This seems like such a simple concept, yet it rarely gets applied to most scientific discussions of the typical disordered process. Instead, particularly in an academic setting, what one usually reads amounts to pontificating about some abnormal or anomalous kind of random-walk that must occur in the system. The scientists definitely have a noble intention -- that of explaining a fat-tail phenomenon -- yet they don't want to acknowledge the most parsimonious explanation of all. 
They simply do not want to consider heterogeneous disorder as described by the <a href="http://mobjectivist.blogspot.com/2010/05/wind-energy-dispersion-analysis.html">maximum entropy principle</a>.<br /><br /><div style="text-align: center;"><br /></div><table style="text-align: left; margin-left: auto; margin-right: auto;" border="0"><tbody><tr><td><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.grunch.net/synergetics/images/random3.jpg"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 250px; height: 278px;" src="http://www.grunch.net/synergetics/images/random3.jpg" alt="" border="0" /></a></td><td><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://sethgodin.typepad.com/.a/6a00d83451b31569e2012877573fb6970c-800wi"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 250px; height: 276px;" src="http://sethgodin.typepad.com/.a/6a00d83451b31569e2012877573fb6970c-800wi" alt="" border="0" /></a><br /></td></tr></tbody></table><div style="text-align: center;"><span style="font-weight: bold;">Figure 1:</span> Difference between a classical random walk (left) and an anomalous random walk (right). The salient difference is that occasional long jumps (Levy flights) occur in the anomalous random walk. A much simpler approach admits that a heterogeneous mix of random walkers of different rates exists. This will give essentially the same observable outcome without resorting to arcane mathematical modeling.<br /><br /></div>The complicating factor in discussions about dispersion involves the intuitively related concepts of <span style="font-style: italic;">diffusion</span> and <span style="font-style: italic;">convection</span> or <span style="font-style: italic;">drift</span>. 
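The claim in the Figure 1 caption -- that a heterogeneous mix of ordinary random walkers reproduces the fat-tail outcome without Levy flights -- can be checked with a short simulation. Assuming a maximum-entropy (exponential) spread of diffusivities across walkers (my choice of parameters, purely illustrative):

```python
# Camp-2 sketch: give each walker its own diffusivity D drawn from a
# maximum-entropy (exponential) distribution. Displacements x ~ N(0, 2*D*t)
# averaged over exponential D become fat-tailed (in fact Laplace), so the
# excess kurtosis jumps from ~0 (homogeneous walkers) to ~3 -- no Levy
# flights required.
import random, statistics

random.seed(42)
N, t, D_mean = 100_000, 1.0, 1.0

def excess_kurtosis(xs):
    m = statistics.fmean(xs)
    s2 = statistics.fmean((x - m) ** 2 for x in xs)
    m4 = statistics.fmean((x - m) ** 4 for x in xs)
    return m4 / s2 ** 2 - 3.0

homog = [random.gauss(0.0, (2 * D_mean * t) ** 0.5) for _ in range(N)]
hetero = [random.gauss(0.0, (2 * random.expovariate(1 / D_mean) * t) ** 0.5)
          for _ in range(N)]

print("homogeneous excess kurtosis:", round(excess_kurtosis(homog), 2))
print("heterogeneous excess kurtosis:", round(excess_kurtosis(hetero), 2))
```

The heterogeneous ensemble shows strongly fat-tailed displacements even though every individual walker performs a perfectly classical random walk.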
Diffusion also derives from the statistics of disorder and describes how particles can spontaneously spread out without a real driving force, apart from the uniform environment, for example from the thermal background. The analysis of a particle undergoing random walk leads directly to the concept of diffusion. Random walk ideas seem to intrigue mathematicians and scientists because they place the concept of diffusion into a real concrete representation. In some sense everyone can relate to the idea of particles bouncing around, but not necessarily to the idea of a gradient in concentration.<br /><br />Convection and drift describe the motion of particles under an applied force, say charged particles under the influence of an electric field (<a href="http://mobjectivist.blogspot.com/2010/05/fokker-planck-for-disordered-systems.html">Haynes-Shockley</a>), or of solute or suspended particles under the influence of gravity (<a href="http://mobjectivist.blogspot.com/2008/07/solving-enigma-of-reserve-growth.html">Darcy's Law</a>). This essentially describes the typical constant velocity, akin to a terminal velocity, that we observe in a pure semiconductor (Haynes-Shockley) or a uniformly porous medium (Darcy's).<br /><br />Dispersion can affect both diffusion and drift, and that establishes the premise for the novel derivation that I came up with.<br /><br /><span style="font-weight: bold;">Breakthrough</span><br /><br />The unification of the dispersion and diffusion concepts could have a huge influence on the way we think about practical systems, if we could only factor the mathematics describing the process. <a href="http://mobjectivist.blogspot.com/2010/05/fokker-planck-for-disordered-systems.html">I can straightforwardly demonstrate a huge simplification assuming a single somewhat obvious premise.</a> This involves applying the conditions of maximum entropy, by essentially maximizing disorder under known constraints or moments (i.e. 
mean values, etc).<br /><br />The obviousness of this unifying solution contrasts with my lack of awareness of any similar simplification in the scientific literature. Surprisingly, I can't even confirm that anyone has really looked into the general idea. So far, I can't find any definitive work on this unification and little interest in pursuing this premise. Stating my point-of-view flatly, the result has such a comprehensive and intuitive basis that it should have a far-reaching impact on how we think about dispersion and diffusion. It just needs to gain a foothold of wider acceptance in the marketplace of ideas.<br /><br />This brings up a valid point I have heard directed my way. From my postings on <a href="http://theoildrum.com/">TheOilDrum.com</a>, commenters occasionally ask me why I don't publish these results in an academic setting, such as a journal article. To answer that, journals have evidently failed in this case, as I never find any serious discussion of dispersion unification. So consider that even if I submitted these ideas to a journal, the article may just sit there and no one would ever apply the analysis in any future topics. This makes it an utterly useless and ultimately futile exercise. I will risk putting the results out on a blog and take my chances. A blog easily has as much archival strength, offers much more rapid turnaround and the potential for critique, and has searchability (believe it or not, googling the term <a href="http://www.google.com/search?q=%22Dispersive+transport%22">"dispersive transport"</a> yields this blog as the #3 result, out of 16,200,000). The general concepts do not apply to any specific academic discipline apart perhaps from applied math, and I certainly won't consider publishing the results in that arena without risking that they disappear without a trace. Eventually, I want to place this information in a Wikipedia entry and see how that plays out. 
I would call it an experiment in <a href="http://www.opensourcescience.net/index.php?title=Main_Page">Open Source science.</a><br /><br />But that gets a little ahead of the significance of the current result.<br /><br /><span style="font-weight: bold;">The Unification of Diffusion and Drift with Dispersion</span><br /><br />As <a href="http://mobjectivist.blogspot.com/2010/05/fokker-planck-for-disordered-systems.html">my most recent post described</a>, solving the Fokker-Planck equation (FPE) under maximum entropy conditions provides the fundamental unification between dispersion, diffusion and drift. For fans of Taleb and Mandelbrot, this shows directly how "thin-tail" statistics become "fat-tail" statistics without resorting to fractal arguments.<br /><br />The Fokker-Planck equation shows up in a number of different disciplines. Really, anything having to do with diffusion or drift has a relation to Fokker-Planck. Thus you will see FPE show up in its various guises: <a href="http://en.wikipedia.org/wiki/Convection%E2%80%93diffusion_equation">Convection-Diffusion equation</a>, Fick's Second Law of Diffusion, Darcy's Law, Navier-Stokes (<a href="http://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equations">kind of</a>), <a href="http://mobjectivist.blogspot.com/2010/05/fokker-planck-for-disordered-systems.html">Shockley's Transport Equation</a>, Nernst-Planck; even something as seemingly unrelated as the Black-Scholes equation for finance has applicability for FPE (where the random walk occurs as fractional changes in a metric).<br /><br />Because of its wide usage, the FPE tends to take the form of a hammer, where everything it applies to acts as the nail. (Nowhere do you see this more than in finance, where Black-Scholes played the role of the hammer.) Since the solution of FPE results in a probability distribution, it gives the impression that some degree of disorder prevails in the system under study. 
I find this understandable since the concept of diffusion implies an uncertainty exactly like a random walk shows uncertainty. In other words, no two outcomes will turn out exactly the same. Yet, in mathematical terms, the measurable value associated with diffusion, the diffusion constant <span style="font-weight: bold; font-style: italic;">D</span>, has a fixed value for random motion in a homogeneous environment. When the parameters actually change, you enter the world of <a href="http://en.wikipedia.org/wiki/Stochastic_differential_equation">stochastic differential equations</a>; I won't descend too deeply into this area, only to apply this as a basic concept. The diffusion and mobility parameters have a huge variability that we have not yet adequately accounted for in many disordered systems.<br /><br />For that reason, the FP equation really applies to ordered systems that we can characterize well. Not surprisingly, the ordinary solution to FPE gives rise to the conventional ideas of normal statistics and thin-tails.<br /><br />So for phenomena that appear to depart from conventional normal diffusion (the so-called <a href="http://mobjectivist.blogspot.com/2009/06/dispersive-transport.html">anomalous diffusion</a>) we have two distinct camps and corresponding solution paths to choose from. The prevailing wisdom suggests that an entirely different kind of random walk occurs (Camp 1). No longer does the normal diffusion apply, giving rise to normal statistics; instead we get the statistics of fat-tails and random walk trajectories called <a href="http://en.wikipedia.org/wiki/L%C3%A9vy_flight">Levy flights</a> to concretely describe the situation (see Figure 1). The mathematics quickly gets complicated here and most of the results get cast into heuristic power-laws. 
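For contrast with those power-law heuristics, the ordinary fixed-parameter FPE (in its convection-diffusion form) can be integrated directly, and it stays resolutely thin-tailed: the impulse response remains a Gaussian drifting at velocity v and spreading as sqrt(2Dt). A minimal explicit finite-difference sketch, with grid choices of my own:

```python
# Ordinary fixed-D, fixed-drift Fokker-Planck / convection-diffusion:
#   dn/dt = D d2n/dx2 - v dn/dx
# integrated by explicit finite differences from a delta-function start.
# The result stays a thin-tailed Gaussian drifting at v and spreading as
# sqrt(2*D*t); fat tails only appear once D and v themselves vary.
D, v = 1.0, 2.0
dx, dt = 0.1, 0.002          # dt < dx^2/(2D) for stability
nx, steps = 600, 1000        # final time t = steps*dt = 2.0
n = [0.0] * nx
n[nx // 3] = 1.0 / dx        # unit-mass delta impulse at x0

for _ in range(steps):
    new = n[:]
    for i in range(1, nx - 1):
        diff = D * (n[i + 1] - 2 * n[i] + n[i - 1]) / dx**2
        drift = v * (n[i + 1] - n[i - 1]) / (2 * dx)
        new[i] = n[i] + dt * (diff - drift)
    n = new

t = steps * dt
x0 = (nx // 3) * dx
mean = sum(i * dx * ni * dx for i, ni in enumerate(n))
var = sum((i * dx - mean) ** 2 * ni * dx for i, ni in enumerate(n))
print(f"peak drifted ~{mean - x0:.2f} (v*t = {v*t:.2f}); "
      f"spread ~{var**0.5:.2f} (sqrt(2Dt) = {(2*D*t)**0.5:.2f})")
```

The recovered drift distance and spread match v*t and sqrt(2Dt), which is exactly the "conventional normal statistics" behavior the text attributes to the well-characterized, ordered case.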
It takes a leap of faith to follow these arguments.<br /><br />The question comes down to whether we wish to ascribe anomalous diffusion to a strange kind of random walk (Camp 1) or simply suggest that heterogeneity in diffusional and drift properties adequately describes the situation (Camp 2). I fall in the latter camp and stand pretty much alone in this regard. Pick any academic research article related to anomalous diffusion and you will find that very few accept the most parsimonious explanation -- that a range of diffusion constants and mobilities explains the results. Instead the researcher will punt and declare that some abstract Levy flight describes the motion. Above all I would rather think in practical terms, and simple variability has a very pragmatic appeal to it.<br /><br />I went through the derivation of the dispersive FPE solution for a disordered semiconductor in <a href="http://mobjectivist.blogspot.com/2010/05/fokker-planck-for-disordered-systems.html">the last post</a>, and want to generalize it here. This makes it especially applicable to the physical transport of material in porous media. 
This would include the <a href="http://mobjectivist.blogspot.com/2008/10/dispersive-discovery-field-size.html">motion of oil underground</a>, <a href="http://mobjectivist.blogspot.com/2010/05/how-shock-model-analysis-relates-to-co2.html">CO2 in the air</a>, and perhaps even spilled oil at sea.<br /><br />In the one-dimensional model of applying an impulse function of material, the concentration <span style="font-weight: bold; font-style: italic;">n</span> will disperse according to the following equation:<br /><blockquote><span style="font-weight: bold; font-style: italic;">n</span>(<span style="font-weight: bold; font-style: italic;">x</span>, <span style="font-weight: bold; font-style: italic;">z</span>) = ((<span style="font-weight: bold; font-style: italic;">z</span> + sqrt(<span style="font-weight: bold; font-style: italic;">zL</span> + <span style="font-weight: bold; font-style: italic;">z</span>^2))/sqrt(<span style="font-weight: bold; font-style: italic;">zL</span> + <span style="font-weight: bold; font-style: italic;">z</span>^2))*exp(-2<span style="font-weight: bold; font-style: italic;">x</span>/(<span style="font-weight: bold; font-style: italic;">z</span> + sqrt(<span style="font-weight: bold; font-style: italic;">zL</span> + <span style="font-weight: bold; font-style: italic;">z</span>^2)))<br /><br />where<br /><span style="font-weight: bold; font-style: italic;">z</span> = <span style="font-weight: bold; font-style: italic;">μFt</span><br /><span style="font-weight: bold; font-style: italic;">L</span> = <span style="font-weight: bold; font-style: italic;">β/F</span><br /></blockquote>The
term<span style="font-weight: bold; font-style: italic;"> z</span> takes the place of a time-scaled distance, which can speed up or slow down under the influence of a force <span style="font-weight: bold; font-style: italic;">F</span> (i.e. gravity, or electric field for a charged particle). The characteristic distance <span style="font-style: italic; font-weight: bold;">L</span> represents the effect of the stochastic force <span style="font-style: italic; font-weight: bold;">β</span> (aka <a href="http://mobjectivist.blogspot.com/2010/05/characterizing-mobility-in-disordered.html">Boltzmann's constant</a>) and ties in the diffusional aspects of the system. The specific parameterization of the exponential results in the fat-tail observed.<br /><br />In the past, I had never gone through the trouble of solving the FPE, simply because intuition would suggest that the dispersive envelope would cancel out most of the details of the diffusion term. In the dispersive transport model that I originally conceived, the dispersion would at most follow the leading wavefront of the drifting diffusional field as <span style="font-family:courier new;"> "sqrt(</span><span style="font-weight: bold; font-style: italic; font-family: courier new;">Lz</span><span style="font-family:courier new;">+</span><span style="font-weight: bold; font-style: italic; font-family: courier new;">z</span><span style="font-style: italic; font-family: courier new;"><span style="font-weight: bold;">^</span>2</span><span style="font-family:courier new;">)" </span> as described <a href="http://mobjectivist.blogspot.com/2009/06/dispersive-transport.html">here</a> or as<span style="font-family:courier new;"> "sqrt(</span><span style="font-weight: bold; font-style: italic; font-family: courier new;">Lz</span><span style="font-family:courier new;">)+</span><span style="font-weight: bold; font-style: italic; font-family: courier new;">z</span><span style="font-family:courier new;">" </span><a 
href="http://mobjectivist.blogspot.com/2010/05/characterizing-mobility-in-disordered.html">here</a>.<br /><br />I estimated that the diffusion term would follow as the square root of time according to <a href="http://mobjectivist.blogspot.com/2006/01/self-limiting-parabolic-growth.html">Fick's first law</a> and that drift would follow time linearly, with only an idea of the qualitative superposition of the terms in my mind.<br /><br />As one might expect, the actual entropic FPE solution borrows a little from each of my estimates, essentially averaging between the two:<br /><blockquote>(<span style="font-style: italic; font-weight: bold;">z</span> + sqrt(<span style="font-style: italic; font-weight: bold;">zL</span> + <span style="font-style: italic; font-weight: bold;">z</span>^2))/2<br /></blockquote>So the solution to the dispersive FPE form for a disordered system turns out to be entirely intuitive, and one can almost generate the result from inspection. The difference between the original entropic dispersion derivation and the full FPE treatment amounts to a bit of pre-factor bookkeeping in the first equation above. 
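The averaged length scale can be sanity-checked in a few lines. Here lambda(z, L) = (z + sqrt(zL + z^2))/2 (the function name is my own) recovers both of the earlier estimates in their respective limits:

```python
# Check the averaged length scale from the entropic FPE solution,
#   lambda(z, L) = (z + sqrt(z*L + z**2)) / 2,
# in its two limits: drift-dominated (L -> 0) gives lambda -> z, while
# diffusion-dominated (L >> z) gives lambda -> sqrt(z*L)/2.
import math

def length_scale(z, L):
    return (z + math.sqrt(z * L + z * z)) / 2.0

print(length_scale(1.0, 0.0))        # pure drift: exactly z = 1
print(length_scale(1.0, 1e6))        # diffusion-dominated
print(math.sqrt(1.0 * 1e6) / 2.0)    # compare: sqrt(z*L)/2
```

So the full solution really does interpolate between the square-root-of-time diffusive estimate and the linear-in-time drift estimate.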
You can see this by comparing the two approaches for the case of L=1 and unity width for the dispersive transport current model.<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqQFL9dcXbcZlkzYbmm4wis3TnhIQoja5djg9mUaRGVv8So8ADjFqByrMVSPOP-4SX6XO4sqELjgjfZ70TwVZy9OvdYB9K3a01sIbwVF1OqsmXwIaSmAes-YUfCroNfSHRn0n4/s1600/dispersion-diffusion-drift.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 347px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqQFL9dcXbcZlkzYbmm4wis3TnhIQoja5djg9mUaRGVv8So8ADjFqByrMVSPOP-4SX6XO4sqELjgjfZ70TwVZy9OvdYB9K3a01sIbwVF1OqsmXwIaSmAes-YUfCroNfSHRn0n4/s400/dispersion-diffusion-drift.png" alt="" id="BLOGGER_PHOTO_ID_5477151222281606578" border="0" /></a><span style="font-weight: bold;">Figure 2</span>: Differences between the original entropic dispersive model and the fully quantified FPE solution will converge as <span style="font-weight: bold; font-style: italic;">L</span> gets smaller.<br /></div><br /><span style="font-weight: bold;">Dispersive Transport in Porous Media.</span><br /><br />The above solved equations can actually apply directly as solutions to Darcy's law when it comes to describing the flow of material in a disordered porous medium. I suppose this will irk the petroleum engineers, hydrologists, and geologists out there who have long sought the solution to this particular problem.<br /><br />Yet we should not act surprised by this result. The actions of multiple processes acting concurrently on a mobile material will generally result in a universal form governed by maximum entropy. It doesn't matter if we model carriers in a semiconductor or particles in a medium; the result will largely look the same. 
In a hydraulic conductivity experiment, Lange treated the breakthrough curve of a trace element through a natural catchment with an FPE convection-dispersion model, and came up with the same results independent of the fractionation of the media.<br /><br />By applying the simple dispersion model (blue curve below) to Lange's results, one sees that an excellent fit results, with the fat-tail exactly following the <a href="http://mobjectivist.blogspot.com/2010/05/hyperbolic-decline-fat-tail-effect.html">hyperbolic decline</a> that reservoir engineers often see in long-term flow behavior. This could include the time-dependent emptying of the currently leaking deep-sea Gulf reservoir!<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZBOtuEfrM4fR1kClVgSYg5ymUTYhs02d6tdb3MRtmKKUn9rppnP-DPwM_pNKKIAlfVaItblvpXv6Jls1B3243S7Wc1ZHbdpNvbXXW8nWxhqIXvMUavH4RLDzE6y5BIXBMHpOq/s1600/haag.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 288px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZBOtuEfrM4fR1kClVgSYg5ymUTYhs02d6tdb3MRtmKKUn9rppnP-DPwM_pNKKIAlfVaItblvpXv6Jls1B3243S7Wc1ZHbdpNvbXXW8nWxhqIXvMUavH4RLDzE6y5BIXBMHpOq/s400/haag.gif" alt="" id="BLOGGER_PHOTO_ID_5476937725517794578" border="0" /></a><span style="font-weight: bold;">Figure 3</span>: Breakthrough curve of a traced material showing results from an entropic dispersion model in blue.<br /></div><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6B-pXiT2L-wbxg0eluj7IYExvFvWJXkU8zXg1Lv2du081NYFxp0OGDc4XO6lsCEPLdXpvQWELsv3Nqr5IRk465Dt60daBLck3eL7QFmnhmsnECRRyd_IEtgrz2vXwKLpWoh2E/s1600/haag2.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 200px; height: 153px;" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6B-pXiT2L-wbxg0eluj7IYExvFvWJXkU8zXg1Lv2du081NYFxp0OGDc4XO6lsCEPLdXpvQWELsv3Nqr5IRk465Dt60daBLck3eL7QFmnhmsnECRRyd_IEtgrz2vXwKLpWoh2E/s200/haag2.gif" alt="" id="BLOGGER_PHOTO_ID_5477222320820069122" border="0" /></a>Moreover, the amount of diffusion that occurs appears quite minimal. Adding a greater proportion of diffusion by increasing <span style="font-style: italic; font-weight: bold;">L</span> does not improve the fit of the curve (see the chart to the right). Just as in the semiconductor case, the shape has a significant meaning when analyzed from the perspective of maximum entropy.<br /><br />Nothing complicated about this other than admitting to the fact that heterogeneous disordered systems appear everywhere and we have to use the right models to characterize their behavior. <br /><br />The details of this experiment are described in the following papers:<br /><ol><li>D.Haag and M.Kaupenjohann, <a href="http://www.hyle.org/journal/issues/6/haag.htm">Biogeochemical Models in the Environmental Sciences: The Dynamical System Paradigm and the Role of Simulation Modeling</a></li><li>H. Lange, <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.785&rep=rep1&type=pdf">Are Ecosystems Dynamical Systems?</a></li></ol>The authors of these papers have mixed feelings about the applicability of modeling <span style="font-style: italic;">biogeochemical</span> systems and speculate whether we should use any kinds of models for "ecological risk assessment". They point out that ecological systems obviously can adapt under certain circumstances and no amount of physical modeling can predict which way the system will go. Will spilled oil decompose faster as the environment adapts around it? Will that make dispersion less relevant? 
Who knows?<br /><br />Still the work of modeling the physical process alone has enormous value as Haag and Kaupenjohann point out:<br /><p><span style="font-size:85%;"></span></p><blockquote><p><span style="font-size:85%;">Despite not being a ‘real’ thing, "a model may resonate with nature" (Oreskes <i>et al</i>. 1994) and thus has heuristic value, particular to guide further study. Corresponding to the heuristic function, Joergensen (1995) claims that models can be employed to reveal ecosystem properties and to examine different ecological theories. Models can be asked scientific questions about properties. According to Joergensen (1994), examples for ecosystem properties found by the use of models as synthesizing tools are the significance of indirect effects, the existence of a hierarchy, and the ‘soft’ character of ecosystems. However, we agree with Oreskes <i>et al</i>. (1994) who regard models as "most useful when they are used to challenge existing formulations rather than to validate or verify them". Models, as ‘sets of hypotheses’, may reveal deficiencies in hypotheses and the way biogeochemical systems are observed. Moreover, models frequently identify lacunae in observations and places where data are missing (Yaalon 1994). </span></p><p><span style="font-size:85%;">As an instrument of synthesis (Rastetter 1996), models are invaluable. They are a good way to summarize an individual research project (Yaalon 1994) and they are capable of holding together multidisciplinary knowledge and perspectives on complex systems (Patten 1994). </span></p><p><span style="font-size:85%;">While models as a product may have heuristic value, we would like to emphasize also the role of the modeling process: "[…] one of the most valuable benefits of modeling is the process itself. These benefits accrue only to participants and seem unrelated to the character of the model produced" (Patten 1994). 
Model building is a subjective procedure, in which every step requires judgment and decisions, making model development ‘half science, half art’ and a matter of experience (Hoffmann 1997, Hornung 1996). Thus modeling is a learning process in which modelers are forced to make explicit their notions about the modeled system and in which they learn how the analytically isolated components of a system can be ‘glued’ (Paton 1997). As modeling mostly takes place in groups, modeling and the synthesis of knowledge has to be envisaged as a dynamic communication process, in which criteria of relevance, the meaning of terms, the underlying concepts and theories, and so forth are negotiated. Model making may thus become a catalyst of interdisciplinary communication. </span></p><p><span style="font-size:85%;">In the assessment of environmental risks, however, an exclusively scientific modeling process is not sufficient, as technical-scientific approaches to ‘post-normal’ risks are unsatisfactory (Rosa 1998) and as the predictive capacity and operational validity of models (<i>e.g.</i> for scenario computation) is in doubt. The post-normal science approach (Funtowicz & Ravetz 1991, 1992, 1993) takes account of the stakes and values involved in environmental decision making. Following a ‘post-normal’ agenda, model development and model validation for risk assessment should become a trans-scientific (communication) task, in which "extended peer communities" participate and in which non-equivalent descriptions of complex systems are made explicit, negotiated, and synthesized. In current modeling practice, however, models are highly opaque and can rarely be penetrated even by other scientists (Oreskes, personal communication). As objects of communication, models still are closed systems and black boxes. </span></p></blockquote>We need to really take up the charge on this as our future depends on understanding the role of entropy in nature. 
For too long, we have not shown the intellectual curiosity to model how much oil we have underground, what size distribution the reservoirs take, and how fast they can empty, even though some <a href="http://www.google.com/search?q=dispersive+discovery+site%3Amobjectivist.blogspot.com">perfectly acceptable models</a> can describe this statistically, using dispersion no less!<br /><br />Now that the Macondo oil has discovered an escape hatch and has gone disordered on us and will go who-knows-where, it seems we can really make some headway in our common understanding. Nothing like having your feet in the fire.@whuthttp://www.blogger.com/profile/18297101284358849575noreply@blogger.com4tag:blogger.com,1999:blog-7002040.post-61717708719298022172010-05-24T18:59:00.000-07:002010-05-29T20:14:22.826-07:00Fokker-Planck for Disordered SystemsTo get the cost of photovoltaic (PV) systems down, we will have to learn how to efficiently use crappy materials. By crap I mean that mass-produced PV materials will end up getting rolled or extruded or organically grown. Unless we perfect the process, most everything will turn out non-optimal. We already know the difference between clean-room-cultivated single-crystal semiconducting material and the defect-ridden and often amorphous materials that nature and entropy drive us to. For performance-sensitive applications such as communications and computing we would only rarely consider disordered material as a candidate semiconductor. Certainly, the performance of these materials makes them unlikely candidates for high-speed processing -- yet for solar cell applications, they may serve us well. 
In the end, we just have to learn how to understand and deal with crap.<br /><br />The following will revisit a couple of <a href="http://mobjectivist.blogspot.com/2010/05/characterizing-mobility-in-disordered.html">previous</a> <a href="http://mobjectivist.blogspot.com/2009/06/dispersive-transport.html">posts</a> where I outlined a novel way to analyze the behavior of disordered semiconducting material. As far as I can tell, no one has proposed this particular approach before; if it does exist, I certainly can't find it in the literature. From one perspective, this analysis sets forth a baseline for the characterization of a maximally disordered semiconductor.<br /><br /><span style="font-size:130%;"><span style="font-weight: bold;">Background</span></span><br /><br />The prehistoric 1949 <a href="http://www.labtrek.net/proHSuk.html">Haynes-Shockley experiment</a> first measured the dynamic behavior of charged carriers in a semiconducting sample. It basically confirmed the solution of the diffusion (<a href="http://en.wikipedia.org/wiki/Fokker%E2%80%93Planck_equation">Fokker-Planck</a>) equation and demonstrated diffusion, drift, and recombination in a conceptually simple setup. 
<a href="http://pvcdrom.pveducation.org/index.html">This animated site</a> gives a very interesting overview of PV electrical behavior.<br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.labtrek.net/HaynesOptEng.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 322px; height: 159px;" src="http://www.labtrek.net/HaynesOptEng.jpg" alt="" border="0" /></a><br /><span style="font-weight: bold;">Figure 1</span>: Apparatus for the Haynes-Shockley experiment<br /></div><br />This setup works according to theory for an ordered semiconductor with uniform properties but apparently gets a bit unwieldy for any disordered or non-uniform material sample. I inferred this as conventional wisdom since most scientists either punt or use heuristics partially derived from the inscrutable work of a select group of random-walk theorists (see <a href="http://link.aps.org/doi/10.1103/PhysRevB.12.2455">Scher & Montroll</a>).<br /><br />I had previously applied a very straightforward interpretation to the problem of carrier transport in disordered material. My dispersion analysis essentially set aside the Fokker-Planck formalism for a mean value approximation where I tactically applied the Maximum Entropy Principle. In particular, I really like the MaxEnt solution because I can recite the solution from memory. It matches intuition in a conceptually simple way once you get into a disordered mind-set.<br /><br />In the real Haynes-Shockley experiment, a pulse gets injected at one electrode, and a nearly pure time-of-flight (TOF) profile results. 
The initial pulse ends up spreading out in width a bit, but the detected pulse usually maintains the essential Gaussian sigmoid shape.<br /><br /><span style="font-size:130%;"><span style="font-weight: bold;">Adding Disorder</span></span><br /><br />For the time-of-flight for a disordered system, the Maximum Entropy solution looks like:<br /><blockquote><span style="font-weight: bold; font-style: italic;">q</span>(<span style="font-weight: bold; font-style: italic;">t</span>) = <span style="font-weight: bold; font-style: italic;">Q</span> * exp(-<span style="font-weight: bold; font-style: italic;">w</span>/(sqrt((<span style="font-weight: bold; font-style: italic;font-family:arial;font-size:85%;" ><span style=";font-family:arial;font-size:85%;" ><span><span style=";font-family:arial;font-size:85%;" >μ</span></span></span></span><span style="font-weight: bold; font-style: italic;">Et</span>)<sup>2</sup> + 2<span style="font-weight: bold; font-style: italic;">Dt</span>)) <a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhUAcqWNyFUMbDM5s9WVVeErasw_r2B3YteqHhmOstdWntcT6X1bkG5ipMpp9lV6Rsa8Pw0eRueTFTH7DMeijoiXIk3hI-4ShQ9BxalF2pBHXOIHt0INSvTCgXH8fP3qEeN67iD/s1600/dt-eq1.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 117px; height: 36px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhUAcqWNyFUMbDM5s9WVVeErasw_r2B3YteqHhmOstdWntcT6X1bkG5ipMpp9lV6Rsa8Pw0eRueTFTH7DMeijoiXIk3hI-4ShQ9BxalF2pBHXOIHt0INSvTCgXH8fP3qEeN67iD/s200/dt-eq1.gif" alt="" id="BLOGGER_PHOTO_ID_5474173269887909410" border="0" /></a></blockquote>This essentially states that the expected amount of charge accumulated at one end of the sample (at a distance <span style="font-weight: bold; font-style: italic;">w</span>) at time <span style="font-weight: bold; font-style: italic;">t</span>, follows a maximum entropy probability distribution. 
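To make the expression concrete, here is a minimal numerical reading of q(t); all of the parameter values (w, Q, μ, E, D) are arbitrary illustrative numbers, not fitted to any experiment:

```python
import math

def q_maxent(t, w=1.0, Q=1.0, mu=1.0, E=1.0, D=0.05):
    """Expected charge collected at distance w by time t for the
    maximum-entropy time-of-flight profile:
        q(t) = Q * exp(-w / sqrt((mu*E*t)^2 + 2*D*t))
    """
    if t <= 0:
        return 0.0
    return Q * math.exp(-w / math.sqrt((mu * E * t) ** 2 + 2.0 * D * t))

# q(t) rises monotonically from 0 toward Q as the dispersed carriers
# arrive: diffusion dominates the argument at short times, drift at long.
for t in (0.01, 0.1, 1.0, 10.0):
    print(f"t={t:5.2f}  q={q_maxent(t):.4f}")
```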
The varying rates described by<span style="font-size:130%;"> </span><span style="font-weight: bold; font-style: italic;font-family:arial;font-size:130%;" ><span style="font-family:arial;"><span><span style="font-family:arial;">μ</span></span></span></span><span style="font-size:130%;"> </span>and <span style="font-weight: bold; font-style: italic;">D</span> disperse the speed of the carriers so that a broadened profile results from the initial pulse spike.<br /><br />The equation above formed the baseline for the interpretation I described initially <a href="http://mobjectivist.blogspot.com/2009/06/dispersive-transport.html">here</a>.<br /><br />For completeness, I figured to test my luck and see if I can bull my way through the basic diffusion laws. If I could produce an equivalent solution by applying the Maximum Entropy Principle directly to the Fokker-Planck equation, then this would give a better foundation for the "inspection" result above.<br /><br />The F-P diffusion equation gets expressed as a partial differential equation with a conservation law constraint:<br /><div style="text-align: left;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://upload.wikimedia.org/math/e/6/7/e67e52262260227c5bd70b93a6d20df0.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 476px; height: 44px;" src="http://upload.wikimedia.org/math/e/6/7/e67e52262260227c5bd70b93a6d20df0.png" alt="" border="0" /></a>In this case <span style="font-weight: bold;"><span style="font-style: italic;">D</span>1</span>=<span style="font-weight: bold; font-style: italic;font-family:arial;font-size:130%;" ><span style="font-family:arial;"><span><span style="font-family:arial;">μ</span></span></span></span>* (carrier mobility) and <span style="font-weight: bold;"><span style="font-style: italic;">D</span>2</span>=<span style="font-weight: bold; font-style: italic;">D</span>* (diffusion coefficient), and <span 
style="font-weight: bold; font-style: italic;">f</span>(<span style="font-weight: bold; font-style: italic;">x,t</span>)=<span style="font-weight: bold; font-style: italic;">n</span>(<span style="font-weight: bold; font-style: italic;">x,t</span>) (carrier concentration). With recombination, the solution in one-dimension looks like:<br /></div><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://upload.wikimedia.org/math/d/3/a/d3a7d978a128e629107b3ffcd100fdb8.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 359px; height: 47px;" src="http://upload.wikimedia.org/math/d/3/a/d3a7d978a128e629107b3ffcd100fdb8.png" alt="" border="0" /></a>This of course works for well-ordered semiconductors, but <span style="font-weight: bold; font-style: italic;">D*</span> and <span style="font-weight: bold; font-style: italic;font-family:arial;font-size:130%;" ><span style="font-family:arial;"><span><span style="font-family:arial;">μ</span></span></span></span><span style="font-weight: bold; font-style: italic;">*</span> will likely vary for disordered material. 
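For reference, the well-ordered baseline is just a drifting, spreading Gaussian. A minimal sketch of the textbook one-dimensional solution with recombination (all parameter values, including the lifetime tau, are illustrative):

```python
import math

def n_ordered(x, t, N=1.0, mu=1.0, E=1.0, D=0.1, tau=5.0):
    """Textbook 1-D drift-diffusion solution with recombination for an
    ordered sample: a Gaussian pulse that drifts at speed mu*E, spreads
    as sqrt(D*t), and decays with carrier lifetime tau."""
    spread = 4.0 * D * t
    return (N / math.sqrt(math.pi * spread)
            * math.exp(-(x - mu * E * t) ** 2 / spread)
            * math.exp(-t / tau))

# The pulse peak tracks the drift position x = mu*E*t while the peak
# height falls due to both spreading and recombination.
for t in (0.5, 1.0, 2.0):
    print(f"t={t}: n(peak)={n_ordered(1.0 * 1.0 * t, t):.4f}")
```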
I made the standard substitution via the Einstein Relation for<br /><div style="text-align: center;"><blockquote><span style="font-weight: bold; font-style: italic;">D</span>* = <span style="font-weight: bold; font-style: italic;">V<sub>t</sub> </span><span style="font-weight: bold; font-style: italic;font-family:arial;font-size:100%;" ><span style="font-family:arial;"><span><span style="font-family:arial;">μ*</span></span></span></span><br /></blockquote></div>where <span style="font-weight: bold; font-style: italic;">V<sub>t</sub></span> = <span style="font-style: italic; font-weight: bold;">β/q</span><b> </b> stands for the chemical or thermal potential at equilibrium (usually <span style="font-style: italic; font-weight: bold;">β </span>equals <span style="font-weight: bold; font-style: italic;">kT</span> where <span style="font-weight: bold; font-style: italic;">k</span> is Boltzmann's constant and <span style="font-weight: bold; font-style: italic;">T</span> is absolute temperature). 
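A quick sanity check on the numbers behind the Einstein relation (room temperature; the mobility is an arbitrary illustrative value of the low order seen in disordered polymers):

```python
# Einstein relation: D* = V_t * mu*, with thermal voltage V_t = k*T/q.
k = 1.380649e-23      # Boltzmann's constant, J/K
q = 1.602176634e-19   # elementary charge, C
T = 300.0             # absolute temperature, K

V_t = k * T / q       # thermal voltage, roughly 25.9 mV at 300 K
mu = 0.0025           # cm^2/(V*s), illustrative disordered-polymer mobility
D = V_t * mu          # diffusion coefficient, cm^2/s

print(f"V_t = {V_t * 1000:.2f} mV,  D = {D:.3e} cm^2/s")
```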
At equilibrium, the stochastic force of diffusion exactly balances the electrostatic force <span style="font-weight: bold; font-style: italic;">F</span> = <span style="font-weight: bold; font-style: italic;">qE</span>.<br /><br />From the basic physics, we can generate a maximum entropy density function for <span style="font-weight: bold; font-style: italic;">D</span><br /><blockquote><span style="font-weight: bold;">p</span>(<span style="font-weight: bold; font-style: italic;">D*</span>) = 1/<span style="font-weight: bold; font-style: italic;">D</span> * exp(-<span style="font-weight: bold; font-style: italic;">D*</span>/<span style="font-weight: bold; font-style: italic;">D</span>)<br /></blockquote>then<br /><blockquote><span style="font-weight: bold; font-style: italic;">n</span>(<span style="font-weight: bold; font-style: italic;">x,t</span>) = Integral <span style="font-weight: bold;">p</span>(<span style="font-weight: bold; font-style: italic;">D</span>*) * <span style="font-weight: bold; font-style: italic;">n<span style="font-size:78%;"><sub>mean</sub></span></span>(<span style="font-weight: bold; font-style: italic;">x,t</span>) over all <span style="font-weight: bold; font-style: italic;">D*</span><br /></blockquote>This looks hairy but the integral comes out straightforwardly as (ignoring the constant factors)<br /><blockquote><span style="font-weight: bold; font-style: italic;">n</span>(<span style="font-weight: bold; font-style: italic;">x,t</span>) = 1/sqrt(<span style="font-weight: bold; font-style: italic;">t</span>*(4<span style="font-weight: bold; font-style: italic;">D</span>+<span style="font-weight: bold; font-style: italic;">t</span>*(<span style="font-weight: bold; font-style: italic;">E</span><span style="font-weight: bold; font-style: italic;font-family:arial;font-size:85%;" ><span style=";font-family:arial;font-size:85%;" ><span><span style=";font-family:arial;font-size:85%;" >μ</span></span></span></span>)<sup>2</sup>)) * exp(-<span 
style="font-weight: bold; font-style: italic;">x</span>*<span style="font-weight: bold; font-style: italic;">R</span>(<span style="font-weight: bold; font-style: italic;">t</span>)) /<span style="font-weight: bold; font-style: italic;"> R</span>(<span style="font-weight: bold; font-style: italic;">t</span>)<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgskoIFb3loHojtoMRDAjzPUNgnnSig2zLEPQte4vIC286DaY-feD9BzvphB-dSYeGC9fKYw2yf2RxFHNbwtYH9BNuwpC9kllsLLednJq1eNPcECzWw7EEaHB3y8VdsW9SudQ_X/s1600/dt-eq2.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 124px; height: 52px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgskoIFb3loHojtoMRDAjzPUNgnnSig2zLEPQte4vIC286DaY-feD9BzvphB-dSYeGC9fKYw2yf2RxFHNbwtYH9BNuwpC9kllsLLednJq1eNPcECzWw7EEaHB3y8VdsW9SudQ_X/s200/dt-eq2.gif" alt="" id="BLOGGER_PHOTO_ID_5474173272837738082" border="0" /></a></blockquote>where<br /><blockquote><span style="font-weight: bold; font-style: italic;">R</span>(<span style="font-weight: bold; font-style: italic;">t</span>) = sqrt(1/(<span style="font-weight: bold; font-style: italic;">Dt</span>) + (<span style="font-weight: bold; font-style: italic;">E</span>/(2<span style="font-weight: bold; font-style: italic;">V<sub>t</sub></span>))<sup>2</sup>) - <span style="font-weight: bold; font-style: italic;">E</span>/(2<span style="font-weight: bold; font-style: italic;">V<sub>t</sub></span>)<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfSUQuD0FtQzO1HeWOuK7IpyiLL3_8IRyKMLPWyYCRn_tRrJjiC7rbQ0hUpPcoapNt-YsUJ0j3u2NIe88wZe4S_nwgwRecRLKJylVnWtwPFRtlOFmH6pXmCmOKjJilmf7fEkJP/s1600/dt-eq3.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 133px; height: 46px;" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfSUQuD0FtQzO1HeWOuK7IpyiLL3_8IRyKMLPWyYCRn_tRrJjiC7rbQ0hUpPcoapNt-YsUJ0j3u2NIe88wZe4S_nwgwRecRLKJylVnWtwPFRtlOFmH6pXmCmOKjJilmf7fEkJP/s200/dt-eq3.gif" alt="" id="BLOGGER_PHOTO_ID_5474173275176897346" border="0" /></a></blockquote><br />If we evaluate this for carriers that have reached the drain electrode at <span style="font-style: italic; font-weight: bold;">x=w</span>, the total charge collected q is:<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www1.wolframalpha.com/Calculate/MSP/MSP408019agbh2f7b81b1c40000575afa9hf66h682i?MSPStoreType=image/gif&s=53&w=218&h=49"></a><blockquote><span style="font-style: italic;"><span style="font-weight: bold;">q</span></span>(<span style="font-weight: bold; font-style: italic;">t</span>) = Q/sqrt(<span style="font-weight: bold; font-style: italic;">t</span>*(4<span style="font-weight: bold; font-style: italic;">D</span>+<span style="font-weight: bold; font-style: italic;">t</span>*(<span style="font-weight: bold; font-style: italic;">E</span><span style="font-weight: bold; font-style: italic;font-family:arial;font-size:85%;" ><span style=";font-family:arial;font-size:85%;" ><span><span style=";font-family:arial;font-size:85%;" >μ</span></span></span></span>)<sup>2</sup>) * exp(-<span style="font-weight: bold; font-style: italic;">w</span>*<span style="font-weight: bold; font-style: italic;">R</span>(<span style="font-weight: bold; font-style: italic;">t</span>)) / <span style="font-weight: bold; font-style: italic;">R</span>(<span style="font-weight: bold; font-style: italic;">t</span>)<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjb3tYo3crgCtx4poRk6P1ZGNcz-USxRcl6BrvLSJIASJcP8NGD7zElfktvEVvrbUpqtpFyxuPY_f1qGmAHgCxXRwNPxc2jZyKJvC84jq1tFsWNJOHtluDSa-MoJwmWLxoSKqIq/s1600/dt-eq4.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; 
cursor: pointer; width: 153px; height: 52px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjb3tYo3crgCtx4poRk6P1ZGNcz-USxRcl6BrvLSJIASJcP8NGD7zElfktvEVvrbUpqtpFyxuPY_f1qGmAHgCxXRwNPxc2jZyKJvC84jq1tFsWNJOHtluDSa-MoJwmWLxoSKqIq/s200/dt-eq4.gif" alt="" id="BLOGGER_PHOTO_ID_5474173280493145394" border="0" /></a></blockquote><br />The measured current is<br /><blockquote><span style="font-weight: bold; font-style: italic;">I</span>(<span style="font-weight: bold; font-style: italic;">t</span>) = mean of d<span style="font-weight: bold; font-style: italic;">q</span>(<span style="font-weight: bold; font-style: italic;">t</span>)/d<span style="font-weight: bold; font-style: italic;">t</span> from 0 to <span style="font-weight: bold; font-style: italic;">w</span><br /></blockquote>The simple entropic dispersive expression and the Fokker-Planck result obviously differ in their formulation, yet the two show the same asymptotic trends. For an arbitrary set of parameters, one can't detect a practical difference. Use whichever you feel comfortable with.<br /><a target="_blank" href="http://img8.imageshack.us/i/concentration.gif/"><img src="http://img8.imageshack.us/img8/3825/concentration.th.gif" align="right" border="0" /></a><br />I show the dynamics of the carrier profile in the animated GIF to the right. The initial profile starts with a spike at the origin and then the profile broadens as the mean starts drifting and diffusing to the opposing contact. You don't see much from this perspective as it looks completely like mush. 
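The claim that the two formulations can hardly be told apart is easy to check numerically. A minimal sketch with arbitrary parameters, taking V_t = D/μ from the Einstein relation and reading the squared term in R(t) as (E/(2V_t))^2, the dimensionally consistent interpretation:

```python
import math

def q_maxent(t, w, Q, mu, E, D):
    # simple entropic-dispersion (MaxEnt) form
    return Q * math.exp(-w / math.sqrt((mu * E * t) ** 2 + 2.0 * D * t))

def q_fpe(t, w, Q, mu, E, D):
    # full Fokker-Planck form; V_t = D/mu by the Einstein relation
    Vt = D / mu
    a = E / (2.0 * Vt)
    R = math.sqrt(1.0 / (D * t) + a * a) - a
    return (Q / math.sqrt(t * (4.0 * D + t * (E * mu) ** 2))
            * math.exp(-w * R) / R)

# Both forms climb toward Q at long times; the short-time spread is the
# pre-factor bookkeeping noted earlier.
w, Q, mu, E, D = 1.0, 1.0, 1.0, 1.0, 0.05
for t in (0.1, 1.0, 10.0, 100.0):
    print(f"t={t:6.1f}  MaxEnt={q_maxent(t, w, Q, mu, E, D):.4f}  "
          f"FPE={q_fpe(t, w, Q, mu, E, D):.4f}")
```

For these parameters the two columns agree to better than one percent over the drift-dominated range, which is the sense in which "one can't detect a practical difference."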
Yet, when plotted on a log-log scale, it does take on more character.<br /><br />The collected current profile looks like the following:<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCZtCGcLhYRkdm18dIziRINONHMPgIscGnm864QdV8jWGsazMY2QoEnp8pqXRbZT4-i2rJ2m38eAmlwlbqDcKpr2hXeoiOfGLU3XoR1nsxl4elHGqF4znsOzUGus7Ju9C0tV9h/s1600/apfo.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 293px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCZtCGcLhYRkdm18dIziRINONHMPgIscGnm864QdV8jWGsazMY2QoEnp8pqXRbZT4-i2rJ2m38eAmlwlbqDcKpr2hXeoiOfGLU3XoR1nsxl4elHGqF4znsOzUGus7Ju9C0tV9h/s400/apfo.gif" alt="" id="BLOGGER_PHOTO_ID_5474675163979556674" border="0" /></a><span style="font-weight: bold;">Figure 2:</span> Typical photocurrent trace showing the initial diffusional spike, a plateau for relatively constant collection from the active region, and then a power-law tail produced from the entropic drift dispersion.<br /><br /></div><br /><br /><span style="font-weight: bold;">Organic Semiconductor Applications</span><br /><br />The photocurrent profile displayed above came from Andersson's <span style="font-style: italic;">"Electronic Transport in Polymeric Solar Cells and Transistors"</span> (<a href="http://liu.diva-portal.org/smash/get/diva2:17130/FULLTEXT01">2007</a>) wherein he analyzed the transport in a specific organic semiconducting material, the polymer APFO.<br /><br />The <span style="color: rgb(51, 51, 255);">blue </span><span style="color: rgb(51, 51, 255);">line</span> drawn through the set of traces follows the entropic dispersion formulation. 
The upper part of the curve describes the diffusive spike while the lower part generates the fat-tail due to the drift component (this shows an inverse square power law in the tail).<br /><br /><div style="text-align: center;"><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9EhYtvgT13c2QxnUXKLl-aKTVgdtRNrKfaUImtWMv-D_Uvr9xVhWkm6rZw9EK4JUtHPYrNUa8M9RrTZe9H2Kro-1BXePT-lfOy7L33pHlKFm-OU04DYmw0kuojZAPdTd9x9wS/s1600/apfo.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 301px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9EhYtvgT13c2QxnUXKLl-aKTVgdtRNrKfaUImtWMv-D_Uvr9xVhWkm6rZw9EK4JUtHPYrNUa8M9RrTZe9H2Kro-1BXePT-lfOy7L33pHlKFm-OU04DYmw0kuojZAPdTd9x9wS/s400/apfo.gif" alt="" id="BLOGGER_PHOTO_ID_5474676291170438050" border="0" /></a><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFfBUYN6Hf8n5UPZcEmKHZydRmyMo3a93EOZ2eEfy8MF8vocLxT-Wk7ve7XbbK8UmstrR_Z8flOOnCCaPFiGSwRUQdaS3JZed6ClaAyFw-jGU2WhBv6iIpdbJ9hqll0shWEzEJ/s1600/apfo2.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 267px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFfBUYN6Hf8n5UPZcEmKHZydRmyMo3a93EOZ2eEfy8MF8vocLxT-Wk7ve7XbbK8UmstrR_Z8flOOnCCaPFiGSwRUQdaS3JZed6ClaAyFw-jGU2WhBv6iIpdbJ9hqll0shWEzEJ/s400/apfo2.gif" alt="" id="BLOGGER_PHOTO_ID_5474685076722614146" border="0" /></a><span style="font-weight: bold;">Figure 3</span>: Universal profile generated over a set of applied electric field values. For this set, scaling of transit time with respect to the applied field holds, indicative of a constant mobility. 
However, carrier diffusion causes the initial transient and this does not scale, as the electric field has no effect on diffusion, as shown in the lower set of<span style="color: rgb(102, 204, 204);"> <span style="color: rgb(51, 51, 255);">blue curves</span></span>.<br /><br /></div>As I stated in the <a href="http://mobjectivist.blogspot.com/2010/05/characterizing-mobility-in-disordered.html">previous post</a>, most scientists when discussing this shape have either (1) referred to Scher/Montroll and the vague heuristic <span style="font-weight: bold; font-style: italic;">α</span>, (2) dismissed these features, or (3) labelled them as uninteresting. Andersson follows suit:<br /><blockquote> At best this transient, as the high α value indicates, might be possible to evaluate in a meaningful way with a bit of error and at worst it is of no use. Either way the amount of material and effort required is rather large compared to the usefulness of the results. APFO-4 is also the polymer that, among the investigated, gives the ”nicest” transients. The conclusion from this is that if alternative measurement techniques can be used it is not worthwhile to do TOF.<br /></blockquote>Not to dismiss the hard work that went into Andersson's experiment, but I would beg to differ with his assessment of the worthiness of the approach. When characterizing a novel material, every measurement adds to the body of knowledge, and as the interpretation of the aggregation of data becomes more cohesive, we end up learning much more of the internal structure. As I have learned, if someone does not understand a phenomenon, they tend to dismiss it (myself included).<br /><br />By their very nature, disordered systems contain a huge state space and we really can't afford to throw out any information.<br /><br />Which brings up another interesting set of <a href="http://jialigao.org/kiniu/thesis.pdf">TOF experiments</a> that I dug up. 
These also deal with organic semiconducting materials -- the polymers with the abbreviations ANTH-OXA6t-OC12 and TPA-Cz3d. The following figures show the TOF results for various applied voltages. I superimposed the entropic dispersion equation form as the <span style="color: rgb(204, 0, 0);">red line</span> with the derived mobility in the caption below each figure. The original researcher had applied the Scher&Montroll Continuous Time Random Walk (CTRW) heuristic as indicated by the intersecting sloped lines. The CTRW model clearly fails in this situation as the slopes need quite a bit of creative interpretation. Note that we don't observe the diffusive spike; I integrated the charge from 10% to 100% of the width instead of 0% to 100%.<br /><table border="0"><br /><tbody><tr><td><div style="text-align: center; font-weight: bold;">ANTH-OXA6t-OC12 </div><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjTcTm63QDKAXjE44kmi3XAvobX7oi8eNjRKDAVFMIQz-CPKn5He10Mc4oc2AIyleLt-lFpuLknhcPsnIswJODJV_QDz-UyfSa4ZRkOwomT2QCu1wuoo9lffsk4Oi9c7jxgdxv/s1600/a40.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 200px; height: 196px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjTcTm63QDKAXjE44kmi3XAvobX7oi8eNjRKDAVFMIQz-CPKn5He10Mc4oc2AIyleLt-lFpuLknhcPsnIswJODJV_QDz-UyfSa4ZRkOwomT2QCu1wuoo9lffsk4Oi9c7jxgdxv/s200/a40.gif" alt="" id="BLOGGER_PHOTO_ID_5473932157484420370" border="0" /></a><span style="font-weight: bold; font-style: italic;font-family:arial;font-size:85%;" ><span style=";font-family:arial;font-size:85%;" ><span><span style=";font-family:arial;font-size:85%;" >μ = 0.0025<br /></span></span></span></span></td><td><div style="text-align: center;"><span style="font-weight: bold;">TPA-Cz3d</span><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBveLGY1j-MwKF4Leuikci2yafKtXUCnyIH5n7SO46WS9lCYly_-hBz-4uOdK1hqqMz3Ze8DUEElM1sF7wFnhv6Rrd5nCmDedV5HSKX7Y8hiNJFaQgzM0aYO-ypXeus6YyaQf_/s1600/t40.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 200px; height: 196px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBveLGY1j-MwKF4Leuikci2yafKtXUCnyIH5n7SO46WS9lCYly_-hBz-4uOdK1hqqMz3Ze8DUEElM1sF7wFnhv6Rrd5nCmDedV5HSKX7Y8hiNJFaQgzM0aYO-ypXeus6YyaQf_/s200/t40.gif" alt="" id="BLOGGER_PHOTO_ID_5473931875335376850" border="0" /></a></div><span style="font-weight: bold; font-style: italic;font-family:arial;font-size:85%;" ><span style=";font-family:arial;font-size:85%;" ><span><span style=";font-family:arial;font-size:85%;" >μ = 0.0013<br /></span></span></span></span></td><br /></tr><br /><tr><td><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwA4Dy65IXu3JAiALooy4N55CmqMBEw7TZWdLDaI_L9nLBM3Ru8B7wTCIetZkRO5abYQrbY2JscPaayL8WdRAhtKPeHAmtxdMBk_tCXtbFcPK9yS6xaPGIIGXf_H3aJ_gyvTgE/s1600/a60.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 200px; height: 196px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwA4Dy65IXu3JAiALooy4N55CmqMBEw7TZWdLDaI_L9nLBM3Ru8B7wTCIetZkRO5abYQrbY2JscPaayL8WdRAhtKPeHAmtxdMBk_tCXtbFcPK9yS6xaPGIIGXf_H3aJ_gyvTgE/s200/a60.gif" alt="" id="BLOGGER_PHOTO_ID_5473932153176720130" border="0" /></a><span style="font-weight: bold; font-style: italic;font-family:arial;font-size:85%;" ><span style=";font-family:arial;font-size:85%;" ><span><span style=";font-family:arial;font-size:85%;" >μ = 0.00155<br /></span></span></span></span></td><td><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-3C-XD-OjfnOOWKdPnsA2nTH8UDcOl0ihtEtvceJwoKMbWuF0UIqno9M6oMZEzXyKUfAqvQIW5kneU7km8U57JZzBmRDdu-7LD87fKnDDbVEROeMByY4wd6khDXPp-r3WowZj/s1600/t60.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 200px; height: 196px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-3C-XD-OjfnOOWKdPnsA2nTH8UDcOl0ihtEtvceJwoKMbWuF0UIqno9M6oMZEzXyKUfAqvQIW5kneU7km8U57JZzBmRDdu-7LD87fKnDDbVEROeMByY4wd6khDXPp-r3WowZj/s200/t60.gif" alt="" id="BLOGGER_PHOTO_ID_5473931882420836434" border="0" /></a><span style="font-weight: bold; font-style: italic;font-family:arial;font-size:85%;" ><span style=";font-family:arial;font-size:85%;" ><span><span style=";font-family:arial;font-size:85%;" >μ = 0.0004<br /></span></span></span></span></td><br /></tr><br /><tr><td><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYMZ6HRttv-SOYxkEgueYjGGYiNrd2pKZPsl4CsmqQP-fsD0R014UQA6yOu7zjDXA-oVYpwWYEKhAZlRI_RUbony52NwhtCZxxL2eddjh5N61o2eX9SIwE63AD_bvQwhZVTW4I/s1600/a80.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 200px; height: 196px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYMZ6HRttv-SOYxkEgueYjGGYiNrd2pKZPsl4CsmqQP-fsD0R014UQA6yOu7zjDXA-oVYpwWYEKhAZlRI_RUbony52NwhtCZxxL2eddjh5N61o2eX9SIwE63AD_bvQwhZVTW4I/s200/a80.gif" alt="" id="BLOGGER_PHOTO_ID_5473932162135550162" border="0" /></a><span style="font-weight: bold; font-style: italic;font-family:arial;font-size:85%;" ><span style=";font-family:arial;font-size:85%;" ><span><span style=";font-family:arial;font-size:85%;" >μ = 0.00125<br /></span></span></span></span></td><td><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlSGjlRqdyUTSXdeijRPXe7ciB3ZGmEx2CbYe3mkpvXcPcHlQXE2xqhpq-PR6QyCy0SHoKJLctOhLXJ5MDC92RhJvlLIRoqpxt98x-OaJup2w_x4aMUcLDo8BvGqipDmwvNyOg/s1600/t80.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 200px; height: 196px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlSGjlRqdyUTSXdeijRPXe7ciB3ZGmEx2CbYe3mkpvXcPcHlQXE2xqhpq-PR6QyCy0SHoKJLctOhLXJ5MDC92RhJvlLIRoqpxt98x-OaJup2w_x4aMUcLDo8BvGqipDmwvNyOg/s200/t80.gif" alt="" id="BLOGGER_PHOTO_ID_5473931896285924178" border="0" /></a><span style="font-weight: bold; font-style: italic;font-family:arial;font-size:85%;" ><span style=";font-family:arial;font-size:85%;" ><span><span style=";font-family:arial;font-size:85%;" >μ = 0.0005<br /></span></span></span></span></td><br /></tr><br /><tr><td><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj193Xez60RiSee2Ot1bCTOIRoQq5nUllSCvhSEaqJ_A9RXKaeUFUUESAepa__0GRC_D4ic-OSxImkhzHOwiTtDPM20Kslu2n2kCJJYk_EOEbv0j3puCTO29FyRJkSck0Br2piY/s1600/a100.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 200px; height: 196px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj193Xez60RiSee2Ot1bCTOIRoQq5nUllSCvhSEaqJ_A9RXKaeUFUUESAepa__0GRC_D4ic-OSxImkhzHOwiTtDPM20Kslu2n2kCJJYk_EOEbv0j3puCTO29FyRJkSck0Br2piY/s200/a100.gif" alt="" id="BLOGGER_PHOTO_ID_5473932142468001586" border="0" /></a><span style="font-weight: bold; font-style: italic;font-family:arial;font-size:85%;" ><span style=";font-family:arial;font-size:85%;" ><span><span style=";font-family:arial;font-size:85%;" >μ = 0.00085<br /></span></span></span></span></td><td><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkxMQGVDd4OKRSz-ecKOYQgigVZxz13jrkKpFiEEW3zKcrLqQzEcyTW0lTcKK8SbDtQrZMPaZ4uhQVXHsgx1Bw4jmMymYkjoZ8qa41T1EXZFoa0lDNvIKfTCYskGvHwR5bMehn/s1600/t100.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 200px; height: 196px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkxMQGVDd4OKRSz-ecKOYQgigVZxz13jrkKpFiEEW3zKcrLqQzEcyTW0lTcKK8SbDtQrZMPaZ4uhQVXHsgx1Bw4jmMymYkjoZ8qa41T1EXZFoa0lDNvIKfTCYskGvHwR5bMehn/s200/t100.gif" alt="" id="BLOGGER_PHOTO_ID_5473931888117111490" border="0" /></a><span style="font-weight: bold; font-style: italic;font-family:arial;font-size:85%;" ><span style=";font-family:arial;font-size:85%;" ><span><span style=";font-family:arial;font-size:85%;" >μ = 0.0006<br /></span></span></span></span></td><br /></tr><br /><span style=";font-family:Times;font-size:100%;" ><span style=";font-family:Times;font-size:12px;" ><span style=";font-family:Times;font-size:100%;" ><span style=";font-family:Times;font-size:12px;" ><div style="position: absolute; top: 91548px; left: 188px;"><nobr></nobr></div></span></span></span></span><br /></tbody></table><br /><br /><br /><hr /><br /><br />The number of papers I find, especially when dealing with organic semiconductors, that cannot apply the Scher/Montroll theory indicates that it truly lacks any generality. In other words, it works crappily for describing disorderly crap. I will also say the theory has some very serious flaws, including the claim that an <span style="font-weight: bold; font-style: italic;">α</span> = 1 defines a non-dispersive material. How could a power-law of -2 be anything but dispersive?<br /><br />The fact that the entropic dispersion formulation works on any disordered material makes it much more general. 
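To put a number on that rhetorical question, here is a minimal sketch (my own illustration, not taken from any of the papers above), assuming the transient tail falls off as I(t) ∝ (1 + t/τ)^-2, the inverse-square tail noted in Figure 3. The local log-log slope settles at -2, which under the Scher/Montroll reading of slope = -(1+α) gives α = 1 -- hardly non-dispersive.

```python
import math

def tail_current(t, tau=1.0):
    # Hypothetical entropic-dispersion tail: I(t) ~ (1 + t/tau)^-2
    return (1.0 + t / tau) ** -2

def loglog_slope(f, t, factor=1.01):
    # Local slope d(ln I)/d(ln t), estimated by a finite difference
    t2 = t * factor
    return (math.log(f(t2)) - math.log(f(t))) / (math.log(t2) - math.log(t))

# Deep in the tail the slope approaches -2, i.e. alpha = 1 under the
# Scher/Montroll identification slope = -(1 + alpha)
slope = loglog_slope(tail_current, 1000.0)
alpha = -slope - 1.0
print(slope, alpha)
```

A power-law tail with exponent -2 spreads arrivals over decades of time, which is about as dispersive as it gets.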
Several years ago Scher wrote a popular article for <a href="http://lipid.phys.cmu.edu/biophys/Scher%20PhysToday%2091.pdf">Physics Today</a> extolling the wonders of his theory and how it seemed to fit a variety of disordered systems. He mentioned how well it fit amorphous silicon, based on the number of orders of magnitude over which his piece-wise line segments matched. Well, the entropic dispersion does just as well:<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjq01RvE9hr42GnNTvvu0VWW708BOS1o9EbiuF4czM3CqyReuL2lm1NRoMXpT_GhvavWv6wZVbbwI6LxZD6h0ZnuKaAwQ5tIQYK95E9qYH8LE1hpTdR2EB4YxBCyXhISCvxMmYe/s1600/tjiede.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 309px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjq01RvE9hr42GnNTvvu0VWW708BOS1o9EbiuF4czM3CqyReuL2lm1NRoMXpT_GhvavWv6wZVbbwI6LxZD6h0ZnuKaAwQ5tIQYK95E9qYH8LE1hpTdR2EB4YxBCyXhISCvxMmYe/s400/tjiede.gif" alt="" id="BLOGGER_PHOTO_ID_5474706606569203218" border="0" /></a>And there is nothing mysterious about that slope of 0.5; it results from the square-root dependence of diffusion on time.<br /><br /><hr /><br /><br /><span style="font-weight: bold;">Waste Half-Life</span> (2010-05-21)<br /><br />The big Gulf Spill got me thinking about the half-life of the leaking crude oil and the expanding slick. First of all, the oil will biodegrade over time. We don't have the situation of CO2, where <a href="http://mobjectivist.blogspot.com/2010/04/fat-tail-in-co2-persistence.html">a sizable fraction will wander around the atmosphere</a> trying to find a suitable location to react and form solutes.<br /><br />Most of the oil will stay on the surface, where it will get plenty of attention from aerobic microorganisms.
Some of the oil will sink into the ocean, find anaerobic conditions at the bottom, and essentially become inert, or it will wash up on shore as sticky globs. The composition of crude oil also includes many different hydrocarbons, some of which <a href="http://www.iosc.org/papers/02115.pdf">biodegrade at much slower rates</a> due to their molecular structure.<br /><br />So I imagine that we can't calculate the half-life of the spilled oil in terms of a single rate constant, <span style="font-style: italic; font-weight: bold;">k</span>. That kind of first-order kinetics would show an exponential decline, which proceeds pretty quickly once you get past the characteristic lifetime, 1/<span style="font-weight: bold; font-style: italic;">k</span> (the half-life itself is ln(2)/<span style="font-weight: bold; font-style: italic;">k</span>). Instead we will get a mix of rates, with the fast rates dominating initially and the slower rates picking up the slack.<br /><br />Radioactive waste dumps also show a <a href="http://knowledgepublications.com/doe/doe_nuclear_physics_detail.htm">mix of decay constants</a>. Nominally, a radioactive material will show a single Poisson emission rate, leading to an exponential decline over time.
But when the different radioactive materials get combined, the Geiger counter will pick up this mixture of rates, and the decline will turn from an exponential into a fat-tail distribution; see the red curve below.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgwW62zWBCKk8Rak7BFSltmqZZKKctEd_S0E3wWgmafRmW3g-e33e6njcHsTxOiU6RCSIcyp0v8UUSiuZ0NgiSlSmip84OVn2rz1W5N0yvt3bTzUOSvG0-n5LfY-9BV7Evs89i/s1600/radioactive_decay_rates.gif"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 238px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgwW62zWBCKk8Rak7BFSltmqZZKKctEd_S0E3wWgmafRmW3g-e33e6njcHsTxOiU6RCSIcyp0v8UUSiuZ0NgiSlSmip84OVn2rz1W5N0yvt3bTzUOSvG0-n5LfY-9BV7Evs89i/s400/radioactive_decay_rates.gif" alt="" id="BLOGGER_PHOTO_ID_5467657584676113074" border="0" /></a><br />A maximum entropy mix of decay rates (where a high decay rate indicates a potentially more energetic state) will generate the following half-life decline profile:<br /><blockquote><span style="font-style: italic; font-weight: bold;">P</span>(<span style="font-weight: bold; font-style: italic;">t</span>) = 1/(1+<span style="font-style: italic; font-weight: bold;">k</span>*<span style="font-weight: bold; font-style: italic;">t</span>)</blockquote>where <span style="font-style: italic; font-weight: bold;">k</span> is the average of the individual rates. This looks exactly the same as the <a href="http://mobjectivist.blogspot.com/2010/05/hyperbolic-decline-fat-tail-effect.html">hyperbolic decline of reservoirs in my last post</a>.<br /><br />As you can see, the combined activity shows a much larger equivalent half-life since the tail has so much meat in it. In the limit of a full dispersion of rate constants, the average half-life actually diverges, growing as the logarithm of time.
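This mixture result is easy to check numerically. Here is a quick Monte Carlo sketch (my own illustration; the mean rate k is an arbitrary choice): draw decay rates from a maximum entropy (i.e. exponential) distribution with mean k, average the individual exponential declines, and compare against 1/(1+k*t).

```python
import math
import random

random.seed(42)

def mixed_survival(t, k=0.5, n=200_000):
    # Average of exponential decays whose rates are drawn from a
    # maximum-entropy (exponential) distribution with mean rate k.
    # Note: random.expovariate takes 1/mean as its argument.
    return sum(math.exp(-random.expovariate(1.0 / k) * t)
               for _ in range(n)) / n

def hyperbolic(t, k=0.5):
    # Closed-form MaxEnt mixture: P(t) = 1/(1 + k*t)
    return 1.0 / (1.0 + k * t)

for t in (1.0, 2.0, 10.0):
    print(t, mixed_survival(t), hyperbolic(t))
```

The Monte Carlo average lands on the hyperbolic curve to within sampling noise, with no exponential roll-off in sight.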
However, it never reaches this limit, because the slowest decay rate present will eventually dominate, and that term does not diverge.<br /><br />In any case, this gives a good qualitative description of a random waste dump.<br /><br />If I make the same MaxEnt assumption for crude oil and assume that the most energetic oil (by the bond strength of the hydrocarbon [1]) will likely prove the most difficult to decompose, then the half-life may also show a fat tail similar to that of a waste dump. It looks like benzene breaks down much more slowly than diesel oil, for example.<br /><br />As usual, disordered natural phenomena show many of the same dispersive characteristics, driven largely by the maximization of entropy.<br /><br /><br /><hr width="50%" /><br /><br /><span style="font-weight: bold;">Notes:</span><br /><br /><span style="font-weight: bold;">[1]</span> For the derivation, we assume a mean energy E0, so that the probability density function shows many small energies and progressively fewer high energies:<br /><blockquote>p(E) = exp(-E/E0)/E0</blockquote>but the decomposition rate depends linearly on E (R = k*E), so that<br /><blockquote>P(t) = integral of P(t|E)p(E) over all E<br />P(t|E) = exp(-k*E*t)<br /><br />P(t) = 1/(1+k*E0*t)<br /><br /></blockquote>(<a href="http://mobjectivist.blogspot.com/2010/04/fat-tail-in-co2-persistence.html">See this for a more detailed derivation.</a>)
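The integration step in note [1] can also be verified numerically. A pure-Python trapezoid-rule sketch (the values of k and E0 are arbitrary): integrating exp(-k*E*t) against the MaxEnt prior p(E) = exp(-E/E0)/E0 should land on 1/(1+k*E0*t).

```python
import math

def mixture_survival(t, k=1.0, E0=1.0, n=200_000):
    # Trapezoid-rule evaluation of
    #   P(t) = integral over E of exp(-k*E*t) * exp(-E/E0)/E0
    # truncated at Emax = 60*E0, where the integrand is negligible.
    Emax = 60.0 * E0
    h = Emax / n
    def f(E):
        return math.exp(-k * E * t - E / E0) / E0
    s = 0.5 * (f(0.0) + f(Emax))
    s += sum(f(i * h) for i in range(1, n))
    return s * h

def closed_form(t, k=1.0, E0=1.0):
    # The hyperbolic result quoted in note [1]
    return 1.0 / (1.0 + k * E0 * t)

for t in (0.5, 1.0, 5.0):
    print(t, mixture_survival(t), closed_form(t))
```

The numerical integral and the hyperbolic closed form agree to several decimal places, confirming the mixture calculation.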