[[ Check out my Wordpress blog Context/Earth for environmental and energy topics tied together in a semantic web framework ]]

Saturday, May 29, 2010

The Word on Dispersion

Credit the Gulf oil disaster with allowing the words dispersion and dispersants to enter our common vocabulary. In the context of the spill, the use of dispersants on the oil causes the potentially sticky coagulating oil to split apart into finer granularity drops and somehow make it more amenable to breaking down. Dispersion in terms of a chemical definition simply means spreading out particles in the medium, in this case seawater. So a dispersant breaks it up and dispersion scatters it about.

The BP team apparently wanted to break the oil up so that it could easily migrate and essentially dilute its strength within a larger volume. So instead of allowing a highly concentrated dose of oil to impact a seashore or the ocean surface, the dispersants would force the oil to remain in the ocean volume, and let the vast expanse of nature take its course. Somebody in the bureaucratic hierarchy made the calculated decision to apply dispersants as a judgment call. I can't comment on the correctness of that decision, but I can expound on the topic of dispersion, which no one seems to fully understand, even in a scientific context.

As the media has forced us to listen to made-up technical terms such as "top kill", "junk shot", and "top hat" which describe all sorts of wild engineering fixes, I will take a turn toward the more fundamental notions of disorder, randomness, and entropy to explain that which we cannot necessarily control. I always think that if we can understand concepts such as dispersion from first principles, we actually have a good chance of understanding how to apply them to a range of processes besides oil spill dispersal. In other words, well beyond this rather specific interpretation, we can apply the fundamentals to other topics such as green-house gases, financial market fluctuations, and oil discovery and production, amongst a host of other natural or man-made processes. Really, it is this fundamental a concept.

Background

If by the process of dispersion we want the particles to dilute as rapidly as possible, we need to somehow accelerate the rate or kinetics of the interactions. This becomes a challenge of changing the fundamental nature of the process, via a homogeneous change, or by introducing additional heterogeneous pathways that provide alternate routes to faster kinetics. From this perspective, dispersion describes a mechanism to divergently spread out the rates and dilute the material from its originally concentrated form. One can analogize in terms of a marathon race; the initial concentration of runners at the starting line rapidly disperses or spreads out as the faster runners move to the front and the slower runners drop to the rear. In a typical race, you see nothing homogeneous about the makeup of the runners (apart from their human qualities); the elites, competitive amateurs, and spur-of-the-moment entrants cause the dispersion. Whether we want to achieve a homogeneous dispersion or not, we have to account for the heterogeneous nature of the material. In other words, we rarely deal with pure environments, so we have to solve for much more than the limited variability we originally imagined. Generalizing from the rather artificial constraints of a marathon race, dispersion in other contexts (such as crystal growth or reservoir growth) results from an increase of disorder as a direct consequence of entropy and the second law of thermodynamics.
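The marathon analogy can be sketched numerically. The snippet below is a minimal illustrative Python sketch (the mean speed and the number of runners are arbitrary assumptions of mine, not anything measured); it shows that a pack starting bunched at the line, with a fixed heterogeneous spread of speeds, disperses in direct proportion to elapsed time.

```python
import random

random.seed(1)

# Each runner has a fixed speed drawn from a heterogeneous pool
# (exponentially distributed, mean 4 units/hr -- an arbitrary choice).
speeds = [random.expovariate(1.0 / 4.0) for _ in range(1000)]

def spread(t):
    # Width of the pack at time t: positions scale linearly with time.
    positions = [v * t for v in speeds]
    return max(positions) - min(positions)

# With fixed per-runner speeds, the dispersion of the pack widens
# in direct proportion to elapsed time.
print(spread(1.0), spread(2.0))
```

The point of the sketch: no runner does anything exotic, yet the heterogeneity of rates alone produces the spreading.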

In terms of the spread in dispersion, we might often observe a tight bunching or a wide span in the results. The wider dispersion usually indicates a larger disorder, variability, or uncertainty in the characteristics -- a "fat-tail" to the statistics so to speak. So when we introduce a dispersant into the system, we add another pathway and basically remove order (or introduce disorder) into the system. Dispersion may thus not accelerate a process in a uniform manner, but instead accelerates the differences in the characteristic properties of the material. This again describes an entropic process, and we have to add energy or find exothermic pathways to fight the tide of increasing disorder.

This seems like such a simple concept, yet it rarely gets applied to most scientific discussions of the typical disordered process. Instead, particularly in an academic setting, what one usually reads amounts to pontificating about some abnormal or anomalous kind of random-walk that must occur in the system. The scientists definitely have a noble intention -- that of explaining a fat-tail phenomenon -- yet they don't want to acknowledge the most parsimonious explanation of all. They simply do not want to consider heterogeneous disorder as described by the maximum entropy principle.



Figure 1: Difference between a classical random walk (left) and an anomalous random walk (right). The salient difference is that occasional long jumps (Levy flights) occur in the anomalous random walk. A much simpler approach admits that a heterogeneous mix of random walkers of different rates exists. This will give essentially the same observable outcome without resorting to arcane mathematical modeling.

The complicating factor in discussions about dispersion involves the intuitively related concepts of diffusion and convection or drift. Diffusion also derives from the statistics of disorder and describes how particles can spontaneously spread out without a real driving force, apart from the uniform environment, for example the thermal background. The analysis of a particle undergoing a random walk leads directly to the concept of diffusion. Random walk ideas seem to intrigue mathematicians and scientists because they place the concept of diffusion into a real concrete representation. In some sense everyone can relate to the idea of particles bouncing around, but not necessarily to the idea of a gradient in concentration.
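The heterogeneous alternative shown in Figure 1 can be demonstrated in a few lines. The Python sketch below is my own toy simulation (walker counts and scales are arbitrary assumptions): a population of perfectly ordinary Gaussian random walkers, each with its own step scale drawn from an exponential distribution, develops fat-tailed ensemble statistics with no Levy flights anywhere in sight.

```python
import random
import statistics

random.seed(0)

def walk(n_steps, step_scale):
    # Ordinary Gaussian random walk with a fixed per-walker step scale.
    x = 0.0
    for _ in range(n_steps):
        x += random.gauss(0.0, step_scale)
    return x

# Homogeneous ensemble: every walker shares one step scale.
homogeneous = [walk(100, 1.0) for _ in range(2000)]
# Heterogeneous ensemble: each walker draws its own scale from an
# exponential (maximum entropy) distribution with the same mean.
heterogeneous = [walk(100, random.expovariate(1.0)) for _ in range(2000)]

def tail_fraction(xs, k):
    # Fraction of the ensemble lying beyond k standard deviations.
    s = statistics.pstdev(xs)
    return sum(1 for x in xs if abs(x) > k * s) / len(xs)

# The mixed ensemble carries far more weight beyond three sigma.
print(tail_fraction(homogeneous, 3.0), tail_fraction(heterogeneous, 3.0))
```

Each walker individually obeys normal statistics; only the mixing of rates fattens the tail.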

Convection and drift describe the motion of particles under an applied force, say charged particles under the influence of an electric field (Haynes-Shockley), or of solute or suspended particles under the influence of gravity (Darcy's Law). This essentially describes the typical constant velocity, akin to a terminal velocity, that we observe in a pure semiconductor (Haynes-Shockley) or a uniformly porous media (Darcy's).

Dispersion can affect both diffusion and drift, and that establishes the premise for the novel derivation that I came up with.

Breakthrough

The unification of the dispersion and diffusion concepts could have a huge influence on the way we think about practical systems, if we could only factor the mathematics describing the process. I can straightforwardly demonstrate a huge simplification assuming a single somewhat obvious premise. This involves applying the conditions of maximum entropy, by essentially maximizing disorder under known constraints or moments (i.e. mean values, etc).

The obviousness of this unifying solution contrasts with my lack of awareness of any such similar simplification in the scientific literature. Surprisingly, I can't even confirm that anyone has really looked into the general idea. So far, I can't find any definitive work on this unification and little interest in pursuing this premise. Stating my point-of-view flatly, the result has such a comprehensive and intuitive basis that it should have a far-reaching impact on how we think about dispersion and diffusion. It just needs to gain a foothold of wider acceptance in the marketplace of ideas.

Which brings up a valid point I have heard directed my way. From my postings on TheOilDrum.com, commenters occasionally ask me why I don't publish these results in an academic setting, such as a journal article. To answer that, journals have evidently failed in this case, as I never find any serious discussion of dispersion unification. So consider that even if I submitted these ideas to a journal, they may just sit there and no one would ever apply the analysis to any future topics. This makes it an utterly useless and ultimately futile exercise. I will risk putting the results out on a blog and take my chances. A blog easily has as much archival strength, a much more rapid turnaround, the potential for critique, and searchability (believe it or not, googling the term "dispersive transport" yields this blog as the #3 result, out of 16,200,000). The general concepts do not apply to any specific academic discipline, apart perhaps from applied math, and I certainly won't consider publishing the results in that arena without risking that they disappear without a trace. Eventually, I want to place this information in a Wikipedia entry and see how that plays out. I would call it an experiment in Open Source science.

But that gets a little ahead of the significance of the current result.

The Unification of Diffusion and Drift with Dispersion

As my most recent post described, solving the Fokker-Planck equation (FPE) under maximum entropy conditions provides the fundamental unification between dispersion, diffusion and drift. For fans of Taleb and Mandelbrot, this shows directly how "thin-tail" statistics become "fat-tail" statistics without resorting to fractal arguments.

The Fokker-Planck equation shows up in a number of different disciplines. Really, anything having to do with diffusion or drift has a relation to Fokker-Planck. Thus you will see FPE show up in its various guises: Convection-Diffusion equation, Fick's Second Law of Diffusion, Darcy's Law, Navier-Stokes (kind of), Shockley's Transport Equation, Nernst-Planck; even something as seemingly unrelated as the Black-Scholes equation for finance has applicability for FPE (where the random walk occurs as fractional changes in a metric).

Because of its wide usage, the FPE tends to take the form of a hammer, where everything it applies to acts as the nail. (You don't see this more frequently than in finance, where Black-Scholes played the role of the hammer.) Since the solution of FPE results in a probability distribution, it gives the impression that some degree of disorder prevails in the system under study. I find this understandable since the concept of diffusion implies an uncertainty exactly like a random walk shows uncertainty. In other words, no two outcomes will turn out exactly the same. Yet, in mathematical terms, the measurable value associated with diffusion, the diffusion constant D, has a fixed value for random motion in a homogeneous environment. When the parameters actually change, you enter the world of stochastic differential equations; I won't descend too deeply into this area, only apply it as a basic concept. The diffusion and mobility parameters have a huge variability that we have not yet adequately accounted for in many disordered systems.

For that reason, the FP equation really applies to ordered systems that we can characterize well. Not surprisingly the ordinary solution to FPE gives rise to the conventional ideas of normal statistics and thin-tails.

So for phenomena that appear to depart from conventional normal diffusion (the so-called anomalous diffusion) we have two distinct camps and corresponding solution paths to choose from. The prevailing wisdom suggests that an entirely different kind of random walk occurs (Camp 1). No longer does the normal diffusion apply, giving rise to normal statistics; instead we get the statistics of fat-tails and random walk trajectories called Levy flights to concretely describe the situation (see Figure 1). The mathematics quickly gets complicated here and most of the results get cast into heuristic power-laws. It takes a leap of faith to follow these arguments.

The question comes down to whether we wish to ascribe anomalous diffusion as a strange kind of random walk (Camp 1) or simply suggest that heterogeneity in diffusional and drift properties adequately describes the situation (Camp 2). I take the stand in the latter category and stand pretty much alone in this regard. Find some academic research article on anything related to anomalous diffusion and very few will accept the most parsimonious explanation -- that a range of diffusion constants and mobilities explain the results. Instead the researcher will punt and declare that some abstract Levy flight describes the motion. Above all I would rather think in practical terms, and simple variability has a very pragmatic appeal to it.

I went through the derivation of the dispersive FPE solution for a disordered semiconductor in the last post, and want to generalize it here. This makes it especially applicable to notions of physical transport of material in porous media. This would include the motion of oil underground, CO2 in the air, and perhaps even spilled oil at sea.

In the one-dimensional model of applying an impulse function of material, the concentration n will disperse according to the following equation:
n(x, z) = (z + sqrt(zL + z^2))/sqrt(zL + z^2) * exp(-2x/(z + sqrt(zL + z^2)))

where
z= μFt
L = β/F
The term z takes the place of a time-scaled distance, which can speed up or slow down under the influence of a force F (i.e. gravity, or an electric field for a charged particle). The characteristic distance L represents the effect of the stochastic thermal energy β (nominally kT, Boltzmann's constant times absolute temperature) and ties in the diffusional aspects of the system. The specific parameterization of the exponential results in the fat-tail observed.
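Plugging in numbers makes the shape tangible. The Python sketch below is illustrative only: the parameter values are my own arbitrary choices, and I have balanced the parentheses of the profile equation as (z + sqrt(zL + z^2))/sqrt(zL + z^2) * exp(-2x/(z + sqrt(zL + z^2))), which is how I read it. It shows the exponential fall-off in x, with a characteristic scale that advances as z grows.

```python
import math

def n(x, z, L):
    # Entropic dispersion profile for an impulse of material, with
    # z = mu*F*t (force-scaled distance) and L = beta/F (diffusive length).
    s = z + math.sqrt(z * L + z ** 2)   # dispersive front scale
    return s / math.sqrt(z * L + z ** 2) * math.exp(-2.0 * x / s)

# The profile decays exponentially in x; as z grows (time advances),
# the decay length s/2 grows, so the fat tail reaches further out.
L = 1.0
for z in (0.5, 1.0, 2.0):
    print(z, n(1.0, z, L), n(5.0, z, L))
```

Note the contrast with a Gaussian: the tail in x is exponential at every z, which is the fat-tail signature discussed above.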

In the past, I had never gone through the trouble of solving the FPE, simply because intuition would suggest that the dispersive envelope would cancel out most of the details of the diffusion term. In the dispersive transport model that I originally conceived, the dispersion would at most follow the leading wavefront of the drifting diffusional field as "sqrt(Lz+z^2)" as described here or as "sqrt(Lz)+z" here.

I estimated that the diffusion term would follow as the square root of time according to Fick's first law and that drift would follow time linearly, with only an idea of the qualitative superposition of the terms in my mind.

As one might expect, the actual entropic FPE solution borrowed from a little of each of my estimates, essentially averaging between the two:
(z + sqrt(zL + z^2))/2
So the solution to the dispersive FPE form for a disordered system turns out entirely intuitive, and one can almost generate the result from inspection. The difference between the original entropic dispersion derivation and the full FPE treatment amounts to a bit of pre-factor bookkeeping in the first equation above. You can see this by comparing the two approaches for the case of L=1 and unity width for the dispersive transport current model.

Figure 2: Differences between the original entropic dispersive model and the fully quantified FPE solution will converge as L gets smaller.

Dispersive Transport in Porous Media.

The above solved equations can actually apply directly as solutions to Darcy's law when it comes to describing the flow of material in a disordered porous media. I suppose this will irk the petroleum engineers, hydrologists, and geologists out there who have long sought the solution to this particular problem.

Yet we should not act surprised by this result. The actions of multiple processes acting concurrently on a mobile material will generally result in a universal form governed by maximum entropy. It doesn't matter if we model carriers in a semiconductor or particles in a medium, the result will largely look the same. In a hydraulic conductivity experiment, Lange treated the breakthrough curve of a trace element through a natural catchment as a FPE convection-dispersion model, and came up with the same results independent of the fractionation of the media.

By applying the simple dispersion model (blue curve below) to Lange's results, one sees that an excellent fit results, with the fat-tail exactly following the hyperbolic decline that reservoir engineers often see in long-term flow behavior. This could include the time-dependent emptying of the currently leaking deep-sea Gulf reservoir!

Figure 3: Breakthrough curve of a traced material showing results from an entropic dispersion model in blue.

Moreover, the amount of diffusion that occurs appears quite minimal. Adding a greater proportion of diffusion by increasing L does not improve the fit of the curve (see the chart to the right). Just as in the semiconductor case, the shape has a significant meaning when analyzed from the perspective of maximum entropy.

Nothing complicated about this other than admitting to the fact that heterogeneous disordered systems appear everywhere and we have to use the right models to characterize their behavior.

The details of this experiment are described in the following papers:
  1. D.Haag and M.Kaupenjohann, Biogeochemical Models in the Environmental Sciences: The Dynamical System Paradigm and the Role of Simulation Modeling
  2. H. Lange, Are Ecosystems Dynamical Systems?
The authors of these papers have mixed feelings about the applicability of modeling biogeochemical systems and speculate whether we should use any kinds of models for "ecological risk assessment". They point out that ecological systems obviously can adapt under certain circumstances and no amount of physical modeling can predict which way the system will go. Will spilled oil decompose faster as the environment adapts around it? Will that make dispersion less relevant? Who knows?

Still the work of modeling the physical process alone has enormous value as Haag and Kaupenjohann point out:

Despite not being a ‘real’ thing, "a model may resonate with nature" (Oreskes et al. 1994) and thus has heuristic value, particular to guide further study. Corresponding to the heuristic function, Joergensen (1995) claims that models can be employed to reveal ecosystem properties and to examine different ecological theories. Models can be asked scientific questions about properties. According to Joergensen (1994), examples for ecosystem properties found by the use of models as synthesizing tools are the significance of indirect effects, the existence of a hierarchy, and the ‘soft’ character of ecosystems. However, we agree with Oreskes et al. (1994) who regard models as "most useful when they are used to challenge existing formulations rather than to validate or verify them". Models, as ‘sets of hypotheses’, may reveal deficiencies in hypotheses and the way biogeochemical systems are observed. Moreover, models frequently identify lacunae in observations and places where data are missing (Yaalon 1994).

As an instrument of synthesis (Rastetter 1996), models are invaluable. They are a good way to summarize an individual research project (Yaalon 1994) and they are capable of holding together multidisciplinary knowledge and perspectives on complex systems (Patten 1994).

While models as a product may have heuristic value, we would like to emphasize also the role of the modeling process: "[…] one of the most valuable benefits of modeling is the process itself. These benefits accrue only to participants and seem unrelated to the character of the model produced" (Patten 1994). Model building is a subjective procedure, in which every step requires judgment and decisions, making model development ‘half science, half art’ and a matter of experience (Hoffmann 1997, Hornung 1996). Thus modeling is a learning process in which modelers are forced to make explicit their notions about the modeled system and in which they learn how the analytically isolated components of a system can be ‘glued’ (Paton 1997). As modeling mostly takes place in groups, modeling and the synthesis of knowledge has to be envisaged as a dynamic communication process, in which criteria of relevance, the meaning of terms, the underlying concepts and theories, and so forth are negotiated. Model making may thus become a catalyst of interdisciplinary communication.

In the assessment of environmental risks, however, an exclusively scientific modeling process is not sufficient, as technical-scientific approaches to ‘post-normal’ risks are unsatisfactory (Rosa 1998) and as the predictive capacity and operational validity of models (e.g. for scenario computation) is in doubt. The post-normal science approach (Funtowicz & Ravetz 1991, 1992, 1993) takes account of the stakes and values involved in environmental decision making. Following a ‘post-normal’ agenda, model development and model validation for risk assessment should become a trans-scientific (communication) task, in which "extended peer communities" participate and in which non-equivalent descriptions of complex systems are made explicit, negotiated, and synthesized. In current modeling practice, however, models are highly opaque and can rarely be penetrated even by other scientists (Oreskes, personal communication). As objects of communication, models still are closed systems and black boxes.

We need to really take up the charge on this as our future depends on understanding the role of entropy in nature. For too long, we have not shown the intellectual curiosity to model how much oil we have underground, what size distribution the reservoirs take, and how fast they can empty, even though some perfectly acceptable models can describe this statistically, using dispersion no less!

Now that the Macondo oil has discovered an escape hatch and has gone disordered on us and will go who-knows-where, it seems we can really make some headway in our common understanding. Nothing like having your feet in the fire.


Monday, May 24, 2010

Fokker-Planck for Disordered Systems

To get the cost of photovoltaic (PV) systems down, we will have to learn how to efficiently use crappy materials. By crap I mean that mass-produced PV materials will end up getting rolled or extruded or organically grown. Unless we perfect the process, most everything will turn out non-optimal. We already know the difference between clean-room cultivated single crystal semiconducting material and the defect-ridden and often amorphous materials that nature and entropy drive us to. For performance sensitive applications such as communications and computing we would only rarely consider disordered material as a candidate semiconductor. Certainly, the performance of these materials makes them unlikely candidates for high speed processing -- yet for solar cell applications, they may serve us well. In the end, we just have to learn how to understand and deal with crap.

The following will revisit a couple of previous posts where I outlined a novel way to analyze the behavior of disordered semiconducting material. I know for certain that no one has proposed the particular approach before. If it does exist, I certainly can't find it in the literature. From one perspective, this analysis sets forth a baseline for the characterization of a maximally disordered semiconductor.

Background

The prehistoric 1949 Haynes-Shockley experiment first measured the dynamic behavior of charged carriers in a semiconducting sample. It basically confirmed the solution of the diffusion (Fokker-Planck) equation and it demonstrated diffusion, drift, and recombination in a conceptually simple setup. This animated site gives a very interesting overview of PV electrical behavior.

Figure 1: Apparatus for the Haynes-Shockley experiment

This setup works according to theory for an ordered semiconductor with uniform properties but apparently gets a bit unwieldy for any disordered or non-uniform material sample. I inferred this as conventional wisdom since most scientists either punt or use heuristics partially derived from the inscrutable work of a select group of random-walk theorists (see Scher & Montroll).

I had previously applied a very straightforward interpretation to the problem of carrier transport in disordered material. My dispersion analysis essentially set aside the Fokker-Planck formalism for a mean value approximation where I tactically applied the Maximum Entropy Principle. In particular, I really like the MaxEnt solution because I can recite the solution from memory. It matches intuition in a conceptually simple way once you get into a disordered mind-set.

In the real Haynes-Shockley experiment, a pulse gets injected at one electrode, and a nearly pure time-of-flight (TOF) profile results. The initial pulse ends up spreading out in width a bit, but the detected pulse usually maintains the essential Gaussian sigmoid shape.

Adding Disorder

For the time-of-flight for a disordered system, the Maximum Entropy solution looks like:
q(t) = Q * exp(-w/sqrt((μEt)^2 + 2Dt))
This essentially states that the expected amount of charge accumulated at one end of the sample (at a distance w) at time t, follows a maximum entropy probability distribution. The varying rates described by μ and D disperse the speed of the carriers so that a broadened profile results from the initial pulse spike.

The equation above formed the baseline for the interpretation I described initially here.
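To get a feel for the shape this time-of-flight expression produces, here is a short Python sketch; the parameter values (w, μ, E, D) are arbitrary illustrations of mine, not fitted to any sample.

```python
import math

def q(t, Q=1.0, w=1.0, mu=1.0, E=1.0, D=0.1):
    # Maximum-entropy time-of-flight charge: the drift term (mu*E*t)^2
    # dominates at long times, the diffusion term 2*D*t at short times.
    return Q * math.exp(-w / math.sqrt((mu * E * t) ** 2 + 2.0 * D * t))

# Collected charge climbs smoothly toward Q over decades of time --
# a broadened, dispersed arrival instead of a sharp ordered transit.
for t in (0.1, 0.5, 1.0, 5.0, 50.0):
    print(t, q(t))
```

The smooth, decades-wide rise is the broadened profile described above; an ordered sample would instead show a nearly step-like arrival at the transit time.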

For completeness, I figured to test my luck and see if I can bull my way through the basic diffusion laws. If I could produce an equivalent solution by applying the Maximum Entropy Principle directly to the Fokker-Planck equation, then this would give a better foundation for the "inspection" result above.

The F-P diffusion equation gets expressed as a partial differential equation with a conservation law constraint:
∂f(x,t)/∂t = -∂/∂x [D1 f(x,t)] + ∂²/∂x² [D2 f(x,t)]
In this case D1=μ* (carrier mobility) and D2=D* (diffusion coefficient), and f(x,t)=n(x,t) (carrier concentration). With recombination (carrier lifetime τ), the solution in one dimension looks like the familiar drifting Gaussian scaled by a decay term:
n(x,t) = N/sqrt(4πD*t) * exp(-(x - μ*Et)^2/(4D*t)) * exp(-t/τ)
This of course works for well-ordered semiconductors, but D* and μ* will likely vary for disordered material. I made the standard substitution via the Einstein Relation for
D* = Vt μ*
where Vt = β/q stands for the chemical or thermal potential at equilibrium (usually β equals kT where k is Boltzmann's constant and T is absolute temperature). At equilibrium, the stochastic force of diffusion exactly balances the electrostatic force F = qE.

From the basic physics, we can generate a maximum entropy density function for D
p(D*) = 1/D * exp(-D*/D)
then
n(x,t) = Integral of p(D*) * n_mean(x,t) over all D*
This looks hairy but the integral comes out straightforwardly as (ignoring the constant factors)
n(x,t) = 1/sqrt(t*(4D + t*(Eμ)^2)) * exp(-x*R(t)) / R(t)
where
R(t) = sqrt(1/(Dt) + (E/(2Vt))^2) - E/(2Vt)

If we evaluate this for carriers that have reached the drain electrode at x=w, the total charge collected q is:
q(t) = Q/sqrt(t*(4D + t*(Eμ)^2)) * exp(-w*R(t)) / R(t)

The measured current is
I(t) = mean of dq(t)/dt from 0 to w
The simple entropic dispersive expression and the Fokker-Planck result obviously differ in their formulation, yet the two show the same asymptotic trends. For an arbitrary set of parameters, one can't detect a practical difference. Use whichever you feel comfortable with.
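The heterogeneity argument behind this derivation can be sanity-checked numerically. The Python sketch below is my own illustration with arbitrary parameter values: it averages the ordinary Gaussian drift-diffusion profile over a maximum-entropy (exponential) distribution of diffusivities, tying the mobility to each diffusivity through the Einstein relation μ* = D*/Vt, and compares the far tail against a single homogeneous Gaussian.

```python
import math

# All values illustrative: thermal voltage, field, observation time,
# and the mean of the exponential diffusivity distribution.
Vt, E, t, D_mean = 1.0, 1.0, 1.0, 1.0

def n_single(x, Dstar):
    # Ordinary drift-diffusion Gaussian for one diffusivity D*,
    # with mobility tied to D* by the Einstein relation.
    mu = Dstar / Vt
    var = 2.0 * Dstar * t
    return math.exp(-(x - mu * E * t) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def n_dispersed(x, steps=4000, Dmax=40.0):
    # Maximum-entropy average over p(D*) = exp(-D*/D)/D by simple quadrature.
    h = Dmax / steps
    total = 0.0
    for i in range(1, steps + 1):
        Dstar = i * h
        total += math.exp(-Dstar / D_mean) / D_mean * n_single(x, Dstar) * h
    return total

# Far from the injection point, the homogeneous Gaussian has essentially
# vanished while the maximum-entropy mixture retains substantial weight.
print(n_single(10.0, D_mean), n_dispersed(10.0))
```

The mixture beats the single Gaussian in the tail by many orders of magnitude, which is the whole mechanism behind the fat-tailed FPE solution above.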

I show the dynamics of the carrier profile in the animated GIF to the right. The initial profile starts with a spike at the origin and then the profile broadens as the mean starts drifting and diffusing to the opposing contact. You don't see much from this perspective as it looks completely like mush. Yet, when plotted on a log-log scale, it does take on more character.

The collected current profile looks like the following

Figure 2: Typical photocurrent trace showing the initial diffusional spike, a plateau for relatively constant collection from the active region, and then a power-law tail produced from the entropic drift dispersion.



Organic Semiconductor Applications

The photocurrent profile displayed above came from Andersson's "Electronic Transport in Polymeric Solar Cells and Transistors" (2007) wherein he analyzed the transport in a specific organic semiconducting material, the polymer APFO.

The blue line drawn through the set of traces follows the entropic dispersion formulation. The upper part of the curve describes the diffusive spike while the lower part generates the fat-tail due to the drift component (this shows an inverse square power law in the tail).

Figure 3: Universal profile generated over a set of applied electric field values. For this set, scaling of transit time with respect to the applied field holds, indicative of a constant mobility. However, carrier diffusion causes the initial transient and this does not scale, as the electric field has no effect on diffusion, as shown in the lower set of blue curves.

As I stated in the previous post, most scientists when discussing this shape have either (1) referred to Scher/Montroll and the vague heuristic α, (2) dismissed these features, or (3) labelled them as uninteresting. Andersson follows suit:
At best this transient, as the high α value indicates, might be possible to evaluate in a meaningful way with a bit of error and at worst it is of no use. Either way the amount of material and effort required is rather large compared to the usefulness of the results. APFO-4 is also the polymer that, among the investigated, gives the ”nicest” transients. The conclusion from this is that if alternative measurement techniques can be used it is not worthwhile to do TOF.
Not to dismiss the hard work that went into Andersson's experiment, but I would beg to differ with his assessment of the worthiness of the approach. When characterizing a novel material, every measurement adds to the body of knowledge, and as the interpretation of the aggregated data becomes more cohesive, we end up learning much more about the internal structure. As I have learned, if someone does not understand a phenomenon, they tend to dismiss it (myself included).

By their very nature, disordered systems contain a huge state space and we really can't afford to throw out any information.

Which brings up another interesting set of TOF experiments that I dug up. These also deal with organic semiconducting materials -- the polymers with the abbreviations ANTH-OXA6t-OC12 and TPA-Cz3d. The following figures show the TOF results for various applied voltages. I superimposed the entropic dispersion equation form as the red line with the derived mobility in the caption below each figure. The original researcher had applied the Scher & Montroll Continuous Time Random Walk (CTRW) heuristic as indicated by the intersecting sloped lines. The CTRW model clearly fails in this situation as the slopes need quite a bit of creative interpretation. Note that we don't observe the diffusive spike; I integrated the charge from 10% to 100% of the width instead of 0% to 100%.

Figure 4: TOF traces at various applied voltages with the entropic dispersion fit (red line) and the derived mobilities: ANTH-OXA6t-OC12 (μ = 0.0025, 0.00155, 0.00125, 0.00085) and TPA-Cz3d (μ = 0.0013, 0.0004, 0.0005, 0.0006).
The number of papers I find, especially when dealing with organic semiconductors, that cannot apply the Scher/Montroll theory indicates that it truly lacks any generality. In other words, it works crappily for describing disorderly crap. I will also say the theory has some very serious flaws, including the claim that an α = 1 defines a non-dispersive material. How could a power-law of -2 be anything but dispersive?

The fact that the entropic dispersion formulation works on any disordered material makes it much more general. Several years ago Scher wrote a popular article for Physics Today extolling the wonders of his theory, and how it seemed to fit a variety of disordered systems. He mentioned how well it fit amorphous silicon based on the number of orders of magnitude that his piece-wise line segments matched. Well, the entropic dispersion does just as well:

And nothing mysterious about that slope of 0.5; that results from the diffusion having a square root dependence with time.

Friday, May 21, 2010

Waste Half-Life

The big Gulf Spill got me thinking about the half-life of the leaking crude oil and the expanding slick. First of all, the oil will biodegrade over time. We don't have the situation as in CO2 where a sizable fraction will wander around the atmosphere trying to find a suitable location to react and form solutes.

Most of the oil will stay on the surface where it will get plenty of attention from aerobic microorganisms. Some of the oil will sink into the ocean, find anaerobic conditions at the bottom, and essentially become inert, or wash up on shore as sticky globs. Also, the composition of crude oil includes many different hydrocarbons, some of which biodegrade at much slower rates due to their molecular structure.

So I imagine that we can't calculate the half-life of the spilled oil in terms of a single rate constant, k. That kind of first-order kinetics would show an exponential decline, which proceeds pretty quickly once you get past the characteristic lifetime 1/k (the half-life proper is ln(2)/k). Instead we will get a mix of various rates, with the fast rates occurring initially and the slower rates picking up the slack.

Radioactive waste dumps also show a mix of decay constants. Nominally, a radioactive material will show a single Poisson emission rate, leading to an exponential decline over time. But when different radioactive materials get combined, the Geiger counter will pick up this mixture of rates, and the decline will turn from an exponential into a fat-tail distribution. See the red curve below.


A maximum entropy mix of decay rates (where a high decay rate indicates a potentially more energetic state) will generate the following half-life decline profile:
P(t) = 1/(1+k*t)
where k is the average of the individual rates. This looks exactly the same as the hyperbolic decline of reservoirs in my last post.
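As a quick sanity check on this result (a sketch with a made-up mean rate, not data from any real waste dump), we can average a large number of exponential decays whose rate constants follow the maximum entropy (exponential) distribution and compare against 1/(1+k*t):

```python
import numpy as np

rng = np.random.default_rng(0)
k_mean = 0.1           # hypothetical average decay rate (1/time)
n = 200_000            # number of individual decaying components

# Maximum entropy distribution for rates with a known mean: exponential
k = rng.exponential(k_mean, n)

t = np.array([0.0, 1.0, 10.0, 100.0, 1000.0])
# Fraction of material remaining: average over the individual exponentials
remaining = np.array([np.exp(-k * ti).mean() for ti in t])
hyperbolic = 1.0 / (1.0 + k_mean * t)

for ti, r, h in zip(t, remaining, hyperbolic):
    print(f"t={ti:7.1f}  mixture={r:.4f}  1/(1+k*t)={h:.4f}")
```

The agreement holds over several decades of time, which shows how the fat tail emerges purely from the mixture of rates.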

As you can see, the combined activity shows a much larger equivalent half-life since the tail has so much meat in it. In the limit of a full dispersion of rate constants, the average lifetime actually diverges slowly, as a logarithm. However, it never reaches this limit, because the slowest decay rate present will eventually dominate, and that contribution does not diverge.

In any case, this gives a good qualitative description of a random waste dump.

If I make the same MaxEnt assumption for crude oil, and assume that the most energetic oil (by the bond strength of the hydrocarbon [1]) will likely prove the most difficult to decompose, then the half-life may also show the same kind of fat tail as a waste dump. It looks like benzene breaks down much more slowly than diesel oil, for example.

As usual, disordered natural phenomena show many of the same dispersive characteristics, driven largely by maximizing entropy.





Notes:

[1] For the derivation, we assume a mean energy E0, so the maximum entropy probability density function shows many small energies and progressively fewer high energies:
p(E) = exp(-E/E0)/E0
The decomposition rate depends on E, with rate constant k*E, so that
P(t|E) = exp(-k*E*t)
P(t) = integral of P(t|E)*p(E) over all E
which evaluates to
P(t) = 1/(1 + k*E0*t)

(See this for a more detailed derivation.)

Tuesday, May 18, 2010

Hyperbolic Decline a Fat-Tail Effect

If the Gulf oil spill follows a hyperbolic decline, the effects can go on for quite some time.

For a typical reservoir, oil depletion goes through either an exponential decline or a hyperbolic decline. Geologists by and large don't realize this, and definitely don't teach this, but hyperbolic decline constitutes a "fat-tail" effect that results from an aggregation of varying exponential declines summed together. As to the behavior of hyperbolic decline, one notices that the effects tend to drag out for a long time. The fast exponential decline finishes more quickly than the slower exponential components. That's where the fat tail comes from, and why the hyperbolic decline can proceed endlessly, or at least as long as the longest exponential portion.

Derivation of hyperbolic decline as a one-liner:

The exponential has a rate r, and r gets integrated over all possible values according to an exponential Maximum Entropy probability density function:
P(t) = integral of exp(-r*t) * (1/R)*exp(-r/R) over all r = 1/(1 + R*t)
where R is the mean rate. You can see the fat-tail in the plot below:


This is just entropy at work because nature tends to want to disperse.

EDIT:

JB asked a question about the slopes of the two functions. As plotted, these give the cumulatives. If we look at the probability density functions instead, then yes, you will see that the hyperbolic gives a mix of rates more in line with intuition, with a faster initial slope and a fatter tail later. See the figure below:


Sunday, May 09, 2010

Characterizing mobility in disordered semiconductors

I always look for analogies between physical systems. This often leads to dead ends but sometimes you uncover some interesting parallels that actually add to the knowledge-base of information and ideas for both systems.

As I worked out the problem of CO2 dispersion in the atmosphere, I went back and revisited the work I did on dispersive transport in amorphous semiconductors. Essentially the same math gets used on both analyses, with the same fundamental goal in mind -- that of trying to characterize the annoyingly sluggish response from an input stimulus.

For the climate case, the poor response comes from CO2 molecules wandering around aimlessly trying to find a good resting place. For the disordered semiconductor, the carrier of electricity (the electron or hole) encounters so many trapping states and scattering centers that it effectively takes much longer for the charge to cross a region. The carrier does get an assist from an electric field, but the low effective transport rate makes an amorphous semiconductor such as hydrogenated amorphous silicon (a-Si:H) marginally useful for any time-sensitive applications -- yet eminently usable as a photo-voltaic.

Still, knowing the physical characteristics helps to understand the nature of the material, and could unlock some secrets beneficial to future applications of material such as polycrystalline or amorphous silicon, or any disordered semiconductor. In the future, we will make mass quantities of this material for the PV industry and we won't have the luxury of single crystal material.

The fact that dispersive transport does have the help of an electric field, makes it amenable to experimentation. By applying various electric fields, one can distinguish between a drift component and a diffusive component (of the photoelectric current, for example). With no electric field, any photo-generated carriers will wander around until they recombine. This can take relatively long times, especially in comparison to a piece of single crystal silicon. As the electric field increases, the carriers get swept out faster and the diffusion plays less of a role.

The atmosphere has no drift mechanism apart from turbulent diffusion, so CO2 plays the analogous part of an electronic device with generated carriers but nowhere to remove them (alas, we have no electrodes attached to the atmosphere). So I wanted to get a bit of insight by looking at the carrier transport problem and, as a goal, perhaps find a way to increase the removal of CO2 by something equivalent to an electric field, and particularly to ask if this could reduce the CO2 mean residence time.

I noticed one detail that I left hanging on the dispersive carrier transport problem. This had to do with the initial diffusion transient often observed. See the figure to the right (from here). You can see the transient near the start time as a quickly declining response from the initial impulse. The particular trace in the tiny inset came from a non-disordered device (perhaps from a commercial-grade photodetector), as the individual regions show sharp delineations. For a disordered material, the regions show more blurring, as shown in the following figure.


Figure 1: Fitting to the dispersive tail from a previous posting. Note the missing initial transient in the colored curve fits.

I did not include the initial spike term in my initial analysis from last year, as I forgot to apply the chain rule to one set of rate equations. I had justified a transport function with a non-linear component that propagates as the square root of time, characteristic of diffusion. Yet to generate a current from this, one needs to differentiate it via a simple chain rule. Not too surprisingly, but perhaps non-intuitively, the derivative of a square root generates the reciprocal of the square root, which of course spikes to infinity at times close to zero. However, the accumulated amount of current generated by this spike comes nowhere near infinity, as the transient has very little width to it. Looking at it on a log-log plot, the width appears long, but that is simply an optical illusion.

For a pulsed light source, the entire impulse response equation boils down to a simple charge conservation problem. We know that charge builds up as the photons excite the carriers, but we only know the mean rate and we let the Maximum Entropy Principle figure out the rest. The concentrations build up as the following form, with g(t) acting as the transport growth term across a region w:
C(t) = C0 * g(t) * (1 - exp(-w/g(t)))
with
g(t) = sqrt(2*D*t) + u*E*t
where D is the diffusivity, w is the active width, u is the charge mobility, and E is the electric field strength (E = Voltage/w). The total number of excited carriers is C0, and this number provides the maximum amount of current that gets collected. Common to all stochastic probability problems, the conservation of probability becomes a strong constraint.

The current derives as:
I(t) = dC(t)/dt = C0 * dg(t)/dt * (1 - exp(-w/g(t))*(1 + w/g(t)))
Note the dg(t)/dt term, which I had neglected to derive completely before, keeping only the drift term (note the 1/sqrt(t) term below).
dg(t)/dt = 0.5*sqrt(2*D/t) + u*E
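As a sketch of these rate equations (with invented device parameters, not values fitted to any real material), the following snippet evaluates C(t) and I(t) and checks charge conservation: integrating the current, spike included, recovers the accumulated charge.

```python
import numpy as np

# Illustrative parameters, not fitted to any real device
D, u, E, w, C0 = 1e-3, 1e-2, 5.0, 1.0, 1.0

def g(t):
    # transport growth term: diffusive + drift components
    return np.sqrt(2 * D * t) + u * E * t

def C(t):
    # accumulated charge across the region of width w
    return C0 * g(t) * (1 - np.exp(-w / g(t)))

def I(t):
    # current = dC/dt via the chain rule; the 1/sqrt(t) part is the spike
    dgdt = 0.5 * np.sqrt(2 * D / t) + u * E
    return C0 * dgdt * (1 - np.exp(-w / g(t)) * (1 + w / g(t)))

# Charge conservation: integrating I(t) recovers C(t_end) - C(t_start)
t = np.logspace(-6, 4, 200_000)
It = I(t)
collected = np.sum(0.5 * (It[1:] + It[:-1]) * np.diff(t))
print(collected, C(t[-1]) - C(t[0]))
```

The two printed numbers agree, confirming that the spike term carries a finite (small) amount of charge despite its divergence at t = 0.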
Re-plotting the original fitted curve trace with the extra chain-rule term, we can actually see the initial transient. Take a close look at the figure below, and observe how well the curve matches all the inflection points, and works over several orders of magnitude. Mystery solved.

Figure 2 : Dispersive transport which includes a term to describe the initial transient. Note the agreement of the dispersive transport model at short durations. Upper curve fits a fixed average mobility sample. For the lower curve, the average mobility depends on applied electric field strength.

You can spend all sorts of time trying to fit the curves; the more time you spend, the better an estimate you can make of the average mobility, u, and diffusivity, D. Suffice to say, no fudge factors play into the equations. If this isn't a textbook-ready formula, I don't know what is.

As I said before, no one in the semiconductor industry seems to use this simple dispersive formulation, preferring to hand-wave and heuristically account for the fat tails of the transient. Importantly, this particular impulse response function both explains the behavior seen and derives from the simplest particle counting statistics (i.e. maximum entropy randomness), so it likely serves as the most canonical model for dispersive transport in disordered materials.

Linking back to CO2

Now a curious fact presents itself. Not many people in science and engineering seem to understand disorder. If they did, somebody would have discovered this dispersion formulation. Yet they haven't (AFAIK). Billions of dollars go into semiconductor research, and I can only find several purely academic papers on anomalous diffusion, Levy flights, and fractional random walks. It really is not that complicated to derive the physical behavior, if you simply assume entropic disorder.

So as it turns out, the dispersion math essentially matches that of what happens to CO2 as it enters the atmosphere. The peculiar piece in the transport that provides that initial photo-current spike acts identically to the fast rate of CO2. In other words, a fraction of charged carriers that can diffuse quickly to a recombination site (i.e. an electrode) act precisely the same as CO2 that reacts quickly and removes itself from the atmosphere. Yet the long tails in the dispersion remain, both in the disordered semiconductor, and in the disordered atmosphere. The fat-tails will kill us in atmospheric CO2 build-up, just like the fat-tails in amorphous semiconductors make it useless to use in a fast microprocessor or in a cell-phone receiver.

Now put 2 and 2 together. No wonder no one knows how to simply describe the CO2 buildup problem! Just as the scientists and engineers who experiment with dispersive transport can't see the forest for the trees, and thus can't come up with a simple derivation that a near layman can understand, the climate scientists also completely miss the obvious and have never come up with the equivalent "probability as logic" formulation.

ImpulseResponseCO2(t) = 1/(1+sqrt(t/T))

That is all there is to it.

Thursday, May 06, 2010

Wind Energy Dispersion Analysis

subtitle: Wind is entirely predictable in its unpredictability

A few weeks ago I wrote about how to derive wind speed characteristics from a straightforward maximum entropy analysis: Wind Dispersion and the Renewable Hubble Curve. This assumed only a known mean of wind energy levels (measured as power integrated over a fixed time period).

From this simple formulation, one can get a wind speed probability graph. Knowing the probability of wind speed, you can perform all kinds of interesting extrapolations -- for example, how long it would take to accumulate a certain level of energy.

I received a few comments on the post, with one by BDog pointing out how the wind flow affects the rate of energy transfer, i.e. the load of kinetic energy enclosed by a volume of air gets pushed along at a rate proportional to its speed. I incorporated that modification in a separate calculation and did indeed notice a dispersive effect on the output. I didn't pick up on this at first so I edited the post with BDog's new correction included.

As a fortunate coincidence, Jerome posted a wind-themed article at TheOilDrum and in the comment section LenGould volunteered a spreadsheet of Ontario wind speed data (thanks Len).

In the past 12 months, the max output was 1017 MW, so there's at least that much online, quite widely distributed across the 500-mile width of the southern part of the province near the Great Lakes (purportedly excellent wind resource territory).

On April 20th from 8:00 to 10:00 AM, the output averaged 3.5 MW. (0.34%)
On Mar 16th from 11:00AM to 1:00 PM, the output averaged 4.0 MW. (0.39%)
On Mar 9th from 10:00AM to 6:00 PM, the output averaged 6.7 MW. (0.66%)

That's just a few random picks I made in peak demand hours. I've done thorough analysis of this before and found the data to completely contradict your statement. These wind generators aren't anywhere NEAR baseload, and look like they never will be, since winds from here to North Dakota all travel in the same weather patterns.

I used LenGould's data set to try to verify the entropic dispersion model.

The data file consisted of about 36,000 sequential hourly measurements in terms of energy (kilowatt-hours). The following chart shows the cumulative probability distribution function of the energy values. This shows the classic damped exponential function, which derives from either the Maximum Entropy Principle (probability) or the Gibbs-Boltzmann distribution (statistics). It also shows a knee in the curve at about 750 KWh, which I assume comes from a regulating governor of some sort designed to prevent the wind turbine from damaging itself at high winds.

I also charted the region around zero energy to see any effect of the air-flow transfer regime (which should be strong near zero). In this regime the probability would go as sqrt(E)*exp(-E/E0) instead of exp(-E/E0). Yet only a linearized trend appears, courtesy of the Taylor series expansion of the exponential around E=0.


Remember that this data consists of a large set of independent turbines. You might think that, because of the law of large numbers, the distribution would narrow or show a peak. Instead, the mixture of these turbines over a wide variation in wind speed provides a sufficiently disordered path that we can apply the maximum entropy principle.

Having gained confidence in the entropic dispersion model, we can test the obvious nagging question behind wind energy: how long do we have to wait until we get a desired level of energy?

I generated a resampled set of the data (only resampled in the sense that I used a wraparound at the 4 year length of the data to create a set free from any boundary effects). The output of the resampling essentially generated a histogram of years it would take to reach a given energy level. I chose two levels, E(T)=1000 MW-hrs and E(T)=200 MW-hrs. I plotted the results below along with the predetermined model fit next to the data.




The model from the previous post predicts the behavior used in the two fits:
p(t | E>E(T)) = T * exp(-T/t) / t^2
where T is the average time it will take to reach E(T). From the exponential fit in the first figure, this gives T = 200/178 and T = 1000/178, respectively, for the two charts. As expected we get the fat tails that fall off as 1/t^2 (not 1/t^1.5 as the velocity flow argument would support).
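As a sanity check on this waiting-time density (a sketch; the T value is just the E(T)=1000 MWh estimate quoted above), note that its cumulative works out in closed form to exp(-T/t), so the density normalizes to one while the tail falls off as 1/t^2:

```python
import numpy as np

T = 1000.0 / 178.0             # mean wait (years) from the exponential fit

def pdf(t):
    # fat-tailed waiting-time density; the cumulative is exp(-T/t)
    return T * np.exp(-T / t) / t**2

t = np.logspace(-3, 5, 400_000)
p = pdf(t)
total = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t))
print(total)                    # close to 1: the density is normalized
print(pdf(2e4) / pdf(4e4))      # close to 4: the 1/t^2 tail
```

Note also that the mean of this density diverges logarithmically, the signature of a genuinely fat tail.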

The models do not work very effectively at the boundary conditions, simply because the wind turbines' limiting governors prevent the accumulation of energy above the 1000 MWh level; reaching that level occurs either in a short amount of time or, at long times, as a Poisson process of multiple gusts of lower energy. That said, any real deviations likely arise from short-duration correlations between wind energy measurements spaced close together. We do see this, as the lower limit E(200) shows more correlation curvature than E(1000) does. Wind speeds do change gradually, so these correlations will occur; yet they seem minor perturbations on the fundamental entropic dispersion model, which works quite well under these conditions.

As a bottom line, this analysis tells us what we already intuited. Because of intermittency in wind speed, it often takes a long time to accumulate a specific level of energy. Everyone knows this from day-to-day experience dealing with the elements. However, the principle of maximum entropy allows us to draw on some rather simple probability formulas to make excellent long-term estimates.

The derivation essentially becomes the equivalent of a permanent weather forecast. Weathermen perform a useless function in this regard. Only something on the scale of massive global warming will likely affect the stationary results.

Monday, May 03, 2010

How Shock Model Analysis relates to CO2 Rise

I would rate the graph below as one of the most famous charts in the annals of science, rivalled only by its close kin, the "hockey stick" graph ( the sketch of Hubbert's Peak is an also-ran in this contest):
Figure 0: The classic and frightening atmospheric CO2 build-up.

From just a technical perspective, it has an interesting composition -- a committed research team that has collected data for some 50 years, measurements showing very little noise, the fascinating periodic cycle due to seasonal variations, and Al Gore to present it.

I don't think many people realize how easily one can derive this curve. You only need a historical record of fossil fuel usage, a few parameters and conversion factors, and the knowledge of how to do a convolution. Since I use convolutions heavily in the Oil Shock model, doing this calculation has become second nature to me.

The way I view it, the excess CO2 production becomes just another stage in the set of shock model convolutions, which model how fossil fuel discoveries transition into reserves and then production. The culminating step in oil usage becomes a transfer function convolution from fuel consumption to a transient or persistent CO2 (depending on what you want to look at). Add in the other hydrocarbon sources of coal and natural gas and you have a starting point for generating the Mauna Loa curve.

The Recipe

First of all, we can roughly anticipate what the actual CO2 curve will look like, as it will lie somewhere between the two limits of immediate recapture of CO2 (the fast transient regime hovering just above the baseline) and no recapture (the persistent integrated regime which keeps accumulating). See Figure 1.
Figure 1: The actual CO2 levels fall between the constraints of immediate uptake (red curve) and persistent inertness (orange curve). The latter results from an accumulation or integration of carbon emissions.

Although this transient will show very long persistence and a very fat tail as I described here, we only need an average rate to generate the initial rise curve. (The oscillating part decomposes trivially, and we can safely add that in later)

So the ingredients:
  1. Conversion factor between tons of carbon generated and an equivalent parts-per-million by volume of CO2. This is generally accepted as 2.12 gigatons of carbon to 1 ppmv of CO2, or ~7.8 Gt of CO2 to 1 ppmv via purely molecular-weight considerations.
  2. A baseline estimate of the equilibrium CO2, also known as the pre-industrial level. This ranges anywhere from 270 ppm to 300 ppm, with 280 ppm the most popular (although not necessarily definitive).
  3. A source of historical fossil fuel usage. The further back this goes in time the better. I have two locations: one from the Wikipedia site on atmospheric CO2 (Image) or one from the NOAA site.
  4. A probability density function (PDF) for the CO2 impulse response (see the previous post). If you don't have this PDF, use the first-order reaction rate exponential function, R(t)=exp(-kt).
  5. A convolution function, which you can do on a spreadsheet with the right macro [1].
The convolution of carbon production Pc(t) with the impulse response R(t) generates C(t):
C(t) = k*[Integral of Pc(t-x)*R(x) from x=0 to x=t] + L
Multiplying the result by the conversion factor k and adding the baseline L generates the filtered Mauna Loa curve as a concentration in CO2 parts per million.
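The recipe reduces to a few lines of Python. This sketch uses a stand-in emissions series (the idealized Time^4 power law, scaled to a round-number cumulative of 300 GtC), not the actual NOAA record, so the output only illustrates the mechanics of the convolution:

```python
import numpy as np

# Hypothetical emissions: t^4 growth from 1800, scaled so the cumulative
# is a round 300 GtC (illustrative only, not the NOAA series)
years = np.arange(1800, 2011)
t = years - 1800
Pc = (t / t[-1]) ** 4
Pc *= 300.0 / Pc.sum()          # GtC per year

Tr = 42.0                        # CO2 adjustment time constant (years)
R = np.exp(-np.arange(len(t)) / Tr)   # first-order impulse response

kconv = 1.0 / 2.12               # ppm per GtC
L = 280.0                        # assumed pre-industrial baseline (ppm)

# Discrete convolution, truncated to the historical window
C = kconv * np.convolve(Pc, R)[:len(t)] + L
print(f"modelled CO2 in {years[-1]}: {C[-1]:.0f} ppm")
```

Swap in a real emissions record for Pc and the same five lines of arithmetic reproduce the smooth part of the Mauna Loa curve.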

I used R(t)=exp(-t/T), where T=42 years, and an L=280 ppm baseline for the following curve fit (using data from Figure 3 for Pc(t)).
Figure 2: Convolution ala the Shock Model of the yearly carbon emission with an impulse response function. An analytical result from a power-law (N=4) carbon emission model is shown as a comparison.

For Figure 2, I also applied a curve-fit model of the carbon generated, which followed a Time^4 acceleration and which had the same cumulative as of the year 2004 [2]. You can see subtle differences between the two, which indicates that the rate function does not completely smooth out all the yearly variations in carbon emission (see Figure 3). So the two convolution approaches show some consistency with each other, but the fit to the Mauna Loa data appears to have a significant level shift. I will address this in a moment.

Figure 3: Carbon emission data used for Figure 2. A power-law starting in the year 1800 generates a smoothed idealized version of the curve useful for generating a closed-form expression.

The precise form of the impulse response function, other than the average rate selected, does not change the result too much. I can make sense out of this since the strongly increasing carbon production wipes out the fat tails of slower-order reaction kinetics (see Figure 4). In terms of the math, a Time^4 power effectively overshadows a weak 1/sqrt(Time) or 1/Time response function. However, you will start to see this tail if and when we slow down carbon production. This will give a persistence in CO2 above the baseline for centuries.


Figure 4: Widening the impulse response function by dispersing the rates to the maximum entropy amount, does not significantly change the curvature of the CO2 concentration. Dispersion will cause the curve to eventually diverge and more closely follow the integrated carbon curve but we do not see this yet on our time scale.
Once we feel comfortable doing the convolution, we can add in a piecewise extrapolated production curve and we can anticipate future CO2 levels. We need a fat-tail impulse response function to see the long CO2 persistence in this case (unless 42 years is long enough for your tastes).

The Loose End

If you look at Figure 1, you can obviously see an offset of the convolution result from the actual data. This may seem a little puzzling until you realize that the background (pre-industrial) level of CO2 can shift the entire curve up or down. I used the background level of 280 ppm purely out of popularity reasons. More people quote this number than any other number. However, we can always evaluate the possibility that a higher baseline value would fit the convolution model more closely. Let's give that a try.

The following figure (adapted from here) shows a different CO2 data set which includes the Mauna Loa data as well as earlier proxy ice core data. Based on the levels of CO2, I surmised that the NOAA scientist that generated this graph subtracted out the 280ppm value and plotted the resultant offset. I replotted the data convolution as the dotted gray line.

Figure 5: The CO2 data replotted with extra proxy ice core data, assuming a 280ppm baseline (pre-industrial) level. The carbon production curve is also plotted. You can clearly see that the convolution of the impulse response results in a curve that has a consistent shift of between 10 and 20 ppm below the actual data.

Note that my curve consistently shows a shift 14ppm below the actual data (note the log-scale). This indicates to me that the actual background CO2 level sits 14ppm above 280ppm or at approximately 294ppm. When I add this 14ppm to the curve and replot, it looks like:
Figure 6: The convolution model replotted from Figure 5 with a baseline of 294ppm CO2 instead of 280. Note the generally better agreement to the subtle changes in slope

Although the data does not go through a wide dynamic range, I see a rather parsimonious agreement with the two parameter convolution fit.

Just like in the oil shock model, the convolution of the stimulus with an impulse response function will tend to dampen and shift the input perturbations. If you look closely at Figure 6, you can see faint reproductions of the varying impulse, only shifted by about 25 years. I contend that this "delayed ghosting" comes about directly as a result of the 42-year time constant I selected for the reaction kinetics rate. This same effect occurs with the well-known shift between the discovery peak and production peak in peak oil modeling. Even though King Hubbert himself pointed out this effect years ago, no one else has explained the fundamental basis behind this effect, other than through the application of the shock model. That climate scientists most assuredly use this approach as well points out a potential unification between climate science and peak oil theory. I know David Rutledge of CalTech has looked at this connection closely, particularly in relation to future coal usage.

Bottom Line

To believe this model, you have to become convinced that 294 ppm is the real background pre-industrial level (not 280), and that 40 years is a pretty good time constant for CO2 decomposition kinetics. Everything else follows from first-order rate laws and the estimated carbon emission data.

Of course, this simple model does not take into account possible positive feedback effects, yet it does give one a nice intuitive framework for thinking about how hydrocarbon production and combustion leads directly to atmospheric CO2 concentration changes and ultimately climate change. Doing this exercise has turned into an eye-opener for me, as it didn't really occur to me how straightforwardly one can derive the CO2 results. Gore had it absolutely right.


Update: From the feedback from some astute TOD readers, it has become clear that some other forcing inputs could easily make up the 14 ppm offset. Changing agriculture and forestry patterns, and other human modifications of the biota could alter the forcing function during the 200+ year time-span since the start of the industrial revolution. Although recyclable plant life should eventually become carbon neutral, the fat-tail of the CO2 impulse response function means that sudden changes will persist for long periods of time. A slight rise from time periods from before the 1800's coupled with an extra stimulus on the order of 500 million tons of carbon per year (think large-scale clearcutting and tilling from before and after this period) would easily close the 14 ppm CO2 gap and maintain the overall fit of the curve.

However, we would need to apply the fat-tail response function, g/(g+sqrt(t)), to maintain the offset for the entire period.

Another comment by EoS:

I don't think it is useful to think of an average CO2 lifetime. That implies a lumped linear model with only a single reservoir, hence an exponential decay towards equilibrium. In reality there are lots of different CO2 reservoirs with different capacities and time constants. So any lumped model had better use several reservoirs with widely varying time constants at a minimum, or else it will get the time behavior seriously wrong.

It turns out that the variation or dispersion in reaction rates makes very little difference in the slope on the climb up. That is fundamental, and I addressed it in Figure 4. The reason comes down to very simple mathematics -- the climb up in CO2 is generated by power laws of order N>3 or by exponential increases. That is the nature of accelerating fossil fuel usage. In contrast, the reaction rates of CO2 have exponents that are negative or have inverse power laws of very low order, the so-called fat-tail distributions. When you put these together, the power-law increase essentially crushes the long tails and all you see is the average value of the faster kinetics. I put in the analytical solution so you can see this directly in the convolution results.

Alternatively, apply a simple convolution of accelerating growth [exp(at)] with a first-order reaction decline [exp(-kt)] and you will see what I mean. You get this:

C(t) = (exp(at) - exp(-kt))/(a+k)
The accelerating rate a will quickly overtake the decline term k. If you put in a spread in k values as a distributed model, the same result will occur. That essentially demonstrates Figure 4. Climate scientists should realize this as well since they have known about the uses of convolution in the carbon cycle for years (see chapter 16 in "The carbon cycle" by T. M. L. Wigley and David Steven Schimel).
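A minimal numeric check of that closed form (growth and decay rates picked arbitrarily for illustration):

```python
import numpy as np

a, k = 0.03, 1.0 / 42.0      # illustrative growth and decay rates (1/year)

def closed(T):
    # analytic convolution of an exp(a*t) input with an exp(-k*t) response
    return (np.exp(a * T) - np.exp(-k * T)) / (a + k)

for T in (10.0, 50.0, 100.0):
    x = np.linspace(0.0, T, 100_001)
    f = np.exp(a * (T - x)) * np.exp(-k * x)   # integrand of the convolution
    numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
    print(T, numeric, closed(T))
```

By T = 100 the exp(-kt) term has already shrunk to a rounding error next to exp(at), which is the "growth crushes the decline" behavior described above.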

Yet, if we were to stop burning hydrocarbons today, then we would see the results of the fat-tail decline. Again, I think the climate scientists understand this fact as well but that idea gets obscured by layers of computer simulations and the salient point or insight doesn't get through to the layman. This is understandable because these are not necessarily intuitive concepts.

The following figure models CO2 uptake if we suddenly stop growing fossil fuel use after the year 2007. We don't stop using oil and coal; we simply keep our usage constant.

Figure 7: Extrapolation of slow kinetics vs fat-tail kinetics

Up to that point in time, a dispersive (i.e. variable) set of rate kinetics is virtually indistinguishable from a single rate (see Figure 4), and you can see that behavior as the curves match for the same average rate. But once the growth increase is cut off, the dispersive/diffusive kinetics takes over and the rise continues. With first-order kinetics the growth continues but becomes self-limiting as it reaches an equilibrium (see http://mobjectivist.blogspot.com/2010/04/fat-tail-in-co2-persistence.html). This works as plain vanilla rate theory with nothing in the way of feedbacks in the loop. When we include a real positive feedback, that curve can increase even more rapidly.
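We can sketch this point numerically in a toy scenario (made-up units: 200 years of Time^4 growth, then constant emissions). The single-rate exponential response levels off after the cutoff, while the fat-tail 1/(1+sqrt(t/T)) response keeps climbing:

```python
import numpy as np

Tr = 42.0
n_hist, n_future = 200, 400          # years of t^4 growth, then constant use
growth = (np.arange(n_hist) / n_hist) ** 4
Pc = np.concatenate([growth, np.full(n_future, growth[-1])])

m = np.arange(len(Pc))
R_exp = np.exp(-m / Tr)                   # first-order (single-rate) kinetics
R_fat = 1.0 / (1.0 + np.sqrt(m / Tr))    # dispersive fat-tail response

C_exp = np.convolve(Pc, R_exp)[:len(Pc)]
C_fat = np.convolve(Pc, R_fat)[:len(Pc)]

# After emissions flatten, first-order kinetics equilibrates while the
# fat-tail response keeps climbing
print(C_exp[n_hist + 100], C_exp[-1])    # nearly equal: a plateau
print(C_fat[n_hist + 100], C_fat[-1])    # still rising
```

The divergence between the two curves only appears after the growth stops, which is exactly why the climb-up data alone cannot discriminate between the kinetics.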

Recall that this analysis carries over from studying dispersion in oil discovery and depletion. The rates in oil depletion disperse all over the map, yet the strong push of technology acceleration essentially narrows the dispersed elements so that we get a strong oil production peak, or a plateau with a strong decline. In other words, if we did not have the accelerating components, we would have had a long drawn-out usage of oil that would reflect the dispersion. That explains why I absolutely hate the classical derivation of the Hubbert Logistic curve, as it reinforces the opinion of peak oil as some "single-rate" model. In fact, just like climate science, everything gets dispersed and follows multiple pathways, and we need to use the appropriate math to analyze that kind of situation.

Climate scientists understand convolution, but peak oil people don't, except when you apply the shock model.

That basically outlines why I want to share these ideas with climate scientists and unify the concepts. It will help both camps, simply by dissemination of fresh ideas and unification of the strong ones.




Notes:
[1] Excel VB convolution script

http://www.microsoft.com/communities/newsgroups/list/en-us/default.aspx?dg=microsoft.public.excel.worksheet.functions&tid=933752da-6f86-4af8-9dba-b9edf57f77d9&cat=en_us_b5bae73e-d79d-4720-8866-0da784ce979c&lang=en&cr=us&sloc=&p=1

Copy the function below into a regular codemodule, then use it like

=SumRevProduct(A2:E2,A3:E3)

It will work with columns as well as rows.

HTH,
Bernie
MS Excel MVP

Function SumRevProduct(R1 As Range, R2 As Range) As Variant
' Sum of reverse products: pairs R1 in forward order against R2 in
' reverse order -- one output term of a discrete convolution per call.
Dim i As Integer
' Both ranges must have the same length and be a single row or column
If R1.Cells.Count <> R2.Cells.Count Then GoTo ErrHandler
If R1.Rows.Count > 1 And R1.Columns.Count > 1 Then GoTo ErrHandler
If R2.Rows.Count > 1 And R2.Columns.Count > 1 Then GoTo ErrHandler

' Accumulate R1(i) * R2(n+1-i), indexing by row or column as appropriate
For i = 1 To R1.Cells.Count
SumRevProduct = SumRevProduct + _
R1.Cells(IIf(R1.Rows.Count = 1, 1, i), _
IIf(R1.Rows.Count = 1, i, 1)) * _
R2.Cells(IIf(R2.Rows.Count = 1, 1, R2.Cells.Count + 1 - i), _
IIf(R2.Rows.Count = 1, R2.Cells.Count + 1 - i, 1))
Next i
Exit Function
ErrHandler:
SumRevProduct = "Input Error"
End Function


[2] Try this with Wolfram Alpha. It gets finicky sometimes but it does symbolic algebra fairly well.