
Wednesday, April 14, 2010

Business as Entropic Warfare

I am trying to make sense of "growth" environments by comparing them to a number of natural systems. Working from a basic premise of maximum entropy dispersion, I have applied this idea to almost every problem I have encountered, and it has worked extremely well so far. No one seems to have worked precisely this angle, so I have gotten a lot of mileage out of a relatively simple concept.

Modeling human income distributions has the potential to most closely resemble the relative species abundance and species-area relationships found in living creatures. The artificiality of our own reward system remains the stubborn issue in our own understanding. The following post ties together some loose ends and ultimately shows the power of a relatively simple model.

The Model for Worker Productivity

Assume that an individual worker in a firm gains productivity, C, over the course of time. In a capitalist system, productivity gets measured in monetary terms, so the profit has to come from somewhere. Since we have a limited bucket of money, we can imagine that a portion of the profit must come at the expense of our competitors1. Put together, these ingredients have all the makings of a classic war-gaming strategy.

Texts on non-linear equations occasionally make reference to a simple model known as the Lanchester equation. Apparently conceived during the height of World War I, this formulation tries to model warfare in terms of the size of the opposing forces. Conceptually, we set up the differential equations so that the rate of decrease of one side's numbers is proportional to how many units the other side has. The symmetry of the equations leads to a square law over a range of values. This excerpt from MacKay's paper describes the premise and derivation:


As a simple, deterministic model[1] of a battle, suppose that R(t) red and G(t) green units begin fighting at t = 0, and that each unit destroys r or g (the fighting effectiveness) enemy units in one unit of time, so that

dR/dt = -gG,
dG/dt = -rR    (1)

Rather than solve directly, we eliminate the explicit t-dependence by dividing the second equation by the first and then separating variables: then

\int rR \, dR = \int gG \, dG    (2)

and we see that

rR^{2} - gG^{2} = constant (3)

This has remarkable implications.
(excerpt from "Lanchester combat models" by N. J. MacKay)
I am not enamored with using non-linear differential equations (more on this later), but the Lanchester construction has the benefit of quickly getting to the crux of the situation. Concentrating on the green units for the moment and treating the opposing forces as business competitors, the lower-case g acts as a growth constant and the upper-case G becomes equivalent to the productivity C (instead of a force number). In other words, productivity stands in for battle force strength. As formulated, this equation describes a deterministic system, and we have to add disorder to make sense of it in a real situation.
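As a quick illustration of where the square law comes from, a few lines of simulation suffice. The sketch below (my own toy example, not from MacKay's paper; the force sizes and effectiveness values are made up) integrates Eq. (1) with crude Euler steps and shows that rR^2 - gG^2 barely drifts from its starting value:

# toy Euler integration of the Lanchester equations dR/dt = -gG, dG/dt = -rR
r, g = 0.8, 0.5          # fighting effectiveness (illustrative values)
R, G = 100.0, 120.0      # initial force sizes (illustrative values)
dt = 1e-4

invariant_start = r * R**2 - g * G**2
while R > 0.0 and G > 0.0:
    R, G = R - g * G * dt, G - r * R * dt   # simultaneous update from old values

# the square-law quantity is conserved along the trajectory
print(invariant_start, r * R**2 - g * G**2)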

Realistically, all of these parameters (C, g, G) can vary wildly over a large state space. We no longer consider this a battle between two opposing forces, but instead envision a multiplicity of battles occurring over a highly disordered environment -- perhaps one of maximum entropy content. We have no way of gleaning any of the parameters of this space, so we instead use the Principle of Maximum Entropy to fill in the missing pieces.

As a first step, if we look at the attractor equation (3) in the excerpt above and consider that the quadratic term stays within a constraint, we have a ready-to-use Maximum Entropy (MaxEnt) variate constraint. In other words, gG^{2} can likely vary uniformly anywhere within a range up to a maximum constant value. The other MaxEnt variate, r, has at least a mean value, and so, based on that limited information, we make r an exponential random variate. This remains consistent with the approach I have used in the past in applications such as Dispersive Discovery -- all rate parameters disperse according to the maximum entropy principle.
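Spelled out, the two MaxEnt assignments (my shorthand: M stands for the attractor constant gG^{2}, and \bar{r} for the mean rate) are the standard maximum-entropy densities for a variate known only to be bounded and for a variate with only a known mean:

p(M) = \frac{1}{Max}, \qquad 0 \le M \le Max

p(r) = \frac{1}{\bar{r}} \, e^{-r/\bar{r}}, \qquad r \ge 0

Everything that follows scales \bar{r} to 1.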

With g scaled to 1 and G set to C as our productivity variate, the entropic dispersion works out to the uniform representation of a cumulative dispersion probability distribution. I have derived this ad nauseam on this blog and on TOD, so I won't repeat the full derivation:


P(C) = \frac{C^{2}}{Max}\left(1 - e^{-Max/C^{2}}\right)
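For the record, the one-line version of that derivation: the productivity pinned by the attractor satisfies rC^{2} = M, so with r exponential (unit mean) and M uniform on [0, Max],

P(C) = \Pr\!\left(\sqrt{M/r} \le C\right) = \frac{1}{Max}\int_{0}^{Max} e^{-M/C^{2}} \, dM = \frac{C^{2}}{Max}\left(1 - e^{-Max/C^{2}}\right)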

So now we need data against which to compare this model. Given that we have only a single parameter to adjust, Max, this should prove a black-and-white case study.

Interesting data comes from a paper, "A Stochastic Model of Labor Productivity and Employment", published recently by the Research Institute of Economy, Trade and Industry (RIETI) and Kyoto University. The model associated with the paper seems a bit complicated, but it comes with a mind-boggling set of productivity data from small to medium-sized Japanese business firms. It covers essentially a million firms and over 15 million workers, and the authors created a set of probability distributions describing, among other things, worker productivity.

Figure 1 shows the rolled-up Japanese firm data along with the single-parameter fit to the entropic dispersion model.

Figure 1: The entropic dispersion model fit to the data of Japanese worker productivity.

This fits the fat tail of the curve very well, and shows indications of fitting the low-productivity statistics as well. The value of Max is 470 million. The authors of the study hedged their bets by stating that the tail showed an exponent of 1.88, but the entropic dispersion model definitively places this as a power of 2.
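The power of 2 drops straight out of the model's asymptotics: expanding the exponential for C^{2} \gg Max gives

P(C) \approx \frac{C^{2}}{Max}\left(\frac{Max}{C^{2}} - \frac{Max^{2}}{2C^{4}}\right) = 1 - \frac{Max}{2C^{2}}

so the complementary cumulative falls off exactly as 1/C^{2}, with no freedom to drift to 1.88.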

We can also look at the area around the knee of the curve by magnifying that portion of the data. I added two fine-tuning parameters to match the curvature observed. At low values of C, we expect a term linear in C with coefficient v, according to Lanchester (i.e. the linear law). We also expect a minimum value of the attractor constant, which we call Min.


P(C) = \frac{C^{2}+vC}{Max-Min}\left(e^{-Min/(C^{2}+vC)} - e^{-Max/(C^{2}+vC)}\right)
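This remains the same MaxEnt average as before, with the attractor constant M now uniform on [Min, Max] and the quadratic term generalized to C^{2}+vC:

P(C) = \frac{1}{Max-Min}\int_{Min}^{Max} e^{-M/(C^{2}+vC)} \, dM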

The model fit shown below in Figure 2, for v=3500 and Min=10 million, visually amounts to a slight perturbation, but it adds completeness to the model. The hair-raising agreement with the data points to the possibility that humans amount to mere statistical billiard balls in the entropic universe. All the individual game-playing relationships constituting the micro-economics of productivity get swept into a bigger econophysics picture that fits better than anything I have so far seen describing the statistical mechanics of economics. The warfare of business acts precisely like an entropic swarm of ants filling in all of space. Mind-boggling.

Figure 2: Fit to the low productivity values in the knee of the curve of Figure 1.

A big part of the success of this fit comes from the good statistics of a large data set. Having 15 million data points at your disposal does wonders for reducing statistical noise, as one can see from the low amount of spikiness in the PDF data of Figure 2. We have no need for any special filtering apart from tabulating the histogram. One can but wonder what we could do with our oil depletion models if we had access to equally comprehensive data sets -- see my post on shocklets for a PDF that corresponds to the productivity of a generic North Sea oil reservoir. The oil modeling has different constraints but the same dispersive model; we just have to deal with more noise from such a limited data set.

As a verification, I wrote a Monte Carlo simulation to mimic the random nature of the entropic dispersion model. The code for the MC results figure below essentially draws random numbers for three variates: an exponential variate for the rate parameter labelled Dispersion, a uniform variate for the attractor constant between Min and Max, and a random draw for the value of C, which assumes an ergodic traversal of all possible states. It took a few minutes to execute the simulation, but the analytical model remains much more practical.

Figure 3: Monte Carlo verification

import math
import random

FULL_SPAN = 10_000_000.0     # range of the uniform draw for raw productivity C
NUM_SAMPLES = 1_000_000_000  # as in the original run; reduce for a quick test
MAX = 4.7e8                  # upper bound of the attractor constant
MIN = 100.0                  # lower bound of the attractor constant
VEL = 3500.0                 # linear-law coefficient v

histogram = {}               # counts per decade bin of log10(C)

for _ in range(NUM_SAMPLES):
    dispersion = random.expovariate(1.0)        # exponential rate variate
    c = (1.0 - random.random()) * FULL_SPAN     # uniform on (0, FULL_SPAN], avoids log10(0)
    g = dispersion * (c * c + VEL * c)          # the dispersed Lanchester invariant
    m = (MAX - MIN) * random.random() + MIN     # uniform attractor constant
    if g < m:                                   # under the constraint: keep the drawn C
        binno = int(math.log10(c))
    else:                                       # capped: solve dispersion*(C^2 + v*C) = M for C
        binno = int(math.log10((-VEL + math.sqrt(VEL * VEL + 4.0 * m / dispersion)) / 2.0))
    histogram[binno] = histogram.get(binno, 0) + 1

# emit the complementary cumulative distribution, one point per decade bin
cumulative = NUM_SAMPLES
for i in sorted(histogram):
    cumulative -= histogram[i]
    print(f"{i},{cumulative / NUM_SAMPLES}")
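And to compare the MC output against the closed form directly, the two-parameter model from earlier evaluates in a couple of lines (a sketch using the same constants as the simulation, with the complementary cumulative printed to match the MC output):

import math

MAX, MIN, VEL = 4.7e8, 100.0, 3500.0

def p_cumulative(c):
    # entropic dispersion model:
    # P(C) = (C^2+vC)/(Max-Min) * (e^(-Min/(C^2+vC)) - e^(-Max/(C^2+vC)))
    q = c * c + VEL * c
    return q / (MAX - MIN) * (math.exp(-MIN / q) - math.exp(-MAX / q))

for i in range(8):                      # decade points 10^0 .. 10^7
    c = 10.0 ** i
    print(f"{i},{1.0 - p_cumulative(c)}")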
The fact that this model applies to Japanese business culture, which may in fact turn out more homogeneous than, say, USA firms, may not prove important. I would suggest that the Japanese face the same vagaries of tax structure, regulations, etc., but in the end it may not matter. The fundamental nature of entropy smears all these details together.

In a previous post, I looked at income distributions for the USA. I had to go through some gyrations to factor out the elements of compounding growth. What makes Japan different? Perhaps not much, as the USA data has a smaller sampled data set, and that noise may obscure the same underlying distribution. If we look at income alone, we can see the same general shape for US income distributions. The author of the study I had concentrated on, Yakovenko, may have taken liberties by creatively graphing the income data to artificially separate the two regimes. He claimed that the lower reaches of the income curve follow Boltzmann-style statistics and the upper a Pareto tail. This may still apply for the USA, yet if you look closely at the curves below, they do in general follow an inverse-square fat tail. In retrospect, I wasted quite a bit of time trying to reconcile Yakovenko's view of the statistics with my own. I have see-sawed on this way too much, but with the square law a clear winner with respect to the Japanese data, it will make sense to recast those findings, particularly in regard to compounding growth due to equity investments, which may have a bigger impact on individual income than worker productivity.

Interestingly, this square-law dependence doesn't work for firm size as a whole. The distribution of firm sizes follows regular linear entropic dispersion, as I described here. Firms behave more like species, in that the relative sizes of firms get compared, not measures of productivity. This gets discussed further in the paper "Labour Productivity Superstatistics" by Hideaki Aoyama, Hiroshi Yoshikawa, Hiroshi Iyetomi, and Yoshi Fujiwara: http://arxiv.org/PS_cache/arxiv/pdf/0809/0809.3541v1.pdf

Aoyama et al elaborate on the notion of superstatistics, which in general agrees with the entropic dispersion approach. The generality of superstatistics arises from applying successive layers of stochastic modeling, which I have no qualms with. This concept first appeared in the physics literature in 2003, which indicates that it has not quite taken hold yet -- worth keeping watch over.
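For reference, the canonical statement of superstatistics (from Beck and Cohen's 2003 formulation, paraphrased in my notation) is exactly this layering: an effective Boltzmann factor obtained by averaging over a distribution f(\beta) of inverse temperatures,

B(E) = \int_{0}^{\infty} f(\beta) \, e^{-\beta E} \, d\beta

which has the same structure as averaging e^{-M/C^{2}} over the spread in the attractor constant above.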

Further Discussion

I recently ran across an interesting paper by Robert Rosen called "On Models and Modeling". In this essay Rosen, a theoretical biologist, covers some of the pitfalls of non-linear modeling, especially in terms of Volterra's role in formulating the Lotka-Volterra equations of biological modeling. As I noted earlier, the Lanchester model differs little from the Lotka-Volterra equations used to model predator/prey relationships.

Rosen suggests that "good mathematicians often make bad modellers" because they don't make the necessary connection to reality. The following excerpt can apply to Eq (1):
This is now, apparently, a purely formal, mathematical problem, which pure mathematicians can happily attack without worrying further about the external biological referents that gave rise to it. And indeed, there are a substantial number of papers that investigate precisely this problem -- papers that study the existence of such attractors and the details of how trajectories approach them. Good, subtle mathematics here, but very poor modelling.

Why? Simply because, when population sizes get small, the rate equations in (1) progressively lose their meaning; they lose their contact with the external referents they are supposed to describe. The use of such systems of differential equations, and of all the tools of analysis that come with them, are tacitly predicated on a hypothesis, namely, that population sizes are large enough to justify it. When this hypothesis is not satisfied, as it is not whenever we are close to the bounding hyperplanes of the first orthant, the properties of (1) are utterly artifactual as far as population biology is concerned.

In fact, the deterministic rate equations in (1) are themselves an approximation to something else -- an approximation that is only good under certain conditions, and no good otherwise. We may look upon them as a macroscopic version of underlying microscopic processes, which are not themselves governed by those equations. Intuitively, by forcing population sizes to become small, we correspondingly force the underlying microscopic properties to become more and more prominent until they simply swamp everything else.
The critical point in Rosen's interpretation is that once the model gets out of a deterministic regime, say the attractor of Eq. (3), then anything can happen. I suggest that, at the very least, a dispersion of the parametric growth factors may come into play. In a similar fashion, ecological modeler Bill Shipley (who knows some French) dug out Volterra's original work and also interpreted his findings as a case for applying the right amount of disorder:
Curiously, given the historical dominance of the demographic Lotka-Volterra equations, Volterra recognized the difficulties of this approach and even considered a statistical mechanistic approach (22). Very few authors have followed his lead (23–31).
Science, 3 November 2006: Vol. 314, no. 5800, pp. 812-814

So even the originator of the equations points out their inadequacy, yet people continue to use them without applying the non-deterministic elements! I believe the key is just as Shipley states: recast all the complexity arguments as stochastic problems (governed by probability and statistics) and start using entropy arguments properly.


Although I used the same math in most of my previous oil discovery modeling posts, I can now go back and look again at the dispersive discovery derivation of the logistic and perhaps cast it as an epic productivity war between oil and man. We know who wins that one; we only need to name the parameters. I predict that Max = URR and that gC acts as the accelerating discovery rate. As Hannibal Smith said, I love it when a plan comes together.


Notes

1 edit: Or from a huge debt, implying a debt-based economy. As long as a source of debt exists, such as cheap energy (i.e. oil), this can go on indefinitely. However, once this dries up, the only source left remains zero-sum con games, which necessitate an infinite supply of rubes.

2 Comments:

Joshua Stults said...

So even the originator of the equations points out their inadequacy, yet people continue to use them without applying the non-deterministic elements!

This is true of the Lorenz model too; it becomes a toy for mathematicians when it takes on unphysical parameter values.

11:21 AM  
@whut said...

Agree. The number of unphysical fractal algorithms that mimic nature is huge. Mathematicians may fiddle with these because they might describe some other phenomena, in a heuristic fashion. In economics, the quants invoke ideas like negative probabilities, and I have to shake my head.

The number of physically-possible algorithms that model the behavior well while also helping us to understand the fundamentals is quite small.

1:28 PM  



"Like strange bulldogs sniffing each other's butts, you could sense wariness from both sides"