
Thursday, May 06, 2010

Wind Energy Dispersion Analysis

subtitle: Wind is entirely predictable in its unpredictability

A few weeks ago I wrote about how to derive wind speed characteristics from a straightforward maximum entropy analysis: Wind Dispersion and the Renewable Hubble Curve. This assumed only a known mean of wind energy levels (measured as power integrated over a fixed time period).

From this simple formulation, one can get a wind speed probability graph. Knowing the probability of wind speed, you can perform all kinds of interesting extrapolations -- for example, how long it would take to accumulate a certain level of energy.
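As a back-of-the-envelope illustration (a sketch only, and the 178 MWh mean hourly energy is simply the value implied by the exponential fit shown later in this post), the exceedance probability falls straight out of the maximum entropy form:

```python
import numpy as np

# Sketch only: assume the hourly wind energy follows the maximum-entropy
# (exponential) distribution with mean E0. The 178 MWh figure is an assumed
# value, carried over from the exponential fit discussed later in the post.
E0 = 178.0   # mean hourly energy, in MWh

def p_exceed(E, E0=E0):
    """P(hourly energy > E) under the maximum-entropy distribution."""
    return np.exp(-E / E0)

# Example extrapolation: the chance that any given hour delivers more than
# 500 MWh comes out to about 6%.
print(p_exceed(500.0))
```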

I received a few comments on the post, with one by BDog pointing out how the wind flow affects the rate of energy transfer, i.e., the kinetic energy carried by a volume of air gets pushed along at a rate proportional to its speed. I incorporated that modification in a separate calculation and did indeed notice a dispersive effect on the output. I hadn't picked up on this at first, so I edited the post to include BDog's correction.

As a fortunate coincidence, Jerome posted a wind-themed article at TheOilDrum and in the comment section LenGould volunteered a spreadsheet of Ontario wind speed data (thanks Len).

Quoting from Len's comment:

In the past 12 months, the max output was 1017 MW, so there's at least that much online, quite widely distributed across the 500 mile width of the southern part of the province near the Great Lakes (purportedly excellent wind resource territory).

On April 20th from 8:00 to 10:00 AM, the output averaged 3.5 MW. (0.34%)
On Mar 16th from 11:00 AM to 1:00 PM, the output averaged 4.0 MW. (0.39%)
On Mar 9th from 10:00 AM to 6:00 PM, the output averaged 6.7 MW. (0.66%)

That's just a few random picks I made in peak demand hours. I've done thorough analysis of this before and found the data to completely contradict your statement. These wind generators aren't anywhere NEAR to baseload, and look like they never will be, since winds from here to North Dakota all travel in the same weather patterns.

I used LenGould's data set to try to verify the entropic dispersion model.

The data file consisted of about 36,000 sequential hourly measurements in terms of energy (kilowatt-hours). The following chart shows the cumulative probability distribution function of the energy values. This shows the classic damped exponential function, which derives from either the Maximum Entropy Principle (probability) or the Gibbs-Boltzmann distribution (statistics). It also shows a knee in the curve at about 750 kWh, which I assume comes from a regulating governor of some sort designed to prevent the wind turbine from damaging itself at high winds.
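A minimal sketch of that check follows; a synthetic exponential sample stands in for LenGould's spreadsheet so the snippet runs on its own, and the 178 MWh mean is an assumed value.

```python
import numpy as np

# Synthetic stand-in for the ~36,000 hourly readings; the real analysis
# would load the values from LenGould's spreadsheet instead.
rng = np.random.default_rng(0)
energy = rng.exponential(scale=178.0, size=36_000)

# Empirical complementary CDF: P(E > x) for each sorted value.
xs = np.sort(energy)
ccdf = 1.0 - np.arange(1, xs.size + 1) / xs.size

# For a damped exponential, log P(E > x) is a straight line with slope -1/E0,
# so a simple line fit recovers the mean E0. A knee near the turbines' rated
# output would show up as a sharp break from this line.
slope, intercept = np.polyfit(xs[:-1], np.log(ccdf[:-1]), 1)
print("estimated E0 =", -1.0 / slope)
```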

I also charted the region around zero energy to see any effect from the air-flow transfer regime (which should be strongest near zero). In this regime the probability would go as sqrt(E)*exp(-E/E0) instead of exp(-E/E0). Yet only a linearized trend appears, courtesy of the Taylor series expansion of the exponential around E=0.
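For concreteness, here is a small numerical comparison of the two candidate forms near E=0 (a sketch, using the same assumed mean as above): the plain exponential's cumulative probability linearizes to E/E0, the trend the chart shows, whereas the flow-transfer form would rise from zero like sqrt(E).

```python
import numpy as np

# Near-zero comparison (sketch; E0 is the assumed mean from above).
E0 = 178.0
E = np.linspace(0.0, 0.1 * E0, 6)

cumulative = 1.0 - np.exp(-E / E0)        # MaxEnt form: ~ E/E0 near zero
linearized = E / E0                       # first-order Taylor term
flow_form = np.sqrt(E) * np.exp(-E / E0)  # sqrt(E)*exp(-E/E0) from the flow argument

for row in zip(E, cumulative, linearized, flow_form):
    print("E=%7.2f  1-exp(-E/E0)=%.4f  E/E0=%.4f  sqrt-form=%.4f" % row)
```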


Remember that this data comes from a large set of independent turbines. You might think that, because of the law of large numbers, the distribution would narrow or show a peak. Instead, the mixture of these turbines over a wide variation in wind speed provides a sufficiently disordered path that we can apply the maximum entropy principle.

Having gained confidence in the entropic dispersion model, we can test the obvious nagging question behind wind energy -- how long do we have to wait until we get a desired level of energy?

I generated a resampled set of the data (resampled only in the sense that I used a wraparound at the 4-year length of the record to create a set free from any boundary effects). The resampling essentially generates a histogram of the time it takes to reach a given energy level. I chose two levels, E(T)=1000 MW-hrs and E(T)=200 MW-hrs, and plotted the results below with the model fit (parameters fixed beforehand) alongside the data.
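A sketch of that wraparound resampling is below; synthetic hourly data again stands in for the real record, and I assume the target is always reachable within one pass through the record.

```python
import numpy as np

def hours_to_reach(energy, target):
    """From every starting hour, count how many hours it takes for the
    accumulated energy to reach `target`, wrapping around at the end of
    the record so that boundary effects disappear."""
    n = len(energy)
    csum = np.cumsum(np.concatenate([energy, energy]))  # doubled for wraparound
    waits = np.empty(n, dtype=int)
    for start in range(n):
        base = csum[start - 1] if start > 0 else 0.0
        stop = np.searchsorted(csum, base + target)  # first hour the running sum hits the target
        waits[start] = stop - start + 1
    return waits

# Synthetic stand-in data so the sketch runs on its own (mean 178 per hour).
rng = np.random.default_rng(0)
energy = rng.exponential(scale=178.0, size=36_000)
waits_200 = hours_to_reach(energy, 200.0)     # E(T) = 200 MW-hrs
waits_1000 = hours_to_reach(energy, 1000.0)   # E(T) = 1000 MW-hrs
print(np.median(waits_200), np.median(waits_1000))
# A histogram of waits_200 and waits_1000 gives the waiting-time
# distributions to compare against the dispersive model below.
```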




The model from the previous post predicts the behavior used in the two fits:
p(t | E > E(T)) = T * exp(-T/t) / t^2
where T is the average time it will take to reach E(T). From the exponential fit in the first figure, this gives T = 200/178 and T = 1000/178, respectively, for the two charts. As expected, we get the fat tails that fall off as 1/t^2 (not 1/t^1.5 as the velocity-flow argument would support).
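As a quick check on those numbers (a sketch; the 178 is carried over from the exponential fit), the model has the closed-form cumulative exp(-T/t) -- the same p = exp(-T/t) that comes up in the comments below -- so the chance of hitting a target within a given number of hours can be read off directly:

```python
import numpy as np

def wait_pdf(t, T):
    """Entropic dispersion model: p(t | E > E(T)) = T * exp(-T/t) / t**2."""
    return T * np.exp(-T / t) / t**2

def wait_cdf(t, T):
    """Closed-form cumulative: probability that E(T) is reached by time t."""
    return np.exp(-T / t)

T_200 = 200.0 / 178.0      # hours, from the exponential fit
T_1000 = 1000.0 / 178.0

t = np.array([1.0, 3.0, 9.0, 24.0])   # hours
print(wait_cdf(t, T_200))             # chance of accumulating 200 MW-hrs within t hours
print(wait_cdf(t, T_1000))            # same for 1000 MW-hrs
print(wait_pdf(np.array([10.0, 100.0]), T_1000))  # density thins out roughly as T/t**2
```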

The models do not work very effectively at the boundary conditions, simply because the wind turbines' limiting governors prevent the accumulation of energy much above the 1000 MWh level; reaching that level occurs either in a short amount of time or at long times as a Poisson process of multiple gusts of lower energy. That said, any real deviations likely arise from short-duration correlations between wind energy measurements spaced close together. We do see this as the lower limit of E(200) shows more correlation curvature than E(1000) does. Wind speeds do change gradually, so these correlations will occur; yet they look like minor perturbations on the fundamental entropic dispersion model, which works quite well under these conditions.

As a bottom line, this analysis tells us what we already intuited. Because of intermittency in wind speed, it often takes a long time to accumulate a specific level of energy. Everyone knows this from day-to-day experience dealing with the elements. However, the principle of maximum entropy lets us draw on some rather simple probability formulas to make excellent estimates for long-term use.

The derivation essentially becomes the equivalent of a permanent weather forecast. Weathermen perform a useless function in this regard. Only something on the scale of massive global warming will likely affect the stationary results.

6 Comments:

Blogger Unknown said...

The "probability distribution of wind energy graph" is impressive.
So, is the knee in the curve because wind turbines already reach their maximum power at, for instance, 12 m/s wind speed, or because they turn off at 25 m/s, or both?
(Here's a link to a typical windturbine incl. power graph: http://alturl.com/8qn3)

And please excuse my ignorance but I don't understand the "time to reach 200 MWh graph". Wouldn't the probability go up the more time has passed or what am I misunderstanding?

7:53 AM  
Blogger Unknown said...

It may also be interesting to see how the data from Canada compares with the data from Germany.
Here's a link with wind power data in 15-minute steps from Transpower, which primarily controls all the wind farms in Northwest Germany:
http://alturl.com/zubm (you would need to download the csv-files at the bottom.)

(This data also includes the wind power prognosis which is always done at least 16 hours in advance. I guess one could also make an analysis of the probability of the prognosis-accuracy.)

8:17 AM  
Blogger @whut said...

I think it is a governor that clips the power at a certain level so it won't get damaged at higher rotation speed.
That link is an excellent spec that shows exactly what's happening.

It's not a cumulative probability graph, but a probability density graph. You can get the cumulative by integrating along time.

6:59 PM  
Blogger Unknown said...

Thank you, now it's clear. (I just wasn't sure what you meant by prevention of damage, since at a certain cut-out wind speed, wind turbines stop producing power entirely (which is not what you meant).
The generator determines the rated max power, and the rotor torque and speed are controlled accordingly (typically by pitching the blades). A relatively small wind turbine generator powered by a relatively large rotor diameter will shift the power curve as well as the cut-out speed to the left: http://alturl.com/5cab )

Anyway, when one solves your integral one gets: p = exp(-T/t), which would mean that 200 MWh can be reached with a probability of over 0.9 after 9 hours?
(If this were true, it would be almost unbelievably simple. Why is this not commonly applied/taught, or what am I missing?)

2:45 PM  
Blogger @whut said...

Good question. I don't know why it's not taught. Just about everything I write about on this blog is inexplicably not taught. I presume it has something to do with the subject matter of disordered systems not falling into a classical statistical category.

I will try to do something with the German data when I get a chance. Thanks.

10:27 PM  
Blogger sman said...

Thank you for the information Energy Analysis in UK

10:48 PM  



"Like strange bulldogs sniffing each other's butts, you could sense wariness from both sides"