If tower thump were not an issue, then a one-bladed turbine would be the best possible turbine (cost-efficiency-wise).
Further down he even argues for turbines with 12 blades. That's a ridiculous number and quite inefficient, unless you want very high torque at low rpm, such as for directly driving a mechanical water pump.
Another example - commercial aircraft are known to have a relatively inefficient shape, but that's cheaper to produce than the more complex and more aerodynamic alternatives even though they would reduce fuel costs.
An article purporting to 'discover' that design would probably be more shocking if it wasn't already in common use for sensible applications.
I'm not an expert, but I'm pretty sure there is a big difference between a propeller putting energy into a fluid and a device trying to pull energy out of it.
A quick google didn't lead me to an authoritative source, but I know I have read many times that a one-bladed propeller is the most efficient.
Between that and your points, my impression is that many-blade designs aren't useful beyond small scale generation, which is exactly where they're still in use. There are an awful lot of three-blade advantages that went completely neglected in this piece.
I always found the one blade designs most fascinating and vividly remember a video of the three one blade rotors of the Monopteros 50. Each blade was 56 m long.
There was also GROWIAN, a two blade design with 100 m diameter on a 100 m tower.
All of these projects were abandoned and the towers demolished. The three blade design has won for the reasons you stated.
I searched hard for the Monopteros video but there isn't even a good picture which shows all three towers. By-catch:
Wikipedia page about unconventional wind turbines 
In this example, it's used purely because of compactness when stowed.
Of course, that's much less appealing for commercial installations, where access to the blades is expensive and difficult.
edit: also, even blade counts on big, high-powered turbines tend to get into problems with resonance by being symmetrical.
Some early wind parks used lattice towers, and some of those survive to the present, but they are now known to perform worse than tubular towers.
The profile of a hyperboloid wind tower may be altered such that the cross section at maximum height minus blade radius is at a minimum. The existing Shukhov towers, with their largely conical profiles, were used for power transmission lines and a radio broadcast antenna.
Check this out: http://www.pbs.org/wgbh/nova/next/tech/proposed-hyperboloid-...
That's one example of lattice towers in production that I know of (though not like the ones you describe, and I have never seen those being used for windmills).
This is why it is sometimes difficult to avoid premature optimization: post-maturity optimization tends not to happen, even when it may be warranted. I know of no wind farm anywhere that is used for side-by-side comparisons of wind turbine technologies. It would very likely have to be owned by a university and supported by public funds.
I have never seen a hyperboloid lattice or diagrid structure in person, but they are not entirely unheard of for shapes that do not require a lot of space efficiency in the tower portion, such as for lighthouses, observation decks, water towers, radio towers, etc.:
If you are a manufacturer, though, you have all of the incentives to come up with something more efficient. Wind turbines aren't controlled by a monopoly, are they?
Siemens has largely gone to a system where everything is in the nacelle. This makes it easier to completely test and connect the power electronics and controls, since more of the work is done at the factory instead of in the field. Labor and equipment are much more expensive in the field than in the factory.
The main way this is achieved is to take a huge rotor and put it on a small generator. More energy is lost at high winds, but the ramp up from no generation to maximum is much faster. Since most projects spend the majority of their time along that ramp rather than at rated power this results in large gains in annual energy.
This means that some sites once considered marginal end up being good to very good. One area I've been working in saw a more than 25% increase in predicted energy for the same capital cost with the new turbine types.
For a low to middling wind site in the US, with a 7.5 m/s average wind speed represented by a Rayleigh distribution:
Using a GE 1.5-77, the most common turbine in use in the US in the late 2000s, I calculate 40.5% gross capacity factor, and estimate net capacity factor of 33.5%.
Using a GE 2.0-116, one of the most energetic turbines on the market for low-wind sites, I calculate a 53.2% gross capacity factor, and estimate a net capacity factor of 44.1% using the same annual loss parameters as before.
For a hypothetical project of equivalent size we get over 30% more energy every year from the same wind by only changing from the old technology to the new, and we build less infrastructure to support it. The new machines are taller, and we probably need the same amount of total land for the project.
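For anyone who wants to play with these numbers, the gross-capacity-factor calculation is easy to sketch. This uses the Rayleigh distribution mentioned above plus a toy cubic power curve - the cut-in, rated, and cut-out speeds here are made up for illustration, not the actual GE curves:

```python
import math

def rayleigh_pdf(v, v_mean):
    # Rayleigh wind-speed PDF, parameterized by the mean speed
    return (math.pi * v / (2 * v_mean**2)) * math.exp(-math.pi * v**2 / (4 * v_mean**2))

def toy_power_curve(v, rated_kw=2000.0, cut_in=3.0, rated_v=12.0, cut_out=25.0):
    # made-up illustrative curve (NOT real GE data): cubic ramp from
    # cut-in to rated speed, flat at rated, zero outside the range
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated_v:
        return rated_kw
    return rated_kw * (v**3 - cut_in**3) / (rated_v**3 - cut_in**3)

def gross_capacity_factor(power_curve, rated_kw, v_mean, dv=0.01):
    # expected power over the wind distribution, divided by rated power
    mean_power = sum(power_curve(i * dv) * rayleigh_pdf(i * dv, v_mean) * dv
                     for i in range(int(40 / dv)))
    return mean_power / rated_kw

cf = gross_capacity_factor(toy_power_curve, 2000.0, 7.5)
```

Swapping in a bigger rotor on the same generator (i.e. lowering `rated_v`) pushes the capacity factor up, which is exactly the specific-power effect described above.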
Cringely's assumption that new designs aren't being tried because the industry is stupid is almost certainly wrong. More likely, they're still going after the low hanging fruit, and will start optimizing design for suboptimal locations once they've fully exploited the optimal ones. But there's a lot less profit in using more expensive turbines to harvest less wind.
Another thing vendors are doing now is purposefully running generators beyond their design at higher wind speeds (getting 2.2 MW from a 2.0 MW generator, for example), provided that other operating parameters are measured to be within range. This is usually driven by cooling requirements.
It's easier to dismiss radical thinking than to continually pursue it in the face of pervasive dismissiveness.
It's not. There are a limited number of good onshore wind sites near big loads. California has only four good sites, and they all have big wind farms. There are lots of sites with enough wind northward from the Texas panhandle, but no big loads within a thousand miles.
These may exist but I've not yet seen them.
"Changing Winds: The Evolving Wind Turbine":
It has (what appears to my untrained eye to be) a quick overview of various designs. To me it seems that vertical turbines make a lot of sense - if for no other reason than it would seem like most of those designs get you more "turbine" for your build-costs/materials (and less "inert tower").
Obviously, this would be much more costly because of the additional big generator and the complication of the drivetrain. But it'd let you harvest a bunch of energy when it's really windy. And with a vertical-axis mill, not only would this be better for wildlife (lower overall speeds), the generators would be on the ground, not way up in the air, and driven by a shaft from the mill through its tower, so servicing would be easier and the weight and size of the generators and the drivetrain wouldn't be an issue.
This idea has higher turbine costs (and site profitability already ranges from impressive to deeply marginal), and the extra profit at the high end isn't great. Cringely overstates the money lost in high-wind conditions - most shutdowns are for low wind or to avoid damage in storm conditions, which are rather different from smooth, profitable high wind.
Further, it might incur major operating-cost hits even with ground-based generators, simply by being a major break with existing systems. Obviously that's surmountable, but it means that the system has to be better for a lot of sites to be worthwhile.
That said, there are definitely vertical mill projects in the works, and I wouldn't be surprised if some of them have switchover generators.
This works, but is less efficient and mechanically more taxing on the machine than variable speed, variable pitch angle systems.
* There is such a huge difference between a 12m blade and a 60m blade that I don't see how the comparison is at all relevant. We played with smaller turbines for decades before we reached this sort of price efficiency.
* Betz' law explicitly disavows picking a number of blades: "Assumptions: 1. The rotor does not possess a hub and is ideal, with an infinite number of blades which have no drag. Any resulting drag would only lower this idealized value. ..."
* Betz' law is a three-dimensional consequence of conservation laws, not an observation about turbulent blade interactions. "Consider that if all of the energy coming from wind movement through a turbine was extracted as useful energy the wind speed afterwards would drop to zero. If the wind stopped moving at the exit of the turbine, then no more fresh wind could get in - it would be blocked. In order to keep the wind moving through the turbine there has to be some wind movement, however small, on the other side with a wind speed greater than zero. Betz' law shows that as air flows through a certain area, and when it slows from losing energy to extraction from a turbine, it must spread out to a wider area. As a result geometry limits any turbine efficiency to 59.3%."
* We already have a good idea what a moderate redesign of the fundamentals of a wind turbine looks like; It points downwind, it's huge like the existing turbines, and its two blades bend. They just need to solve the tower strike problem. https://www.technologyreview.com/s/401583/wind-power-for-pen... https://www.technologyreview.com/s/528581/two-bladed-wind-tu...
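For intuition, the 59.3% figure in the Betz quote above drops out of the one-dimensional actuator-disc model: the power coefficient as a function of the axial induction factor a is C_p(a) = 4a(1-a)^2, and a quick numerical sweep recovers the maximum:

```python
# Power coefficient from the actuator-disc model, as a function of
# the axial induction factor a (fraction by which the wind slows)
def power_coefficient(a):
    return 4 * a * (1 - a)**2

# sweep a over [0, 1] to find the maximum; analytically it is at a = 1/3
best_a = max((i / 10000 for i in range(10001)), key=power_coefficient)
betz_limit = power_coefficient(best_a)  # 16/27, about 0.593
```

Note that blade count never appears: the limit falls out of mass and momentum conservation alone, which is the point being made above.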
In fact I asked why he wasn't going with 5 or 7 or 9 blades (he explained that even numbers tend to be Bad News), and he said the big problem was that, as you increase the number of blades, the proportion of the rotor disk that consists of blade gets higher, which means the force on the tower component gets larger -- it goes something like the third power of wind speed. Every few years or so, hurricane-force winds batter parts of Europe, and therefore there is a massively important design requirement that the wind turbine not be completely obliterated during these violent storms. An airplane with a funky propeller can of course be stored indoors when this happens; a turbine would require maintenance technicians to disassemble each one the day that the high-wind forecast was received.
Googling some of these related issues brings me to the page:
However I cannot say with great certainty whether this is the same person that I heard these things from, or just a peer in the same field with the same general conclusions.
12m blades have an area of 452m^2
60m blades have an area of 11,300m^2
At the same efficiency, you would need 25 times more windmills to generate the same power. Even if Cringely's theory is correct that smaller windmills can be made to be more efficient (doubtful), requiring an order of magnitude more physical hardware (towers, generators, etc.) is going to be a killer cost increase.
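The arithmetic behind the 452 and 11,300 m^2 figures, for anyone checking - swept area goes with the square of blade length:

```python
import math

blade_small, blade_large = 12.0, 60.0
area_small = math.pi * blade_small**2   # swept area, ~452 m^2
area_large = math.pi * blade_large**2   # swept area, ~11,310 m^2
ratio = area_large / area_small         # 25: area scales as length squared
```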
That is going off the assumption that he'll get his optimal running time.
Which is sometimes true, but far more often, it's because those engineers know something you don't, because working on this problem is their job.
There are a few good points here (e.g. per-field profit is a different goal than per-rotor efficiency), but it feels a lot like an out-of-field expert assuming that no one in the field has ever been creative.
That's because Cringely doesn't, either. The starting torque is entirely provided by the wind, not the grid.
Edit (one more thing): At this scale, permanent magnet (PM) machines aren't much more efficient than DFIG machines. It turns out that eddy current losses in the stator back iron remain as a significant term in both machines. But for a DFIG machine, the eddy current losses at light load are reduced by quite a bit relative to the losses produced by a PM machine. Since real-world wind turbines end up operating at partial load for much of their life, the effects basically wash out, or are even a net negative for the PM machine.
Each individual wind turbine is optimized to be the "most efficient" it can possibly be.
But not in the context of the environment, which requires complex control systems and methods to reduce damage in non-optimal conditions.
And not if you take cost and efficiency of the whole system into account, where four times as many small turbines with less complexity run more often and produce more overall output.
But hey, each wind turbine is "optimal." Interesting.
Most complex systems have this property. Even (especially?) your business.
"The logic of the rocket equation is brutal and inexorable", the NASA rocket scientists would say. "As the performance of the rocket engine decreases linearly, the propellant requirements increase exponentially. Therefore it is imperative to have the highest-possible performance from your rocket engines. The combustion of liquid hydrogen / oxygen is the highest-efficiency chemical reaction, and therefore liquid hydrogen is the only propellant we will consider using for our engines."
All of this was correct, but also wrong. It was wrong in a whole-systems context, because:
A.) Liquid hydrogen is much lower-density than traditional rocket fuels like Kerosene, requiring far larger and heavier structures to carry the same amount of energy, and larger and heavier engines and plumbing and everything else -- and the structural mass of a rocket matters.
B.) Liquid hydrogen will boil off and leak through anything, requiring massive amounts of insulation (again, adding weight), and far stricter operational protocols around the fueled vehicle.
C.) Liquid hydrogen is so cold that it will embrittle and shatter ordinary metals and requires much more exotic and expensive metallurgy.
and finally, D.) The alternative, kerosene, gives you less efficient engines but also much lighter-weight vehicles -- and sure, per that inexorable logic of the rocket equation, you need to use more fuel for the same amount of payload, but kerosene is cheaper than milk so who fucking cares?
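To put rough numbers on that tradeoff, here's a back-of-the-envelope with the Tsiolkovsky rocket equation. The Isp values are typical textbook figures for LH2 and kerosene engines, but the stage masses are purely invented to illustrate how a heavier dry mass can eat the Isp advantage - they're not taken from any real vehicle:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, m_wet, m_dry):
    # Tsiolkovsky: dv = Isp * g0 * ln(wet mass / dry mass)
    return isp_s * G0 * math.log(m_wet / m_dry)

# Isp values are typical textbook numbers; the masses are made up
# purely to show the tradeoff (units arbitrary, same wet mass)
dv_lh2 = delta_v(450, m_wet=100.0, m_dry=16.0)  # high Isp, bulky insulated tanks
dv_rp1 = delta_v(340, m_wet=100.0, m_dry=8.0)   # lower Isp, lighter structure
```

With these (again, invented) mass fractions the kerosene stage actually comes out ahead on delta-v despite a 110 s Isp deficit - which is the whole-systems point.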
Anyhow, the upshot of this is that US rocket scientists spent literally tens of billions of dollars developing hyper-locally-optimised rocketry schemes, most of which failed outright. For the remainder, for every dollar of kerosene they didn't have to buy, they probably spent upwards of $1000 on exotic aerospace hardware. Finally, they gave up and just bought kerosene-based engines from the Russians.
So then there's Elon Musk. And of course there's a ton of brilliant rocket science in the Falcon, but much of the reason for SpaceX's success is because it doesn't optimise solely on the rocket science. They optimise for cost as a whole, and consider factors like design, manufacturing, operations, reusability, etc. to be relevant to the question of cost.
For example: the Falcon 9 would undoubtedly be more efficient if it had a hydrogen-based upper stage. But doing that would cause it to lose most of its commonality with the lower stage, requiring an entirely new design, manufacturing, and operations workforce. Which would cost several orders of magnitude more than the extra kerosene required to fly a "sub-optimal" rocket. The reason they've already cut the price of launch by about 80% vs. (say) the Space Shuttle is because they optimised the whole system, keeping the unique part count and operational complexity as low as possible.
I still sometimes hear old-school rocket scientists grumble about how the Falcon is a less optimised vehicle than the Shuttle, though. They just don't get it. Never will.
Anyhow, that was a tangent. Fascinating to think that wind turbines might be amenable to a similar class of disruption.
It's also untrue: US rocketry kept varied propulsion types throughout its history, including both RP-1-only and LH2-only as well as mixed stages, HTPB, and Aerozine. Your painting of rocket scientists as some sort of confederacy of dunces unable to get a grip on tradeoffs is not only inane, it's insulting bullshit.
Everyone who grew up idolizing rocket scientists should read it. If you have problems with missing pages or broken figures, try a different PDF viewer. The PDF has something odd about it that confuses some in-browser viewers.
My account was simplified, but the tangent was long enough already. It's true that in the early days of US rocketry (through the 1960s), there were a large variety of propulsion types that were experimented and deployed. The stasis didn't really set in until the 1970s.
Since then, it's also true that solids and hypergolics continued to have a role, both as strap-on boosters (because gravity losses) and in military systems (and their derivatives) where cryogenic liquids aren't advisable. (The military was always more whole-systems and pragmatic about their approach to rocketry.) Finally, it's also true that non-LH2 legacy rockets (and their closely-related derivatives) continued to fly (because switching costs).
However: in civilian rocketry (eg. NASA), as far as new developments were concerned, the picture is as I described. I was peripherally involved in that scene during the 1990s and had many people at NASA and LockMart tell me personally, quite explicitly, that specific impulse was very nearly the sole determinant of rocketry, and that therefore no fuel other than LH2 would ever be considered. Feel free to look at the kind of arguments cited during the fuel-density wars on sci.space.policy if you want a feeling for the kind of vehemence with which this opinion was expressed.
So, yes, I stand by my story. From the mid-1970s through the mid-2000s, US rocketry development was wholly paralysed, due in very large part to these kind of attitudes and overly-narrow optimisations. As evidence, I present the NASP, X33, and SLI, amongst dozens of smaller, equally failed initiatives. Feel free to counter with some examples of successful new rockets (as opposed to derivatives like the Pegasus, Taurus, and Delta III) developed during that period. The only examples you'll be able to find are Delta-IV -- which switched to LH2 at frankly mind-blowing expense, has had an uninspiring career, and is about to be axed -- and the Atlas V -- which gave up and just bought LOX/RP1 engines from Russia, since the US had lost the expertise to develop decent LOX/RP1 engines itself.
I'm not saying that the US civilian aerospace industry was a confederacy of dunces, just that for 30+ years it had the productive output of one. It was certainly well-supplied with smart people, but yes they were absolutely unable to get a grip on tradeoffs, and it showed.
> is not only inane it's insulting bullshit.
You've got a lot of HN Karma, but you haven't yet learned that people here prefer evidence-backed arguments to ad-hominem attacks?
At the time, however, I recall hearing endless griping from various corners that NASA had the gall to fund a non-LH2 engine. Internally, this was justified as being nothing more than a cheap and cheerful way to get the X-34 to its flight regime -- definitely not a serious rocket engine, no sir! That was LH2's job! Everyone knew that!
The support of the RP1 rebels within NASA wasn't enough to keep it alive, and the program was cancelled after a few years. Later, it was resurrected as the template for SpaceX's first generation engines. So I personally consider this to be the exception that proves the rule.
Your evidence is:
>I was peripherally involved in that scene during the 1990s and had many people at NASA and LockMart tell me personally, quite explicitly, that specific impulse was very nearly the sole determinant of rocketry... Feel free to look at the kind of arguments cited during the fuel-density wars on sci.space.policy if you want a feeling for the kind of vehemence with which this opinion was expressed.
Your entire point is based on bulletin board discussions and anecdotal evidence from being peripherally involved with rocketry in the 90s (which given context, I'm taking to mean "I hung out in space forums and bulletin boards and talked with people who claimed to be rocket engineers.")
I enjoyed reading both of your posts about this (with a grain of salt), but it's a bit silly to call someone out for lack of evidence after writing several hundred words without a single piece of hard evidence supporting it.
Oh for fuck's sake. I don't like arguments from authority, but if you really want one: yes, I hung out on internet space forums in the 90s and am thus aware of their contents. But I also was a member of the Space Access Society and regular attendee of its conferences; an organiser for the National Space Society and regular attendee of its conferences; and an organiser and founding member of the Mars Society and regular attendee of its conferences. In this context, I was personally well-acquainted with folks like Gary Hudson, Robert Zubrin, and Buzz Aldrin, as well as innumerable lesser-known individuals from NASA, Boeing/Lockmart, and the various alt.space renegades. The stories I tell come directly from my contacts with them.
Furthermore, the "evidence" I provided was citations of the NASP, the X-33, and the SLI. The first two represented NASA's flagship attempts to develop new space-launch systems during 80s and 90s; they failed due to weight growth and integration problems stemming from over-optimisation of the (LH2) propulsion system at the expense of all the rest of the vehicle's systems. By the time of the SLI in 2000, RP1 was starting to find a (small) place at the table again, but the program still failed due to too narrow a focus on propulsion optimisations. I figured that anybody familiar with the space industry (as the person I was replying to seemed to be) should be familiar with these failures and their underlying causes; they are not exactly secrets.
I'm absolutely certain that Masklinn did not assert that the optimal design was used every time or that NKoren's comment was completely lacking any accurate information. Neither of these assertions needs to be made to dispute the central idea that US rocket scientists are too incompetent to even consider alternative fuels or broader design tradeoffs.
What do you mean by this? The Mercury Atlas launch vehicle used kerosene / LOX in its first stage. The Saturn V used kerosene / LOX in its first stage and LOX / LH2 in subsequent stages.
Also, for the majority of spaceflight, both manned and unmanned, "US rocket scientists" were the government, right? Especially after WWII.
The STS famously had all sorts of contradictory goals (some came from outside NASA) that resulted in sub-optimal design tradeoffs--which in turn required the highest-performance engines NASA could get their hands on.
From the outside, looking back, it seems like the Space Shuttle was so (unexpectedly) demanding that it basically took over NASA for 25 years. Maybe there just weren't enough resources to pursue other launch vehicle ideas, even if people there recognized the need.
The days up to and including the Apollo missions were the US's glory days; all the great innovation in rocketry was happening then, and it culminated in the Saturn V, which was a big success (esp. considering the level of overall technology back then, and what they were trying to do). And, as you pointed out, it used kerosene. It still remains (I'm pretty sure) the largest and heaviest-lift launch vehicle in all of human history. We still haven't surpassed it, and it was designed in the 1960s.
What have we done since the Apollo program was canceled early? Nothing as notable. A bunch of really expensive launches to LEO with an overpriced space-going pickup truck basically. And a bunch of small launches (satellites, probes, etc.) using rockets that were designed way back then for launching nuclear weapons. We're still highly dependent on rocket engines built by Russia. The whole Space Shuttle idea itself was stupid: the costs and complexity were huge because of the idiotic requirement of being able to retrieve military satellites from orbit and bring them back intact and in secret. If it weren't for that, they never would have come up with such an absurd design, and would have just stuck with much cheaper capsules like the Soyuz and the upcoming manned vehicles by the US.
ISS? Hubble? Various Mars rovers?
Sure they aren't as notable as putting humans on the freaking moon, but that's also a pretty impossibly high bar to match. The only thing that would best it is humans on Mars.
> We're still highly dependent on rocket engines built by Russia.
That's not entirely true. The Soyuz-U does have by far the most launches of any system at 771, true. But the still successful Atlas II (63 launches and also the most reliable system to date) used all Rocketdyne engines. The newer Atlas V does use a Russian engine for the first stage but Rocketdyne for the second.
The Delta IV is also all Rocketdyne. It's also entirely LH2/LOX and yet weighs less than the Falcon 9.
As I said before, nothing as notable.
>Sure they aren't as notable as putting humans on the freaking moon
And it seems you agree with me!!
>but that's also a pretty impossibly high bar to match. The only thing that would best it is humans on Mars.
Wrong, there's plenty of things that would best Apollo. A serious space station (like the rotating one in 2001) perhaps at a Lagrangian point, a Moon base, Moon or asteroid mining operations, orbital construction of a longer-range spacecraft, creation of nuclear spacecraft engines, etc. Basically all the stuff we dreamed about back in the Apollo days and then never bothered to do and just forgot about.
Their robotic missions have been hugely successful.
Consider just the number of them: Viking, Voyager, Galileo, Deep Space, Cassini, New Horizons, Dawn, Kepler, and Juno. That's not even counting our many Mars probes. It's pretty impressive to launch a new interplanetary probe that frequently, and every one of them accomplished its mission.
We have now sent probes to every planet in the Solar System, two dwarf planets, asteroids, and comets. If we are still in an era where crewed interplanetary travel is impractical, we're making the most of it with these uncrewed missions.
Probes are nice for science, but they really don't inspire people the way manned flights, space stations, etc. do. Nor do they require nearly as many resources; they're comparatively cheap and simple. The resources poured into Apollo were enormous, and the technologies derived were hugely significant. What technologies were derived from New Horizons? Probably none. I'm not saying it shouldn't have been launched; it was absolutely a worthy project. But it just doesn't compare to Apollo in any way. So as I said before: "nothing as notable".
No, it's because their system is much less capable and benefits from having a starting point of 50 years of accumulated knowledge.
Take Musk & the Space-X team and plop them in 1980 and you would not get the same Falcon 9 that exists today. You wouldn't get anything close.
Comparing the space shuttle to the Falcon is idiotic, anyway. The shuttle had capabilities that no other launch system has had or will have anytime soon (for better or worse).
You know what would be funny: if they made a CN Tower-sized railgun and railgunned materials to the Moon to build the same thing. LOL
Did scientists rule out railguns?
His engineer just flat out refused to produce any plans for this design. He tried to make something like his plane happen anyway, going without sleep for weeks, and had a meltdown on camera. The show eventually had to be about him giving up on his cherished design and building a throwback to biplanes, with a family of artisans who are experienced in building them.
I had the feeling that Cringely was trying to channel Steve Jobs and it didn't work out. (Note: Jobs drives other people to do impossible missions without sleep, not himself, and doesn't display his work until it's ready, giving the appearance of effortless superiority).
Anyway as for this wind farm idea, the evidence of his friend's alternative propeller design is interesting. But let's also note that that this friend chose not to pursue wind farms, maybe for a reason.
 EDIT: I am not an aviation person, and my memory of this is poor, but from Googling I see apparently he wanted to make it out of unusual materials, have foldable wings, and put the engine behind the pilot. I have a memory of his design requiring the pilot to straddle the drive train to a front-mounted propeller. I don't know how problematic that is, but I remember it as being presented as a major problem.
EDIT 2: To be honest I feel a bit bad now about this being a top-voted comment, as Cringely learned a lesson on camera that many startup people learn behind closed doors – confidence is great, but trying to innovate in too many directions at once will kill you. Maybe that means that he is more cautious now, so perhaps he's really sure this thing will work. On the other hand, it was the first thing I thought of, that maybe he doesn't have a great track record with aviation iconoclasm.
Tries to channel Steve Jobs and it doesn't work out. This is something like common Silicon Valley misstep #2.
Still a functional design though, with certain advantages.
The juiced-up v2, the P-63 King Cobra, had a proper supercharger system. It was more capable at high altitude - though the USAAF felt that the P-51 was better.
After the war, the remaining few P-63s (and P-39s that could be retrofitted with better engines) dominated air racing. Flying very fast at low altitude and turning on a dime (thanks to that mid-engine layout), they generally outflew the P-51s. But since most of them were sent to the Soviets and elsewhere, and our own leftovers were simply scrapped or used as gunnery drones, there are very few left. I have never seen one in person.
Note: I have since discovered I used the wrong viscosity for air on this experiment, so the results aren't valid for use on the earth. (Maybe Jupiter.)
A more intricate genetic algorithm that runs a 3D wind model and accounts for the cost of manufacture would almost certainly produce some more novel solutions that completely demolish our preconceived notions of what turbines should look like, just like the "jellyfish" cross-section. But it would also probably not effectively scale beyond the model size. Genetic algorithms almost always exploit any previously hidden little flaws you may have in your model simulator, such as the rounding error on floating point numbers at the limit of their precision, or even the way in which you represent airflows. They evolve to maximize performance in the modeling environment, so if your model is not perfect, the solution does not always translate to reality.
As such, you always have to test the evolved solutions in the real world before you can say they are better than the designed solution.
In fact, I've thought about the optimal way to do real-world experiments, assuming you want to minimize them as much as possible because making real prototypes is expensive.
I think you could use machine learning on the results of real world experiments to invent a model. It would predict how well each design would do. Then you could optimize the design to the machine learned model. Which of course won't be very accurate. But then it will give you a new test to do in the real world, which will give you data to make it even more accurate.
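A minimal sketch of that loop, with a trivial stand-in for the real-world experiment and a quadratic interpolation standing in for the machine-learned model (a real version would use something like a Gaussian process over many design parameters):

```python
def real_world_test(x):
    # stand-in for an expensive physical experiment; the true
    # optimum (0.7) is unknown to the optimizer
    return -(x - 0.7)**2

def fit_quadratic(pts):
    # interpolate y = a*x^2 + b*x + c through three points;
    # only a and b are needed to locate the vertex
    (x1, y1), (x2, y2), (x3, y3) = pts
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    return a, b

# seed with a few real experiments, then loop: fit the model,
# optimize against the model, verify with a new real experiment
data = [(x, real_world_test(x)) for x in (0.0, 0.5, 1.0)]
for _ in range(5):
    pts = sorted({x: y for x, y in data}.items(), key=lambda p: -p[1])[:3]
    a, b = fit_quadratic(pts)
    if a >= 0:
        break
    candidate = min(max(-b / (2 * a), 0.0), 1.0)  # vertex of the fitted model
    data.append((candidate, real_world_test(candidate)))

best_design = max(data, key=lambda p: p[1])[0]
```

Because the stand-in "experiment" here happens to be quadratic, one round already lands on the optimum; a real objective would need many rounds, each buying one more physical prototype's worth of information.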
Additive volume printers or subtractive CNC machinery would almost certainly be required to make the physical test models.
Doesn't it seem to converge a little aggressively?
How do you feel about the potential bottlenecking pressures of a breeding pool of size... six?
The parameters of the genetic algorithm are so prone to putting things permanently askew from random variations in the early stages of the trial sequence that I'm not sure the sim really has a significant impact on the result at all. The author even seems to be aware of this, when he admits that running it again does not generate similar results, and that starting with a single piece of bias in the initial sample causes that result to dominate.
Genetic algorithms aren't something that can be thrown at a problem with zero attention to the parameters. Sorry, no algorithm is that magical.
Generally I think it's best to use a small population, or even just hillclimbing. But with the smaller population it should run and converge much, much faster. Then you can run it multiple times, and take the best result.
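For illustration, here is a stochastic hillclimber with random restarts on a made-up one-parameter fitness function (in a real setup the fitness would be the measured power from a test run, and the parameter ranges would be design-specific):

```python
import random

def fitness(x):
    # Hypothetical stand-in for a measured power score; peak at x = 3.
    return -(x - 3.0) ** 2

def hillclimb(start, steps=500, sigma=0.3, seed=0):
    rng = random.Random(seed)
    best, best_f = start, fitness(start)
    for _ in range(steps):
        cand = best + rng.gauss(0, sigma)   # mutate the current best
        f = fitness(cand)
        if f > best_f:                      # keep only improvements
            best, best_f = cand, f
    return best, best_f

# Cheap runs converge fast, so do several restarts and keep the best.
runs = [hillclimb(random.Random(s).uniform(-10.0, 10.0), seed=s)
        for s in range(5)]
best_x, best_f = max(runs, key=lambda r: r[1])
```

With a population of one there is no breeding pool to bottleneck; the restarts are what protect you from an unlucky starting point.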
He talks about 20 minute tests, and then later uses a 12 hour test and gets a significantly different power number. I wonder how differently the evolution might proceed if he just had more power to run the full tests on each generation.
> Twelve blades is a nice number.
Why twelve? Why not 50, it's a nice number as well.
> Lipps turbines can operate in faster winds [...] turbines could be allowed to run 24/7 in any wind with no computer
So even in the strongest winds, turbines with 40-foot blades do not need to be stopped?
> using permanent magnet generators instead of alternators, but those are more expensive
> Use permanent magnet generators leading to [...] even lower cost.
Which is it now?
> what matters isn’t power efficiency per turbine so much as power production per acre of wind farm.
Isn't it rather electricity production efficiency in terms of invested capital?
This leads me to the question: is it possible to determine a good rotor design by using optimization techniques such as simulated annealing or genetic programming? Or are the simulations too costly?
The simulations are quite difficult - flow simulation is well-established, but it's a big step from that to wind farms. You need to simulate many turbines interacting at a specific site, under a variety of conditions. Modeling single turbines isn't the problem, though.
Rotor design is hard to simulate productively for the same reason that Cringely's analysis is flawed: many of the biggest factors are unrelated to everyday running.
- Even blade counts create resonance patterns that stress your tower.
- High solidity designs suffer more stress in storms (even while stopped), so they break more
- Noise interferes with putting rotors near houses or over pastures
- Storage is difficult and consistency is valuable, so expanding the operating range up to high winds is less useful than expanding it down to low winds
And so on. If you model pure efficiency, the sims are pretty easy. If you model real value, you either can't do it, or you get a search space so jagged that finding maxima is hopeless.
They sounded like they had a bunch of advantages, not least a much higher tolerance for extreme wind conditions, less stress on long (suspended) blades, possibly fewer gearing issues in translating motion back to the ground - but presumably they had / still have distinct disadvantages?
> Prototypes of the device are claimed to be 10 - 30 times more efficient than small wind turbines. One prototype has powered two LEDs, a radio, and a clock (separately) using wind generated from a household fan. The cost of the materials was well under US$10. $2–$5 for 40 mW is a cost of $50–$125 per watt.
Wikipedia has a short section:
I think people tend to forget that the big wind turbine manufacturers today started out small, experimenting with various designs before they settled on the three-bladed model. At the moment there's a lot of accumulated research and infrastructure present that allows them to erect giant 3-6-8 MW turbines that can take the hammering of the weather for > 20 years. Competing against this is really, really hard; you really have to have a big advantage in your favor, and perhaps also decades of research and development.
Rotor power coefficient vs. tip-speed ratio.
The three-bladed machines won out commercially. Machine size went up because output vs cost decreases with size, at least up to 1-2 MW. Lots of little machines were a pain to install and maintain.
Wind generators used to be AC generators synchronous to the grid. But with higher power semiconductors available, putting a big AC-DC-AC converter on the output to sync it to the grid is becoming popular. This allows generating some power during low-wind conditions, and provides much more adaptability to wind gusts. When the wind speed changes, the blade pitch is adjusted to compensate, but on big turbines, this takes tens of seconds. Being able to adjust electrically in milliseconds avoids power grid transients.
The push for permanent magnet motors in wind turbines is more about converting to direct drive and getting rid of the gearbox. Wind turbine gearboxes are a huge pain, wearing badly for reasons that were only understood in the last few years.
Urban windmills harm the environment 
> A small windmill on your roof or in the garden is an attractive idea. Unfortunately, micro wind turbines deliver hardly enough energy to power a light bulb. Their financial payback time is much longer than their life expectancy and in urban areas they will not even deliver as much energy as was needed to produce them. Sad, but true.
Small windmills put to the test 
> A real-world test performed by the Dutch province of Zeeland (a very windy place) confirms our earlier analysis that small windmills are a fundamentally flawed technology
(Note that the picture shows that almost all windmills tested had three blades)
Wind powered factories: history (and future) of industrial windmills 
> In the 1930s and 1940s, decades after steam engines had made wind power obsolete, Dutch researchers obstinately kept improving the – already very sophisticated – traditional windmill. The results were spectacular, and there is no doubt that today an army of ecogeeks could improve them even further. Would it make sense to revive the industrial windmill and again convert kinetic energy directly into mechanical energy?
Unlike this story, which certainly sounds interesting but shares no real data to back it up, Kris de Decker thoroughly digs through sources to write articles backed up by the available data as well as possible.
 Not sure both changed the number of blades during WWII.
In his words, the 109 handled much better at the limit than the P-51. The P-51 could dive and go very fast, but with poor control. In his opinion the 109 was the superior aircraft.
Not sure if it relates at all to the prop discussion, but maybe in the grand scheme of engineering? Those warbirds are fascinating pieces of hardware to me.
"I guess available power from the engine plays a big role, as engines got more powerful they started fitting 4 blades."
I wonder if it was a carry over from the first world war when guns fired through the propeller, so the more blades the more chance of shooting it off? (There were systems which regulated when the gun would fire to prevent this happening)
But, it does have three blades.
It may still be a nonissue, of course, but the comparison with cats doesn't tell the whole story.
I don't see any reason why bald eagles would be more likely to be killed by turbines than other species of birds. If only 0.01% of birds are killed by turbines, then they would only kill 1 bald eagle a year out of thousands.
Regardless, this is more of a reason to invent solutions for repelling birds from turbines. Maybe we could come up with noises or lights that repel them. Or drones, or a water cannon robot. But I'm not certain it's even an issue.
"Not all bird species are equally vulnerable to wind turbines. Eagles appear to be particularly susceptible."
It's a nice thought though, just kill off the big birds, and watch the benefits "trickle down" to the little birds. /s
Yes, that wonderful, fertile cat shit full of Toxoplasma and other parasites. What a great addition to my garden! Not to mention that wonderful feeling and smell when your sink your hands into cat shit while prepping a garden bed.
Your other point, in regards to keeping populations under control, is way off the mark. http://www.nature.com/ncomms/journal/v4/n1/full/ncomms2380.h...
Keep your cats indoors, people! They're great pals to have, but inside.
The point is birds are not going to go extinct even if we increase the number of turbines ten fold. It wouldn't even make a dent in the bird death rate, which cats are only a small part of.
Domestic cats surely dominate all other cats in population.
I don't see why the numbers would be significantly different on bat death rates than bird death rates.
- "Conventional wisdom says wind farms should have their turbines placed in such a way that they don’t interfere with each other, the fear being that placing one turbine too closely in the shadow of another will reduce the efficiency of the shadowed turbine."
True, and this is why we have wake models to predict the losses from other turbines and optimize placement.
- "The rule of thumb, then, is that turbines be placed no closer than seven diameters apart. Keep that number in mind."
Not true. You may find that a 7x7 array is relatively common in offshore applications, but a typical onshore application is more likely to be between 2-4 diameters apart in a row, with rows 7-13 diameters apart front to back.
- "Oh, and turbines are placed seven diameters apart. That’s it, no CFD."
Wrong. But CFD is generally computationally complex, so we usually use models with reduced fluid dynamics equations to make it possible to iterate quickly. See previous comment about wake models.
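As one illustrative example of such a reduced model (not necessarily what any particular developer uses), the classic Jensen/Park wake model estimates the wind-speed deficit behind a turbine from a single wake-decay constant instead of full CFD. The numbers below are typical textbook values, not measurements:

```python
import math

def jensen_deficit(x, rotor_diam=100.0, ct=0.8, k=0.075):
    """Fractional wind-speed deficit a distance x directly downstream
    (Jensen/Park model: a top-hat wake expanding linearly with slope k)."""
    r0 = rotor_diam / 2.0
    return (1.0 - math.sqrt(1.0 - ct)) / (1.0 + k * x / r0) ** 2

# Deficit at the article's 7-diameter rule-of-thumb spacing:
d7 = jensen_deficit(7 * 100.0)   # roughly a 13% speed deficit
```

Because it is just arithmetic, a layout optimizer can evaluate thousands of candidate placements per second with a model like this.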
- "In some cases wind farm automation can cost as much as the turbines, themselves."
I'd like to see these magical cases. A typical 2 MW turbine costs $2 million to purchase, and about $3-3.5 million total as part of an overall project of 50 turbines. SCADA is a minor fraction of this, as is operations.
- "Shorter blades are stronger than longer blades, so the Lipps turbines can operate in faster winds."
This is a non-issue. There are very few sites in the world that require even the highest wind speed turbine designs; most of the world is less windy, and the majority of sites benefit from using turbines designed for lower winds.
- "Use permanent magnet generators and the turbines could be allowed to run 24/7 in any wind with no computer control required at all, leading to more production at even lower cost."
Computer control of turbines is a non-issue, and the cost is minimal relative to the raw materials cost of the machines.
- "This is because they use alternators that consume electrical power to energize their windings so there is no point in turning-on the alternator (energizing those windings) until there’s enough wind to generate a net positive amount of electricity."
This is only a little bit correct, and mostly not. Wind turbines by design generally need to be connected to the grid to run, but winding energization is not why they don't start generating until there is enough wind. Turbines are generally on all the time, and typically consume anywhere from 10 to 50 kW at idle. And I don't know why he's using the term "alternators" to describe the turbine generators. The most common generator type is a doubly-fed induction generator, but squirrel cage induction generators, permanent magnet generators, and synchronous generators are often used. Usually turbines are connected to the grid through power converters which allow them to run at various speeds while remaining electrically synchronized with the grid.
- "Remember the diameters are smaller so instead of hundreds of turbines we’re talking about thousands of turbines for the same wind farm. Imagine a field of mature dandelions."
This is actually a problem. When you can get 100 MW with a 50% capacity factor by building 50 machines in one township in Nebraska, why would you want to build 1,000 machines instead? How is that less complex?
- "Try breaking into the industrial wind power business without at least $1 billion in capital. It can’t be done. The incumbent companies like it that way, too."
Manufacturing is capital intensive. News at 11.
- "Lipps wind farms could be closer to cities and therefore have lower transmission losses, further increasing power output."
Wind farm placement is about where the wind resource is. It's an economic decision.
- "The result of all this not starting and then stopping is that throughout the year an average workload of 23 percent is reached by inland wind farms, 28 percent for coastal farms and 43 for off-shore."
I have to assume his "average workload" here (a term I've never heard in the industry) is equivalent to capacity factor, which is the ratio of actual energy produced to the maximum possible. Most new projects in the windy areas of the US have predicted capacity factors of greater than 40 percent. High wind losses are typically very low, and online time is typically very high.
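Capacity factor is simple to compute; a quick sketch with illustrative numbers (the 7,000 MWh figure is made up, not from the comment above):

```python
def capacity_factor(energy_mwh, rated_mw, hours=8760.0):
    """Actual energy produced divided by the theoretical maximum if
    the turbine had run at rated power for the whole period."""
    return energy_mwh / (rated_mw * hours)

# A 2 MW turbine that produced 7,000 MWh in a year:
cf = capacity_factor(7000.0, 2.0)   # about 0.40, i.e. 40 percent
```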
- "China will build the heck out of those smaller blades."
China is also building the heck out of the larger blades. China has more wind capacity installed than any other nation.
- "And no insane cows, either. Cattle can’t be pastured under wind farms because the motion of the turbine blades and especially their sound drives cows crazy."
Tell that to these cows https://www.flickr.com/photos/ashcreekphoto/7793429362 (did he even do an image search before posting that? I've built wind projects on cattle ranches.)
Why does this post use such an old, crappy US wind map? Why not the newer DOE wind maps available at http://apps2.eere.energy.gov/wind/windexchange/wind_maps.asp
I'm sure I could go on, but this is just a fundamentally misinformed article. It's trying to make an aerodynamic argument (which I am not qualified to judge) using a mess of bad or incorrect information.
Other (industrialish) things cows don't mind:
- Feed grinders
- Silage conveyors
- Automatic Milkers
- Being lifted to a horizontal position to have their hooves trimmed.
- Having their eggs artificially inseminated.
I was unsure about the credibility of Cringely on the topic of turbine blade count right up until I read this statement. Then I was sure about his credibility.
If I were to put my money either in the most advanced big-farm wind high-tech or in something that is decentralized, mass-produced, and well-abused by all kinds of groups of people, it would be the latter.
The article mentions that one billion is not enough to enter the game. That surely excludes a lot of the smartest and brightest people who might come up with new innovations and the billion-scale investments also seek conservative returns, further culling new ideas.
Fascinating! It’s not about asking questions, it’s about asking the right questions! The first framing question is efficiency from blade to outlet, but it’s really about efficiency from capital markets to factory floor to farm to outlet.
That should become a mantra. It's about efficiency from capital markets to revenue per customer.