"" ^v^ ""
"" ^v^ ""
"" ^v^ "" ... low pressure
"""...""" held by vortex
######### ### dandelion falling
""""""""" """ rising air
[Edit] Maybe it is after all. :)
Creating lots of small vortices in rapid succession, merging vortices, splitting vortices, letting vortices "suck in" other vortices.
Those poor research assistants. Imagine counting hundreds of dandelion bristles every day. Probably still not possible with AI/image recognition either.
(Well thanks for the downvotes. I do this as my job, so I guess I'm doing the impossible or something)
a) Hire someone or a company skilled enough to whip up a classifier that can count these to some accuracy level, which entails paying a developer, and probably 5-10 graduate students to sit around and count dandelion bristles on a good number of dandelion seeds to create a training set.
b) Just have 5-10 graduate students sit around and count dandelion bristles on a good number of dandelion seeds
Given that (a) is a superset of (b), and the extra portions of (a) are likely much more expensive than all of (b), I think the answer is fairly clear...
I'd imagine this kind of task would lend itself to some old-school image recognition techniques - photograph against uniform background, threshold, mask the middle part and count contiguous regions.
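A minimal sketch of that old-school pipeline, with a tiny synthetic "image" standing in for a real photograph: threshold against the uniform background, then count contiguous bright regions. A pure-Python flood fill stands in for a library labeling routine; all pixel values and the threshold are made up for illustration.

```python
def count_regions(mask):
    """Count 4-connected regions of True cells in a 2D boolean grid."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]
                while stack:  # iterative flood fill over this region
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

# Synthetic "photograph": 0 = background, larger values = bristle pixels.
image = [
    [0, 9, 0, 0, 8, 0],
    [0, 9, 0, 0, 8, 0],
    [0, 0, 0, 0, 0, 0],
    [7, 0, 0, 0, 0, 9],
    [7, 0, 0, 0, 0, 9],
]
threshold = 5  # arbitrary cutoff separating bristles from background
mask = [[px > threshold for px in row] for row in image]
print(count_regions(mask))  # 4 separate bright regions
```

On a real photo you would additionally mask out the central seed body, as described above, so only the bristle tips remain as separate regions.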
I do have one question though: how is the air flowing up through it? Is the seed not falling for that to happen?
The dandelion is doing the same thing. It's creating lift by descending slowly within the airmass, though the process of creating lift is via the vortex ring, rather than airflow over wings. Its aim is to fall very very slowly. Then it will often be falling slower than the airmass is rising, and it will be carried up and far away. Of course there's no pilot, so it can't actively seek out rising air, but there are an awful lot of dandelion seeds, and only a few need to get lucky to spread a long way.
Once we truly break the barrier between metal and meat, artificial processes may create real life in completely unforeseen ways. Gives a new ring to "asexual reproduction".
Or the horrific, ending-of-human-race potential for mistakes (or purposeful harmful action for profit/terrorism/state actions/etc).
I'd disagree with your point about intelligence though. It's simply that we have more resources than we need. Even the stupidest animals would develop rituals if they could afford to.
Skinner documented it in 1948, and there's a fair bit of literature on it.
Makes me wonder how much more of the wind a dandelion seed can catch with a vortex compared to if it didn't form one.
It would be interesting to know whether the same type of seed uses different types of the part used to "fly", each type adapted/optimized for a particular climate (e.g. humid for Asian areas, dry for African areas, windy for coastal areas, ...).
1). Reproduction.
2). Variation between the products of reproduction.
3). Heritability between those variants.
4). Differential success among the variants.
Anything that has those four characteristics will experience adaptive evolution. Where it gets really fascinating is when you realize it applies to things that don't go through biological reproduction, for example the graphical user interface.
They (dandelion seeds) appear to fly like other cotton-y plants do (e.g. heather). I wonder if those create these vortices too?
I'd guess not, as their structures aren't regular and they don't have the weighted 'drop' to give stability?
Evolution is crazy good at finding and exploiting quirks in physics.
Nature is amazing.
I thought we had settled this, airplane wings work by deflecting air downward, not by the Bernoulli effect, right?
Well ... they're actually different ways to incorrectly describe the same phenomenon.
Wind tunnels are still used for aircraft design because we can't accurately model aerodynamics.
These are all fairly straightforward consequences of the basic rules which air must follow when flowing. The wing produces lift because the shape of it means that all the consequences of the above are the only ones which follow all the rules, much like a sudoku or crossword. The details of actually solving the full puzzle (in way which tells you how much lift and drag you get) turn out to be fiendishly complex, but the basic reasoning is not too hard to understand, apart from the fact that people get confused by the fact that you can explain it 'simply' in multiple different ways by focusing on only one part of the whole crossword.
My very rough understanding is that computer simulations of air flow are sufficiently accurate for a high percentage of predictions for many kinds of objects. Fair? If not, under what cases does their accuracy suffer? Do we know why?
I am interested in why wind tunnels are sometimes used. Possible reasons I see are:
1. building computer models of an object being tested is sufficiently difficult that it is more efficient to test in a wind tunnel
2. computer simulations lose significant accuracy when it comes to certain conditions ... but I don’t know what these conditions are
3. human or policy issues, e.g. some people trust a wind tunnel result more than a computer simulation.
Short version: Scale models (like wind tunnels) are useful because the most accurate simulations are extremely computationally expensive or computationally intractable, and the faster less accurate simulations are often so inaccurate that they are untrustworthy. Scale models are not 100% trustworthy themselves, and to construct and use them you need to understand similarity theory.
The general field is called computational fluid dynamics (CFD for short). There are broadly two types of turbulent computer simulations of flows: DNS and not-DNS.
DNS stands for direct numerical simulation. These simulations are very accurate, and sometimes are regarded as more trustworthy than experiments because in a particular experiment you may not be able to set a variable precisely, but you can always set variables precisely in a simulation.
However, in DNS you need to resolve all scales of the flow. Often this includes the "Kolmogorov scale" where turbulent dissipation occurs. It could also include even smaller scales like those involved in multiphase flows or combustion. This is so extremely computationally expensive that it's impractical (in terms of something you could run on a daily basis and iterate on) for anything but toy problems like "homogeneous isotropic turbulence". In terms of real world problems, DNS is limited to fairly simple geometries like pipe flows. Those simulations will take weeks on the most powerful supercomputers today. It's very rare for someone to attempt a DNS of a flow with a more complex geometry, and I'd argue that such works are mostly a waste of resources. Here's an interesting perspective on that: https://wjrider.wordpress.com/2015/12/25/the-unfortunate-myt...
"Not-DNS" includes a variety of "turbulence modeling" approaches which basically try to reduce the computational cost to something more manageable. This can reduce the cost to hours or days on a single computer or cluster. The two most popular turbulence modeling approaches are called RANS and LES.
Instead of solving the Navier-Stokes equations as is done in DNS, modified versions of the Navier-Stokes equations are solved. If you time average the equations, you'll get the Reynolds averaged Navier-Stokes (RANS) equations: https://en.wikipedia.org/wiki/Reynolds-averaged_Navier%E2%80...
These equations are "unclosed" in the sense that they contain more unknowns than equations. In principle, you could write a new equation for the unclosed term (which is called the Reynolds stress in the RANS equations), but you'll end up with even more unclosed terms. So, the unclosed terms are instead modeled.
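A toy numerical illustration of where the unclosed Reynolds stress comes from, using made-up velocity samples: decompose each velocity into its time mean plus a fluctuation (u = U + u'), and averaging the product u*v then splits exactly into the mean product plus the correlation <u'v'>, the (kinematic) Reynolds stress that must be modeled.

```python
# Reynolds decomposition on made-up velocity samples (arbitrary units).
from statistics import mean

u = [2.0, 2.4, 1.8, 2.2, 1.6]   # streamwise velocity samples
v = [0.5, 0.9, 0.2, 0.7, 0.2]   # wall-normal velocity samples

U, V = mean(u), mean(v)          # mean flow
up = [ui - U for ui in u]        # fluctuations u'
vp = [vi - V for vi in v]        # fluctuations v'

uv_mean = mean(ui * vi for ui, vi in zip(u, v))
reynolds_stress = mean(ai * bi for ai, bi in zip(up, vp))  # <u'v'>

# The cross terms U<v'> and V<u'> vanish because fluctuations average
# to zero, so <uv> = U*V + <u'v'> exactly:
assert abs(uv_mean - (U * V + reynolds_stress)) < 1e-12
print(reynolds_stress)
```

The averaged equations contain <u'v'> but give no equation for it; that is the closure problem described above in miniature.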
RANS is older, computationally cheaper, and usually computes the quantity that you want (e.g., a time averaged quantity). LES is newer, and has better justification in theory (e.g., good LES models converge to DNS if you make the grid finer, but RANS will not), but it often doesn't compute precisely what you want and the specifics of the LES models are often specified in inconsistent ways. My experience is that people tend to ignore the problems with LES or be ignorant of them. (Though I do believe LES is more trustworthy.)
The problem is that modeling turbulence has proved to be rather difficult, and none of these models work particularly well. Some are better than others, but the more accurate ones typically are more computationally expensive. Personally, I don't trust any turbulence model outside of its calibration data.
Some people lately have proposed that machine learning could construct a particularly accurate turbulence model, but that seems unlikely to me. People said the same things about chaos theory and other buzzwords in the past, but we're still waiting. Many turbulence models are fitted to a lot of data, and they're still not particularly credible. Also, machine learning doesn't take into account the governing equations. Methods which are similar to machine learning but do take into account the governing equations are typically called "model order reduction". If you want to do machine learning for turbulence, you actually should do model order reduction for turbulence. Otherwise, you're missing a big source of data: the governing equations themselves. (I could write more on this topic, in particular about constraints you'd want the model to fit which machine learning doesn't necessarily satisfy.)
Anyhow, scale models are basically treating the world as a computer. Often testing at full scale is too expensive, particularly if you want to iterate. "Similarity theory" gives a theoretical basis to scale models, so that you know how to convert between the model and reality.
One of the most important results in similarity theory is the Buckingham Pi Theorem: https://en.wikipedia.org/wiki/Buckingham_%CF%80_theorem
This theorem shows that two systems governed by the same physics are "similar" if they have the same dimensionless variables, even if the physical variables differ greatly.
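A small sketch of that idea for the classic aerodynamic case: keeping the Reynolds number Re = rho*U*L/mu equal between a full-scale object and a 1/10-scale wind-tunnel model (same fluid) forces the tunnel speed up by the scale factor. The specific numbers are illustrative assumptions, not data from any real test.

```python
def reynolds(rho, U, L, mu):
    """Dimensionless Reynolds number from density, speed, length scale, viscosity."""
    return rho * U * L / mu

rho_air, mu_air = 1.2, 1.8e-5   # air at roughly room conditions (SI units)

# Full-scale object vs. a 1/10-scale model run 10x faster in the same fluid:
full = reynolds(rho_air, U=10.0, L=1.0, mu=mu_air)
model = reynolds(rho_air, U=100.0, L=0.1, mu=mu_air)

# Same dimensionless group => dynamically similar flows, even though the
# physical speeds and sizes differ greatly.
assert abs(full - model) / full < 1e-12
print(f"Re = {full:.3e}")
```

In practice matching every relevant dimensionless group at once (e.g., Reynolds and Mach number) is often impossible, which is one reason scale-model results still require careful interpretation.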
If any of this is confusing, I'd be happy to answer further questions.
I can relate to your comment: "Some people lately have proposed that machine learning could construct a particularly accurate turbulence model, but that seems unlikely to me". A healthy skepticism is important. Different inductive biases in various machine learning algorithms will have a significant effect here, I'd expect.
Here's some additional comments you or some other reader might find useful:
Dimensional homogeneity is the most important constraint I think most machine learning folks would miss. It's not really an "inductive bias", rather something which everyone agrees models need to satisfy, so it should be baked in from the start. This is trivial to meet, actually; just make sure all of the variables are dimensionless and it's automatically satisfied. (Depending on the larger model, you might have to convert back to physical variables.)
In terms of "inductive biases", I'm not certain what that would entail in terms of turbulence, but I'll think about it. Might be something to figure out empirically.
Turbulence models which satisfy certain physical constraints are called "realizable". Some of these constraints are seemingly trivial, but not necessarily satisfied, like requiring that a standard deviation be greater than zero. (Yes, some turbulence models might get that wrong!) The "Lumley triangle" is a more advanced example of a physical constraint that a (RANS) model needs to satisfy that often is not satisfied.
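A concrete, simplified version of such a realizability check, with made-up tensor values: the Reynolds-stress tensor <u_i'u_j'> must be symmetric positive semidefinite, so no normal stress (a variance) can be negative and shear stresses are bounded by the Cauchy-Schwarz inequality. The pure-Python check below tests positive semidefiniteness via principal minors (all principal minors of a symmetric PSD matrix are nonnegative).

```python
from itertools import combinations

def det(m):
    """Determinant of a 1x1, 2x2, or 3x3 matrix (cofactor expansion)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    if n == 2:
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(3))

def is_realizable(R, tol=1e-12):
    """True if the symmetric 3x3 Reynolds-stress tensor R is positive semidefinite."""
    return all(det([[R[i][j] for j in sub] for i in sub]) >= -tol
               for k in (1, 2, 3) for sub in combinations(range(3), k))

ok = [[2.0, 0.5, 0.0],      # a physically possible stress tensor
      [0.5, 1.0, 0.1],
      [0.0, 0.1, 0.5]]
bad = [[1.0, 2.0, 0.0],     # shear stress exceeds the Cauchy-Schwarz bound
       [2.0, 1.0, 0.0],
       [0.0, 0.0, 1.0]]
print(is_realizable(ok), is_realizable(bad))  # True False
```

The Lumley triangle mentioned above is a sharper version of the same idea, expressed in terms of invariants of the anisotropy tensor rather than the raw stresses.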
I'd be interested in applying machine learning type methods (combined with the model order reduction approaches to include information from the Navier-Stokes equations), but I'm not knowledgeable about them. My impression is that most people applying machine learning to turbulence are novices at machine learning. And I imagine most machine learning people applying machine learning to turbulence are novices in turbulence and wouldn't know much anything about the realizability constraints I mentioned.
Another issue worth mentioning is experimental design. I think the volume of data needed to make a truly good turbulence model is probably several orders of magnitude higher than anything done today for turbulence. Experimental design could make this more efficient. I don't think most machine learning people worry much about this. They seem to focus on problems which can be run many times without much trouble. Acquiring data for turbulence is slow and hard, so it's outside their typical experience.
For a parachute, the Reynolds number is much higher, which makes the dynamics of the flow chaotic (called turbulence). There is a critical Reynolds number beyond which there is no way to keep the vortex stable, or as they call it, the vortex bursts.
The Reynolds number is only part of the picture. You also need a measure of the strength of the turbulence. A common measure is the "turbulence intensity", which you can think of as the standard deviation of the velocity divided by the mean of the velocity. (Though that's only exactly true in "isotropic turbulence".)
In certain circumstances you can compensate for a higher Reynolds number with a lower turbulence intensity. The bristles of the dandelion may have a turbulence reduction ability, so perhaps this is already being done. I'm not certain how to reduce the turbulence level further as in this case it's mostly an ambient property which is beyond the control of the dandelion. Some sort of honeycomb structure upstream of the bristles might help, or it might hurt; it depends on the details.
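The turbulence intensity described above reduces to a one-line computation; here it is on made-up anemometer samples (the data and units are illustrative only).

```python
from statistics import mean, pstdev

samples = [3.1, 2.8, 3.4, 2.9, 3.3, 3.0, 2.7, 3.2]  # velocity samples, m/s

U = mean(samples)         # mean velocity
sigma = pstdev(samples)   # standard deviation of the velocity
intensity = sigma / U     # turbulence intensity (dimensionless)

print(f"U = {U:.2f} m/s, TI = {100 * intensity:.1f}%")
```

Wind-tunnel specifications often quote this number as a percentage; a low-turbulence research tunnel might be well under 1%, while ambient atmospheric values can be far higher.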
Here are some examples:
Pipe flow can remain laminar for higher Reynolds numbers if the turbulence intensity is low enough. Through special turbulence control approaches (e.g., eliminating vibrations which could trigger transition to turbulence), laminar pipe flows have been observed at Reynolds numbers of about 100,000, about 50 times higher than the typical Reynolds number where laminar flow ends.
Here's a quote from a review article:
> The impression gained from presenting data in this way is that there is a transition between two definable states. One is the relatively rare but well-defined state of motion, laminar flow, and the other is the more common and ill-defined state of turbulence. Experimental evidence suggests that the laminar state can be achieved in pipe flows over a wide range of Re with the record standing at Re = 100,000 by Pfenniger (1961). Reynolds himself managed to achieve Re = 13,000, and Ekman (1911) later improved on this to ∼50,000 using Reynolds’ original apparatus. [...] Achieving laminar flows at high values of Re is an indication of the quality of an experimental facility and gives some confidence that the observations will not be contaminated by extraneous background disturbances such as entrance flow effects, convection, and geometrical irregularities.
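To put rough numbers on the pipe-flow example above: the pipe Reynolds number is Re = U*D/nu, compared against the usual transition value (~2,300) and the Pfenniger laminar record (~100,000) quoted in the review. The flow conditions below are illustrative assumptions.

```python
def pipe_reynolds(U, D, nu):
    """Reynolds number from bulk velocity U (m/s), diameter D (m), kinematic viscosity nu (m^2/s)."""
    return U * D / nu

nu_water = 1.0e-6   # m^2/s, water at roughly 20 C

# Modest water flow in a 2 cm pipe:
Re = pipe_reynolds(U=0.5, D=0.02, nu=nu_water)
print(f"Re = {Re:.0f}")
print("laminar by the usual criterion" if Re < 2300 else
      "turbulent unless background disturbances are suppressed (record ~100,000)")
```

Even this unremarkable flow sits well past the ordinary transition value, which is why the Re = 100,000 laminar result is taken as evidence of an exceptionally quiet facility.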
Matching the turbulence intensity of two wind tunnels is often necessary to make the results comparable between the two wind tunnels. In the first volume of Sidney Goldstein's "Modern Developments in Fluid Dynamics", there's a plot showing (if I recall correctly) the Reynolds number at which the "drag crisis" occurs as a function of turbulence intensity. This basically means that the drag coefficient can be very sensitive to the turbulence intensity, at least in special circumstances.
(Why I wrote this: In my dissertation, I have an entire section about how turbulence intensity is too frequently neglected in analyses, particularly for the problem I'm studying for my PhD.)
It can be shown mathematically, using a technique called parabolized stability equations (PSE), that small disturbances amplify rapidly through non-linear interactions in the frequency space. Hence, although it's possible to create a laminar flow at high Re number in the lab, it's extremely hard to achieve in nature.
One interesting case of this is Rutan's Voyager airplane in the 80s. It was designed to have laminar flow over its wing to reduce drag. It worked quite well until it encountered rain drops at some point, which messed up the aerodynamics of the wing and caused the airplane to stall. At that point, they had to add vortex generators on the wing to prevent the stall.
I work in internal and multiphase flows, and changing the turbulence level is much easier there than in aerodynamics.
I'll also have to look at the parabolized stability equations as I am not familiar with them. If you have a preferred reference, I'd be interested.
While the gas temperatures would be higher due to viscous heating, which would increase the viscosity of the air, the increase in viscosity is much lower than the increase in velocity. So the Reynolds number would still increase. I very strongly doubt the turbulence intensity would be lower in this case too.
Plus, I imagine the bristles would be quickly ablated away.
Not saying that it would be good for anything, it just produced a cool image in my head. :-)
Perhaps I should have tried to patent that ...
> Many insects harbour such filter-like structures on their wings or legs, suggesting that the use of detached vortices for flight or swimming might be relatively common
I sense a bit of disconnect between the article and the headline.
> When some animals, aeroplanes or seeds fly, rings of circulating air called vortices form in contact with their wings or wing-like surfaces.
> Researchers thought that an unattached vortex would be too unstable to persist in nature.
I guess those animals and seeds aren't natural then?
How is it we are shocked to find out we are sometimes wrong and not shocked that sometimes we get it right?
My critique by the way is the headline, not the actual research. The headline is clickbait IMO, and just as I suppose some don't like my comment (not well thought out, emotional, etc.), the headline is the same.
E.g., consider, "Curious unexpected aerodynamic principles of dandelion seeds lead scientists to new areas of discovery"
You can even shorten that to "unexpected aerodynamic principles of dandelion seeds".