Medicine is hard.
It has nothing to do with evolution; take an intelligent person with no mechanical experience, tools, or instructions, give them a car which is behaving oddly, and ask them to fix it. It will be a miracle if they manage to properly diagnose and fix the problem without destroying at least a couple of engines. The fundamental rule is "Interfering with a complex system for which you don't have the manual by trial and error is as likely to do harm as good."
I guess I don't see why this is treated as an insight into human intelligence. No one with mechanical experience thinks you can make a car go faster by using hotter gasoline. No one with technical experience thinks you can make a computer run faster by boosting the input voltage. For the same reason, no one with medical experience thinks there's a drug that can make you smarter. It just doesn't make sense.
But that doesn't mean you can't make a faster car, or computer, or brain, or that it need be terribly difficult. For all of these, the answer lies in structure. And in this respect, brain modification may be the easiest of the three, because modifying brain structure is one of the primary functions of the brain.
Fun fact: some computers actually can be made to run faster by raising the input voltage! The higher voltage decreases the delays of the logic gates, and lets you clock it faster. The big problem is that this also raises the power consumption -- both energy per second and per clock cycle.
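To put rough numbers on that trade-off: dynamic power in CMOS logic scales roughly as C·V²·f, so a modest overvolt-and-overclock costs disproportionately in power. Here's a toy sketch (the capacitance, voltage, and clock figures are made up for illustration, not from any real chip):

```python
# Rule-of-thumb dynamic power model for CMOS logic: P = C * V^2 * f.
# All numbers below are illustrative, not measurements of real hardware.

def dynamic_power(c_eff, voltage, freq_hz):
    """Approximate switching power in watts (C in farads, V in volts, f in Hz)."""
    return c_eff * voltage ** 2 * freq_hz

base = dynamic_power(1e-9, 1.0, 3.0e9)         # stock: 1.0 V at 3 GHz
overclocked = dynamic_power(1e-9, 1.2, 3.6e9)  # +20% voltage buys +20% clock

print(f"power ratio: {overclocked / base:.2f}")
# A 20% speedup costs roughly 1.2^2 * 1.2 = 1.73x the power, and energy
# *per cycle* still rises by V^2, i.e. 44% -- the comment's point exactly.
```

Since a higher clock typically requires the higher voltage to hold timing, power grows much faster than speed, which is the overclocker's dilemma.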
That's what makes the article interesting - are there exceptions to Algernon's law?
The author is opinionated, controversial, entertaining, and educational - with citations to boot.
I've noticed his user name here on HN and over on Wikipedia for a while. I was just doing some more browsing around for his writings, prompted by the article submitted here today. I see "My Mistakes" provides a lot of food for thought.
http://www.gwern.net/Nootropics (uses sparklines!)
Really? This is getting upvoted here? How disappointing.
"Evolution doesn't have a goal, it doesn't make future plans. Evolution is an accident."
- Richard Dawkins in The Selfish Gene
Did you know giraffes have a nerve connecting two points in their heads that are about 5cm apart? But instead of connecting those two points directly, the nerve goes all the way down through the neck, then all the way back up, to reach its end point. Is there any advantage in this design over connecting directly? No, there isn't; giraffes are that way only because, historically, that's how they evolved. Evolution doesn't make intelligent plans. Mutation happens randomly, then natural selection sometimes prunes out bad mutations. That's all. Evolution is imperfect. Suggesting there's anything intelligent about evolution is alluding to intelligent design and creationism.
> There is a discrepancy between the standards by which evolution measured the quality of her work, and the standards that we wish to apply. Even if evolution had managed to build the finest reproduction-and-survival machine imaginable, we may still have reason to change it because what we value is not primarily to be maximally effective inclusive-fitness optimizers.
The actual thesis of the article is more interesting than what you seem to have read into it.
If I see a well-written and interesting article that may not be correct, I'm likely to upvote it. Why? Well, I want to read an interesting discussion about it: perhaps its premise is wrong, but I'm not equipped to answer so I want to farm it out to the HN reader base.
That's a popular example, but Wikipedia has another example (mentioned in the same section) which I was unaware of: the vas deferens in human males, which unnecessarily loops over the ureter. You learn something new every day...
That is not all. Beneficial mutations sometimes occur; and because they provide a benefit they spread through the population [caveat below].
Bostrom's question is valid: if some change to the brain's chemistry could provide a boost to intelligence, why hasn't a mutation providing that change already swept through the population?
There's nothing wrong with the question, and it does have answers, e.g. constraints (no simple set of mutations can provide the specified change), or tradeoffs (the change would result in a net fitness loss; intelligence isn't everything), or just insufficiently large benefits (any rare gene, e.g. a new mutation, is likely to die out by chance simply because it's rare; providing a larger benefit helps its chances of survival).
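The "likely to die out by chance" point can be made concrete with a toy Wright-Fisher simulation (the population size and replicate count below are arbitrary illustrative choices). Classical theory says a brand-new neutral mutation fixes with probability only 1/(2N); almost all are simply lost to drift:

```python
import random

random.seed(42)

def wright_fisher_fixation(pop_size=50, reps=500):
    """Fraction of runs in which a single-copy neutral allele fixes.
    Theory predicts 1/(2N) -- here 1/100 -- so most new mutations vanish."""
    two_n = 2 * pop_size  # diploid gene pool size
    fixed = 0
    for _ in range(reps):
        copies = 1  # one brand-new mutant allele
        while 0 < copies < two_n:
            p = copies / two_n
            # Binomial resampling of the next generation's gene pool.
            copies = sum(random.random() < p for _ in range(two_n))
        fixed += copies == two_n
    return fixed / reps

rate = wright_fisher_fixation()
print(f"fixation rate: {rate:.3f} (theory: {1 / 100:.3f})")
```

A beneficial allele with selection coefficient s does better (fixation probability roughly 2s for small s), which is exactly why "providing a larger benefit helps its chances of survival."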
Basically, the assumption there is that the current equilibrium is optimal or close to optimal. The problem is that this is often untrue. If I'm being inefficient, there is no reason for an improvement not to be completely free, after all.
It also smacks of "conventional wisdom" which is often much more conventional than wise.
More pertinently, I think this can apply to evolution. Now, clearly, I am no expert on evolution, so I could be completely off-base. But my understanding is that evolution can get caught in local maxima. That is, there could be some sufficiently remote global maximum in the fitness function that isn't reached because any probable mutation hits a lower value between the current state and this possible maximum.
I suppose an example could be about how evolution never came up with the wheel. I think there are cases where wheels would increase fitness, but they are simply too remote from existing organisms to reasonably evolve.
If such an effect exists, then some sort of design process could overcome it. In this case, we could strictly improve on evolution. Now, I'm not sure if this is an actual effect, but it seems plausible. If anyone has any actual studies on the subject, I would like to see them.
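The local-maximum intuition is easy to sketch with a greedy hill-climber standing in for mutation-plus-selection. This is only a cartoon of evolution, on a made-up two-peak fitness function; the point is just that small-step improvement can't cross a valley:

```python
def fitness(x):
    """Made-up two-peaked landscape: local peak at x=1, global peak at x=4."""
    return max(0.0, 1 - (x - 1) ** 2) + max(0.0, 3 - (x - 4) ** 2)

def hill_climb(x, step=0.1, iters=1000):
    """Greedy local search: accept only small single steps that improve
    fitness, like selection acting on small mutations."""
    for _ in range(iters):
        for cand in (x - step, x + step):
            if fitness(cand) > fitness(x):
                x = cand
    return x

print(hill_climb(0.5))  # settles near 1: the local peak, not the global one
print(hill_climb(3.0))  # starting past the valley, it reaches the peak near 4
```

A design process (or a search with large jumps) has no trouble here, which is the sense in which design could strictly improve on incremental evolution.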
Another trade-off completely changed by modern civilization is risk aversion. We can afford much more risk because of modern medicine, because of living in populous, anonymous cities, because of immense wealth to fall back on... And yet it takes a lot of courage for most people to approach and talk to a stranger (an attractive stranger of the opposite sex in particular), fight in a martial arts tournament, or stand out in any significant way (by dressing differently, by speaking loudly). Those who do usually aren't making a calculated decision but are simply unadjusted.
We want to live in big cities with high tech, medical care, abundant food. Let's make a genome optimized for that!
An individual who is more intelligent might be less likely to pass on their own genes, but may well make the species as a whole more resilient. Looking at evolution from the perspective of the individual is a mistake.
Note that being "less likely to pass on their own genes" means exactly the same thing here as having a "net evolutionary disadvantage", so your comment is actually agreeing with the sentence that you quoted, despite what appears to be your intent.
Regardless, there is a valid discussion to be had about evolution at levels other than the individual, but it's just not as simple as replacing individuals with species as the biological unit of evolution.
Edit: To make it a little more clear what this means, treating the organism as the unit of evolution is like treating the cell as the unit of walking. Yes, walking is the action of individual cells, but nothing interesting or relevant to the process is happening at a cellular level.
This implies effects in the population that are counterintuitive from the perspective of the individual, which is why it's important to make the distinction.
True, but a single organism with a new trait failing to reproduce means that the trait cannot be passed on, and therefore does not exist in the population in an evolutionary sense. You absolutely can't ignore individuals in the evolutionary process.
Perhaps the more neutral framing is this: There are things that happen to life which you will not be able to explain using a primarily gene-, individual-, or group-centered perspective. This is because evolution is complex, and anyone who claims to really understand it is lying.
I've been advocating for the population perspective mainly because it necessarily includes what's going on at a lower level, but the safer truth is simply that anything you think you know on the basis of theory needs to pass through many levels of analysis, and then be filtered with a healthy dose of what really happens.
I mean that's pretty much just standard evolutionary theory. C.f. sickle cell anemia.
Last thing I saw on it was a bloggingheads video, which convinced me it's not as dead as I thought it was.
"The lesson is that Mother Nature know best. Or alternately, TANSTAAFL: 'there ain’t no such thing as a free lunch.' Trade-offs are endemic in evolutionary biology. Often, if you use a drug or surgery to optimize something, you will discover penalties elsewhere. . . .
"In 'The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement' (Human Enhancement, 2008), Nick Bostrom and Anders Sandberg put this principle as a question or challenge, the 'evolutionary optimality challenge' (EOC):
"If the proposed intervention would result in an enhancement, why have we not already evolved to be that way?"
The answer to the evolutionary optimality challenge (EOC) comes from any properly taught Biology 101 course: evolution is not a teleological process, and it neither seeks nor is driven by "enhancement," but results in haphazard adaptations of ancestral systems through stochastic survival of genes.
I will make the counterclaim here that it is by no means clear (and it is certainly not conclusively shown by any of the examples in the interesting submitted article) that anything that can properly be called "optimizing" of human beings or of humanity as a whole necessarily results in a "penalty."
The staggering additions to human well being (and to the number of living Homo sapiens individuals) as a result of cultural innovations around the globe look to be mostly gain with remarkably little pain. I can eat foods that are grown in environments I have never visited, use this Internet device to communicate with all of you, fly to places far away without sprouting wings, and learn the thoughts of deep thinkers who are long dead. The progress of humankind in the last few thousand years has been a story of casting off natural constraints.
I may have to use the full Hacker News editing time period for comments to "optimize" this reply some more after I spend some time with my family, so please don't penalize me yet for the brevity of this reply. Before someone builds a chain of conclusions based on a supposed natural law, to reach other conclusions, it is first necessary logically to demonstrate the truth of the claimed natural law. "The only way of discovering the limits of the possible is to venture a little way past them into the impossible."
Thanks for bringing up this topic of discussion.
First edit: I see another comment mentions group selection. That is not a widely accepted idea (as contrasted with kin selection) in evolutionary theory. See three recent posts (there are more where those came from) from Jerry Coyne's Why Evolution Is True website:
24 June 2012 "The Demise of Group Selection"
26 June 2012 "Did human social behavior evolve via group selection? E. O. Wilson defends that view in the NYT"
28 June 2012 "I (and others) comment on Steve Pinker’s discussion of group selection"
Oh, and since we are talking about human intelligence as a big part of the discussion here, I should mention the Wikipedia user bibliography "Intelligence Citations,"
a reference pathfinder that will always need more editing, as there is continually new research on this topic, but which already gathers many of the best monographs on the subject and some good review articles in one place.
Second edit: pjscott's thoughtful reply asks,
I don't think you've actually disagreed with the article. Did you read the 'Loopholes' section?
I've reread the article again, now that you've asked, and I think what I see here is a certain degree of rhetorical incoherence. I wouldn't compose an article this way if I wanted to make a tight argument, but perhaps the author desires to "essay," and try out ideas, and is still making up his own mind.
Anyway, based on the author's yeah-buts in some of the examples given after the "Loopholes" section of the article, and on the article's title, and on the first wave of comments received here on Hacker News, I have to take a stand and strenuously disagree with the idea that there is any "evolutionary optimality challenge (EOC)" to be met by human beings endeavoring to improve themselves, to improve human society, or to improve the world in general. There is plenty of scope for further optimization of individual human beings and of the human condition.
I know that I have definitely described evolution in such terms when explaining it to non-scientists, but when participating in discussions with other scientists I try to stick purely to intention-free ways of saying things.
In instances where we see obvious regularities, and we see that they've been exploited, we should at least bump up the probability that it's a valid explanation for the behavior. Are we sure? Of course not. But we've got some information, and often a pretty significant amount.
That's not to say there isn't a lot of bad evolutionary psychology out there, especially in the popsci section at the bookstore. But the idea that we shouldn't even consider evolution when we're trying to explain behavior is too far in the other direction.
This is very likely a misattributed quotation.
Ernst Mayr's discussion in his essay "Teleological and Teleonomic: A New Analysis,"
while attributing the saying (without citation of a source) to Haldane, gets to the issue I was getting at in my top-level comment. Evolution happens because chance survival of genes favors the genes that survive to replicate again. But anything about an individual organism's body functions is not a goal of evolution, certainly not in the sense of "If the proposed intervention would result in an enhancement, why have we not already evolved to be that way?" Evolution is not driven by "enhancements," which all of the biologically trained readers of this thread fully understand.
Even a highly general, abstract mathematical formulation, like those used in discussions of evolutionary fitness (say, calculating time to fixation of alleles), would be far too much work to apply everywhere in this essay, would make it unreadable by practically everyone, and would not actually add anything meaningful.
This is well illustrated in longevity science by the large and growing number of comparatively simple genetic manipulations that extend healthy life in mice, flies, and worms - by all measures an enhancement for the individual. e.g.
A great many of these alterations are well within the reach of mutational processes, and would be selected for if they provided advantage. But they clearly do not - many come with obvious costs, and the rest with costs we haven't figured out yet, perhaps as simple as leading to lack of front-loading of energy expenditures. Evolution only selects for longevity in comparatively rare instances, such as:
a) We humans, long lived amongst mammals of our size, and possibly due to our intelligence allowing things like the Grandmother Hypothesis:
b) Sessile creatures like mussels and trees, where longevity is a way of competing for space for yourself and your lineage.
c) Naked mole-rats, in which longevity looks like some mix of a side-effect of resistances needed to survive their natural environment, plus a touch of the longevity stratification that comes with being eusocial.
We don't really know that for sure. It could be a teleological process, but it doesn't have to be, so we can't really say either way.
If you want to take the same mechanics as evolution (stochastic mutation and survivable gene propagation) and assign a purpose or end-goal to the sum of all instances of evolutionary function, then you have to name that purpose, and back it up to make it a teleological process.
Of course then, I would argue you are not talking about evolution; but rather stochastic mutation, survivable gene propagation, both "together" aiming for some final purpose.
Well, no. We can say that it very likely isn't a teleological process, because, one, we have no evidence it is; two, we have a lot of evidence that seems to say it isn't; and, three, if it were, it would have massive implications in other fields that we haven't seen any evidence for, either.
Humans are allowed to think statistically and proceed on the basis of best (not absolute, but best) evidence; indeed, in the real world, we must do so.
I mean everything that's considered evidence of evolution could also be considered evidence of teleology: the entire process of the universe creating new forms of novelty, from matter to biology to ultimately possibly some sort of technological singularity.
"we have a lot of evidence that seems to say it isn't"
What is there that exists that is fundamentally incompatible with teleology existing or makes it highly unlikely?
"if it were, it would have massive implications in other fields that we haven't seen any evidence for, either."
What makes you think that you would see evidence for it if it existed? (Or rather, what makes you think that most or all evidence in favor of teleology couldn't be explained by some other non-teleological process?)
No. Teleology has a goal in mind and evolution has been seen to run down dead ends many, many times. All of the extinct species are dead ends.
> What is there that exists that is fundamentally incompatible with teleology existing or makes it highly unlikely?
All of the extinct species. The fact the human eye has a blind spot whereas the squid eye does not. The fact cancer is possible. If I had a biology textbook closer to hand, I could go on at length.
> What makes you think that you would see evidence for it if it existed?
If we can't see something, or measure it in any other way, or otherwise provide evidence for its existence, then why should we ever behave as if it existed?
> evidence in favor of teleology
I've yet to see any actual evidence in favor of teleology in evolution. Non-teleological explanations are simpler and they also explain all of the weird crap we see. In addition, our non-teleological theories have allowed us to predict quite a bit about how biology works in the real world. That mix of simplicity and increased predictive power makes the non-teleological theories a lot better than the previous teleological ones.
Those examples don't prove much; they just demonstrate that any goal there could be is not aligned with the fate of those species. You seem to be arguing against the assumption that teleology would imply some sort of perfection (for some somewhat arbitrary definition of perfection).
The truth is that there is an epistemological problem here. For all we know, the universe was created by Mr. X who _wants_ us to think there is no teleology. Deception could be the very goal. There is absolutely no way for us to verify this, so there is also no reason to state categorically that there is no teleology.
You could make an analogy with a (pseudo) random number generator. How do you distinguish between a real and a pseudo RNG? You can't, really. There are some statistical tests. But if someone is determined to fool you, they can just feed you X GBs of data from a true random source, and only then start to feed you an infinite stream of zeroes.
I'm of course not defending the idea of teleology, just saying that epistemologically speaking, we have no basis for ruling it out nor accepting it.
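The adversarial-source point is easy to make concrete. This toy generator (cutoff and sample sizes chosen arbitrarily) passes a naive frequency test on its early output, then degenerates into zeros; no test that looks at a fixed-length prefix can rule such a source out:

```python
import random

def deceptive_source(honest_bits=10_000, seed=0):
    """Yields genuinely pseudorandom bits for a while, then zeros forever."""
    rng = random.Random(seed)
    n = 0
    while True:
        yield rng.getrandbits(1) if n < honest_bits else 0
        n += 1

src = deceptive_source()
early = [next(src) for _ in range(10_000)]
late = [next(src) for _ in range(10_000)]

# A naive frequency test on the early sample says "looks random"...
print(abs(sum(early) / len(early) - 0.5))  # small deviation from 0.5
# ...but by now the source has already turned into a stream of zeros.
print(sum(late))
```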
And the simplest explanation is that there is no goal at all. Claiming otherwise is an extraordinary claim, and you know what they say about those.
For more, see http://en.wikipedia.org/wiki/Inductive_inference and http://www.scholarpedia.org/article/Algorithmic_probability.
Here is another, similar concept: an incomputable strategy with optimal predictive power http://hans.math.upenn.edu/~ted/203S10/References/peculiar.p... predicated on picking the simplest explanation.
In a practical setting complex theories or models with lots of explanatory variables tend to overfit the data. In machine learning, regularization or picking a proper prior and things like Minimum Description length in forming networks are instances of penalizing complexity for better predictive power. In decision trees, pruning is another example. In genetic programming picking the smaller and or faster of two similarly performing programs substantially reduces its tendency to overfit.
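A minimal illustration of that overfitting point, in pure Python with hand-picked "noise" so the result is deterministic: the maximally complex model (a degree-4 polynomial that interpolates all five training points exactly, i.e. zero training error) loses badly on held-out points to a simple least-squares line:

```python
# Deterministic demo: a degree-4 interpolating polynomial (zero training
# error) vs. a least-squares line, on data from the true line y = 2x + 1
# with hand-picked alternating "noise" of +/-0.3.

def lagrange(xs, ys, x):
    """Evaluate the unique degree n-1 polynomial through all n points."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def line_fit(xs, ys):
    """Ordinary least-squares line: the 'simple' model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

train_x = [0, 1, 2, 3, 4]
train_y = [2 * x + 1 + e
           for x, e in zip(train_x, [0.3, -0.3, 0.3, -0.3, 0.3])]
test_x = [0.5, 1.5, 2.5, 3.5]
test_y = [2 * x + 1 for x in test_x]  # held-out points on the true line

slope, icept = line_fit(train_x, train_y)
err_line = sum((slope * x + icept - y) ** 2 for x, y in zip(test_x, test_y))
err_interp = sum((lagrange(train_x, train_y, x) - y) ** 2
                 for x, y in zip(test_x, test_y))
print(f"line: {err_line:.4f}  interpolant: {err_interp:.4f}")
# The zero-training-error model does far worse out of sample.
```

Penalizing the interpolant for its extra degrees of freedom (regularization, MDL, pruning) is exactly the complexity-for-prediction trade described above.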
I disagree entirely. Given two explanations that are "right" in that they explain the observations, and one is very simple, and the other involves vast mysterious unknowns, I would say that there is a strong philosophical argument in favour of the simple explanation.
Especially when complex suppositions like a "goal" of evolution have the characteristics that people are drawn to them for emotional reasons, they raise more questions than they answer.
The idea of simplicity is quite arbitrary. In machine learning there is an analogue in Minimum Description Length learning. It turns out that there the kind of representation is crucial; a hypothesis can be minimal under one representation, but not the other; ergo, what is "simple" is not a straightforward question, and therefore the heuristic is somewhat arbitrary. A "goal" to evolution may sound complex to you, but from an information theoretic perspective, the opposite, namely actual, complete randomness, is the most complex thing possible ...
The whole point of this is: we don't really know, the best we can say is "I won't assume things I don't have evidence for." Lastly, your "raise more questions than they answer" is the hallmark of philosophical problems ...
Outside of math (and by extension, mathematical models and virtual computer systems), there is no such thing as "correct". We have physical models we tend to rely on because they give good predictions.
But there's no "correct" explanation for physical phenomena. Newton's mechanics were "correct" until Schroedinger and Heisenberg showed they weren't. And I'm sure within a hundred years, we'll have an even better model than modern quantum mechanics.
In terms of a general intuition of how the choice of similarly expressive representations affect complexity, one can look to Kolmogorov Complexity. For any two languages used to compute the KC of some string, there is only an additive translative constant for each possible input. The asymptotics of how each language grows its representation with respect to input size by far will dominate such translative overheads. This is sufficient for philosophical purposes. Although in the practical case these choices matter and MDL was created specifically to treat these practicalities.
Yes, and it is also a hallmark of bullshit and the content-free.
The reasoning that "All X have the property Y. I have something here that is Y, therefore it must be an X" must be a classic logical fallacy, no?
But that's not where you started out, you started with (at least) two hypotheses that fit the facts:
> I mean everything that's considered evidence of evolution could also be considered evidence of teleology
Comparing right to wrong would be quite a straw man.
Five happy Moons sang the Macarena. Truth is beauty. There is nothing grammatically wrong with those sentences, there is no way to philosophically disprove them, and they have nothing to do with the physical world around us.
PS: I am not saying philosophy is useless, just it's a search for something other than meaning... perhaps truth. ;)
To go to your analogy of determining whether something is a source of random numbers, of course if you only make your decision based on the first 10 million numbers you cannot distinguish between a true RNG and a source that provides 10 million random numbers and then all zeros. Bringing that analogy back to the hypothetical intelligence guiding the universe, if you only look at the first 13 billion years or so (current best estimate of the age of the universe), you can't tell the difference between a truly non-teleological universe and a universe guided by an intelligence that intentionally makes the universe act like a non-teleological one for the first 13 billion years and then starts doing something totally different (e.g. causes a scientifically unexplainable apocalypse in 2012). But what's the point of saying this? You're effectively saying that absolutely anything could happen at any time, because there aren't really any fundamental laws, just the will of a hidden intelligence that can do whatever it wants to whenever it wants to.
In summary, read this comic and replace "string theory" with "hidden intelligence guiding the universe": http://xkcd.com/171/
So we provisionally rule it out, like how I provisionally rule out the possibility I'm the current king of the Royal Kingdom of Thanatopsis.
The best way I've heard it described is like dropping a marble into a bowl. The marble will go up and down and wobble around but will eventually settle at the bottom of the bowl. In other words, it would basically be encoded into the universe as a preference that over time would win out.
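That marble-in-a-bowl picture is just damped dynamics settling into an attractor, which is easy to simulate (the friction, time step, and bowl shape are arbitrary illustrative choices):

```python
def settle(x=1.0, v=0.0, dt=0.01, friction=0.5, steps=5000):
    """Marble in the bowl y = x^2: restoring force -2x (the slope),
    plus friction proportional to velocity. Semi-implicit Euler steps."""
    for _ in range(steps):
        a = -2.0 * x - friction * v  # acceleration from slope and drag
        v += a * dt
        x += v * dt
    return x

print(settle())  # wobbles back and forth, then ends up at the bottom: ~0.0
```

Wherever the marble starts, friction bleeds off energy until it rests at the minimum; "encoded preference" here just means the shape of the bowl.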
"All of the extinct species. The fact the human eye has a blind spot whereas the squid eye does not."
Just because not all matter in the universe is condensed into a single point doesn't mean that gravity doesn't exist. Sometimes things get further apart, sometimes things get closer together, but gravity still exists.
"That mix of simplicity and increased predictive power makes the non-teleological theories a lot better than the previous teleological ones."
I'm not arguing that teleology is a better scientific theory than evolution; it's not. But just because it's a crappy scientific theory (well, actually it's not even a theory) doesn't mean that it's less likely that there are some sort of teleological forces baked into the universe, e.g. a preference for the creation and preservation of novelty.
Can you know for sure you're not the Queen of England?
Having a destination in mind doesn't mean you know exactly how to get there, right?
A path finding algorithm is clearly teleological, but also has false starts. That doesn't make it 'blind' or 'random'.
A) Even an organism "at rest" is actively moving against the direction of Entropy. Evolution and life in general is factually teleological. You just seem to have a very specific idea of what teleological means.
B) Evolution moves towards diversity. Even as a purely stochastic process this is a motion towards vertical complexity as well as evolutionary dead ends.
If you redefine any word into a fine enough mist, you can make it mean anything you want.
> Evolution moves towards diversity.
So does cream spreading out in a cup of coffee. Is that teleological, too?
Yea, that's my point.
>So does cream spreading out in a cup of coffee. Is that teleological, too?
Absolutely. Otherwise my previous juxtaposition of entropy and biological systems wouldn't make sense.
Until such a practice is accepted (and here's hoping it never is) we need to focus on what we can control: culture. The unfortunate thing is that this is difficult to approach in a genuinely scientific/empirical way. The fortunate thing is that some common sense can go a long way: feed babies/children well, set the bar high from an early age, start teaching them a second language early, expose them to new and many experiences, etc.
There are plenty of other ways: Drugs, surgery, somatic gene therapy, and transcranial direct-current stimulation. All of these technologies offer much greater potential for enhancement than cultural changes. Also, changing culture is hard. Really hard. Creating intelligence-enhancing technologies is probably easier.
1. http://www.gwern.net/Modafinil and http://www.gwern.net/Nicotine are early examples of the potential of pharmaceuticals.
For now. In future I do hope there are ways to allow for artificial increase of intelligence via drugs/nano-machines.
Eugenics is like the last resort for increasing intelligence (the dirtiest hack imaginable), mainly because it only works at the population level, not the individual level, and it's not immediate. Not to mention that breeding for intelligence has other problems as well.
Why must there necessarily be a tradeoff for anything good? There's no law requiring that, and what makes one person happy may make another person miserable.
EDIT: I just realized there's about 15 people on HN that said the same thing. And that's why I visit this place!
Also, various neurological diseases, like Tay-Sachs, appear strongly correlated with the IQ of the parents.
It is probably more general than that: closed mating groups are more likely to amplify traits, good and bad.