Eliezer Yudkowsky: That Alien Message (overcomingbias.com)
54 points by kf on May 22, 2008 | 16 comments



From the article: We are reasonably certain that our own universe is running as a simulation on such a computer. Humanity decides not to probe for bugs in the simulation; we wouldn't want to shut ourselves down accidentally.

That puts a new twist on the anthropic principle. The only universes that are observed are the ones that have observers - and whose observers haven't crashed them.


Except the universe is probably nothing like we imagine. This story is the modern-day equivalent of tales of sea monsters and sailing to the edge of the flat world. What is the probability that evolution would result in a "life form" anything like us? And by anything like us, I mean anything which can understand any of the concepts we understand? That includes all the mathematical theories we so impress ourselves with.


"What is the probability that evolution would result in a "life form" anything like us? And by anything like us, I mean anything which can understand any of the concepts we understand?"

Pretty good, in this universe at least. Every intelligent creature in this universe has to be able to do the same basic things: reproduce, manipulate objects, fend off predators, etc.

There's a theory, commonly used in economics and international relations, that competitive pressures force most groups at the apex of a system to think, act, and behave pretty much the same. For example, during the Cold War, the Soviet Union and the United States, despite their radically different concepts of how to run a government, behaved roughly the same on the international stage: they built coalitions and acted almost entirely in self-interest.

The same principle probably applies to organisms. The same competitive pressures probably force intelligent organisms to develop similar mental frameworks.

This isn't to say that ET will look anything like us. Just that he'll probably understand patterns, numbers, and the like, because understanding those things is what this universe rewards.


Pretty good, in this universe at least. Every intelligent creature in this universe has to be able to do the same basic things: reproduce, manipulate objects, fend off predators, etc.

That's a very small scope of 'basic things' - it may well apply only to the products of evolution on Earth. What about evolution elsewhere in our universe, at scales that aren't our size, at speeds that aren't our speeds, in matter that isn't carbon/oxygen/etc.?

What is there to stop an evolutionary process from happening inside a star? It doesn't have to be our star - it could be a red giant, with entire complex evolutionary "life" constantly being created and destroyed in a matter of seconds. To them, the star is the accessible universe; it forms the fundamental fabric of their existence.

What is there to stop evolutionary processes from happening at the quantum level?

Most things in the universe don't happen at our speeds, and they don't happen at our size. And things get really unlike our everyday picture of the universe when you start thinking about the relatively big or the relatively small.


I think this is a good point. The number of possibilities is too great to imagine, and there very well could be some type of life I simply can't envision.

"What is there to stop evolutionary processes to happen on a quantum level?"

There are entropy constraints on the conditions under which life can evolve. There must be a persistent storage mechanism that degrades at an appropriate rate: too fast, and mutations pile up and the organisms all die; too slow, and evolution can't operate. I suspect that most possible life would have to rely on carbon-oxygen chemistry or something similar.
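
To make that point concrete, here's a toy sketch in Python (my own illustration; the bit-string "genome" and all names are hypothetical, a cartoon rather than a model of real biochemistry). It evolves bit strings toward an all-ones target under selection at different per-bit mutation rates: at rate zero nothing changes, at moderate rates the population adapts, and at very high rates stored information is scrambled faster than selection can accumulate it.

    import random

    # Toy illustration of the mutation-rate argument above (a cartoon,
    # not a model of real biochemistry).
    GENOME_LEN = 50
    POP_SIZE = 100
    GENERATIONS = 200

    def fitness(genome):
        return sum(genome)  # number of 1s; the target is all ones

    def evolve(mutation_rate):
        # start with a population of all-zero genomes
        pop = [[0] * GENOME_LEN for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            # keep the fitter half, refill by copying each survivor once with mutation
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: POP_SIZE // 2]
            children = [[bit ^ (random.random() < mutation_rate) for bit in parent]
                        for parent in survivors]
            pop = survivors + children
        return sum(fitness(g) for g in pop) / POP_SIZE

    for rate in (0.0, 0.001, 0.01, 0.1, 0.5):
        print(f"mutation rate {rate:>5}: mean fitness {evolve(rate):.1f} / {GENOME_LEN}")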


"No one - not even a Bayesian superintelligence - will ever come remotely close to making efficient use of their sensory information..."

We can settle that now...

[PDF] Physical limits to computation: http://puhep1.princeton.edu/~mcdonald/examples/QM/lloyd_natu...

Eye: http://www.eurekalert.org/pub_releases/2006-07/uops-prc07260...

So if the eye does 10 million bits/sec & the 1 kg ultimate computer does 10^51 oper/sec on 10^31 bits, I think there's a good chance a Bayesian superintelligence can do it.

That computer may not be buildable, & you might have instruments with somewhat more bits/sec, but there's a LOT of room to work with.
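
To get a feel for the gap, here's a back-of-the-envelope sketch in Python using the figures quoted above (the numbers come from the linked Lloyd paper and the eye press release; this is only about orders of magnitude, not about what is actually buildable):

    # Rough comparison of the figures cited above (orders of magnitude only).
    eye_bits_per_sec = 1e7        # ~10 million bits/sec from the retina
    ultimate_ops_per_sec = 1e51   # Lloyd's 1 kg "ultimate laptop"
    ultimate_memory_bits = 1e31

    ops_per_incoming_bit = ultimate_ops_per_sec / eye_bits_per_sec
    seconds_of_input_stored = ultimate_memory_bits / eye_bits_per_sec

    print(f"operations available per incoming bit: {ops_per_incoming_bit:.0e}")     # ~1e44
    print(f"seconds of eye input that fit in memory: {seconds_of_input_stored:.0e}")  # ~1e24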


The eye may receive 10 million bits per second, and the optic nerve and brain do a marvellous job of finding patterns in those 10 million bits, but that doesn't mean we're processing it efficiently.

As Eliezer pointed out, truly "efficient" processing would allow us to derive the theory of relativity (or at least Newtonian mechanics) from about 6 frames of an apple falling in a field. That would be a super-efficient use of the available information, and still might not be the best possible use! The eye might be a fantastic piece of kit, but it sure as hell doesn't do that. (Or if it does, I need a proper user's manual, because I haven't figured out how to do it yet!)


Isn't there a declining marginal value to the amount of sensory information available and the amount of processing you do on it?

Information and calculation beyond a certain point may simply become "boring."


Fair point, GavinB. If you define "efficiency" in terms of achieving the most benefit from an eyeball using the fewest computations, then I'm not sure whether a Bayesian superintelligence could approach "efficiency". Maybe the theoretically optimal program to run on the eyeball's input would itself require exponentially vast brainpower to calculate!

But I would concede a much higher chance that a Bayesian superintelligence could get bored at exactly the right time, than that it could simulate all possible universes.


Do you think it would be a "good" thing if the average intelligence level was bolstered by 40 points? Can you think of any ill side effects from this possibility?


increased existential depression...


It seems like a sufficiently intelligent intelligence will eventually "escape" all the way up to the "real world". From this example, if the Einsteinians had invented an AI without having the hardware spontaneously melt, it would have escaped into the Einsteinian world. It then would have moved up into the "undergrad world", since it would be smarter than the Einsteinians and they were capable of "moving up".

I can't quite put my finger on it but something seems wrong about this...something mind bending is going on here.


You should read Greg Egan.


Man, the kind of stuff I read on overcomingbias makes everyday life feel vapid and onerous.


- Shannon limit.

- Genius: perspiration vs inspiration and how much genetics alone can affect this.

- Relativity: if we're traveling near c, does that make us less intelligent because we think more "slowly"?

- Tractability: here comes the heat death of the universe.


This is mind-bending. Good read, thank you!



