Actually colonizing the galaxy would be so much harder than pretending to have done it when filming Star Wars or Serenity.
This article relates to one of my crazier recurring thoughts. Looking at how readily most of us have shifted to, or become accepting of, virtual experiences and Baudrillard-type "simulacra", I can't help but run the clock 250 years forward and picture people willingly living with their brains permanently wired into an artificial reality (or, if we "crack" consciousness, being converted into conscious computer programs of sorts).
Fighting the laws of physics would no longer be necessary; you could merely play a futuristic, hyperreal version of EVE Online in what your brain would identify as "reality" and do whatever you like.
But ultimately, I think we're guided by our minds and feelings. If we gain the ability to keep a brain at a post-orgasmic level of satisfaction and inactivity in perpetuity, it's probably game over for humanity.
Yup. This is why I've always questioned "technological Singularity"-type events: even if we might be subjectively much happier, from the point of view of other alien species we'll basically have just smothered ourselves in a big blanket of grey goo (née Computronium). If we ever do manage to get out into the galaxy, I expect to find that the vast majority of planets with evidence of previous civilization (junk in orbit, etc.) are now big grey spheres that are too busy reveling to be interested in talking to us (but may consider eating us to add to their Maximum Revelry Potential).
That would be a fairly frightening hegemonic swarm. I suspect that hedonically oriented Kardashev type II civilizations are more likely to use a gentle "Join us and experience the bliss of a thousand heavens" sales pitch, if they have much of a foreign policy at all.
I'm imagining a complete partitioning of utility functions here. To me, a type-II civilization would be more akin to the planet of the Matrix movies—a symbiotic system involving:
A. a sentient civilization, living in a virtual universe with unlimited possibility, and
B. a completely automated outer reality, keeping the sim running, of which the sentients have (and need) no awareness.
Part of the sentients' remembered history would be that "we used to live in a much more limited universe, but then we ascended to this one," or something like that. The "reality" that the automated system inhabits would be no more real to the sentients than your fingertip is to your IDE. (And the sentients would have as little awareness of an orbital bombardment against the automation as your IDE has of your fingertip holding down the power button of your computer.)
The automated system would want only to grow and maintain itself, and would not have the higher-level thinking capacity required to formulate larger goals. It would be more like a plant, having the same sort of relationship to the sentients that we have to bacteria that live in our bloodstreams. (I want to mention both Avatar and [End of] Evangelion here, for oddly-complementary visuals.)
I would think that even such a technologically dominant civilization would want to keep a few sentinels in the base reality, if only to keep an eye out for stray existential threats. And such a sentinel would be a fascinating character to base a narrative around: aware of the true nature of things, given vastly consequential powers in physical reality, and yet, for all that, not particularly important in the view of the society running inside.
True (I sort of had the feeling that this is why the Great Machine in Babylon 5 required an operator.)
But when you start to think like a post-Singularity sentient, you start to realize that their ethical system would be much "pickier" than ours. They would see leaving any of their own (whether biological or AI-based) in the "limited" reality as a punishment on the scale of the suffering child of Omelas, that is to say, one they could not possibly stand for.
Why would computronium limit itself to a single planet or even a single solar system? There's plenty of matter that could be harnessed to perform ever more calculations.
If you estimate that life at our stage of development is relatively common, but notice the stars are silent, you should then conclude that humanity is doomed. Either we're doomed to destroy ourselves (like 100% of other life forms at our stage) or we're doomed to be stuck in one solar system.
Note: I don't want to spend a bunch of time explaining my view on this, but I think that intelligent life is so rare that we're the only instance of it in our light cone.
I basically agree with your conclusion in practice as a thinking human being—but I'm also a writer, and it's very hard to write speculative fiction when there's no FTL and our nearest neighbor might live in the next galaxy over—so I like to ask the angels-on-a-pinhead questions anyway :)
So, a reasonable alternative solution: use the Anthropic argument. First, assume that there is intelligent life close to us. Then, observe that we still exist and haven't been consumed by grey goo; and, according to our long-distance observations, all the planets around all the stars we've surveyed still exist too, in shapes that don't suggest intelligent redesign (even though "they" have had a potential few billion years to do so).
Given that, it's likely that whatever path civilizations tend to follow eventually precludes them from wanting to leave and redesign the galaxy in their image. This might just be a problem of limited extra utility derived from expansion: if Moore's exponential curve does, in fact, turn into Moore's S-curve, then there's no point in sticking more Computronium onto the pile, because the lightspeed delay in the optical relays would get to be too long for the extra circuits to do any good (presuming everyone wants to live on a single "instance" of AwesomeUniverse2000, rather than sharding it).
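To put a rough number on that latency argument, here's a toy Python sketch. The 1 ms synchronization budget and the solid computronium sphere are my own illustrative assumptions, not anything from the comment above:

    # Back-of-envelope sketch of the lightspeed-delay argument.
    # All figures are illustrative assumptions, not engineering numbers.

    C = 299_792_458  # speed of light in vacuum, m/s

    def round_trip_latency_s(radius_m: float) -> float:
        """Worst-case signal round trip across a sphere of computronium."""
        return 2 * (2 * radius_m) / C  # across the diameter and back

    # Suppose the shared sim needs every node to sync within ~1 ms to feel
    # like one coherent "instance" (an arbitrary assumption):
    MAX_LATENCY_S = 1e-3

    # Largest sphere radius that still meets the budget:
    max_radius_m = MAX_LATENCY_S * C / 4
    print(f"max radius: {max_radius_m / 1000:.0f} km")  # ~75 km

    # Even an Earth-sized sphere blows the budget, never mind a Dyson swarm:
    earth_radius_m = 6.371e6
    print(f"Earth-sized round trip: "
          f"{round_trip_latency_s(earth_radius_m) * 1e3:.0f} ms")  # ~85 ms

On those made-up numbers, anything much bigger than a city-sized core stops behaving like a single instance, which is the S-curve intuition in miniature.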
Now, the number of additional assumptions in the conclusion can, by working back through Bayes' law, tell you how probable the assumption itself (that life exists close to us) is. I'm guessing it's a small, but not infinitesimal (compared to space) number.
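A toy version of that Bayesian step, in the same spirit: every probability below is an arbitrary number I picked to show the shape of the update, not a claim about the real values:

    # Toy Bayes update for "intelligent life exists close to us" given
    # that the skies look untouched. All inputs are invented for
    # illustration.

    prior_life = 0.5             # P(life nearby), before looking up

    # P(untouched skies | life nearby): low, since neighbors with a few
    # billion years of head start would likely have left visible marks.
    p_silent_given_life = 0.05
    # P(untouched skies | no life nearby): near certain.
    p_silent_given_no_life = 0.99

    # Bayes' law: P(life | silence) = P(silence | life) P(life) / P(silence)
    p_silence = (p_silent_given_life * prior_life
                 + p_silent_given_no_life * (1 - prior_life))
    posterior = p_silent_given_life * prior_life / p_silence
    print(f"P(life nearby | silent skies) ~= {posterior:.3f}")  # ~0.048

The interesting move in the parent comment is that each extra assumption you grant (S-curves, single instances, etc.) raises P(silence | life), which drags the posterior back up toward the prior.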
"presuming everyone wants to live on a single "instance" of AwesomeUniverse2000"
Looks like you have worked out their business model, then: unless you pay, you have to share it with others; go for the upgrade and you get to be selective.
1. It's possible that right now we are living in a simulation that traps all our senses into believing this right here is 'reality'.
2. As long as there is even a single component of humanity (a physical brain, for instance) that remains in contact with today's world, humanity of the type depicted in your comment cannot fully accept "the virtual world" as reality.
For humanity to completely 'believe' that virtual world to be 'real', humanity will have to cut all ties to this world and live as 'pure' virtual entities. Not impossible, and as noted in 1, the 'reality' we accept and live with today was probably a 'virtual world' in some other 'reality'.
Given 1 and 2, it's possible that there is no such thing as "reality"—every simulation is just hosted within another simulation, in a directed cyclic graph (e.g. there is a simulation that eventually hosts one of the simulations above it—or, at least, a simulation quark-for-quark identical to it.)
And if you accept that, you're only a little way from proposing a basis to the existence of this graph: http://en.wikipedia.org/wiki/Mathematical_universe_hypothesi... — that a simulation that "runs" another is really just "connecting" to the mathematical structure of the other simulation, and so you can actually be simultaneously hosted by 1, 2, N, or even zero parent realities, and still you'd exist.
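For what it's worth, that "directed cyclic graph" picture is easy to make concrete. A minimal Python sketch, with invented reality names, where following host-to-hosted edges eventually loops back on itself:

    # Toy model of simulations hosting simulations. Nodes are realities;
    # an edge points from a host to the simulation it runs. Names invented.

    hosts = {
        "A": "B",  # reality A runs simulation B
        "B": "C",  # B runs C
        "C": "A",  # C runs something quark-for-quark identical to A
    }

    def trace(start: str) -> list:
        """Follow host->hosted edges until we loop or fall off the graph."""
        path, seen = [start], {start}
        node = start
        while node in hosts:
            node = hosts[node]
            path.append(node)
            if node in seen:
                return path  # a cycle: no privileged "base" reality
            seen.add(node)
        return path  # chain ends: a base reality exists after all

    print(trace("A"))  # ['A', 'B', 'C', 'A'] -- the loop closes

Under the mathematical-universe reading, the loop isn't paradoxical: each node "exists" as a structure regardless of how many parents point at it.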
Maybe that's what the majority will be doing in 250 years; I couldn't tell you. But no matter what fantasies we indulge in a hypothetical simulation, this world here still exists, and eventually we will have to leave this planet or perish. I don't know about anyone else, but even given a fantastic simulation-world, I would treat it as just another game to distract me, possibly for a long-distance voyage across the stars. This is of course assuming we can't improve our intelligence any further; with higher intelligence I might find something much more pleasing and interesting in the real world than in the simulation, or desire to write out the part of me that likes the simulation.