There are so many people who could come into existence in the future if humanity survives this critical period: we might live for billions of years, our descendants might colonize billions of solar systems, and there could be billions upon billions of times more people than exist today. Therefore, even a very small reduction in the probability of realizing this enormous good will tend to outweigh even immense benefits like eliminating poverty or curing malaria, which would be tremendous by ordinary standards.
It's really an interesting moral discussion - it's like an extension of the classical "sacrifice one person to save five" dilemma, but I really can't agree with his flippant assertion that our moral obligation to the unborn future billions eclipses our obligation to help our contemporaries. And he's not even talking about preserving the planet for future generations; he's merely concerned with them being born in the first place.
Also, his argument has the same shortcoming as the "sacrifice" dilemma: we cannot know for sure that a certain action will have a certain outcome - or that any action taken was actually the cause of the outcome.
We effectively credit him with saving the world. If it weren't for him, most of us might not be around. I consider him a hero. From a moral perspective I would rather be him than Bill Gates, who has greatly reduced suffering and disease in the modern world. Just because of the impact on the future.
>Also, his argument has the same short-coming as the "sacrifice" dilemma: We cannot know for sure that a certain action will have a certain outcome - or that any action taken was actually the cause of the outcome.
I don't see how this is a shortcoming.
If I am leading a company, I cannot know for sure that a certain action will have a certain outcome – or that any action taken will actually be the cause of a given outcome. That's not going to prevent me from doing my best to build a good product and make a profit.
In the same way, the uncertainty inherent in reducing existential risk shouldn't prevent us from doing our best to reduce it.
Just like there's a time value of money, there should be a time value of people. The unborn future billions should be discounted to the present value so that we can accurately compare them. What's the discount rate?
You're pushing the idea of the discount rate beyond its underlying assumptions. Short-term temporal discounting is premised on two concepts:
1) A pseudo-psychological principle that people value things more in the present than in the future;
2) An economic principle rooted in the assumption that there are alternative investments available for any given expenditure.
(1) is problematic because: a) it's not true at the scales we're talking about; and b) it's intrinsically tied to how someone existing in the present values future benefits. It takes a very narrow view of "utility" to argue that only the value to those in the present is relevant.
(2) is problematic because the assumption isn't necessarily true. Imagine a world where there are no investments that yield a consistent return, such that an investment at time T0 yields exponentially increasing wealth at time T0+T. In such a world the second principle provides no reason to engage in temporal discounting. At the scales we're talking about, this second principle starts to break down. You can't assume that an investment in the present will continue to yield returns indefinitely into the future if your actions result in there being no future humans.
I think a more apropos basis for a guiding principle in this area is the observation that, in absolute terms, the productivity of society grows exponentially. A single person 1,000 years from now will produce much more, in absolute terms, than a single person today. If we define our metric more objectively, as something like the sum of all production over the existence of humanity, then the proper course of action is the one that preserves as many future lives as possible, even at the expense of present lives, because most production will happen in the future.
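That "most production will happen in the future" claim is easy to check with a toy calculation; the 2% growth rate and 10,000-year horizon below are my illustrative assumptions, not anything from the discussion:

```python
import math

# Fraction of humanity's total production that occurs after year T,
# assuming production grows exponentially at rate g out to horizon H.
# Cumulative production up to time t is proportional to (e^(g*t) - 1) / g.
g = 0.02      # assumed 2% annual productivity growth
H = 10_000    # assumed horizon: years of remaining civilization
T = 1_000     # "a single person 1,000 years from now..."

total = (math.exp(g * H) - 1) / g
after_T = (math.exp(g * H) - math.exp(g * T)) / g
print(f"share of all production occurring after year {T}: {after_T / total:.6f}")
```

Under these assumptions, essentially all production (more than 99.9%) lies beyond year 1,000, which is the point being made above.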
It has to be very high, because our uncertainty about the future is absolutely enormous. I'd put the discount rate even higher than the monetary discount rate, which with fairly standard numbers is already effectively 0 in 20 years.
But note that most commenters there accept discounting based on risk, uncertainty, opportunity cost, etc. -- which are exactly the grounds on which most ordinary people claim we should ignore the future, even on this very Hacker News page, where you would think people would understand and accept things like expected value and probabilistic reasoning.
A lot of those bases break down completely when you're talking about these time scales. Consider opportunity cost. It is not sensible to spend $50 to save $100 20 years in the future, because even at a fairly low rate of return the opportunity cost of that $50 now is over $100 in 20 years. Note that such thinking is unavoidably rooted in the idea that there will be a consistent rate of return over the period in question. That assumption isn't necessarily true if you're talking about the possibility of present decisions increasing the risk of wiping out humanity in the future.
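The arithmetic behind that opportunity-cost claim, with 4% as my assumed "fairly low rate of return":

```python
# $50 spent today vs. $100 saved 20 years from now.
principal, rate, years = 50.0, 0.04, 20
future_value = principal * (1 + rate) ** years
print(f"${principal:.0f} compounded at {rate:.0%} for {years} years: ${future_value:.2f}")
# ~$109.56 -- already more than the $100 saved, so the spend looks like a loss...
# ...but only if that 4% return actually persists for the whole 20 years.
```

The last comment is the catch the paragraph above points at: the comparison silently assumes the rate of return survives whatever risk you were spending the $50 to reduce.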
I feel these moral rules go out the window the moment you're faced with a life and death scenario. When we're talking about this theoretically, it's easy to say a billion people in the future are more important, but the moment you realize you could die, you say screw those people, I'm gonna save my life first.
Not to mention the fact that any person alive today is potentially the root of a vast tree of countless millions of those future unborn, each of whom is potentially the root of another vast tree, and so on...
Ill effects suffered by these people may well be passed on through the generations (sins of the father, so to speak). Or worse, if they do not make it to procreation, entire continents of future lives will be snuffed out.
The moral weight of this absurd postulation, if anything, swings toward the past rather than the future.
That's the problem of reasoning on timescales of civilizations, yeah? You can't really pay attention to transient effects and still make meaningful policies.
For example, consider how many people died mining coal in the past couple of centuries, or how many natives died when colonizing forces spread them diseases or outright committed genocide.
Sure, it's awful, but would the world really be a better place if America were limited to some traders on the East coast? Or if England had never really gotten the Industrial Revolution thing figured out?
We can't really reason in human terms when talking about the species writ large.
> how many natives died when colonizing forces spread them diseases or outright committed genocide.
Are there really people who think the Native American genocide was somehow worth it? That's despicable. Tell me, because I'm not sure: do you classify the genocide as a "transient effect" or as "meaningful policy"?
> [...] would the world really be a better place if America were limited to some traders on the East coast?
Iraq war, anyone? That's how the USA makes the world a better place.
"Even with nuclear weapons, if you rewind the tape you notice that it turned out that in order to make a nuclear weapon you had to have these very rare raw materials like highly enriched uranium or plutonium, which are very difficult to get [a]. But suppose it had turned out that there was some technological technique that allowed you to make a nuclear weapon by baking sand in a microwave oven or something like that. If it had turned out that way then where would we be now? Presumably once that discovery had been made civilization would have been doomed."
It works both ways; technology could find simpler ways to build superweapons, but it could also lower the entry barrier to complex, resource-intensive projects. Look at the consequences of even mild singularity predictions for robotics and AI -- things that lower industrial "costs" by orders of magnitude, things like 3D-printers, self-replicating machines, cheap and ubiquitous fab robots. Anyone could build incredibly sophisticated machines in their own homes -- including sports cars, jet engines, and giant TV screens, but equally, compact laser enrichment cascades [b] and nuclear weapons.
Perhaps it's lack of knowledge on my part, but I don't see what will stop atomic bombs from being as common as handguns in 30-100 years. Fissile material is ubiquitous [c]; there's no barrier beyond economics and engineering, of exactly the sort near-term AI could unpredictably disrupt.
[a] (Tangential silliness: terrestrial uranium WAS weapons-grade material a few billion years ago -- U-235 decays with a 0.7 billion year half life, compared to 4.5 billion years for U-238, so in geologic history the fissile fraction used to be extremely high. Another one for the "anthropic principle" bin: earth's intelligence must have evolved now, and not earlier, because if it had it would have trivially nuked itself...)
[b] Check out [NYT][APS] -- it's actually a mainstream position among arms control experts to consider shutting down US research in laser enrichment, to prevent the knowledge from being developed at all. To me this plan sounds about as airtight as proposing to close the NSA to keep mathematical secrets like RSA from being discovered.
[c] Here's a blogger who goes hiking and brings back uranium ore by the bucket [Willis], for his own hackery experimenting. Uranium ore occurs everywhere [Cameco]; even common granite is a low-grade ore containing 5-50 ppm (parts per million) U [AZGS]; and research suggests it's even feasible to extract it by bulk from seawater [NBF].
"Perhaps it's lack of knowledge on my part, but I don't see what will stop atomic bombs from being as common as handguns in 30-100 years. Fissile material is ubiquitous; there's no barrier but economics and engineering, of exactly the sort near-term AI could unpredictably disrupt."
You guess right--it's your lack of knowledge.
Fissile material in its raw form is not useful for making nuclear weapons (for anything beyond the dirty-bomb variety). So far, we seem to have converged on uranium and plutonium, and while at least uranium is relatively common in the Earth, the processing and dredging required to get a useful amount of it, and then the refining to isolate the useful isotopes, are nontrivial.
Engineering and fast computers are great and all, and will get you arbitrarily close to the physical limits--but we're there right now, and physics says you aren't getting a centrifuge with meaningful output in your garage.
More troubling is the idea that frankly we've had the technology to build sports cars in our homes for decades and decades now. It's called buying a lathe, an arc welder, a mill, a forge. Despite the cheap availability of these things to the general populace, not only is this kind of building not widespread among those who could do it--and there are damned few of those, even with this magical Internet thing telling you how to do all of it--it will get less likely as people focus on cat pictures and tweeting.
We aren't worthy of a singularity as a culture.
Thank you for attaching some interesting facts. Allow me to do the same:
Your hiking source claims to produce uranium metal, but points out that it is completely locked up in slag and that the approach only seems tractable at large scales.
Even allowing for that, the amount produced is negligible.
That's just for metal--we aren't even talking about a usable isotope yet.
Take 7/1000 of that result from the blogger (the natural U-235 fraction), and wave a wand to make it pure enough to use.
Now collect the (at least) several pounds of that needed to make a functioning device. Now do the machining on the rest of the device to make it function correctly (have fun with the beryllium dust, if you go that route). Now do the timing electronics, and the charge shaping (if you go that route).
"Fissile material is not useful for making nuclear weapons (of any more than the dirty bomb variety). So far, we seem to have converged on uranium and plutonium, and while at least uranium is relatively common in the Earth the processing and dredging required to get a useful amount of it, and then the refining to get the useful isotopes from that, is nontrivial."
I did not express myself well; this is exactly my point -- the barrier to (U-235 based) nuclear weapons is isotopic enrichment -- very difficult engineering -- and not so much access to natural uranium -- raw materials. So, should difficult engineering become unexpectedly easy (as in singularity-type scenarios), there would be no major barrier to weapons, since access to uranium cannot possibly be blocked.
"Thank you for attaching some interesting facts. Allow me to do the same:"
Apologies for editing and expanding my comment after you read it -- it's a bad habit. ("Release early and release often"?)
"Engineering and fast computers are great and all, and will get you arbitrarily close to the physical limits--but we're there right now, and physics says you aren't getting a centrifuge with meaningful output in your garage."
Could you elaborate on this? I know the underground Fordow complex is about 6,000 m^2 [Forden] -- only about 1-2 orders of magnitude off, and it's built for human workers to access (not robots). And there's at least a factor-of-4 miniaturization in current laser technology (refs [APS][NYT]). I can't describe a design for a garage-sized enrichment plant, but I don't know that it's strictly precluded by physics or engineering constraints.
"1.6 to 16 times more efficient than first-generation gas centrifuges"
Even taking the SILEX claims at face value, that's compared with first-gen gas centrifuges--presumably those used in the mid-20th century.
That's still not going to produce output in a form factor usable in a garage. Moreover, the argument I'd posited earlier doesn't hinge on the refinement--the sheer scale of processing required to ingest that much raw material is what gets in the way. Then you also have to dispose of the waste tailings.
"A single centrifuge might produce about 30 grams of HEU per year, about the equivalent of five Separative Work Units (SWU). As a general rule of thumb, a cascade of 850 to 1,000 centrifuges, each 1.5 meters long, operating continuously at 400 m/sec, would be able to produce about 20-25 kilograms of HEU in a year, enough for one weapon."
Even going with the factor of 20 speedup (overestimation from your provided article with the SILEX quote), we would expect to need 50 centrifuges, on the order of 1.5m long each--that's quite a lot. There's also the supporting equipment, power conditioners and piping and so forth.
In fact, powering the entire apparatus is also a big concern. The article I linked suggests a power draw on the order of several hundred thousand kilowatt-hours per year (compare with around six thousand per year for a normal home). So, again, I don't see the garage fab making sense.
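Putting the thread's numbers together (the 1,000-centrifuge cascade, the generous 20x speedup, and the "several hundred thousand kWh" figure are all quoted above; treating that power figure as 300,000 kWh/year is my rounding):

```python
# Back-of-envelope from the figures quoted in this thread.
baseline_cascade = 1000          # centrifuges for ~20-25 kg HEU/year
speedup = 20                     # generous SILEX-style speedup (upper bound)
centrifuges_needed = baseline_cascade // speedup

cascade_kwh_per_year = 300_000   # "several hundred thousand" kWh/year
home_kwh_per_year = 6_000        # typical home's annual usage
homes_equivalent = cascade_kwh_per_year // home_kwh_per_year

print(f"centrifuges needed: {centrifuges_needed}")
print(f"power draw of roughly {homes_equivalent} homes")
```

A 50-machine cascade drawing a 50-household electricity bill is hard to fit in (or hide behind) a garage, which is the point being made.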
Oh, and during all that time?
You bet your ass the government is datamining your browsing history, purchase orders, and hobbies. I've ignored it so far in the discussion, but if you want to play the magical singularity wand I'll play the fascist police state card.
> More troubling is the idea that frankly we've had the technology to build sports cars in our homes for decades and decades now. It's called buying a lathe, an arc welder, a mill, a forge. Despite the availability--cheap!
Now I'm curious. How much would it cost to set up a reasonable home garage with all this machinery?
Welder capable of welding tube frame for racecar, about $300 - $500
Cheap lathe (Chinese import 12x36 or old American) about $1500
Cheap mill big enough for automotive work (Chinese import or used American) about $1200
Cheap cutoff saw for most purposes: $100
Don't need a forge, just head over to www.onlinemetals.com to get any metal you're likely to need.
I agree with the poster: the tools and the knowledge of how to do this stuff are readily available (I downloaded a pdf of plans for a 22-caliber single-shot pistol I will build someday), but very few people actually bother. Just like we have tons of cheap computers, but only a vanishingly small percentage of people bother to learn to program them...
Depends on your budget, of course. Machine tools don't 'depreciate' past their accuracy. Which is to say that a mill that can hold one thousandth of an inch repeatably costs $X used, and one that can do one ten-thousandth (a "tenth") repeatably is $10X.
The challenge of building a sports car is generally building your own engine (which people don't often do unless the engine is steam-based). You need to get the engine block cast, and that requires a foundry. Building a foundry furnace in your garage is quite difficult if it needs the volume capacity to cast even a fairly small engine block. Once you have the castings, however, the various other bits fit in your garage easily and cost between $8K and $150K depending on newness, accuracy, control methodology, etc.
I'm also assuming that you would use fiberglass layup for the body panels since a sheet metal press that can make a 'hood' sized piece, or fender sized piece, is also quite large (and tall).
> The challenge of building a sports car is generally building your own engine (which people don't do often unless they are steam based). You need to get the engine block cast and that requires a steel foundry.
Unless you just insist on starting with a home-made block, you can acquire "naked" engine blocks fairly easily. Or at least you could a few years ago, I've been away from that scene for a while, so my knowledge isn't completely up to date. And even if you can't get a naked block you can certainly get partially built engines (usually the block, crankshaft, pistons and connecting rods) or "crate engines" (usually everything except intake manifold and carburetor / fuel injector) straight from the manufacturers. For example, see:
'course, starting with a pre-forged block and partially built engine isn't quite like doing it from scratch, but then again, I'm assuming anybody building their own car is buying pre-rolled tubing and sheet stock, etc., not literally building everything up from iron ore, aluminum ore, etc.
Yes, any sane person would start there :-) I was just responding to 'build from scratch', which can be interpreted quite literally. A friend of mine builds race cars which are basically some 'regular' car, except all the structural bits get replaced. I joked with him that it would be simpler if he just took the engine out and built a frame around the engine, but he claims the previous frame is his version of a story stick.
> A friend of mine builds race cars which are basically some 'regular' car, except all the structural bits get replaced. I joked with him that it would be simpler if he just took the engine out and built a frame around the engine.
Cool. My dad builds and races late model stock cars and has been involved in racing as long as I can remember... so yeah, I can relate to exactly what you mean there. The race cars have almost nothing left of the original car except the exterior sheet metal.
Maybe people don't read as much speculative fiction as they used to (or books in general, for that matter, but I digress...) but it seems like lately I've run into more than a few people acting surprised about concepts that, frankly, have been explored to death in books as recently as half a century ago.
There's no reason to believe our universe is uncomputable. It may require vast resources, in excess of what our universe possesses (by definition, in some sense), but we have no ability to say that there can exist no other possible universe that may not only possess these resources, but consider our entire universe the moral equivalent of a homework problem running on a toy computer.
Even if our universe is in some sense based on real numbers (in the math sense), there's no apparent reason to believe that arbitrarily accurate simulations couldn't be run of it. (Or, alternatively, our host universe may also have real numbers and be able to use them for simulation purposes.)
Also, it could be that the 'real' civilization is not simulating the entire universe, but rather just the solar system and simulating the light entering the solar system from outside the solar system, substantially reducing the computational complexity of the simulation.
Still, modeling every living being as a sort of cellular automaton and running the simulation would take a lot of quantum computers. What would be the computational complexity of modelling the consciousness of a human being?
Perhaps the "real" civilization could run the simulation slower than real-time. For instance, while a whole hour elapses in the real world, only a half hour might elapse in the simulation. To the person observing the simulation, everyone inside the simulation appears to be going slowly, while time flows normally for the people inside the simulation.
I think this would reduce the required computation rate by a factor of 2 (the total work stays the same; it's just spread over twice the real time).
My almost entirely uninformed idea was that observation of the universe by something that is external to the universe might somehow interfere with the conservation of energy or in some other way produce phenomena that contradict the laws of physics.
The scare quotes are well placed. The problem with Bostrom et al. is not that they are spending money. It's not even the fact that they are wasting their time on imaginary risks. It's that, by trying to bend policy around those imaginary risks, they are increasing our vulnerability to real-life risks.
For example, there has been a lot of talk for the last few decades about bioterrorism. In 2001, it finally happened; a bioterrorist attack occurred. It killed five people.
In the meantime, natural infectious diseases kill several million people every year, and far worse is possible. In 1918-1919, a flu epidemic killed two or three times as many people in a single year as the First World War had killed in four years. In 2002-2003, a SARS outbreak killed a thousand people before it was stopped by quarantine measures. We got off _very_ lightly; SARS is at least as infectious as the 1918 flu, is no more curable, and has nominally roughly twice the lethality rate. The real lethality difference is higher, for the 2002-2003 outbreak was small enough that most of the victims could receive oxygen treatment, which significantly improves survival; in a pandemic, of course, there would not be enough such treatment to go around. And there have been historical plagues deadlier by far than SARS. Terrorists have political goals relative to which the weapon that killed their own communities and families would be counterproductive. The forces of mutation and natural selection working on disease organisms have no such reason for restraint.
Now what exactly do the bioterrorism criers think is going to happen if they start being taken seriously enough to influence policy? It's only too obvious what's going to happen, because it's the same thing that always does: more regulation on biotech research, more red tape, more of the sort of field day for the paranoid and bureaucracy gone mad that we already see in the security theater at airports. It's hard enough to fly to Disneyland in those conditions, let alone do cutting-edge research.
And so when - not if - something deadlier and more insidious than SARS does come along, whatever chance we might have had of being prepared for it may end up being thrown away.
It's not difficult to understand why our instinctive assessment of threats is so wildly irrational. In the ancestral environment there was plenty of disease, to be sure, but almost no medicine, so little selection pressure to be sensitive to that threat. Violence was the main cause of death _that you could do something about_. The human brain is wired to assume our fellow man is the primary threat, regardless of the facts.
But we can sometimes override even hardwired assumptions, once we understand that we live in a world in which they are no longer true. We had better learn to override this one, and quickly.
You might as well say there was a lot of talk about flight in the centuries before it was actually invented. Historical data can only take you so far.
>Now what exactly do the bioterrorism criers think is going to happen if they start being taken seriously enough to influence policy? It's only too obvious what's going to happen, because it's the same thing that always does: more regulation on biotech research, more red tape, more of the sort of field day for the paranoid and bureaucracy gone mad that we already see in the security theater at airports. It's hard enough to fly to Disneyland in those conditions, let alone do cutting-edge research.
Are you in favor of government regulation for people researching new designs for nuclear bombs?
It's all about cost benefit analysis. The TSA is a waste of time not only because hijacking is rare, but because the average hijacking kills fewer than 1000 people. If there was a strong theoretical argument for how a hijacked plane could permanently end the human race, it would not be a waste of time.
I'm in favor of unregulated biotech research if the expected benefit from additional disease cures exceeds the expected risk from potential engineered viruses. Frankly, I'm more concerned about the engineered viruses because it seems like an engineered virus has a better chance of killing off everyone (as opposed to not everyone, which is an extremely important distinction in my view; I'm playing for team humanity) just because engineered things tend to work better than things that assemble by chance.
> Are you in favor of government regulation for people researching new designs for nuclear bombs?
The cases are not particularly similar. It doesn't matter that we have crippled nuclear weapons R&D with regulation. It will matter a great deal if we similarly cripple biotech R&D.
> it seems like an engineered virus has a better chance of killing off everyone (as opposed to not everyone, which is an extremely important distinction in my view; I'm playing for team humanity)
I am not at all as optimistic as you are about that. It is the way of extinction that what kills the last individual may have nothing to do with the underlying factors that doomed the species. The last passenger pigeon died of old age. In my view, far more likely than a single super-disaster that kills everyone at the same time is a sequence of events where one factor delays progress enough to make us vulnerable to another that kills enough people or causes enough disruption to crash civilization, at which point we can't reboot because all the easily accessible fossil fuel deposits are long gone and there's no way to jump directly from wood-burning stoves to solar panels, leaving the survivors to struggle along until ordinary geological processes finish us off. That's the kind of scenario I'm concerned about.
I'm hoping we'll make it to 80,000,000,000 but your general point is correct. There's only so many computational cycles that can be extracted from the universe before you run out of places to sink entropy.
That's a bit pessimistic. If you can do 10^10 years, you can go out to a red dwarf and stick around for 10^13-10^14 more. And with a few small mods you can go further. For example, should you want a "small, slow" world, consider what would happen if you "turned off" a star (e.g. tore it apart until it could no longer sustain fusion) and burned its fuel more slowly, in engineered fusion reactors. The sun has a natural lifespan of about ten billion years; but consumed at a relative trickle of 10^17 W (the part that strikes the earth), there's enough fuel for about 10^21 years. (Conversely, you could grow your world until it matches the power output of a star, like a Dyson sphere. Urban planning decision.) The part that humans actually metabolize is barely 10^12 W (10^26 years in bubble chambers), and non-organic human simulations could do better. And there's a lot more hydrogen where that came from... and human knowledge of this universe's physics is clearly incomplete (what the hell is dark energy?)
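A rough check of that 10^21-year figure (the key assumption, mine here, is that engineered reactors can eventually burn essentially all of the sun's hydrogen, not just the ~10% it fuses during its natural main-sequence lifetime):

```python
import math

# Energy from fusing (nearly) all the sun's hydrogen, consumed at the
# ~10^17 W that currently strikes the Earth.
M_sun = 2.0e30        # kg
h_fraction = 0.74     # hydrogen mass fraction of the sun
efficiency = 0.007    # H -> He fusion releases ~0.7% of rest mass as energy
c = 3.0e8             # m/s
power = 1.0e17        # W

energy_joules = M_sun * h_fraction * efficiency * c**2
years = energy_joules / power / 3.15e7    # ~3.15e7 seconds per year
print(f"roughly 10^{math.log10(years):.1f} years")
```

That lands within a factor of a few of 10^21, which is as close as this kind of estimate gets.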
I'm bringing up an equation that needs consideration:

PotentialDamage = N × P1 × P2 × P3 × MaximumDamage

N = world population
P1 = fraction of people with knowledge of, and access to, dangerous technologies
P2 = fraction of people with destructive personalities
P3 = fraction of those people who actually act
MaximumDamage = the number of casualties a single person can cause
While P1, P2, and P3 are relatively stable, and N grows roughly linearly, MaximumDamage grows exponentially. You can see this in how the weapons accessible to an ordinary person have evolved: from sticks, to axes, to guns and explosives. At the present moment, the main reason nuclear weapons have not been exploded by terrorists is not technology but scarce materials. For bio-warfare, though, the cost of the technology will shrink exponentially while its impact grows exponentially.
This means that if this goes on, PotentialDamage will one day exceed our population, and extinction will occur. By estimating these parameters we could even predict a date.
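The model is easy to play with in code; every parameter value below is a made-up placeholder, not an estimate:

```python
# PotentialDamage = N * P1 * P2 * P3 * MaximumDamage
def potential_damage(n, p_access, p_destructive, p_act, max_damage):
    """Expected casualties from lone actors, per the model above."""
    return n * p_access * p_destructive * p_act * max_damage

n = 8e9                          # world population (roughly current)
p1, p2, p3 = 1e-4, 1e-3, 1e-2   # placeholder fractions

# Hold the fractions flat and let MaximumDamage double each decade:
for decade in range(5):
    max_damage = 1e3 * 2 ** decade
    print(f"decade {decade}: {potential_damage(n, p1, p2, p3, max_damage):.0f}")
```

With the fractions fixed, the whole expression inherits MaximumDamage's doubling, which is the exponential term the comment is worried about.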
It's worth bringing up "The Most Important Video You'll Ever Watch", in which the lecturer characterizes humanity's biggest problem as our inability to understand the exponential function. Watch all 8 parts.
I have come to the conclusion that there simply are too many of us. We can probably sustain our current levels for a century, maybe two, but at some point scarce resources (and their subsequent cost) will have a devastating effect.
Basically, we need to correct our population before nature does.
As much as people point to space being our future, I simply (sadly) do not agree. While there might be plentiful resources in the asteroid belt (and on other bodies), nothing compares to how cheaply we can pull things out of the ground here on Earth. Our society is predicated on cheap, plentiful resources, such that it can't survive them becoming several (or even one?) orders of magnitude more expensive.
As far as interstellar space goes, even if we solve the reaction mass problem and have perfect (100% efficient) conversion of matter to energy, it will still be prohibitive to go to even the nearest stars.
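Some rough numbers behind "still prohibitive even with perfect conversion" (the 1,000-tonne ship and the world-energy figure are my assumptions):

```python
import math

# Kinetic energy to get a starship to 10% of light speed, ignoring
# reaction mass entirely (the best case the comment grants).
c = 3.0e8                 # m/s
v = 0.1 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
ship_mass = 1.0e6         # kg: an assumed 1,000-tonne ship

energy = (gamma - 1.0) * ship_mass * c ** 2     # joules
world_annual = 6.0e20                           # J: rough annual world energy use
print(f"{energy:.2e} J, about {energy / world_annual:.1f} years of world energy output")
```

And that's one way, with no deceleration and no payload margin; doubling the cruise speed to 0.2c roughly quadruples the bill.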
Perhaps the simplest explanation of the Fermi Paradox is that potential growth for a starfaring civilization is polynomial (a sphere limited by the speed of light grows only as t^3) while growth rates are exponential. And exponential will ultimately "win".
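The mismatch is easy to demonstrate numerically; the 1% growth rate below is an arbitrary assumption, and the units are abstract:

```python
import math

# Resources reachable by light-speed expansion grow like a sphere's
# volume, ~ t^3; demand growing at a steady 1%/year grows like e^(0.01*t).
# Find the year where the exponential overtakes the cube.
g = 0.01
t = 10                       # start past the trivial small-t region
while math.exp(g * t) <= t ** 3:
    t += 1
print(f"exponential demand overtakes cubic expansion around year {t}")
```

Any positive exponential rate eventually beats any polynomial; changing g only moves the crossover year, it never removes it.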
That video is a joke for anyone who has more than a superficial understanding of math. Just because a process has fit an exponential curve does not mean exponential growth is going to continue forever. Past growth patterns are fairly weak evidence of future growth patterns. ("Housing prices always go up!")
That's not to say we shouldn't fear a process that is inherently exponential in nature, like the reproduction of bacteria or a nuclear reaction going supercritical. But if the process only appears from the outside to have been growing exponentially, that's only a weak indicator that it will start behaving in an insane fashion.
In any case, it seems that as nations become more developed people stop having kids:
> Just because a process has fit an exponential curve does not mean exponential growth is going to continue forever.
Did you watch the video? This is precisely what it's talking about. Though the economy and energy usage have grown exponentially for the past two centuries, we cannot hope that this will go on forever. Population growth isn't a concern per se; resource overconsumption and the impossibility of indefinite economic growth are.
> Perhaps the simplest explanation of the Fermi Paradox is that potential growth for a starfaring civilization is geometric (being a sphere ultimately limited by the speed of light) while growth rates are exponential. And exponential will ultimately "win".
I read a paper within the last few months that used simulated expansion into a galaxy to debunk that idea. The fallacy is assuming that growth is continuous and that it ceases simultaneously across a civilization. The (simulated) reality is that only a tiny percentage of frontier colonies need to survive to prevent extinction and eventually resume growth.
I wish I could find the paper. Anyone know the one I'm talking about?
The strongest selection pressure in the western world is for people to spend every waking hour making babies. It sounds very likely that population growth will pick up its pace this century unless it is actively prevented.
The selection pressure is real, but it is not the only force in play. Economic pressure is acting too, and it is against raising children; they are expensive to raise to adulthood, and you cannot derive any material benefit from them. It seems to me that, in this case, the economics have much greater force than the genetics. Whatever disposition toward childbearing you inherit from your parents, whether genetic or ideological, is a small thing compared to whether circumstances make providing for them practical.
In general, when dealing with a complex system, it is not enough to know that a single force is acting on it. You need to understand all the forces, and how strong they are and how they behave. All too often, this is the case with selection pressure arguments -- supposing that the force in question is the only one in play, without any regard for whether it has the power to do the job expected of it.
I find that the best course of action is usually to look at the actual data, and see how the system really is behaving, apart from theory. Nontrivial systems very rarely meet naive expectations.
And yet all indications are that that's not what happens. The most likely scenario is that by mid-century the world population will be in decline, and it will continue to decline at an accelerating pace until at least 2100.
Medicine has given us control over our fertility, and the way modern societies are structured children are a financial liability.
Energy is the only resource that matters. With enough energy and technology we can recycle and transform whatever raw physical materials we currently have available, no matter how inefficient the process is.
With the population leveling off, I think the amount of physical material needed at any one time will stop growing exponentially. Consumption rates will still be increasing, but we can recycle the raw materials to keep the absolute amount needed within physical limits.
Sorry, I'm not an expert in that kind of stuff. But since resources don't leave the Earth, the total loss of resources is zero.
Resource consumption is growing exponentially
Hmm, really? My computer is thinner, and so is my TV (and probably my future car will be too). I hardly purchase paper any more, and I consume more digital products than physical ones. And even if I purchase double what I have next year, I can't keep up that pace for long.
> But since resources don't leave the Earth, the total loss of resources is zero.
Soil washed down to the ocean is lost to agriculture, if not lost to the planet itself, for instance. I'm quite sure that the planet will be OK whatever we do; the human race, however, may disappear quite easily.
> Hmm, really? My computer is thinner, so is my TV
Yes, and building a new computer uses infinitely more resources than not building one at all. Your old computer isn't magically transformed back into a new one, or back into ore.
Every transformation needs energy. As long as energy is so cheap that it can be considered free, and so widely available that it can be considered infinite, everything's fine.
Just a reminder that a "service economy" relies on the general availability of masses of almost-free energy and resources. However, energy may soon become more expensive and scarcer.
We've been at peak oil for the past few years, and production will begin to drop any time now, probably within 3 to 5 years. There are no credible alternatives to cheap fossil fuels, unfortunately, for most of their uses (transport, plastics, chemicals, fertilizers...).
As it stands, the way the world economy is currently organized cannot survive a massive change in energy availability, particularly with the current emphasis on economic growth. By definition, economic growth must stop at some point for lack of resources, then reverse. In the near future, economic recession will be the normal state of affairs. How will we manage it without sinking into chaos? I really have no idea.
The one thing that's always skipped over in this kind of article is... Why? The universe doesn't care whether we're around or not. It sure as hell doesn't affect us whether people are around even as nearby as a thousand years from now. If you're worried about future generations suffering, then don't make them. That's a lot cheaper than spending fortunes on a technological solution. The money saved can be used to help those suffering now.
We're underestimating the risk because we're using Net Present Value to determine the value of future events. While it can be useful in business to determine profitability of future endeavors, it isn't useful when we are unable to correctly calculate all the other things that get wrapped up into the word "externality".
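To make the discounting point concrete (my illustration, with assumed figures): under standard Net Present Value, even a civilization-scale loss becomes almost worthless once it's pushed a couple of centuries out. The $100 trillion damage figure and 5% discount rate below are hypothetical round numbers.

```python
def present_value(future_value, rate, years):
    """Standard NPV discounting: PV = FV / (1 + r)^years."""
    return future_value / (1 + rate) ** years

# A hypothetical $100 trillion civilization-scale loss,
# discounted at a typical 5% business rate:
for years in (10, 50, 100, 200):
    pv = present_value(100e12, 0.05, years)
    print(f"{years:>3} years out -> present value ${pv:,.0f}")
```

At 200 years out, the $100 trillion catastrophe is "worth" only a few billion dollars today -- which is exactly why NPV, however useful for quarterly business decisions, badly misprices existential risk.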
Oh... I do love everyone's opinion about this topic. It's interesting, but I think everyone can agree that "human evolution is a train with no tracks". Thinking humans have a formula to solve its equation is pretty amusing.
I'm always fascinated by the extremely long term horizon concerns that people bring up in discussions about the future of humanity.
200 to 300 years out there is no humanity as we know it today. Whatever we are at that juncture, it won't be "human" as we now define it. Our self directed evolution has long since taken over, and it's accelerating at an extraordinary clip. You can debate the merits of that, the details of it, but it's happening either way.
In the next two or three decades we'll have begun to completely take over genetic alteration / evolution / improvement / etc, of our species. Within a few decades after that, we'll be severely altering what we are. Within 150 or 200 years, it'll be very hard for modern humans to relate to the ancestor humans from 2012.
Concerns about asteroids or global warming and so on are moot. We won't be here as a species pumping CO2 into the atmosphere or waiting around helplessly for a rock to crash into the surface. It's not an issue of if, it's just an issue of how long it takes and how many competing models of our self directed evolution become options.
The sole threat to that future is super virus / disease, most likely man-made. It's the only thing that could stop our evolution and wipe out our species in that couple hundred year time frame. Even nuclear war isn't a threat to species survival, you could detonate thousands of nukes simultaneously and it wouldn't come close to killing us off.
> The sole threat to that future is super virus / disease, most likely man-made.
I remember reading somewhere (maybe one of Cliff Stoll's books?) that the author, a computer person, thought that a biological virus was the greatest threat to humanity. He mentioned this to a biologist friend of his, who assured him that the popular doomsday scenario is essentially impossible, and that his greatest fear was a computer virus!
Tim Ferriss, on the Joe Rogan podcast, predicts a pandemic in the near future (around 28:45). Basically, he knows a guy who runs a biotech company who claims that they, if they wanted to, could engineer a virus within six months that could wipe out humanity.
I wouldn't say that it's the sole threat, however. Yudkowsky seems to take the threat of runaway AI quite seriously. These are crucial times.
I think you vastly overestimate our ability to understand genetics. The genome (any species' genome) evolved so haphazardly and with such deep and interconnected causal spiderwebs I think it's going to take centuries to understand it to the extent that we'll be able to intentionally influence it more than genetic drift and selection currently do.
NINJA EDIT: Regardless, the certainty with which you seem to regard your claims is hardly justified (unless you've got some concrete evidence you can share). And if you're wrong, we'll wish we had been thinking about these things all along.
People tend to overestimate the change that can happen in 50 years and underestimate the change that can happen in 10 years. The overestimation is an interesting one: it is an overestimation of how consistent future trends will be with current ones. It is impossible to know in detail what the future of engineering and science will entail. It is possible to know the general outline of the next 10 years, but it is impossible to know which of those changes will start an exponential cascade of changes and which will be left to grow polynomially.
It estimates that a nuclear war in 1988 would have dropped world population from ~5 billion to ~3 billion (this includes estimates of death due to famine etc).
You can argue with each individual estimate, but it's hard to argue that a nuclear war would actually kill off the human race.
(As noted elsewhere, you can argue that it would be possible to kill off the human race if someone with control of a super-power nuclear arsenal tried to do it. That's a different argument, because you'd have to discuss the likelihood of that circumstance occurring)
Thousands of modern devices would be different in many ways: bombs would likely be more powerful on average than the '45 ones, but they could also be cleaner, and they would likely not all be used in the same place. I would guess the species would survive it, if only because the ones starting it would likely be in a good shelter.
Saying that they'd likely be more powerful on average than the '45 ones is a huge understatement. Fat Man was 20 kt; every Trident II missile can carry 12 475-kt warheads. Also, distributing nuclear explosions across the world is much different from having one meganuke, in terms of effect. Meganukes are very inefficient with their power, because the destructive radius scales with only the cube root of the yield. I think the original commenter is way off in their claim that we couldn't nuke ourselves to extinction.
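The cube-root scaling can be illustrated with a quick calculation (a sketch under the common rule of thumb that blast radius goes with yield^(1/3), so damaged area goes with yield^(2/3); real damage models are messier).

```python
def destroyed_area(yield_kt):
    """Blast-damage area under cube-root scaling: radius ~ yield^(1/3),
    so area ~ yield^(2/3). Units are arbitrary; only the ratio matters."""
    return yield_kt ** (2 / 3)

one_big = destroyed_area(12 * 475)       # one hypothetical 5.7 Mt device
twelve_small = 12 * destroyed_area(475)  # twelve separate 475 kt warheads

print(f"Twelve dispersed warheads cover {twelve_small / one_big:.2f}x the area")
```

Splitting a given yield into n dispersed warheads multiplies the covered area by n^(1/3), which is exactly why MIRVed missiles replaced single meganukes.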
It also created a shock wave, measured at over 5 on the Richter scale despite being detonated 4 km above the Earth, that travelled at least three times around the globe. What would it have done if it had been detonated at ground level? What would happen if you detonated 10 of those at the same time?
I think it would take worse than that to kill the species, but our civilization is more fragile.
Any claims about our "self-directed evolution" seem to either define it so loosely as to include purely social conventions or uses of technology (we keep our information in the twittersphere!), or to assume some magic breakthrough in bioengineering that overnight allows us to start jailbreaking our genome.
The sad, sorry fact of the matter is that in the time it takes to accomplish that, we could blow ourselves up.
Or, we could blow enough of ourselves up to render that future unachievable. The nations that are capable of carrying out this research, and supporting the societies that can support this sort of thing logistically and culturally, are very precarious in their positioning. As a thought exercise, consider how far food on your table had to travel, how many things had to work correctly for that to happen, and how fucked you are if they don't (and how tricky it is to find a replacement, and how many other people would be doing the same).
It would not be unreasonable to believe that a collapse of civilization could set back your scheme several hundred years--for example, consider how difficult it would be to get back to a point where you could use existing machines, much less make new ones. A semiconductor foundry--required for computers, in turn required for any meaningful engineering these days--would be practically unattainable in dire times, even if you find people alive who still knew how to run it!
Or, even more likely, nothing goes wrong, and simple social forces produce stagnation even with advanced bioengineering. Read "The Calorie Man" by Paolo Bacigalupi for a recent take on this. There is no reason to expect that we're going to fare any better.
So, no, this isn't an unreasonable pessimism on the rest of our parts.
We could totally survive any climate change that we've ever seen in the geological record. Even if famine claims billions of lives, it wouldn't be a species-ending event. Bostrom is worried about things that have considerably more destructive potential.