It's really an interesting moral discussion - it's like an extension of the classical "sacrifice one person to save five" dilemma, but I really can't agree with his flippant assertion that our moral obligation to the unborn future billions eclipses our obligation to help our contemporaries. And he's not even talking about preserving the planet for future generations; he's merely concerned with them being born in the first place.
Also, his argument has the same shortcoming as the "sacrifice" dilemma: we cannot know for sure that a certain action will have a certain outcome - or that any action taken was actually the cause of the outcome.
Think of Petrov: http://lesswrong.com/lw/8f0/existential_risk/
We effectively credit him with saving the world. If it weren't for him, most of us might not be around. I consider him a hero. From a moral perspective I would rather be him than Bill Gates, who has greatly reduced suffering and disease in the modern world. Just because of the impact on the future.
I don't see how this is a shortcoming.
If I am leading a company, I cannot know for sure that a certain action will have a certain outcome – or that any action taken will actually be the cause of a given outcome. That's not going to prevent me from doing my best to build a good product and make a profit.
In the same way, the uncertainty inherent in reducing existential risk shouldn't prevent us from doing our best to reduce it.
For example, consider how many people died mining coal in the past couple of centuries, or how many natives died when colonizing forces spread diseases to them or outright committed genocide.
Sure, it's awful, but would the world really be a better place if America were limited to some traders on the East coast? Or if England had never really gotten the Industrial Revolution thing figured out?
We can't really reason in human terms when talking about the species writ large.
So, as usual, the whole debate will be solved with brute force and whoever wins will get the right to speak like you.
Are there really people who think the Native American genocide was somehow worth it? That's despicable. Tell me, because I'm not sure: do you classify the genocide as a "transient effect" or as "meaningful policy"?
> [...] would the world really be a better place if America were limited to some traders on the East coast?
Iraq war, anyone? That's how the USA makes the world a better place.
(1) is problematic because: a) it's not true at the scales we're talking about; and b) it's intrinsically tied to how someone existing in the present values future benefits. It takes a very narrow view of "utility" to argue that only the value to those in the present is relevant.
(2) is problematic because the assumption isn't necessarily true. Imagine a world where there are no investments that yield a consistent return such that an investment at time T0 yields exponentially increasing wealth at time T0+T. In such a world the second principle provides no reason to engage in temporal discounting. At the scales we're talking about, this second principle starts to break down. You can't assume that an investment in the present will continue to yield returns indefinitely into the future if your actions result in there being no future humans.
I think a more apropos basis for a guiding principle in this area is the observation that, in absolute terms, the productivity of society grows exponentially. A single person 1,000 years from now will produce much more, in absolute terms, than a single person today. If we define our metric more objectively, something like the sum of all production over the existence of humanity, then the proper course of action is the one that preserves as many future lives as possible, even at the expense of present lives, because most production will happen in the future.
Still, 100 years is often considered a reasonable limit on such things, since discounting 5% per year over 100 years leaves 0.95^100 ≈ 0.6% of the present value.
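A quick sanity check of that arithmetic (reading "5% over 100 years" as losing 5% of the remaining value each year, which is what makes the 0.6% figure come out):

```python
def discount_factor(rate: float, years: int) -> float:
    """Fraction of present value remaining after discounting
    `rate` per year for `years` years."""
    return (1 - rate) ** years

# 5% per year over a century leaves about 0.6% of the value.
print(f"{discount_factor(0.05, 100):.2%}")  # → 0.59%
```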
23% of all goods and services made since 1AD were made between 2001 and 2010: http://www.economist.com/node/21522912
If a similar pattern holds true, then the opportunity cost, in absolute terms, of a decision that results in fewer future humans is absolutely staggering.
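For what it's worth, you can back out the growth rate that figure implies. Under the (crude, and historically wrong, since growth has accelerated) assumption of constant exponential growth since 1AD, the final decade's share of cumulative output is about 1 - e^(-10g):

```python
import math

# The Economist figure quoted above: 23% of all goods and services
# since 1AD were produced in a single recent decade.
last_decade_share = 0.23

# With constant annual growth rate g, cumulative output to time T scales
# as e^(g*T), so the final decade's share is ~ 1 - e^(-10*g) for large T.
g = -math.log(1 - last_decade_share) / 10
print(f"implied annual growth: {g:.1%}")  # → implied annual growth: 2.6%
```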
It's odd to give that much moral weight to the potential future existence of unborn people centuries from now; how does that not favor simply having as many babies as possible?
But on a purely biological note, natural selection favors just that: those who make more babies than anyone else, who then go on to make more babies than any of their contemporaries, etc etc etc.
I have come to the conclusion that there simply are too many of us. We can probably sustain our current levels for a century, maybe two, but at some point scarce resources (and their subsequent cost) will have a devastating effect.
Basically, we need to correct our population before nature does.
As much as people point to space being our future, I simply (sadly) do not agree. While there might be plentiful resources in the asteroid belt (and on other bodies), nothing compares to how cheaply we can pull things out of the ground here on Earth. Our society is predicated on cheap, plentiful resources, such that it can't survive them becoming several (or even one?) orders of magnitude more expensive.
As far as interstellar space goes, even if we solve the reaction mass problem and have perfect (100% efficient) conversion of matter to energy, it will still be prohibitively expensive to go to even the nearest stars.
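To put a rough number on "prohibitive": even with perfect matter-to-energy conversion, the relativistic kinetic energy alone is enormous. The ship mass and cruise speed below are arbitrary assumptions for illustration:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def kinetic_energy(mass_kg: float, beta: float) -> float:
    """Relativistic kinetic energy E = (gamma - 1) * m * c^2,
    where beta is velocity as a fraction of c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

# A hypothetical 1,000-tonne ship cruising at 0.1c (one way, no
# deceleration): on the order of 10^20 J, comparable to total
# world annual energy consumption.
energy = kinetic_energy(1.0e6, 0.1)
print(f"{energy:.2e} J")
```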
Perhaps the simplest explanation of the Fermi Paradox is that potential expansion for a starfaring civilization is polynomial (the volume of a sphere, ultimately limited by the speed of light) while population growth rates are exponential. And exponential will ultimately "win".
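The crossover is easy to see numerically. Reachable volume grows as t^3, while population at even a modest exponential rate eventually overtakes any polynomial (the 2% rate and the units here are arbitrary illustrations):

```python
import math

def reachable_volume(t: int) -> float:
    # Volume of a sphere expanding at light speed grows as t^3
    # (constants dropped; only the shape matters here).
    return float(t) ** 3

def population(t: int, rate: float = 0.02) -> float:
    # Exponential growth at an assumed 2% per unit time.
    return math.exp(rate * t)

# Find the first time the exponential permanently overtakes the cubic.
t = 2  # start where the cubic is ahead
while population(t) <= reachable_volume(t):
    t += 1
print(f"exponential overtakes cubic at t = {t}")
```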
That's not to say we shouldn't fear a process that is inherently exponential in nature, like the reproduction of bacteria or a nuclear reaction going supercritical. But if a process only appears from the outside to have been growing exponentially, that's only a weak indicator that it will go on behaving in an insane fashion.
In any case, it seems that as nations become more developed people stop having kids:
Did you watch the video? This is precisely what it's talking about. Though the economy and energy usage have grown exponentially for the past two centuries, we cannot hope that this will go on forever. Population growth isn't a concern per se; resource overconsumption and the impossibility of indefinite economic growth are.
There's not a clear causality either.
From the full paper:
>"*More specifically, our findings reduce, if not yet completely reject, fears of population decline that have been incorporated in many national population forecasts for highly advanced countries. Nevertheless, we expect countries lagging behind in terms of development to continue their fertility decline, consistently with current scientific knowledge. Moreover, some countries at intermediate development levels are likely to face a decline in population size because these countries do not yet—and may not in the foreseeable future—benefit from the reversal of the development–fertility relationship that has occurred at advanced HDI levels*"
True as that may be, people are not acting the way that would lead you to expect.
World population appears to be on a logistic curve, and birth rates for a lot of the world are close to or below replacement.
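For illustration, a logistic curve is just exponential growth with a ceiling. The carrying capacity, starting population, and rate below are made-up round numbers, not fitted to real demographic data:

```python
import math

def logistic(t: float, K: float = 11e9, P0: float = 1e9, r: float = 0.03) -> float:
    """Logistic growth: P(t) = K / (1 + (K/P0 - 1) * exp(-r*t)).
    K = assumed carrying capacity, P0 = starting population, r = growth rate."""
    return K / (1 + (K / P0 - 1) * math.exp(-r * t))

# Early on the curve looks exponential; later it flattens toward K.
for year in (0, 50, 100, 200, 400):
    print(f"t={year:3d}: {logistic(year):.2e}")
```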
The selection pressure is real, but it is not the only force in play. Economic pressure is acting too, and it is against raising children; they are expensive to raise to adulthood, and you cannot derive any material benefit from them. It seems to me that, in this case, the economics have much greater force than the genetics. Whatever disposition toward childbearing you inherit from your parents, whether genetic or ideological, is a small thing compared to whether circumstances make providing for them practical.
In general, when dealing with a complex system, it is not enough to know that a single force is acting on it. You need to understand all the forces, and how strong they are and how they behave. All too often, this is the case with selection pressure arguments -- supposing that the force in question is the only one in play, without any regard for whether it has the power to do the job expected of it.
I find that the best course of action is usually to look at the actual data, and see how the system really is behaving, apart from theory. Nontrivial systems very rarely meet naive expectations.
Medicine has given us control over our fertility, and the way modern societies are structured children are a financial liability.
More humans doesn't mean fewer resources; it means more resources. The Earth isn't "running out" of anything; the idea of a carrying capacity for a technological species is ridiculous.
Human population isn't growing exponentially, only pseudo-exponentially.
With the population leveling off, I think the amount of physical material needed at any one time will stop growing exponentially. Consumption rates will still be increasing, but we can recycle raw materials to keep the absolute amount needed within physical limits.
Resource consumption is growing exponentially
Hmm, really? My computer is thinner, so is my TV (and probably my future car), I barely purchase or use paper any more, and I consume more digital products than physical ones. Yet even if I purchase double what I have the next year, I can't keep up that pace for long.
Soil washed down to the ocean is lost to agriculture, if not lost to the planet itself, for instance. I'm quite sure that the planet will be OK whatever we do; the human race, however, may disappear quite easily.
> Hmm, really? My computer is thinner, so is my TV
Yes, and building a new computer uses infinitely more than not building it in the first place. Your old computer isn't magically transformed back into a new one, or back into ore.
Every transformation needs energy. As long as energy is so cheap it can be considered free, and so widely available it can be considered infinite, everything's fine.
Just to remind you: a "service economy" relies on the general availability of masses of almost-free energy and resources. However, energy may soon become more expensive and scarcer.
We've been at peak oil for the past few years, and production will begin to drop any time now, probably within 3 to 5 years. Unfortunately, there is no credible alternative to cheap fossil fuels for most of their uses (transport, plastics, chemicals, fertilizers...).
So far, the way the world economy is organized cannot survive a massive change in energy availability, particularly with the current emphasis on economic growth. By definition, economic growth must stop at some point, for lack of resources, then decline. In the near future, economic recession will be the normal state of affairs. How will we manage it without sinking into chaos? I really have no idea.
I read a paper within the last few months that used simulated expansion into a galaxy to debunk that idea. The fallacy is that growth is continuous and ceases simultaneously across a civilization. The (simulated) reality is that only a tiny percentage of frontier colonies need to survive to prevent extinction and eventually resume growth.
I wish I could find the paper. Anyone know the one I'm talking about?
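I don't know the paper either, but the core of the idea is the classic branching-process result: if each colony founds on average m > 1 surviving daughter colonies, the whole lineage's extinction probability is the smallest root of q = e^{m(q-1)} (assuming Poisson-distributed offspring), which is strictly less than 1. A quick fixed-point sketch with an illustrative m = 2:

```python
import math

def extinction_probability(m: float, iters: int = 500) -> float:
    """Extinction probability of a Poisson branching process with
    mean offspring m: iterating q <- exp(m*(q-1)) from 0 converges
    to the smallest fixed point."""
    q = 0.0
    for _ in range(iters):
        q = math.exp(m * (q - 1.0))
    return q

# With 2 surviving daughter colonies on average, ~80% of lineages
# survive forever; only ~20% die out entirely.
print(f"{extinction_probability(2.0):.3f}")  # → 0.203
```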
no shit! breaking news everybody!
I hate to break it to him but there are 10 million science-fiction books out there dealing with every possible way humans could wipe themselves off the earth. (and maybe just as much real science)
Maybe people don't read as much speculative fiction as they used to (or books in general, for that matter, but I digress...) but it seems like lately I've run into more than a few people acting surprised about concepts that, frankly, have been explored to death in books as recently as half a century ago.
1) Almost all civilizations at our stage of development go extinct before reaching technological maturity.
2) Almost all technologically mature civilizations lose interest in creating ancestor simulations: computer simulations detailed enough that the simulated minds within them would be conscious.
3) We're almost certainly living in a computer simulation.
It seems like there's a fourth possibility here, which is that a full simulation of the universe is not possible. Anyone care to comment on the computability of something like that?
Even if our universe is in some sense based on real numbers (in the math sense), there's no apparent reason to believe that arbitrarily accurate simulations couldn't be run of it. (Or, alternatively, our host universe may also have real numbers and be able to use them for simulation purposes.)
Still, modeling every living being as a sort of cellular automaton and running the simulation would take a lot of quantum computers. What would be the computational complexity of modelling the consciousness of a human being?
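Back-of-envelope only, and every number below is a rough order-of-magnitude assumption (neuron and synapse counts from common textbook estimates, the firing rate is a guess):

```python
# Order-of-magnitude guess at the raw event rate of one human brain.
neurons = 1e11             # ~10^11 neurons (common textbook estimate)
synapses_per_neuron = 1e4  # ~10^4 synapses per neuron (rough)
events_per_second = 1e2    # assume up to ~100 Hz per synapse (generous)

ops_per_second = neurons * synapses_per_neuron * events_per_second
print(f"~{ops_per_second:.0e} synaptic events per second")  # ~1e+17
```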
You only need to model one though.
I think that this would reduce the computational intensity by a factor of 2.
The other three assume a crowded universe.
I still can't believe that "trying to save the human race from destruction" is being allocated such a tiny fraction of the world GDP.
In the meantime, natural infectious diseases kill several million people every year, and far worse is possible. In 1918-1919, a flu epidemic killed two or three times as many people in a single year as the First World War had killed in four years. In 2002-2003, a SARS outbreak killed a thousand people before it was stopped by quarantine measures. We got off _very_ lightly; SARS is at least as infectious as the 1918 flu, is no more curable, and has nominally roughly twice the lethality rate. The real lethality difference is higher, for the 2002-2003 outbreak was small enough that most of the victims could receive oxygen treatment, which significantly improves survival; in a pandemic, of course, there would not be enough such treatment to go around. And there have been historical plagues deadlier by far than SARS. Terrorists have political goals relative to which the weapon that killed their own communities and families would be counterproductive. The forces of mutation and natural selection working on disease organisms have no such reason for restraint.
Now what exactly do the bioterrorism criers think is going to happen if they start being taken seriously enough to influence policy? It's only too obvious what's going to happen, because it's the same thing that always does: more regulation on biotech research, more red tape, more of the sort of field day for the paranoid and bureaucracy gone mad that we already see in the security theater at airports. It's hard enough to fly to Disneyland in those conditions, let alone do cutting-edge research.
And so when - not if - something deadlier and more insidious than SARS does come along, whatever chance we might have had of being prepared for it may end up being thrown away.
It's not difficult to understand why our instinctive assessment of threats is so wildly irrational. In the ancestral environment there was plenty of disease, to be sure, but almost no medicine, so little selection pressure to be sensitive to that threat. Violence was the main cause of death _that you could do something about_. The human brain is wired to assume our fellow man is the primary threat, regardless of the facts.
But we can sometimes override even hardwired assumptions, once we understand that we live in a world in which they are no longer true. We had better learn to override this one, and quickly.
>Now what exactly do the bioterrorism criers think is going to happen if they start being taken seriously enough to influence policy? It's only too obvious what's going to happen, because it's the same thing that always does: more regulation on biotech research, more red tape, more of the sort of field day for the paranoid and bureaucracy gone mad that we already see in the security theater at airports. It's hard enough to fly to Disneyland in those conditions, let alone do cutting-edge research.
Are you in favor of government regulation for people researching new designs for nuclear bombs?
It's all about cost benefit analysis. The TSA is a waste of time not only because hijacking is rare, but because the average hijacking kills fewer than 1000 people. If there was a strong theoretical argument for how a hijacked plane could permanently end the human race, it would not be a waste of time.
I'm in favor of unregulated biotech research if the expected benefit from additional disease cures exceeds the expected risk from potential engineered viruses. Frankly, I'm more concerned about the engineered viruses because it seems like an engineered virus has a better chance of killing off everyone (as opposed to not everyone, which is an extremely important distinction in my view; I'm playing for team humanity) just because engineered things tend to work better than things that assemble by chance.
The cases are not particularly similar. It doesn't matter that we have crippled nuclear weapons R&D with regulation. It will matter a great deal if we similarly cripple biotech R&D.
> it seems like an engineered virus has a better chance of killing off everyone (as opposed to not everyone, which is an extremely important distinction in my view; I'm playing for team humanity)
I am not at all as optimistic as you are about that. It is the way of extinction that what kills the last individual may have nothing to do with the underlying factors that doomed the species. The last passenger pigeon died of old age. In my view, far more likely than a single super-disaster that kills everyone at the same time is a sequence of events where one factor delays progress enough to make us vulnerable to another that kills enough people or causes enough disruption to crash civilization, at which point we can't reboot because all the easily accessible fossil fuel deposits are long gone and there's no way to jump directly from wood-burning stoves to solar panels, leaving the survivors to struggle along until ordinary geological processes finish us off. That's the kind of scenario I'm concerned about.
I'm not sure I've seen any statement from the Future of Humanity Institute on this issue, except for this blog post by research associate Robin Hanson which would appear to support your point of view:
The main question is: do we have 8 months left or 8 years, 80,000 or 800,000.
N = Population of the world
P1 = Percentage of people with knowledge of and access to dangerous technologies
P2 = Percentage of personalities who are destructive
P3 = Percentage of people who actually act
MaximumDamage = The number of casualties that can be caused by a single person
While P1, P2, and P3 are relatively stable, and N grows almost linearly, MaximumDamage is growing exponentially. You can imagine how the weapons accessible to ordinary people evolved from sticks, to axes, to guns and explosives. At present, the main reason nuclear weapons haven't been exploded by terrorists is not technology but scarce materials. For bio-warfare, though, the cost of the technology will shrink exponentially while its impact grows exponentially.
This means that if this goes on, PotentialDamage = N × P1 × P2 × P3 × MaximumDamage will one day be larger than our population, and extinction will occur. By estimating these parameters we could even predict a date.
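The argument can be sketched as a toy calculation. Every parameter value below is an invented placeholder; only the structure (roughly stable fractions, an exponentially growing per-person damage capacity) comes from the argument itself:

```python
# Toy version of the model above. All parameter values are invented
# placeholders, not estimates.
N = 8e9        # world population, treated as constant at this scale
P1 = 1e-4      # fraction with access to dangerous technology
P2 = 1e-3      # fraction with destructive intent
P3 = 1e-1      # fraction who actually act

max_damage = 1e3  # casualties a single actor can cause today
growth = 1.10     # assumed 10%/year growth in max_damage

# Advance time until PotentialDamage = N*P1*P2*P3*max_damage exceeds N.
year = 0
while N * P1 * P2 * P3 * max_damage < N:
    max_damage *= growth
    year += 1
print(f"PotentialDamage exceeds population after ~{year} years")
```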
To put it another way: I have fire insurance, but I've never had a fire. I have a gun (or four) but I've never had to shoot anyone.
Prudent folk recognize that life is risk and prepare accordingly.
It works both ways; technology could find simpler ways to build superweapons, but it could also lower the entry barrier to complex, resource-intensive projects. Look at the consequences of even mild singularity predictions for robotics and AI -- things that lower industrial "costs" by orders of magnitude, things like 3D-printers, self-replicating machines, cheap and ubiquitous fab robots. Anyone could build incredibly sophisticated machines in their own homes -- including sports cars, jet engines, and giant TV screens, but equally, compact laser enrichment cascades [b] and nuclear weapons.
Perhaps it's lack of knowledge on my part, but I don't see what will stop atomic bombs from being as common as handguns in 30-100 years. Fissile material is ubiquitous [c]; there's no barrier beyond economics and engineering, of exactly the sort near-term AI could unpredictably disrupt.
[a] (Tangential silliness: terrestrial uranium WAS weapons-grade material a few billion years ago -- U-235 decays with a 0.7 billion year half life, compared to 4.5 billion years for U-238, so in geologic history the fissile fraction used to be extremely high. Another one for the "anthropic principle" bin: earth's intelligence must have evolved now, and not earlier, because if it had it would have trivially nuked itself...)
[b] Check out [NYT][APS] -- it's actually a mainstream position among arms control experts to consider shutting down US research in laser enrichment, to prevent the knowledge from being developed at all. To me this plan sounds about as airtight as proposing to shut down the NSA to keep mathematical secrets like RSA from being discovered.
[c] Here's a blogger who goes hiking and brings back uranium ore by the bucket [Willis], for his own hackery experimenting. Uranium ore occurs everywhere [Cameco]; even common granite is a low-grade ore containing 5-50 ppm (parts per million) U [AZGS]; and research suggests it's even feasible to extract it by bulk from seawater [NBF].
You guess right--it's your lack of knowledge.
Raw fissile material is not useful for making nuclear weapons (of any more than the dirty-bomb variety) without isotopic enrichment. So far, we seem to have converged on uranium and plutonium, and while uranium at least is relatively common in the Earth, the processing and dredging required to get a useful amount of it, and then the refining to separate out the useful isotopes, are nontrivial.
Engineering and fast computers are great and all, and will get you arbitrarily close to the physical limits--but we're there right now, and physics says you aren't getting a centrifuge with meaningful output in your garage.
More troubling is the idea that frankly, we've had the technology to build sports cars in our homes for decades and decades now: it's called buying a lathe, an arc welder, a mill, a forge. Despite the cheap availability of these things to the general populace, the hobby is not widespread -- there are damned few practitioners even with this magical Internet thing telling you how to do all of it -- and it will get less likely as people focus on cat pictures and tweeting.
We aren't worthy of a singularity as a culture.
Thank you for attaching some interesting facts. Allow me to do the same:
The useful fissile uranium isotope (U-235) accounts for about 0.72% of naturally occurring uranium (http://web.ead.anl.gov/uranium/guide/facts/).
Your hiking source claims uranium metal, but points out that it is completely locked up in slag and that only at large-scales does the approach seem tractable.
Even allowing for that, the amount produced is negligible.
That's just for metal--we aren't even talking about a usable isotope yet.
Take 7/1000 of that result from the blogger, and wave a wand to make it pure enough to use.
Now collect the (at least) several pounds of that needed to make a functioning device. Then do the machining on the rest of the device to make it function correctly (have fun with the beryllium dust, if you go that route). Then do the timing electronics, and the charge shaping (if you go that route).
I did not express myself well; this is exactly my point -- the barrier to (U-235 based) nuclear weapons is isotopic enrichment -- very difficult engineering -- and not so much access to natural uranium as a raw material. So, should difficult engineering become unexpectedly easy (as in singularity-type scenarios), there would be no major barrier to weapons, since access to uranium cannot possibly be blocked.
"Thank you for attaching some interesting facts. Allow me to do the same:"
Apologies for editing and expanding my comment after you read it -- it's a bad habit. ("Release early and release often"?)
"Engineering and fast computers are great and all, and will get you arbitrarily close to the physical limits--but we're there right now, and physics says you aren't getting a centrifuge with meaningful output in your garage."
Could you elaborate on this? I know the underground Fordow complex is about 6,000 m^2 [Forden] -- only about 1-2 orders of magnitude off, and it's built for human workers to access (not robots). And there's at least a factor-of-4 miniaturization in current laser technology (refs [APS][NYT]). I can't describe a design for a garage-sized enrichment plant, but I don't know that it's strictly precluded by physics or engineering constraints.
"1.6 to 16 times more efficient than first-generation gas centrifuges"
Even taking the SILEX claims at face value, that's compared with first-generation gas centrifuges--presumably those from the mid-20th century.
That's still not going to produce output in a form factor usable in a garage. Moreover, the argument I'd posited earlier doesn't hinge on the refinement--the sheer scale of processing needed to take in that much raw material is what gets in the way. Then you also have to dispose of the waste tailings.
As for the refining... look at a pic from an '84 processing plant (http://en.wikipedia.org/wiki/File:Gas_centrifuge_cascade.jpg). Each of those cylinders is ~40' tall, according to the parent article's caption.
Better info is available from globalsecurity (http://www.globalsecurity.org/wmd/intro/u-centrifuge.htm):
"A single centrifuge might produce about 30 grams of HEU per year, about the equivalent of five Separative Work Units (SWU). As a general rule of thumb, a cascade of 850 to 1,000 centrifuges, each 1.5 meters long, operating continuously at 400 m/sec, would be able to produce about 20-25 kilograms of HEU in a year, enough for one weapon."
Even going with the factor of 20 speedup (overestimation from your provided article with the SILEX quote), we would expect to need 50 centrifuges, on the order of 1.5m long each--that's quite a lot. There's also the supporting equipment, power conditioners and piping and so forth.
In fact, powering the entire apparatus is also a big concern. The article I linked suggests a power draw on the order of several hundred thousand kilowatt-hours per year (compare with around six thousand per year for a normal home). So, again, I don't see the garage fab making sense.
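Putting the numbers quoted above together (taking the globalsecurity figures and the generously rounded-up 20x speedup at face value; the power figures are the rough ones already cited):

```python
# Figures quoted above, taken at face value.
first_gen_cascade = 1000        # centrifuges for ~20-25 kg HEU/year
speedup = 20                    # generous rounding of the "16x" SILEX claim
cascade_kwh_per_year = 300_000  # rough order-of-magnitude power draw
home_kwh_per_year = 6_000       # typical home, per the figure above

print(f"centrifuge-equivalents needed: {first_gen_cascade // speedup}")
print(f"power draw: {cascade_kwh_per_year // home_kwh_per_year} households")
```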
Oh, and during all that time?
You bet your ass the government is datamining your browsing history, purchase orders, and hobbies. I've ignored it so far in the discussion, but if you want to play the magical singularity wand I'll play the fascist police state card.
Now I'm curious. How much would it cost to set up a reasonable home garage with all this machinery?
Cheap lathe (Chinese import 12x36 or old American) about $1500
Cheap mill big enough for automotive work (Chinese import or used American) about $1200
Cheap cutoff saw for most purposes: $100
Don't need a forge, just head over to www.onlinemetals.com to get any metal you're likely to need.
I agree with the poster: the tools and the knowledge of how to do this stuff are readily available (I downloaded a PDF of plans for a .22-caliber single-shot pistol I will build someday), but very few people actually bother. Just like we have tons of cheap computers, but only a vanishingly small percentage of people bother to learn to program them...
The challenge of building a sports car is generally building your own engine (which people rarely do unless it's steam-based). You need to get the engine block cast, and that requires a foundry. Building a furnace in your garage is quite difficult if it needs the capacity to cast even a fairly small engine block. Once you have the castings, however, the various other bits of equipment fit in your garage easily and cost between $8K and $150K depending on newness, accuracy, control methodology, etc.
I'm also assuming that you would use fiberglass layup for the body panels since a sheet metal press that can make a 'hood' sized piece, or fender sized piece, is also quite large (and tall).
Unless you just insist on starting with a home-made block, you can acquire "naked" engine blocks fairly easily. Or at least you could a few years ago, I've been away from that scene for a while, so my knowledge isn't completely up to date. And even if you can't get a naked block you can certainly get partially built engines (usually the block, crankshaft, pistons and connecting rods) or "crate engines" (usually everything except intake manifold and carburetor / fuel injector) straight from the manufacturers. For example, see:
'course, starting with a pre-forged block and partially built engine isn't quite like doing it from scratch, but then again, I'm assuming anybody building their own car is buying pre-rolled tubing and sheet stock, etc., not literally building everything up from iron ore, aluminum ore, etc.
Cool. My dad builds and races late model stock cars and has been involved in racing as long as I can remember... so yeah, I can relate to exactly what you mean there. The race cars have almost nothing left of the original car except the exterior sheet metal.
http://en.wikipedia.org/wiki/Locost will get you started.
It all comes down to the ability to predict the future, and we are really bad at it.
It may be what will kill our civilization after all, if not mankind itself.
200 to 300 years out there is no humanity as we know it today. Whatever we are at that juncture, it won't be "human" as we now define it. Our self directed evolution has long since taken over, and it's accelerating at an extraordinary clip. You can debate the merits of that, the details of it, but it's happening either way.
In the next two or three decades we'll have begun to completely take over genetic alteration / evolution / improvement / etc, of our species. Within a few decades after that, we'll be severely altering what we are. Within 150 or 200 years, it'll be very hard for modern humans to relate to the ancestor humans from 2012.
Concerns about asteroids or global warming and so on are moot. We won't be here as a species pumping CO2 into the atmosphere or waiting around helplessly for a rock to crash into the surface. It's not an issue of if, it's just an issue of how long it takes and how many competing models of our self directed evolution become options.
The sole threat to that future is super virus / disease, most likely man-made. It's the only thing that could stop our evolution and wipe out our species in that couple hundred year time frame. Even nuclear war isn't a threat to species survival, you could detonate thousands of nukes simultaneously and it wouldn't come close to killing us off.
Any claims about our "self-directed evolution" seem either to define it so loosely as to include purely social conventions or uses of technology (we keep our information in the twittersphere!), or to assume some magic breakthrough in bioengineering that overnight allows us to start jailbreaking our genome.
The sad, sorry fact of the matter is that in the time it takes to accomplish that, we could blow ourselves up.
Or, we could blow enough of ourselves up to render that future unachievable. The nations that are capable of carrying out this research, and supporting the societies that can support this sort of thing logistically and culturally, are very precarious in their positioning. As a thought exercise, consider how far food on your table had to travel, how many things had to work correctly for that to happen, and how fucked you are if they don't (and how tricky it is to find a replacement, and how many other people would be doing the same).
It would not be unreasonable to believe that a collapse of civilization could set back your scheme several hundred years--for example, consider how difficult it would be to get back to a point where you could use existing machines, much less make new ones. A semiconductor foundry--required for computers, in turn required for any meaningful engineering these days--would be practically unattainable in dire times, even if you could find people alive who still knew how to run it!
Or, even more likely, nothing goes wrong, and simple social forces render stagnation even with advanced bioengineering. Read "The Calorie Man" by Paolo Bacigalupi for a recent take on this. There is no reason to expect that we're going to fare any better.
So, no, this isn't an unreasonable pessimism on the rest of our parts.
I remember reading somewhere (maybe one of Cliff Stoll's books?) that the author, a computer person, thought that a biological virus was the greatest threat to humanity. He mentioned this to a biologist friend of his, who assured him that the popular doomsday scenario is essentially impossible, and that his greatest fear was a computer virus!
I wouldn't say that it's the sole threat, however. Yudkowsky seems to take the threat of runaway AI quite seriously. These are crucial times.
NINJA EDIT: Regardless, the certainty with which you regard your claims is hardly justified (unless you've got some concrete evidence you can share). And if you're wrong, we'll wish we had been thinking about these things all along.
If this process can be understood mathematically, computers at ever-increasing speeds can come to these understandings for us.
Look up histone acetylation and DNA methylation. We're barely scratching the surface of how gene expression works.
To say we're going to rebuild the human genome in the next 100 years is a little fanciful. Reminds me of the "trains to the moon" that Popular Mechanics said we would all be riding in the 1980s.
It may be possible if you ask "given this arsenal, how do I destroy humanity?" but that is not how wars work.
It estimates that a nuclear war in 1988 would have dropped world population from ~5 billion to ~3 billion (this includes estimates of death due to famine etc).
You can argue with each individual estimate, but it's hard to argue that a nuclear war would actually kill off the human race.
(As noted elsewhere, you can argue that it would be possible to kill off the human race if someone with control of a super-power nuclear arsenal tried to do it. That's a different argument, because you'd have to discuss the likelihood of that circumstance occurring)
Thousands of modern devices would behave differently in many ways: bombs would likely be more powerful on average than the '45 ones, but they could also be cleaner, and they would likely not all be used in the same place. I would guess the species would survive it, if only because the ones starting it would likely be in a good shelter.
I think it would take worse than that to kill the species, but our civilization is more fragile.
I am an optimist, so I keep thinking that 'we' would survive. I also think we will not be that stupid to do that 'experiment'.
Unless a rock the size of Texas hits tomorrow.
Extinction only a generation before it's possible to avoid it.
It would be ironic except there won't be anyone left to appreciate the irony.