I think Kurzweil is a total blowhard, along with the Mathematica guy. All these great 'predictions' don't take a lot of intellectual depth to make, to be fair .. all you had to do was read a bit of sci-fi in the '60s and '70s, understand that your fellow engineers were reading the same (mostly Vernor Vinge) stories, skim a few top-10 lists .. and voila: you can see what is going to happen.
Big Sci-Fi has been driving technology - and thus, our modern culture - for decades now. There's not a single advance in our modern lives that wasn't speculated first by some sci-fi author.
So I don't really think that 'this guy makes amazing predictions' is such a newsworthy article. All this is doing is promoting the Kurzweil brand, and by proxy, the Google brand.
What gets me more interested is the general reaction to cultists like Kurzweil and Wolfram, because these reactions are a far greater predictor of the future than the original subject. If the industry says "hmm.. Kurzweil is a genius, let's listen to what he has to say" - well then, it's a self-fulfilling prophecy.
tl;dr - postulate what will happen, then MAKE IT SO. This is the mantra of any and all technologists - not just those with the wherewithal to spend their idle time pimping themselves, as both Kurzweil and Wolfram do.
I think a lot of Kurzweil's predictions are self-serving. He wants to live forever so badly that he has:
1. Convinced himself it's possible
2. Convinced himself it's possible in his lifetime
3. Made arrangements to cryogenically freeze himself in the event he dies before it happens
It's just silly. We're nowhere near the ability to live forever and anyone who thinks we are has delusions of grandeur. And you can't freeze yourself to escape this reality. A planet with 7 billion people that live forever and are still actively procreating is likely to quickly have overpopulation issues. Are they really going to be looking to bring people back from the dead?
> A planet with 7 billion people that live forever and are still actively procreating is likely to quickly have overpopulation issues.
The weird part is that it might not make us slam into the inevitable planetary population limit much faster than we otherwise would. Somebody (one of the senescence nuts, maybe - sorry, I don't have a link) did a pretty convincing study indicating the average lifetime might not even double if we get rid of sickness and cellular aging.
You can see the logic: take into account all the things that kill people that aren't related to disease or aging, and model around those. Accidents, crime, and war are still going to exist.
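A minimal sketch of that logic, assuming deaths come only from a constant, age-independent hazard (the rates below are made up, purely for illustration, not taken from the study):

    # If the only causes of death are age-independent hazards (accidents,
    # violence, etc.) with a constant annual rate r, survival is exponential
    # and the mean lifespan is simply 1/r. Rates here are illustrative only.
    def expected_lifespan(annual_hazard: float) -> float:
        """Mean lifespan in years under a constant, age-independent hazard."""
        return 1.0 / annual_hazard

    for rate in (0.02, 0.01, 0.005):  # hypothetical residual death rates per year
        print(f"hazard {rate}/yr -> mean lifespan ~{expected_lifespan(rate):.0f} years")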
The mathematical model isn't perfect. It's interesting to speculate as to whether we as a race would become greater or lesser risk-takers after finding out aging wasn't a problem.
The development of smarter-than-human AI, or the exponential progress Kurzweil believes in, would change that estimate drastically. It's entirely plausible we could have immortality in our lifetimes if you consider those things (not to mention cryonics, which is available today). Accusing him of believing it only out of self-interest is a needless ad hominem.
I wouldn't worry about 7 billion people living forever. Cryogenic technology, nanobots and whatever else will probably remain a bit pricy for a while, so poor people need not apply.
What is most unfortunate about Kurzweil is that he doesn't speak about possible social or cultural improvements. Technology is his crutch and band-aid for what ails the world, and technology drives Google's profits.
Vilém Flusser's essay on the Object: 'Design: Obstacle for/to the Removal of Obstacles'
> An 'object' is what gets in the way, a problem thrown in your path like a projectile (coming as it does from the Latin Obiectum, Greek problema). The world is objective, substantial, problematic as long as it obstructs. An 'object of use' is an object which one uses and needs to get other objects out of the way. ...
> I overturn some of the obstacles (transform them into objects of use, into culture) in order to continue, and the objects thus overturned prove to be obstacles themselves.
The bigger the techno solution, the bigger the side effect. Technology offers fast benefits; the side effects take a long time to be noticed or to materialize.
I am interested in folks looking at social, philosophic and non-objective improvements and solutions to our big problems, particularly those of a more conservative and less progressive nature.
What might happen is what I recently heard called the "morality apocalypse" [1]: technology improving so fast that we don't use it properly, probably not unlike when the US used the atomic bomb on Japan the first time. The invention of the atomic bomb outpaced our morality by a long shot, and it took quite a few decades for our morality to catch up to the technology and for countries to understand that they must not use atomic bombs against each other.
I fear the same will happen when we start using autonomous killer robots [2] (whether drones or real Terminator-like robots), or when we learn how to make killer mosquito-sized drones that can assassinate anyone in the world anonymously, or when we learn how to use nanotechnology as a weapon. I think the NSA doing mass surveillance of the whole planet "just because they can" also fits in this category.
I welcome all such neutral technological progress, since I'm not a Luddite, but I also fear that governments hungry for more power, or protecting the power and influence they have, will use these technologies nefariously, in secret, probably long before we find out about them. And even if they become public, I fear most people will be too complacent about it and won't revolt much against it (the US still keeps Guantanamo open, for how many years now? That's not a matter of technology, but the same point applies).
Agree with your first paragraph. This is highlighted most by the whacky idea to "Switch off our fat cells".
It is perfectly possible to not get fat and to enjoy delicious, healthy food in a sustainable, affordable manner, but apparently that is already considered outside the realm of possibility.
(Mr. Kurzweil apparently lives quite healthily and probably fully understands all of that, but still advocates a technological solution. I remember fondly the old Civilization game, in which you could research mathematics and gunpowder, but also democracy and women's suffrage: cultural improvements valued equally with technological improvements).
I'm much more interested in how we'll deal with Gibson's version of the future...
"And, for an instant, she stared directly into those soft blue eyes and knew, with an instinctive mammalian certainty, that the exceedingly rich were no longer even remotely human." -Count Zero
...than with how we'll achieve flying cars, curing cancer, or immortality.
You can only judge these people if you have a big list of all the predictions that they make. Highlighting the ones that came true is of no use if they made 1000s of other predictions that turned out to be rubbish.
Also, beware of all the predictions that don't have good timelines. Predicting a stock market crash is easy; there's bound to be one at some point or other. Predicting when the next crash might be is much much harder.
>Predicting when the next crash might be is much much harder.
Predicting when is just a stunt. Predicting why is more important. And predicting why can end up giving you the when, if the event is contingent on a few things falling into place and those things can be observed falling into place.
Rubbish, imo. We can't agree on the why of past crashes, let alone make good predictions about future ones. How can such a prediction ever be proved right or wrong?
>>Highlighting the ones that came true is of no use if they made 1000s of other predictions that turned out to be rubbish.
Well, that's the whole point of 'predictions'. There are supposed to be a large number of them, and many of them will turn out to be false. Predictions are a way of extrapolating the future from current trends. Some are bound to be wrong, and some are bound to be true.
Anything else and we would be asking oracles gazing into crystal balls to narrate the future to us.
If you were correct, jetpacks and flying cars would be common. Kurzweil's main prediction is that human-level intelligence is essentially just a matter of having processing power similar to the human brain's available, and then guessing when that will happen based on current trends. It so happens that our current level of computing was achievable with the amount of effort put in, but I don't think it was possible, in the 1970s, to predict that the effort that could be devoted would produce the results we have. In fact, I think the conventional wisdom of the time was that it was not possible to get to this level of computing power so quickly.
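The arithmetic behind that kind of extrapolation is easy to sketch; the "brain-equivalent" figure and doubling time below are illustrative assumptions, not Kurzweil's actual numbers:

    import math

    # Years until a machine with `current_ops` operations/second reaches a rough
    # "brain-equivalent" target, assuming compute doubles every `doubling_years`.
    # Both the target and the doubling time are assumptions for illustration.
    def years_until(current_ops: float, target_ops: float = 1e16,
                    doubling_years: float = 2.0) -> float:
        return doubling_years * math.log2(target_ops / current_ops)

    print(f"{years_until(1e12):.1f} years")  # ~26.6 years from a 1 TFLOPS baseline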
Strong AI is not a matter of "processing power". These pop-compu-sci guys are selling the '50s AI project, which was already dead in the water at its inception (cf. Hubert Dreyfus' critique) and has since been recognised by the compu/cog-sci people as irrevocably flawed.
Today the thesis that Strong AI is a function of processing power is snake oil or a result of a serious lack of knowledge of this area. The latter is OK for a layman, not for this kind of "expert".
To be fair, Kurzweil has always said it will take about 10 years to develop the software for strong AI, after we have the hardware/processing power necessary.
True; however, his prediction of computers a million times larger and several times faster would be a massive improvement for current AI algorithms, not to mention whatever we will have by that time.
There were some pretty upbeat predictions for the future of computing in the 1970s - here is a quote from a contemporary review of The Mighty Micro from 1979:
"Some of his guesses seem based on other popular books: he thought 'ultra intelligent machines could prolong life to age 1000' - ie things which might measure blood or neoplasms, and even repair them."
Kurzweil is someone who sets crazy goals and actually delivers on them from time to time. Even if his predictions are 1% accurate, I'm inclined to see that as a sign that there's a deficit of Ray Kurzweils to implement them, as well as a surplus of people on the Internet shooting down cool ideas without contributing anything.
If someone wants to make predictions, it would be more helpful to predict where we should put our resources now so that they have the most impact in 50 years.
For example, consider that the Superconducting Super Collider could have been built 20 years ago, but the US canceled the project because it only wanted to fund one big science project and the Hubble won.
> I think Kurzweil is a total blowhard, along with the Mathematica guy. All these great 'predictions' don't take a lot of intellectual depth to make, to be fair .. all you had to do was read a bit of sci-fi in the '60s and '70s, understand that your fellow engineers were reading the same (mostly Vernor Vinge) stories, skim a few top-10 lists .. and voila: you can see what is going to happen.
> Big Sci-Fi has been driving technology - and thus, our modern culture - for decades now. There's not a single advance in our modern lives that wasn't speculated first by some sci-fi author.
Oh wow. If I judge by the stuff my generation grew up with, my children and grandchildren are going to see their parents trying to invent some really cool/weird stuff.
I mean, sure, string-theoretic worldgates for interstellar transport will never actually work, but apparently by 2040 we should have futurists predicting them for 2065, and by then God only knows. I mean, giant mecha are already in the works, so things'll be pretty cool then.
In 2011, Ray Kurzweil predicted that the singularity (enabled by super-intelligent AIs) would occur by 2045, 34 years after the prediction was made.
So the distance into the future at which we achieve strong AI, and hence the singularity, is, according to its most optimistic proponents, receding by more than one year per year. So I predict that when we get to 2045, strong AI will be on the slate to be achieved by about 2090.
If he means solar becomes economically competitive for general purpose household and industrial power generation in most places on Earth by 2033, ok, but that's just the starting line for an incredible amount of capex needed to replace fossil fuels.
I don't understand why 100% solar would be a good thing. Sure, it could hypothetically meet our daytime demands, but what about night time? There are only two options there: import power from another time zone where the sun is shining, or store the energy during the day.
The first is very lossy: HVDC is the most efficient way to transport bulk electricity, and it loses about 3.5% per 1000 km. You would need to transport electricity about halfway around the earth, roughly 20,000 km, meaning you would lose about 50% in transit. So this would depend on a drastically more efficient way of transporting power (superconducting power lines?).
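A rough check of that figure, assuming the 3.5% loss per 1000 km compounds over the distance:

    # 3.5% loss per 1000 km of HVDC line, compounded over ~20,000 km.
    loss_per_1000km = 0.035
    distance_km = 20_000
    delivered = (1 - loss_per_1000km) ** (distance_km / 1000)
    print(f"fraction delivered: {delivered:.0%}")  # ~49%, i.e. roughly half lost in transit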
The second is not possible with current battery technology.
And that still doesn't answer why this arrangement would be better than a nuclear (base load) + solar (daytime variable load) + natural gas (nighttime/cloudy conditions variable load) mix.
Storage on this scale is almost never done with batteries. Today it's done by pumping water up to a reservoir while we have surplus power and then letting it flow back down through generators when we don't. This is known as "pumped storage." Current best round-trip efficiencies are around 75% (you get out 75% of the energy you put in). The problem is that there are only so many places on earth where you can build these dual reservoirs at the scale required, because of geography. Good discussion on this: http://www.withouthotair.com/c26/page_189.shtml
I didn't include pumped storage because I figured the scale of the facilities required to store one night's power demands for the continental US (for example) would be infeasible.
I just did a quick calculation and found that (assuming perfect efficiency) storing enough energy to power the United States' electric grid for half a day would take roughly 2 km^3 of water raised 1 km, pumped up during the day and let back down again at night. That's hundreds of times the capacity of the largest pumped-storage plant operating today.
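Here's the back-of-the-envelope version; the demand figure is an assumption (roughly 4,000 TWh/year of US electricity use), as are the 1 km head and the perfect efficiency:

    # Pumped-storage sizing sketch: how much water, raised 1 km, stores half a
    # day of US electricity demand? All figures approximate and illustrative.
    US_ANNUAL_TWH = 4_000
    half_day_joules = US_ANNUAL_TWH / 730 * 3.6e15        # 1 TWh = 3.6e15 J
    rho, g, head_m = 1_000, 9.81, 1_000                   # kg/m^3, m/s^2, m
    volume_km3 = half_day_joules / (rho * g * head_m) / 1e9
    print(f"{half_day_joules:.2e} J needs ~{volume_km3:.1f} km^3 of water")  # ~2 km^3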
I guess that's theoretically possible but it would be a hell of a megaproject, and for what?
>>it would be a hell of a megaproject, and for what?
Imagine someone using an oil lamp in the pre-electricity era saying this. Would it have made any sense at all to replace a perfectly working oil lamp or candle with a light bulb, at the expense of a century's worth of copper mining, massive infrastructure, and dams, just to light one small bulb?
The widespread use of electricity or trains or airplanes or dams or any major paradigm changing invention in human history requires mega engineering projects for a very simple reason. You have to deliver at scale.
If we can go totally without fossil fuels, chasing such projects makes all the sense in the world.
> The widespread use of electricity or trains or airplanes or dams or any major paradigm changing invention in human history requires mega engineering projects for a very simple reason. You have to deliver at scale.
I disagree there. All the examples you stated developed gradually, starting small and expanding. The first railways appeared in Europe in the 14th century and didn't really take off until the early 19th century with the development of the steam locomotive.
Even air travel, which went from the first heavier-than-air flight to the first commercial flight in a remarkably short period of about 27 years, piggybacked off of earlier lighter-than-air travel that had existed for over a century. In the Napoleonic Wars, balloons were used for surveilling the battlefield, and airships appeared in the mid-19th century. People already knew that air travel was commercially viable and militarily useful. Heavier-than-air machines were a major improvement on the concept, not something entirely new.
This is a common theme in technology: a primitive form of the technology exists for a long period of time in niches. Then, after a period of time, there's a key invention -- the steam engine for rail travel, the dynamo for electricity, the airplane for air travel, the transistor for computers -- that causes the technology to explode in a short period of time.
That key invention is the last piece in the puzzle, not the first. The earlier developments prove the viability of the concept, build up the infrastructure to support it, and justify large investments in developing the key invention. But if you don't look at the history before the key invention, you'd think that computers and electric generators and railroads and aircraft appeared suddenly and had large amounts of capital suddenly invested in them.
And that brings us back to my original point: all those technologies had clear advantages over their predecessors. What advantage does an all-solar future have over a mix of sources? Simply getting rid of fossil fuels is not good enough.
Binary thinking... option 3: don't use electricity (at least not much) at night.
It may be that aluminum electrorefineries and data centers simply have to be installed next to hydro plants instead of solar plants. And CNC machine tool plants will only operate when the sun is up. So be it.
Hydro power is proven and economical, but it can only be roughly tripled worldwide, and only IF you displace tens of millions of people. In North America, only small increments can be added.
It's fine to cherry-pick the predictions that he got right, but how many was he wrong about? I confess I don't follow Kurzweil, and don't put much stock in "futurists" generally, because a lot of the things that are "predicted" correctly seem like pretty safe bets to me.
I'm not sure that Kurzweil has ever gotten anything right that the majority of the industry didn't also get right. Every prediction of his that wasn't common was wrong. I welcome any citations that prove me wrong, though.
"Thanks to the Human Genome Project, medicine is now information technology, and we’re learning how to reprogram this outdated software of our bodies exponentially."
What does this even mean?
"We’ll be able to send little devices, nanobots, into the brain and capillaries, and they’ll provide additional sensory signals, as if they were coming from your real senses."
Yes, the magic of -~=nanomachines=~-!
Some of it is not unreasonable, some of it is tame (VR), and some of it is hogwash.
Is this a common/standard way of disagreeing with how something is phrased, is it derogatory, or are you really not able to parse the meaning of that sentence?
Sincere question. I saw this same phrase used a while ago by someone else, and while the sentence it referred to was as poorly worded as the one you refer to, parsing the meaning is easy. Isn't it?
> 2023: Full-immersion virtual realities .... We’ll be able to send little devices, nanobots, into the brain and capillaries, and they’ll provide additional sensory signals, as if they were coming from your real senses.
Reading optimistic future predictions is a lot of fun. And it's also fun to read those from the past because they help to point out eternal human motivations as well as trends of the time. Here are a series of postcards created by some artists at the end of the 19th century that imagined the year 2000 in France: https://www.google.com/search?q=Jean-Marc+Côté&source=lnms&t...
For context, the industrial revolutions, both the first and second, were by this point old news, so the notion of creating a machine to accomplish a task was familiar. There isn't much in these postcards to delight cynics and pessimists, though, since World War I, when people turned these machines on each other, had not yet happened.
No need to wait until 2040 as per the article; people do this right now at Glacier National Park in Montana. (edited: I understand now why the article says 2040 and the Alps instead of GNP: the latest estimate is that GNP's last glacier will melt in the 2020-2030 timeframe, i.e. within the next decade, so by 2040 you'll be going to the Alps to see a glacier, not GNP...)
Some of the comments are amusing, mostly unintentionally. The general public is much less educated and intelligent than HN. If you think we're uncomfortable with self-driving cars and digital women's fashion, imagine how those clowns are going to feel.
I was watching The World in 2030, a speech Michio Kaku gave in 2009, and I was thinking "wow, Larry Page definitely saw this a few times", because some of Google's objectives since then seem to have been born from this talk.
After my mother died in early 2013, this "technology will solve death" fantasy lost all appeal to me. I finally accepted that there are losses in this world that can never be reversed. If there's something on the other side, then I look forward to seeing it. Joining the 120 billion people who have already died can't be the worst thing. If there's nothing on the other side, and death is the end of existence, then there's nothing to fear in it. I know this is an unpopular opinion here, but a total end seems unlikely. I don't believe in religion or gods in the anthropomorphic sense of "god" but whatever causes brought me to exist in a world can occur again.
As with religion, this idea of a perfect future (heaven?) in the Singularity can become a dangerous distraction from the real problem. It's not death that is a hard problem. It's life. Economic scarcity is the real enemy. That is what we should be fighting with all our force. We should fight first to make life better, and second to prolong it.
If we "solve death" and still have to live with scarcity and inequality, then we've just failed in a way that lasts much longer.
(On practical grounds, of course, I'm all for life-saving medical advances and research that reduces or eliminates costly illnesses. In fact, if rejuvenation were an option, I'd probably take it. I'm OK with death, but I hate getting sick. My point is that the real purpose of technology should be to kill scarcity first, and then aim for immortality second.)
>"It's not death that is a hard problem. It's life. Economic scarcity is the real enemy. That is what we should be fighting with all our force. We should fight first to make life better, and second to prolong it."
That's very well put.
Perhaps not coincidentally, I completely relate to the perspective change brought by such a loss.
These are not mutually exclusive goals. A singularity would fix scarcity as well.
Could, not would.
My point is that if it doesn't, it's not worth having. There are dystopian but plausible future scenarios in which (some or all) people live forever but scarcity still exists. My point is that those are extremely undesirable. Life under scarcity is only tolerable because it ends, and because the embittering, emotional irritation of economic scarcity is minuscule in comparison to the much greater natural scarcity of time imposed by nature (memento mori).
If we had eternal life, or (more relevantly) the billion-year lifespans possible were we to solve aging and accidents, would the sorts of people who hold power and (left unchecked) enslave the world, do so? Or would they, recognizing their (near-)infinitude of time, lighten up and hold back a bit? The big unknown is human nature, and that I could fill pages on that, only to conclude: I don't know.
According to the Singularity guys, at some point in the future you could download your whole brain, as-is, as a file. Then, if you're old enough, they could just kill your body and upload your file to a supercomputer hosting many such files, or to a VM running on a supercomputer. Each file gets its own VM.
Inside that VM they can possibly simulate a paradise. Since it's all happening inside a computer, resources are hardly a problem. They can retain the best parts of us, kill all the negative traits in humans (disease, boredom, violence, etc.), and just let the VM run as long as it possibly can.
The point is that Ray Kurzweil thinks we would no longer need a biological body to stay alive.
But... the servers, while not biological, are still physical constructs requiring energy and maintenance which are, themselves, subject to entropy and eventual breakdown or simply being shut off. You're just trading bio-rot for bit-rot but it's essentially the same thing.
Now I feel as if there ought to be some scifi novel about a war among emulated people over whether their server should use fairness-focused scheduling algorithms or a capitalistic lottery scheduler.
This would be amusing for all the other nine OS nerds in the world who get the joke.
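For those nine, a toy version of the capitalistic option: a lottery scheduler hands out CPU time in proportion to tickets held (the names and ticket counts below are made up for the joke):

    import random

    # Each emulated citizen holds lottery tickets; the scheduler picks the next
    # one to run with probability proportional to their ticket share.
    tickets = {"plutocrat": 10, "prole": 1}

    def pick_next(holders: dict) -> str:
        names, weights = zip(*holders.items())
        return random.choices(names, weights=weights, k=1)[0]

    wins = {name: 0 for name in tickets}
    for _ in range(10_000):
        wins[pick_next(tickets)] += 1
    print(wins)  # the plutocrat gets roughly 10x the prole's CPU time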