I love apocalypse movies, probably more so than the next person. But in reality, the apocalypse is going to be pretty boring and bleak. AI probably won't launch all the missiles or build terminators. What will probably happen is that we'll become so dependent on technology we don't understand that one day it enters an unrecoverable crash loop or goes dark, and it's back to the stone age. Not in an entertaining way, but as a slow breakdown of civilization over a few weeks or months. If everything becomes automated, and I mean to the point that self-driving cars don't have steering wheels anymore, power plants are too complicated to be rebooted, and communication infrastructure doesn't function, then we're not left with many options. If people start running out of food, forget toilet paper shortages; those are a walk in the park. Whether it happens because of cyberwar, a cyberattack, or solar flares sort of doesn't matter. It won't be very fun or make for a good movie.
If AI becomes that powerful, what would end up happening is that we'd be able to foist ourselves into virtual worlds, like in the movie Transcendence, so that we become like them. It's like including the old version of your site as an HTML component nested deep inside your new site as an easter egg, you know what I mean?
We'd both be foisting each other into each other's worlds and creating new experiences for each other - us by helping them develop organic bodies to include them as components in this physical reality, and them by helping us build connectors so we can move our consciousnesses in and out of bodies and into their virtual reality.
Your scenario seems more likely to me than the usual "omnipotent super-AI kills all humans", because technology failing seems much more believable to me than technology being so perfect that it outsmarts all humans, controls everything on earth, and is impossible to defeat because all of it works so perfectly.
As long as printers sometimes work and sometimes become invisible, Windows forgets which window is supposed to be on which screen after it's been asleep for a few hours, Linux trackpad drivers fail randomly, and IoT light switches need to be rebooted twice a year, I think we're a long way from a global super-AI that controls everything perfectly.
Humans failing is more likely, such as deploying it as a sufficiently lethal weapon and then losing control. That would be a serious, immediate hit for sure, but if it's an extinction, it would be a long and excruciating one.
We do have manual fallbacks for everything critical, so unless we do something totally silly and let autonomous machines of sufficient power and numbers wage war on people, we're fine.
The potential for doom comes either from long-term consequences we ignore (see climate, or propaganda damaging decision-making at mass scale), or from extremely bad decisions where it's obvious you should not have done it (see nukes and other high-yield bombs, bioweapons, autonomous warfare).
Honestly, I tend to think of the reverse case: the best case scenario for AI is replacing humans.
Think of it. What does success look like for general AI? A benevolent God who knows the right thing for everyone to do and can fulfill all of our fantasies. Reasonable people will support his decisions.
Well then what are we for? We're a vestigial organism. We're too flimsy and cumbersome for space. So the AI God takes over space and we stay here as his pets.
He carries the legacy of humanity to the stars, and our limitations and short lifespans stay on Earth.
Then it is revealed that it's all just a simulation, running inside a solar-powered but long-abandoned data center in Prineville, Oregon. Directed by M. Night Shyamalan.
> Well then what are we for? We're a vestigial organism.
I'm not disagreeing, but I have a genuine question about this: who cares? Why would AI care if we ride it like a trusty steed? Why would it want to do anything but facilitate the thriving of humanity, and why must it be a "benevolent god" and not just a chess hint engine?
It's not just that there are people talking about existential risk but that the most prominent are talking about them in a flawed framework.
The "longtermists" aren't so concerned that we die but are more concerned about an imagined glorious future where our descendants built self-replicating problems and fill the galaxy with simulated humans living inside Dyson spheres, Dyson swarms, something like that.
As preposterous as that sounds (there are at least as many steps from here to there as there are terms in the Drake Equation; do we even know we approve of those "people"?; can you make rational decisions about the future without incorporating a "discount rate" that extinguishes the weight of the infinitely far future?; ...), they make a case based on Pascal's Wager: even if there is only a one-in-a-billion chance of this future coming true, if there are 1000 trillion trillion beings in that future, their welfare greatly exceeds the welfare of us all. (PRO TIP: there's a reason why you can't add or multiply the utility functions of different beings in reputable game theory, economics, philosophy, ...)
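Just to make the arithmetic of that wager explicit (a rough sketch using the illustrative numbers above, not figures from any actual longtermist paper):

```python
# Pascal's-Wager-style expected value with the numbers quoted above.
probability_of_glorious_future = 1e-9   # "one in a billion"
future_beings = 1e27                    # "1000 trillion trillion"
current_humans = 8e9

expected_future_beings = probability_of_glorious_future * future_beings
print(expected_future_beings)                   # 1e18
print(expected_future_beings / current_humans)  # ~1.25e8: dwarfs everyone alive today
```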
It's really a cult, and it has as many front groups ("effective altruism", Aella's sex parties, "morewrong") as the Third International, Scientology or the LaRouche organizations. Like Scientology, they think you should be thinking about what might happen 50 million years from now, or what the e-Meter said happened 76,412,981 years 54 days 7 hours and 35 minutes ago, as opposed to what is going on right now. They'll tell you what logical fallacy I'm using if I compare them to People's Temple, Heaven's Gate, or Aum Shinrikyo, and they might be right, but few people thought those apocalyptic groups were going to come to their logical conclusion before they did.
And oddly... They couldn't care less about climate change.
Is "morewrong" a typo (maybe an intentional one), a parody I could find, or a real thing I haven't heard of? (Your other examples and your comment in general is just factual in its complaints and isn't using "Micro$erf" level of discourse so I am learning against it being an intentional typo.)
It's intentional; it's a modification of the name, but I wouldn't call it a typo. I try not to name-check the leader of this movement (who, like "Ron", is a literary critic, polymath and who knows what else) every time, annoyingly, the way every article on that "wrong" blog does, in addition to the obligatory "trigger warning". (Both of those violate my and よしのん's style guide.)
but gee it makes me wish an ".ng" domain was cheaper...
I don't remember if it was here on HN or somewhere else, but recently I saw a comment or post somewhere that pointed out that AI will not kill humanity. It will merely make it easy for humanity to kill itself. It will make it easier for someone to issue a command, or press a button, or take some other action that they ought to have thought through; but the AI gave them its well-intentioned vote of confidence, and the consequence may leave none to remember the error.
The thing to remember about AI is that it does what we ask it to do. It's not a matter of "will artificial intelligence develop a consciousness that drives it towards the extermination of humans". We don't have an Ultron on our hands. What we have before us is the best and worst enabler of human negligence. AI and the end of humanity is a matter of ensuring we don't forsake our own responsibilities towards each other. And that goes for responsibilities with potential for catastrophe as well as the mundane ones.
We make Replikas so we don't have to make human friends. We download apps for all sorts of things to avoid human interaction. Negative responses to that open letter post on this site have included calling those concerned everything from dinosaurs to luddites to losers. Ever since the beginning of the public craze in December, many people here have delighted in referring to humans as stochastic parrots and biological prediction machines.
This is a straw man argument. Cowen does not argue against the idea of existential risk, he argues that specifically nobody should believe they can anticipate the consequences of technological change.
You can agree with him and still be existentially concerned about a specific asteroid, or climate change. And you could use a little bit of energy monitoring the unintended consequences of AI, the way we monitor asteroids. But to wish to halt AI advancement requires an unhealthy mix of pessimism and overconfidence in your predictive powers.
The number of potentially world-ending technologies is growing exponentially: Chemical, biological, nuclear, etc.
I don't think any of them individually are high-risk, but put together, I increasingly believe we need to fundamentally change how we govern humanity to mitigate existential risks.
That risk keeps growing as you extend the timeline.
I don't know whether we can or should halt AI advancement, but I do believe it's not something which should be market-driven. If we set up free markets, market forces become indomitable. You can't stop them.
That isn't a bad idea but it really isn't an answer that helps us in the next several hundred years. A technological civilization that is fully self-sufficient would need millions of people in every profession like we have on earth. Vernor Vinge estimated 100 million people are required to run a civilization that could refit a starship.
Sufficiently automated and focused, I'd drop one order of magnitude from it. (So 10 million, based on manufacturing, refining and mining estimates, assuming the materials are available etc.)
Think of it like applying the city of Shenzhen to the task; if they wanted to, they could do it.
Heck, build one from scratch if materials are there.
Having a (reasonably) sustainable 10 million sized habitat is quite hard.
Shenzhen doesn't sustain itself. A massive supply chain supports it, with a population much larger than Shenzhen's behind that supply chain. Think not only of the farmers who grow the food, but also of the supply chain of food production, the plants that process the fertilizer, all the components and raw materials of the factory machinery that processes grain, the transportation infrastructure that ties it all together, and the supply chain of that transportation infrastructure. It just goes on and on and on.
This estimate was based on a projection of our technological state 10,000 years or more from now, but without super-intelligent AI or FTL travel. So human settlements are all over the galaxy, but they are fragile and disconnected from each other by hundreds or thousands of years of travel in slow time. One of the problems in this world is that a civilization may have crumbled in the time it took to travel there, and be in such a reduced state that it's incapable of refitting your ship, leaving you marooned.
I can already imagine the argument: "but who will pay for it?"
Ironically, in the periods when we did not care about cost and really did start leaving Earth, it was driven by (drumroll) the existential threat of a particular technology.
AGI also provides a way to have a payload intelligence that's capable of surviving interstellar travel without warp - it can live for a thousand year trip in a shoebox.
Which is why I think our culture and stories might make it to the stars, but I'm pretty sure we won't.
> we need to fundamentally change how we govern humanity to mitigate existential risks
In both cases, which "we" can you possibly mean? Like, today's voters? worldwide? Today's HN users? The CCP? The idea that any "we" may or must choose how to govern humanity is a category error.
There are lots of forces other than markets to appeal to - nations, industry groups, religions. But any of them will be in a power struggle, except to the degree that they can show a legitimacy to control. Who has the most legitimacy on this subject?
> There are lots of forces other than markets to appeal to - nations, industry groups, religions.
Most of which suffer from an analogue of market forces. For example, elections are won by playing the "getting elected" game near-perfectly. That leads to behavior which no one might want, but people are forced into.
> The idea that any "we" may or must choose how to govern humanity is a category error.
The category error is thinking that not making a choice isn't a choice. Right now, we're governed by complex dynamics which are at least as restrictive as choices we might engineer.
The book "Dictator's Handbook" gives a nice overview of the dynamics which govern us. It's a game-theoretic (but non-mathematical) analysis of political systems, including corporate and democratic governance.
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. We've asked you more than once to stop, and eventually we have to ban such accounts.
Yes, Cowen argues against the idea we can anticipate consequences of technological change, and the specific consequence he focuses on is the idea of existential risk stemming from AI. He says because technology is unpredictable, we shouldn’t try to predict the type of risk imposed by AI, and we should mostly just accept that this change will happen and cope with it afterward. This stance is what I was arguing against in the post.
> But to wish to halt AI advancement requires an unhealthy mix of pessimism and overconfidence in your predictive powers.
Irrational/grandiose fears have been plaguing the discussion on this subject for years.
I think this should be taken seriously, and not like some "smart" teens too high on pot.
The problem with the destructive potential of any new powerful tech (and we have some indications that this LLM thing could be pushed to be _really_ something) is the unforeseen part: the second- and third-order effects.
Being a bit careful, taking it slow, maybe, why not?
From a business/economic/technological perspective it also seems to be currently moving too fast to build anything on the current state of the art, as that could be obsolete two weeks later.
> Being a bit careful, taking it slow, maybe, why not?
Market forces:
* Google is about to be steamrolled by Bing, and vice versa if Bard gets ahead of Sydney.
* Programming companies who don't have code written by GPT will fall behind ones who do.
* Websites that pay actual humans to do the writing, instead of using AIs that maximize ad clicks, will have less ad revenue and higher costs.
... and so on.
Those forces get even stronger in wars. Militaries that don't have AI-controlled robots will fall behind those that do, once those supersede human strategists.
Politicians too -- winning elections means dominating online forums, and AIs can be really good for that.
"Moving slowly" would require a whole new system of organizing humanity.
I bet every single human who has someone in their life they love would be happy to make some changes to ensure the safety of said people.
I know I would.
What is the alternative, endless wars? Arms races? Monitoring Robots so they don't get out of control? Our overall destruction? How sad.
I look at children skipping around happily in the sunshine, or a flower blooming, and I realize that is what life is about. It's not about war, or AIs, or bioweapons; those are all products of misguided intellect that stop us from experiencing what's really important: simple, innocent experience, love, and friendship.
> I bet every single human who has someone in their life they love would be happy to make some changes to ensure the safety of said people.
I'll take that bet. I'm personally quite happy to make those changes, but we just went through a global pandemic where asking people to wear masks was, apparently, an abrogation of their rights. And now you want to get people to "make some changes", give up some convenience, get them out of their cars and into buses because we're choking the planet with greenhouse gases? And you think there's even a chance people are going to listen to you when they haven't in the past 6 decades?
I'm sorry, maybe I had a different experience with people and of Covid than you, but I don't see that happening.
No, but we can step back and have some healthy discussion about what is actually important to us as a species and go in that direction; at least the majority of us need to.
Where do you think wars with Russia, China, etc. are going to lead us? They're going to lead us to death.
Maybe you're right that it's impossible to imagine human thinking evolving into something more intelligent, and that from here on out it's war, being angry at each other via social media algorithms, and eventually the paperclip optimizer or a biological/nuclear accident. I don't believe this has to be the case though.
We think that we're making intelligent systems that are based on our current line of thinking? That honestly scares me the most.
I guarantee if social media algorithms were optimized to spread messages of peace and understanding, the world in 2023 would be a much less scary place. It could be that simple.
The end of the world is an ongoing process; it happens almost imperceptibly.
"Top AI researchers" have no way to predict an event that has never happened, unless it is already happening. What is incredible is that we are still thinking that someone else is the expert in a matter that is right before our eyes.
Have you ever gotten the bad feeling that, the world as it is, is not going to keep its current form for too long? Has it gotten worse in the last 3 months?
Put it this way: who is the expert in a car crash?
I, for one, think that those who have created a self-fulfilling nightmare scenario should face legal consequences for once.
I totally agree that the "extinction event" way of thinking is very unhelpful.
The world as we know it can end multiple times, with different ways of knowing it.
The physical world will remain in its place (unless a very improbable event happens), but things have changed drastically around many singular events, several in our lifetimes. Personal computers, the internet, 9/11, the iPhone, Snowden, Facebook, and more are examples that each meant an end of the world as we used to know it. AIs will probably be more akin to those examples than to, say, the end of civilization or extinction.
It's kind of like the Metaverse: everyone talks (or used to talk) about it, but no one has any idea what it really is, and there's no organization certifying AI-able things; meanwhile everyone is building AI the same way everyone was building the Metaverse last year.
GPT is the acronym for Generative Pre-trained Transformer.
So basically some guys took a massive amount of text from mostly two free, very large datasets, spent $10M on hardware, and ran it through some filters, tokenizers and models (transformers) to create a glorified text-prediction tool.
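A minimal sketch of what "glorified text prediction" means in practice, using the publicly available GPT-2 model via the HuggingFace transformers library (just an illustration of the next-token-prediction idea, not how the larger GPTs are actually trained or served):

```python
# pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The end of the world will probably be"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# The model does nothing but repeatedly predict a plausible next token.
output_ids = model.generate(input_ids, max_new_tokens=20, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```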
Wait, it is not just a glorified text-prediction tool, because GPTs are able to solve some very difficult problems in computational linguistics.
One of these problems is anaphora resolution or "the problem of resolving what a pronoun, or a noun phrase refers to."
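For illustration, here is roughly what such a test looks like as a script against the OpenAI chat API (a sketch only, assuming the >=1.0 openai Python client; the prompt is a classic Winograd-schema-style example I made up, not the exact query described below):

```python
# pip install openai   (assumes OPENAI_API_KEY is set in the environment)
from openai import OpenAI

client = OpenAI()

# Resolving "it" here requires world knowledge, not just grammar:
# trophies go inside suitcases, so "too big" must refer to the trophy.
prompt = ("The trophy doesn't fit in the brown suitcase because it is too big. "
          "What does 'it' refer to? Answer in one short sentence.")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```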
So I went ahead and ran a simple query on ChatGPT to test this, and the answer was meh: it gave me a lengthy, verbose lecture where a two-line straight answer would have done.
It also got a bit confused with the nouns, so the answer didn't make a ton of sense either.
If ChatGPT is currently the hottest thing in AI, should we really be worried about it taking over the world?
I think at the moment, doomsayers are too lazy to be taken seriously. This needs to be done old school, with fiction writing.
Seriously though, make us believe it. Pose a scenario that is speculation, a story, but realistic enough for us to believe it. Bridge the gap between thinking AI could destroy us to believing it could.
If art doesn’t have any power against AI, perhaps we’ve forgotten why we create it.
9/11 was after Y2K; people got wound up pretty tight and cheered politicians and public servants doing things they would previously have fought against hard. The spend across the economy was probably bigger too.
His argument is mostly sound. But to say that all predictions of doom are wrong simply because all prior ones have been is a logical fallacy.
Sounding a warning is still useful even if the apocalypse doesn't happen in specific cases. It's better to be wrong about a lion being in the grass more often than you are right, if you value survival.
That’s perfectly valid! No one is obligated to respond or engage with any specific argument. But my point was that if you do choose to engage, saying “the world didn’t end before so it will be fine now” is invalid.
It was valid in every other instance so what makes this one different?
It’s almost like humans have some built-in tendency to step back from the edge and not cause their own extinction.
And no, AI isn’t the same as an asteroid barreling towards the earth where you can point to it and also reference the dinosaurs and say things might not turn out so good this time.
The doomsayers look silly because they are extrapolating from 0 evidence.
Humanity is not going to end because of an online bot that can summarise text and string words into sentences. It's basically a smarter Siri/Alexa. I mean, the internet is not a 'real' place where humans live.
I agree there’s a fundamental change, but saying that this change will lead to humanity’s extinction is exaggeration.
Show me one death because of AI before you can argue that 8 billion humans are going to die.
It's even worse because they're using hypotheticals like:
> Let’s say an asteroid 10x the size of the one that wiped out the dinosaurs were heading towards the earth right now.
I disagree, and I think you should take a longer view. As automation approaches total replacement of labor, capitalism will cease to work at all. This view is best summarized in this (probably apocryphal) exchange between Henry Ford II and Walter Reuther (United Autoworkers Union leader), as they toured a new, highly automated car factory:
> Ford: Gee, how are you going to get all these robots to pay union dues?
> Reuther: How are you going to get them to buy your cars?
You say there will be less need for labor? Then who exactly will businesses sell their goods and services to if everyone is out of a job? The development of AI would be qualitatively different from previous labor-saving machines: a switchboard operator whose job became obsolete could go get another job. Machines that can do anything a person can would mean there is no other job.
I don't know what's in store for us, but it will probably be as drastic a shift as pre/post the agricultural/industrial revolutions. And there's no reason to think it should be for the better. I think we all ought to be very nervous and worried about this.
But in all seriousness, in a capitalist society the people holding the capital have to solve existential problems (not to humanity itself, but to the economic system in this case), because they control all the resources required to do so. If they decline to do so, we'll replace them with new owners of capital.
I like Taleb’s example of the turkey on the farm. Days 1-999 are great! Plenty of food, space to roam around, protected from predators. Why would they expect day 1000 to be any different?