People scared of AI doom are so funny to me, because the idea is highly speculative, while climate change is happening under our noses and nobody says we should stop anything...
We should stop climate change. We should also be careful about developing the capabilities of new technology that will put us in similarly precarious situations going forward.
The two problems aren’t unrelated either, as we arrived at climate change (in part) because of decades of lies, propaganda, and dissimulation. Before, fossil fuel companies and their ilk had to pay quite a bit of money to accomplish this. The price has now fallen dramatically.
maybe better rephrased: I don't often see an overlap between those who see AI as a realistic existential threat and those who see climate change as one.
Unfortunately the general case seems to be influenced quite strongly by Yud. It’s a trickle-down effect. So you’re right, and it’s a good point, as you say. But it was also correct for them to point to Yud specifically as one of the original sources of what some might call hysteria, and others might call reasonable concern.
There are two conflicting requirements: solve climate change without anybody losing any money. When climate change is strongly correlated with consumption, that's difficult.
Also we have to solve climate change without using any fancy-shmancy geoengineering, because as sinners we're obligated to suffer rather than have convenient solutions for our problems.
Others downvoted you but I think you’re right to link Puritanism to a specific sliver of the climate change activist community. There are some zealots who think we must end modernity as we know it, now, or all is lost. These people existed before climate change; they just used other arguments to back their agenda. It was overpopulation, or before that a fear of factories, industry, progress in pretty much any form. The steam engine was a great fear to many. These are the “return to nature and live off the land” types and authoritarian personalities who are bothered by freedom. Ted Kaczynski was one of these. It’s hard to sort through such people's thoughts to extract much that’s valuable. It’s like they don’t understand that history moves forward.
Yeah, I believe in climate change, but I think that short-term climate doomerism is a pretty clear stalking horse for the sort of anti-capitalist/anti-industrial sliver that you mention. I guess if I were being charitable I'd say that I could be reversing the causality here, and that climate change doomerism is simply causing anti-capitalist beliefs rather than the other way around, but I would bet that this isn't the case, and the reflexive attacks on "easy" solutions to climate change are reflective of this.
Or maybe the only kind of geoengineering that is feasible comes with unknowns that people are reasonably suspicious of, especially when it would likely be carried out by profit-making entities.
If you want to suck CO2 out of the air, no one will have an issue with it; you will just run out of money without making a dent.
And trying to go 100% carbon free (or die trying) in the next decade or two doesn’t carry risks? Decarbonizing won’t be carbon neutral, that’s for sure.
We could go with untested, still-inefficient fancy-shmancy geoengineering which may cause even greater disasters, or we could re-engineer our energy systems with solutions that have been used for decades, if not centuries, while maybe also rethinking a bit the way we live and consume. But no, where's the fun if we don't destroy an ecosystem or two?
If a sober-minded cost-benefit analysis concludes that geo-engineering isn't viable, fine. But I don't think this is the modal way that geoengineering is being dismissed.
Also, these are similar arguments to those that have been used to criticize nuclear energy for decades, criticism which has obviously been a disaster on net.
2. If it was not obvious, my comment was sarcastic. Of course there are people who care about both; what surprises me is all this fanfare from CEOs about something so speculative that it is completely science fiction right now, when the existential threat is here and now, and very few seem to point at some urgent solution like, I don't know, stopping the use of fossil fuels in 5 years. We have about a decade to fix this [1] and the progress so far has been, at best, lacklustre [2].
Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
The thing is, there are several such disasters happening every year because of climate change, and given that climate change is man-made, we can infer that those disasters are too. Pretty tragic for our species indeed, but even more tragic for the other species that live on this planet.
Hey now, plenty of CEOs care about the climate. As long as it doesn’t affect profit margins or the “Remote work” thing gets popular. In that case screw the climate. Make everyone drive cars to work!
The point isn't whether you agree with the approach; it's that it's not intellectually honest to claim that nobody who considers AI an existential risk thinks we should do anything about climate change without, e.g., mentioning that Altman's biggest investment is in fighting climate change.
Good discourse only happens when people take efforts to honestly represent the state of affairs.
I still haven't heard anybody who says we should stop developing AI also say we should stop, say, driving cars, or shut down all datacenters. Yes, both are drastic, maybe even silly, but one is a response to a real threat affecting the planet here and now, while the other is the result of a vivid fantasy.
Not comparable at all. I’m not afraid of AI; I think it’s a lot of marketing on OpenAI's part. But climate change is a trend, not something where everything looks fine one day and then the next day everything’s different.
Yep, this is key. With climate change maybe we should be alarmed now, but few people argue that human extinction within our lifetimes is a possible outcome of inaction.
Unfortunately, with AI things are different. By the time humans notice a real problem, we may be months if not hours away from death.
The basic idea is that once AGI hits an unknown capability threshold, it will likely recursively self-improve into something very dangerous and difficult to control, and will likely be able to come up with an effective plan to remove any obstacles to its intended desires, i.e. humans.
People have varying degrees of confidence in this scenario, but even if you peg it at like a 10% likelihood you're basically conceding that it's the single most important policy issue in the current political climate.