
The arguments in the first part are really bad. I haven't read that much about AI safety, but I can still see it.

For example, an ASI may be able to automate manufacturing really quickly, may be able to easily convince humans to do what it needs, etc. Moreover, why wouldn't AI compete with us for resources? Materials on Earth are finite.

Also, the most likely scenario is not an evil AGI, as outlined in the article, but an indifferent one. And basing reasoning about AI on Darwinian evolution is barely useful.

Also, the talk about this tribal-peoples AI is ironically even worse than the narrow Silicon Valley viewpoint: it is Californian liberal virtue signaling.




Is there a difference between evil and indifferent if you are on the receiving end? Humanity is largely indifferent to animal welfare, and animals would qualify the way we treat them as evil.


I am currently a bit of a doomer, and if I had a magic wand, I'd pause AI research for the foreseeable future because I don't think it's creating a better world for children, at least from what I've seen so far. However... to your point:

I think we're only like this to animals because we are animals (we forgot), and we compete for the same types of resources they need: water, protein, calories. We're not intelligent enough to get ourselves out of this situation, but if we could, we would.

If we could, we'd be able to grow tasty meat in a lab, reverse climate change, restore forests, synthesize water cheaply, live in floating cities above beautiful habitats, etc. We're just not that smart, so we do dumb shit, like factory farming.


I do agree that we might live peacefully with AIs because they need different things. Two points:

We still compete for space and energy at the very least.

Humans are sympathetic to animals because we are alike. We don't want animals to suffer. However, will an AI be sympathetic to humans because we "think alike"? We don't know.


How could something survive alongside other entities without things like fear and empathy, though? It would just be constant chaos, and nothing could get done. With no fear, you could just wipe yourself out and not think twice about it. Without empathy, same: you'd just be killing things permanently.

I do wonder if the quest to build AI will teach us a lot about why the things we take for granted exist. Why have fear, limits, empathy, anger, etc.?

I suppose bacteria are a bit like this, but they're limited in their capability to evolve too quickly and wipe everything out?


No, but I think that was exactly the point being made: the article suggests evil would be required for existential risk, when indifference would suffice.


It's noteworthy that even if we do see "evil", it'll just be a veneer painted over underlying indifference, e.g. ChaosGPT.


OT: I just wanted to very belatedly thank you for this brilliant Walter Benjamin quote: <https://news.ycombinator.com/item?id=23388285>



