Hacker News
Natural Selection Favors AIs over Humans (arxiv.org)
4 points by bondarchuk 11 months ago | 4 comments



Seems like a wishy-washy take to me, based on the abstract.

AI is a software tool. What we need to worry about is humans who can use this tool for good as well as for evil.

AI tools that can put dangerously many members of society out of jobs, destabilizing entire countries? No, human managers are the ones who choose to use AI and fire people, and human engineers working for OpenAI et al. are the ones who provide those AI tools for profit (built on the creative work of the very humans being put out of work). Self-replicating, persistent software that is impossible to turn off? Sure, but only humans can develop and launch it. Software that exhibits some evil "intelligence" (what is intelligence, by the way)? Ask the humans who designed it and prepared its training data. Humans who become "convinced by AI" that it is conscious and should be set free? Yes, there can be crazy humans; that's an issue.

Between humans and AI, humans think and have agency; humans are the problem but they are also the solution (and the end goal). Let’s pay more attention to humans.


(Submitter here) This is the article from which [The Darwinian argument for worrying about AI](https://news.ycombinator.com/item?id=36533396) (on the frontpage 3 days ago) was excerpted (thanks to da39a3ee for pointing it out).

Submitting again because I'm completely baffled, really, by the almost complete absence of concern for evolutionary dynamics[1], both in the recent discussions around GPT etc. and in the hardcore alignment communities (LW etc.) of old.

[1] [This](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators) is the only exception I know of, and it didn't get much traction outside LW at least (and I have no idea about inside LW).


Hmm, I think there is a basic misunderstanding of both natural selection and evolution presented here. Simply put, natural selection does not prefer any particular trait or characteristic: what matters is luck (that a mutation exists in the first place) and the means for that mutation to propagate (which again depends on several factors, including luck). Evolution is the emergence of characteristics in a population over time. Forget about humans for a minute. We've been inheriting the traits and characteristics of mammals, and mammalian evolution reflects hundreds of millions of years of pressure-driven selection (and billions more if you follow the lineage further back), https://en.m.wikipedia.org/wiki/Timeline_of_human_evolution.

Altruism seems to be an inherently mammalian trait, though birds and fish may exhibit it as well (more research is being done on this). Regardless, if it were true that selfishness was favored by evolutionary pressure, then enough time has passed that it would be evident in what exists today. Humans, as much as we like to rag on them, are by and large not selfish. In fact, the most selfish and non-altruistic among us are safe and sound precisely because of the throngs and throngs and throngs of ironically selfless others: the establishment of and adherence to social and societal norms is predicated on some degree of selflessness. If anything, looking at the history of animal evolution, it's unclear to me how anything intelligent could avoid favoring selflessness. Selflessness seems to be a necessity of group dynamics, and evolutionary pressure apparently favored the emergence of such group dynamics.
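The claim that group dynamics select for selflessness can actually be made precise; it's essentially Hamilton's rule, which says a costly altruistic trait spreads when assortment (the chance of interacting with your own type) exceeds the cost-to-benefit ratio. Here's a toy replicator-dynamics sketch of a donation game (my own illustration, not from the paper; the payoff values and the `evolve` helper are arbitrary/hypothetical):

```python
# Toy donation game: cooperators pay cost c to give benefit b to a partner.
# With probability r an individual is paired with its own type (assortment,
# i.e. group structure). Hamilton's rule predicts cooperation spreads
# when r*b > c.

def evolve(x, b, c, r, steps=2000, base=4.0):
    """Iterate discrete replicator dynamics; x = fraction of cooperators."""
    for _ in range(steps):
        f_coop = (r + (1 - r) * x) * b - c   # expected payoff to a cooperator
        f_defect = (1 - r) * x * b           # expected payoff to a defector
        w_c = base + f_coop                  # 'base' keeps fitness positive
        w_d = base + f_defect
        x = x * w_c / (x * w_c + (1 - x) * w_d)
    return x

# With assortment (r*b = 1.5 > c = 1), cooperators take over;
# in a fully mixed population (r = 0), defectors win.
print(evolve(0.5, b=3, c=1, r=0.5))  # close to 1.0
print(evolve(0.5, b=3, c=1, r=0.0))  # close to 0.0
```

Under these assumptions the same starting population tips to all-cooperators or all-defectors depending only on whether interactions are structured, which is the "group dynamics" point in a nutshell.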

As far as AI is concerned, I think we are really just worried about the behavior of the most selfish humans among us. Previous generations had similar worries, which is how a lot of financial regulation came about. Anyway, it seems shortsighted to me to conclude anything about the selective pressures humans may face from the existence of AI in its current form.

Any intelligent, self-propagating being seems to favor cooperation. I guess it would be interesting to see whether this is correct! In its current form, however, I don't think we have anything that resembles a self-propagating, autonomous, sentient being. If we do get to that point, we shouldn't call it "AI"; it would just be another form of life.


> To counteract these risks and evolutionary forces, we consider interventions such as

Power switches



