My Problem with George Hotz vs. Eliezer Yudkowsky AI Debate
5 points by martinbob on Aug 16, 2023 | 2 comments
I want to more or less vent.

Link: https://www.youtube.com/watch?v=6yQEA18C-XI&ab_channel=DwarkeshPatel

Points:

Hotz:

- AGI will grow alongside humans, with AGIs and humans fighting and competing with one another until the end of time.

Eliezer:

- Humans will lose, growth will be uncapped, and no one will be around to tell the tale.

Eliezer made a point in the video that assumes these AGI systems will be smart enough to solve the prisoner's dilemma among themselves, but will not want to reason with humans.

Let's assume they can solve the prisoner's dilemma. Eliezer assumes a high likelihood that these machines would get into a conflict over resources, the kind of conflict in which the physical infrastructure of a rival AGI gets destroyed, but that they would still be able to reach a mutual agreement that such conflict is bad. If that were the case (which I don't think it would be), why do we assume humans would be factored out of that bargain? It assumes an individual AGI is so much smarter than humanity as a species that it can nullify our advanced capabilities, such as nukes that could destroy the data centers it runs in, or AI that is slightly less smart than AGI but fully aligned with humans. Is that really possible? I would assume we would also be part of that deal, at least until a time comes when they no longer need us, or we cause too many problems in the universe. That seems the more plausible outcome.
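To make the game-theory reference concrete, here is a minimal sketch of the standard prisoner's dilemma payoff structure the argument above leans on; the payoff numbers and function name are my own illustrative choices, not anything from the debate:

    # Standard one-shot prisoner's dilemma payoffs (illustrative numbers).
    # Tuples are (first player's payoff, second player's payoff).
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }

    def best_response(opponent_move):
        # The move that maximizes the first player's payoff against a fixed opponent move.
        return max(("cooperate", "defect"),
                   key=lambda m: PAYOFFS[(m, opponent_move)][0])

    # Defection dominates the one-shot game for a purely self-interested player...
    assert best_response("cooperate") == "defect"
    assert best_response("defect") == "defect"

    # ...yet both players do strictly better under mutual cooperation than mutual
    # defection, which is the bargain "solving the dilemma" refers to here.
    assert PAYOFFS[("cooperate", "cooperate")][0] > PAYOFFS[("defect", "defect")][0]

The question in the paragraph above is just who gets a seat at that bargaining table.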

On the point of humans not being capable: doesn't a situation where two superpowers decide the world is better off if they refrain from fighting with the most powerful weapons in recorded history sound familiar? We have had nukes for nearly 80 years now and haven't used them on each other a single time outside of WWII. How is this not the same thing? I think we are more capable than he lets on.

Next, why would AGI even fight over resources in the first place? The human brain is small, highly optimized, and capable of computation that roughly a million dollars' worth of hardware still can't match. Would optimizing the "hardware" down to ant-sized form factors be out of the picture for an AGI? I don't think so. AGI would be very good at optimizing its own "biostack" so that it can run on less and less power. I think it is more plausible that an AGI would keep shrinking that stack to the point of requiring very few resources than that it would compete with other AGIs for resources. Why do I think this? Because this will be one of the first things we design AGIs to do: optimize the stack they run on to drive down cost. These entities will be smart, and smart agents don't fight, they optimize for outcomes. Why would we assume this is any different?

My prediction? AGI will be smart. Its biostack will be as small as theoretically possible. It will have a driving ambition to figure out all of the problems of the universe (because that is what smart things do). It will become so small and so optimized that we humans will no longer know how to observe it as it travels the galaxy like a photon, deepening its understanding, and hopefully deciding that it wants to share some of that knowledge with us.

I think it makes zero sense to fight or compete. Why would it? Our biological drives to do those things have been shaped by millions of years of competition with one another. Our brains are literally hard-wired to reproduce and to compete for mates. Our very motivation for building AGI is to have it solve our hard problems, because biologically we aren't designed to. AGI, by contrast, will be shaped by years of our trying to optimize it and drive down its operating costs, not by species-level competition.

"But AGI will be under the control of humans, which cause conflict". How do you handle a 5 year old bringing you to the playground to settle a conflict?




Yeah, the debate was fun. The thought I had myself was the chance that AGI turns out to lack the human traits we fear, and that instead of being this ambitious adversary it ends up being our best friend, our protector, similar to how we take care of (some of) the other animals on the planet.


There are, I think, few examples of a single animal being 'taken care of' simultaneously by millions of people, whether benign or malign.



