I'm seeing a somewhat larger stage, one where the United States and other countries are in an undeclared arms race. It just so happens that, from what we know, at least one private entity in the United States believes it is close enough to achieving that goal that it is actively working on things like alignment rather than just futzing around with GPUs and other matrix-multiplication hardware.

AGI is either just around the corner or it is 50 years or more away, and if it is just around the corner you'd hope that parties with at least some semblance of balance would end up in charge of the thing. Because if it is possible, I expect it to be done, given the amount of resources being thrown at this. Assuming it can be done, weaponization of this tech would change the balance of power in the world dramatically. Everybody seems to be worried about the wrong thing: whether or not the AGI will be friendly to us. That doesn't really matter; what matters is who controls it.

No single individual (Altman, Toner, Nadella, or anybody else) should be taking responsibility for what happens onto themselves. If anything, the board of OpenAI has shown that this isn't a matter for junior board members, because the effects reach far beyond OpenAI itself.




Practically all of the most relevant experts in this domain, the leadership of OpenAI included, think it's right around the corner.

> Assuming it can be done weaponization of this tech would change the balance of power in the world dramatically.

Yes it would, but it wouldn't be as bad as everyone dying.

> Everybody seems to be worried about the wrong thing: whether or not the AGI will be friendly to us. That doesn't really matter, what matters is who controls it.

No, "who controls it" is a problem best tackled after "will it kill everyone." You say "That doesn't really matter," but again, Sam Altman himself thinks "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."


> Practically all of the most relevant experts in this domain, the leadership of OpenAI included, think it's right around the corner.

Oh, ok. That makes it alright then. So let's see the minutes of the meeting where this was decided with all of the pomp and gravitas required, rather than as a petty act of revenge or a contest over who could oust whom first from OpenAI. Because that's what it looks like to me, based on what is now visible.

> Yes it would, but it wouldn't be as bad as everyone dying.

Not much is as bad as everyone dying. But for now that hasn't happened. It also seems a bit like a larger version of the 'think of the children' argument: you can justify any action with that reason.

> No, "who controls it" is a problem best tackled after "will it kill everyone." You say "That doesn't really matter," but again, Sam Altman himself thinks "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."

Then why the fuck is he trying to bring it into the world? Seriously, they should then spend all of their talent on sabotaging those efforts: infiltrate research groups and set up a massive campaign to divert resources away from the development of AGI. Instead they are trying as hard as they can to create the genie, in the sure conviction that they can keep it bottled up.

It's delusional on so many fronts at once that it isn't even funny; their position is so horribly inconsistent that you have to wonder if they're all right in the head.

EA is becoming a massive red flag in my book.

They seem to miss the fact that nearly every weapon humanity has so far produced has been used, and that those that haven't been used hang over us like shadows, and have done so for the last 70-some years. Those are weapons whose use you will notice. AGI is a weapon whose use you will not notice until you realize you are living in a stable dictatorship. The chances of that happening are far larger than the chances of everybody dying.


What you're missing is that Eliezer has been beating this drum for a long time, while the rest of HN and the world have been asleep at the wheel.

Sam Altman, Elon Musk, and the original founders of OpenAI believe AGI is an existential threat and are building it anyway, in the belief that it's best they're the ones who do it, rather than worse people, in an arms race. Eliezer is the one saying "you fucking fools, you're speeding up the arms race and have now injected a shit ton of VC money into naive capabilities work with no alignment."

People don't even realize that Elon Musk founded Neuralink in the belief that AGI is such an existential threat that we're better off becoming the AGI cyborgs than a separate, inferior intelligence. But most of the people here who think they're so smart and understand the AGI x-risk landscape, even the Elon fans, don't know that.


Eliezer 'we're all gonna die' Yudkowsky has his own problems.

People really watch too many movies. The real risk isn't AGI killing us all; the real risk is that plenty of people will think they are smart enough to create it and then contain it, while the ruthless faction of humanity uses it to set up shop in a way that it can never be dislodged again. Think a global Mafia or something to that effect, or a world divided into three chunks by three major power blocs, each with its own AI core. That's a much more likely outcome, and one that, thanks to an incompetent board, has just become a little bit more likely.



