
Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." Scoop: theinformation.com

https://twitter.com/GaryMarcus/status/1725707548106580255




That "the most important company in the world" bit is so out of touch with reality.

Imagine the hubris.


I'd argue they are the closest to AGI (how far off that is no one knows). That would make them a strong contender for the most important company in the world in my book.


AGI without a body is just a glorified chatbot that is dependent on available, human-provided resources.

To create true AGI, you would need to make the software aware of its surroundings and provide it with a way to experience the real world.


AGI with agent architectures (i.e. giving the AI access to APIs) will be bonkers.

An AI without a body, but access to every API currently hosted on the internet, and the ability to reason about them and compose them… that is something that needs serious consideration.

It sounds like you’re dismissing it because it won’t fit the mold of sci-fi humanoid-like robots, and I think that’s a big miss.
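To make the "reason about APIs and compose them" point concrete, here is a minimal sketch of that kind of agent loop. The names (call_model, API_REGISTRY, run_agent) are hypothetical stand-ins, not any specific framework: the model proposes an API call, a harness executes it, and the result is fed back until the model says it is done.

    import json
    from typing import Callable

    # Hypothetical registry: API name -> function that calls some endpoint
    API_REGISTRY: dict[str, Callable[[dict], str]] = {}

    def call_model(history: list[dict]) -> dict:
        # Hypothetical: ask an LLM for the next step, returning either
        # {"action": "call", "api": name, "args": {...}} or
        # {"action": "finish", "answer": "..."}
        raise NotImplementedError

    def run_agent(task: str, max_steps: int = 10) -> str:
        history = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            step = call_model(history)
            if step["action"] == "finish":
                return step["answer"]
            # The model picked an API and arguments; execute it and
            # append the result so the next step can build on it.
            result = API_REGISTRY[step["api"]](step["args"])
            history.append({"role": "tool",
                            "content": json.dumps({"api": step["api"],
                                                   "result": result})})
        return "step budget exhausted"

The composition happens because each API result goes back into the history the model sees on the next step, so it can chain calls it was never explicitly programmed to chain.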


The vision API is pretty good, have you tried it?


Even if that was true, do you think it would be hard to hook it up to a Boston Dynamics robot and potentially add a few sensors? I reckon that could be done in an afternoon (by humans), or a few seconds (by the AGI). I feel like I'm missing your point.


Well, we don't know how hard it is. But if it hasn't been done yet, it must be much harder than most people think.

If you do manage to make a thinking, working AGI machine, would you call it "a living being"?

No, the machine still needs to have individuality, a way to experience "oneness" that all living humans (and perhaps animals, we don't know) feel. Some call it "a soul", others "consciousness".

The machine would have to live independently from its creators, to be self-aware, to multiply. Otherwise, it is just a shell filled with random data gathered from the Internet and its surroundings.


It's so incredibly not-difficult that Boston Dynamics themselves already did it: https://www.youtube.com/watch?v=djzOBZUFzTw


"Most important company in the world" is text from a question somebody (I think the journalist?) asked, not from Sutskever himself.


I know. I was quoting the piece.


But it doesn't make sense for the journalist to have hubris about OpenAI.


Something that benefits all of humanity in one person's or organization's eyes can still have severely terrible outcomes for sub-sections of humanity.


No it can't, that's literally a contradictory statement.


The Industrial Revolution had massive positive outcomes for humanity as a whole.

Those who lost their livelihoods and then died did not get those positive outcomes.


It could be argued that the Industrial Revolution was the beginning of the end.

For instance, it's still very possible that humanity will eventually destroy itself with atomic bombs (getting more likely every day).


> It could be argued that the Industrial Revolution was the beginning of the end.

"Many were increasingly of the opinion that they’d all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans"


One of my favorite thought nuggets from Douglas Adams


"He said what about my hair?!"

"..."

"The man's gotta go."

- Sutskever, probably


George Lucas's neck used to have a blog [0] but it's been inactive in recent years. If Ilya reaches a certain level of fame, perhaps his hair will be able to persuade George's neck to come out of retirement and team up on a YouTube channel or something.

[0] https://georgelucasneck.tumblr.com/


The moment they lobotomized their flagship AI chatbot into a particular set of political positions, the "benefits of all humanity" were out the window.


One could quite reasonably dispute the notion that being allowed to generate hate speech or whatever furthers the benefits of all humanity.


It happily answers what good Obama did during his presidency but refuses to answer about Trump's, for one. It doesn't say "nothing", it just gives you a boilerplate about being an LLM and not taking political positions. How much of that would count as hate speech?


I just asked it, and oddly enough it answered both questions, listing items and adding "It's important to note that opinions on the success and impact of these actions may vary".

I wouldn't say "refuses to answer" for that.


>It happily answers what good Obama did

"happily"? wtf?


'Hate speech' is not an objective category, nor can a machine feel hate


If they hadn’t done that, would they have been able to get to where they are? Goal-oriented teams don’t tend to care about something as inconsequential as this.


I don't agree with the "noble lie" hypothesis of current AI. That being said, I'm not sure why you're couching it that way: they got where they are because they spent less time than their competitors trying to inject safety at a time when capabilities didn't make it unsafe.

Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient, and now we see OpenAI can't seem to escape that same poison.


> Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient,

Doubt. When was the last time Google showed they had the ability to execute on anything?


My comment: "Google could execute if not for <insert thing they're doing wrong>"

How is your comment doubting that? Do you have an alternative reason, or you think they're executing and mistyped?


Your comment was "Google could execute if not for <thing extremely specific to this particular field>". Given Google's recent track record I think any kind of specific problem like that is at most a symptom; their dysfunction runs a lot deeper.


If you think a power structure that allows people to impose their will in a way that doesn't align with delivering value to your end user is "extremely specific to this particular field", I don't think you've reached the table stakes for examining Google's track record.

There's nothing "specific" about being crippled by people pushing an agenda, you'd think the fact this post was about Sam Altman of OpenAI being fired would make that clear enough.


If you were trying to express "a power structure that allows people to impose their will in a way that doesn't align with delivering value to your end user", writing "tearing themselves asunder with people convinced a GPT-3 level model was sentient" was a very poor way to communicate that.


It's a great way since I'm writing for people who have context. Not everything should be written for the lowest common denominator, and if you lack context you can ask for it instead of going "Doubt. <insert comment making it clear you should have just asked for context>"



