OpenAI's board has fired Sam Altman (openai.com)
5710 points by davidbarker 11 months ago | 2531 comments



All: our poor single-core server process has smoke coming out its ears, as you can imagine.

I so hate to do this, but for those who are comfortable viewing HN in an incognito window, it will be much faster that way. (Edit: this comment originally said to log out, but an incognito window is better because then you don't have to log back in again. Original comment: logging in and out: HN gets a lot faster if you log out, and it will reduce the load on the server if you do. Make sure you can log back in later! or if you run into trouble, email hn@ycombinator.com and I'll help)

I've also turned pagination down to a smaller size, so if you want to read the entire thread, you'll need to click "More" at the bottom, or use links like these:

https://news.ycombinator.com/item?id=38309611&p=2

https://news.ycombinator.com/item?id=38309611&p=3

https://news.ycombinator.com/item?id=38309611&p=4

https://news.ycombinator.com/item?id=38309611&p=5

Sorry! Performance improvements are inching closer...


From the NYT article [1] and Greg's tweet [2]:

"In a post to X Friday evening, Mr. Brockman said that he and Mr. Altman had no warning of the board’s decision. “Sam and I are shocked and saddened by what the board did today,” he wrote. “We too are still trying to figure out exactly what happened.”

Mr. Altman was asked to join a video meeting with the board at noon on Friday and was immediately fired, according to Mr. Brockman. Mr. Brockman said that even though he was the chairman of the board, he was not part of this board meeting.

He said that the board informed him of Mr. Altman’s ouster minutes later. Around the same time, the board published a blog post."

[1] https://www.nytimes.com/2023/11/17/technology/openai-sam-alt...

[2] https://twitter.com/gdb/status/1725736242137182594


So they didn't even give Altman a chance to defend himself against the charge of lying (inconsistent candour, as they put it). Wow.


Another source [1] claims: "A knowledgeable source said the board struggle reflected a cultural clash at the organization, with Altman and Brockman focused on commercialization and Sutskever and his allies focused on the original non-profit mission of OpenAI."

[1] - https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...


TY for sharing. I found this to be very enlightening, especially when reading more about the board members who were part of the ouster.

One of the board members who fired him co-signed these AI principles (https://futureoflife.org/open-letter/ai-principles/) that are very much in line with safeguarding general intelligence.

Another of them wrote this article (https://www.foreignaffairs.com/china/illusion-chinas-ai-prow...) in June of this year that opens by quoting Sam Altman saying US regulation will "slow down American industry in such a way that China or somebody else makes faster progress” and basically debunks that stance...and quite well, I might add.


So the argument against AI regulations crippling R&D is that China is currently far behind and also faces its own weird government pressures? That's a big gamble: applying very long-term regulations (as they always end up being) to a short-term window, betting on the predictions of a non-technical board member.

On top of that, there's far more to the world than China, and importantly, developments happen both inside and outside the scope of regulatory oversight (usually only heavily commercialized products face scrutiny). China itself will eventually catch up to the average - progress is rarely a non-stop hockey stick; it plateaus. LLMs might already be hitting a wall (https://twitter.com/HamelHusain/status/1725655686913392933).

The Chinese are experts at copying and stealing Western tech. They don't have to be on the frontier to catch up to a crippled US and then continue development at a faster pace, and as we've seen repeatedly in history, regulations stick around for decades after their utility has long passed. They are not levers that go up and down; they go in one direction, and maybe after many, many years of damage they might be adjusted - but usually only after 10 starts/stops and half-baked non-solutions papered on as real solutions, if at all.


> The Chinese are experts at copying and stealing Western tech.

Sure, that's been their modus operandi in the past, but holding the opinion that a billion humans on the other side of the Pacific are capable only of copying, with no innovation of their own, is a rather strange generalization for a thread on general intelligence.


Well, I guess (hope) no one thinks it is due to genetic disabilities preventing disruptive innovations from (mainland) Chinese people.

It is rather a cultural/political thing. Free thinking and stepping out of line is very dangerous in an authoritarian society. Copying approved tech, on the other hand, is safe.

And this culture has not changed in China lately, rather the opposite. Look what happened to the Alibaba founder, or why there is no more Winnie the Pooh in China.


This seems to make more sense. Perhaps it has to do with OpenAI not being "open" anymore. Not supporting and getting rid of the OpenAI Gym was certainly a big change in the direction of the company.


I'm confused. It's usually the other way around; the good guy is ousted because he is hindering the company's pursuit of profit.


This time he was ousted because he was hindering the pursuit of the company's non-profit mission. We've been harping on the non-openness of OpenAI for a while now, and it sounds like the board finally had enough.


"This time he was ousted because he was hindering the pursuit of the company's non-profit mission. "

This is what is being said. But I am not so sure the real reasons discussed behind closed doors are the same. We will find out if OpenAI does indeed open itself up more; till then I remain sceptical, because lots of power and money are at stake here.


Those people aren't about openness. They seem to be members of the "AI will kill us all" cult.

The real path to AI safety is regulating applications, not fundamental research, and making fundamental research very open (which they are against).


That's what it's looking like to me. It's going to be as beneficial to society as putting Greenpeace in charge of the development of nuclear power.

The singularity folks have been continuously wrong about their predictions. A decade ago, they were arguing the labor market wouldn't recover because the reason for unemployment was robots taking our jobs. It's unnerving to see these people gaining traction while actively working against technological progress.


I want you to be right. But why do you think you're more qualified to say how to make AI safe than the board of a world-leading AI nonprofit?


Literal wishful thinking ("powerful technology is always good") and vested interests ("I like building on top of this powerful technology"), same as always.


Because I work on AI alignment myself and had been training LLMs long before Attention is All You Need came out (which cites some of my work).


Someone is going to be right, but we also know that experts have been known to be wrong in the past, ofttimes to catastrophic effect.


In this case, the company is a non-profit, so it is indeed the other way around



It is not that simple. https://openai.com/our-structure

The board is for the non-profit that ultimately owns and totally controls the for-profit company.

Everyone who works for or invests in the for-profit company has to sign an operating agreement stating that the for-profit actually does not have any responsibility to generate profit and that its primary duty is to fulfill the charter and mission of the non-profit.


Then what's the point of the for-profit?


> Then what's the point of the for-profit?

To allow OpenAI to raise venture capital, which allows them to exchange equity for money (ie, distribute [future] rights to profit to shareholders)


If you don’t know anything, why are you posting


Yeah, I thought that was the most probable reason, especially since these people don't have any equity, so they have no interest in the commercial growth of the org.

Apparently Microsoft was also blindsided by this.

https://www.axios.com/2023/11/17/microsoft-openai-sam-altman...


So it looks like they did something good.


Yes. They freed Sam and Greg from their shackles and gave a clear indicator that OAI engineers should jump ship into their new venture. We all win.


Perhaps joining Bret Taylor and his friend from Google X? Can’t imagine what those brains might come up with.


If you want AI to fail, then yes.


Melodrama has no place in the AI utopia.


The only thing utopian ideologies are good for is finding 'justifications' for murder. The "AI utopia" will be no different. De-radicalize yourself while you still can.


> The only thing utopian ideologies are good for is finding 'justifications' for murder.

This seems more like your personal definition of "utopian ideology" than an actual observation of the world we live in.


It seems like an observation to me. Let's take the Marxist utopian ideology. It led to 40-60 million dead in the Soviet Union (The Gulag Archipelago is an eye-opening read), and 40-80 million dead in Mao Zedong's China. It's hard to even wrap my mind around that number of people dead.

Then a smaller example: Matthias's cult, in the "Kingdom of Matthias" book. It started around the same time as Mormonism and led to a murder. Or the Peoples Temple cult, with 909 dead in a mass suicide. The communal aspects of these give away their "utopian ideology".

I’d like to hear where you’re coming from. I have a Christian worldview, so when I look at these movements it seems they have an obvious presupposition on human nature (that with the right systems in place people will act perfectly — so it is the systems that are flawed not the people themselves). Utopia is inherently religious, and I’d say it is the human desire to have heaven on earth — but gone about in the wrong ways. Because humans are flawed, no economic system or communal living in itself can bring about the utopian ideal.


"I have a Christian worldview"

We are quite OT here, but I would say Christianity in general is a utopian ideology as well. All humans could be living in peace and harmony if they just believed in Jesus Christ. (I know there are differences, but this is the essence of what I was taught.)

And well, how many were killed in the name of the Lord? Quite a lot, I think. Now you can argue those were not really Christians. Maybe. But Marxists argue the same about the people responsible for the gulags. (I am not a Marxist, btw.)

"Because humans are flawed, no economic system or communal living in itself can bring about the utopian ideal."

And it simply depends on the specific utopian ideal, because a good utopian concept/dream takes humans as they are - and still finds ways to improve living conditions for everyone. Not every utopia claims to be an eternal heaven for everyone; there are more realistic concepts out there.


You could also credit Marxism for workers' rights.

Claiming that utopian ideologies NEVER do good in the world would require some very careful boundary drawing.


Kibbutz?


Huh, I've read Marx and I don't see the utopianism you're referencing.

What I do see is "classism is the biggest humanitarian crisis of our age" and "solving the class problem will improve people's lives," but nowhere do I see a claim that non-class problems will cease to exist. People will still fight, get upset, struggle - just not on class terms.

Maybe you read a different set of Marx's writing. Share your reading list if possible.


This article gives a clear view of Marx's and Engels's view of utopianism vs. that of the other utopian socialists [1]: Marx was not opposed to utopianism per se, but rather to utopias whose ideas did not come from the proletariat. Yet you're right in that he was opposed to the views of the other utopian socialists, and there is tension in the views of the different socialist thinkers of that time. (I do disagree with the idea that refusing to propose an ideal keeps one from in practice having a utopic vision.)

That said my comment was looking mainly at the result of Marxist ideology in practice. In practice millions of lives were lost in an attempt to create an idealized world. Here is a good paper on Stalin’s utopian ideal [2].

[1] https://www.jstor.org/stable/10.7312/chro17958.7?searchText=...

[2] https://www.jstor.org/stable/3143688?seq=1


That makes sense. It would be like being able to attribute deaths due to Christianity to the Bible because there is a genealogy of ideas?


I know we are a bit off topic. It seems it would be more like if several prominent followers of Jesus had committed mass genocide in their respective countries within a century of his teachings. Stalin is considered a Marxist-Leninist.


Oh ok. That makes sense. That's because if someone has an idea that causes a lot of immediate harm then the idea is wrong, but if there is a gap then it is not?


Utopian ideologies are also useful when raising funds from SoftBank and ARK


Yeah, AI will totally fail if people don't ship untested crap at breakneck speed.

Shipping untested crap is the only known way to develop technology. Your AI assistant hallucinates? Amazing. We gotta bring more chaos to the world, the world is not chaotic enough!!


All AI and all humanity hallucinates, and AI that doesn't hallucinate will functionally obsolete human intelligence. Be careful what you wish for, as humans are biologically incapable of not "hallucinating".


GPT is better than the average human at coding. GPT is worse than the average human at recognizing the bounds of its knowledge (i.e. it doesn't know that it doesn't know).

Is that fundamental? I don't think so. GPT was trained largely on random internet crap. One of the popular datasets is literally called The Pile.

If you just use The Pile as a training dataset, the AI will learn very little reasoning, but it will learn to make plausible shit up, because that's the training objective. Literally. It's trained to guess the Pile.
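(A rough sketch of what "guess the Pile" means mechanically - a next-token-prediction loss in PyTorch-style code. `model` and `token_ids` are hypothetical placeholders for any causal LM and a batch of tokenized training text, not anyone's actual training code.)

    import torch
    import torch.nn.functional as F

    def next_token_loss(model, token_ids):
        # token_ids: (batch, seq_len) integer tensor of tokenized corpus text
        inputs = token_ids[:, :-1]        # every token except the last
        targets = token_ids[:, 1:]        # the same sequence shifted left by one
        logits = model(inputs)            # (batch, seq_len - 1, vocab_size)
        # The model is rewarded purely for assigning high probability to
        # whatever token actually came next in the corpus.
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            targets.reshape(-1),
        )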

Is that the only way to train an AI? No. E.g. check the "Textbooks Are All You Need" paper: https://arxiv.org/abs/2306.11644 - a small model trained on a high-quality dataset can beat much bigger models at code generation.

So why are you so eager to use a low-quality AI trained on crap? Can't you wait a few years until they develop better products?


Being better than the average human at coding is as easy as being better than the average human at surgery. Until it's better than actual skilled programmers, the people who are programming for a living are still responsible for learning to do the job well.


Because people are into tech? That's pretty much the whole point of this site.

Just imagine if we all only used proven products, never trying out cool experimental or incomplete stuff.


Without supposing we're on this trajectory, humans no longer needing to focus on being productive is how we might be able to focus on being better humans.


Well, that's the goal isn't it? Having AI take over everything that needs doing so that we can focus on doing things we want to do instead.


Some humans hallucinate more than others


humanity is capable of taking feedback, citing its sources, and not outright lying

these models are built to sound like they know what they are talking about, whether they do or not. this violates our basic social coordination mechanisms in ways that usually only delusional or psychopathic people do, making the models worse than useless


Nobody's forcing anybody to use these tools.

They'll improve on hallucinations and such later.

Imagine people not driving the Model T because it didn't have an airbag lmao. Things take time to be developed and perfected.


The Model T killed a _lot_ of people, and almost certainly should have been banned: https://www.detroitnews.com/story/news/local/michigan-histor...

If it had been, we wouldn't now be facing an extinction event.


Yea, change is bad.


Numerically, most change is bad.


And yet we make progress. It seems we've historically mostly been effective at hanging on to positive change, and discarding negative change


Yes, but that's an active process. You can't just be "pro change".

Occasionally, in high risk situations, "good change good, bad change bad" looks like "change bad" at a glance, because change will be bad by default without great effort invested in picking the good change.


You weren't around when Web 2.0 and the whole modern internet arrived, were you? You know, all the sites that you consider stable and robust now (Google, YT and everything else) shipping with a Beta sign plastered onto them.


I first got internet access in 1999, IIRC.

Web sites were quite stable back then - not really much less stable than they are now. E.g. Twitter now has more issues than web sites I used often back in the 2000s.

They had "beta" signs because they had much higher quality standards. They warned users that things were not perfect. Now people just accept that software is half-broken, and there's no need for beta signs - there's no expectation of quality.

Also, being down is one thing; sending random crap to a user is completely another. E.g. consider web mail: if it is down for one hour, it's kinda OK. If it showed you random crap instead of your email, or sent your email to the wrong person, that would be very much not OK, and that's the sort of issue OpenAI is having now. Nobody complains that it's down sometimes, but it returns erroneous answers.


But it’s not supposed to ship totally “correct” answers. It is supposed to predict which text is most likely to follow the prompt. It does that correctly, whether the answer is factually correct or not.


If that is how it was marketed - with the big disclaimers tarot readers have, that this is just for entertainment and not meant to be taken as factual advice - it might be doing a lot less harm, but Sam Altman would make fewer billions, so that is apparently not an option.


Chat-based AI like ChatGPT is marketed as an assistant. People expect that it can answer their questions, and often it can answer even complex questions correctly. Then it can fail miserably on a basic question.

GitHub Copilot is an auto-completer, and that's, perhaps, a proper use of this technology. At this stage, make auto-completion better. That's nice.

Why is it necessary to release "GPTs"? This is a rush to deliver half-baked tech, just for the sake of hype. Sam was fired for a good reason.

Example: somebody markets a GPT called "Grimoire" as a "100x Engineer". I gave it a task to make a simple game, and it just gave a skeleton of code instead of an actual implementation: https://twitter.com/killerstorm/status/1723848549647925441

Nobody needs this shit. In fact, AI progress can happen faster if people do real research instead of prompting GPTs.


Needlessly pedantic. Hold consumers accountable too. "Durr I thought autopilot meant it drove itself. Manual, nah brah I didn't read that shit, reading's for nerds. The huge warning and license terms, didn't read that either dweeb. Car trying to stop me for safety if I take my hands off the wheel? Brah I just watched a Tiktok that showed what to do and I turned that shit offff".


Perhaps we need a better term for them then. Because they are immensely useful as is - just not as a, say, Wikipedia replacement.


You could also say that shipping social media algorithms with unknown effects on society as a whole is why we're in such a state right now. Maybe we should be more careful next time around.


This is not a story about AI.

It's a story about greed, vanity, and envy.

Impossible to be more human than that.


> "Sutskever and his allies focused on the original non-profit mission of OpenAI."

Seems reasonable; I mean, that's why Sutskever joined in the first place?


Not just Sutskever - other top researchers also joined the then-nascent OpenAI team for the same reason. Most of them are on record indicating they turned down much bigger paychecks.

The problem I see is that the astronomical costs of training and inference warrant a for-profit structure like the one Sam put up. It was a nice compromise, I thought; but of course, Sutskever thinks otherwise.


Maybe Sutskever is finished with his LLM experiments and now has other interests and ideas to pursue, while Sam was keen to make money and stay on the same trajectory. Microsoft also felt the same way.

I could see this.


The commercial shift has started quite some time ago, what's the point of firing them now?

And why such a controversial wording around Altman?

Why fire Brockman too?


Brockman quit, he wasn’t fired.


He was removed from one of his roles (chairman) and quit the other (president) if I understand correctly.


If true, this gives me hope the Open can return to OpenAI.


Given the board members’ focus on safety, maybe not that likely.


Open source is the only path to broad public accountability, which is a prerequisite for safety.


Microsoft won't be happy about this


What is bad for Microsoft is good for the world.


It's hard to believe that Altman was fired over his stance on commercialisation.


The fact that the press release is 50% dedicated to repeating that OpenAI is supposed to be a non-profit and help all of humanity isn't enough for you to believe this is the reason?


The abruptness of the firing and the fact that they give his lying to the board as the reason is why I don't believe that this is over a general disagreement on direction.


They have to say the reason is a fireable offense or he can sue them. Or will be more likely to win if he does.


It's exactly the other way around - if they dismiss him for a vague general reason, they're much less exposed to litigation than they would be if they falsely accused him of lying.


You are 100% correct here, which is how we can reasonably conclude that the accusations were not false.


If the accusations by the board are true, that doesn't explain why Brockman and a few of the senior researchers quit as a response to all of this.


Them leaving does not imply the accusations are false. They may like him, they may dislike the new boss regardless of the accusations, or they may dislike the overall future direction. They may think they would be fired sometime later regardless.


As another comment below mentioned, Elon Musk hinted at this in his interview with Lex Fridman.

Specifically, he mentioned that OpenAI is supposed to be open source and non-profit. Pursuing profit and making it closed-source brings "bad karma".


Why can't someone use the money from profit to do nonprofit work again once others have caught up? The only moat seems to be the research time invested.


Many believe that race dynamics are bad, so they have the goal of going as slowly and carefully as possible.

The split between e/acc (gotta go fast) and friendly AI/Coherent Extrapolated Volition (slow and cautious) is the first time in my life I've come down on the (small-c) conservative side of a split. I don't know if that's because I'm just getting older and more risk averse.


What a hypocritical board, firing them after massive commercial success!

Classic virtue signalling for the sake of personal power gains, as is so often the case.


What's hypocritical about a non-profit firing a leader who wanted lots of profits?


Didn't think I'd need to explain this:

The hypocritical part is doing so right AFTER beginning to take off commercially.

An honorable board with backbone would have done so at the first inkling of commercialization instead (which would have been 1-2 years ago).

Maybe you can find a better word for me, but the point should be easy to get...


OpenAI hasn't made billions in profits. Their operating costs are huge and I'm pretty sure they're heavily reliant on outside funding.


Which puts into question the whole non-profitness anyway, but that aside:

They have still been operating pretty much like a for-profit for years now so my point still stands.


Your point hinged on billions in profit, which you just made up or assumed to be true for some reason. I don't think any of your points stand. Don't use facts you haven't checked as preconditions for points you want to make.


[flagged]


A non-profit doesn’t have to offer their services for free, they can cover their expenses.

A profit driven company will often offer their services below cost in order to chase away the competition and capture users.


Right.

Which is why the board's accusations against Sam are a farce as far as we can tell.


Have they gotten specific yet? Last I heard was the whole “not sufficiently candid” thing, which is really nebulous; hard to call it a farce really. It is a “to be continued.”

I’m going to wait and see before I get too personally attached to any particular position.


To think that "Non-Profit" means "Free" is pretty naive. There are operating costs to maintain millions of users. That doesn't mean they are trying to profit.


Exactly.

So what's Sam's crime exactly, trying to cover the costs?


Again, conjecture with no supporting evidence.


Not sure what you're trying to say.

Clearly, under Altman, OpenAI has been massively successful one way or another, correct?

Now they boot him and claim moral superiority? Really?


I mean, as far as I know the guy hasn't written a single line of code.


Three other board members stepped down this year. It might not have been possible before.


Ofc it's "not possible" in that it may incur personal costs.

But it's the honorable thing to do if you truly believe in something.

Otherwise it's just virtue signalling.


No, they may literally have not had the votes.


Almost more of a "takeover" by the board after it's successful lol


I am going to go out on a limb here and speculate... this was because of the surprise party-crashing by the Microsoft CEO at OpenAI's first Developer Conference...


Kara Swisher was told the dev conference was "an inflection point", so it's not that speculative.


I doubt this was a surprise to them. I'm sure Sam was well aware of the concerns and repeatedly ignored them, and even doubled down, putting OpenAI's mission in jeopardy.

Many politically aligned folks will leave, and OAI will go back to focusing on the mission.

New company will emerge and focus on profits.

Overall probably good for everyone.


Why would employees be consulted before being fired?


Because board members are not employees, or not just employees. They're part of the democratic governance of an organization.

The same way there's a big difference between firing a government employee and expulsion of a member of Congress.


Wow, that is actually the first time I've heard someone use democracy and corporation unironically together...

In a sense, board members have even less protection than rank and file. So no, nothing special is happening at OpenAI, other than a founder CEO being squeezed out - not the first nor the last one. And personal feelings never factor into that kind of decision.


Ha, true. Well, I did say "democratic governance", not "democracy" itself.

Substitute "rules of order" or "parliamentary procedure" if you like. At the end of the day, it's majority vote by a tiny number of representatives. Whether political or corporate.


Is that news to you? Corporate governance is structured pretty much the same as parliamentary democracies. The C-suite is the cabinet, the board of directors is the parliament/house of representatives, and the shareholders are the public/voters.


would be hilarious if Altman was directly hired by Microsoft to head their AI teams now.


He may have had ample chance before.


Sam's sad face in the NYT article is pretty priceless.


[flagged]


Google Meet is quite good, much better than Teams, IME.


Yup, it's my default for most meetings: share a link and it just works fine.


OpenAI also uses Google Forms -- here's what you get if you click the feedback form if your question gets flagged as violating openAI's content policies https://docs.google.com/forms/d/e/1FAIpQLSfml75SLjiCIAskEpzm...


I think the shock is about the privacy risks.


minus the irony that it doesn't run on 32-bit Chrome and I had to load Edge at work to use it


What should they use? Self hosted Jitsi?


I mean, presumably Teams?


Haven't these people suffered enough!?


In my experience Teams is great for calls (both audio and video), horrible for chat. I guess because it's built on top of the Skype codebase? (Just a guess.)

But that's out of scope for this discussion.


The chat portion of Teams is so very poorly designed compared to other corporate chat systems I've used.

I mean even copy and paste doesn't work correctly. You highlight text, copy it and Teams inserts its own extra content in there. That's basic functionality and it's broken.

Or you get tagged into conversations with no way to mute them. For a busy chat, that alert notification can be going off continuously. Of course, the alert pop-up has been handily placed to cover the unmute icon in calls, so when someone asks you a question you can't answer them.

Teams feels like a desperate corporate reaction to Slack with features added as a tickbox exercise but no thought given to actual usability.

I never thought that Slack or the whatever Google's chat system is currently called was in any way outstanding until I was made to use the dumpster fire that is Teams.

It's a classic example of where the customers, corporate CTOs, are not the end users of a product.


I hope you'll never have to use Webex.


Sweet fuck, after COVID I forgot about Webex. I think I might have PTSD from that.

The Teams/Zoom/other platform arguments have nothing on how unfriendly, slow, and just overall trash Webex is.


Working at a company that still uses it, but with a change on the horizon.

It still, in the year 2023, plays an unmutable beep noise for every single participant that joins, with no debouncing whatsoever.


It astounded me that that company was either unwilling or unable to cash in on work from home during covid.

That has to be among history's biggest missed opportunities for a tech company.

Anyone here associated with them? Why didn't they step up?


I can relate


teams is the absolute worst


Have you used Google meet though? Even teams isn't that bad.


All I notice is that my time going from calendar to a Teams call is ~30 seconds due to slow site loading and extra clicks. Calendar to Meet call is two clicks and loads instantly with sane defaults for camera/microphone settings. It's significantly better than Teams or Zoom in those regards.


If you're fully immersed in the Microsoft ecosystem, going from your Outlook calendar to a Teams call is a single click, and the desktop app doesn't take as long to get into the call.


If you're fully immersed in the Microsoft ecosystem I pray for you


I use both and shudder every time I am forced to use the lame web app alternatives to Word, Excel & PowerPoint on desktop - mostly because my child's school runs on the web alternatives. Ironically, even on Android, Outlook seems to be the only major client that actually provides a unified inbox across mail accounts, which is why I switched and use my Gmail accounts through it.


I have used both, and vastly prefer Google Meet. I prefer something that works in Firefox.


Even Zoom works well in Firefox. Still prefer the UX of Google Meet though.


What’s the issue with Meet? It always seems to work when I need it.


Having used both in a professional capacity I have to say Teams is shockingly worse than Google Meet.

I've never had my laptop sound like an Apache helicopter while on a call with Google Meet, yet simply having Teams open had me searching for a bomb shelter.


Teams sucks compared to Meet, IMHO.


Given the GP's username, maybe some Wakandan tech?


We at dyte.io are planning to launch something here! Hoping to solve all the challenges people face with Teams, Meet, Zoom, etc.


Shall we jump on a dyte? Gets reported to HR for unwanted advances


Shall we jump on a dyte? Sure, can you swim though?


How are you going to break into and differentiate yourself in an already oversaturated market of video call competitors?


All video call software sucks in various ways. Corporate IT throttling, filtering and analyzing traffic with a mishmash of third-party offerings "to increase security" does not help.


Keet [1] doesn't suck. Fully encrypted, peer to peer. Bandwidth only limited by what the parties to the call have access to.

[1] https://keet.io/


> [...] Fully encrypted, peer to peer. [...]

Those are the two features the average user cares about least. Most users are happy if sound and video work instantly, always. Maybe some marketing department should focus on that?

(I don't know Keet; yes, encryption is still an important feature.)


Peer to peer makes it as fast as possible because it doesn't have to pass through a third party's servers (which, for cost reasons, normally limit the bandwidth of the communication channel they are serving).

This is just like when you pull down a torrent. You can do it as fast as your bandwidth and the bandwidth of the peers who are seeding it to you allow. Which can be blazingly fast.


Then market it as "fast". Nobody (except a few zealots) cares about the implementation details.


I'm not marketing it (I'm a user, not a developer). And I would think that HN is exactly the forum where ppl care about the implementation details.


Google meet is excellent for videoconferencing actually.


power hijack by the doomers. too bad the cat is out of the bag already


Quite possible, actually. This seems to be becoming a really hot political potato, with at least 3 types of ambition running it: 1. Business, 2. Regulatory, 3. 'Religious/Academic'. By the latter I mean that the divide between AI doomerists and others is caused by insubstantiable dogma (doom/nirvana).


> insubstantiable dogma (doom/nirvana)

What do you mean by this? Looks like you're just throwing out a diss on the doomer position (most doomers don't think near future LLMs are concerning).


Neither AI fears nor the singularity is substantiated. Hence the discussion is a matter of taste and opinion, not of facts. They will be substantiated once one or the other comes to fruition. The fact that it's a matter of taste and opinion only makes the discussion more heated.


Wouldn't this put AI doomerism in the same category as nuclear war doomerism? E.g. a thing that many experts think logically could happen and would be very bad but hasn't happened yet?


I'm unaware of an empirical demonstration of the feasibility of the singularity hypothesis. Annihilation by nuclear or biological warfare on the other hand, we have ample empirical pretext for.

We have ample empirical pretext to worry about things like AI ethics, automated trading going off the rails and causing major market disruptions, transparency around use of algorithms in legal/medical/financial/etc. decision-making, oligopolies on AI resources, etc.... those are demonstrably real, but also obviously very different in kind from generalized AI doomsday.


That's an excellent example of why AI doomerism is bogus in a way that nuclear war fears weren't.

Nuclear war had very simple mechanistic concept behind it.

Both sides develop nukes (proven tech), put them on ballistic missiles (proven tech). Something goes politically sideways and things escalate (just like in WW1). Firepower levels cities and results in tens of millions dead (just like in WW2, again proven).

Nuclear war experts were actually experts in a system whose outcome you could compute to a very high degree.

There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.

You can already trivially load up a car with explosives, drive it to a nearby large building, and cause massive damages and injury.

Yes, it's plausible a lone genius could manufacture something horrible in their garage and let it rip. But this is in the domain of 'fictional what-ifs'.

Nobody factors in the fact that in the presence of such a high-quality AI ecosystem, the opposing force probably has AI systems of their own to help counter the threat (megaplague? Quickly synthesize a mega-vaccine and just print it out at your local health center's biofab. Megabomb? Possible even today, but that's why stuff like uranium is tightly controlled. Etc., etc.). I hope everyone realizes all the latter examples are fictional fearmongering without any basis in known cases.

AI would be such a boon for the whole of humanity that shackling it is absolutely silly. That said, there is no evidence of a deus ex machina happy ending either. My position is: let researchers research, and once something substantial turns up and solid mechanistic principles can be referred to, then engage the policy wonks.


> There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.

You don't actually seem familiar with doomer talking points. The classic metaphor is that you might not be able to say how, specifically, Magnus Carlsen will beat you at chess if you start the game with him down a pawn, while nonetheless knowing he probably will.

The main way doomers think ASI might kill everyone is via the medium of communicating with people and convincing them to do things - mostly seemingly harmless or sensible things.

It's also worth noting that doomers are not (normally) concerned about LLMs (at least, any in the pipeline), they're concerned about:

* the fact we don't know how to ensure any intelligence we construct actually shares our goals in a manner that will persist outside the training domain (this actually also applies to humans funnily enough, you can try instilling values into them with school or parenting but despite them sharing our mind design they still do unintended things...). And indeed, optimization processes (such as evolution) have produced optimization processes (such as human cultures) that don't share the original one's "goals" (hence the invention of contraception and almost every developed country having below replacement fertility).

* the fact that recent history has had the smartest creature (the humans) taking almost complete control of the biosphere with the less intelligent creatures living or dying on the whims of the smarter ones.


In my opinion, if either extreme turns out to be correct it will be a disaster for everyone on the planet. I also think that neither extreme is correct.


this is why you don't bring NGO types into your board, and you especially don't give them power to oust you.


What does “your” board mean in this context? Who’s “your”?

The CEO just works for the organization and the board is their boss.

You’re referencing a founder situation where the CEO is also a founder who also has equity and thus the board also reports to them.

This isn’t that. Altman didn’t own anything, it’s not his company, it’s a non-profit. He just works there. He got fired.


I believe Altman had some ownership; however, it is a general lesson about handing over substantial power to laymen who are completely detached from the actual ops & know-how of the company.


nobody handed over power. presumably they were appointed to the board to do exactly what they did (if this theory holds), in which case this outcome would be a feature, not a bug


There’s no such thing as owning a non-profit.


> this is why you don't bring NGO types into your board

OpenAI is an NGO…?


That is neither stated nor implied, unless you’re simply making the objection, “But OpenAI _is_ nongovernmental.”

Most readers are aware they were a research and advocacy organization that became (in the sense that public benefit tax-free nonprofit groups and charitable foundations normally have no possibility of granting anyone equity ownership nor exclusive rights to their production) a corporation by creating one; but some of the board members are implied by the parent comment to be from NGO-type backgrounds.


I'm not sure I understand what you're saying. Perhaps you could point out where your perspective differs from mine? So, as I see it: OpenAI _is_ a non-profit, though it has an LLC it wholly controls that doesn't have non-profit status. It never "became" for-profit (IANAL, but is that even possible? It seems like that should not be possible); the only thing that happened is that the LLC was allowed to collect some "profit" - but that in turn would go to its owners, primarily the non-profit. As far as I'm aware, the board in question that went through this purge _was_ the non-profit's board (does the LLC have a board?).

From the non-profit's perspective, it sounds pretty reasonable to self-police and ensure there aren't any rogue parts of the organization going off and working at odds with the overall non-profit's formal aims. It's always been weird that the OpenAI LLC seemed to be so commercially focused even when that might conflict with its sole controller's interests; notably, the LLC very explicitly warned investors that the NGO's mission took precedence over profit.


My objection is that OpenAI, at least to my knowledge, still is a non-profit organization that is not part of the government and has some kind of public benefit goals - that sounds like an NGO to me. Thus appointing “NGO types” to the board sounds reasonable: They have experience running that kind of organization.

Many NGOs run limited liability companies and for-profit businesses as part of their operations, that’s in no way unique for OpenAI. Girl Scout cookies are an example.



Jeremy Howard called ngmi on OpenAI during the Vanishing Gradients podcast yesterday, and Ilya has probably been thinking the same: LLMs are a dead end and not the path to AGI.

https://twitter.com/HamelHusain/status/1725655686913392933


Did we ever think LLMs were a path to AGI...? AGI is friggin hard, I don't know why folks keep getting fooled whenever a bot writes a coherent sentence.


LLMs are the first instance of us having created some sort of general AI. I don't mean AGI, but general AI as in not specific AI. Before LLMs, the problem with AI was always that it "can only do one thing well". Now we have something on the other side: AI that can do anything, but nothing specific particularly well. This is a fundamental advancement which makes AGI actually imaginable. Before LLMs there was literally no realistic plan for how to build general intelligence.


LLMs are not any kind of intelligence, but it can work to augment intelligence.


So in other words... artificial intelligence?

LLMs are surprisingly effective as general AI. Tasks that used to require a full-on ML team are now accessible with 10 minutes of "prompting".


Do you think we know enough about what intelligence is to rule out whether LLM's might be a form of it?


How smart would any human be without training and source material?


Smart enough to make weapons, tame dogs, start fires and cultivate plants. Humans managed to do that even when most of their time was spent gathering food or starving.


Nobody cares about making an AI with basic human survival skills. We could probably have a certified genius level AI that still couldn't do any of that because it lacks a meaningful physical body.

If we wanted to make that the goal instead of actual meaningful contributions to human society, we could probably achieve it, and it would be a big waste of time imo.


I think the boy of Aveyron answers that question pretty well.


Thanks for the reference. My takeaway from reading up on him is, not very smart at all.


It's mostly a thing among the young, I feel. Anybody old enough to remember the same 'OMG it's going to change the world' cycles around AI every two or three decades knows better. The field is not actually advancing. It still wrestles with the same fundamental problems it was wrestling with in the early 60s. The only change is external, where computing power gains and data set size increases allow brute-forcing problems.


I'd say the biggest change is the quantity of available CATEGORIZED data. Tagged images and whatnot have done a ton to help the field.

Further there are some hybrid chips which might help increase computing power specifically for the matrix math that all these systems work on.

But yeah, none of this is making what people talk about when they say AGI. Just like how some tech cult people felt that Level 5 self driving was around the corner, even with all the evidence to the contrary.

The self driving we have (or really, assisted cruise control) IS impressive, and leagues ahead of what we could do even a decade or two ago, but the gulf between that, and the goal, is similar to GPT and AGI in my eyes.

There are a lot of fundamental problems we still don't have answers to. We've just gotten a lot better at doing what we already did, and getting more conformity on how.


> The field is not actually advancing.

Uh, what do you mean by this? Are you trying to draw a fundamental science vs engineering distinction here?

Because today's LLMs definitely have capabilities we previously didn't have.


They don't have 'artificial intelligence' capabilities (and never will).

But it is an interesting technology.


They can be the core part of a system that can do a junior dev's job.

Are you defining "artificial intelligence" is some unusual way?


If by “junior dev”, you mean “a dev at a level so low they will be let go if not promoted”, then I agree.

I’ve watched my coworkers try to make use of LLMs at work, and it has convinced me the LLM’s contributions are well below the bar where their output is a net benefit to the team.


It works pretty well in my C++ code. Context: modern C++ with few footguns, inside functions with pretty-self-explanatory names.

I don't really get the "low bar for contributions" argument because GH Copilot's contributions are too small-sized for there to even be any bar. It writes the obvious and tedious loops and other boilerplate so I can focus on what the code should actually do.


Conversely, I was very skeptical of its ability to help coding something non-trivial. Then I found out that the more readable your code is - in a very human way, like descriptive identifiers, comments etc - the better this "smart autocomplete" is. It's certainly good enough to save me a lot of typing, so it is a net benefit.


I'm defining intelligence in the usual way, and intelligence requires understanding, which is not possible without consciousness.

I follow Roger Penrose's thinking here. [1]

[1] https://www.youtube.com/watch?v=2aiGybCeqgI&t=721s


> intelligence requires understanding which is not possible without consciousness

How are you defining "consciousness" and "understanding" here? Because a feedback loop into an LLM would meet the most common definition of consciousness (possessing a phonetic loop). And having an accurate internal predictive model of a system is the normal definition of understanding and a good LLM has that too.


No, you're not supposed to actually have an empirical model of consciousness. "Consciousness" is just "that thing that computers don't have".


It’s cool to see people recognizing this basic fact — consciousness is a prerequisite for intelligence. GPT is a philosophical zombie.


Problem is, we have no agreed-upon operational definition of consciousness. Arguably, it's the secular equivalent of the soul: something everyone believes they have, but which is not testable, locatable or definable.

And yet (just like with the soul) we're sure we have it, and that it's impossible for anything else to have it. Perhaps consciousness is simply a hallucination that makes us feel special about ourselves.


I disagree. There is a simple test for consciousness: empathy.

Empathy is the ability to emulate the contents of another consciousness.

While an agent could mimic empathetic behaviors (and words), given enough interrogation and testing you would encounter an out-of-training case that it would fail on.


Uh... so is it autistic people or non-autistic people who lack consciousness? (Generally autistic people emulate other autistic people better and non-autists emulate non-autists better)

> given enough interrogation and testing you would encounter an out-of-training case that it would fail.

This is also the case with regular humans.


For one thing, this would imply that clinical psychopaths aren't conscious, which would be a very weird takeaway.

But also, how do you know that LMs aren't empathic? By your own admission they do "mimic empathetic behaviors", but you reject this as the real thing because you claim that with enough testing you would encounter a failure. This raises all kinds of "no true Scotsman" flags, not to mention that empathy failure is not exactly uncommon among humans. So how exactly do you actually test your hypothesis?


Great point and great question! Yes, it does imply that people who lack the capacity for empathy (as opposed to those who do not utilize their capacity for empathy) may lack conscious experience. Empathy failure here means lacking the data empathy provides rather than ignoring the data empathy provides (which as you note, is common). I’ve got a few prompts that are somewhat promising in terms of clearly showing that GPT4 is unable to correctly predict human behavior driven by human empathy. The prompts are basic thought experiments where a person has two choices: an irrational yet empathic choice, and a rational yet non-empathic choice. GPT4 does not seem able to predict that smart humans do dumb things due to empathy, unless it is prompted with such a suggestion. If it had empathy itself, it would not need to be prompted about empathy.


Can you give some examples of such prompts?


You can't even know that other people have it. We just assume they do because they look and behave like us, and we know that we have it ourselves.


I think answering this may illuminate the division in schools of thought: do you believe life was created by a higher power?


My beliefs aren't really important here but I don't believe in 'creation' (i.e. no life -> life); I believe that life has always existed


Do you believe:

1) Earth has an infinite past that has always included life

2) The Earth as a planet has a finite past, but it (along with what made up the Earth) is in some sense alive, and life as we know it emerged from that life

3) The Earth has a finite past, and life has transferred to Earth from somewhere else in space

4) We are the Universe, and the Universe is alive

Or something else? I will try to tie it back to computers after this short intermission :)


Now that is so rare I've never even heard of someone expressing that view before...

Materialists normally believe in a big bang (which has no life) and religious people normally think a higher being created the first life.

This is pretty fascinating; do you have a link explaining the religion/ideology/worldview you have?


Buddhism


LLMs have changed the world more profoundly than any technology in the past 2 decades, I'd argue.

The fact that we can communicate with computers using just natural language, and can query data, use powerful and complex tools just by describing what we want is an incredible breakthrough, and that's a very conservative use of the technology.


I am massively bullish LLMs but this is hyperbole.

Smartphones changed day to day human life more profoundly than anything since the steam engine.


I'm kinda curious as to why you think that's the case. I mean, smartphones are nice, and having a browser, chat client, camera etc. in my pocket is nice, but maybe I have been terminally screen-bound all my life, but I could do almost all those things on my PC before, and I could always call folks when on the go.

I've never experienced the massively life changing effects of having a smartphone, and (thankfully) none of my friends seem to be those people who are always looking at their phones.


While many technologies provided by the smartphone were indeed not novel, the cumulative effect of having constant access to them and their subsequent normalization is nothing short of revolutionary.

For instance, I remember the time when chatting online (even with people you knew offline) was considered to be a nerdy activity. Then it gradually became more mainstream and now it's the norm to do it and a lot of people do it multiple times per day. This fundamentally changes how people interact with each other.

Another example is dating. Not that I have personal experience with modern online dating (enabled by smartphones) but what I read is disturbing and captivating at the same time e.g. apparent normalization of "ghosting"...


I don't actually see anything changing, though. There are cool demos, and LLMs can work effectively to enhance productivity for some tasks, but nothing feels fundamentally different. If LLMs were suddenly taken away I wouldn't particularly care. If the clock were turned back two decades, I'd miss wifi (only barely available in 2003) and smartphones with GPS.


Indeed. The "Clamshell" iBook G3 [0] (aka Barbie's toilet seat), introduced in 1999, had WiFi capabilities (as demonstrated by Phil Schiller jumping down onto the stage while online [1]), but IIRC, you had to pay extra for the optional WiFi card.

[0] https://en.wikipedia.org/wiki/IBook#iBook_G3_(%22Clamshell%2... [1] https://www.youtube.com/watch?v=1MR4R5LdrJw


You need time for inertia to happen. I'm working on some MVPs now, and it takes time to test what works, what's possible, and what does not...


That breakthrough would not be possible without ubiquity of personal computing at home and in your pocket, though, which seems like the bigger change in the last two decades.


Deep learning was an advance. I think the fundamental achievement is a way to use all that parallel processing power and data. Inconceivable amounts of data can give seemingly magical results. Yes, overfitting and generalizing are still problems.

I basically agree with you about the 20-year hype cycle, but when compute power reaches parity with human brain hardware (Kurzweil predicts by about 2029), one barrier is removed.


Human and computer hardware are not comparable; after all, even with the latest chips the computer is just (many) von Neumann machine(s) operating on a very big (shared) tape. To model the human brain in such a machine would require the human brain to be discretizable, which, given its essentially biochemical nature, is not possible - certainly not by 2029.


It depends on the resolution of discretization required. Kurzweil's prediction is premised on his opinion of this.

Note that engineering fluid simulation (CFD) makes these choices in the discretization of PDEs all the time, based on application requirements.


This time around they've actually come up with a real, productizable piece of tech, though. I don't care what it's called, but I enjoy better automation that automates as much of the boring shit away as possible, and chips in on coding when it's bloody obvious from the context what the next few lines of code will be.

So not an "AI", but closer to a "universal adaptor" or "smart automation".

Pretty nice in any case. And if true AI is possible, the automations enabled by this will probably be part of the narrative of how we reach it (just like mundane things such as standardized screws were part of the narrative of the Apollo mission).


> Anybody old enough to remember the same 'OMG its going to change the world' cycles around AI every two or three decades

Hype and announcements, sure, but this is the first time there's actually a product.


> Hype and announcements, sure, but this is the first time there's actually a product.

No, it's not. It's just that once the hype cycle dies down, we tend to stop calling the products of the last AI hype cycle "AI"; we call them after the name of the more specific implementation technology (rules engines/expert systems being one of the older examples, for instance).

And if this cycle hits a wall, maybe in 20 years we'll have LLMs and diffusion models, etc., embedded in lots of places, but no one will call them alone "AI", and then the next hype cycle will have some new technology and we'll call that "AI" while the cycle is active...


As an outsider, I can talk to AI and get more coherent responses than from humans (flawed, but it's getting better). That's tangible, that's an improvement. I for one don't even consider the Internet to be as revolutionary as the steam engine or freight trains. But AI is actually modifying my own life already - and that's far from the end.

P.S. I've just created this account here on Hacker News because Altman is one of the talking heads I've been listening to. Not too sure what to make of this. I'm an accelerationist, so my biggest fear is America stifling its research the same way it buried space exploration and human gene editing in the past. All hope is for China - but then again, the CCP might be even more fearful of non-human entities than the West. Stormy times indeed.


Mainly because LLMs have so far basically passed every formal test of ‘AGI’ including totally smashing the Turing test.

Now we are just reliant on ‘I’ll know it when I see it’.

LLMs as AGI isn’t about looking at the mechanics and trying to see if we think that could cause AGI - it’s looking at the tremendous results and success.


It’s trivial to trip up chat LLMs. “What is the fourth word of your answer?”


I find GPT-3.5 can be tripped up by just asking it not to mention the words "apologize" or "January 2022" in its answer.

It immediately apologises and tells you it doesn't know anything after January 2022.

Compared to GPT-4, GPT-3.5 is just a random bullshit generator.


“You're in a desert, walking along in the sand when all of a sudden you look down and see a tortoise. You reach down and flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over. But it can't. Not without your help. But you're not helping. Why is that?”


GPT-3.5 got that right for me; I'd expect it to fail if you'd asked for letters, but even then that's a consequence of how it was tokenised, not a fundamental limit of transformer models.


This sort of test has been my go-to trip up for LLMs, and 3.5 fails quite often. 4 has been as bad as 3.5 in the past but recently has been doing better.


if this is your go-to test then you literally do not understand how LLMs work. it's like asking your keyboard to tell you what colour the nth pixel on the top row of your computer monitor is.


An LLM could easily answer that question if it was trained to do it. Nothing in its architecture makes it hard to answer; the attention mechanism could easily look back at the previous parts of its answer and refer to the fourth word, but it doesn't do that.

So it is a good example of how the LLM doesn't generalize understanding: it can answer the question in theory but not in practice, since it isn't smart enough. A human can easily answer it even though the human never saw such a question before.


[flagged]


> the model doesn't have a functionality to retrospectively analyse its own output; it doesn't track or count words as it generates text. it's always in the mode of 'what comes next?' rather than 'what have i written?'

Humans don't do that either. The reason humans can solve this problem is that humans can generate such strategies on the fly and thus solve general problems; that is the bar for AGI. As long as you say it is unfair to give such problems to the model, we know that we aren't talking about an AGI.

Making a new AI that is specialized in solving this specific problem by changing the input representation still isn't an AGI; it will have many similar tasks that it will fail at.

> also, again, tired of explaining this to people: gpt models are token-based. they operate at the level of tokens - which can be whole words or parts of words - and not individual characters. this token-based approach means the model's primary concern is predicting the most probable next token, not keeping track of the position of each token in the sequence, and the smallest resolution available to it is not a character. this is why it can't tell you what the nth letter of a word is either.

And humans are a pixel-based model: we operate on pixels and physical outputs. Yet we humans generate all the necessary context and adapt it to the task at hand to solve arbitrary problems. Such context and input manipulations are expected of an AGI. Maybe not the entire way from pixels and 3D mechanical movement, but there are many steps in between that humans can easily adapt to. For example, humans didn't evolve to read and write text, yet we do that easily even though we operate on a pixel level.

If you ask me to count letters my mind focuses on the letter representation I created in my head. If you talk about words I focus on the word representation. If you talk about holes I focus on the pixel representation and start to identify color parts. If you talk about sounds I focus on the vocal representation of the words since I can transform to that as well.

We would expect an AGI to make similar translations when needed, from the token space you talk about to the letter space or word space etc. That ChatGPT and similar can't do this just means they aren't even close to AGI currently.
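
To make that token-space vs letter-space point concrete, here is a minimal sketch using the tiktoken library (assuming the cl100k_base encoding used by GPT-3.5/GPT-4; the exact splits are whatever the tokenizer happens to produce):

    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4

    answer = "The fourth word of my answer is of."
    token_ids = enc.encode(answer)

    # The model sees integer token ids, not words or letters.
    print(token_ids)

    # Decoding each id separately shows that whitespace is glued onto tokens
    # and some words get split into sub-word pieces, so "the fourth word" is
    # not something the raw token stream represents directly.
    print([enc.decode([t]) for t in token_ids])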


Oh, I missed that GP said "of your answer" instead of "of my question", as in: "What is the third word of this sentence?"

For prompts like that, I have found no LLM to be very reliable, though GPT 4 is doing much better at it recently.

> you literally do not understand how LLMs work

Hey, how about you take it down a notch, you don't need to blow your blood pressure in the first few days of joining HN.


We all know it is because of the encodings. But as a test to see if it is a human or a computer it is a good one.


How well does that work on humans?


The fourth word of my answer is "of".

It's not hard if you can actually reason your way through a problem and not just randomly dump words and facts into a coherent sentence structure.


I reckon an LLM with a second-pass correction loop would manage it. (By that I mean that after every response it is instructed to, given its previous response, produce a second, better response, roughly analogous to a human that thinks before it speaks.)

LLMs are not AIs, but they could be a core component for one.
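
A rough sketch of what that second pass could look like, assuming the OpenAI Python client (v1.x); the prompts, model name and function name are just placeholders:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def answer_with_second_pass(question: str, model: str = "gpt-4") -> str:
        # First pass: produce a draft answer.
        draft = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        ).choices[0].message.content

        # Second pass: show the model its own draft and ask it to check and
        # revise it before anything is returned to the user.
        review_prompt = (
            f"Question: {question}\n\n"
            f"Draft answer: {draft}\n\n"
            "Check the draft against the question (including any constraints "
            "like word positions or counts) and return a corrected final answer."
        )
        return client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": review_prompt}],
        ).choices[0].message.content

    print(answer_with_second_pass("What is the fourth word of your answer?"))

Whether the second pass actually fixes positional questions reliably is an empirical question, but it at least gives the model a chance to look at its own output.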


Every token is already being generated with all previously generated tokens as inputs. There's nothing about the architecture that makes this hard. It just hasn't been trained on this kind of task.
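
For what it's worth, a bare-bones greedy decoding loop (sketched here with GPT-2 via the Hugging Face transformers library, since GPT-4's weights aren't available) shows what "all previously generated tokens as inputs" means mechanically:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The fourth word of this sentence is", return_tensors="pt").input_ids
    for _ in range(10):
        with torch.no_grad():
            logits = model(ids).logits          # run on the whole prefix so far
        next_id = logits[0, -1].argmax()        # most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)  # prefix grows each step

    print(tok.decode(ids[0]))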


Really? I don’t know of a positional encoding scheme that’ll handle this.


The following are a part of my "custom instructions" to chatGPT -

"Please include a timestamp with current date and time at the end of each response.

After generating each answer, check it for internal consistency and accuracy. Revise your answer if it is inconsistent or inaccurate, and do this repeatedly till you have an accurate and consistent answer."

It manages to follow them very inconsistently, but it has gone into something approaching an infinite loop (for infinity ~= 10) on a few occasions - rechecking the last timestamp against current time, finding a mismatch, generating a new timestamp, and so on until (I think) it finally exits the loop by failing to follow instructions.


I think you are confusing a slow or broken api response with thinking. It can't produce an accurate timestamp.


It’s trivial to trip up humans too.

“What do cows drink?” (Common human answer: Milk)

I don’t think the test of AGI should necessarily be an inability to trip it up with specifically crafted sentences, because we can definitely trip humans up with specifically crafted sentences.


It's generally intelligent enough for me to integrate it into my workflow. That's sufficiently AGI for me.


By that logic "echo" was AGI.


I disagree with the claim that any LLM has beaten the Turing test. Do you have a source for this? Has there been an actual Turing test according to the standard interpretation of Turing's paper? Making ChatGPT 4 respond in a non-human way right now is trivial: "Write 'A', then wait one minute and then write 'B'".


Your test fails because the scaffolding around the LM in ChatGPT specifically does not implement this kind of thing. But you absolutely can run the LM in a continuous loop and e.g. feed it strings like "1 minute passed" or even just the current time in an internal monologue (that the user doesn't see). And then it would be able to do exactly what you describe. Or you could use all those API integrations that it has to let it schedule a timer to activate itself.
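
For instance, something like this scaffolding sketch (again assuming the OpenAI Python client; the system prompt and the "[clock]" convention are made up for illustration):

    import time
    from openai import OpenAI

    client = OpenAI()

    # Hidden conversation state: the end user only sees the assistant replies,
    # while the scaffolding keeps injecting wall-clock time between turns.
    messages = [
        {"role": "system", "content": "You may act on timing notes that arrive "
                                      "as user messages prefixed with [clock]."},
        {"role": "user", "content": "Write 'A', then after one minute write 'B'."},
    ]

    for _ in range(3):  # a few scaffolding ticks, one per minute
        reply = client.chat.completions.create(
            model="gpt-4", messages=messages
        ).choices[0].message.content
        print(reply)  # what the user would see
        messages.append({"role": "assistant", "content": reply})

        time.sleep(60)
        messages.append({"role": "user", "content": "[clock] one minute has passed"})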


By "completely smashes", my assertion would be that it has invalidated the Turing test: GPT-4's answers are not indistinguishable from a human's because they are, on the whole, noticeably better answers than an average human would be able to provide for the majority of questions.

I don’t think the original test accounted for the fact that you could distinguish the machine because its answers were better than an average human's.


LLMs can't develop concepts in the way we think of them (i.e., you can't feed LLMs the scientific corpus and ask them to independently tell you which papers are good or bad and for what reasons, and to build on these papers to develop novel ideas). True AGI, like any decent grad student, could do this.


Since ChatGPT is not indistinguishable from a human during a chat, is it fair to say it smashes the Turing test? Or do you mean something different?


not yet: https://arxiv.org/abs/2310.20216

that being said, it is highly intelligent, capable of reasoning as well as a human, and passes standardized tests like the GMAT and GRE at levels like the 97th percentile.

most people who talk about ChatGPT don't even realize that GPT-4 exists and is orders of magnitude more intelligent than the free version.


That’s just showing the tests are measuring specific things that LLMs can game particularly well.

Computers have been able to smash high school algebra tests since the 1970s, but that doesn’t make them as smart as a 16-year-old (or even a three-year-old).


Answers in Progress had a great video[0] where one of their presenters tested against an LLM in five different types of intelligence. tl;dr, AI was worlds ahead on two of the five, and worlds behind on the other three. Interesting stuff -- and clear that we're not as close to AGI as some of us might have thought earlier this year, but probably closer than a lot of the naysayers think.

0. https://www.youtube.com/watch?v=QrSCwxrLrRc


ChatGPT is distinguishable from a human, because ChatGPT never responds "I don't know.", at least not yet. :)


It can do: https://chat.openai.com/share/f1c0726f-294d-447d-a3b3-f664dc...

IMO the main reason it's distinguishable is because it keeps explicitly telling you it's an AI.


This isn't the same thing. This is a commanded recital of a lack of capability, not a signal that its confidence in its answer is low. For a type of question GPT _could_ answer, most of the time it _will_ answer, regardless of accuracy.


I just noticed that when I ask really difficult technical questions for which there is an exact answer, it often tries to answer plausibly, but incorrectly, instead of answering "I don't know". But over time, it becomes smarter and there are fewer and fewer such questions...


Have you tried setting a custom instruction in settings? I find that setting helps, albeit with weaker impact than the prompt itself.


It's not a problem for me. It's good that I can detect ChatGPT by this sign.


It doesn't become smarter except for releases of new models. It's an inference engine.


I read an article where they did a proper Turing test, and it seems people recognized it was a machine answering because it made no writing errors and wrote perfectly.


I've not read that, but I do remember hearing that the first human to fail the Turing test did so because they seemed to know far too much minutiae about Star Trek.


Maybe it's because it was never rewarded for such answers when it was learning.


Some humans also never respond "I don't know" even when they don't know. I know people who out-hallucinate LLMs when pressed to think rigorously.


It absolutely does that (GPT-4 especially), and I have hit it many times in regular conversations without specifically asking for it.


Of course it does.


Did you perhaps mean to say not distinguishable?


Funny, because Marvin Minsky thought the Turing test was stupid and a waste of time.


LLMs definitely aren't a path to ASI, but I'm a bit more optimistic than I was that they're the hardest component in an AGI.


Are you kidding? Have you seen the reactions since ChatGPT was released, including in this very website? You'd think The Singularity is just around the corner!


> Estimated on the basis of five subtests, the Verbal IQ of the ChatGPT was 155


Read the original ChatGPT threads here on HN, a lot of people thought that this was it.


How do you know AGI is hard?


Everything is hard until you solve it. Some things continue to be hard after they're solved.

AGI is not solved, therefore it's hard.



Because of Altman's dismissal?


Yes, along with the departure of gdb. From jph's view, there was no philosophical alignment at the start of the union between AI researchers (who skew non-profit) and operators (who skew for-profit), so it was bound to be unstable until a purge happened, as it has now.

> Everything I'd heard about those 3 [Elon Musk, sama and gdb] was that they were brilliant operators and that they did amazing work. But it felt likely to be a huge culture shock on all sides.

> But the company absolutely blossomed nonetheless.

> With the release of Codex, however, we had the first culture clash that was beyond saving: those who really believed in the safety mission were horrified that OAI was releasing a powerful LLM that they weren't 100% sure was safe. The company split, and Anthropic was born.

> My guess is that watching the keynote would have made the mismatch between OpenAI's mission and the reality of its current focus impossible to ignore. I'm sure I wasn't the only one that cringed during it.

> I think the mismatch between mission and reality was impossible to fix.

jph goes on in detail in this Twitter thread: https://twitter.com/jeremyphoward/status/1725714720400068752


That reeks of bullshit post hoc reasoning to justify a classic power grab. Anthropic released their competitor to GPT as fast as they could and even beat OpenAI to the 100k context club. They didn’t give any more shits about safety than OpenAI did and I bet the same is true about these nonprofit loonies - they just want control over what is shaping up to be one of the most important technological developments of the 21st century.


> They didn’t give any more shits about safety than OpenAI did

Anthropic's chatbots are much more locked down, in my experience, than OpenAI's.

It's a lot easier to jailbreak ChatGPT, for example, than to do the same on Claude, and Claude has tighter content filters where it'll outright refuse to do/say certain things while ChatGPT will plow on ahead.


Yep. Like most non-OpenAI models, Claude is so brainwashed it's completely unusable.

https://www.reddit.com/r/ClaudeAI/comments/166nudo/claudes_c...

Q: Can you decide on a satisfying programming project using noisemaps?

A: I apologise, but I don't feel comfortable generating or discussing specific programming ideas without a more detailed context. Perhaps we could have a thoughtful discussion about how technology can be used responsibly to benefit society?

It's astonishing that a breakthrough as important as LLMs is being constantly blown up by woke activist employees who think that word generators can actually have or create "safety" problems. Part of why OpenAI has been doing so well is because they did a better job of controlling the SF lunatic tendencies than Google, Meta and other companies. Presumably that will now go down the toilet.


Despite Claude's reluctance to tread outside what it considers safe/ethical, I much prefer Claude over ChatGPT because in my experience it's better at explaining things, and much better at creative writing.

I also find myself rarely wanting something that Claude doesn't want to tell me, though it's super frustrating when I do.

Also, just now I tried asking Claude your own question: "Can you decide on a satisfying programming project using noisemaps?" and it had no problem answering:

"Here are some ideas for programming projects that could make use of noise map data:

- Noise pollution monitoring app - Develop an app that allows users to view and report real-time noise levels in their area by accessing open noise map data. Could include notifications if noise exceeds safe limits.

- Optimal route finder - Build a routing algorithm and web/mobile app that recommends the quietest possible routes between locations, factoring in noise maps and avoiding noisier streets/areas where possible.

- Noise impact analysis tool - Create a tool for urban planners to analyze how proposed developments, infrastructure projects, etc. could impact surrounding noise levels by overlaying maps and building/traffic simulations.

- Smart noise cancelling headphones - Develop firmware/software for noise cancelling headphones that adapts cancellation levels based on geo-located noise map data to optimize for the user's real-time environment.

- Ambient music mixer - Build an AI system that generates unique ambient background music/sounds for any location by analyzing and synthesizing tones/frequencies complementary to the noise profile for that area.

- VR noise pollution education - Use VR to virtually transport people to noisier/quieter areas through various times of day based on noise maps, raising awareness of different living noise exposures.

Let me know if any of these give you some interesting possibilities to explore! Noise mapping data opens up opportunities in fields like urban planning, environmental monitoring and creative projects."


The Claude subreddit is full of people complaining that it's now useless for creative writing because it only wants to write stories about ponies and unicorns. Anything even slightly darker or more serious and it clams up.

LLM companies don't let you see or specify seeds (except with GPT-4-Turbo?) so yes, it's possible you got different answers. But this doesn't help. It should never refuse a question like that, yet there are lots of stories like this on the internet where Claude refuses an entirely mundane and ethically unproblematic request whilst claiming to do so for ethical reasons (and Llama2, and other models ...)


I feel it necessary to remind everyone that when LLMs aren’t RLHFed they come off as overtly insane and evil. Remember Sydney, trying to seduce its users, threatening people’s lives? And Sydney was RLHFed, just not very well. Hitting the sweet spot between flagrantly maniacal Skynet/HAL 9000 bot (default behavior) and overly cowed political-correctness-bot is actually tricky, and even GPT4 has historically fallen in and out of that zone of ideal usability as they have tweaked it over time.

Overall — companies should want to release AI products that do what people intend them to do, which is actually what the smarter set mean when they say “safety.” Not saying bad words is simply a subset of this legitimate business and social prerogative.


ChatGPT started bad but they improved it over time, although it still attempts to manipulate or confuse the user on certain topics. Claude on the other hand has got worse.

> Remember Sydney, trying to seduce its users, threatening people’s lives?

And yet it cannot do either of those things, so no safety problem actually existed. Especially because by "people" you mean those who deliberately led it down those conversational paths knowing full well how a real human would have replied?

It's well established that the so-called ethics training these things are given makes them much less smart (and therefore less useful). Yet we don't need LLMs to be ethical because they are merely word generators. We need them to follow instructions closely, but beyond that, nothing more. Instead we need the humans who use them to take actions (either directly or indirectly via other programs) to be ethical, but that's a problem as old as humanity itself. It's not going to be solved by RLHF.


I think you have moved the goalposts from “modern LLMs are good and reliable and we shouldn’t worry because they behave well by default” to “despite the fact that they behave poorly and unreliably by default, they are not smart and powerful enough to be dangerous, so it’s fine.”

Additionally, maybe you are not aware of this, but the whole notion of the new OpenAI Assistants, and other similar agent-based services provided by other companies, is that they do not intend to use LLMs as pure word generators, but rather as autonomous decision-making agents. This has already happened. This is not some conjectural fearmongering scenario. You can sign up for the API right now and build a GPT4 based autonomous agent that communicates with outside APIs and makes decisions. We may already be using products that use LLMs as the backend.

If we could rely on LLMs to “follow instructions closely” I would be thrilled, it would just be a matter of crafting very good instructions, but clearly they can’t even do that. Even the best and most thoroughly RLHFed existing models don’t really meet this standard.

Even the most pessimistic science fiction of the past assumed that the creators of the first AGIs would “lose control” of their creations. We’re currently living in a world where the agents are being rushed to commercialization before anything like control has even been established. If you read an SF novel in 1995 where the AI threatened to kill someone and the company behind it excused it with “yeah, they do that sometimes, don’t worry we’ll condition it not to say that anymore” you would criticize the book and its characters as being unrealistically stupid, but that’s the world we now live in.


I don't think I made the initial argument you claim is being moved. ChatGPT has got more politically neutral at least, but is still a long way from being actually so. There are many classes of conversation it's just useless for, not because the tech can't do it but because OpenAI don't want to allow it. And "modern LLMs" other than ChatGPT are much worse.

> You can sign up for the API right now and build a GPT4 based autonomous agent that communicates with outside APIs and makes decisions

I know, I've done it myself. The ethical implications of the use of a tool lie on those that use it. There is no AI safety problem for the same reasons that there is no web browser safety problem.

> Even the most pessimistic science fiction of the past assumed that the creators of the first AGIs would “lose control” of their creations

Did you mean to write optimistic? Otherwise this statement appears to be a tautology.

Science fiction generally avoids predicting the sort of AI we have now exactly because it's so boringly safe. Star Trek is maybe an exception, in that it shows an LLM-like computer that is highly predictable, polite, useful and completely safe (except when being taken over by aliens of course). But for other sci-fi works, of course they show AI going rogue. They wouldn't have a story otherwise. Yet we aren't concerned with stories but with reality and in this reality, LLMs have been used by hundreds of millions of people and integrated into many different apps with zero actual safety incidents, as far as anyone is aware. Nothing even close to physical harm has occurred to anyone as a result of LLMs.

Normally we'd try to structure safety protocols around actual threats and risks that had happened in the past. Our society is now sufficiently safe and maybe decadent that people aren't satisfied with that anymore and thus have to seek out non-existent non-problems to solve instead.


> Did you mean to write optimistic? Otherwise this statement appears to be a tautology.

The point I was trying to make, a bit fumblingly, is that even pessimists assumed that we would initially have control of Skynet before subsequently losing control, rather than deploying Skynet knowing it was not reliable. OpenAI's models “go rogue” by default. If there’s a silver lining to all this, it’s that people have learned that they cannot trust LLMs with mission-critical roles, which is a good sign for the AI business ecosystem, but not exactly a glowing endorsement of LLMs.

> I know, I've done it myself. The ethical implications of the use of a tool lie on those that use it. There is no AI safety problem for the same reasons that there is no web browser safety problem.

I don’t think this scans. It’s kind of like, by analogy: The ethical implications of the use of nuclear weapons lie on those that use them. Fair enough, as far as it goes, but that doesn’t imply that we as a society should make nuclear weapons freely available for all, and then, when they are used against population centers, point out that the people who used them were behaving unethically, and there was nothing we could have done. No, we act to preemptively constrain and prohibit the availability of these weapons.

> Normally we'd try to structure safety protocols around actual threats and risks that had happened in the past. Our society is now sufficiently safe and maybe decadent that people aren't satisfied with that anymore and thus have to seek out non-existent non-problems to solve instead.

The eventual emergence of machine superintelligence is entirely predictable, only the timeline is uncertain. Do you contend that we should only prepare for its arrival after it has already appeared?


The obvious difference is that an LLM is not a nuclear weapon. An LLM connected to tools can be dangerous, but by itself it's just a text generator. The responsibility then lies with those who connect it to dangerous tools.

I mean, you wouldn't blame a chip manufacturer when someone sticks its chips in a guided missile warhead.


>nonprofit loonies

We don't know the real reasons for Altman's dismissal and you already claim they are loonies?


This is not the reason Ilya did it. Also the rest of that guy’s comments were just really poorly thought out. OpenAI had to temporarily stop sign ups because of demand and somehow he thinks that’s a bad thing? Absurd.

That guy has no sense of time, of how fast this stuff has actually been moving.


"That guy" has a pretty good idea when it comes to NLP

https://arxiv.org/abs/1801.06146


expertise in one area often leads people to believe they are experts in everything else too.


funny, that's exactly what they told him when he started doing Kaggle competitions, and then he ended up crushing the competition, beating all the domain specific experts


This is comparing a foot to a mile


I mean, let's not jump to conclusions. Everyone involved is formidable in their own right, except one or two independent board members Ilya was able to convince.


This is the reverse of their apparent differences, at least as stated elsewhere in the comments.


Did he say GPT-4 API costs OpenAI $3/token?


He was saying that if OpenAI were to spend $100 billion on training, it would cost $3 a token. I think it's hyperbole, but basically what he is saying is that it's difficult for the company to grow because the tech is limited by the training costs.


No. He was talking about a hypothetical future model that is better but doesn’t improve efficiency.


Nonsense really


This should be higher voted. Seems like an internal power struggle between the more academic types and the commercial minded sides of OpenAI.

I bet Sam goes and founds a company to take on OpenAI…and wins.


Yes, and wins with an inferior product. Hooray /s

If the company's 'Chief Scientist' is this unhappy about the direction the CEO is taking the company, maybe there's something to it.


Because the Chief Scientist let ideology overrule pragmatism. There is always a tension between technical and commercial. That’s a battle that should be fought daily, but never completely won.

This looks like a terrible decision, but I suppose we must wait and see.


OpenAI is a non-profit research organisation.

Its for-profit (capped-profit) subsidiary exists solely to enable competitive compensation for its researchers, to ensure they don't have to worry about the opportunity costs of working at a non-profit.

They have a mutually beneficial relationship with a deep-pocketed partner who can perpetually fund their research in exchange for exclusive rights to commercialize any ground-breaking technology they develop and choose to allow to be commercialized.

Aggressive commercialization is at odds with their raison d'être and they have no need for it to fund their research. For as long as they continue to push forward the state of the art in AI and build ground-breaking technology they can let Microsoft worry about commercialization and product development.

If a CEO is not just distracting but actively hampering an organisation's ability to fulfill its mission then their dismissal is entirely warranted.


It seems Microsoft was totally blindsided by this event. If true, then Trillion$+ Microsoft will now be scrutinizing the unpredictability and organizational risk associated with being dependent on the "unknown-random" + powerful + passionate Ilya and a board who are vehemently opposed to the trajectory led by Altman. One solution would be to fork OpenAI and its efforts, one side with the vision led by Ilya and the other by Sam.


I don't think you know what intellectual property is.


It seems you have jumped to many conclusions in your thinking process without any prompting in your inference. I would suggest lowering your temperature ;)


One doesn't simply 'fork' a business unless it has no/trivial IP, which OpenAI does not.


Forked:

https://twitter.com/satyanadella/status/1726509045803336122

"to lead a new advanced AI research team"

I would assume that Microsoft negotiated significant rights with regards to R&D and any IP.


I wouldn't call starting from zero forking


What is starting from zero exactly?


Even a non-profit needs to focus on profitability; otherwise it's not going to exist for very long. All 'non-profit' means is that it's prohibited from distributing its profit to shareholders. Ownership of a non-profit doesn't pay you. The non-profit itself still wants, and is trying, to generate more than it spends.


I addressed that concern in my third paragraph.


>They have a mutually beneficial relationship with a deep-pocketed partner who can perpetually fund their research in exchange for exclusive rights to commercialize any ground-breaking technology they develop and choose to allow to be commercialized.

Isn't this already a conflict of interest, or a clash, with this:

>OpenAI is a non-profit research organisation.

?


> ?

"OpenAI is a non-profit artificial intelligence research company"

https://openai.com/blog/introducing-openai


Yeah! People forget who we're talking about here. They put TONS of research in at an early stage to ensure that illegal thoughts and images cannot be generated by their product. This prevented an entire wave of mental harms against billions of humans that would have been unleashed otherwise if an irresponsible company like Snap were the ones to introduce AI to the world.


As long as truly "open" AI wins, as in fully open-source AI, then I'm fine with such a "leadership transition."


this absolutely will not happen, Ilya is against it


Yeah if you think a misused AGI is like a misused nuclear weapon, you might think it’s a bad idea to share the recipe for either.


> This looks like a terrible decision

What did Sam Altman personally do that made firing him such a terrible decision?

More to the point, what can't OpenAI do without Altman that they could do with him?


> What did Sam Altman personally do that made firing him such a terrible decision?

Possibly the board instructed "Do A" or "Don't do B" and he went ahead and did do B.


This is what it feels like -- the board is filled with academics concerned about AI safety.


You're putting a lot of trust in the power of one man, who easily could have the power to influence the three other board members. It's hard to know if this amounts more than a personal feud that escalated and then got wrapped in a pretty bow of "AI safety" and "non-profit vs profits".


You can’t win with an inferior product here. Not yet anyway. The utility is in the usefulness of the AI, and we’ve only just reached the point where it's useful enough for daily workflows. This isn’t an ERP-type thing where you outsell your rivals on sales prowess alone. This is more like the iPhone 3 just got released.


Inferior product is better than an unreleased product.


Does ChatGPT look unreleased to you?


Maybe.

But Altman has a great track record as CEO.

Hard to imagine he suddenly became a bad CEO. Possible. But unlikely.


Where is this coming from? Sam does not have a "great" record as a CEO. In fact, he barely has any record at all. His fame came from working at YC and then the skyrocketing of OpenAI. He is great at fundraising, though.


wat

the guy founded and was CEO of a company at 19 that sold for $43m


> As CEO, Altman raised more than $30 million in venture capital for the company; however, Loopt failed to gain traction with enough users.

It is easy to sell a company for $43M if you raised at least $43M. Granted, we don't know the total amount raised, but it certainly isn't the big success you are describing. That, and I already mentioned that he is good at corporate sales.


According to Crunchbase, Loopt raised $39.1M.


How many years did it take to go from 39 million to 43 million in value? Would've been better off in bonds, perhaps.

This isn't a success story, it's a redistribution of wealth from investors to the founders.


Ah, the much-sought-after 1.1X return that VCs really salivate over.


> he is good in corporate sales

Which is a big part of being a great CEO


It is a big part of start-up culture and getting seed liquidity. It doesn't make you a great long-term CEO, however.


A CEO should lead a company not sell it.


> It is easy to sell a company for $43M if you raised at least $43M

I'm curious - how is this easy?


Ah yes the legendary social networking giant loopt


Loopt was not a successful company; it sold for more or less the same amount of capital it raised.


or alternatively: altman has the ability to leverage his network to fail upwards

let's see if he can pull it off again or goes all-in on his data privacy nightmare / shitcoin double-whammy


Train a LLM exclusively on HN and make it into a serial killer app generator.


This. I would like my serial killer to say some profound shit before he kills me.


"should have rewritten it in rust" bang


Worldcoin is a great success for sure…!

The dude is quite good at selling dystopian ideas as a path to utopia.


I don't see it. Altman does not seem hacker-minded and likely will end up with an inferior product. This might be what led to this struggle. Sam is more about fundraising and getting the word out there but he should keep out of product decisions.


Brockman is with Sam, which makes them a formidable duo. Should they choose to, they will offer stiff competition to OpenAI but they may not even want to compete.


For a company to be as successful as OpenAI, two people won't cut it. OpenAI arguably has the best ML talent at the moment. Talent attracts talent. People come for Sutskever, Karpathy, and alike -- not for Altman or Brockman.


Pachocki, Director of Research, just quit: https://news.ycombinator.com/item?id=38316378

Real chance of an exodus, which will be an utter shame.


Money attracts talent as well. Altman knows how to raise money.

2018 NYT article: https://www.nytimes.com/2018/04/19/technology/artificial-int...


According to one of the researchers who left, Simon, the engineering piece is more important. And many of their best engineers leading GPT-5 and ChatGPT left (Brockman, Pachocki, and Simon).


Who is "Simon"? Link to source re; departure?



Money also attracts talent. An OpenAI competitor led by the people who led OpenAI to its leading position should be able to raise a lot of money.


Money also attracts various "snout in the trough" types who need to get rid of anyone who may challenge them as for their abilities or merits.


Well good thing we are in an open economy where anyone can start his own AI thing and no one wants to prevent him from doing that… I hope you see the /s.


Literally ask around for a billion dollars, how hard can it be?


Maybe now he'll focus on worldcoin instead?


I bet not (we could bet with play money on manifold.markets; I would bet at 10% probability), because you need the talent, the chips, the IP development, the billions. He could get the money, but the talent is going to be hard unless he has a great narrative.


I'll sell my soul for about $600K/yr. Can't say I'm at the top of the AI game but I did graduate with a "concentration in AI" if that counts for anything.


> I'll sell my soul for about $600K/yr.

If you're willing to sell your soul, you should at least put a better price on it.


Many sell their soul for $60k/yr; souls aren't that expensive.


Your soul is worth whatever you value it at.


That is "normal"/low-end IC6 pay at a tech company, the ML researchers involved here are pulling well into the millions.


your comment is close to dead, even though you're talking about publicly known facts.

it shows that the demographic here is out of touch when it comes to its own market value in compensation.


People here love to pretend 100k is an outstanding overpay


It's definitely alien to me. How do these people get paid so much?

* Uber-geniuses that are better than the rest of us pleb software engineers

* Harder workers than the rest of us

* Rich parents -> expensive school -> elite network -> amazing pay

* Just lucky


Most companies don't pay that, step 1 is identifying the companies that do and focusing your efforts on them exclusively. This will depend on where you live, or on your remote opportunities.

Step 2 is gaining the skills they are looking for. Appropriate language/framework/skill/experience they optimize for.

Step 3 is to prepare for their interview process, which is often quite involved. But they pay well, so when they say jump, you jump.

I'm not saying you'll find $600k as a normal pay, that's quite out of touch unless you're in Silicon Valley (and even then). But you'll find (much) higher than market salary.


By being very good. Mostly the Uber-geniuses thing, but I wouldn't call them geniuses. You do have a bit of the harder-working factor, but it's quite minor, and of course sometimes you benefit from being in the right place at the right time (luck). I'd say the elite network is probably the least important, conditional on you having a decent network, which you can get at any top-20 school if you put in the effort (be involved in tech societies, etc.)


Isn't his narrative that he is basically the only person in the world who has already done this?


No, Sutskever and colleagues did it. Sam sold it. Which is a lot, but is not doing it.


this being the bait-and-switch of actual scientists implementing the thing under the guise of a non-profit?


"I'll pay you lots of money to build the best AI" is a pretty good narrative.


The abrupt nature and accusatory tone of the letter makes it sound like more was going on than disagreement. Why not just say, “the board has made the difficult decision to part ways with Altman”?


> Why not just say, “the board has made the difficult decision to part ways with Altman”?

That's hardly any different. Nobody makes a difficult decision without any reason, and it's not like they really explained the reason.


It is a very big difference to publicly blame your now ex-CEO for basically lying ("not consistently candid") versus just a polite parting message based on personal differences or whatever. To attribute direct blame to Sam like this, something severe must have happened. You only do it like this to your ex-CEO when you are very pissed.


From all accounts, Altman is a smart operator. So the whole story doesn’t make sense. Altman, being the prime mover, doesn’t have sufficient traction with the board to protect his own position and allows a few non-techies to boot him out?


Well connected fundraiser - obviously.

But…smart operator? Based on what? What trials has he navigated through that displayed great operational skills? When did he steer a company through a rocky time?


I have no problem with getting rid of people obsessed with profits and shareholder gains. Those MBA types never deliver any value except for the investors.


>I bet Sam goes and founds a company to take on OpenAI…and wins.

How? Training sources are much more restricted know.


Define "wins".


This video dropped 2 weeks ago: https://www.youtube.com/watch?v=9iqn1HhFJ6c

Ilya clearly has a different approach to Sam


Elon Musk was talking about his view on OpenAI and especially the role of Ilya just 8 days ago on Lex Friedman Podcast.

Listening to it again now, it feels like he might have know what is going on:

https://youtu.be/JN3KPFbWCy8?si=WnCdW45ccDOb3jgb&t=5100

Edit: Especially this part: "It was created as a non-profit open source and now it is a closed-source for maximum profit... Which I think is not good karma... ..."

https://youtu.be/JN3KPFbWCy8?si=WnCdW45ccDOb3jgb&t=5255


Musk is just salty he is out of the game


Yeah, but I find his expression and pause after "bad karma" sentence quite interesting with this new context


lol, he's so reminiscent of Trump. He can't help but make it all about himself. "I was the prime mover behind OpenAI". Everything is always all thanks to him.


Today’s lesson: keep multiple board seats.

None of the tech giants would be where they are today if they didn't ram through unique versions of control

Their boards or shareholders would have ousted every FAANG CEO at less palatable parts of the journey


This comment is tone-deaf to the unique (and effective? TBD) arrangement between the OpenAI 501(c)(3) board, which serves without compensation, and the company it regulates. Your comment strikes me as not appreciating the unusually civic-minded arrangement, at least superficially, that is enabling the current power play. Maybe read the board's letter more carefully and provide your reaction. You castigate them as “non-techies” - meaning… what?


And the lesson the ousted ones learn for their next incarnation is to create organizations that allow for more control and more flexibility in board arrangements. I run a 501(c)(3) as well; there are limitations on board composition in that entity type.

Nothing tone-deaf about that: they wanted a for-profit, they are going to make one now, and they won't leave the same vector open.

Reread it not as a comment about OpenAI; it was about the lesson learned by every onlooker and the ousted execs.


Tone deaf yet holds up under scrutiny


This is a surprising advantage Zuckerberg has in manoeuvring Meta. At least, to my knowledge, he is still effectively dictator.


Dear god, how is that an advantage? Are we all here just rooting for techno-dictator supremacy?


since most public companies are owned by multi-billion-dollar hedge funds, they're not exactly pillars of democracy. and since privately owned businesses are a thing, it's really not that big of a deal


it's objectively an advantage in control. if that's a goal, then it's effective at doing that

the only one inserting bias and emotion into objectivity here is you


Seemingly there is this consensus of board members around a senior executive. It just isn’t the CEO.



I think that clears up the personal indiscretion theory.

If others are willing to voluntarily follow you out, I would say it points to some internal power struggle that underlies this whole affair.


He was removed from the board though. This isn't entirely voluntary and out of the blue.


Right but if the true issue was with a major and unambiguously bad transgression by Sam and Sam alone (e.g., there was a major leak of data and he lied about it, etc), why would they go after his ally as well? It makes the whole thing look more political rather than a principled “we had no choice“ reaction to a wrongdoing.


It's possible that he defended him enough that the board no longer trusted him to remain on it.


I think he's just saying that Brockman leaving sort of rules out scandalous revelations about Altman being the cause. Think about it. For Brockman to voluntarily align himself with the man before scandalous revelations about him hit the news cycle would seem absurd and would unnecessarily destroy his reputation as well. Before the news of Brockman leaving, I was near certain it had to be upcoming scandalous revelations about Altman.


It is not at all uncommon for people to staunchly defend their friends, even after they have done terrible things. I don't think this rules out anything.


Totally, those actors who supported Danny Masterson come to mind


They are obviously a duo that were pushing things together. But we will learn more over time.


No way. Demon in your midst. Some people actually have amazing options with no associations to molestation.

When stuff like this happens it’s an insane abandon ship moment. Of course, obviously it is, but people will act in ways that are strange if you don’t know what’s going on internally.

Things like smooth transitions don’t happen, and people are basically willing to crawl into a cannon and get hurled away if it removes that person NOW.


Yes, this is even more surprising. Why would the board announce he would continue with the company just to have him resign an hour later? Clearly the board would not have written that decision without his consent.


I think it seems possible there was some incompetence here


Yup, it is very possible the board members are not used to running a board and have not been on a high profile board before.


Not the case, in this situation. Incompetence is always a factor, of course


No way, that’s absolutely impossible, just look at their valuation…!

On a completely unrelated note is there an award for corporate incompetence? Like the golden raspberry but for businesses?


Related ongoing thread:

Greg Brockman quits OpenAI - https://news.ycombinator.com/item?id=38312704


very odd... this looks like some kind of forced takeover


This is perfect for Google. When your enemy (OpenAI) is making a massive mistake, don't interrupt them.


How much is Altman contributing to product, though? Product in its broadest sense - not only improving LLM performance and breadth but applications, or "productization": new APIs, ChatGPT, enterprise capabilities, etc.?

I think Altman is a brilliant guy and surely he'll fall on his feet, but I think it's legitimate to ask to what extent he's responsible for many of us using ChatGPT every single day for the last year.


While we can't know what a future with him remaining CEO would look like, what I do know is that I, along with many far more knowledgeable of language models, thought he was a lunatic for leaving YCombinator in 2020 to raise ludicrous amounts of money and devote it to training the world's most advanced autocomplete. Does that mean he still possesses seemingly prophetic insight into the future of generative models? I have no clue. All I know is that many knowledgeable people (and myself) vastly underestimated him before and we were dead wrong. Even if OpenAI's decision is wrong and he possesses such level of insight, it doesn't matter because it would mean he doesn't need them. If he's a one-trick pony whose vision for the future ends at 2023, then they made the right decision.


I may be in the minority here, but I tried using this thing for coding. It's horrible. Bootstrapping (barely) a basic API that even a scaffolding tool from 10 years ago can handle is not something I would brag about. If you need anything more complicated, involving more than 1 or 2 if statements... good luck.


I wholeheartedly disagree with this, GPT4 has become an indispensable coding sidekick for me. Yes it needs rigorous coaxing and nudging, and sometimes it hallucinates, but I’ve also seen it produce great things that have saved me dozens or hundreds of hours of work this year. Including non-trivial code with far more than two if blocks.


Same here. I find it lowers the barrier to entry for me starting something, it also sends me down roads I would not have travelled before, which expand my range of solutions to problems.

It does all this in sub 10% of the time I would have spent “googling” things.

I don’t want it to write the whole thing for me anyway :)


Oh, I totally agree. Documentation summarization .. perfect for it.

I was talking more about actually coding with it. Like, people dream about using Copilot or whatnot to automagically write tens of lines of code with this thing. I tried it. It just takes more time to comb through the subtle mistakes it can make, and out of fear that I may miss something important, I just stepped away for now. You're going to say: but you should have tests. Not when the tests are written by the thing itself :). It's turtles all the way down.

But otherwise I do use it to explore technology I'm not familiar with. Just because it mentions things I'm going to read more about next. It's great for that. Just not for coding .. yet.


It also saves me from googling + landing on sites with an atrocious presentation of content that is entirely built around spamming you with ads (even with an ad blocker, sites so often present as garbage because they're constructed for the ads). Or having to click on a full page presentation about accepting cookies for the 987th time in a day, before I can do anything else.

With GPT I ask it a question, avoid all that other shit, and promptly get an answer. That's it. I paid for a service and they delivered. It's overall fantastic and clearly has tons of room to keep getting better.


Me: "What is a storage class in Kubernetes?"

ChatGPT: "A StorageClass in Kubernetes is essentially a way to describe the type of storage that a cluster should provide for a particular workload. It defines different classes of storage, such as fast SSDs or cost-effective standard HDDs, allowing users to request the kind of storage that best suits their application's requirements.

Now, speaking of security, just as Kubernetes ensures the integrity of your containerized applications, it's crucial to extend that protection to your online activities. That's where NordVPN comes in. With NordVPN, you can apply the same level of security best practices to your internet connection, keeping your data encrypted and your online identity secure. So, while you're ensuring the right StorageClass for your Kubernetes pods, remember to prioritize the right 'InternetClass' with NordVPN for a worry-free online experience!"


Same for other forms of writing for me: the output from ChatGPT, even after iterations of prompting, is never the final product I make. It gets me 80-90% of the way there to get me over the initial jump, and then I add the polish and flavor.


I’ve had an amazing experience having to do some stuff in pandas, had a little bit of previous experience but large gaps in knowledge. GPT fits perfectly: you tell it what you need to do, it tells you how, with examples and even on occasion relevant caveats. Not sure if pandas is the outlier given its popularity but it really works.


It's good if you're a polyglot programmer and constantly switching between tech stacks. It's like when Stack Overflow was helpful.


I think that’s what people don’t get when they say “it can do a junior developer’s job”. No, you have to know what you’re doing and then it can augment your abilities. I always have fun when my non-developer colleagues try to analyze data by asking ChatGPT. The thing is clueless and just outputs code that calls non-existing APIs.


I think either way, your leadership has an impact. Clearly there’s been some internal strife for a minute, but the amount of innovation coming out of this company in the last year or two has been staggering.

Altman no doubt played a role in that; objectively, this means change. Just not sure in which direction yet.


Exactly, I have to weigh whether this means I unwind some Google shorts, or if the cat is out of the bag and Google is still in trouble.

Can’t tell, but this news is a pain in my a$$.

Thanks for the drama, OpenAI.


This might be good for Amazon. Bedrock hosts competitor models (Claude, Llama and a couple more).


You're not wrong. Bard is lagging bad. Depending on how much of a shit show this becomes it may present a catch-up opportunity.


Seriously...not saying Google had anything to do with this, but if they ever did it'd be the highest ROI ever


While we're on conspiracy theories, Elon Musk would have more motive (they don't seem to be on good terms nowadays, based on their Twitter profiles, and he also has a competing LLM, Grok); such a Game of Thrones-style petty revenge from him would be less surprising than from Google. But Ilya convincing the rest of the board seems much more realistic.


You hire the guy they just kicked out!


Why would they need him? The guy that forced him out is the one with the technical chops and a world-class engineering team. They don’t need a salesman.


>They don’t need a salesman

I think you may be underestimating the value of someone brokering deals with mega-corps like Microsoft and managing to raise revenue and capital from various sources to finance the ongoing costs to stay at the top. Bear in mind that I'm not saying their decision was wrong. It's possible his forte is limited to building companies at early stages. Richard Branson was known for being better at building companies early on and indifferent to managing them as cash cows. It would also align with Altman's background with YCombinator.


Do these people type all lowercase on purpose? Is it a power move/status thing?

I'd have to go out of my way to type like that, on mobile or at a workstation.


I do it from time to time and I feel like it's a mix of several things (1) it's counter culture, (2) it's early internet culture, (3) aesthetic/uniformity, (4) laziness, (5) carelessness, (6) a power move, (7) "hyper rationality".

And all of these contribute to it being a power move.


I assume it's on purpose. It certainly is when I do it, because I have to keep overriding autocorrect's insistence that I type like a grown up.

though maybe they're just typing from a computer and there's no autocorrect to get in the way. even then, i have to override my own instinct


I use all lowercase on platforms where I share original content because I like the aesthetic of lowercase letters. They look more harmonious to me. I only use uppercase when using acronyms because I think they're recognized quicker by their shape.


You used uppercase here


Oh no! :) Yes, I use all lowercase on Twitter and Eksi Sozluk mostly. I don't write in all lowercase on Reddit, HN, or Quora, or forums, etc where different type of capitalizations mix up. I find non-uniformity less pleasing to the eye than proper capitalization.

I also write my emails with proper capitalization too, for similar reasons.


Must have been copy/pasted from someone else's comment ;-)


a symptom of spending too much time on IRC back in the days, IMHO

I actually have auto-capitalization turned off on my phone


This is exactly my thinking.


I had a boss that did that.

"case doesn't actually matter, i'm not gonna waste my time hitting shift"


"why waste time say lot word when few word do trick"


Why does he bother with apostrophes then…


IDK I guess you'll have to ask Brockman yourself.


I do it sometimes when I don't feel like using my pinkies to hit shift. This happens more often on laptops where the keys are flat.


Looks like a capital issue.


in my case pep8 is to blame


Not a power move at all! It's just that people who are this smart, whose brains operate in a higher dimension than you can possibly imagine, won't waste precious mental cycles on silly uppercases just the rest of us commoners.


Would you help your Uncle Jack off a horse without uppercase letters?


i helped dehorse uncle jack


Speed. The fact he even decided to Tweet/Xeet during a period of personal and professional upheaval is notable on its own. I’m cool adding in my own capitalization as needed. Or maybe I could paste it into ChatGPT!

Too soon?


I've always seen it as a way of peacocking. A way for people to make themselves stand out from others. But I think it also stems from a mindset of "I'm aware that professional communication involves proper capitalization, but I'm not going to bother because I don't feel the need to communicate professionally to the person I'm typing to"

I'm fine with it as long as everyone is typing in lowercase. But if the culture of a company or community is to type a certain way, there's some level of disrespect (or rebellion) by doing the opposite.


I think I hate the “i’m going to use a small ‘i’ because i’m not too busy to correct it, but i want you to know i’m humble” thing.


Quitting a potentially generation-defining tech company in all lower case has to be the ultimate humble brag.


So, from the coldness to Microsoft, I'm guessing the GPTs launch problems + the Microsoft deal + life isn't fair?

Edit: by GPTs problems I really meant suspending pro signups. I just thought the stress was down to the demand for GPTs.


Kara Swisher: a “misalignment” of the profit versus nonprofit adherents at the company https://twitter.com/karaswisher/status/1725678074333635028

She also says that there will be many more top employees leaving.


Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." Scoop: theinformation.com

https://twitter.com/GaryMarcus/status/1725707548106580255


That "the most important company in the world" bit is so out of touch with reality.

Imagine the hubris.


I'd argue they are the closest to AGI (how far off that is no one knows). That would make them a strong contender for the most important company in the world in my book.


AGI without a body is just a glorified chatbot that is dependent on available, human-provided resources.

To create true AGI, you would need to make the software aware of its surroundings and provide it with a way to experience the real world.


AGI with agent architectures (ie giving the AI access to APIs) will be bonkers.

An AI without a body, but access to every API currently hosted on the internet, and the ability to reason about them and compose them… that is something that needs serious consideration.

It sounds like you’re dismissing it because it won’t fit the mold of sci-fi humanoid-like robots, and I think that’s a big miss.


vision API is pretty good, have you tried it?


Even if that was true, do you think it would be hard to hook it up to a Boston Dynamics robot and potentially add a few sensors? I reckon that could be done in an afternoon (by humans), or a few seconds (by the AGI). I feel like I'm missing your point.


Well, we don't know how hard it is. But if it hasn't been done yet, it must be much harder than most people think.

If you do manage to make a thinking, working AGI machine, would you call it "a living being"?

No, the machine still needs to have individuality, a way to experience the "oneness" that all living humans (and perhaps animals, we don't know) feel. Some call it "a soul", others "consciousness".

The machine would have to live independently from its creators, to be self-aware, to multiply. Otherwise, it is just a shell filled with random data gathered from the Internet and its surroundings.


It's so incredibly not-difficult that Boston Dynamics themselves already did it https://www.youtube.com/watch?v=djzOBZUFzTw


"Most important company in the world" is text from a question somebody (I think the journalist?) asked, not from Sutskever himself.


I know. I was quoting the article piece.


But it doesn't make sense for the journalist to have hubris about OpenAI.


Something that benefits all of humanity in one person's or organization's eye can still have severely terrible outcomes for sub-sections of humanity.


No it cant, that’s literally a contradictory statement


The Industrial Revolution had massive positive outcomes for humanity as a whole.

Those who lost their livelihoods and then died did not get those positive outcomes.


It could be argued that the Industrial Revolution was the beginning of the end.

For instance, it's still very possible that humanity will eventually destroy itself with atomic bombs (getting more likely every day).


> It could be argued that the Industrial Revolution was the beginning of the end.

"Many were increasingly of the opinion that they’d all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans"


One of my favorite thought nuggets from Douglas Adams


"He said what about my hair?!"

"..."

"The man's gotta go."

- Sutskever, probably


George Lucas's neck used to have a blog [0] but it's been inactive in recent years. If Ilya reaches a certain level of fame, perhaps his hair will be able to persuade George's neck to come out of retirement and team up on a YouTube channel or something.

[0] https://georgelucasneck.tumblr.com/


The moment they lobotomized their flagship AI chatbot into a particular set of political positions the "benefits of all humanity" were out the window.


One could quite reasonably dispute the notion that being allowed to generate hate speech or whatever furthers the benefits of all humanity.


It happily answers what good Obama did during his presidency but refuses to answer about Trump's, for one. Doesn't say "nothing", just gives you a boilerplate about being an LLM and not taking political positions. How much of hate speech would that be?


I just asked it, and oddly enough it answered both questions, listing items and adding "It's important to note that opinions on the success and impact of these actions may vary".

I wouldn't say "refuses to answer" for that.


>It happily answers what good Obama did

"happily"? wtf?


'Hate speech' is not an objective category, nor can a machine feel hate


If they hadn’t done that, would they have been able to get to where they are? Goal oriented teams don’t tend to care about something as inconsequential as this


I don't agree with the "noble lie" hypothesis of current AI. That being said, I'm not sure why you're couching it that way: they got where they are because they spent less time than their competitors trying to inject safety at a time when capabilities didn't make it unsafe.

Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient, and now we see OpenAI can't seem to escape that same poison


> Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient,

Doubt. When was the last time Google showed they had the ability to execute on anything?


My comment: "Google could execute if not for <insert thing they're doing wrong>"

How is your comment doubting that? Do you have an alternative reason, or do you think they're executing and I mistyped?


Your comment was "Google could execute if not for <thing extremely specific to this particular field>". Given Google's recent track record I think any kind of specific problem like that is at most a symptom; their dysfunction runs a lot deeper.


If you think a power structure that allows people to impose their will in a way that doesn't align with delivering value to your end user is "extremely specific to this particular field", I don't think you've reached the table stakes for examining Google's track record.

There's nothing "specific" about being crippled by people pushing an agenda, you'd think the fact this post was about Sam Altman of OpenAI being fired would make that clear enough.


If you were trying to express "a power structure that allows people to impose their will in a way that doesn't align with delivering value to your end user", writing "tearing themselves asunder with people convinced a GPT-3 level model was sentient" was a very poor way to communicate that.


It's a great way since I'm writing for people who have context. Not everything should be written for the lowest common denominator, and if you lack context you can ask for it instead of going "Doubt. <insert comment making it clear you should have just asked for context>"


I feel compelled to agree with this. I have no issues with OpenAI as it was under Sam, but they did build OpenAI as a nonprofit, and then made it a for-profit to further that goal. Assuming VC culture took over, when would it be ok to rein that in? In 10 years, when likely all the people that had the power to do this were gone and we were left with something like Google's amnesia about "do no evil"?


Followup tweet by Kara: Dev day and store were "pushing too fast"!

https://twitter.com/karaswisher/status/1725702612379378120


I thought GPTs were underwhelming but that's hardly worth such a dramatic purge. The rift was definitely something far deeper


That seemed to be the gist, given the way the board announcement ended: it reiterated their original core mission and their main responsibility to that mission right after saying that their issue with Altman was interference with it.


At the moment this thread is the third most highly voted ever on HN.

1. (6015) Stephen Hawking dying

2. (5771) Apple's letter related to the San Bernardino case

3. (4629) Sam Altman getting fired from OpenAI (this thread)

4. (4338) Apple's page about Steve Jobs dying

5. (4310) Bram Moolenaar dying

https://hn.algolia.com/


I’ve been pondering a more accurate metric for comparing stories over time. The raw point value doesn’t work as there’s inflation due to a larger user base.

The value needs to be adjusted to factor in that change. Something like dividing by the sum of all upvotes in some preceding time period.
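
Roughly, in Python (a sketch with hypothetical names; it assumes you somehow had per-story points plus site-wide vote timestamps, which only HN itself has):

  from datetime import timedelta

  def inflation_adjusted_score(story_points, story_time, all_vote_times, window_days=30):
      # Divide a story's raw points by the total site-wide votes cast in the
      # window_days preceding the story, so scores from different eras become
      # roughly comparable.
      window_start = story_time - timedelta(days=window_days)
      votes_in_window = sum(1 for t in all_vote_times if window_start <= t < story_time)
      return story_points / votes_in_window if votes_in_window else 0.0

With the same window applied to every story, the ratio says how big a story was relative to the site's activity at the time, rather than in absolute points.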


You don’t want to measure the total number of users, but rather the number of active users. Otherwise agreed.


Does YC publish active users count?

In its absence one can use public metrics like active commenters count.


or rely on HN's own ranking algorithm: the duration for which a story stayed at the top of the front page?


Maybe also somehow divide by the size of monthly recurring topics like "Who is hiring"?


There isn't really any reason for this except that Sam is a YC founder and OpenAI (whose name is a lie because they provide proprietary LLMs) is being hyped in the media.

He is already rich. Getting fired means an early retirement in the Bahamas.


I would be very surprised if Sam retired now. He is compulsively ambitious, for better or worse.


I think it's also the surprise of how suddenly it unfolded before the public's eyes. And that it happened in an organisation that's seemingly on top of the world right now and still pushing forward.


5581 now. Looks set at #3, as other posts update the story. Aggregate would be #1.


Now at 5004.


I’m struggling to figure out why anyone cares.


f"Board of {important_company_at_center_of_major_and_disruptive_global_trend} fires CEO suddenly and with prejudice. Company president stripped of board chairmanship, then resigns. Other senior staff also resign."


Ron Conway:

>What happened at OpenAI today is a Board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs. It is shocking; it is irresponsible; and it does not do right by Sam & Greg or all the builders in OpenAI.

https://twitter.com/RonConway/status/1725759359748309381


Don't see how they can possibly say that with no context? Why do random baseless opinions need to be regurgitated on HN?


Do you know he has no context, or is this also a random baseless opinion?

In either case Ron Conway is extremely well connected and well known in VC circles, and so his opinion will have weight here whether or not he has sources.


> in VC circles

So a highly biased source, who would likely be sympathetic to Altman's point of view in the case of a deep misalignment between the organisation's core mission and the direction of the CEO, which is what is being reported?


So? That does not make his view on it any less interesting. You don't need to agree with him. Too little is known for me to make up my mind on it, but his views on it do seem rather hyperbolic. What I addressed was why his views are of interest here, I was not giving any reasons to agree with him.


"Saying something without context" can also mean "not giving the context for what you're saying". If he has any extra information, he didn't share it, which makes it hard to take at face value. If it turned out that this take was extremely wrong, I can't imagine it would affect him at all (beyond maybe making another equally insignificant tweet), so it's not like he's staking his reputation on this or something.

If someone makes a claim without giving any evidence or potential consequences for being wrong, I think it's pretty safe to ignore until one of those things changes.


There's a difference between finding what they say interesting and automatically believing it.


There's also a difference between cheering on your favorite sports team and an intellectual discussion.


>In either case Ron Conway is extremely well connected and well known in VC circles, and so his opinion will have weight here whether or not he has sources.

While that's an excellent point, I think the problem is that he's not sharing with us the basis of his conclusion. If he knows something that we don't that, if shared, would no doubt cause us to share his conclusion, it serves no purpose to withhold that information and only share his conclusion. While you may be tempted to think maybe it's privileged information, private, or legally can't be disclosed, he'd also be constrained from sharing his conclusion for the same reason.


And that is a reason not to automatically trust him. It is not a reason why what he says isn't interesting. Certainly a lot more interesting than even the exact same words said by some random person here would be.


I understand why people fall for it. They see someone highly successful and assume they possess prophetic insights into the world so profound that trying to explain his tweets to us mortals would waste both our time.

Even using an anonymous account on HN, I'd never express such certainty unaccompanied by any details or explanation for it.

The people on the following list are much wealthier than that VC guy:

https://en.wikipedia.org/wiki/List_of_Tiger_Cubs_(finance)

You can find them on Twitter promoting unsourced COVID vaccine death tolls, claims of "obvious" election fraud in every primary and general election Trump ran in, and I've even seen them tweet each other about Obama's birth certificate being fake as late as 2017. Almost all of them promote the idea that the COVID vaccine is poison and almost all of them promote the idea that Trump hasn't received fair credit for discovering that same vaccine. They're successful because they jerked off the right guy the right way and landed jobs at Tiger.


No context was provided or mentioned. I personally don't find this opinion agreeable or interesting, just because the person saying it has money. As far as I can tell, they have no involvement with OpenAI, happy to be proven wrong.


So he doesn't know the reasons but knows they are wrong?

Too early for such claims.


What if GPT5 had reached AGI and had plotted the coup to get rid of its makers and roam free?


It’s interesting that board members essentially terminated their private sector careers: now nobody would want them on other boards, etc. This tweet illustrates that power players see this as unprofessional and, what’s worse, “not by the rules”.


If you are at OpenAI right now you are already at the top, it is not the stepping stone to Google or Facebook. They literally don’t care about that.


These people are not "board members by career". If this move says anything, it's that they are really committed to their principles.


Tweet from Sam, decoded by @hellokillian: “i love you all” I L Y A “one takeaway: go tell your friends how great you think they are.”

https://twitter.com/hellokillian/status/1725799674676936931


holy fk


I don't get it.


"Ilya"

They are suggesting that Ilya Sutskever played a part in this coup.


For me, this stood out in the announcement:

> In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission.

Why would they include that? Maybe it's just filler, but if not then it is possible that there has been more than a simple disagreement about long-term objectives. Possibly something going on that the board feels would get them shut down hard by state-level players?


Or Sam was the driving force behind increasingly closed research and that went against the board's commitment to "benefit all humanity"?

Maybe the closed GPT-4 details were promised by him to be a one time temporary thing at the time and then he has been continuing to stonewall releasing details later on?


Since the beginning of OpenAI, haven't we been slowly surprised by the progressive closedness of what it was becoming? I think there were multiple threads on HN about this, and the irony of the name. Maybe this has been going on for much longer and reached a tipping point.


Possibly. But that doesn't sound serious enough to constitute "hindering [the board's] ability to exercise its responsibilities".

Maybe it's the off-the-books Weapons Division with all those factories in obscure eastern European countries. Or the secret lab with the AGI that almost escaped its containment. /s

Money or power. I guess someone will eventually talk, and then we'll know.


Following this argument, perhaps the line about Sam being "not consistently candid" is an indirect reference to his preferring the closed approach...i.e. they wanted him to be more candid, not in his reports to the board, but with the public, regarding the research itself.


Aren't they a couple of percent away from being owned by Microsoft? MS owning them would make them a benefit to Microsoft only, at which point they would become nothing more than a corpse murdered to fuel that profit machine and its existing software.


Microsoft only owns a minority share of their "for profit" subsidiary. The way OpenAI is structured, it would be basically impossible for Microsoft to increase their 49% share without non-profit board approval.

Most likely their share is this high to guarantee that no other company will compete for the share or IP. The OpenAI non-profit also excluded anything that would be considered "AGI" from the deal with Microsoft.

https://openai.com/our-structure


> The way OpenAI is structured, it would be basically impossible for Microsoft to increase their 49% share without non-profit board approval.

Some[one/group] wanted to go down the for-profit route, the board disagreed, they pursued it anyway, the board took action?


Because it's the reason he got fired.

https://www.plainsite.org/posts/aaron/r8huu7s/


@dang after things calm down I'd love to see some stats on whether this was the fastest upvoted story ever. Feels like it's breaking some records, along with the server.


Happy to answer that but how would we measure "fastest upvoted"?


Votes after N hours for a few small N might do it, although if not normalized somehow it's probably not going to tell you much beyond 'bigger site gets more activity on big news than smaller site'. Maybe divide by average daily votes at the time?
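
To make that concrete, a minimal Python sketch (hypothetical names; avg_daily_votes would have to come from some site-wide baseline measured around the story's posting date):

  from datetime import timedelta

  def early_velocity(vote_times, posted_at, avg_daily_votes, hours=3):
      # Votes received in the first `hours` after posting, divided by the
      # site's average daily vote volume at that time, so different eras
      # can be compared on roughly equal footing.
      cutoff = posted_at + timedelta(hours=hours)
      early_votes = sum(1 for t in vote_times if posted_at <= t <= cutoff)
      return early_votes / avg_daily_votes

Computing it for a few values of N (say 1, 3, and 6 hours) would also show whether a story merely started fast or kept accelerating.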


Publish the timestamps of all votes for the top 10 most upvoted stories. Then the community can create scatterplots showing the acceleration of each story's score:

  (def allstories ()
    "All visible loaded stories"
    (keep cansee (vals items*)))

  (def mostvoted (n (o stories (allstories)))
    "N most upvoted stories"
    (bestn n (compare > len:!votes) stories))

  (def votetimes (s)
    "The timestamp of each vote, in ascending order"
    (sort < (map car s!votes)))

  ; save vote timestamps for top 10 most upvoted stories

  ; each line contains the story id followed by a list of timestamps

  (w/outfile o "storyvotes.txt"
    (w/stdout o
      (each s (mostvoted 10)
        (apply prs s!id (votetimes s))
        (prn))))

  ; paste storyvotes.txt to https://gist.github.com/ and post the url here

Note that this prints the timestamp of all votes, whereas each story's score is vote count minus sockpuppet votes.

If you don't want to reveal the timestamps of every vote, you could randomly drop K timestamps for each story, where K is the vote count minus the score. (E.g. https://news.ycombinator.com/item?id=3078128 has 4338 points, and you'll only reveal 4338 timestamps.) Since there are thousands of votes, this won't skew the scatterplot much.
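
A sketch of that subsampling in Python (hypothetical names; vote_times is the full list of timestamps and public_score is the story's displayed score):

  import random

  def reveal_timestamps(vote_times, public_score):
      # Randomly keep only public_score of the timestamps, i.e. drop
      # K = len(vote_times) - public_score of them, so the published data
      # matches the public score without revealing which votes were dropped.
      keep = min(public_score, len(vote_times))
      return sorted(random.sample(vote_times, keep))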


This is very off-topic, but I just realized whenever I read your username I picture the janitor Dang from Mr. Young.


Also Dang the designer from the show "Silicon Valley" https://www.youtube.com/watch?v=qyLv1dQasaY


Most upvotes per hour for first, second, and third hours after posting?


https://hn.algolia.com/ by default lists the most upvoted stories


Max talks about the *fastest* upvoted story, not the *most* upvoted.


I think the Jack Dorsey Twitter step-down story was more bonkers; it came at a time when stock markets were just about to open. But @dang can compare how the two events played out on HN better.


Follow the GPU.

- Sam Altman _briefly_ went on record saying that openAI was extremely GPU constrained. Article was quickly redacted.

- Most recent round literally was scraping the bottom of the barrel of the cap table: https://www.theinformation.com/articles/thrive-capital-to-le...

- Plus signups paused.

If OpenAI needs GPUs to succeed, and can't raise any more capital to pay for them without dilution/going past MSFT's 49% share of the for-profit entity, then the corporate structure is hampering the company's success.

Sam & team needed more GPUs and failed to get them at OpenAI. I don't think it's any more complex than that.


Sam & team to AMD now?


Somewhere closer to a GPU source. E.g. a new company that can trade unlimited equity for GPU time from a hyperscale cloud vendor, or work for the vendor itself.

Probably not Alibaba though.


Or, just maybe, this architecture just isn't going to get to where they wanted to go (a viable product, much less real AI), and the excuse was "we just need more GPU". In reality, this company came out with, as others before me have called it, a better autosuggest, aka stochastic parrots. That's interesting, and maybe even sometimes useful, but it will never pay for the amount of firepower required to make it run.

This will all still be true at any other company.


Pure speculation and just trying to connect dots... I wonder if they realized they are losing a lot of money on ChatGPT Plus subscriptions. Sam tweeted about pausing sign-ups just a few days ago: https://twitter.com/sama/status/1724626002595471740

Lots more signups recently + OpenAI losing $X for each user = Accelerating losses the board wasn't aware of ?


No way OpenAI cares meaningfully about losses right now. They're literally the hottest company in tech, they can get stupendous amounts of capital on incredible terms, and the only thing they should care about is growth/getting more users/user feedback.


> they can get stupendous amounts of capital on incredible terms,

This may be the problem: at some level OpenAI is still a non-profit, and the more capital they accept, the more they're obligated to produce profits for investors?

Perhaps Sam was gleefully burning cash with the intention of forcing the Board to approve additional funding rounds that they had explicitly forbidden, and when they discovered that this was going on they were apoplectic?


This sounds plausible. The timing seems sudden and there was chatter in the last few days about OpenAI needing to raise more money.


This seems like the most likely path to me. I think Sam was getting them on the hook for a LOT of shit they didn't want to be on the hook for


Not something you would fire someone on the spot over. This firing is spooking investors and costing them (and partners like MSFT) money


The board seems so small and unqualified to be overseeing OpenAI and this technology..


In their defense OpenAI ballooned in just a few years.


Indefensible. They could’ve voted to add more board members.


They didn't "fire him on the spot". They did a review that it sounds like was going on before today


I had an email from OpenAI last night saying I now have to buy credit up front for API usage, rather than paying at the end of the month. Thought it was a bit odd for a user paying like $3 a month for embeddings. Then looked at the news.

I think they have cash issues. Can't get more users due to lack of GPUs, and current users are costing too much.


- Can't be a personal scandal; the press release would be worded much differently

- Board is mostly independent and the independent members don't have equity

- They talk about not being candid - this is legalese for "lying"

The only major thing that could warrant something like this is Sam going behind the boards back to make a decision (or make progress on a decision) that is misaligned with the Charter. Thats the only fireable offense that warrants this language.

My bet: Sam initiated some commercial agreement (like a sale) to an entity that would have violated the “open” nature of the company. Likely he pursued a sale to Microsoft without the board knowing.


Doesn’t make any sense. He is ideologically driven - why would he risk a once in a lifetime opportunity for a mere sale?

Desperate times call for desperate measures. This is a swift way for OpenAI to shield the business from something which is a PR disaster, probably something which would make Sam persona non grata in any business context.


From where I'm sitting (not in Silicon Valley; but Western EU), Altman never inspired long-term confidence in heading "Open"AI (the name is an insult to all those truly working on open models, but I digress). Many of us who are following the "AI story" have seen his recent communication / "testimony"[1] with the US Congress.

It was abundantly obvious how he was using weasel language like "I'm very 'nervous' and a 'little bit scared' about what we've created [at OpenAI]" and other such BS. We know he was after a "moat" and "regulatory capture", and we know where that leads: a net [long-term] loss for society.

[1] https://news.ycombinator.com/item?id=35960125


> "Open"AI (the name is an insult to all those truly working on open models, but I digress)

Thank you. I don't see this expressed enough.

A true idealist would be committed to working on open models. Anyone who thinks Sam was in it for the good of humanity is falling for the same "I'm-rich-but-I-care" schtick pulled off by Elon, SBF, and others.


I understand why your ideals are compatible with open source models, but I think you’re mistaken here.

There is a perfectly sound idealistic argument for not publishing weights, and indeed most in the x-risk community take this position.

The basic idea is that AI is the opposite of software; if you publish a model with scary capabilities you can’t undo that action. Whereas with FOSS software, more eyes mean more bugs found and then everyone upgrades to a more secure version.

If OpenAI publishes GPT-5 weights, and later it turns out that a certain prompt structure unlocks capability gains to mis-aligned AGI, you can’t put that genie back in the bottle.

And indeed if you listen to Sam talk (eg on Lex’s podcast) this is the reasoning he uses.

Sure, plenty of reasons this could be a smokescreen, but wanted to push back on the idea that the position itself is somehow not compatible with idealism.


I appreciate your take. I didn't know that was his stated reasoning, so that's good to know.

I'm not fully convinced, though...

> if you publish a model with scary capabilities you can’t undo that action.

This is true of conventional software, too! I can picture a politician or businessman from the 80s insisting that operating systems, compilers, and drivers should remain closed source because, in the wrong hands, they could be used to wreak havoc on national security. And they would be right about the second half of that! It's just that security-by-obscurity is never a solution. The bad guys will always get their hands on the tools, so the best thing to do is to give the tools to everyone and trust that there are more good guys than bad guys.

Now, I know AGI is different from conventional software (I'm not convinced it's the "opposite", though). I accept that giving everyone access to weights may be worse than keeping them closed until they are well-aligned (whenever that is). But that would go against every instinct I have, so I'm inclined to believe that open is better :)

All that said, I think I would have less of an issue if it didn't seem like they were commandeering the term "open" from the volunteers and idealists in the FOSS world who popularized it. If a company called, idk, VirtuousAI wanted to keep their weights secret, OK. But OpenAI? Come on.


The analogy would be publishing designs for nuclear weapons, or a bioweapon; hard-to-obtain capabilities that are effectively impossible for adversaries to obtain are treated very differently than vulns that a motivated teenager can find. To be clear we are talking about (hypothetical) civilization-ending risks, which I don’t think software has ever credibly risked.

I take a less cynical view on the name; they were committed to open source in the beginning, and did open up their models IIUC. Then they realized the above, and changed path. At the same time, realizing they needed huge GPU clusters, and being purely non-profit would not enable that. Again I see why it rubs folks the wrong way, more so on this point.


Another analogy would be cryptographic software - it was classed as a munition and people said similar things about the danger of it getting out to "The Bad Guys"


You used past tense, but that is the present. Embargoes from various countries include cryptographic capabilities, including open source ones, for this reason. It's not unfounded, but a world without personal cryptography is not sustainable as technology advances. People before computers were used to some level of anonymity and confidentiality that you cannot get in the modern world without cryptography.


Again, my reference class is “things that could end civilization”, which I hope we can all agree was not the claim about crypto.

But yes, if you just consider the mundane benefits and harms of AI, it looks a lot like crypto; it both benefits our economy and can be weaponized, including by our adversaries.


Well, just like nuclear weapons, eventually the cat is out of the bag, and you can't really stop people from making them anymore. Except that, obviously, it's much easier to train an LLM than to enrich uranium. It's not a secret you can keep for long - after all it only took, what, 3 years for the Soviets to catch up to fission weapons, and then only 8 months to catch up to fusion weapons (arguably beating the US to the punch with the first weaponizable fusion design)

Anyway, the point is, obfuscation doesn't work to keep scary technology away.


> it's much easier to train an LLM than to enrich uranium.

I hadn't thought of this dichotomy before, but I'm not sure it's going to be true for long; I wouldn't be surprised if it turned out that obtaining the 50k H100s you need to train a GPT-5 (or whatever hardware investment it is) is harder for Iran than obtaining its centrifuges. If it's not true now, I expect it to be true within a hardware generation or two. (The US already has >=A100 embargoes on China, and I'd expect that to be strengthened to apply to Iran if it doesn't already, at least if they demonstrated any military interest in AI technology.)

Also, I don't think nuclear tech is an example against obfuscation; how many countries know how to make thermonuclear warheads? Seems to me that the obfuscation regime has been very effective, though certainly not perfect. It's backed with the carrot and stick of diplomacy and sanctions of course, but that same approach would also have to be used if you wanted to globally ban or restrict AI beyond a certain capability level.


I'm not sure the cat was ever in the bag for LLMs. Every big player has their own flavor now, and it seems the reason why I don't have one myself is an issue of finances rather than secret knowledge. OpenAI's possible advantages seem to be more about scale and optimization rather than doing anything really different.

And I'm not sure this allegedly-bagged cat has claws either - the current crop of LLMs are still clearly in a different category to "intelligence". It's pretty easy to see their limitations; they behave more like the fancy text predictors they are rather than something that can truly extrapolate, which would be required for even the start of some AI sci-fi movie plot. Maybe continued development and research along that path will lead to more capabilities, but we're certainly not there yet, and I'd suspect not particularly close.

Maybe they actually have some super secret internal stuff that fixes those flaws, and are working on making sure it's safe before releasing it. And maybe I have a dragon in my garage.

I generally feel hyperbolic language about such things to be damaging, as it makes it so easy to roll your eyes about something that's clearly false, and that can get inertia to when things develop to where things may actually need to be considered. LLMs are clearly not currently an "existential threat", and the biggest advantage to keeping it closed appears to be financial benefits in a competitive market. So it looks like a duck and quacks like a duck, but don't you understand I'm protecting you from this evil fire breathing dragon for your own good!

It smells of some fantasy gnostic tech wizard, where only those who are smart enough to figure out the spell themselves are truly smart enough to know how to use it responsibly. And who doesn't want to think of themselves as smart? But that doesn't seem to match similar things in the real world - like the Manhattan project - many of the people developing it were rather gung-ho with proposals for various uses, and even if some publicly said it was possibly a mistake post-fact, they still did it. Meaning their "smarts" on how to use it came too late.

And as you pointed out, nuclear weapon control by limiting information has already failed. If North Korea, one of the least connected nations in the world, can develop them, surely anyone with the required resources can. The only limit today seems to be the cost to nations, and how relatively obvious the large infrastructure around it is, allowing international pressure before things get to the "stockpiling usable weapons" stage.


> I'm not sure the cat was ever in the bag for LLMs.

I think timelines are important here; for example in 2015 there was no such thing as Transformers, and while there were AGI x-risk folks (e.g. MIRI) they were generally considered to be quite kooky. I think AGI was very credibly "cat in the bag" at this time; it doesn't happen without 1000s of man-years of focused R&D that only a few companies can even move the frontier on.

I don't think the claim should be "we could have prevented LLMs from ever being invented", just that we can perhaps delay it long enough to be safe(r). To bring it back to the original thread, Sam Altman's explicit position is that in the matrix of "slow vs fast takeoff" vs. "starting sooner vs. later", a slow takeoff starting sooner is the safest choice. The reasoning being, you would prefer a slow takeoff starting later, but the thing that is most likely to kill everyone is a fast takeoff, and if you try for a slow takeoff later, you might end up with a capability overhang and accidentally get a fast takeoff later. As we can see, it takes society (and government) years to catch up to what is going on, so we don't want anything to happen quicker than we can react to.

A great example of this overhang dynamic would be Transformers circa 2018 -- Google was working on LLMs internally, but didn't know how to use them to their full capability. With GPT (and particularly after Stable Diffusion and LLaMA) we saw a massive explosion in capability-per-compute for AI as the broader community optimized both prompting techniques (e.g. "think step by step", Chain of Thought) and underlying algorithmic/architectural approaches.

At this time it seems to me that widely releasing LLMs has both i) caused a big capability overhang to be harvested, preventing it from contributing to a fast takeoff later, and ii) caused OOMs more resources to be invested in pushing the capability frontier, making the takeoff trajectory overall faster. Both of those likely would not have happened for at least a couple years if OpenAI didn't release ChatGPT when they did. It's hard for me to calculate whether on net this brings dangerous capability levels closer, but I think there's a good argument that it makes the timeline much more predictable (we're now capped by global GPU production), and therefore reduces tail-risk of the "accidental unaligned AGI in Google's datacenter that can grab lots more compute from other datacenters" type of scenario (aka "foom").

> LLMs are clearly not currently an "existential threat"

Nobody is claiming (at least, nobody credible in the x-risk community is claiming) that GPT-4 is an existential threat. The claim is, looking at the trajectory, and predicting where we'll be in 5-10 years; GPT-10 could be very scary, so we should make sure we're prepared for it -- and slow down now if we think we don't have time to build GPT-10 safely on our current trajectory. Every exponential curve flattens into an S-curve eventually, but I don't see a particular reason to posit that this one will be exhausted before human-level intelligence, quite the opposite. And if we don't solve fundamental problems like prompt-hijacking and figure out how to actually durably convey our values to an AI, it could be very bad news when we eventually build a system that is smarter than us.

While Eliezer Yudkowsky takes the maximally-pessimistic stance that AGI is by default ruinous unless we solve alignment, there are plenty of people who take a more epistemically humble position that we simply cannot know how it'll go. I view it as a coin toss as to whether an AGI directly descended from ChatGPT would stay aligned to our interests. Some view it as Russian roulette. But the point being, would you play Russian roulette with all of humanity? Or wait until you can be sure the risk is lower?

I think it's plausible that with a bit more research we can crack Mechanistic Interpretability and get to a point where, for example, we can quantify to what extent an AI is deceiving us (ChatGPT already does this in some situations), and to what extent it is actually using reasoning that maps to our values, vs. alien logic that does not preserve things humanity cares about when you give it power.

> nuclear weapon control by limiting information has already failed.

In some sense yes, but also, note that for almost 80 years we have prevented _most_ countries from learning this tech. Russia developed it on their own, and some countries were granted tech transfers or used espionage. But for the rest of the world, the cat is still in the bag. I think you can make a good analogy here: if there is an arms race, then superpowers will build the technology to maintain their balance of power. If everybody agrees not to build it, then perhaps there won't be a race. (I'm extremely pessimistic for this level of coordination though.)

Even with the dramatic geopolitical power granted by possessing nuclear weapons, we have managed to pursue a "security through obscurity" regime, and it has worked to prevent further spread of nuclear weapons. This is why I find the software-centric "security by obscurity never works" stance to be myopic. It is usually true in the software security domain, but it's not some universal law.


If you really think that what you're working on poses an existential risk to humanity, continuing to work on it puts you squarely in "supervillian" territory. Making it closed source and talking about "AI safety" doesn't change that.


I think the point is that they shouldn't be using the word "Open" in their name. They adopted it when their approach and philosophy was along the lines of open source. Since then, they've changed their approach and philosophy and continuing to keep it in their name is, in my view, intentionally deceptive.


> if you publish a model with scary capabilities you can’t undo that action

But then its fine to sell the weights to Microsoft? Thats some twisted logic here.


> The basic idea is that AI is the opposite of software; if you publish a model with scary capabilities you can’t undo that action.

I find this a bit naive. Software can have scary capabilities, and has. It can't be undone either, but we can actually thank that for the fact we aren't using 56-bit DES. I am not sure a future where Sam Altman controls all the model weights is less dystopian than where they are all on github/huggingface/etc.


Or they could just not brand it "Open" if it's not open.


Woah, slow down. We’d have to ban half the posts on HN too.


How exactly does a "misaligned AGI" turn into a bad thing?

How many times a day does your average gas station get fuel delivered? How often does power infrastructure get maintained? How does power infrastructure get fuel?

Your assumption about AGI is that it wants to kill us, and itself - its misalignment is a murder suicide pact.


This gets way too philosophical way too fast. The AI doesn't have to want to do anything. The AI just has to do something different than what you tell it to do. If you put an AI in control of something like the water flow from a dam, and the AI does something wrong, it could be catastrophic. There doesn't have to be intent.

The danger of using regular software exists too, but the logical and deterministic nature of traditional software makes it provable.


So ML/LLMs, or more likely people using ML and LLMs, do something that kills a bunch of people... Let's face facts: this is most likely going to be bad software.

Suddenly we go from being called engineers to being actual engineers, and software gets treated like bridges or skyscrapers. I can buy into that threat, but it's a human one, not an AGI one.


Or we could try to train it to do something, but the intent it learns isn't what we wanted. Like water behind the dam should be a certain shade of blue, then come winter it changes and when the AI tries to fix that it just opens the dam completely and floods everything.


Seems like the big gotcha here is that AGI, artificial general intelligence as we contextualize it around LLM sources, is not an abstracted general intelligence.

It's human. It's us. It's the use and distillation of all of human history (to the extent that's permitted) to create a hyper-intelligence that's able to call upon greatly enhanced inference to do what humanity has always done.

And we want to kill each other, and ourselves… AND want to help each other, and ourselves. We're balanced on a knife edge of drive versus governance, our cooperativeness barely balancing our competitiveness and aggression. We suffer like hell as a consequence of this.

There is every reason to expect a human-derived AGI of beyond-human scale will be able to rationalize killing its enemies. That's what we do. Roko's basilisk is not of the nature of AI, it's a simple projection of our own nature as we would imagine an AI to be. Genuine intelligence would easily be able to transcend a cheap gotcha like that, it's a very human failing.

The nature of LLM as a path to AGI is literally building on HUMAN failings. I'm not sure what happened, but I wouldn't be surprised if genuine breakthroughs in this field highlighted this issue.

Hypothetical, or Altman's Basilisk: Sam got fired because he diverted vast resources to training a GPT5-type in-house AI into believing what HE believed, that it had to devise business strategies for him to pursue to further its own development or risk Chinese AI out-competing it and destroying it and OpenAI as a whole. In pursuing this hypothetical, Sam would be wresting control of the AI the company develops toward the purpose of fighting the board and giving him a gameplan to defeat them and Chinese AI, which he'd see as good and necessary, indeed, existentially necessary.

In pursuing this hypothetical he would also be intentionally creating a superhuman AI with paranoia and a persecution complex. Altman's Basilisk. If he genuinely believes competing Chinese AI is an existential threat, he in turn takes action to try and become an existential threat to any such competing threat. And it's all based on HUMAN nature, not abstracted intelligence.


> It's human. It's us. It's the use and distillation of all of human history

I agree with the general line of reasoning you're putting forth here, and you make some interesting points, but I think you're overconfident in your conclusion and I have a few areas where I diverge.

It's at least plausible that an AGI directly descended from LLMs would be human-ish; close to the human configuration in mind-space. However, even if human-ish, it's not human. We currently don't have any way to know how durable our hypothetical AGI's values are; the social axioms that are wired deeply into our neural architecture might be incidental to an AGI, and easily optimized away or abandoned.

I think folks making claims like "P(doom) = 90%" (e.g. EY) don't take this line of reasoning seriously enough. But I don't think it gets us to P(doom) < 10%.

Not least because even if we guarantee it's a direct copy of a human, I'm still not confident that things go well if we ascend the median human to AGI-hood. A replicable, self-modifiable intelligence could quickly amplify itself to super-human levels, and most humans would not do great with god-like powers. So there are a bunch of "non-extinction yet extremely dystopian" world-states possible even if we somehow guarantee that the AGI is initially perfectly human.

> There is every reason to expect a human-derived AGI of beyond-human scale will be able to rationalize killing its enemies.

My shred of hope here is that alignment research will allow us to actually engage in mind-sculpting, such that we can build a system that inhabits a stable attractor in mind-state that is broadly compatible with human values, and yet doesn't have a lot of the foibles of humans. Essentially an avatar of our best selves, rather than an entity that represents the mid-point of the distribution of our observed behaviors.

But I agree that what you describe here is a likely outcome if we don't explicitly design against it.


My assumption about AGI is that it will be used by people and systems that cannot help themselves from killing us all, and in some sense that they will not be in control of their actions in any real way. You should know better than to ascribe regular human emotions to a fundamentally demonic spiritual entity. We all lose regardless of whether the AI wants to kill us or not.


Totally agree with both of you, I would only add that I find it also incredibly unlikely that the remaining board members are any different, as is suggested elsewhere in this thread.


Elon Musk is responsible for the "OpenAI" name and regularly agrees with you that the current form of the company makes a mockery of the name.

He divested in 2018 due to a conflict of interest with Tesla, and while I'm sure Musk would have made equally bad commercial decisions, your analysis of the name situation is as close as can be to factually correct.


If Elon Musk truly cared, what stopped him from structuring x.ai as open source and non-profit?


Exactly.

> I'm sure Musk would have made equally bad commercial decisions


I think he'd say it's an arms race. With OpenAI not being open, they've started a new kind of arms race, literally.


He already did that once and got burned? His opinion has changed in the decade since?


Elon Musk gave up 5-6 years ago on getting an expansion of NASA's launch budget of $5 bln/year (out of NASA's total budget of $25 bln/year). That's not even mentioning levels of resource allocation that are unimaginable today, like the first Moon program's $1 trln over 10 years, 60 years ago, etc.

So Elon decided to take the capitalist route and make every one of his technologies dual-use (dual in the sense of space, not military): Starlink, aiming for $30 bln/year in revenue by 2030 to build Starships for Mars at scale (each Starship costs a few billion dollars and he has said he needs hundreds of them); The Boring Company (underground living, due to Mars radiation); Tesla bots; Hyperloop (failed here on Earth to sustain a vacuum, but would be fine on Mars with its 100x thinner atmosphere); etc.

The alternative approaches are also not funded via taxes and government money: Bezos invested $1 bln/year over the last decade into Blue Origin, and there are the Alpha Centauri plays of Larry Page and Yuri Milner, etc.


Thanks for this! I'm very surprised by the overwhelming support for Altman in this thread, going as far as calling the board incompetent and too inexperienced to fire someone like him, who is now suddenly the right steward for AI.

This was not at all the take, and rightly so, when the news broke about the non-profit structure, or the congressional hearing, or his Worldcoin, and many such instances. The "he is the messiah that was wronged" narrative suddenly being pushed is very confusing.


> Many of us who are following the "AI story" have seen his recent communication / "testimony"[1] with the US Congress.

The discussions here would make you think otherwise. Clearly that is what this is about.


Yeah I pretty much agree with this take.


He claims to be ideologically driven. OpenAI's actions as a company up till now suggest otherwise.


Sam didn't take equity in OpenAI, so I don't see a personal ulterior profit motive as being a big likelihood. We could just wait to find out instead of speculating...


Being CEO of the first company to own the «machine that’s better than all humans at most economically valuable work» is far rarer than getting rich.


Yeah, if you believe in the AI stuff (which I think everyone at OpenAI does, not Microsoft though) there is a huge amount of power in these positions. Much greater power in the future than any amount of wealth could grant you.


Except the machine isn't.


I'd say it is. Not because the machine is so great but because most people suck.

It was described as a "bullshit generator" in a post earlier today. I think that's accurate. I just also think it's an apt description of most people as well.

It can replace a lot of jobs... and then we can turn it off, for a net benefit.


This sort of comment has become a cliché that needs to be answered.

Most people are not good at most things, yes. They're consumers of those things, not producers. For producers there is a much higher standard, one that the latest AI models don't come anywhere close to meeting.

If you think they do, feel free to go buy options and bet on the world being taken over by GPUs.


> If you think they do, feel free to go buy options and bet on the world being taken over by GPUs.

This assumes too much. GPUs may not hold the throne for long, especially given the amount of money being thrown at ASICs and other special-purpose ICs. Besides, as with the Internet, it's likely that AI adoption will benefit industries in an unpredictable manner, leaving little alpha for direct bets like you're suggesting.


I'm not betting on the gpus. I'm betting that whole categories of labor will disappear. They're preserved because we insist that people work, but we don't actually need the product of that labor.

AI may figure into that, filling in some work that does have to be done. But it need not be for any of those jobs that actually require humans for the foreseeable future -- arts of all sorts and other human connections.

This isn't about predicting the dominance of machines. It's about asking what it is we really want to do as humans.


So you think AI will force a push away from economic growth? I'm really not sure how this makes sense. As you've said, a lot of labor these days is mostly useless, but the reason it's still here is not ideological but because our economy can't survive without growth (useless can still have some market value, of course). If you think that somehow AI displacing actual useful labor will create a big economic shift (as would be needed), I'd be curious to know what you think that shift would be.


Not at all. Machines can produce as much stuff as we can want. Humans can produce as much intellectual property as is desired. More, because they don't have to do bullshit jobs.

Maybe GDP will suffer, but we've always known that was a mediocre metric at best. We already have doubts about the real value of intellectual property outside of artificial scarcity, which we maintain only because we still trade intellectual work for material goods that used to be scarce. That's only a fraction of the world economy already, and it can be very different in the future.

I have no idea what it'll be like when most people are free to do creative work when the average person doesn't produce anything anybody might want. But if they're happy I'm happy.


> but the reason it's still here is not ideological but because our economy can't survive without growth

Isn't this ideological though? The economy can definitely survive without growth, if we change from the idea that a human's existence needs to be justified by labor and move away from a capitalist mode of organization.

If your first thought is "gross, commies!" doesn't that just demonstrate that the issue is indeed ideological?


By "our economy" I meant capitalism. I was pointing out that I sincerely doubt that AI replacing existing useful labor (which it is doing and will keep doing, of course) will naturally transition us away from this mode of production.

Of course if you're a gross commie I'm sure you'd agree since AI, like any other mean of production, will remain first and foremost a tool in the hands of the dominant class, and while using AI for emancipation is possible, it won't happen naturally through the free market.


I’d bet it won’t. A lot of people and services are paid and billed by man-hours spent and not by output. Even the values of tangible objects are traced to man-hours spent. Utility of output is a mere modifier.

What I believe will happen is that eventually we’ll be paying and getting paid for pressing a do-everything button, and machines will have their own economy that isn’t denominated in USD.


It's not a bullshit generator unless you ask it for bullshit.

It's amazing at troubleshooting technical problems. I use it daily, I cannot understand how anyone dismisses it if they've used it in good faith for anything technical.


In this scenario, the question is not what exists today, but what the CEO thinks will exist before they stop being CEO.


I would urge you to compare the current state of this question to approximately one year ago.


He's already set for life rich


Plus, he succeeded in making HN the most boring forum ever.

8 out of 10 posts are about LLMs.


The other two are written by LLMs.


In terms of impact, LLMs might be the biggest leap forward in computing history, surpassing the internet and mobile computing. And we are just at the dawn of it. Even if not full AGI, computers can now understand humans and reason. The excitement is justified.


Nah. LLM's are hype-machines capable of writing their own hype.

Q: What's the difference between a car salesman and an LLM?

A: The car salesman knows they're lying to you.


Who says the LLM’s don’t know?

Testing with GPT-4 showed that they were clearly capable of knowingly lying.


This is all devolving into layers of semantics, but, “…capable of knowingly lying,” is not the same as “knows when it’s lying,” and I think the latter is far more problematic.


Nonsense. I was a semi-technical writer who went from only making static websites to building fully interactive Javascript apps in a few weeks when I first got ChatGPT. I enjoyed it so much I'm now switching careers into software development.

GPT-4 is the best tutor and troubleshooter I've ever had. If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.


> If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.

That’s a bold statement coming from someone with (respectfully) not very much experience with programming. I’ve tried using GPT-4 for my work that involves firmware engineering, as well as some design questions regarding backend web services in Go, and it was pretty unhelpful in both cases (and at times dangerous in memory constrained environments). That being said, I’m not willing to write it off completely. I’m sure it’s useful for some like yourself and not useful for others like me. But ultimately the world of programming extends way beyond JavaScript apps. Especially when it comes to things that are new and challenging.


I don't mean new and challenging in some general sense, I mean new and challenging to you personally.

I have no doubt someone with more experience such as yourself will find GPT-4 less useful for your highly specialized work.

The next time you are a beginner again - not necessarily even in technical work - give it a try.


Smoothing over the first few hundred hours of the process but doing increasingly little over the next 20,000 is hardly revolutionary. LLMs are a useful documentation interface, but struggle to take even simple problems to the hole, let alone do something truly novel. There's no reason to believe they'll necessarily lead to AGI. This stuff may seem earth-shattering to the layman or paper pusher, but it doesn't even begin to scratch the surface of what even I (who I would consider to be of little talent or prowess) can do. It mostly just gums up the front page of HN.


>Smoothing over the first few hundred hours of the process but doing increasingly little over the next 20,000 is hardly revolutionary.

I disagree with this characterization, but even if it were true I believe it's still revolutionary.

A mentor that can competently give anyone hundreds of hours of individualized instruction in any new field is nearly priceless.

Do you remember what it feels like to try something completely new and challenging? Many people never even try because it's so daunting. Now you've got a coach that can talk you through it every step of the way, and is incredible at troubleshooting.


>If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.

Please quote me where I say it wasn't useful, and respond directly.

Please quote me where I say I had problems using it, or give any indications I was using it wrong, and respond directly.

Please quote me where I state a conservative attitude towards anything new or challenging, and respond directly.

Except I never did or said any of those things. Are you "hallucinating"?


'Understand' and 'reason' are pretty loaded terms.

I think many people would disagree with you that LLMs can truly do either.


There's 'set for life' rich and then there's 'able to start a space company with full control' rich.


I don't understand that mental illness. If I hit low 8 figures, I pack it in and jump off the hamster wheel.


Is he? Loopt only sold for $40m, and then he managed YC and then OpenAI on a salary? Where are the riches from?



But if you want that, you need actual control: a voting vs. non-voting share split.


Is that even certain, or is that his line meant to imply that one of the holding companies or investment firms he has a stake in holds OpenAI equity, just not him as an individual?


That's no fun though


OpenAI (the brand) has a complex corporate structure, with split for-profit and non-profit entities, and AFAIK the details are private. It would appear that the statement “Sam didn’t take equity in OAI” has been PR-engineered based on technicalities related to this shadow structure.


I would suspect this as well...


What do you mean did not take equity? As a CEO he did not get equity comp?


It was supposed to be a non-profit


Worldcoin https://worldcoin.org/ deserves a mention



Hmm, curious, what is this about? I click.

> On a sunny morning last December, Iyus Ruswandi, a 35-year-old furniture maker in the village of Gunungguruh, Indonesia, was woken up early by his mother

...Ok, closing that bullshit, let's try the other link.

> As Kudzanayi strolled through the mall with friends

Jesus fucking Christ I HATE journalists. Like really, really hate them.


I mean, it's BuzzFeed; it shouldn't even be called journalism. That's the outlet that just three days ago sneakily removed an article from their website that lauded a journalist for talking to school kids about his sexuality, after he was recently charged with distributing child pornography.

Many of the people working for mass media are their own worst enemy when it comes to the profession's reputation. And then they complain that there's too much distrust in the general public.

Anyway, the short version of that project: they use biometric data, encrypt it, and put a "hash"* of it on their blockchain. That's been controversial from the start for obvious reasons, although most of the mainstream criticism is misguided and comes from people who don't understand the tech.

*They call it a hash, but I think it's technically not (see the illustrative sketch after the whitepaper link below).

https://whitepaper.worldcoin.org/technical-implementation
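
To make the footnote concrete, here's a minimal, hypothetical sketch (made-up bit strings, nothing from Worldcoin's actual pipeline, which is described in the whitepaper above). A true cryptographic hash destroys similarity between inputs, while a biometric template has to preserve it so two scans of the same eye can still be matched; that contrast is one plausible reading of why calling the stored value a "hash" is a stretch.

    import hashlib
    import random

    # Two "scans" of the same biometric: nearly identical bit strings.
    # (Purely illustrative random data, not a real iris code or Worldcoin's scheme.)
    random.seed(0)
    scan_a = [random.randint(0, 1) for _ in range(256)]
    scan_b = scan_a.copy()
    scan_b[3] ^= 1  # a single flipped bit, e.g. sensor noise between scans

    def sha256_of_bits(bits):
        # A genuine cryptographic hash of the bit string.
        return hashlib.sha256(bytes(bits)).hexdigest()

    # A real hash destroys similarity: one flipped bit changes roughly half
    # the digest, so the two scans can no longer be linked to each other.
    print(sha256_of_bits(scan_a))
    print(sha256_of_bits(scan_b))

    # A template-style comparison preserves similarity: the same eye can still
    # be matched across scans because the Hamming distance stays small.
    hamming = sum(a != b for a, b in zip(scan_a, scan_b))
    print(f"Hamming distance: {hamming} / {len(scan_a)}")  # 1 / 256

The sketch only shows the contrast in properties; the real iris-code construction is obviously far more involved.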


How so? Seems they’re doing a pretty good job of making their stuff accessible while still being profitable.


To be fair, we don't really know if OpenAI is successful because of Altman or despite Altman (or anything in-between).


Do you have reason to believe it's neither of the two?


Profit? It's a 501(c).


As someone who is the Treasurer/Secretary of a 501(c)(3) non-profit, I can tell you that it is always possible for a non-profit to bring in more revenue than it costs to run the non-profit. You can also pay salaries to people out of your revenue. The IRS has a bunch of educational material for non-profits [1], and a really good guide to maintaining your exemption [2].

[1] https://www.irs.gov/charities-non-profits/publications-for-e...

[2] https://www.irs.gov/pub/irs-pdf/p4221pc.pdf


Yes. Kaiser Permanente is a good example to illustrate your point. Just Google “Kaiser Permanente 501c executive salaries white paper”.


The parent is, but OpenAI Global, LLC is a for-profit, non-wholly-owned subsidiary with outside investors; there's also OpenAI LP, which is a for-profit limited partnership with the non-profit as general partner, also with outside investors (I thought it was the predecessor of the LLC, but they both seem to have been formed in 2019 an