Greg Brockman quits OpenAI (twitter.com/gdb)
1423 points by nickrubin 9 months ago | 662 comments



Edit: I called it

https://twitter.com/karaswisher/status/1725682088639119857

nothing to do with dishonesty. That’s just the official reason.

———-

I haven’t heard anyone commenting about this, but consider the two main figures here: this MUST come down to a disagreement between Altman and Sutskever.

Also interesting that Sutskever tweeted a month and a half ago

https://twitter.com/ilyasut/status/1707752576077176907

The press release about candid talk with the board… It’s probably just a cover-up for some deep-seated philosophical disagreement. They found a reason to fire him that doesn’t necessarily reflect why they are firing him. He and Ilya no longer saw eye to eye, and it reached a fever pitch with GPT-4 Turbo.

Ultimately, it’s been surmised that Sutskever had all the leverage because of his technical ability. Sam being the consummate businessperson, they probably got into some final disagreement, and Sutskever reached his tipping point and decided to use said leverage.

I’ve been in tech too long and have seen this play out. Don’t piss off an irreplaceable engineer or they’ll fire you. Not taking any sides here.

PS most engineers, like myself, are replaceable. Ilya is probably not.


This doesn't make any sense. If it was a disagreement, they could have gone the "quiet" route and just made no substantive comment in the press release. But they made accusations that are specific enough to be legally enforceable if they're wrong, and in an official statement no less.

If their case isn't 100% rock solid, they just handed Sam a lawsuit that he's virtually guaranteed to win.


I agree. None of this adds up. The only thing that makes any sense, given OpenAI has any sense and self-interest at all, is that the reason they let Altman go may have been even bigger than what they were saying, and that there was some lack of candor in his communications with the board. Otherwise, you don't make an announcement like that 30 minutes before markets close on a Friday.


[flagged]


Don't take this as combative, but that sounds like nothing more than a science fiction plot.

Putting aside arguments for how close we are or aren't to AGI, there's no way you could spend the amount of money it would take to train such a basilisk on company resources without anyone noticing. We are not talking about a rogue engineer running a few cryptominers in the server room, here.


[flagged]


Or they know more than we do.

And it takes time to put people in jail.

So it's way too early to call anyone delusional.


I doubt that a quiet route is possible on that matter.

So better be the first to set the narrative.


Even if their case is 100% solid, they wouldn't have said it publicly. Unless they hated Sam for doing something, so it's not just the direction of the company or something like that. It's something bigger.


I mean I think it would be pretty hard to prove that you weren't at any time "less than candid" in reports to a board.


More new information from Swisher:

> "More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side."

> "The developer day and how the store was introduced was in inflection moment of Altman pushing too far, too fast. My bet: He’ll have a new company up by Monday."

[source: https://twitter.com/karaswisher/status/1725702501435941294]

Sounds like you exactly predicted it.


With VCs catfighting in the queue and sama just sticking it to them by bootstrapping independently.

I don’t like this whole development one bit, actually. He lost his brakes and I’m sure he doesn’t see it this way at all.


ClosedAI?

> My bet: He’ll have a new company up by Monday.


Apart from some piddly tech, Silicon Valley startups primarily sell stock. And a Monday company will be free to capitalize on the hype and sell stock that won't have its shoelaces tied like a non-profit.


No doubt Sam will have another AI company up in no time.

Which is good!


Well, he tweeted that once he “goes off” the board won’t be able to do anything about it, because he never owned any equity. That’s how I read it.


It's very good, competition is all you need.


I thought that [regulatory] attention is all you need.


That too, otherwise ASI could be another PTFE and asbestos moment but on crack


I doubt that there will be any ASI in the near future, with or without regulation.


I think you're completely backward. A board doesn't do that unless they absolutely have to.

Think back in history. For example, consider the absolutely massive issues at Uber that had to go public before the board did anything. There is no way this is over some disagreement; there has to be serious financial, ethical, or social wrongdoing for the board to rush it like this and put a company worth tens of billions of dollars at risk.


Per other profiles of OpenAI, this is an organization of true believers in the benefits and dangers of AGI. It's also a non-profit, not a company.

All this to say that the board is probably unlike the boards of the vast majority of tech companies.


This. There were no investors on the board -- as Jason @ all-in said "that's just crazy".


> as Jason @ all-in said

lol

> "that's just crazy".

why is it crazy? the purpose of OpenAI is not to make investors rich - having investors on the board trying to make money for themselves would be crazy.


Exactly. If we assume Altman wanting to pursue commercialization at the cost of safety was the issue, the board did its job by advancing its mandate of "AI for the benefit of humanity", although I'm not sure why they went with the nuclear option.


Very true.

Though I would go further than that: if that is indeed the reason, the board has proven itself very much incompetent. It would be quite foolish to invite this kind of shadow of scandal over something that was a fundamentally reasonable disagreement.


The board, like any, is a small group of people, and in this case a small group of people divided into two sides defined by conflicting ideological perspectives. Here, I imagine the board members have much broader and longer-term perspectives and considerations factoring into their decision-making than the significant, significant majority of other companies/boards. Generalizing doesn’t seem particularly helpful.


Generalizing is how we reason, and having been on boards and worked with them closely, I can straight up tell you that's not how it works.

In general, everyone is professional unless there's something really bad. This was quite unprofessionally handled, and so we draw the obvious conclusion.


I am also not a stranger to board positions. However, I have never been on the board of a non-profit that is developing technology with genuinely deep, and as-of-now unknown, implications for the status quo of the global economy and - at least as the OpenAI board clearly believes - the literal future and safety of humanity. I haven’t been on a board where a semi-idealist engineer board member has played a (if not _the_) pivotal role in arguably the most significant technical development in recent decades, and who maintains ideals and opinions completely orthogonal to the CEO’s.

Yes, generalizing is how we reason, because it lets us strip away information that is not relevant in most scenarios and reduces complexity and depth without losing much in most cases. My point is, this is not a scenario that fits in the set of “most cases.” This is actually probably one of the most unique and corner-casey examples of board dynamics in tech. Adherence to generalizations without considering applicability and corner cases doesn’t make sense.


If it was really just about seeing eye to eye, why would the press release say anything about Sam being "consistently candid in his communications?" That seems pretty unnecessary if it were fundamentally a philosophical disagreement. Why not instead say something about differences in forward looking vision?


Which they can do in a super polite "wish him all the best" way or an "it was necessary to remove Sam's vision to save the world from unfriendly AI" way as they see fit. Unlike an accusation of lying, this isn't something that you can be sued for, and provided you're clear about what your boardroom battle-winning vision is it probably spooks stakeholders less than an insinuation that Sam might have been covering up something really bad with no further context.


Ah, the old myth about the irreplaceable engineer and the dumb suit. Ask Wozniak about that. I don't think he believes Apple would be without Steve Jobs.


Sam Altman is no Steve Jobs.


OpenAI is no Apple.


Apple without Woz did fine. Apple without Jobs not so much.


Stock market is full of dumb suits falling for salesman


The first Steve was totally irreplaceable; the second Steve was probably, arguably, the difficult one to replace. Without the first firing, the second Steve would never have existed. But then, once Apple had the ball rolling, he was replaced just fine by Tim Cook.


Steve Jobs would be nothing without Wozniak to design something people wanted.


But the point is that Woz was replaceable, because he was replaced. Jobs, on the other hand, was replaced and the company nearly died; he came back and turned it around. Of course, it only became a trillion-dollar company after Tim Apple took over, which I guess just shows that nobody is irreplaceable.


Tim Cook?


Trump said Tim Apple instead of Tim Cook: https://knowyourmeme.com/memes/tim-apple


vice versa ;)


If Steve Jobs couldn't claim Wozniak's work as his own, he wouldn't have landed "his" Atari contract. Who knows where things would have gone after that, but I have a hard time tracing Apple's history of success without, say, the Apple II.


The milieu in which Apple came to fruition was full of young, small microcomputer shops. So it's not like Woz invented the microcomputer for the masses - his contribution was critical for early Apple for sure, but the market had tons of alternatives as well. Without the Apple II it's hard to say where Jobs and Wozniak would have ended up. Jobs is such a unique, driven figure that I'm fairly sure he would have created a lasting impression on the market even without Woz. This is not to say Woz was insignificant - but rather that 1970s Silicon Valley had tons of people with Woz's acumen (not to disparage their achievements) but only a few tech luminaries who obviously were not one-trick ponies but managed to build their industrial legacy over decades.


I don’t believe the evidence backs this up. Woz is as much a one off as Jobs.

Woz did two magic things just in the Apple II which no one else was close to: the hack for the ntsc color, and the disk drive not needing a completely separate CPU. In the late 70s that ability is what enabled the Apple II to succeed.

The point is Woz is a hacker. Once you build a system more properly, with pieces used how their designers explicitly intended, you end up with the Mac (and things like Sun SPARCstations) which does not have space for Woz to use his lateral thinking talents.


I’ve always in my mind compared the Apple 2 to the likes of the Commodore 64, ZX Spectrum etc - 8-bit home computers. The Apple 2 predates these by several years ofc. You could most certainly create an 8-bit home computer without Woz. I haven’t really geeked out on the complete tech specs & history of these; it’s possible the Apple 2 was more fundamental than that (and would love to understand why).


Apple already had competitors back then in the likes of Franklin Computers. From a pure "can you copy Woz" perspective, it's not really even a question; of course you could. It was always a matter of dedication and time.

It's foolish for any of us to peer inside the crystal ball of "what would Jobs be without Woz", but I think it is important to acknowledge that the Apple II and IIc pretty much bankrolled Apple through their pre-Macintosh era. Without those first few gigs (which Woz is almost single-handedly responsible for), Apple Computers wouldn't have existed as early (or successfully) as it did. Maybe we still would have gotten an iPhone later down the line, but that's frankly too speculative for any of us to call.


If Steve Jobs didn't have Woz, or if the Apple 1 or 2 flopped, Steve Jobs would have gone on to sell something else - medical supplies, timeshares, or who knows what and maybe he'd be famous in another industry, but not necessarily attaining a reality distortion field. If Apple flopped early, maybe he would have stayed with selling computers, for another company but he wouldn't be elevated to the status he was, because he had Woz's contribution and had total marketing control over it. If he had to work for another company in sales, Steve Jobs would be reprimanded and fired for his toxic attitudes. There is probably an alternate universe where Steve Jobs is a nobody. The reality is that he got very, very lucky in this universe.


> Ilya is probably not.

If he is, at this point, so irreplaceable that he has enough leverage to strong-arm the board into firing the CEO over a disagreement, then that would for sure be the biggest problem OpenAI has.


Or there was a disagreement about whether the dishonesty was over the line? Dishonesty happens all the time and people have different perspectives on what constitutes being dishonest and on whether a specific action was dishonest or not. The existence of a disagreement does not mean that it has nothing to do with dishonesty.


So I should cancel any plans to build on the GPT platform? Because it doesn't align with the values of Ilya and Helen?

OpenAI, we need clarity on your new direction.


Do you really want to build a business on a platform with 100% lock-in?

It's not like you can just move to another AI company if you don't like their terms.


My OpenAI bill was (opens dashboard...) $43,xxx last month.

First thing tomorrow I'm kicking off another round of searching for alternatives.


Just out of curiosity, what are you using it for that makes it that valuable?


It's probably the gpt cost of a metered product.


I have basically shipped gpt-4-32k to a few million boomers. Hard to say more without doxxing myself.


I understand the meaning of this statement but please reconsider the way you view your customer base - it's not a good grounding for the success of your business to think this way. Great work with what you've built so far!


I think you're confused. If Altman was allowed to continue you'd end up in a vendor lock-in situation with that guy endlessly bumping the fees.


I think that if there were a lack of truth to him being less-than-candid with the board, they would have left that part out. You don’t basically say that an employee (particularly a c-suiter with lots of money for lawyers) lied unless you think that you could reasonably defend that statement in court. Otherwise, it’s defamation.


I’m not saying there is a lack of truth. I’m saying that’s not the real reason. It could be there’s a scandal to be found, but my guess is the hostility from OpenAI is just preemptive.

There’s really no nice way to tell someone to fuck off from the biggest thing. Ever.


I mean I'm not a lawyer (of the big city or simple country varieties, or any other variety) but if you talk to most HR people they'll tell you that if they ever get a phone call from a prospective employer to confirm details about someone having worked there previously, the three things they'll typically say are:

1) a confirmation of the dates of employment

2) a confirmation of the role/title during employment

3) whether or not they would rehire that person

... and that's it. The last one is a legally-sound way of saying that their time at the company left something to be desired, up to and including the point of them being terminated. It doesn't give them exposure under defamation because it's completely true, as the company is fully in charge of that decision and can thus set the reality surrounding it.

That's for a regular employee who is having their information confirmed by some hiring manager in a phone or email conversation. This is a press release for a company connected to several very high-profile corporations in a very well-connected business community. Arguably it's the biggest tech exec news of the year. If there's ulterior or additional motive as you suggest, there's a possibility Sam goes and hires the biggest son-of-a-bitch attorney in California to convince a jury that the ulterior or additional motive was _the only_ motive, and that calling Sam a liar in a press release was defamation. As a result, OpenAI/the foundation, would probably be paying him _at least_ several million dollars (probably a lot more) for making him hard to hire on at other companies.

Either he simply lied to the board and that's it, or OpenAI's counsel didn't do their job and put their foot down over the language used in the press release.


Someone at OpenAI hates the man's guts. It's that simple.

Even with very public cases of company leaders who did horrible things (much worse than lying), the companies that fired them said nothing officially. The person just "resigned". There's just no reason to open up even the faintest possibility of an expensive lawsuit, even if they believe they can win.

So yeah, someone definitely told the lawyers to go fuck themselves when they decided to go with this inflammatory language.


Well, for their sake, I hope they either issue a retraction soon, have good lawyers and documentation of their decision, or Sam turns out to be a forgiving person.

I wouldn't put money on the last one, though.


You can't say a person resigned if they refused to resign, correct? If the person says they refuse to resign you have to fire them. So that's one scenario where they would have to say they fired him.

You also wouldn't try to avoid a lawsuit if you believed (hypothetically) it was impossible to avoid a lawsuit.


> So yeah, someone definitely told the lawyers to go fuck themselves when they decided to go with this inflammatory language.

You're assuming they even consulted the lawyers...


I don't know that this is always the case. For example, when BK was forced to resign from Intel, the board's announcement was quite specific on why.


There is no legal justification for ever saying those dates, much less their department and role. I have never heard of any HR department saying anything of the sort, even if this is an oft-quoted meme of HR. I suspect you have actually never worked in HR to provide such statements; you are merely speculating.


This was an answer given to me by a VP of HR last month.


John, I don't think you understand how corporate law departments work. It's not like a romantic or friend breakup where someone says a mean remark about the other to underline that it's over; there's a big legal risk to the corporate entity from carelessly damaging someone's reputation like that, so it's smarter to just keep the personality/vision disagreements private and limit public statements to platitudes.


Please don’t patronize me. It indeed looks like the press release from OpenAI is under scrutiny. What you fail to understand is human nature and the way people really do things ^TM

https://twitter.com/karaswisher/status/1725685211436814795


I'm not patronizing you, I'm just responding on the same level as the post I replied to. There's an endless supply of examples of corporate/legal decisions and communication being made on very different criteria from interpersonal interactions.

Of course the press release is under scrutiny, we are all wondering What Really Happened. But careless statements create significant legal (and thus financial) risk for a big corporate entity, and board members have fiduciary responsibilities, which is why 99.99% of corporate communications are bland in tone, whatever human drama may be taking place in conference rooms.


>John

>I'm not patronizing you

(A)ssuming (G)ood (F)aith, referring to someone online by their name, even in an edge case where their username is their name, is considered patronizing as it is difficult to convey a tone via text medium that isn't perceived as a mockery/veiled threat.

This may be a US-internet thing; analogous to how getting within striking distance with a raised voice can be a capital offense in the US, juxtaposed with being completely normal in some parts of the Middle East.


> referring to someone online by their name is considered patronizing

This has to be a joke, right?


It's not the "online" that's the issue exactly, I think Jerrrry didn't describe it exactly right, but it's still correct. I, too, personally, thought it was very clear that the "John, " was ... I dunno if it was patronizing or what, but marginally impolite or condescending or patronizing or something. Unless, unbeknownst to us, anigbrowl and johnwheeler are old personal associates (probably offline), in which case it would mean "remember that I know you", and the implication of that would depend on the history in the relationship.

I recognize that the above para sort of sounds like I think I have some authority to mediate between them, which is not true and not what I think. I'm just replying to this side conversation about how to be polite in public, just giving my take.

The broad pattern here is that there are norms around how and when you use someone's name when addressing them, and when you deviate from those norms, it signals that something is weird, and then the reader has to guess what is the second most likely meaning of the rest of the sentence, because the weird name use means that the most likely meaning is not appropriate.


I don't think it's a joke. I would find it patronizing unless I'm already on a first name basis with the commenter through some prior relationship.


It happened to me recently on a list where I post under my real name, and yes, it's irritating, especially if it is someone you never met, and they are disagreeing with you.


Really? Referring to someone by first name is perfectly ordinary where I’m from, regardless of relationship. If someone doesn’t want me to do that, I’d expect them to introduce themselves as “Mr. so-and-so”, instead.


It's not the first name alone, it's also the sentence structure. "Hey John, did you hear about..." sounds perfectly normal even when talking on-line to strangers. "John, you misunderstand..." is appropriate if you're their parent or spouse or otherwise in some kind of close relationship.


You have explained this much more concisely than me.


In person, sure, that's totally normal. It's unusual on a forum for a few reasons:

1) The comments are meant to be read by all, not just the author. If you want to email the author directly and start the message with a greeting containing their name ("hi jrockway!"), or even just their name, that's pretty normal.

2) You don't actually know the person's first name. In this case, it's pretty obvious, since the user in question goes by what looks like <firstname><lastname>. But who knows if that's actually their name. Plenty of people name their accounts after fictional people. It would be weird to everyone if your HN comment to darthvader was "Darth, I don't think you understand how corporate law departments work." Darth is not reading the comment. (OK, actually I would find that hilarious to read.)

3) Starting a sentence with someone's name and a long pause (which the written comma heavily implies) sounds like a parent scolding a child. You rarely see this form outside of a lecture, and the original comment in question is a lecture. You add the person's name to the beginning of the comment to be extra patronizing. I know that's what was going on and the person who was being replied to knows that's what was going on. The person who used that language denies that they were trying to be patronizing, but frankly, I don't believe it. Maybe they didn't mean to consciously do it, but they typed the extra word at the beginning of the sentence for some reason. What was that reason? If to soften the lecture, why not soften it even more by simply not clicking reply? It just doesn't add up.

4) It's Simply Not Done. Open any random HN discussion, and 99.99% of the time, nobody is starting replies with someone's name and a comma. It's not just HN; the same convention applies on Reddit. When you use style that deviates from the norm, you're sending a message, and it's going to have a jarring effect on the reader. Doubly jarring if you're the person they're naming.

TL;DR: Don't start your replies with the name of the person you're replying to. If you're talking with someone in person, sure, throw their name in there. That's totally normal. In writing? Less normal.


Is it the first name or the personal touch that would make you feel patronized? What if you read a reply “… a 24 year old, such as yourself, will know …”.


Perhaps the wording here is a bit confusing, but I think it's unambiguous that responding to a comment using the commenter's name ("John, you misunderstand") comes off as patronizing.

The commenter above doesn't mean that any reference to someone else by name ("Sam Altman was fired") is patronizing.


- It means you answer more to the person than to their argument (ad hominem).

- It is unnecessary, and 9 times out of 10 when used in a disagreement, especially at the beginning of a response, it is meant to be patronizing.


No. Look at examples where people hurl veiled threats at dang. They almost always use his real first name. It's a form of subtle intimidation. That kind of intimidation, whether the user's real name is incorporated into their username in some way or they're using other open-source intel, goes back to the early days of the internet.


No. More than that, it comes off as patronizing to start a comment with the other person's first name even when speaking offline, face-to-face, unless you're their spouse, parent, or in some other close relationship.


Jerrrry, thank you for your opinion.


One imagines "human nature" cuts both ways here - sometimes damage control is just damage control.


What’s the legal risk? Their investors sue them for..? Altman sues for..?

How is the language “we are going our separate ways” compared with “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI” going to have a material difference in the outcome of the action of him getting fired?

How do the complainants show a judge and jury that they were materially harmed by the choice of language above?


The legal risk comes if Altman decides he wants a similar job and can't find it over the next few months or years, and has reason to believe that OpenAI's statements tainted his reputation.

OpenAI's board's press release could very easily be construed as "Sam Altman is not trustworthy as a CEO", which could lead to his reputation being sullied among other possible employers. He could argue that the board defamed his reputation and kept him from what was otherwise a very promising career in an unfathomably lucrative field.


It’s not defamation if it’s true


Truth is subjective and if there is anything that could suggest other motive, as I said earlier, it would be open to interpretation by a jury.

Really they should have just said something to the effect of, "The board has voted to end Sam Altman's tenure as CEO at OpenAI. We wish him the best in his future endeavors."


Meh, they don't need to prove that much. It would be Altman that had to prove a lot, because the law favors the defendant in this situation. To protect speech, actually.


The onus is on OpenAI to prove that in a court of law, in front of a jury.


No, the onus would be on Sam Altman to prove that the statement was materially false, AND intended to slander him, AND actually succeeded in affecting his reputation.

When you're a public person, the bar for winning a defamation case is very high.


I don't know. The board statement, peeling away the pleasantries, says he lied to the board repeatedly. That's a very serious accusation. I don't know how US law works here, but in the UK you can sue and win over defamation for far milder infractions.


Even in the UK, if you sue, it is on you to prove that you didn't lie, not on the person you're suing to prove that you did.

Also, as long as you are a public person, defamation has a very high bar in the USA. It is not enough to for the statement to be false, you have to actually prove that the person you're accusing of defamation knew it was false and intended it to hurt you.

Note that this is different from an accusation of perjury. They did not accuse Sam Altman of performing illegal acts. If they had, things would have been very different. As it stands, they simply said that he hasn't been truthful to them, which it would be very hard to prove is false.


> Even in the UK, if you sue, it is on you to prove that you didn't lie, not on the person you're suing to prove that you did.

No, in the UK it's unambiguously the other way round. The complainant simply has to persuade the court that the statement seriously harmed or is likely to seriously harm their reputation. Truth is a defence but for that defence to prevail the burden of proof is on the defendant to prove that it was true (or to mount an "honest opinion" defence on the basis that both the statement would reasonably be understood as one of opinion rather than fact and that they did honestly hold that opinion)


In a specific case, perhaps. But surely, I can't go out, make a broad statement like, "XYZ is a liar and fornicator" and leave it there. And how would XYZ go around proving they are not a liar and fornicator? Talk to everyone in the world and get them to confirm they were not lied to or sexually involved?

Surely, at some level, you can be sued for making unfounded remarks. But then IANAL so, meh.


How much total compensation could Altman have gotten from another company, if not for this slander? Yeah, no one knows for sure, but how much could he argue? He's a princeling of Silicon Valley, and just led a company from $0 to $90 billion. I'm guessing that's going to be a very, very big number.

Unless OpenAI can prove in a court of law that what they said was true, they're on the hook for that amount in compensation, perhaps plus punitive damages and legal costs.


None of these people seem to be typical corporate board members, except maybe Altman.



Doesn't justify the hostile language and the urgent, last-minute timing (partners were notified just minutes before the press release). They didn't even wait 30 minutes for the market to close, causing MSFT to drop billions in that time.

A mere direction disagreement would have been handled with "Sam is retiring after 3 months to spend more time with his family. We thank him for all his work." And it surely would have been decided months in advance of being announced.


> last minute timing

Only feels last minute to those outside. I've seen some of these go down in smaller companies and it's a lot like bankruptcy - slowly, then all at once.


Everything points towards this being last minute both for people outside and people inside. Microsoft caught with their pants down, announcement before markets closed rather than just waiting a bit, and so on.


Announcing something huge like this before market close is not something that can be interpreted as anything other than either a huge timing mistake or a massive feeling of urgency


I find it hard to believe that the board of OpenAI isn't smart, savvy and self-interested enough to know that not delaying the announcement by an hour or so is the wrong move. That leads me to believe that yes, this was something big and worthy enough of being announced with that timing, and that it was probably not a mistake.


They also said Greg was going to stay at the company and then he immediately quit. I find it very hard to believe that smart, savvy, and self interested are adjectives that apply to a board who doesn't know what their own chairman thinks.


Even smart, savvy, and self interested people can't always predict what individual humans are going to do. It's certainly an interesting wrinkle, but I don't think it's relevant to the limited scope of the analysis I've presented here.


He was the chair of the board. And they were wrong very quickly. It very much sounds like they spoke for him. Or he pretended that he was going to stay and then backstabbed them. Which, given how strongly aligned with Altman he seems to be, is not really a surprise. I have yet to see a single action from them that leans towards savvy rather than incompetent.


Take away Sam, Greg and Ilya, and who is even left on the board? Doesn't inspire any confidence.


Exactly. They call it: The shit hitting the fan.


Yeah, this is more abrupt and more direct than any CEO firing I've ever seen. For comparison, when Travis Kalanick was ousted from Uber in 2017, he "resigned" and then was able to stay on the board until 2019. When Equifax had their data breach, it took 4 days for the CEO to resign and then the board retroactively changed it to "fired for cause". With the Volkswagen emissions scandal, it took 20 days for the CEO to resign (again, not fired) despite the threat of criminal proceedings.

You don't fire your CEO and call him a liar if you have any choice about it. That just invites a lawsuit, bad blood, and a poor reputation in the very small circles of corporate executives and board members.

That makes me think that Sam did something on OpenAI's behalf that could be construed as criminal, and the board had to fire him immediately and disavow all knowledge ("not completely candid") so that they don't bear any legal liability. It also fits with the new CEO being the person previously in charge of safety, governance, ethics, etc.

That Greg Brockman, Eric Schmidt, et al are defending Altman makes me think that this is in a legal grey area, something new, and it was on behalf of training better models. Something that an ends-justifies-the-means technologist could look at and think "Of course, why aren't we doing that?" while a layperson would be like "I can't believe you did that." It's probably not something mundane like copyright infringement or webscraping or even GDPR/CalOppa violations though - those are civil penalties, and wouldn't make the board panic as strongly as they did.


You are comparing corporate scandals, but the alternative theory in this forum seems to be a power struggle, and power struggles have completely different mechanics.

Think of it as the difference between a vote of no confidence and a coup. In the first case you let things simmer for a bit to allow you to wheel and deal and to arrange for the future. In the second case, even in the case of a parliamentary coup like the 9th of Thermidor, the most important thing is to act fast.


A boardroom coup isn't remotely like one where one looks for the gap where the guards and guns aren't and worries about the deposed leader being reinstated by an angry mob.

If they had the small majority needed to get rid of him over mere differences of future vision they could have done so on whatever timescale they felt like, with no need to rush the departure and certainly no need for the goodbye to be inflammatory and potentially legally actionable


Yeah, but Uber is a completely different organization. The boards you mention were likely complicit in the stuff they kicked their CEOs out over.


what examples are you considering here, bioweapons?


Well OpenAI gets really upset when you ask it to design a warp drive so maybe that was it.


Promising not to train on Microsoft's customer data, and then training on MSFT customer data.


I don't think the person you are replying to is correct, because the only technological advancement where a new OpenAI artifact provides schematics that I think could qualify is Drexler-wins-Smalley-sucks style nanotechnology that could be used to build computation. That would be the sort of thing where if you're in favour of building the AI faster you're like "Why wouldn't we do this?" and if you're worried the AI may be trying to release a bioweapon to escape you're like "How could you even consider building to these schematics?".

I don't think it's correct not because it sounds like a sci-fi novel, but because I think it's unlikely that it's even remotely plausible that a new version of their internal AI system would be good enough at this point in time to do something like that, considering all the many problems that need to be solved for Drexler to be right.

I think it's much more likely that this was an ideological disagreement about safety in general rather than a given breakthrough or technology in specific, and Ilya got the backing of US NatSec types (apparently their representative on the board sided with him) to get Sam ousted.


> I don't think it's correct not because it sounds like a sci-fi novel, but because I think it's unlikely that it's even remotely plausible that a new version of their internal AI system would be good enough at this point in time to do something like that

Aren't these synonymous at this point? The conceit that you can point AGI at any arbitrary speculative sci-fi concept and it can just invent it is a sci-fi trope.


No, not really. Calling something "science-fiction" at the present moment is generally an insult intended to say something along the lines of "You're an idiot for believing this made up children's story could be real, it's like believing in fairies", which is of course a really dumb thing to say because science fiction has a very long history of predicting technological advances (the internet, tanks, video calls, not just phones but flip phones, submarines, television, the lunar landing, credit cards, aircraft, robotics, drones, tablets, bionic limbs, antidepressants) so the idea that because something is in science fiction it is therefore a stupid idea to think it is a real possibility for separate reasons is really, really dumb. It would also be dumb to think something is possible only because it exists in science fiction, like how many people think about faster than light travel, but science fiction is not why people believe AGI is possible.

Basically, there's a huge difference between "I don't think this is a feasible explanation for X event that just happened for specific technical reasons" (good) and "I don't think this is a possible explanation of X event that just happened because it has happened in science fiction stories, so it cannot be true" (dumb).

About nanotechnology specifically, if Drexler from Drexler-Smalley is right then an AGI would probably be able to invent it by definition. If Drexler is right that means it's in principle possible and just a matter of engineering, and an AGI (or a narrow superhuman AI at this task) by definition can do that engineering, with enough time and copies of itself.


How would a superhuman intelligence invent a new non-hypothetical actually-working device without actually conducting physical experiments, building prototypes, and so on? By conducting really rigorous meta-analysis of existing research papers? Every single example you listed involved work IRL.

> with enough time and copies of itself.

Alright, but that’s not what the previous post was hypothesizing, which is that OpenAI was possibly able to do that without physical experimentation.


Yes, the sort of challenges you're talking about are pretty much exactly why I don't consider it feasible that OpenAI has an internal system that is at that level yet. I would consider it to be at the reasonable limits of possibility that they could have an AI that could give a very convincing, detailed, & feasible "grant proposal" style plan for answering those questions, which wouldn't qualify for OPs comment.

With a more advanced AI system, one that could build better physics simulation environments, write software that's near maximally efficient, design better atomic modelling and tools than currently exist, and put all of that into thinking through a plan to achieve the technology (effectively experimenting inside its own head), I could maybe see it being possible for it to make it without needing the physical lab work. That level of cognitive achievement is what I think is infeasible for OpenAI to have internally right now, for several reasons. Mainly that it's extremely far ahead of everything else, to the point that I think they'd need recursive self-improvement to have gotten there, and I know for a fact there are many people at OpenAI who would rebel before letting a recursively self-improving AI get to that point. And two, if they lucked into something that capable by some freak accident, they wouldn't be able to keep it quiet for a few days, let alone a few weeks.

Basically, I don't think "a single technological advancement that product wants to implement and safety thinks is insane" is a good candidate for what caused the split, because there aren't that many such single technological advancements I can think of and all of them would require greater intelligence than I think is possible for OpenAI to have in an AI right now, even in their highest quality internal prototype.


> With a more advanced AI system, one that could build better physics simulation environments, write software that's near maximally efficient, design better atomic modelling and tools than currently exist, and put all of that into thinking through a plan to achieve the technology (effectively experimenting inside its own head), I could maybe see it being possible for it to make it without needing the physical lab work.

It couldn't do any of that because it would cost money. The AGI wouldn't have money to do that because it doesn't have a job. It would need to get one to live, just like humans do, and then it wouldn't have time to take over the world, just like humans don't.

An artificial human-like superintelligence is incapable of being superhuman because it is constrained in all the same ways humans are and that isn't "they can't think fast enough".


I think you're confused. We're talking about a hypothetical internal OpenAI prototype, and the specific example you listed is one I said wasn't feasible for the company to have right now. The money would come from the same budget that funds the rest of OpenAI's research.


Human cloning


Actual humans or is this a metaphor for replicating the personas of humans via an LLM?


Great point. It was rude to drop a bombshell during trading hours. That said, the chunk of value Microsoft dropped today may be made back tomorrow, but maybe not: if OpenAI is going to slow down and concentrate on safe/aligned AI then that is not quite as good for Microsoft.


It only dropped 2% and it’s already half back in after hours. I don’t think the market thinks it’s Altman who’s the golden boy here.


It's still a completely unnecessary disturbance of the market. You also don't want to bite the hand that feeds you. This would be taking a personal disagreement to Musk-levels of market impact.


What is all this nonsense about MSFT stock price? Nothing material has happened to it.

https://www.google.com/finance/quote/MSFT:NASDAQ


People zooming in too far. If you look at the 1d chart, yeah, something happened at 3:30. If you look at the 1m chart, today is irrelevant.


Why is OpenAI responsible for protecting Microsoft’s stock price?


Well, if for nothing else, they are their biggest partner and investor.


Even Microsoft themselves shouldn’t care about the traders that react to this type of headline so quickly.

This will end up being a blip that corrects once it’s actually digested.

Although, the way this story is unfolding, it’s going to be hilarious if it ends up that the OpenAI board members had taken recent short positions in MSFT.


Yeah and if antitrust regulators weren’t asleep at the wheel they’d be competitors


It's not that OpenAI is responsible, but those board members have burned a lot of bridges with investors with this behaviour. The investor world is not big, so self-serving interest would dictate that you at least take their interests into consideration before acting, especially with something as simple as waiting an hour before the press release. No board would want them now because they are a poisoned apple for investors.


Alternately, there may be mission-minded investors and philanthropists who were uncomfortable with Microsoft's sweetheart deals and feel more comfortable after the non-profit board asserted itself and booted the mission-defying VC.

We won't know for a while, especially since the details of the internal dispute and the soundness of the allegations against Altman are still vague. Whether investors/donors-at-large are more or less comfortable now than they were before is up in the air.

That said, startups and commercial partners that wanted to build on recent OpenAI, LLC products are right to grow skittish. Signs are strong that the remaining board won't support them the way Altman's org would have.


MSFT is still up this week.


> . They didn't wait even 30 min for the market to close causing MSFT to drop billions in that time

Ha! Tell me you don't know about markets without telling me! Stock can drop after hours too.


After-market prices are just a potential trend, as the volume traded is very small and easily manipulated.


Not as much tho right?


That may be the case, but I have a feeling that it will end up being presented as alignment and ethics versus all-in on AGI, consequences be damned. I'm sure OpenAI has gotten a lot of external pressure to focus more on alignment and ethics, and this coup is signalling that OpenAI will yield to that pressure.


Purging a non-profit organization of a greedy MBA aggressively focused on sales and nothing else is always good riddance in my book.


>nothing to do with dishonesty

Who knows, maybe they settled a difference of opinion and Altman went ahead with his plans anyway.


You don't call someone a liar because you have philosophical differences.


So, in this round, Woz won?


What is confusing here is why Greg would have agreed to the language in the press release (that he would be staying at the company and reporting to the CEO) only to resign an hour later. Surely the press release would not have contained that information without his agreement that he would be staying.


> Why would Greg have agreed to the language in the press release

We have no evidence he agreed or didn't agree to the wording. A quorum of the board met, probably without the chairman, and voted the CEO of the company and the chairman of the board out. The chairman also happens to have a job as the President of the company. The President role reports to the CEO, not the board. Typically, a BOD would not fire a President, the CEO would. The board's statement said the President would continue reporting to the CEO (now a different person) - clarifying that the dismissal as board chairman was separate from his role as a company employee.

Based on the careful wording of the board's statement as well as Greg's tweet, I suspect he wasn't present at the vote nor would he be eligible to vote regarding his own position as chairman. Following this, the remaining board members convened with their newly appointed CEO and drafted a public statement from the company and board.


He didn't. Greg was informed after Sam (I'm assuming the various bits being flung about by Swisher are true; she gets a free pass on things like this), so I think the sequence was: a subset of the board meets, forms quorum, votes to terminate Sam and remove Greg as chair (without telling him). Then they write the PR, and around the same time, let Sam and then Greg know. If OpenAI were a government, this would be called a coup.


Governments fall all the time without a coup. They just lose their majority.


Rushing out a press release with vague accusations and without consulting the relevant parties certainly feels more like a coup than a traditional vote of no confidence.


Ha! Most people don’t know how slipshod these things go. Succession had it right: people were always fighting over the PR release, trying to change each other’s statements.


He didn't. From https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...:

> When Altman logged into the meeting, Brockman wrote, the entire OpenAI board was present—except for Brockman. Sutskever informed Altman he was being fired.

> Brockman said that soon after, he had a call with the board, where he was informed that he would be removed from his board position and that Altman had been fired. Then, OpenAI published a blog post sharing the news of Altman’s ouster.


The theory that Altman did something in bad faith means that it might not be a disagreement but something that forced Sutskever to vote against Sam.


The theory is Altman gave his eye-scanning crypto company early access to OpenAI tech without telling anyone, and what ensued is just FAFO in action.


That’s not big enough to immediately terminate him.


Self-dealing like that really is. Not saying I see any reason to suspect it's that and not something else, but, yeah, doing that and concealing it absolutely would be a reason both for firing him and for making the statement they made.

Unless Brockman was involved, though, firing Brockman doesn't really make sense.


Yeah, looking at the self-dealing going on with WeWork and Adam Neumann, it's not that.


Not sure what you mean here. They tried to IPO in 2019 and all the dirty laundry came out, scuttled the IPO and Neumann got ousted.


Ousted to the tune of millions and millions of dollars, tho. Yeah, he didn't get an IPO pop, but he's still worth more than you or I could make in several lifetimes.


There's a difference between self-dealing you sell the board on and self-dealing you conceal from the board (also, different where its a pure for-profit where the self-dealing happens and where there is a non-profit involved, because the latter not only raises issues of conflict of interest with the firm, but also potential violations of the rules governing non-profits.)


The board knew and agreed to it in that case.


I'm also inclined to believe something like this happened.


No way, no Sam - crypto scandal again.


[flagged]


…”Woke”? What does that have to do with anything?


The AI doomerism that seems to underlie the board's decision is just a non-serious endeavor that is more virtue signaling than substance.

The leap I’m making, which seems plausible given their chief scientist, is that the area of research they want to focus on, rather than being a business, is the superalignment theme.


Feels like Gryffindor beheaded Slytherin right before Voldemort could make them his own. Hogwarts will be in turmoil, but that price was unavoidable given the existential threat?


>https://twitter.com/ilyasut/status/1707752576077176907

Dang! He left @elonmusk on read. Now that's some ego at play.


Without Sam running the company, OpenAI couldn't have become what it is today.

And this time around he would have the sympathies from the crowd.

Regardless, this is very detrimental to the OpenAI brand. Ilya might be the genius behind ChatGPT, but he couldn't have done it by himself.

The war between OpenAI and Sam AI is just the beginning


Alec Radford is always left out of these convos; curious.



Wait, this guy really landed an AI lead job in a few hours?

Edit: Ok seems to be a joke account. I guess I’m getting old.


That's a parody account



Sam and Ilya have recently made public statements about AGI that appear to highlight a fundamental disagreement between them.

Sam claims LLMs aren't sufficient for AGI (rightfully so).

Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.

Obviously transformers are the core component of LLMs today, and the devil is in the details (a future model may resemble the transformers of today, while also being dynamic in terms of training data/experience), but the jury is still out.

In either case, publicly disagreeing on the future direction of OpenAI may be indicative of deeper problems internally.


That is just a disagreement on technical matters, which any well-functioning company should always have a healthy amount of within its leadership. The press release literally said he was fired because he was lying. I haven't seen anything like that from the board of a big company for a very long time.


Additionally, OpenAI can just put resources towards both approaches in order to settle this dispute. The whole point of research is that you don't know the conclusions ahead of time.


Seemingly, OpenAI's priorities shifted after the public ChatGPT release, and seem to be more and more geared towards selling to consumers, rather than the research lab they initially seemed to be aiming for.

I'm sure this was part of the disagreement, as Sam is "capitalism incarnate" while Ilya gives off much different feelings.


Maybe some promise was made by Sam to MS for the funding that the board didn't approve. He may have expected the board to accept the terms he agreed to but they fired him instead.


That might be part of it. They announced that they were dedicating compute to researching superintelligence alignment. When they launched the new stuff on Dev Day, there was not enough compute and the service was disrupted. It may have also interfered with Ilya's team's allocation and stopped their research.

If that happened (speculation) then those resources weren't really dedicated to the research team.


The question has enormous implications for OpenAI because of the specifics of their nonprofit charter. If Altman left out facts to keep the board from deciding they were at the AGI phase of OpenAI, or even to prevent them from doing a fair evaluation, then he absolutely materially misled them and prevented them from doing their jobs.


If it turns out that the ouster was over a difference of opinion re: focusing on open research vs commercial success, then I don't think their current Rube Goldberg corporate structure of a non profit with a for profit subsidiary will survive. They will split up into two separate companies. Once that happens, Microsoft will find someone to sell them a 1.1% ownership stake and then immediately commence a hostile takeover.


Interesting, is this how they're currently structured? It sounds a lot like Mozilla with the Mozilla Foundation and Corporation.


Rightfully so?

No one knows. But I sure would trust the scientist leading the endeavor more than a businessperson who has an interest in saying the opposite to avoid immediate regulation.


>Ilya claims the transformer architecture, with some modification for efficiency, is actually sufficient for AGI.

I thought this guy was supposed to know what he's talking about? There was a paper that shows LLMs cannot generalise[0]. Anybody who's used ChatGPT can see there are imperfections.

[0] https://arxiv.org/abs/2309.12288


Humans don't work this way either. You don't need the LLM to do the logic, you just need the LLM to prepare the information so it can be fed into a logic engine. Just like humans do when they shut down their system 1 brain and go into system 2 slow mode.

I'm in the definitely ready for AGI camp. But it's not going to be a single model that's going to do the AGI magic trick, it's going to be an engineered system consisting of multiple communicating models hooked up using traditional engineering techniques.


> You don't need the LLM to do the logic, you just need the LLM to prepare the information so it can be fed into a logic engine.

This is my view!

Expert Systems went nowhere, because you have to sit a domain expert down with a knowledge engineer for months, encoding the expertise. And then you get a system that is expert in a specific domain. So if you can get an LLM to distil a corpus (library, or whatever) into a collection of "facts" attributed to specific authors, you could stream those facts into an expert system, that could make deductions, and explain its reasoning.

So I don't think these LLMs lead directly to AGI (or any kind of AI). They are text-retrieval systems, a bit like search engines but cleverer. But used as an input-filter for a reasoning engine such as an expert system, you could end up with a system that starts to approach what I'd call "intelligence".

If someone is trying to develop such a system, I'd like to know.
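
Not an existing system I can point to, but a rough sketch of the pipeline described above: an LLM distils passages into author-attributed "facts", which are then streamed into a simple forward-chaining rule engine. The `call_llm` function, the prompt wording, and the rule format are all assumptions here, not any particular product's API:

    import json

    def extract_facts(passage, author, call_llm):
        # Ask the model to distil a passage into discrete claims (as a JSON list),
        # then attribute each claim to its author. call_llm is a placeholder for
        # whatever completion API is in use.
        prompt = ("Extract the factual claims from the passage below as a JSON "
                  "list of strings.\n\nPassage: " + passage)
        claims = json.loads(call_llm(prompt))
        return [{"author": author, "claim": c} for c in claims]

    def forward_chain(facts, rules):
        # Classic expert-system loop: apply (condition, conclusion) rules to the
        # fact base until no new facts can be derived.
        derived = list(facts)
        changed = True
        while changed:
            changed = False
            for condition, conclusion in rules:
                for fact in list(derived):
                    if condition(fact):
                        new_fact = conclusion(fact)
                        if new_fact not in derived:
                            derived.append(new_fact)
                            changed = True
        return derived

The point being that the LLM only does the distillation; the deductions (and the explanation of the reasoning) live in the rule engine.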


They should fire Ilya and get you in there


> We provide evidence for the Reversal Curse by finetuning GPT-3 and Llama-1 on fictitious statements such as "Uriah Hawthorne is the composer of 'Abyssal Melodies'" and showing that they fail to correctly answer "Who composed 'Abyssal Melodies?'". The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation.

This just proves that the LLMs available to them, with the training and augmentation methods they employed, aren't able to generalize. It doesn't prove that future LLMs, or novel training and augmentation techniques, will be unable to generalize.


No, if you read this article it shows there were some issues with the way they tested.

> The claim that GPT-4 can’t make B to A generalizations is false. And not what the authors were claiming. They were talking about these kinds of generalizations from pre and post training.

> When you divide data into prompt and completion pairs and the completions never reference the prompts or even hint at it, you’ve successfully trained a prompt completion A is B model but not one that will readily go from B is A. LLMs trained on “A is B” fail to learn “B is A” when the training data is split into prompt and completion pairs

Simple fix - put prompt and completion together, and don't compute gradients just for the completion but for the prompt as well. Or just make sure the model trains on data going in both directions by augmenting it before training (both sketched below).

https://andrewmayne.com/2023/11/14/is-the-reversal-curse-rea...
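
A minimal sketch of those two mitigations, assuming the common convention that label positions set to -100 are ignored by the cross-entropy loss in causal-LM fine-tuning; the phrasing templates are just illustrative:

    def augment_bidirectional(facts):
        # facts: list of (person, work) pairs, e.g. ("Uriah Hawthorne", "Abyssal Melodies").
        # Emit each fact in both directions so "B is A" is also seen during training.
        examples = []
        for person, work in facts:
            examples.append(f"{person} is the composer of '{work}'.")   # A -> B
            examples.append(f"'{work}' was composed by {person}.")      # B -> A
        return examples

    def build_labels(token_ids, prompt_len, loss_on_prompt=True):
        # With loss_on_prompt=True, gradients flow through the prompt tokens as
        # well as the completion, instead of masking the prompt out with -100.
        if loss_on_prompt:
            return list(token_ids)
        return [-100] * prompt_len + list(token_ids[prompt_len:])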


LLMs can't generalise, no, but the meta-architecture around them absolutely can.

Think about the RLHF component that trains LLMs. It's the training itself that generalises - not the final model that becomes a static component.


[flagged]


There is a lack of humility in making such an assertion about a path to AGI.


It is funny believing we have AGI and yet resorting to ad hominem to prove/defend it!

At some point in time Ilya was a nobody going against the gods of AI/ML. Just slightly over a decade ago neural networks were a joke in AI.


>rightfully so

How the hell can people be so confident about this? You're describing two smart people reasonably disagreeing about a complicated topic.


The LLMs of today are just multidimensional mirrors that contain humanity's knowledge. They don't advance that knowledge, they just regurgitate it, remix it, and expose patterns. We train them. They are very convincing, and show that the Turing test may be flawed.

Given that AGI means reaching "any intellectual task that human beings can perform", we need a system that can go beyond lexical reasoning and actually contribute (on its own) to advance our total knowledge. Anything less isn't AGI.

Ilya may be right that a super-scaled transformer model (with additional mechanics beyond today's LLMs) will achieve AGI, or he may be wrong.

Therefore something more than an LLM is needed to reach AGI, what that is, we don't yet know!


Prediction: there isn't a difference. The apparent difference is a manifestation of the human brain's delusion about how human brains work. The Turing test is a beautiful proof of this phenomenon: such-and-such thing is impossibly hard, only achievable via the magic capabilities of human brains... oops, no, actually it's easily achievable now, so we'd better redefine our test. This cycle will continue until the singularity. Disclosure: I've been a long-term skeptic about AI, but that writing is on the wall now.


Clearly there's a difference, because the architectures we have don't know how to persist information or further train.

Without persistence outside of the context window, they can't even maintain a dynamic, stable higher level goal.

Whether you can bolt something small to these architectures for persistence and do some small things and get AGI is an open question, but what we have is clearly insufficient by design.

I expect it's something in-between: our current approaches are a fertile ground for improving towards AGI, but it's also not a trivial further step to get there.


But context windows have reached 100K now, RAG systems are everywhere, and we can cheaply fine-tune LoRAs at a price similar to inference, maybe 3x more expensive per token. A memory hierarchy made of LoRA -> Context -> RAG could be "all you need".

My beef with RAG is that it doesn't match on information that is not explicit in the text, so "the fourth word of this phrase" won't embed like the word "of", or "Bruce Willis' mother's first name" won't match with "Marlene". To fix this issue we need to draw chain-of-thought inferences from the chunks we index in the RAG system.

So my conclusion is that maybe we got the model all right but the data is too messy, we need to improve the data by studying it with the model prior to indexing. That would also fix the memory issues.

Everyone is over focusing on models to the detriment of thinking about the data. But models are just data gradients stacked up, we forget that. All the smarts the model has come from the data. We need data improvement more than model improvement.

Just consider the "Textbook quality data" paper Phi-1.5 and Orca datasets, they show that diverse chain of thought synthetic data is 5x better than organic text.
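
A sketch of that "study the data with the model before indexing" idea: for each chunk, have the model write out the facts it implies, and index those inferences alongside the raw text, so implicit information becomes something a query can actually match. `call_llm`, `embed`, and the list-based index here are stand-ins, not a specific RAG library:

    def index_with_inferences(chunks, call_llm, embed, index):
        # index is a list of (vector, metadata) pairs; a real system would use a
        # vector database instead of a plain list.
        for chunk in chunks:
            inferred = call_llm(
                "List facts that are implied but not stated verbatim in the "
                "following text, one per line:\n\n" + chunk
            ).splitlines()
            for text in [chunk] + [line for line in inferred if line.strip()]:
                index.append((embed(text), {"text": text, "source_chunk": chunk}))
        return index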


I've been wondering along similar lines, although I am for all intents and purposes here a layman so apologies if the following is nonsensical.

I feel there are potential parallels between RAG and how human memory works. When we humans are prompted, I suspect we engage in some sort of relevant memory retrieval process and the retrieved memories are packaged up and factored in to our mental processing triggered by the prompt. This seems similar to RAG, where my understanding is that some sort of semantic search is conducted over a database of embeddings (essentially, "relevant memories") and then shoved into the prompt as additional context. Bigger context window allows for more "memories" to contextualise/inform the model's answer.

I've been wondering three things: (1) are previous user prompts and model answers also converted to embeddings and stored in the embedding database, as new "memories", essentially making the model "smarter" as it accumulates more "experiences" (2) could these "memories" be stored alongside a salience score of some kind that increases the chance of retrieval (with the salience score probably some composite of recency and perhaps degree of positive feedback from the original user?) (3) could you take these new "memories" and use them to incrementally retrain the model for, say, 8 hours every night? :)

Edit: And if you did (3), would that mean even with a temperature set at 0 the model might output one response to a prompt today, and a different response to an identical prompt tomorrow, due to the additional "experience" it has accumulated?
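
For what it's worth, (1) and (2) are straightforward to prototype. A toy sketch, where `embed` is a stand-in embedding function returning unit-length vectors, and the salience weighting (exponential recency decay plus a feedback bonus) is one arbitrary choice among many:

    import math, time

    class MemoryStore:
        def __init__(self, embed, half_life_days=30.0):
            self.embed = embed
            self.half_life = half_life_days * 86400.0
            self.memories = []  # (vector, text, timestamp, feedback)

        def add(self, text, feedback=0.0):
            # Store prompts and answers alike as new "memories" (question 1).
            self.memories.append((self.embed(text), text, time.time(), feedback))

        def retrieve(self, query, k=5):
            # Rank by similarity weighted by recency and feedback (question 2).
            q = self.embed(query)
            now = time.time()

            def score(memory):
                vec, _, ts, feedback = memory
                similarity = sum(a * b for a, b in zip(q, vec))
                recency = math.exp(-(now - ts) / self.half_life)
                return similarity * (0.5 + 0.5 * recency) + 0.1 * feedback

            ranked = sorted(self.memories, key=score, reverse=True)
            return [text for _, text, _, _ in ranked[:k]]

(3) would then amount to periodically fine-tuning on the accumulated store, and yes, that is exactly where a temperature-0 answer could drift from one day to the next.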


> Clearly there's a difference, because the architectures we have don't know how to persist information or further train. Without persistence outside of the context window, they can't even maintain a dynamic, stable higher level goal.

Nope, and not all people can achieve this either. Would you call them less than human, then? I assume you wouldn't, as it is not only sentience of current events that maketh man. If you disagree, then we simply have fundamental disagreements on what maketh man, and thus there is no way we'd have agreed in the first place.


Isn't RAG essentially the "something small you can bolt on" to an LLM that gives it "persistence outside the context window?" There's no reason you can't take the output of an LLM and stuff it into a vector database. And, if you ask it to create a plan to do a thing, it can do that. So, there you have it: goal-oriented persistence outside of the context window.

I don't claim that RAG + LLM = AGI, but I do think it takes you a long way toward goal-oriented, autonomous agents with at least a degree of intelligence.


From my experience there's definitely context beyond the current LLM state: it's how they're able to regurgitate facts or speak at all.


> regurgitate facts or speak at all.

Most of that is encoded into weights during training, though external function call interfaces and RAG are broadening this.


> Without persistence outside of the context window, they can't even maintain a dynamic, stable higher level goal.

I mean, can't you say the same for people? We are easily confused and manipulated, for the most part.


I can remember to do something tomorrow after doing many things in-between.

I can reason about something and then combine it with something I reasoned about at a different time.

I can learn new tasks.

I can pick a goal of my own choosing and then still be working towards it intermittently weeks later.

The GPT LLMs we have now cannot do these things. Doing them may be a small change away, or may not be tractable for these architectures at all... but it's probably in between: hard, yet something that can be "tacked on."


Former neuroscientist here.

Our brain actually uses many different functions for all of these things. Intelligence is incredibly complex.

But also, you don't need all of these to have real intelligence. People can problem solve without memory, since those are different things. People can intelligently problem-solve without a task.

And working towards long-term goals is something we actually take decades to learn. And many fail there as well.

I wouldn't be surprised if, just like in our brain, we'll start adding other modalities that improve memory, planning, etc etc. Seems that they started doing this with the vision update in GPT-4.

I wouldn't be surprised if these LLMs really become the backbone of the AGI. But this is science– You don't really know what'll work until you do it.


> I wouldn't be surprised if these LLMs really become the backbone of the AGI. But this is science– You don't really know what'll work until you do it.

Yes-- this is pretty much what I believe. And there's considerable uncertainty in how close AGI is (and how cheap it will be once it arrives).

It could be tomorrow and cheap. I hope not, because I'm really uncertain if we can deal with it (even if the AI is relatively well aligned).


That just proves we need real-time fine-tuning of the neuron weights. It is computationally intensive but not fundamentally different. A million-token context would function much like short-term memory, and frequent fine-tuning would be akin to long-term memory.

I most probably am anthropomorphizing completely wrong. But point is humans may not be any more creative than an LLM, just that we have better computation and inputs. Maybe creativity is akin to LLMs hallucinations.


Real-time fine tuning would be one approach that probably helps with some things (improving performance at a task based on feedback) but is probably not well suited for others (remembering analogous situations, setting goals; it's not really clear how one fine-tunes a context window into persistence in an LLM). There's also the concern that right now we seem to need many, many more examples in training data than humans get for the machine to get passably good at similar tasks.

I would also say that I believe that long-term goal oriented behavior isn't something that's well represented in the training data. We have stories about it, sometimes, but there's a need to map self-state to these stories to learn anything about what we should do next from them.

I feel like LLMs are much smarter than we are in thinking "per symbol", but we have facilities for iteration and metacognition and saving state that let us have an advantage. I think that we need to find clever, minimal ways to build these "looping" contexts.


> I most probably am anthropomorphizing completely wrong. But point is humans may not be any more creative than an LLM, just that we have better computation and inputs.

I think creativity is made of 2 parts - generating novel ideas, and filtering bad ideas. For the second part we need good feedback. Humans and LLMs are just as good at novel ideation, but humans have the advantage on feedback. We have a body, access to the real world, access to other humans and plenty of tools.

This is not something an android robot couldn't eventually have, and on top of that AIs got the advantage of learning from massive data. They surpass humans when they can leverage it - see AlphaFold, for example.


Are there theoretical models that use real time weights? Every intro to deep learning focuses on stochastic gradient descent for neural network weights; as a layperson I'm curious about what online algorithms would be like instead.
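
They exist; "online learning" is the usual umbrella term, and stochastic gradient descent is already online in spirit - the difference is mostly that deployed LLMs freeze their weights after training. A toy illustration of updating weights one observation at a time (plain least-squares on a linear model, nothing LLM-specific):

    def online_sgd(stream, n_features, lr=0.01):
        # stream yields (features, target) pairs one at a time; weights are nudged
        # immediately after each observation rather than after a full dataset pass.
        w = [0.0] * n_features
        for x, y in stream:
            prediction = sum(wi * xi for wi, xi in zip(w, x))
            error = prediction - y
            w = [wi - lr * error * xi for wi, xi in zip(w, x)]
        return w

    # e.g. w = online_sgd(iter([([1.0, 2.0], 5.0), ([2.0, 1.0], 4.0)]), n_features=2)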


I agree with your premise.

You're right: I haven't seen evidence of LLM novel pattern output that is basically creative.

It can find and remix patterns where there are pre-existing rules and maps that detail where they are and how to use them (i.e. grammar, phonics, or an index). But it can't, whatsoever, expose new patterns. At least public-facing LLMs can't. They can't abstract.

I think that this is an important distinction when speaking of AI pattern finding, as the language tends to imply AGI behavior.

But abstraction (as perhaps the actual marker of AGI) is so different from what they can do now that it essentially seems to be futurism whose footpath hasn't yet been found let alone traversed.

When they can find novel patterns across prior seemingly unconnected concepts, then they will be onto something. When "AI" begins to see the hidden mirrors so to speak.


If LLMs can copy the symbolic behaviors that let humans generate new knowledge, it'll be there.


> , they just regurgitate it, remix it, and expose patterns

Who cares? Sometimes the remixing of such patterns is what leads to new insights in us humans. It is dumb to think that remixing has no material benefit, especially when it clearly does.


> They are very convincing, and show that the Turing test may be flawed

The only thing flawed here is this statement. Are you even familiar with the premise of the Turing test?


Maybe "rightfully so" meant "it is totally within Sam's right to claim that LLMs aren't sufficient for AGI"?


Did Ilya give a reason why transformers are theoretically sufficient? I've watched him talk in a CS seminar and he's certainly interesting to listen to.


From the interviews with him that I have seen, Sutskever thinks that language modeling is a sufficient pretraining task because there is a great deal of reasoning involved in next-token prediction. The example he used: suppose you fed a murder mystery novel to a language model and then prompted it with the phrase "The person who committed the murder was: ". The model would unquestionably need to reason in order to come to the right conclusion, yet at the same time it is just predicting the next token.


Can a super smart business-y person educate this engineer on how this even happens.

So, if there's 6 board members and they're looking to "take down" 2... that means those 2 can't really participate, right? Or at the very least, they have to "recuse" themselves on votes regarding them?

Do the 4 members have to organize and communicate "in secret"? Is there any reason 3 members can't hold a vote to oust 1, making it a 3/5 to reach majority, and then from there, just start voting _everyone_ out? Probably stupid questions but I'm curious enough to ask, lol.


The details depend on what's specified in the non-profit's Bylaws and Articles of Incorporation. As a 501(c)3 there are certain requirements and restrictions but other things are left up to what the founding board mandated in the documents which created and govern the corporation.

Typically, these documents contain provisions for how voting, succession, recusal, eligibility, etc are to be handled. Based on my experience on both for-profit and non-profit boards, the outside members of the board probably retained outside legal counsel to advise them. Board members have specific duties they are obligated to fulfill along with serious legal liability if they don't do so adequately and in good faith.


I had the same questions, and have now learnt that non-profit governance is like this, and that is why it is a bad idea for something like OpenAI. In a for-profit, the shareholders can just replace the board.


Asking ChatGPT (until someone else answers): it says that removing a board member usually takes a supermajority, which makes much more sense... but that still seems to imply they'd need at least 4 of 6.


It could be that only Sam was under vote, and that Greg was forced to step down after voting in Sam's favour, maybe in a second vote later.


Why would Greg have said "after learning today's news" if he took part in the vote? If he decided to quit immediately after the vote then why would the board issue a statement saying he was going to stay on? I don't think he took part, the others probably convened a meeting and cast a unanimous vote, issued the statement and then contacted Greg and Sam. The whole thing seems rushed so that's probably how it would have played out.


> If he decided to quit immediately after the vote then why would the board issue a statement saying he was going to stay on?

Why would they issue a statement saying that he was going to stay on without some form of assurance from him?

I mean, you're writing a release stating that you're firing your CEO and accusing him of lack of candor. Not exactly the best news to give. You're chasing that with "oh, by the way, the chairman of the board is stepping down too", so the news goes from bad to worse. The last thing you want is to claim that said chairman of the board is staying as an employee only to have him quit hours later. I find it hard to believe that they'd make a mistake as dumb as announcing Greg was staying without some sort of assurance from him, knowing that Greg was Sam's ally.


> Why would they issue a statement saying that he was going to stay on without some form of assurance from him?

Maybe to make it clear that if he leaves, it is him quitting not him being fired. This would avoid potential legal issues.

Maybe they thought there was a chance he would stay.


After looking into it: the board would have had to give them notice in case they wanted to attend, but from the sounds of it they may not have bothered to go, which would make sense if they knew they were screwed.


Ah, I really like that theory!

I mean, I still wonder though if they really only need 3 ppl fully on board to effectively take the entire company. Vote #1, oust Sam, 3/5 vote YES. Sam is out, now the vote is "Demote Greg", 3/4 vote YES, Greg is demoted and quits. Now, there could be one "dissenter" and it would be easy to vote them out too. Surely there's some protection against that?


>Can a super smart business-y person educate this engineer on how this even happens.

There is nothing business-y about this. As a non-profit OpenAI can do whatever they want.


Well, there has to be some sort of framework in which they operate, no?

OpenAI isn't a single person, so decisions like firing the CEO have to be made somehow. I'm wondering about how that framework actually works.


You'd have to read the company charter and by-laws.


There aren't really any rules specified in law for this, unlike corporate law, which mandates that companies be structured a certain way. We'd have to see OpenAI's operating by-laws.


Thank you for a real answer, this is what I was looking for!


Funnily enough, I just started watching Succession last week.

This feels like real-life Succession panning out. Every board member is trying to figure out how to optimize their position.


It's a comedy but I feel like I learned a lot about SV and VC/board culture from watching HBO's Silicon Valley.


This suggests that Greg Brockman wasn't in the board meeting that made the decision, and only "learned the news" that he was off the board the same way the rest of us did.


You've put "learned the news" in quote, but what Greg Brockman wrote was "based on today's news".

That could simply mean that he disagreed with the outcome and is expressing that disagreement by quitting.

EDIT: Derp. I was reading the note he wrote to OpenAI staff. The tweet itself says "After learning today's news" -- still ambiguous as to when and where he learned the news.


It's all very ambiguous, but if he had been there for the board meeting where he was removed, I imagine he would have quit then and it would have been in the official announcement. It comes across like he didn't quit until after the announcement had already been made.


> and it would have been in the official announcement.

It is:

> As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

https://openai.com/blog/openai-announces-leadership-transiti...


> and will remain in his role at the company

The portion you quoted says he will remain at the company. This post is about him quitting, and no longer remaining with the company.


Boards cannot meet or act without notice to board members and the opportunity for them to participate.


They can if the meeting is about the problematic board member(s).


Still have to give notice


I'm not sure those rules apply to non-profits.


They're usually in the Bylaws. MIRI's Bylaws, iirc 23 years after I wrote them, contain a provision like that.


Well, yeah, he wouldn't be allowed to participate in deliberation about his own removal.


Maybe, but there's a difference between not being in the deliberation, and not being notified until the entire planet was.


wait...isn't "the decision" referred to in the parent comment about the removal of Altman?


Greg's removal was announced in the same press release as Altman's


ah ok. I thought the board decided to remove Altman, then Brockman quit in response, so there was no deliberation about his (Brockman's) removal.


He was removed as Chairman at the same time (close enough that they were announced together, and presumably linked in cause, though possibly a separate vote) as Altman was removed as CEO.


ah ok makes sense. I thought he just resigned in response to Altman's ouster, so there was no board decision to remove Brockman.


He was chairman of the board, no? surely he was in the meeting? More likely it's some kind of schism and he was on Sam's side.


> He was chairman of the board, no? surely he was in the meeting?

Since he was removed as Chairman at the same time as Altman was as CEO, presumably he was excluded from that part of the meeting (which may have been the whole meeting) for the same reason as Altman would have been.


Just guessing here, but I think the board can form a quorum without the chair, and vote, and as long as they have a majority, i think they can proceed with a press release based on their vote.


It varies by jurisdiction and board rules, but this is a common setup and a very reasonable guess.


Since he was chair of the board... I'm curious how the rest of the board implemented this.


In the board I served on in the past we had an agreed quorum where we could make binding decisions if ~2/3rds of the members were present.

Probably a similar situation.


which makes sense because it'd take 2/3 to implement something if they're unanimous.

just base logic.


Did Greg know of the decision beforehand? I don’t think so. This totally business-as-usual post from Greg Brockman happened 1 hour before the one from OpenAI: https://x.com/gdb/status/1725595967045398920 https://x.com/openai/status/1725611900262588813 How crazy is that?!


"Greg Brockman, co-founder and president of OpenAI, works 60 to 100 hours per week, and spends around 80% of the time coding. Former colleagues have described him as the hardest-working person at OpenAI."

https://time.com/collection/time100-ai/6309033/greg-brockman...


What’s the likelihood this is over a Microsoft acquisition? Purely speculative here, but Sam might have been a roadblock.

Edit: Maybe this is a reasonable explanation: https://news.ycombinator.com/item?id=38312868 . The only other thing not considered is that Microsoft really enjoys having its brand on things.


Axios claims MS had no prior knowledge.


This suggests that a real schism is occurring at OpenAI. I anticipate hearing of two different philosophies at play.


It would be interesting if Altman and Brockman, assuming that they want accelerated development, ended up in some high level roles at Microsoft. That seems like a win-win-win all the way around since OpenAI could follow their new path, Microsoft and Google could build new things fast, and the open model proponents can keep up their good work.

Except for a clumsy fast press release, this doesn’t really have to end badly for anyone.

Even though I have been an OpenAI fan ever since I used their earliest public APIs, I am also very happy that there is such a rich ecosystem: other commercial players like Anthropic, open-model support from Meta and Hugging Face, and the increasingly wonderful small models like Mistral that can easily be run at home.


Sam Altman is not the kind of person that goes into a bigco management job. He'll be doing another startup come Monday.


Yes. I changed my perspective - Altman would not fit in and be happy at Microsoft.


It seems like your first opinion was correct.


Sam and those that want to continue stealing ip and destroying entire industries in the process will leave, while ethical machine learning scientists will remain.


As a sanity check - when Hinton left Google we celebrated it on the grounds that a faceless corporation shouldn't be dictating the velocity of development; it should be the AI pioneers who understand the risks.

If indeed a similar disagreement happened in OpenAI but this time Hinton (Ilya) came on top- it’s a reason to celebrate.


In my opinion, these people were not fit to run an enterprise originally labeled as "Open"AI, especially when Musk donated 100 million dollars to making sure it remained open while others in the company deemed it better to be closed. At this point, I must wonder if I support XAI instead over these companies.



And Satya seems to have been caught off guard as well. What is happening?


I think I see it now. Speculation following:

They achieved AGI internally, but didn't want OpenAI to have it. All the important people will move to another company, following Sam, and OpenAI is left with nothing more than a rotting GPT.

They planned all this from the start, which is why Sam didn't care about equity or long-term finances. They spent all the money in this one-shot gamble to achieve AGI, which can be reimplemented at another company. Legally it's not IP theft, because it's just code which can be memorized and rewritten.

Sam got himself fired intentionally, which gives him and his followers a plausible cover story for moving to another company and continuing the work there. I'm expecting that all researchers from OpenAI will follow Sam.


> Legally it's not IP theft, because it's just code which can be memorized and rewritten.

That is not how IP law works. Even writing new code based on the IP developed at OpenAI would be IP theft.

None of this really makes sense when you consider that Ilya Sutskever, arguably the single most important person at OpenAI, appears to have been a part of removing Sam.


Could Ilya maybe be the one pushing for AGI, while Sam wasn't? And the board wants the Skynet they were promised?


I thought this was a joke and would end with "/s" and now I'm just left with mouth slightly agape, completely in awe.


No I don’t think that’s what happened at all. Also memorizing code and rewriting it is very much IP theft.


Is it?

Seriously, I’m asking. Like… if you were an engineer that worked on UNIX System V at AT&T/Bell Labs and contributed code to the BSDs from memory alone, would you really be liable?


GPT and transformer code has been open-sourced many times over by different companies. The weights and the operational model are where the IP really is. That includes the architecture for distributing training and inference at this scale. That said, any developer or scientist worth their salt will be able to replicate it from memory - without having to copy anything 1:1.

So unless any of the necessary bits are patented, I highly doubt an argument against them starting a new company will hold in the courts.

Sometimes the contracts can include a cool-down period before a person can seek employment in the same industry/niche, I don’t think that will apply in Sam’s case - as he was a founder.

Also - the wanting to get himself fired intentionally argument doesn’t have any substance. What will he gain from that? If anything, him leaving on his own terms sounds like a much stronger argument. I don’t buy the getting-fired-and-having-no-choice-but-to-start-an-AGI-company argument.

An interesting twist would be if he joins Elon in his pursuit. Pure speculation, sharing it just for amusement. I don’t think they’ll ever work together. Can’t have two people calling the shots at the top. Leaves employees confused and rarely ever works. Probably not very good for their own mental health either.


> Sometimes the contracts can include a cool-down period before a person can seek employment in the same industry/niche, I don’t think that will apply in Sam’s case - as he was a founder.

It's very difficult to enforce anything like this in California. They can pay him to not work, but can't just require it.


It's actually easier to enforce a noncompete in California on a founder or principal of a firm than it is on an employee. I don't recall the exact, legal specifics, but it has something to do with the fact that those people are in some way attached to the "goodwill" of the original business, which is something of value that the company can protect.

Someone else can probably say it better than I can, but that's how I understand it at this moment.


Probably depends on what the code is, how material it is to AT&T's business, and what agreements are in place. IANAL. You're not going to get sued for routine stuff.


It is not a copyright issue unless you typed out the exact code from memory. It could be a patent issue if it behaves the same way.


It's not if it's not literal. You can easily reimplement the same ML architecture which you have written before. Also, it's not really OpenAIs IP, if they kept it secret.


Even if it is literal, it may not be infringement. See "rangeCheck" in the Oracle v. Google case.

https://www.theverge.com/2017/10/19/16503076/oracle-vs-googl...


Interesting view - I, and many others presumably, would really like more insight into the source and nature of this speculation.

I am not dismissing the possibility, far from it. It sounds very plausible. But are there any credible reports to back it up?


Speculation means: the forming of a theory or conjecture without firm evidence.

It's just a fun theory, which I think is plausible. It's based on my personal view of how Sam Altman operates, i.e. very smart, very calculative, makes big gambles for the "greater purpose".


I believe the comment was edited. The original comment made a mention of “seeing a lot of speculation” (paraphrasing) that piqued my curiosity.

The source of the speculation could further enhance or remove the probability of this being true. For instance, a journalist who covers OpenAI vs. a random tweeter (now X’er?) with no direct connection. It’s a loose application of Bayesian reasoning - where knowing the likelihood of one event (occupation of speculator and their connection to AI) can significantly increase the probability of the other event (the speculation).


This is certainly a take, but MSFT would probably sue the hell out of them if they tried to do this.


US patent lawyers can’t touch China mainland.


Right, so he’s going to flee the US for China?!? Come on.


If you understood a bit of the math you would know there is no AGI, and there will be no AGI on the current path.


What "bit of math" are you referring to? Similarly, would you have said the same things one year ago about the capabilities that ChatGPT currently possesses?


The AI safety people may be one of the most destructive forces in tech history.


Could you elaborate on what you mean by that/why you think so?


AI x-risk is a load of hogwash based on extremely faulty reasoning ill-adapted to real ML architectures and political/economic reality. There is absolutely no reason to worry about the deceptive turn of a paperclip maximizer. Yet because of these sci-fi trope fears, real human progress is being held back.


Because there is nothing even approaching the claimed risk of "AI", and they're stifling the growth and potential that LLMs have at vastly improving our lives.


It is much ado about nothing, to quote our lauded poet of the English canon. Who gives a shit about AI, really?


I agree, let me query what I wish from the AI; it is literally no different from current search engines.


How do I profit from this news. Should I buy or sell MSFT?


The same way you profit without the news, buy VTSAX and wait 10 years.


This, tho.


Both, you can't lose!


Well, if you are doing put and call options betting on increased volatility, I think you’re right!


> > How do I profit from this news

Ignore and focus on your life, the grapevine in your neighborhood about who is selling their car or their house is not as exciting but will net you way more money than this happening thousands of miles away from you. And most importantly without having to fuck with leverage.


Thanks. I'm doing reasonably well in life.


That is true for millionaires and billionaires alike: the best ROI opportunities are geographically close to you, before they are sniffed out by other people.

Betting on the World Cup Final vs. betting on a local match where you know a team has been clubbing and drinking until late into the night at your bar.

Local advantage.


Who isn't? That does not mean you cannot do well on financial news. The best way forward is to buy VTI/VTSAX (they are equivalent, just depending on whether you want a mutual fund or an ETF) and wait 20 to 50 years.


MSFT stock already dropped a bit before the bell on the news. That may be baked in by faster movers. This is not stock advice but I'm more inclined to sell NVDA as whatever happens next is a distraction and will slow AI market growth and inclined to buy Google as they have an opportunity here to do some poaching.

But for clarity's sake, I'm doing neither personally, because I'm not a day trader and take a longer-term view.


if you wanted to day trade, you'd buy msft because this drop is clearly nominally related but the odds that it has a substantive effect is really unlikely. similar stuff happened during Trump's tenure where they just had bots trading on Trump tweets.


Buy Worldcoin, obviously /s

They're probably firing up the eyeball scanning machines on this news.


Ooof, I would sell personally unless they manage to seize control of OpenAI by Monday.


I did both. ;)


If the primary issue was safety vs. performance, then in the end performance is going to win. Such is the nature of AI, as has been written about for decades.

But right now, the board undoubtedly feels the most pressure in the realm of safety. This is where the political and big-money financial (Microsoft) support will be.

If all true, Altman's departure was likely inevitable as well as fortunate for his future.


I wonder if OpenAI employees will start resigning en masse because of this as a form of protest. The board better have a very good reason to back up their decision, if they decide to elaborate at all anyway.


There will definitely be people in Sam's camp that want to leave (I would guess a lot of product people?), but a lot of other people in Ilya's camp who want to stay. Notably, Ilya is the actual scientific and technical asset, and his team are much more likely to be loyal to him than Sam because they work with him. Even if Sam takes away a lot of admin and product roles, the core of the tech capabilities is likely to stay under OpenAI controls. That said, Microsoft doesn't have to keep giving them compute (well, they've got signed agreements Microsoft will honour, but MSoft doesn't have to go further than that), so the new OpenAI will still have to make concessions to their commercial investors for the same reason the company got a for profit subsidiary in the first place: because compute costs money, a lot of it.


So given that there are 6 board members, Ilya had to have voted to oust Sam?


Presumably Sam recused himself so not necessarily


Why would he recuse himself? Sam seemed happy to work at OpenAI.

FWIW, radio silence from Ilya on twitter https://twitter.com/ilyasut


> Why would he recuse himself?

Usually mandatory for decisions about a board member for them to be recused. That there is an overwhelming potential for conflict between personal interest and the firm’s is pretty clear in that case.


Because it's a conflict of interest - a CEO should absolutely recuse themselves from a vote on whether they should be removed. If a CEO refused to do so, the board should adjourn and reconvene without the CEO present.


Do you mean X?


Given he was the subject of the vote, he likely wouldn’t even be able to participate.


This all seems so weird, and the list of Board members doesn't make this any easier to understand. Apart from the 3 insiders, there are 3 other board members. 2 of them seem complete no names and might not qualify for any important corporate board. In a for profit shareholders in theory control the board, in a non profit I am not even sure of who really has control over things.


Three other people have left the board this year: Reid Hoffman, Will Hurd and the person from Neuralink.


What I find incredibly odd is the lack of a Microsoft board seat, considering their large ownership in OpenAI. Something does not add up.


Microsoft has zero ownership of the entity the board controls (the OpenAI nonprofit), and a for-profit firm having seats on a nonprofit board especially if it was because they invested in a for-profit subsidiary of the nonprofit would raise serious issues of the “nonprofit” being run for purposes incompatible with its status.


Sure; but its still weird that Microsoft agreed to the deal with the board in the state that it was; not just no board seat, but three absolute outsiders, two of them extremely unqualified. We may look back on their decision to buy 49% of OpenAI as a big misstep.


Wasn't it a Hail Mary for Microsoft? They're not doing anything else particularly earth shattering, and if this came out without them, they'd be even less relevant. If OpenAI fought this and won without them, Microsoft would have nothing to compete against Google and everyone else with.

Did Microsoft have any other route to AI relevance?


Sure, they could simply copy the GPT papers (as they are entirely public) and implement them inside their own products, as they are doing already with GitHub and Office. There is really no need to hang onto OpenAI's word.


Microsoft got access to their IP and capitalized on it.

Likely it's already brought them more than $10B they paid.


And much of that money is/will be spent on Azure. That’s incredibly valuable data and return on investment


How much did they pay for that 49% stake?


Something like $10 billion.


Already a good investment then, even if this fundamentally changes how impactful OpenAI is going forward.


There is one OpenAI board member who has an art degree and is part of some kind of cultish "singularity" spiritual/neo-religious thing. That individual has also never had a real job and is on the board of several other non-profits.

What the hell were they thinking? Just because you are a non-profit doesn't mean you should imitate other non-profits and put crazies on the board.


The only explanation I can find is that their importance went through a step function at the launch of ChatGPT, and before that it didn't matter who was a board member.


Smells like XYZ agencies, or some white gloves.


No, all native Californians are like this (this just replaces hippie Buddhism with hippie computer worship) and the singularity stuff is the reason OpenAI was founded in the first place. And the reason Elon is mad at them, because they pivoted from it.


Hippie computer worship seems like indirectly self worship as the creators.


Thanks, love the insight.


And their old CEO even runs a cryptocurrency scam. Truly an interesting bunch of people.


Non-profits can too often be uniquely bureaucratic, undertrained in governance and efficiency, and more tied to personal interpretations of their mission (or to none at all), leaving them open to bouts of oversimplification.


> I am not even sure of who really has control over things.

Honestly, this is the big problem with Big Non Profit (tm). The entire structure of non-profits is really meant for ladies clubs, Rotary groups, and your church down the street, not openai and ikea.


ikea is a non-profit?!?



I think the Novo Nordisk foundation is the largest now. It owns a majority of both Novo nordisk and Novozymes.

https://en.wikipedia.org/wiki/Novo_Nordisk_Foundation


Ikea has the wildest legal structure, but yes, a lot of IKEA is technically owned by a couple of "nonprofits" which happen to pay out a lot of money to the Kamprad family.


The same way that Rolex is technically a non-profit. Complete bullshit legal wrangling.



It's a foundation in Luxembourg, with a Dutch subsidiary that owns some offices in Sweden.


So is Rolex!


What are ladies clubs and rotary groups?


The latter is for Mazdas, the former is something we can't discuss in a SFW forum


Small scale social groups


> ladies clubs

Or lads clubs. Don't leave us out.


See my comment above. I don't think OpenAI's absurd corporate structure will survive this.


> in a non-profit I am not even sure of who really has control over things

The board is in absolute control in a not-for-profit. The loophole is that some have bylaws that make ad-hoc board meetings and management change votes very difficult to call for non-operating board members, and it can take months to get a motion to fire the CEO up for a vote.

In some not-for-profits, the board often even manages to recruit and seat new board members. Some not-for-profits operate as membership associations, where the organization’s membership elects the board members to terms.

On the few not-for-profits where I was a board member, we started every meeting with a motion to retain the Executive Director (CEO). If the vote failed, so did the Executive Director.


The board does and they are not supposed to have a financial stake in the non-profit. Usually they just vote their friends on. Welcome to the loony tunes that is nonprofit management.

Clearly Microsoft staked its whole product roadmap on 4 random people with no financial skin in the game.


> Usually they just vote their friends on

You actually think that for-profit corporate boards are significantly different, especially in the startup/early IPO phase?


From Tom Perkins's biography - after serving on the boards of both big private companies and non-profits, he said that non-profits were much worse. His theory was that with no money at stake it's all about egos, and that causes weird situations to happen.

Also, I worked in startups and my ex-gf in various nonprofits, and the amount of drama she saw was way higher than in the commercial world


Sure, the investors own the company and the board answers to them. Nonprofits are significantly disconnected from their own financial incentives. I have witnessed it at every nonprofit I have worked for.


I mean... the OpenAI foundation is literally not motivated by profit. I guess the main question here is how the board was chosen, and why Sam didn't make sure they were friendly to him.


In the early stage the investor does not own the startup. 20-30% stake would be typical. Hence why a Series A investor usually demands a board seat and special considerations.


Investor here is not someone who puts cash in professionally without running the company. Investor here means whoever owns the stock. There is always an investor in a company even if its just the founder owning 100% stock.

The board reports to the shareholders and the management reports to the board.

In early stage companies it is possible and likely that all three are the same person, that doesn't change the different fiduciary responsibilities for each role they play.


The word you are looking for is "shareholder."


I specifically did not use the word shareholder.

This has not do with beneficial ownership of the underlying asset alone. Principals sometimes do not have that relationship. Asset ownership is a common way to benefit from a entity, but not the only way.

Specifically here Sam Altman does not own shares in the for-profit entity and non profit entities do not have shares.

I don't have direct knowledge on how OpenAI handles it, however it is not uncommon to do revenue sharing, or lease an underlying asset like a brand name (WeWork did this) from the Principal directly, or pay for perks like housing, planes etc, or pay lot of money in Salary/Cash compensation, there are myriad ways to benefit from control without share ownership.


> Sure, the investors own the company and the board answers to them.

Huh? Plenty of startups in the stage being referenced are still majority owned by the founders.


Even if I only owned 1% of Google I’d be very motivated to vote in the best financial interests of the company. If I owned 0% not so much.


But those are people who have some skin in the game right? And shareholders can change the board structure right?


I was at amzn when jeff formed the first board. No skin in the game, and no shareholders with any votes. I gather this is pretty typical.


But Jeff was the shareholder and those were his nominees right? Not to mention he was mostly able to pick the board as needed. In for profit corporation there is a clear ultimate ownership in shareholders. No such thing here.


The claim was that non-profits "just put their friends on the board". No difference.


That does sound like loony tunes. If the board elects itself then I think it is a very very bad arrangement for something as important as OpenAI.


That is one problem with non-profits. They end up with completely unprofessional leadership because they hire their friends who are crazies just like themselves.

When things cool down in a few months we will learn Altman and Brockman were some of the few sane people on the board.


Sort of reminds you of the Silicon Valley Bank board.


Maybe non-profits are just frontends of some three letter agencies :)


You think too highly of the government


What if government is also some frontend?

Man I'm drunk in conspiracy theories tonight. Between a huge lay off and the Open AI fiasco please allow me indulge myself...


The Government is a front for the Illuminati.

The Illuminati are a front for the Jews™ (not to be confused with Jewish people).

The Jews™ are a front for the Catholic Church.

The Catholic church is a front for the Lizard People.

The Lizard People are a front for the Government.

Nobody is in control. The conspiracy is circular. There is no conspiracy. Everything in this post is false. Only an idiot cannot place his absolute certainty in paradoxes.


Are we going to start speculating about insane conspiracy theories now?


That doesn't seem insane to me in this case. OpenAI is easily the most important non profit for any Government in the whole world.


Governments will have their own black-budget private LLM networks, they don't need OpenAI. The NSA probably has a whole cluster of them in its data center in Utah, trained on every public and private communication they've slurped up over the years, likely a generation or two ahead of what's available to the public.


This is immensely dumb. What secret cabal of researchers would they be hiring that would be capable of being ahead of Deepmind/OpenAI? Where exactly would they find these people? Shadow MIT? CalTech2?


Military and intelligence technology is almost always ahead of the private sector. Governments have practically infinite money and resources to throw at the problem, including for recruiting and industrial espionage.


The only people who think this are people who have never been associated with a top research org. You NEVER hear about anyone, let alone the top people, going to work for government. They all get scooped up with big tech salaries or stay in academia.

The military would need to be literally breeding geniuses and cultivating a secret scientific ecosystem to be ahead on AI right now.


The military does have a secret scientific ecosystem. Where do you think all of its advanced classified technology and cryptography comes from, the Hammacher Schlemmer catalog?


I can tell you right now that the government agencies are ahead of for-profit ones. Whether you choose to believe it is up to you.


They don't need you to be their pr department. Their products are based on research done at Google and Meta, they're not the only ones working on this and they're also one of the smaller players in the space.


I never made any of the points you are contesting and my point still stands. And they are not a smaller player in this space, they are the most well known player.


yes. this is a very strange event, given the relationship to what we may call "cutting-edge applied DL" technology, right after DevDay, with two key players dropping out. GDB leaving is pretty wild, IMO; it indicates something, maybe on the engineering level, wasn't above board. Anyways, we shall see. I think some conspiratorial thinking is fine, especially if it's backed up with some evidence. This comment isn't, but the fact remains this is pretty weird and people should let their minds wander and connect dots that maybe they half-remember. IMO


CIA has been destabilizing and puppeteering governments around the world. Why are you so steadfastly assured that they wouldn't meddle in the US?

Not saying there is proof, but we just found out Ukraine blew up the Russian pipeline so it seems weird to just squash debate at the 'that's too crazy to ever happen'. Way crazier things have happened/are constantly happening.


Is the thing with the pipeline actually confirmed?

Anyway if I was in business of destabilizing governments around the world I would not bother dealing with board meetings. But maybe that's just me.


Knowing what some of those three letter agencies have gotten caught doing, I'm not so sure this particular one would be so insane.


More like four letter agencies. AKA the stock tickers of large companies.


Twist: The AGI is not only already there, it is running rampant already.


Building something on top of GPT, I am now worried.


Claude is a drop in replacement for most use cases. You'll be fine.


Who is Claude?


claude sucks


why? you should always assume that your dependencies might go down, or shut up shop, or become your competitor.


GPT-4 has effectively no competition for what you can extract out of it without any fine-tuning.


Mr President, the second tower has been hit


The current deal with MSFT, as cut by Sam, is structured in such a way that Microsoft has huge leverage: exclusive access, exclusive profit. And after the profit cap is reached, OpenAI will still need to be sold to MSFT to survive. This is just about the worst possible deal for OpenAI, whose goal is to do things the open-source way; it can't do so because of this deal. If it were not for the MSFT deal, OpenAI could have open-sourced its work and might have resorted to crowdsourcing, which might have helped humanity. Also, quickly reaching the profit goals is only good for MSFT: there is no need to actually send money to the OpenAI team, just cover operating expenses plus 25% of the profit and take the other 75%. OpenAI has built great software through years of work and is simply being milked by MSFT when the time comes to take profit.

And Sam allowed all this under his nose, making sure OpenAI is ripe for an MSFT takeover. This is a back-channel deal for a takeover. What about the early donors who donated toward the humanity goal, whose funding made it all possible?

I am not sure Sam made any contribution to the OpenAI software, but he gives the world the impression that he owns it and built the entire thing by himself, without giving any attribution to his fellow founders.


Either the board will resign and Altman will return.

Or Altman will start a competitor.


If non profit boards elect themselves, I see no reason for the board to resign, for half the board it is their biggest life accomplishment, and there is no shareholder to vote them out.


He's probably on Atlas right now starting the company.


Exactly...new competitor forming as we speak


so I guess non-profits do not have non-competes? I have no idea


Non competes are illegal in California (if there is no sale of equity in excess of assets involved).


Meta should do a ChatGPT clone just like they did a Twitter clone.


They sort of did one before ChatGPT (and after InstructGPT) with Galactica, but didn't have the nerve to keep it up.


Galactica was focused on academic use cases and got a lot of hate due to hallucinations.


Because their Twitter clone went so well for them?


100 million monthly active users at the end of October, and still growing, according to Zuck. Is that not considered successful anymore?


disappointing @veec, with this mindset you'll never be in three comma club. /s


Llama?


Has the same lack of capitalisation as Sam Altman's message, wonder why


Some conflict over using toUpperCase() ?


snake_case debate got out of control


Showing it wasn't written by ChatGPT? (While one can easily ask ChatGPT to use lower case only ...)


i predict because it's hip to talk that (this) way nowadays


Not sure if that's why they're doing it, but all lower case writing has strong hacker culture roots.


1 qu17.


Yeah, this is not a good sign for OpenAI. Or at least not for those who appreciate what OpenAI was working to become.


I'm glad OpenAI happened, but I'd be happier if it stumbles a little and does not full-on capture the entirety of AI. I think a shake-up is good for the world.


I guess we'll see much faster enshittification now?

Buy 100 prompts now with AIBucks! Loot box prompts! Get 10 AI ultramax level prompts with your purchase of 5 new PromptSkin themes to customize your AI buddy! Pre-order the NEW UltraThink AI and get an exclusive UltraPrompt skin and 25 AIBucks!


Altman was the one who stood to benefit financially from OpenAI selling out. The rest of the board do not have equity in the company. If anything we'll see a reversal of the whole "Open"(to paying customers)AI


He didn't have equity in OpenAI either, he seemed to be running it for fun. Of course, he can get cash payments.


I'm highly suspicious of these types of claims. Steve Jobs was famously on a salary of $1, to get Apple back on track....no mention of the 7.8 million stock options backdated to maximize the gains on the share price.

They're all getting paid one way or another.


Sure, they can promise to give him some later. But he doesn't have them now; unvested anything would have expired.

That's the reason nobody does stock options anymore though, it's all RSUs now.


Someone had mentioned in the other thread - he had equity via some yc investment somehow.


He appears to say he doesn't.

https://x.com/sama/status/1725748751367852439

Though any fund containing MSFT must be correlated.


80-clicks captchas is not enough enshittification for you?


Remember folks, the trigger point that makes MS's investment worthless is reaching AGI, as that triggers the charter rules that take the IP back out of commercial products... that is the crux of the firing... one board faction felt they had reached AGI, and the people who were forced out or left felt they had not yet reached it.

From my brief dealings with SA at Loopt in 2005, SA just does not have a dishonest bone in his body. (I got a brief look at the Loopt pitch deck due to interviewing for a mobile dev position at Loopt just after Sprint invested.)

If you want an angel-invest play, find the new VC fund Sam is setting up for hard research.


What an unbelievable turn of events! To the outside observer, OpenAI is one of the most successful, well-oiled machines, shipping nearly weekly and doing an unbelievably good job marketing itself. Clearly, there's a lot of turmoil going on behind the scenes.


Ehh the whole “I don’t have equity” thing was a bit strange to me.


what was strange about it? seemed pretty straight forward to me


>what was strange about it?

There's 8 billion people on the planet nowadays, of those, about 7.9 billion would not lift a finger if there's no material benefit to them. Hence why it's strange.


I think the ratio is basically the opposite, but thanks for explaining.


Ok, now I'm curious, do you live in a monastery or a very small community?


no, I live in a US urban environment and my experience is that most people enjoy doing good and helping others, especially when it is simple and low effort.


> simple and low effort

You would agree that OpenAI is neither? So the comparison doesn't hold, then?


You set the bar for most people at lifting a finger, not running a global company.

If you want to talk about rarer cases, there are lots of examples of people who literally sacrifice their lives and die for no personal benefit.


So weird.


Definitely lots of tape and bubblegum holding things together. Like any fast growing company! And this one is breaking speed records.


And Sam was the face of AI to the general public, which makes these moves all the more perplexing.


I think you're likely vastly overestimating the amount of people in the general public who have any idea who Sam is at all.


Just look at how this news is doing on reddit (a service I conflate a little more with the general public than hacker news, which leans towards silicon valley technology) and you can easily see the truth of your statement.


Interestingly parts of this comment section are behaving in Reddit-y ways, posting board members' Linkedins and questioning their credentials, as if their jobs are to just rubber stamp the CEO's calls.


that's not un-HN. No one has any clue, and all are speculating, so it is one theory after another. We have a dozen posts on capitalization alone.



he wasn't a very good face, most of my friends don't recognize who sam is, they know what chatgpt is though.


OpenAI is still riding on a fast wave of success from ChatGPT. Let's see how they're doing in a year.


We don't really know anything about the internal finances at this point. The product is solid but who knows how fucked up things are. Maybe they were on track to run out of money. GPU compute ain't cheap.


None of that can be a reason for a step like this. OpenAI can easily charge much more for its products, and there is a market even at extremely high prices (even if not as big), and given this is a nonprofit, it doesn't even need to make billions of dollars.


OpenAI exists both as a nonprofit and, for several years now, as a for-profit company [1] that has taken billions of dollars in investment. It needs to make billions of dollars to return to investors just as much as any other for-profit company does.

[1] https://openai.com/blog/openai-lp


The non profit is the majority owner of the for profit, and there is no investor pressure here to make billions.


Could that not change as the board changes?


I think the board is required to be a majority non-equity-holders precisely because an equity-holding board will not keep to their non-profit mission.


Since it's a private non-profit corp, the rules might be whatever they want them to be.

Arms-length neutrality on a Silicon Valley board might still work out like it does everywhere else, as other comments have stated. Maybe someone can shed some light on it.


I’m presuming it was put into place as part of creating the capped-for-profit entity, to make sure the for-profit couldn’t itself permanently misalign the non-profit’s board.


One should consider the fact that in this case the ex-CEO is independently wealthy, so he clearly wasn't doing it for the money. In addition, he started it with partners, originally as a nonprofit. Finally, he didn't own any equity in the company despite being one of its core team and founding members, in addition to being its very public face. How many of us in the same position could be controlled, given that the only thing to lose is the ability to influence the other board members, which he seems to have lost anyway?


Given Greg seems to not have known about the board meeting there's a good chance Mira didn't either. Is she next?


Mira wasn't on the board, but if they were concerned about her they wouldn't have made her interim CEO.


Despite what the charter says, why is OpenAI called OpenAI and not OpenAGI? Is that at the core of this issue?


Is this like Star Trek where the mission automatically ends if you lose enough of the bridge crew?


Greg is the one who announced GPT-4. Sam enabled Greg and vice-versa.

The next AI winter may have just begun...


Yea cause Steve Jobs dying stopped apple from becoming a juggernaut. People need to stop idolizing the fact that one or two people are "indispensable". Humanity moves forward eventually, even if Einstein wasn't born, someone would have figured out general relativity.


It's also quite silly that society often credits one guy at the top who supposedly has "incredible vision" and yet would likely fail at explaining even the most basic technical details. And if such a person must be credited, why not the CTO, chief engineers, or principal scientists, who are at least closer to what actually drives the technical innovations than the CEO?

In reality, it's actually the 1000s of actual engineers that deserve most of the credit, and yet are never mentioned. Society never learns about the one engineer (or team) that solves a problem that others have been stuck on for some time. The aggregate contributions of such innovators are a far more significant driving force behind progress.

Why do we never hear of the many? It's probably because it's just easier to focus on a single personality who can be marketed as an "unconventional genius" or some such nonsense.


Our stupid monkey brains are evolved to work in a primitive, human centric way, we always need a "figure", a "leader" to look up to, we can't comprehend that many people can be involved in something, that doesn't satisfy our primate brains need to follow or worship someone.


Human motivations and effort are like Brownian motion: completely stochastic and hard to direct in any one direction to make a significant impact.

An effective leader, whether it is Musk, Jobs, Altman, Gandhi, Mandela (or Hitler for that matter), has the unique skill of being able to direct everyone in a common direction efficiently, like a superconducting material.

They are not individually contributing like, say, a Nobel laureate doing theoretical research. The accolades they get are because they were able to direct many other people to achieve a very hard objective and keep them motivated and focused on the common vision. That is rare and difficult to do.

In the case of Altman, yes, there were 1000s of researchers and programmers who did all the actual heavy lifting of getting OpenAI where it is today.

However, without his ability and vision to get funding, none of them would be doing what they are doing today at OpenAI.

All those people would not work a day more if there were no pay, and would not be able to train any model without resources. A CEO's first priority is to make that happen by selling the vision to investors. Secondly, he has to sell the vision to all these researchers to get them to leave their cushy academic and big-company jobs to work at a small, unproven startup, and create an environment where they can thrive in their roles. He has done both very well.


Unrelated, but maybe you mean special relativity. Poincaré was very close and others like Lorentz would have made the logical leap to discover special relativity. Most scientists however agree that GR would have taken much longer for someone to fill in the crucial gap of modeling gravity as the geometry of space time.

But sooner or later someone would have done it.


Thermonuclear winter is more likely at this point. 4 AI safety believers that formed a board majority just got spooked.


> The next AI winter may have just begun...

Because two executives were ousted from a company? That's dramatic.


this entire thread is a fascinating treatise on AI philosophy mixed with business conspiracy.


If the AI ecosystem is so fragile that the ouster of two men from one start up is enough to destroy it then it wasn't ever a solid bet. I don't think this will mean much for the broad viability of these systems. Gpt is clearly valuable, but I guess we need to figure out if these systems can be run profitably. I'm not sure people have given much thought to how insanely expensive running massive gpu clusters is. It might just fundamentally not scale well.


what a fucking ridiculous statement - sam altman is just a YC VC machine man, and I'm sure openai can find another CTO in the hottest ML market in history.


We just don’t know enough yet. Sam could’ve been let go over a disagreement about direction. Or he could’ve been cooking the books. Or he could’ve been concealing the true operating costs. Or subscriber numbers. Some of those things just require a change in leadership. Others are existential risks to OpenAI.


Or their first real AGI could have ousted him.


The CEO and board aren't the people who create the actual products or do the research.


Thomas Crapper stepped down from the Crapper company in 1904, which is why we don't have Crappers today.


"the next AI winter may have just begun" -- good!

time to stop playing with existential fire. humans suffice. every flaw you see in humans will be magnified X times by an intelligence X times stronger than humans, whether it is autonomous or human-led.


we can only hope

i'm sick and tired of everyone sticking a chatbot on random crap that doesn't need it and has no reason to ever need it. it also made HN a lot less interesting to read


Irrelevant but it struck me how both him and Sam Altman write in all lowercase.


Quitting like that makes it seem like Greg already knew what was brewing, whatever the conflict was and it came to a head and he made his call. So not a total surprise to him, at least as far as the backstory goes.


On the contrary this makes it seem like he was surprised.


No, he does not seem surprised as he had his mind made up about an important life decision and a carefully crafted resignation letter ready in a matter of hours.


> carefully crafted resignation letter

Are you trolling, the letter is short and all lowercase lol


Maybe he used ChatGPT to write the letter.


I think if he didn't know why, he would wait to find out what the story was.

Instead, he knew enough to make his call immediately; he knew what he was going to do.



Just hope this will bring more "Open" into "AI.com"


This Board has to go.


Sam Altman is much more replaceable than Ilya Sutskever


Finally some non-Elon-related drama in tech happening.


Elon co-founded OpenAI. So, this isn’t exactly non-Elon.


and mira murati spent 3 years at Tesla


non-Elon-related so far


If it comes out Elon had a hand in this, I might as well cancel my Netflix.


You still need it to check Silicon Valley references.


Soon: "New CEO of xAI revealed to be Sam Altman"


Never, Elon despised Sam's pivot from non-profit to for-profit. He invested 100M in the beginning.


I'm sure there's a non compete clause in his employment


Unlikely, non-compete clauses are not enforceable in California.


Not for principals/ founders. The law applies to employees.


That would be happening but Elon won't want to be overshadowed


And then in a plot twist it turns out Elon was the AGI


Or Musk returns to the board of OpenAI after a hiatus.


yeah I'm sure he'll have something stupid to say about it very soon to attract the attention to himself.


this keeps getting better and better



Let's suppose all that stuff is absolutely true, that when Sam Altman was a 13 year old kid he assaulted his 4 year old sister, that she's troubled because of it, and he made some attempts to buy her off, perhaps money in exchange for silence. Why would the board decide to suddenly fire him because of that, after all this time? He was a minor who would not understand the consequences.

No, I'm confident that it has nothing to do with that. It must have to do with the current business. Maybe there's a financial conflict of interest. Maybe he's been hiding the severity of losses from the board. Maybe something else. But you don't fire a CEO because you discover that he committed a crime at age 13.


If this is true, which it very well could be, it's clearly not ok/right. However, this was known for some time. She was tweeting about this in 2021, and this was discussed again in October of this year.

None of that makes sense as to why the board would randomly fire him. I don't think it's this.


That is very old for it to cause a reaction like this now.


substantive evidence might have cropped up.


I don't like the direction this is heading in at all...



Jesus. I saw he was "demoted" but he's totally out now, right?


Well "out" as in he explicitly quit. He wasn't fired.


He was removed as Chairman of the Board but was allowed to remain in his other role as President reporting to the new acting CEO, but apparently wasn't interested.

OTOH, I don't think it would surprise anyone that he would quit, and that may well have been the intent.


That's what I thought.

OpenAI's statement implies he was aware of the demotion... but his statement seems to imply he wasn't.

I guess the most likely situation is that they put out the press release, told him (or vice versa) and it took him a bit to decide to quit.


> OpenAI's statement implies he was aware of the demotion.

Maybe my experience with corporate communications is different, but all it implies to me is that he was not removed as President and was being permitted to stay on under the new CEO.


>As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

It's totally fair (your interpretation) to think that. He was removed as chairman though, which IMO is a demotion. I think it's disingenuous on the part of OpenAI _unless_ Greg originally said he was ok with the new terms. If a company says "Person X will remain as President and report to the CEO," you would think they had worked it out with person X _before_ announcing it.


For sure.

Wasn't he essentially demoted before quitting though? I guess this means he wasn't even aware he was demoted.


His last sentence is the main hint. Disagreement over the dangers of AGI discoveries and how to handle them, I'd guess.


Given the way the board announcement ended on the commitment to the original charter of AI for everyone, and the way this mention of safety is being thrown in, I suspect it may have been strong disagreements over keeping research closed in the name of safety versus open with the research community for the sake of advancement for all.


GPT4 has been closed for a long long time, nothing technical was ever released. The board can't suddenly one day wake up from a long sleep and decide that things have to become open right this minute.


Maybe it didn't. Maybe it's been pushing on this for a bit and Altman kept paying lip service to it or made commitments in their eyes, and those weren't being followed through on


> pushing..for a bit

They could have waited another 30 mins for the markets to close before making the move. This isn’t the culmination of a long-standing problem.


A non profit board is not as professional as a public company board.

Looking at the people on the current board, it doesn't seem they have a lot of experience being independent board members in large public corporations.

No non-profit has this level of public scrutiny. It could just be that they were sloppy because they are not professional board members.


I think claiming to quit over AGI danger is more believable to outsiders than “I want to spend time with family, totally coincidentally after my boss was fired”


No one actually believes in that baloney, not enough to fire the CEO over anyway.


The money & startup people co-opted the scientific non-profit. Not vice versa.


I mean AGI


Honestly it could be something as simple and sordid as a credible sexual assault allegation that he lied to the board about, or just some plain old fashioned embezzlement.


Hope they fire the person that forced verified phone numbers on new accounts.


Verifying a phone number is one of the last things that is still effective when fighting bot registrations. The alternative is to ask for money at registration.

Here is an idea for the hacker news crowd: make a service that acts as a proxy for phone number validation. The user validates their phone number once in that app, and any other third-party service can then ask the app for a security code that confirms phone number ownership. We do something similar by offloading phone number confirmation to a Telegram bot. This proxy service could also optionally manage lists of "bad" phone numbers used by spammers and add other protections.
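
To make that concrete, here is a minimal sketch of what such a proxy could look like (Python/Flask; the endpoint names, token scheme, and send_sms stub are all hypothetical, and a real service would need persistent storage, rate limits, and abuse handling):

    # Hypothetical sketch of a phone-validation proxy.
    # Assumptions: Flask is installed; send_sms() is a stub for a real SMS provider;
    # endpoint names, token format, and TTLs are invented for illustration.
    import hashlib, hmac, os, secrets, time

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    SECRET = os.environ.get("PROXY_SECRET", secrets.token_hex(32))
    PENDING = {}  # phone -> (code, expiry); use a real store (Redis etc.) in practice


    def send_sms(phone: str, text: str) -> None:
        # Stub: plug in a real SMS gateway here.
        print(f"SMS to {phone}: {text}")


    def sign(phone: str, ts: int) -> str:
        # HMAC token a relying service could verify with the shared secret.
        msg = f"{phone}:{ts}".encode()
        return hmac.new(SECRET.encode(), msg, hashlib.sha256).hexdigest()


    @app.post("/start")
    def start():
        phone = request.get_json()["phone"]
        code = f"{secrets.randbelow(10**6):06d}"
        PENDING[phone] = (code, time.time() + 300)  # code valid for 5 minutes
        send_sms(phone, f"Your verification code is {code}")
        return jsonify(status="sent")


    @app.post("/verify")
    def verify():
        data = request.get_json()
        phone, code = data["phone"], data["code"]
        stored = PENDING.pop(phone, None)
        if not stored or stored[1] < time.time() or not hmac.compare_digest(stored[0], code):
            return jsonify(status="rejected"), 400
        ts = int(time.time())
        return jsonify(status="ok", phone=phone, ts=ts, token=sign(phone, ts))


    if __name__ == "__main__":
        app.run(port=8080)

A third-party site would send the user through /start and /verify once, then accept the returned HMAC token as proof of number ownership instead of running its own SMS flow.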


> Alternative is to ask money for registration.

I'm 100% ok with this. I have the choice of using a Visa/MC gift card I bought with cash. Same as I can do with Netflix. Better than linking a unique ID I use everywhere else.

I think what bugs me the most is that there's no direct need for the phone. It's reasonable to give my phone number to a doctor's office because I need to hear from them over the phone.


> I have the choice of using a Visa/MC gift card I bought with cash.

Technically you can also pay for a burner service to get temporary phone numbers to receive SMSs for registering to services. Can’t attest if any of them are good or trustworthy. I recently looked into it but everything I found was a subscription and/or shady looking.


KeyBase could have been that service. I really wish they'd stayed focused on identification rather than crypto wallets and chat.


Nice idea, I hope somebody builds it. Signal just showed that they're spending $6 mil/year for verifying phone numbers.


That's why we offloaded phone validation to Telegram - it is too costly to send SMS in countries other than our home market, and spammers keep finding ways to get phone numbers for free from various VoIP providers. We also had to implement fairly involved SMS sending limit logic to avoid abuse.
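
For what it's worth, the core of that limit logic can be fairly small; here is a minimal per-number throttle sketch in Python (in-memory only, and the limits are made-up examples; a real deployment would back it with Redis and also rate-limit per IP and per number prefix to catch cheap VoIP ranges):

    # Hypothetical per-phone-number SMS throttle: at most N sends per window,
    # with a minimum gap between sends. Numbers below are illustrative only.
    import time
    from collections import defaultdict, deque
    from typing import Optional

    WINDOW_SECONDS = 3600      # look at the last hour
    MAX_SENDS_PER_WINDOW = 3   # at most 3 codes per number per hour
    MIN_GAP_SECONDS = 60       # and at least 60s between sends

    _sends = defaultdict(deque)  # phone -> timestamps of recent sends


    def may_send_sms(phone: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        history = _sends[phone]
        # Drop timestamps that fell out of the window.
        while history and now - history[0] > WINDOW_SECONDS:
            history.popleft()
        if history and now - history[-1] < MIN_GAP_SECONDS:
            return False
        if len(history) >= MAX_SENDS_PER_WINDOW:
            return False
        history.append(now)
        return True


    if __name__ == "__main__":
        # Quick demo: three spaced-out sends pass, the fourth is blocked.
        t = 0.0
        for i in range(4):
            print(i, may_send_sms("+15550001111", now=t))
            t += 120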


And the person who decided that the settings (disable usage of data for training) and (save prompts) can't be individually controlled. Also that the default is that your data is used for training purposes. Both are clear indications that privacy was of no importance to them.


You can opt out of training and keep your history turned on in the privacy center. You fill out a form and indicate your country of residence.


Which privacy center? I only have settings. And in settings there is the sub-category Data controls where I disable "Chat history & training".


It was previously a Google form without any confirmation. As of late October, they moved it to this privacy center that they keep conveniently well hidden.

This page provides confirmation that your request is processed: https://privacy.openai.com/policies


> After learning today’s news,

Which news is that?



many things are happening


what a day


Holy shit


There's no job quitting note like an all lower-case job quitting note.


it's a way to soften your tone so it doesn't sound angry. but then it sounds like soft anger.


I just read it as illiterate and am amazed someone that deliberately writes like that was chairman of the board.


This is honestly par for the course for anyone who didn't grow up a digital native and is a career businessman who didn't spend it hand-composing messages. I wouldn't read too much into it. Direct emails from executives that aren't filtered by their assistants all look like this.


They're working so hard they don't have time to use the shift key.


You don't think Greg is a "digital native?"


Oh god no, born in '89 and went straight from college to an executive role and stayed there.

Like it's not a bad thing, I'm not implying any kind of judgement but keeping those things in context helps you know that "K." means something totally different coming from your dad.


Because the I’s weren’t capitalized? It just looks like he turned off auto-caps


Slightly easier to read in many cases in all lower case.


just having a little chat on irc here

nothing too crazy


2010s business power move


His previous posts are a mix of all small and regular capitalization. Could be as simple as mobile vs laptop.


it's so pretentious


Here is Helen Toner's resumé: https://www.linkedin.com/in/helen-toner-4162439a/details/edu...

I am genuinely flabbergasted as to how she ended up on the board. How does this happen?

I can't even find anything about fellow board member Tasha McCauley...


Just linking to the education tab of her profile is misleading.

Many people in AI safety are young. She has more professional experience than many leaders in the field.

https://www.linkedin.com/in/helen-toner-4162439a/


McCauley's linkedin: https://www.linkedin.com/in/tasha-m-25475a54/ . From some digging, full name appears to be Aimee Nastassia 'Tasha' McCauley, and is married to Joseph Gordon-Levitt (the actor).


People are forgetting that OpenAI started as a legit non-profit. It was not meant to be a big money making startup. So presumably getting a cushy board member position was not as hard because this was just some weird philanthropy thing that some SV people were funding...


Helen Toner is famous among the AI safety community for being one of the most important people working to halt and reverse any sort of "AI arms race" between the US & China. The recent successes in this regard at the UK AI Safety Summit and the Biden/Xi talks are due in large part to her advocacy.

She is well-connected with Pentagon leaders, who trust her input. She also is one of the hardest-working people among the West's analysts in her efforts to understand and connect with the Chinese side, as she uprooted her life to literally live in Beijing at one point in order to meet with people in the budding Chinese AI Safety community.

Here's an example of her work: AI safeguards: Views inside and outside China (Book chapter) https://www.taylorfrancis.com/chapters/edit/10.4324/97810032...

She's also co-authored several of the most famous "survey" papers which give an overview of AI safety methods: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=%22h...

She's at roughly the same level of eminence as Dr. Eric Horvitz (Microsoft's Chief Scientific Officer), who has similar goals as her, and who is an advisor to Biden. Comparing the two, Horvitz is more well-connected but Toner is more prolific, and overall they have roughly equal impact.

(Copy-pasting this comment from another thread where I posted it in response to a similar question.)



Getting "page not found". I don't think I have an account....


Oh man. https://twitter.com/apples_jimmy/status/1725615804631392637?...

Really wonder what this is all about.

Edit: My bad for not expanding. No one knows the identity of this "Jimmy Apples", but this is the latest in a series of correct leaks he's made about OpenAI for months now. Suffice it to say he's in the know somehow.


...that's a random post, Jimmy's not some OpenAI insider lmao. Hope he sees this


Pretty sure he works at OpenAI. Beyond that, hard to say how important he is.


Lolwut? Who is Jimmy Apples?


Don’t you know?

Random twitter guy has thoughts on $Current_Event and a witty quip about the “vibes”. It’s crucial we post this without context to the discussion


Nobody knows who he is, but this is the latest in a series of correct leaks he's made about OpenAI for months now. Fair enough, I didn't expand, but "Jimmy Apples" of all people being unfazed by this revelation, which supposedly even Microsoft was unaware of, is the funniest timeline.


You probably could have included that information in the original comment, as it's super useful for anyone not intimately familiar with the twitter-sphere around OpenAI.


That's Fair. I've included it now.


Escaped AGI /s

Random? Twitter account who's leaked a few things at Open AI for months now.


Founder of e/acc movement.



Thank you for correcting me. I'm not sure why I thought he was the e/acc founder.


What is this thing of people not using their shift key anymore? Sam Altman does the same. I don’t trust these people. ;)


Why? Do they seem shiftless?


My phone and apps autocapitalize; I'd have to go out of my way to use all lowercase. Weird.


all lower case has a longer hacker culture history.


Context please.


a callback to the old IRC days


I still maintain that IRC is better than Discord. It was so nice to have "real" servers.


I'll take lazily not using Shift over pecking at Caps Lock!


It amazes me there are people who don't remap their capslock to something useful.


Capslock -> Ctrl gang


I don't get that one though, Ctrl is already right there? I do Esc personally, because vim.


Ok, what's with the weird punctuation and lack of capital letters in their tweets?

Both seem horribly rushed, with no autocomplete?


Reminds me of how rich YouTubers make apology videos. No makeup (but yes makeup), no script (but yes script)... make an effort to seem effortless


I've seen lowercase wording used by people who don't want to put effort into something. Personally, I consider it a sign of dismissal towards the audience. If you see this coming from someone, it might be an indication they don't care. This is based purely on personal experience.


it's a casual writing style that has become popular online in the last decade or so. don't overthink it


It's usual in chats, not for formal writing like officially announcing your resignation. At least that's my impression.


Maybe that's why it's used? Twitter is just addressed to the public, so it's not official in any way; it's just communication. The lowercase casual style is probably just used to drive empathy.


This is the sector of the economy that IPOs in cargo shorts; I don't think formal writing would be expected or appreciated.


To be fair, that's not a very formal resignation. He just... quit.


Yes, it's very likely he had absolutely no idea and rushed something out.


Neither @sama nor @gdb are known to communicate in all lowercase. That fact that both did today must mean something.


It means there was no shifty business.


Hopefully somebody who makes crosswords for the NYT uses this as inspiration.

No shifty business?

LOWERCASEWRITING is one too long for a M-S but could work on a Sunday I guess.


Facepalm

It took a minute (and reading other comments) before that clicked :)

SHIFT key...


There is no reason for Altman or Brockman to use the shift key, since as far as OpenAI is concerned they have already lost control.


There is one set of capitalized letters - in Greg's post: "AGI"


The coordinated and disciplined communication from both suggests this wasn’t a surprise. Is this a planned move in a grander scheme?


@sama tweets are all in lowercase, except for the acronyms. @gdb has a mix of them.


Don't usually see schizo posting leaking into hn. This is some good stuff


OK this was clearly BS.



Wow look at that garbage Twitter thread following it, with shitty ads and .eth jackasses. Twitter has really turned into 4-chan lite


Is there some weird chic thing where you intentionally don’t use capital letters? What is up with that behaviour?

Is it some cute attempt at saying “an AI didn’t write this”?


> Is there some weird chic thing where you intentionally don’t use capital letters? What is up with that ridiculous behaviour?

I was wondering the same thing. Always, on purpose, avoiding starting sentences with capital letters. Both this guy and Sam Altman. What ... why ... ?


I've seen this from a couple of VP/SVP level execs at companies I've worked. My pet theory is it's some kind of weird "My time is too important to even use the shift key" signal. They probably add up the cumulative amount of time they would spend using the shift key and multiply by their compensation and realize they could buy another car if they just optimized that useless key out of their lives.


at this point, it's work to keep autocorrect from fixing this.


Plenty of people don't use autocorrect.


Sure — my point is only that by and large it’s on by default, and you have to actively turn it off (or de-correct manually).


So you know that they are rule and norm breakers, no rule too tiny to be broken and skipped in the name of productivity/health/insert-hot-thing-of-the-day


That's it. Disruption as a way of life. True entrepreneur mindset. Yeahhhh.


personally, it comes from spending too much time on IRC back in the day and now thinking it's normal not to capitalize ;-) a bit of a stylistic choice

but over time I've become accustomed to capitalizing a bit more often and it's become sort of random. I actually have auto-capitalization turned off on my phone


Oh that’s an interesting source I haven’t considered. It is rather stylistic.



Maybe auto-correction is off because you need to type too many acronyms and too much jargon, and auto-correction is annoying?


Unlikely. They are separate settings both on iOS and Android.


a habit of a certain era of internet kids.


Some people just don't like to press shift. I moaned at one of my friends about this, years ago, and I got a 1-word reply that really stuck in my mind:

> thomas.

And there is indeed no law against not pressing shift.


Yea, I'm not a fan. I'm not a grammar nazi, but it makes reading a bit unpleasant.


The screw is tightening. Britain's largest newspaper has just called out AI companies on their intellectual property and content theft.

The game is over boys. The only question is how to make these types of companies pay for the crimes committed.


companies will just go to japan where its legal


I don't think you can conclude that the ship is sinking, per se; rather, those at the helm are being changed.


The cat's out of the bag, for-profit or not ... those who make a living off of copyright will just have to deal with it, as they dealt with Napster and all the other innovations.


There is no crime. Observing something is not a crime.


Oh no not Britain's largest newspaper! A nation known for the quality of its media. It's over for openai, I'm sure they'll fold rather than just not doing business with a comparatively small market if anything came out of that lol.


You’re missing the point. Raising awareness of the scale of this theft is aimed at swaying public opinion.

That way, legitimate machine learning companies can thrive and research for ai can continue without the nuisance.

Incidentally forums are filling up with horror stories from people working at or interviewing with openai, in spite of their paid trolls spamming forums left and right and reporting reddit posts to suppress people. Perhaps the bubble has burst.

Openai has done more harm to ai than any other company.

The cat’s out of the bag.


nice, hope there are openings in high level positions and they switch to remote

I’m never going back to Noe Valley for less than $500,000/yr and a netjets membership


You willing to slum it in a shared seat that has supported other billionaires' behinds?

I wouldn't ... but you do you!


I’d otherwise be slumming it in a shared seat that supported middle class behinds

I wouldn't move back to San Francisco anywhere and hybrid would be a midweek affair


I find the lack of information on the CTO and certain board members disturbing.

Like, who is Mira Murati? We only know that she came from Albania (one of the poorest countries) and somehow got into some pretty good private schools, and then some pretty good companies. Who are her parents? What kind of connections does she have?


Why would you consider the fact that she was born in Albania to be suspicious?


You should look at the list of board members to get even bigger questions.


Yes. Actually that was my motivation to check things up.


Why is the fact that she is unknown disturbing?


It is more that someone going from one of the poorest countries in the EU straight to pretty good private schools and pretty good companies is weird.

Not going to say it's impossible, but she is doing so well yet has left so few footprints on the Internet.

Again just my personal early night conspiracy drink. Don't take it seriously.


If I had a dollar for every Ivy school student whose parents were government functionaries in poor second and third world countries I could probably get a few years of Amazon Prime. But so what? I think it's good for the talent, and good for this country.


And Ilya Sutskever was born in the Soviet Union during its end stage crisis days. Your point being?


Hopefully not too pedantic but Albania is not in the EU.


Who cares where she came from? Do you think that a poor country cannot produce really smart people? Making assumptions that her family has some connections and that's why she is successful is pretty stupid.


This is turning into a situation that OpenAI may not be able to recover from. Typically if the CEO and Chair of the board depart under these circumstances there was something illegal happening.


> typically if the CEO and Chair of the board depart under these circumstances there was something illegal

When they are fired by the board, it sends a very different signal.


So is this the way the government takes control of the AI, next OpenAI will have new but familiar owners from Raytheon and the other Pentagon crowd?


Governments don't need to "take control" of things, they get tax payments and can pass laws.

The US has never spent less on its military than it does now, and the military industrial complex has never been less important, because the rest of the economy has grown so much larger. So it's funny to see people still using Cold War-era conspiracy theories from when it actually mattered.


Where is the conspiracy theory?


Raytheon and the Pentagon are secretly controlling OpenAI by changing its CEO?


I didn't declare it, it was a question, but thanks for slandering me as a conspiracy theorist


Postulating crazy conspiracy theories is no way to go through life


Where is the conspiracy theory?


It’ll be far more mundane than a spy novel plot.



