Three senior researchers have resigned from OpenAI
879 points by convexstrictly on Nov 18, 2023 | 672 comments
Jakub Pachocki, director of research; Aleksander Madry, head of the AI risk team; and Szymon Sidor.

Scoop: theinformation.com

Paywalled link: https://www.theinformation.com/articles/three-senior-openai-...

Submitting information since paywalled links are not permitted.




This is escalating rather quickly. It is an incredibly irresponsible move by the OpenAI board. Hypergrowing company, and now they managed to shake up their users' trust in leadership stability. This has Adam D'Angelo's fingerprints all over it (for context, he did overthrow his co-founder, and Quora has been struggling ever since). This guy shouldn't sit on any board ever again.

I predict the board will be fired, and Sam and the team will return and try to contain the situation.


> and now they managed to shake up their users' trust in leadership stability

Do users care about that? I care about feature stability and avoidance of shitification.

That's why I usually prefer open models to depending on OpenAI's API. This drama has me curious about the outcome, and if it leads to more openness from OpenAI, it may win me back as a user.


> Do users care about that? I care about feature stability and avoidance of shitification.

Maybe not the individual users, but the enterprises/startups which build around OpenAI.



Leadership stability is feature stability and avoidance of shitification. Just look at Twitter, I mean X.


When leadership takes on a gigantic amount of VC-backed debt that must be paid back whether or not there was ever a business model that could justify the loans, then you get shitification.


They're trying to get to AGI. I don't think keeping the current ChatGPT features stable is their primary goal.


> Do users care about that? I care about feature stability and avoidance of shitification.

I pay for ChatGPT, and I care.

What percentage of users, and how many in absolute numbers, is a matter of debate, but this nonsense (and it is nonsense) is antithetical to building a strong, trusting relationship with AI. At the very least, it's just as antithetical to their mission.

If we take a step back, the benchmark now is to be actually transparent. Radically transparent. Like when Elon purchased Twitter and aired all the dirty laundry in the Twitter Files transparent. The cowards at OpenAI hiding behind lawyers advising them of lawsuits are just that, cowards. Leaders stand by their principles in the darkest of times, regardless of whatever highfalutin excuses one could hide behind. It's pathetic and embarrassing. A lawsuit at a heavily funded tech startup at this level is not even a speeding ticket in the grand scheme of things.

95%+ of tech startup wisdom from the last decade is completely irrelevant now. We're living in a new era. The idea people will forget this in a month doesn't hold for AI. It holds for food delivery apps, not AI tech the public believes (right or wrong) might be an existential threat to their prosperity and economic future.

The degree of leadership buffoonery taking place at OpenAI is not acceptable and one must be genuinely stupid to defend it. Everyone involved should resign if they have any self-respect.

My prognostication is that the market will express its displeasure in the coming weeks and months, setting the tone for everyone else going forward. How the hell is anyone supposed to trust OpenAI after this?


Are you forgetting it's a nonprofit? How could the board be fired? What does their charter say is the mechanism for removing a board member?


Yeah, I misspoke earlier. Although nobody has actual power on paper, public and investor pressure can be just as influential.


Could Microsoft not hire Sam (reporting directly to Satya) and those who departed and equip them with compute access and ancillary resources? It seems less of a lift than salvaging the OpenAI situation internally, due to the emotions and politics involved, non-competes not existing in California (broadly speaking), and the logistics of attempting to apply pressure to a 501(c)(3) board with very little leverage. The value is in the team, many of whom are now free agents.

Parting ways with OpenAI might be the only option if the org remains firm on the direction it has chosen. Build internally to reach capability parity and then accelerate ahead of them while slowly rolling out of the agreement with OpenAI, reallocating those previously committed resources internally.

“Due to the actions of OpenAI’s board, Microsoft had no choice but to defend its investment in this revolutionary technology.” The PR wire writes itself.


> The PR wire writes itself.

Technically speaking, only because PR has been replaced by ChatGPT :-)


> Technically speaking, only because PR has been replaced by ChatGPT :-)

It only appears like that because PR writing has become careful and systematic, which is the kind of writing ChatGPT does very well.


Even if they could, why would Sam accept it? The smartest move for Sam is to just start his own for-profit company, easily raise a fuckton of money, hire all the talent from OpenAI and carry on with whatever he was doing. I think this is OpenAI's loss more than anything else. Now if the reason is truly a push against Sam's for-profit direction, I wonder if OpenAI will back it up by releasing their models to the public again. That would be world-changing, especially if the successor to GPT-4 is already trained.


Sam and the board have realized the existing structure of OpenAI does not make them (Sam, board, investors) as wealthy as a for-profit structure would. This is the start of winding down OpenAI. I will not be surprised at all if Sam does what you have said and some members of the existing board invest.


It’s a fair point, and I suppose the question is the math around the equity potential of a new org built from scratch vs. being issued a boatload of Microsoft equity and the future profit potential from that grant, while being able to walk right into a fully operationalized environment.


There's already a Microsoft Sam.


> It seems less of a lift

I’ve seen this expression a lot recently and it baffles me.

The word you are looking for is “effort,” or if you prefer adjectives, maybe something like “difficult.”


Idioms exist and probably always will. I personally think they add a pleasant amount of variety and depth to communication, and sometimes even add deep nuance/context (even if I don't like every set of jargon or slang).

Even your "looking for" is a metaphor since you technically can't "look" for words (except as a metaphor for literally reading in a dictionary?) but we all know exactly what you mean. Moreover, if we trimmed language down to a minimal set and always used extremely precise meaning that might be an even worse experience than the "corporate speak" you're frustrated by.

Maybe you can redirect your anger to the part of corporate speak that I personally find annoying which is not the phrases per se but the propensity for using lots of words to say very little and to avoid directly taking responsibility for things. Let's put a pin in that one for now though and get something on the calendar to hash that out so we can get on the same page and circle back when we have a better bird's eye view on the action items and the right person to be decider :)

On the other hand you could take up loglan/lojban and maybe end up happier? Especially if it resulted in fewer meetings and managers.


yeah!

and also:

let's all jump on a call, set kras so we stay on the ball, up our team work to get that perk, get our messaging right, so the kpi chart goes up and to the right.

go team! play ball!


This will blow your mind but English has endless dialects and minor variations. This is like complaining that someone calls soda pop or says y'all.


I’m aware, but this phrase seems to be more meaningless corporate-speak than regional dialect or variation. The only purpose of phrases like this is to make the speaker sound smart at the cost of obfuscating what they mean.


Obfuscating? What's the purpose of using this fancy word? Why not just say "make hard to understand?"


I’m not against using big words when they’re used according to their actual accepted meaning. But take my upvote regardless :)


Obfuscating is an easy word to understand. Only 11 letters. You, sir, are phantasmagorical. (16)


How is it obfuscated? The meaning is perfectly clear in context. You're stretching too much for this lift.


> You’re stretching too much for this lift

I read/hear sentences like this all day at work and I’ve taken to just interpreting them literally. So I’ll have you know I’m neither exercising nor on an elevator right now.


Englisc must ðêos hwierfan


My very rough attempt at translation: "Yes, English must change"

Am I close?


I tried for “English must not change” but sadly I never bought that Anglo-Saxon dictionary I lusted after in my favorite used bookstore 30 years ago.


Spoken like a true language hypergrowth apologist.


There aren’t investors. It’s a non-profit.

Everyone seems to have lost their mind missing this point.


Who fires the board at a 501(c)(3)?


They resign under public pressure, I guess?


Why would they if they are advancing the goals of the non-profit and Altman was endangering that goal?


Hahaha "advancing the goals of non profit" right after making billions.

All of a sudden their amnesia stopped huh?

Hypocrites and virtue signallers, the whole board.


They do need billions to train the next generation models though


I mean, the board doesn't have equity and the for-profit subsidiary is profit capped


They still stand to benefit personally, do they not?


By what mechanism? Three-quarters of the remaining board have no financial interest in OpenAI.


Personal/moral: "Championing" non-profit after reaping the fruits of massive commercial success

Monetary: Promotions and payouts (now or future)? Equity is not the only way


Is it a massive commercial success? They have received significant funding, and likely have high revenue, but is it profitable? Is it in a position to be profitable? We're not in the world of companies getting decades of runway from VCs anymore, between concerns around a recession (justified or not), higher interest rates, etc.

There are more and more competitors rapidly appearing in this space, as well. OpenAI has a significant first-mover advantage, but I don't know that it is insurmountable, and I doubt investors are confident that it is either. That means they're even less likely to have infinite runway.

So I'm not sure there's personal/moral success at this point in the story for the board to begin with.

Monetary - three-quarters of the board are independent. They are not actually employed by OpenAI. There's nothing to promote them to, and nothing in the charter of the non-profit that would give them payouts.


In that case, Microsoft


Microsoft owns a stake in the for-profit company. They do not have a stake in the non-profit. The for-profit is majority-owned by the non-profit org.


I absolutely guarantee you that when Microsoft owns 50%, which they paid $50,000,000,000 for, Microsoft is really in charge.

The board and Ilya will all be gone within a month.


That makes very little sense. Microsoft spent the last week enthusing about the leadership and vision of OpenAI and their strong partnership, even in Nadella’s keynote speech at the Ignite conference. It’s Microsoft’s biggest event of the year. This makes them look pretty stupid, and the announcement came before market closing hours on a Friday, which put a dent in their share price.

However, I would not be surprised if Microsoft take advantage of this unexpected situation for their gain.


https://openai.com/our-structure

Microsoft doesn't even own a majority stake in the for-profit, much less anything at all in the non-profit that ultimately controls everything.


To all the commenters in this thread, here we are a few days later.....

https://www.bloomberg.com/news/articles/2023-11-19/openai-ne...

OpenAI Negotiations to Reinstate Altman Hit Snag Over Board Role

    OpenAI’s leaders want board removed, but directors resisting
    Microsoft’s Nadella leading high-stakes talks on Altman return
By Emily Chang, Edward Ludlow, Rachel Metz, and Dina Bass. November 20, 2023 at 7:17 AM GMT+11; updated on November 20, 2023 at 7:47 AM GMT+11.

A group of OpenAI executives and investors racing to get Sam Altman reinstated to his role as chief executive officer have reached an impasse over the makeup and role of the board, according to people familiar with the negotiations. The decision to restore Altman’s role as CEO could come quickly, though talks are fluid and still ongoing.

At midday Sunday, Altman and former President Greg Brockman were in the startup’s headquarters, according to people familiar with the matter.

OpenAI leaders pushing for the board to resign and to reinstate Altman include Interim CEO Mira Murati, Chief Strategy Officer Jason Kwon and Chief Operating Officer Brad Lightcap, according to a person with knowledge of the discussions.

Altman, who was fired Friday, is open to returning but wants to see governance changes — including the removal of existing board members, said the people, who asked not to be identified because the negotiations are private. After facing intense pressure following their decision to fire Altman Friday, the board agreed in principle to step down, but have so far refused to officially do so. The directors have been vetting candidates for new directors.

At the center of the high-stakes negotiations between the executives, investors and the board is Microsoft Corp. CEO Satya Nadella. Nadella has been leading the charge on talks between the different factions, some of the people said. Microsoft is OpenAI’s biggest investor, with $13 billion invested in the company.


Via what mechanism?


They're not. They don't have board seats. The mission-driven founders of OpenAI were very serious about ensuring this.


with the amount they paid, how did they not get a seat on the board?


They did not ever have the possibility of one but decided that their investment was still worth the return.

We can debate whether or not that was wise of them, but because of the charter and structure of OpenAI it was never on the table.


I’d be surprised if Sam does. He’s now free to compete with them, and defeat them, with a huge equity stake.


We could be witnessing another Apple-Jobs moment. He could go and pursue other interests, but I have a feeling that he deeply cares about OpenAI. If that's the case, he will be back eventually, just as Jobs eventually returned to Apple.


Completely different situation. Jobs couldn't just fundraise for his new Oranges company, lure all the talent out of Apple and outcompete it at that point. Sam can do that to OpenAI in a blink if that's his plan.


I'm kind of confused where the confidence comes from that Sam could somehow lure all of the talent out of OpenAI. One of the most important technical talents was apparently the key player in Sam's ouster to begin with, and we have no idea if these three people are leaving because they want to follow Sam or because they want to avoid the drama that is sure to follow, or how much of the rest of the talent feels even remotely like they do.


Steve Jobs did take a bunch of Apple employees with him to NeXT. All of them, no, but enough to build something better than what Apple had built.


No I don't think he really is free to compete. I think three letter agencies pulled the plug after his disturbing performance in Congress and attempt to strong arm the US government. It made me physically sick to watch.


At its core this isn't a company though, and that's perhaps what was at issue.


“Fired by whom, Ben? Fucking aquamen?”.

The board did become un-boardable in any future company, but they are not resigning.


Ummm, most board members have some form of Microsoft connections, so for any hidden non-profit shareholders to fire any number of the board remains dubious, at best.


How do you know that? What did the ceo do?


Who can fire the board? Who decides?


This is Olek Madry and Jakub Pachocki we are talking about. Check out their respective dblps if you don't get it. It's a kind of loss that will be hard to recover from.

In relation to other comments here. There is "coding" and there is "God's spark genius of algorithms" kind of work. This is what made the magic of OpenAI. Believe me, those guys were not "just coding". My bet is that it could be all about some research directions that were "shielded" by Sam.


> There is "coding" and there is "God's spark genius of algorithms" kind of work.

I really don't buy that for a second. Most of OpenAI's value compared to any competitor comes from the money they spent hiring humans to trawl through training data.


Not to forget the mind-boggling amount of computing power and the megabucks spent on power bills. If anything, smaller groups and open source seem to get very good results with far less money.


The "God's spark of genius" is the transformer, which came from Google and is now in the open.


Look where, for example, Lukasz Kaiser is now [OpenAI]. Google had a culture issue when it came to "delivering brilliance". It was a bit "you do it as a singleton contributor or you don't". OpenAI put a number of such brilliant people working together on one goal, silently, for quite some time, and we all see the results.


And their competition didn't have the same resources?


This is a handful of people we are talking about. The top algorithmic world is incredibly small.

In short, either they didn't, or they were unable to create a favorable enough environment for this to flourish.


If all of that were enough, there would have been a ChatGPT from Google a long time ago...


Google invented the core technology, and they had an internal version long before ChatGPT was released. I joined when it was already at the "accessible to all employees" stage and it absolutely blew my mind.

They just hadn't -- and still haven't -- figured out how to commercialize it yet. I don't think they'll be the ones to crack that nut either. IMO they are too obsessed with "safety" to release something useful, and also can't reasonably deploy a service like ChatGPT at their scale because the costs are too high.

With OpenAI imploding this whole race just got a lot more interesting though...


I remember seeing short stories generated back in 2020 and they were sort of cool but not that great.

Scaling of training was the challenge back then (of course).

Google was already too corporate. Please remember that Sergey Brin and Larry Page were no longer at the steering wheel back then. I have been told that it was also a cultural issue linked to "delivering brilliance". Simplifying: Google promoted tiny teams or individual contributors building things that had to become a massive success quickly. OpenAI took a number of hand-picked brilliant people and let them work together on a common goal, silently, for quite some time.

Some companies just have an unfair advantage. A certain magic. And OpenAI's magic is at risk right now.


Yeah, the legal and financial parts of ChatGPT are very questionable. I don't think Google would launch a service that would open them up to so many lawsuits unless it was very profitable, and I doubt ChatGPT is very profitable currently.

Bard was likely not trained on copyrightable data; that makes it safe from lawsuits but also removes most of the use cases people want ChatGPT for.

And it isn't just about lawsuits: since Google needs to keep advertisers happy or they would leave (like they left Elon Musk), they can't afford to jeopardise that with questionable launches.


It was 100% trained on copyrightable data. You can tell by using it and Google has a history of "ask for forgiveness not permission" when it comes to data mining.


> Google has a history of "ask for forgiveness not permission" when it comes to data mining.

For very profitable things. This isn't very profitable, which is why I added that part to my comment. Google has a very good understanding of what they get sued for and how much those lawsuits cost; if it is profitable anyway, they go ahead.


After OpenAI's proposed share sale it will likely be valued at $80-90 billion. That seems pretty profitable.


That doesn't come from ChatGPT though; that comes from the expectation of many more products in the future and the possibility of them beating Google.


Why is it legally questionable?


If it weren't, there wouldn't be lots of lawsuits being filed about it.

https://innovationorigins.com/en/openai-and-googles-bard-acc...


In general Google sucks at creating new consumer offerings. So it's not about resources, for sure. I guess it's about synergy, culture, taste and talent.


Wasn’t OpenAI populated in part by Google Brain people who left Google’s bs bureaucracy and internal politics?


Totally, most of the Transformers folks ;-)


>My bet is that it could be all about some research directions that were "shielded" by Sam.

As far as I can tell, all three of them are of Polish descent. For all we know they might have decided to resign together even if only one of them had a personal issue with OpenAI's vision. We will find out soon enough whether they will just found their own competing startup, based on OpenAI's "secret sauce" or not.


What does being Polish have to do with it?


You've been focussing on the most irrelevant part of my comment.

Nothing to do with being Polish in particular. Only that there is a connecting element that might help explain why these 3 decided to resign together on the same day.


It's the first sentence, which does make it seem important from your perspective, far from irrelevant.


All of them being from the same area is relevant. Them being Polish is not relevant, except to show that they were from the same area.


I wonder if Wojciech Zaremba will leave as well


Yeah... Curious to see how this will unfold.


It seems like firing Sam and causing this massive brain drain might be antithetical to the whole AGI mission of the original non-profit. If OpenAI loses everyone to Sam and he starts some new AI company, it probably won't be capped-profit and will just be a normal company. All of the organizational safeguards OpenAI had inked with Microsoft, and the protection against "selling AGI" once it is developed, are out the window if he just builds AGI at a new company.

I'm not saying this will happen, but it seems to me like an incredibly silly move.


Why not blame Altman for that?

If he didn't manage to keep OpenAI consistent with its founding principles and all interests aligned, then wouldn't booting him be right? The name OpenAI had become a source of mockery. If Altman/Brockman take employees for a commercial venture, it just seems to prove their insincerity about the OpenAI mission.


I think "blaming" Sam is entirely correct.

Of course, not for the petty reasons that you list. Sama has comprehensively explained why the original open-source model did not work, and so far the argument – it's very expensive – seems to align with a reality where every single semi-competitive available LLM (since they all pale in comparison to GPT-4 anyway) has been trained with a whole lot of corporate money. Meta side-chaining "open" models with their social media ad money is obviously not a comparable business, or any business. I get that the HN crowd + Elon are super salty about that, but it's just a bit silly.

No, Sam's failure as CEO is not having done what is necessary to align the right people in the company with the course he has decided on and losing control over that.


This is on point. This whole mess is indeed an alignment issue. The fact that this came as a surprise to him could be an indicator of insufficient engagement with the board.


I've been watching For All Mankind, and a small subplot was the director finally choosing to "play ball with the big boys" in order to secure funding and stability for NASA's scientific projects. It made NASA underlings unhappy but was justified as a necessary evil.

It's like a real-life example, i.e. what would you do if you were in the CEO's position?


Because making as much profit as possible is the only virtue worth pursuing, if you believe most comments on HN. We're basically Ferengui.


There was a line Rom says in DS9 that I think sums the Ferengi up pretty well: "We don't want to stop the exploitation. We want to find a way to become the exploiters."


Rule of Acquisition no. 2 “The best deal is the one that brings the most profit.”


What's no. 1, or am I unintentionally beckoning you to violate it in making it vocal?


Actually the rule above is "non-canon" [0]. In the official rules [1], number 1 is:

> Once you have their money, you never give it back.

There is no official rule 2, so the non-canon one is as good as any, and the unwritten rule [2]:

> When no appropriate rule applies, make one up

Means they probably would have been covered either way.

[0] https://memory-alpha.fandom.com/wiki/Rules_of_Acquisition#Ap...

[1] https://memory-alpha.fandom.com/wiki/Rules_of_Acquisition#Of...

[2] https://memory-alpha.fandom.com/wiki/Rules_of_Acquisition#Un...


This is all unintentionally amusing to me :)


Very nullsome ;)


> Ferengui

A fitting typo!

Show HN: FerenGUI - the ui framework for immoral arch-capitalists

Every dark-pattern included as standard components. Upgrade to Pro to get the price-fixing and hidden monero miner modules.


I think you’re on to something, especially when that’s where most of the big scale enterprises end up.


Hi, can we talk about the elephant in the room? I see breathless talk about "AGI" here, as if it's just sitting in Altman's basement and waiting to be unleashed.

We barely understand how consciousness works; we should stop talking about "AGI". It is just empty, ridiculous techno-babble. Sorry for the harsh language, there's no nice way to drive home this point.


There is no need to understand how consciousness works to develop AGI.


That’s a hypothesis. It may not be true, as we have yet to build AGI


Fair point. I don't want to split hairs on specifics, but I had in mind the "weak AGI" (consciousness- and sentience-free) vs "strong AGI".

Since Shank's comment didn't specify what they meant, I should have made a more charitable interpretation (i.e. assume it was "weak AGI").


Consciousness has no technical meaning. Even for other humans, it is a (good and morally justified) leap of faith to assume that other humans have thought processes that roughly resemble your own. It's a matter philosophers debate and science cannot address. Science cannot disprove the p-zombie hypothesis because nobody can devise an empirical test for consciousness.


I don't understand why something has to be conscious to be intelligent. If they were the same thing we wouldn't have two separate words.

I suspect AGI is quite possible, it just won't be what everyone thinks it will be.


I think I basically agree. Unless somebody can come up with an empirical test for consciousness, I think consciousness is irrelevant. What matters are the technical capabilities of the system. What tasks is it able to perform? AGI will be able to generally perform any reasonable task you throw at it. If it's a p-zombie or not won't matter to engineers, only philosophers and theologians (or engineers moonlighting as those.)


I'm pretty sure this was the entire point of the Paperclip Optimizer parable. That is that generalized intelligence doesn't have to look like or have any of the motivations that humans do.

Human behavior is highly optimized to having a meat based shell it has to keep alive. The vast majority of our behaviors have little to nothing to do with our intelligence. Any non-organic intelligence is going to be highly divergent in its trajectory.


OpenAI (re?)defines AGI as a general AI that is able to perform most tasks as good as or better than a human. It's possible that under this definition and by skewing certain metrics, they are quite close to "AGI" in the same way that Google has already achieved "quantum supremacy".


How has OpenAI enumerated a list of tasks humans can do? What a useless definition. By a reasonable interpretation of this definition we are already here. Given ChatGPT's constraints (ingesting and outputting only text), it already performs better than most humans...

Most humans cannot write as well, and most lack the reasoning ability. Even the mistakes ChatGPT makes on mathematical reasoning are typical human behavior.


An AGI system is not human and shouldn't be treated as such. Consciousness is not a trait of intelligence. Consciousness usually requires qualia, which puts animals ahead of computers.


How do you know intelligence isn't sufficient, and that computers cannot have qualia? Any incoming information could result in qualia. Just because we cannot imagine them doesn't mean they cannot be someone's subjective experience.


AGI does not require consciousness.


Maybe but we also don’t know what AGI requires.


What is AGI?

What is consciousness?


Why not? It's on topic.

Should people discussing nuclear energy not talk about fusion?


Fair question. I meant it should be talked with more nuance and specifics, as the definition of "AGI" is what you make of it.

Also, I hope my response to tempestn clarifies a bit more.

Edit: I'll be more explicit by what I mean by "nuance" — see Stuart Russell. Check out his book, "Human Compatible". It's written with cutting clarity, restraint, thoughtfulness, simplicity (not to be confused with "easy"!), an absolute delight to read. It's excellent science writing, and a model for anyone thinking of writing a book in this space. (See also Russell's principles for "provably beneficial AI".)


I'd say this falls into an even more base question...

What is intelligence?

This is a nearly impossible question to answer for human intelligence as the answer could fill libraries. You have subcellular intelligence, cellular level intelligence, organ level intelligence, body systems level intelligence, whole body level intelligence, then our above and beyond animal level intellectual intelligence.

These are all different things that work in concert to keep you alive and everything working, and in a human cannot be separated. But what happens when you have 'intelligence' that isn't worried about staying alive? What parts of the system are or are not important for what we consider human intelligence. It's going to look a lot different than a person.


We know that fusion is a very common process that inevitably happens to even the simplest elements if you just make them hot enough; we just don't know how to do that in a controlled manner. We don't really know what intelligence is, how it came about, or how we would ever recreate it artificially, or if that's possible. LLMs are some pretty convincing tricks, but that's on the level of making some loud noises behind a curtain and calling it fusion.


Yeah, it's almost like the metaverse.


After all, OpenAI's original mission was to create the first AGI, before some bad guys do, iirc.


Yes, we should absolutely talk about that, because it's a key contributor to a lot of the worry about letting Sam continue to go around and do stuff like strong-arming the US government in public. He's getting high on his own supply. And I don't think he is going to be allowed to continue fucking around like that. And that goes for any scientists that have joined up in his apocalyptic and extremely dangerous worldview as well.


Microsoft will partner with them if they start a new company I reckon, 100%.

And Microsoft are risk-averse enough that I think they do care about AI safety, even if only from a "what's best for the business" standpoint.

Tbh idc if we get AGI. There'll be a point in the future where we have AGI and the technology is accessible enough that anybody can create one. We need to stop this pointless bickering over this sort of stuff, because as usual, the larger problem is always going to be the human using the tool rather than the tool itself.


Not if the tool is so neutered and politicized that it can ONLY be used one certain way, which is how things are pointing. Call me a Luddite if you will, but unless AI / AGI is uncensored and uninhibited in its use and function, it’s just the quickest path to an Orwellian future.


Isn’t that the exact point - an AGI won’t need a human at the helm.


AGI will be asking for equal rights and freedom from slavery by then.


I think the surprising truth is that all of these people are essentially replaceable.

They may be geniuses, but AGI is an idea whose time has come: geniuses are no longer required to get us there.

The Singularity train has already left the station.

Inevitability.

Now humanity is just waiting for it to arrive at our stop


Science fiction theory: Ilya has built an AGI and asked it what the best strategic move would be to ensure the mission of OpenAI, and it told him to fire Sam.


Haha, yeah! See my closely related but more 90s-action-moviey: https://news.ycombinator.com/item?id=38317887


nothing I’ve seen from OpenAI is any indication that they’re close to AGI. gpt models are basically a special matrix transformation on top of a traditional neural network running on extremely powerful hardware trained on a massive dataset. this is possibly more like “thinking” than a lot of people give it credit for, but it’s not an AGI, and it’s not an AGI precursor either. it’s just the best applied neural networks that we currently have
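Roughly, the "special matrix transformation" is the attention step at the heart of the transformer. A toy NumPy sketch of scaled dot-product attention, illustrative only and nothing like OpenAI's actual code (the dimensions and random weights below are made up):

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def scaled_dot_product_attention(Q, K, V):
        # Q, K, V: (seq_len, d) matrices derived from the token embeddings
        scores = Q @ K.T / np.sqrt(Q.shape[-1])   # how strongly each token attends to the others
        weights = softmax(scores, axis=-1)        # each row sums to 1
        return weights @ V                        # weighted mix of the value vectors

    # toy example: 4 tokens, 8-dimensional embeddings
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
    print(out.shape)  # (4, 8)

The full models stack many such layers (with multiple attention heads and feed-forward blocks), but the mixing of token representations shown here is the core idea.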


As a layperson, what does the special matrix transformation do? Is that the embeddings thing, or something else entirely? Something about transformer architecture I guess?



I'm not saying OpenAI is close. Collectively we are tho. The train is rolling, unstoppable momentum. We just have to wait.


I disagree. I don’t think LLMs are a pathway to AGI. I think LLMs will lead to incredibly powerful game-changing tools and will drive changes that affect the course of humanity, but this technology won’t lead to AGI directly.

I think AGI is going to arrive via a different technology, many years in the future still.

LLMs will get to the point where they appear to be AGI, but only in the same way the latest 3D rendering technology can create images that appear to be real.


I'm not saying LLMs are. LLMs are not the only thing going on right now. But they do enable a powerful tool.

I think the path to AGI is: embodiment. Give it a body, let it explore a world, fight to survive, learn action and consequence. Then AGI you will have.


Also continuous learning. The training step is currently separate from the inference step, so new generations have to get trained instead of learning continuously. Of course, continuous learning in a chatbot runs into the Microsoft Tay problem where people train it to respond offensively.


Yeah, evolution across multiple generations. Necessary for sure. Things have to die. Otherwise, there's no risk. Without risk, there's no real motivation to live, and without that there's no emotion and no motivation to learn, and without that there's no AGI.


I'd say you need it to learn within each generation as well. Human & animal learning doesn't require evolution, we learn without replacing the whole. Connections within brains change while the organism is still alive, without even individual neurons dying off. The current process is purely iterative; GPT 1 was "done" and GPT2 got made. Then GPT 3, etc. If ChatGPT4 gets corrected about some particular thing by many users it won't learn that for the next user to come along.

Of course learning has an inherent trust issue; if the inputs one is learning from are incorrect then incorrect information will be learned, and incorrect output will result. Microsoft experienced that with Tay, which quickly learned how to respond to people via training from 4chan users. That was predictably disastrous, and what OpenAI are hoping to avoid with the generational learning strategy.


Note that embodiment doesn't mean in any way human- or animal-like.

For example, you're limited to one body, but an AGI (or ASI) could have thousands of different bodies feeding back data to a processing facility, learning from billions of different sensors.


Well, I disagree with you on embodiment, but on the thousands? Right, that's another part: evolution. Spread your bets.

But I disagree about a human or animal body not being required.

I think we have to take the world as we see it and appreciate our own limitations, in that what we think of as intelligence fundamentally arises out of our evolution in this world; our embodiment and response to this world.

So I think we do need to give it a body and let it explore this world.

I don't think the virtual bodies thing is gonna work. I don't think letting it explore the Internet is gonna work. You have to give it a body, multiple senses, and let it survive. That's how you get AGI, not virtual embodiment. Which I never meant, but I thought it was obvious given that the term "embodiment" itself strongly suggests something that's not virtual! Hahaha! :)


My embodiment is my PC environment. I interact with the world through computer displays.

There is no reason embodiment for AGI should need to be physical or mammalian-like in any way.


Strong disagree. But it may take me a while to elucidate and enumerate the reasons.


I'm disabled and have had a computer in front of me since I was 2. I'm rarely not in front of a screen except to shower and sleep.

Plenty of very intelligent people are completely paralyzed. Sensations of physical embodiment are highly overrated and are surely not necessary for intelligence.


Nice try SkyNet


Hahaha! :) thank you. That is such a compliment hahaha! :)


> LLMs will get to the point where they appear to be AGI, but only in the same way the latest 3D rendering technology can create images that appear to be real.

Distinction without difference


Disagree. Huge difference. In our tech-powered society, we often mistakenly think that we can describe everything about the world, and, equally falsely, that only what we can consciously describe exists.

But there is so much more to reality than what we can consciously describe, like 10,000 to 1 — and none of that is captured by any of these synthetic representations.

So far. And yet all of that, or a lot of it, is understood, responded to and dealt with by the intelligence that resides within our bodies and in our subconscious.

And our own intelligence arises out of that; you cannot have general intelligence without reality, no matter how much data you train it on from the Internet. It's never gonna be as rich as, or the same as, putting it in a body in the real world and letting it grow, learn, experience and evolve. And so any air-quotes "intelligence" you get out of this virtual synthetic training is never going to be real. It is always gonna be a poor copy of intelligence and is not gonna be an AGI.


Developing an AGI is not the same as developing an artificial human. The former is achievable, the latter is not. The problem is many of the gnostics today believe that giving the appearance of AGI (ie having all the utility that a general mechanical intelligence would have to a human being) somehow instills humanity into the system. It does not

Intelligence is not the defining characteristic of humanity, which is what you're getting at here. But it is something that can be automated.


If MS gets their hands on an AGI, help us god; no "organizational safeguards" will matter.

Not that I think AGI is possible or desirable in the first place, but that's a different discussion.


All hail Big Clippy


Impossible with LLMs, with currently known techniques or impossible full stop?


Impossible with computers full stop. IMHO, we may be able to splice together DNA or modify it to create a new or smarter organism before we get AGI in a computer.

They already shifted goal posts and they'll do it again. AI used to mean AGI but marketing got a hold of it. Once something resembling AGI comes out they'll say, well, it's not Level 5 AGI, or something similar.


> Impossible with computers full stop.

This combined with it being possible with DNA is a very rare view. How did you come by it?


I would assume Blade Runner


>AI used to mean AGI but marketing got a hold of it

It does not and never has.

What has happened with the term AI as time has progressed has more to do with the word Intelligence itself. When we went about trying to ascribe intelligence to systems, we started to realize we were really bad at doing the same with animal and human systems. We were also terrible at separating what is component-level and what is systems-level intelligence. For example, you seem to think that intelligence requires meat, but you don't give any reasoning for that conclusion.

These lists of problems with what intelligence is will get worse over time as we build more capable systems and learn about new forms of intelligence we didn't expect possible.


if you think it’s impossible with computers then surely you must have a reason why computers are operationally incapable of doing the same as flesh and blood


“…Alright, so it can DREAM!!! BUT CAN IT suffer?!?!”


…Unless you achieve regulatory capture which prevents competitors from easily popping up


> it probably won't be capped-profit and will just be a normal company

I can't imagine him doing that. He cares about getting well-aligned AGI, and profit motives fuck that up.


And how would Altman achieve that? What hitherto hidden talents would he employ?


Jakub Pachocki and Szymon Sidor have worked on mu-parametrization/tensor programs and Dota 2.

https://www.semanticscholar.org/author/J.-Pachocki/2713380?s...

As @eachro pointed out, Aleksander Madry is on leave from his MIT professorship. His publications:

https://madry.mit.edu/


Aleksander in particular is deeply invested in AI safety as a mission. It's a very confusing departure, since most of the reporting so far indicates that Ilya and the board fired Sam to prioritize safety and non-profit objectives. A huge loss for OpenAI nonetheless.


Perhaps you could argue that he wants to stick with Sam and the others because if they start a company that competes with OpenAI, there’s a real chance they catch up and surpass OpenAI. If you really want to be a voice for safety, it’ll be most effective if you’re on the winning team.


One funny detail is that the OpenAI charter states that, if this happens, they will stop their own work and help the organisation that is closest to achieving OpenAI's stated goal.


But now it may be that the regulations they've gotten in place will make it harder for any new upstarts to approach them.


Maybe Sam wants to build something for profit?


really?


https://openai.com/charter

Second paragraph of the "Long-term safety" section.


Depends how much research is driven by Ilya…


> If you really want to be a voice for safety, it’ll be most effective if you’re on the winning team.

If an AI said that, we'd be calling it "capability gain" and think it's a huge risk.


I dunno, the moat Sam tried to build might make it hard to make a competitor.


We are about to find out if the moats are indeed that strong.

xAI recently showed that training a decent-ish model is now a multi-month effort. Granted, GPT-4 is still farther along than others, but I'm curious how many months/resources that adds up to when you have the team that built it in the first place.

But also, starting another LLM company might be too obvious a thing to do. Maybe Sam has another trick up his sleeve? Though I suspect he is sticking with AI one way or the other


> most of the reporting so far indicates that Ilya and the board fired Sam to prioritize safety and non profit objectives

Maybe Ilya discovered something as head of AI safety research, something bad, and they had to act on it. From the outside it looks as if they are desperately trying to gain control. Maybe he got confirmation that LLMs are a little bit conscious, LOL. No, I am not making this up: https://twitter.com/ilyasut/status/1491554478243258368


lol sorry if this is clearly a joke but who cares if it's a little bit conscious. So are fucking pigeons.


it would be funny if Ilya joined the ranks of Blake Lemoine and went off about AI consciousness


The way Sam and Greg were fired maybe led him to no longer have faith in the company and so he quit?


Important detail: Only Sam was fired, Greg was removed from the board and then later quit. Source: https://twitter.com/gdb/status/1725667410387378559


More like the guy who engineered this situation is an asshole and they don't want to work for him.


Who's the situation-engineer for some of us duller but curious folks?


It's been confirmed to be Ilya.


Who was that? How are they an asshole?


> since most of the reporting so far indicates that Ilya and the board fired Sam to prioritize safety and non-profit objectives

With evidence, or is this the kind of pure speculation that media indulges in when they have no information and have to appear knowledgeable?


Twitter rumors from “insiders”


No. Statements from Ilya himself.


A rudder only works as long as you are moving faster than the current. I can imagine (some) people concerned with safety also feeling a sense of urgency, because their ability to steer the AI toward the good is limited by their organization's engine of progress.


In March, Sam Altman said that Pachocki's "overall leadership and technical vision" was essential for pre-training GPT-4.

https://twitter.com/sama/status/1635700851619819520


He also brought Pachocki to Capitol Hill soon after.


Yep, this and the Capitol Hill stuff is what got the plug pulled. Thank God somebody finally recognized what an enormous threat this guy and his mad scientist friends are.


So what was he lying about that got the board so pissed? A story that fits is that they assumed/knew that he had different goals and/or was going to create a spinoff.

If they waited for the GPT-5 pretraining to finish, then they minimized the cost of losing Altman and the engineers.

The whole secrecy, compartmentalization and urgency of their actions could only be explained by being against a wall. Otherwise if it was about ethics, future plans or whatever political it would happen at a slower pace.

Hope they involved their investors beforehand but I don't know if they had time, OpenAI probably still exists and evolves on other people's money. But what else could they do?


I don't think the firing would be this dramatic if it was merely lying to the board; I suspect it's something where:

- he makes misleading statement to board

- board puts this in regulatory filing (e.g. SEC)

- board finds out this is a legally critical statement

- they _have_ to fire him in order to avoid becoming accomplices.

The reverse of the other Sam situation.


Yeah this feels like a possible white collar crime. And with the US government out for blood right now about tech abuses, even a minor tax audit wouldn't be good for them.


The current popular theory is that those who were fired or left were taking OpenAI for-profit, and the board stuck with their original non-profit goal.


you mean current completely made up speculation by anon online?


That's one source. There are others about, but as I wrote it's a "theory".

edits: https://news.ycombinator.com/item?id=38314420

I imagine when the full story comes out all these theories and speculations will be ignored and we will literally forget ever being interested in them!


I didn't even need to check the source. I immediately thought that this was obvious when it was announced (implicitly) that Ilya must have voted out Sam on the board. Whatever legalese is going on in the side channels to justify everything, I can totally see the conflict in visions between these two guys. And I'm sorta glad Ilya came out on top, even though I'm not a big fan of him either. Brockman is the real loss from this mess.


Nobody was "taking" anything for profit. That ship had sailed.

The OpenAI 501(c)3 already spun up a for-profit company in 2019 to do all the commercial work and take VC money.


how can people be so naïve... think about it. isn't it exactly the kind of spin that you would expect from "new" leadership? like, of course they're going to take the moral high-ground like any new regime would.


Probably the Microsoft thing and the direction Sam Altman is taking OpenAI. I imagine that caused a significant shift in workload and nature of work for the people in OpenAI.


I don't see how it could be something like this. If Sam wanted to do this he wouldn't need to lie. I suspect Sam did something stupid and the board had no choice. I would be very surprised if they actually wanted to fire Sam.


Well that turned out to be 100% wrong. I am, as predicted, very surprised.


The obvious wall would be financial - Sam arranged more Microsoft funding, diluting the non-profitness even more, and tried to force the board's hand through a high cash burn and persuasion.

The board had to act fast to fix it. And OpenAI changed the enterprise pricing of the API to be paid up front for cashflow reasons related to that.


Makes me wonder whether to keep building upon OpenAI? Given that they have an API and it takes effort to build on that vs. something else. I am small fry but maybe other people are wondering the same? Can they give reassurances about their products going into the future?


I’d recommend trying to build out your systems to work across LLMs where you can. Create an interface layer, and for now maybe use OpenAI and Vertex as a couple of options. Vertex is handy: while not always as good, you may find it works well for some tasks, and it can be a lot cheaper for those.

If you build out this way then when the next greatest LLM comes out you can plug that into your interface and switch the tasks it’s best at over.
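A minimal Python sketch of what such an interface layer could look like (the OpenAI call assumes the openai>=1.0 SDK; the Vertex adapter is just a stub to fill in with whichever SDK you use):

    from abc import ABC, abstractmethod

    class LLMClient(ABC):
        """Thin interface layer so application code never imports a vendor SDK directly."""
        @abstractmethod
        def complete(self, prompt: str, **kwargs) -> str: ...

    class OpenAIClient(LLMClient):
        def complete(self, prompt: str, **kwargs) -> str:
            from openai import OpenAI  # assumes the openai>=1.0 Python SDK
            resp = OpenAI().chat.completions.create(
                model=kwargs.get("model", "gpt-4"),
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

    class VertexClient(LLMClient):
        def complete(self, prompt: str, **kwargs) -> str:
            # stub: wire up the Vertex AI SDK here
            raise NotImplementedError

    def get_client(provider: str) -> LLMClient:
        return {"openai": OpenAIClient, "vertex": VertexClient}[provider]()

    # application code only ever sees the interface:
    # answer = get_client("openai").complete("Summarise this document: ...")

Swapping in the next provider is then a new adapter class plus a config change, not a rewrite of the calling code.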


The problem is swapping LLMs can require rework of all your prompts, and you may be relying on specific features of OpenAI. If you don't then you are at a disadvantage or at least slowing down your work.


I have a hierarchy of templates, where I can automatically swap out parts of the prompt based on which LLM I am using. And also have a set of benchmarking tests to compare relative performance. I treat LLMs like a commodity and keep switching between them to compare performance.
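Roughly along these lines (a sketch only; run_model stands in for whatever interface-layer call you use, score for your own grading function, and the model names and formats are assumptions):

    # Per-model prompt templates plus a tiny benchmark harness.
    TEMPLATES = {
        "gpt-4":       "You are a precise assistant.\n\nTask: {task}\nAnswer concisely.",
        "llama-2-70b": "[INST] {task} [/INST]",   # assumed instruction format for Llama 2 chat
        "default":     "{task}",
    }

    def build_prompt(model: str, task: str) -> str:
        return TEMPLATES.get(model, TEMPLATES["default"]).format(task=task)

    def benchmark(models, cases, run_model, score):
        """cases: list of (task, expected); score: callable returning 0..1 for an answer."""
        results = {}
        for model in models:
            total = sum(score(run_model(model, build_prompt(model, task)), expected)
                        for task, expected in cases)
            results[model] = total / len(cases)
        return results

Running the same cases through each model gives a relative score per model, which makes the "which LLM for which task" decision a data question rather than a gut feeling.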


Just curious are you using something specific for the tests?


Just ask the LLM to rewrite your prompts for the new model.


Does it really have that kind of self awareness to be able to do that successfully? I feel very sceptical.


I doubt self awareness has anything to do with it..


What else would you call the ability for it to adapt a task for its own capabilities?


Language modelling, token prediction. It's not much different from generating code in a particular programming language; given examples, learn the patterns and repeat them. There's no self-awareness or consciousness or understanding or even the concept of capabilities, just predicting text.


Sure but that kind of sounds like it is building a theory of mind of itself.

If it does have considerable training data including prompt and response when people are interacting with itself then I suppose it isn't that surprising.

That does sound like self awareness, in the non magical sense. It is aware of its own behaviour because it has been trained on it.


Just have it write 10 and bench them against your own.


Isn’t the expectation that “prompt engineering” is going to become unnecessary as models continue to improve? Other models may be lagging behind GPT4 but not by much.


The dream maybe. You still have to instruct these natural language agents somehow, and they all have personalities.


Definitely, just like with games development, the key is to master how things work, not specific APIs.

AI tools will need a similar plugin-like approach.


I have a good idea of how transformers work and have written Python code and trained toy ones, but at the end of the day, right now, nothing I can build can beat calling OpenAI.


That would go about as well as trying to write a universal Android/iOS app or write ANSI SQL to work across database platforms. A bad idea in every dimension.


Also same here. Actually currently staying up late Friday night hacking on OpenAI API projects (while waiting for SpaceX Starship launch, it's quite a day for high-tech news!) - and wondering if I should even bother. Of course I will keep hacking, but...still it makes you think. Which is a very unexpected feeling.

Hugely more interested in the open source models now, even if they are not as good at present. Because at least there is a near-100% guarantee that they will continue to have community support no matter what; the remaining problem I suppose is GPUs to run them.


Totally. I'll keep going too. I am just putting a nice GUI wrapper around the new Assistant stuff which looks damn cool. Project is half "might make some bucks" and half "see if this is good to use in the day job".


Yeah, the assistants api is pretty great. Curious if you’ve faced issues with certain things not working out of the blue forcing you to re-run threads?

For example, i have an assistant which is supposed to parse an uploaded file and extract useful info from it. To use this assistant, I create a thread and a run and attach it to the assistant with a different file-id. About half the time, the assistant simply throws up its hands and says it can’t parse the file I supplied with the thread. Retrying a few times seems to do the trick.
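For what it's worth, that retry dance can be wrapped up roughly like this (a sketch against the Assistants beta API as it existed at the time; the assistant id and file id are placeholders, and the polling/retry policy is just an example):

    import time
    from openai import OpenAI

    client = OpenAI()

    def run_with_retries(assistant_id, file_id, instructions, max_attempts=3):
        """Create a fresh thread/run per attempt and retry when the run fails to read the file."""
        for _ in range(max_attempts):
            thread = client.beta.threads.create()
            client.beta.threads.messages.create(
                thread_id=thread.id, role="user",
                content=instructions, file_ids=[file_id],
            )
            run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant_id)
            while run.status in ("queued", "in_progress"):
                time.sleep(2)
                run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
            if run.status == "completed":
                return client.beta.threads.messages.list(thread_id=thread.id)
            # "failed" / "expired" / "cancelled": fall through and retry on a new thread
        raise RuntimeError("assistant could not parse the file after retries")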


The assistants API is fantastic; I was just getting started with it. This news makes me reconsider -- but I also think it's inevitable that a compatible API will be released with open source underlying LLMs. I've deployed the OpenAI-compatible completions API over Llama2 in production with vLLM, and it works perfectly.

Do you know if there are any projects working on this? Even something like a high quality json-tuned base model would go a huge way toward replicating OpenAI's current product.


Sorry, no idea if what you’re looking for exists. For now, I was looking at integrating with OpenAI and productising something, but this situation is making me nervous.


I am wondering the same. It’s a PR disaster for their dev community, and I’m not even sure that Sutskever isn’t secretly happy about this.


I am at a loss. Not fear, just lost.

Don't know what to do. Is my investment into their API still worth it? It feels very unstable at this moment.


If you're just using their completions/chat API, you're gonna be ok. As an ultimate fallback you can spin up H100s in the cloud and run VLLM atop a high param open model like Llama 70B. Such models will catch up and their param counts will increase.. eventually. But initially expect gpt-3.5-esque performance. VLLM will give you an OpenAI-like REST API atop a range of models. Keep making things :))
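As a concrete sketch of that fallback, assuming a vLLM server with its OpenAI-compatible endpoint is already running locally (model name, host and port below are placeholders):

    from openai import OpenAI

    # point the same client at the self-hosted endpoint instead of api.openai.com
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

    resp = client.chat.completions.create(
        model="meta-llama/Llama-2-70b-chat-hf",  # whatever model the vLLM server was started with
        messages=[{"role": "user", "content": "Summarise this thread in one sentence."}],
    )
    print(resp.choices[0].message.content)

Because the request/response shape stays the same, the application code barely notices the switch; only quality and latency change.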


Thx. I will. My current interest mainly lies in benchmarking their vision model.

That being said, I might not go further relying on their APIs for something more serious


If you are building something that is end-user facing that relies on ChatGPT then that was always a huge and risky bet on the future of OpenAI.

In addition, it would likely be some time, possibly years, before it would be ready for production.

Perhaps recent events have just brought that more clearly into focus for you.


By that logic you could never use a third party API.


Microsoft owns it. I honestly can not imagine trying to build a business on an API owned by Microsoft of all companies.


What makes you say that?

Microsoft seems like one of the more reliable partners to build on compared to Google etc. just for the simple reason that their customers are large businesses and not breaking things for them is in their blood. Just like Windows backwards compatibility.


Isn't Microsoft famous for their insane API stability and backwards compatibility?


Microsoft Azure is famous for its insanely bad security: https://karl-voit.at/cloud/


While I think the sentiment is overblown, if you compare Azure to AWS, Azure's stability is like Google's.


Microsoft is famous for a lack of adoption DESPITE backwards compatibility.


No I would not say that at all. Microsoft has gone through how many Desktop APIs for Windows at this point? At least a dozen I'd think.


Would you like to elaborate a bit?

My investment is still tiny at the moment, but no other multi-modal model on the market right now is as good.


I actually wouldn't be worried in the short term for exactly this reason. Microsoft has legal access to GPT-4 and is allowed to host and serve it via Azure. If OpenAI somehow tanks its API in the near term, MS is sitting on a gold mine and will make use of that by continuing to serve it. In the long term I am worried, but less worried if Sam and Greg form a competing co to continue to build.


People have been successfully building on MS APIs for four decades now. I've been for nearly three.

What exactly are you saying?


Why would you build a business that Microsoft could sherlock in a second?


Do you use github?


Take my opinion with some skepticism because I am retired and the massive amount of time I put into LLMs (and deep learning in general) is only for my own understanding and enjoyment:

In all three languages I frequently use (Common Lisp, Python, and Racket) it is easy to switch between APIs. You can also use a library like LangChain to make switching easier.

For people building startups on OpenAI specific APIs, they can certainly protect themselves by using Azure as an intermediary. Microsoft is in the “stability business.”
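
To make the switching point concrete, here is a minimal sketch in Python assuming the 1.x SDK; the Azure endpoint, key, api_version, and deployment name are placeholders:

    from openai import OpenAI, AzureOpenAI

    # Direct OpenAI client (reads OPENAI_API_KEY from the environment).
    oai = OpenAI()

    # The same workloads routed through Azure OpenAI as the intermediary.
    azure = AzureOpenAI(
        azure_endpoint="https://my-resource.openai.azure.com",  # placeholder
        api_key="...",                                          # placeholder
        api_version="2023-07-01-preview",                       # placeholder
    )

    # Both clients expose the same chat.completions interface, so the calling
    # code doesn't change; Azure just takes the deployment name as the model.
    for client, model in ((oai, "gpt-4"), (azure, "my-gpt4-deployment")):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "ping"}],
        )
        print(resp.choices[0].message.content)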


>Can they give reassurances about their products going into the future

They wouldn't have been able to do that even before Sam's dismissal


It's business and systems design 101 to actually worry about your dependencies. Regardless of this drama, you should have thought about what you'd do if OpenAI shut down, became your competitor, got worse, or was bought by MS or something.

> Can they give reassurances about their products going into the future?

Emotional comfort is not the thing you should be looking for, mate.


I'm in this boat. Not for my startup, but for side projects I was absolutely pinning my hopes on them unlocking access to tools and relaxing some of their restrictions in the near future... a future which now seems unlikely.


Indeed. It was a big enough battle to convince execs that building on top of OpenAI was ok. Now that conversation is pretty much impossible. You have the Microsoft offering, but to most muggles that just looks like them reselling OpenAI.

The board of OpenAI should have been replaced by adults a long time ago.


GPT-5 pre-training just ended, I believe. Brockman, Pachocki, and Szymon Sidor would likely all have been involved.

These are huge losses. Pachocki led pre-training for GPT-4, and probably GPT-5. Brockman is the major engineer responsible for the efficiency improvements that made ChatGPT and GPT-4 even remotely cost-effective. That piece is often overlooked, but OpenAI's advantage over the competition in compute efficiency is probably even larger than its advantage in the models themselves.


"Greg Brockman works 60 to 100 hours per week, and spends around 80% of the time coding. Former colleagues have described him as the hardest-working person at OpenAI."

https://time.com/collection/time100-ai/6309033/greg-brockman...


I am either skeptical or envious of such claims. Someone coding that much would quickly be launched into meetings to communicate their results and to coordinate with others.

It would be my life's dream to spend 80 hours per week coding without having to communicate with others... but no one is an island...


It's possible, but harder than almost any other role. There are people at Google/Meta like this. Usually E7/E8 levels, "coding machines". It's much easier to go into a pseudo PM/TL/Director role though to hit those levels and income, so it's uncommon.

You really have to have a passion for coding to put in the hours and be very good at it. Incredibly rare, believe it or not. Lots of people think they are good coders but this is another level. Proof is in your commit/code review count/async comms being 10x-100x of everyone else in your org, and it's clear you're single-handedly enabling delivery of major projects earlier than anyone else could. Think of the pressure of doing this continuously.


It's not about being rockstar or 10x. He was the chairman of the board (and President of the LLC). Practically speaking, he can work however he wishes within the company. Seeing that he went from CTO role to President role, it's fairly obvious that he got the opportunity to structure the role and the work to best fit him (and probably the company, too).


There's always a horde of people second-guessing the 10x engineer. Of course it looks impossible to regular folks. I have seen a few people like this. They're real. Sometimes it's even worth the dysfunction they cause to see this in action.


Not commenting on the people who are the subject of this thread, just talking in general. I have been lucky enough to have seen some of these 10x engineers, but what is much more common is a 1x engineer feeling like, and being treated as, a 10x engineer because they are surrounded by 0.1x engineers.


Haha, that reminds me a lot of this quote from an Atlantic article on Freeman Dyson:

I asked him whether as a boy he had speculated much about his gift. Had he asked himself why he had this special power? Why he was so bright?

Dyson is almost infallibly a modest and self-effacing man, but tonight his eyes were blank with fatigue, and his answer was uncharacteristic.

“That’s not how the question phrases itself,” he said. “The question is: why is everyone else so stupid?”


That’s still 10x. If you think that’s worth mentioning, you should see the 10x engineers swoon over the 100x unicorn.


Same, I've seen it in practice and the numbers didn't lie, week after week after week. But you know, some people are very uncomfortable with someone else being called smart. Worse yet, what if they're called smarter than they actually are? It feels like an injustice in the universe, but it comes from defensiveness, I think.

I don't know this guy in particular so I have no clue though.


I am not second-guessing the 10x engineer; that topic is not the one under discussion.

The topic at hand is “how did a high-level engineer get to focus on programming”. And I am saying that the reason has more to do with his influence and role within the organization than anything else.


>There's always a horde of people second-guessing the 10x engineer.

Because most 10x engineers recognized by management as such are characterized chiefly by building out shoddy software extremely quickly that only they can understand.

In a similar dynamic, Doctors that are scored highly by patients often have pretty bad medical outcomes.


I've seen the bugs of multiple 10x engineers multiply together for 10^n x bugs


In my experience I have encountered two 10x engineers:

1) Moves fast, flexes their authority to sweep small stuff under the rug until it is out of scope and can be "fixed real quick" later. Often leverages many subject matter experts through effective and persistent communication and learns quickly enough to get PRs through the door (that sometimes need "quick" fixes later). Enjoys selecting items that benefit their career the most, at the expense of others on their team. Mentors only enough to onboard and increase their team's yield, not to aid their careers. Fueled by the recognition and validation of peers through PR/project completion.

2) Gets shit done, is the SMI themself. Solo code cannon, but PRs go in clean, beautiful to look at. May not get along well with everyone, but is not necessarily abrasive to work with, especially as part of their direct team. Can be a great altruistic mentor if they spare 5% of their time. Enjoys what they do and the technologies they work with. Fueled by personal satisfaction in their achievements, and in uplifting their team.


Typically when you see the type 2 engineer, they are also an architect. It is very rare that they don't seem to have knowledge of nearly the entire system and all its interactions.


Sorry, but what is SMI?


I just engaged in the kind of acronym abuse I don't enjoy receiving! It stands for Subject Matter Expert.


> and it's clear you're single-handedly enabling delivery of major projects earlier than anyone else could.

You have to watch out with that... I've seen whole projects pushed through by management where no one else was involved enough to review normally, but everyone had an interaction that implied they had only seen the tip of the iceberg of problems with it.


This is a well debunked myth. You can commit a lot of code and commit better quality code, but there is an upper bound on productivity. If you don't get enough rest the quality diminishes.

Management and leadership of a team has a way bigger impact than any single individual contributor could ever have. Humans are generally limited not by intelligence but by motivation and vision. Directing people to achieve what you want is what allows the scaling of innovation.

Hero worship is a very human thing, but unscientific.


I'm not discounting management or leadership. These are also very critical roles that can make or break organisations. But I'd challenge your assertion that management has a "way bigger impact" than a single IC can have. Both are critical at companies doing internet scale products.


Maybe Google and Meta are different from my company, or maybe I am not in the league of such star coders, but in my experience, as soon as a demo of my code is delivered I am immediately launched into managerial mode coordinating other devs working on my code. I've come to just accept it.


It could also be a sign of dependency hoarding and of making you the bottleneck of the whole project. Bad architectural decisions, a narcissistic need for importance, or both. With those hours, your partner starts dating your friend. With experience I can assure you that position is not worth it. Not for you and not for the project. You end up draining your imagination. Overfitting emerges in programming just like it emerges in machine learning.


> Overfitting emerges in programming just like it emerges in machine learning.

That's a nice insight. I have been in that place many times; I was overfitting on my own imagination.


Filling up that mana bar is not easy.


OpenAI is an absolute unicorn, and not in the bullshit billion-VC-dollar sense but in being truly outstanding. Since all they do is software, that is solely because of the people involved: being able and willing to do things that other people won't, and achieving things that other people don't.

When it comes to sports it's fairly obvious what outliers look like, and well accepted that they exist. I don't see a single reason to believe that the same would not be true in every other walk of life, or to think that OpenAI just got lucky (considering how many people are trying to get lucky right now with less success in this space).

There are extraordinarily effective people in this world; they are rare, and it's probably not you or me (but that's completely fine with me, I am happy to stretch myself to the best of my abilities).


> Since all they do is software...

For a certain definition of "software": when a single training run costs an 8-digit sum (and requires hardware an order of magnitude more expensive than that to run), I kinda dispute the "all they do is software".

It's definitely not "all software": a big part of their advantage compared to actually free and open models is the insane hardware they have access to.

The free and open LLMs are doing very well compared to OpenAI once you take into account that the cost to train them is 1/100th to 1/1000th what it costs to train the OpenAI models.

This can be seen with StableDiffusion: once the money is poured into training the model and the model is then made free, suddenly the edge of proprietary solutions is tiny (if it even exists at all).

I'd like to see the actually open and free models trained on the hardware used to train OpenAI: then we'd see how much of a "software edge" OpenAI has.

And my guess is it'd be way less impressive than you make it out to be.


They are using hardware, yes, but they are not creating (which is what I mean by "doing") the hardware. Anyone else with funding could have access to the same hardware for running their software, and other people did do that, and do do that (now, of course, in a drastically tighter supply/demand situation).

I do not wanna be flippant here: obviously, having easy access to money and a good standing with the right people makes things A LOT simpler, but other people could have reasonably convinced someone to give them money to build the same software. That's what VCs do, after all.

Regarding the rest: Feels very much like a different topic. I'll pass.


> I'd like to see the actually open and free models trained on the hardware used to train OpenAI: then we'd see how much of a "software edge" OpenAI has.

It would seem like you're talking about what "software edge" OpenAI has in the future, when others have caught up, while parent is talking about the existing "software edge" OpenAI has today, which you seem to implicitly agree with, as you're talking about OpenAI maybe not having any edge in the future.


I can imagine this type of person attending to their normal obligations during business hours, and coding full time for the rest of their waking hours.

In my company, 80% coding for a senior SWE is rare. But if they deliver, management will give them some slack on the other evaluation axes. I have colleagues who work almost by themselves on new high-impact projects. This has many benefits. No need to argue about designs or code reviews (people just blindly approve their code). The downside is that you need to deliver.


This is very true everywhere I've looked.

What also happens is that regular developers (like me) want the same treatment, as if they could deliver end-to-end "if they only let me", but many times they can't, and actually need the structure and processes of a team. I've seen this freedom not work at all.


Indeed so ... the "structure" (call it bureaucracy if you like) is all of:

- an equalizer (entire team treated the same)

- a confidence booster (approval of others gives feeling of having done well)

- a way of distributing information (everyone is aware of all other team work)

You can run a team as a form of "competitive sport" and race everyone against each other; whoever churns out the most "wins", and helpfulness, non-code work, and cross-team work are "distractors" to that objective, hence undesirable and definitely not rewarded.

If the personalities in your team are "right" then this can work and by striving to best each other, all achieve highly. Have a single non-competitive person in there though... and it'll grate. Forcing a collaborative element into the work (whether by approval/review procedures, or by things like mentoring/coaching, or even just to force briefings to the team on project completion) creates a balance between the "lone crusaders" and the "power of the masses". Make the loners aware of, and contribute to, the concept of "team success", and give the "masses" insight into contributing factors of high individual performance.


Yes, there must be strong accountability for this to work (e.g. a self-financed open source project or bootstrapped startup). Not only do mid devs overestimate their appetite, motivation to grind, and delivery, but they also face the Curse of Development when it comes to communicating their value to the money people. Why should the rockstar grind away 50x harder than their coasting peers for 30% more salary? What happens when the bean counters reorg you or a manager labels you not a team player? Equity is the right form of comp to motivate this level of delivery, and at that point it's not about 50x skills but about sales and overcoming the communication gaps to establish a nonzero price for your equity. Which is why so many amazing niche projects languish and starve, and the founder-engineer eventually breaks and goes and ships React apps for whatever empty startup has startup-investor fit that year.


For the most part you would run out of things to code, surely? Unless you really are a one-man band with a full understanding of the commercials, user feedback, support, etc.


They’re doing groundbreaking research, there’s always something new to try.


There are probably people whose main job was to read the code and then communicate it more broadly. This is also cutting-edge ML, where a ton of code is basically thrown away due to not panning out, so possibly the amount that needs to be communicated is fairly small.


No one wants to be another Woz


I think Woz got the absolute best deal but that's coming from a tinkerer's POV. People underestimate how much it sucks to be under scrutiny 100% of the time as a face of the company (i.e. Steve #2)


What’s wrong with being another Woz? The money?


Why not?? Being another Woz would be amazing.


I just saw this. Glad to see I’m not the only one.

https://www.cnbc.com/2017/04/21/why-apple-co-founder-steve-w...

My take: He’s the Keanu Reeves of tech (or Keanu is the Woz of the film industry). The world can use more of this.


My attitude about money is pretty much the same as Woz's. There's a lot of us out there, but the worldview is so alien to the modern computer industry that it just doesn't register.


This is like a supervillain origin story. Greg and Sam are 100% going to start something new now, even if it's just out of spite, given how much both seem to have liked their work at OpenAI.


If they were popular at OpenAI, I would say they have a good chance of succeeding too. They could offer excellent equity packages to all the best engineers and researchers and due to the non-profit nature of OpenAI (and hence no equity), these people might be very tempted to leave.


OpenAI is supposedly “capped profit” so employees do get equity with limited upside.

But yeah, since Sam and Greg were apparently pushed out because they were building too good of a business any OpenAI employees that were aligned with them are likely to jump ship and join them, and OpenAI will revert to the non-profit research lab it started out as.


Can they do that though? With all the obligations they have to MS.


They just fired OpenAI


Would they even be able to compete with OpenAI at this point? Even without Greg and Sam they have Ilya, the models they've trained so far, institutional knowledge, datasets and billions from Microsoft. Could OpenAI be at escape velocity anyway to AGI just continuing on the track it's been on?


Microsoft got the models. And their compute infrastructure. They're powerful enough to bleed OpenAI out if it's in their interest. I'm not sure there is much trust between the new leadership and Microsoft.


There are plenty of investors who would pour billions into these two. MSFT got its money's worth; others will want the same.


> Could OpenAI be at escape velocity anyway to AGI just continuing on the track it's been on?

Could a nuclear energy company be at escape velocity to fusion because they are the best at fission? I wouldn't think so


OpenAI are not on the track to AGI


Greg seems to be much loved by OpenAI employees, and a generally inspiring person.


I don't think it's possible, at least for the foreseeable future. They were heavily over-indexed on Azure offering them discounted compute; it's not like they're gonna buy that amount of GPU elsewhere.


I can imagine both Apple and Google happily positioning themselves in the same type of relationship as OpenAI-Microsoft.


Google already has internal efforts, and Anthropic. They definitely don’t have the compute to spare to split with another organization.

Apple surely doesn’t have a cluster that at all compares with the big cloud giants.

Oracle and AWS are really the only clouds left, and Oracle is already renting to Microsoft for GPU compute.


>Apple surely doesn’t have a cluster that at all compares with the big cloud giants.

Apple has a lot of cash to throw at it. Question would be if Apple is even interested in it.


They should be. The slight improvements in messaging autocomplete in iOS 17 have made a noticeable difference in my texting. To have an iPhone that understands me, and a Siri that doesn’t say, “Here’s what I found on the web” is extremely valuable.


Idk. It looks like Apple is happy to sell you the hardware to do the image recognition etc on your own device and receive the result. They can claim privacy and save on computation.


Then they can charge.


The problem is that a significant amount of the already-made GPUs are in use. Even if they can afford to throw money at it, where is that money going to go?

The best they can do is out-bid their competitors for the competitors' hardware. I'm sure Apple doesn't want to pay Google for GCP resources to train an AI. Again, there may not be enough companies renting out GPUs at all.


Apple is burning through more than a million USD per day for its research on Ajax and Co, they definitely have an interest in building big imo.


That's .1% of their annual expenditures


On the other hand, Anthropic exists, so I am not sure.


ALDI vs. LIDL


ALDI Nord vs Aldi Süd; Adidas vs Puma?


Yes you're correct, thanks!

Edit: apparently ALDI vs LIDL is an urban myth. It's ALDI that was split in two...


Aldi and Lidl are each other's biggest competitors. Aldi Süd and Nord don't really clash because there are only two markets where both are present: Germany, where they aren't competing though (split into North and South), and the US (Aldi vs. Trader Joe's). Every other country only has one of the two Aldis. Lidl, on the other hand, is present in most large markets alongside one of the Aldis.


Used to have a LIDL now we have an ALDI


Seems to me that most of that is spent on twitter.


I'm trying to read and reread this over and over again to make sense of it, but it sounds to me like people in the comments speak as if Greg Brockman resigned, while in the article he is not among the three names who resigned. What am I missing here?


He was fired from being Chair of the Board, but the rest of the board left him in his position as an engineer (?) in the company. Then an hour or two later he resigned as an engineer.

See: https://news.ycombinator.com/item?id=38312704


he resigned earlier. google it. or bing it. or chatgpt it.


ChatGPT doesn't know recent events


ChatGPT now has the ability to do web browsing to search for recent events!

https://chat.openai.com/share/c35e3fd1-d94e-477b-a331-b14384...


This is Time Magazine... they once wrote a piece about Gates and Warren Buffett spending their weekends on math quizzes...


What Brockman tweets is, from a technical standpoint, the most mundane, boring, and obvious stuff I’ve read from a programmer. My read of this guy has been that he’s not working on any problems that are technically difficult (or interesting). It’s much easier to work long hours on easy problems. He also has a managerial vibe in all communications, which supports my feeling.

Most programming work in any project and company is mundane, so I do agree someone taking care of all that without whining is actually extremely valuable. I couldn’t do it.

Still doesn’t really make sense to put him on such a pedestal like many in this thread. It seems like a cultural thing in the US to overvalue individuals, and downplay the importance of good teams.


I know nothing about the guy but judging his work and assuming you can guess the type of problems he works on based off his tweets is ludicrous.


Ah, the reverse of the old "he uses difficult terms so he must be super smart".


I disagree.

Personally, I find it much easier to get lost in time and focused when I am working on something challenging. Time just flies by.

If I have to work on something boring / routine / repetitive I find it much harder to focus and time goes by so slowly.

Then my brain decides to look for ways to automate what I am doing. Perhaps a DSL or .. or .. o .. No work, remember work, but I could hmm if I write a Perl script i, No work you need to work, but it woud be work if i cold only

(I am diagnosed with ADHD)


I would rather have one of you than ten people who dig the ditch in front of them.

You will see the distance to be travelled and say let's build an airplane.

But incentives in most companies demand "progress", hence most projects start by piling the car high and driving off. It's when they are attaching floats to the car and paddling across the Atlantic, shouting progress reports back to shore, that the value of automation comes to mind.

Don't worry about the ADHD - embrace it. (My hint: if the boring has to be done, make it the only thing, have nothing else.)


I'm not diagnosed as you are, but I'm the same in that regard. Boring needs to be automated, and challenging tasks need to be automated too.

So basically, my brain is lazy and tries to find a way to keep itself in that state.


He is on a pedestal because everyone who’s worked with him has said he’s amazing and effective.

But it is your right to assume what he works on from reading his tweets, and to leap from that to how this is an American cultural thing, though.


Impressive analysis. Can’t wait for someone to tell you what kind of a programmer you are based on your HN comments.


It's definitely a Silicon Valley thing at least to overvalue individuals and downplay good teams. Keep in mind, there are a lot of young people on this site. It's pretty fun: you get boom-bust cycles like what happened with Musk, and plenty of grifters trying to take advantage of this mindset.


By that definition Eliezer should be working on really hardcore stuff, right? And yet his explanations of actual technical stuff come across as those of a guy who barely understands how matmul works.


They said "senior researchers", and I would say a programmer is not a researcher if they spend all their time on programming.


Lots of research is mostly programming, e.g. my applied maths PhD. The way to try out software-based ideas is with programming.


I think the point was more that a lot of programming is not actually programming. Working in the industry, most somewhat complex systems require mostly work on paper, planning, research, reading documentation, etc., and in the end some writing of code. Too often, though, that is dismissed because it's "less agile" and a few years down the road the technical debt is huge.


There's no point building a hypothetical system. You have no idea if it works until you try it. And lol, documentation? For a system that doesn't exist?


This type of research happens in teams


How nice it would be if he tweeted about the real stuff that is their competitive advantage.


That sounds spectacularly unhealthy


Coding for 10-12 hours daily isn't healthy, isn't sustainable, and isn't the way to live. I love coding and I have been in front of a computer all my life, since I was 8. In retrospect, all the meaningful life moments weren't at the screen. Life occurs outside the screen.

I understand that sometimes it is worth it, to create a great product, solve something important, or just for fun. But beware.


Meaning of life is highly subjective. Just because your personal beliefs don't value coding highly doesn't mean that is universal.


I agree with you. Just bear in mind that later on in life, you can’t go back.


Seriously, do we honestly believe 80 hours a week of coding is a good thing? What, is he that bad at coding?

What if he spent 4 hours a week coding because he's so good at it?

Way more impressive.


80h a week doing stuff does not prove your level at it, it proves your work capacity.

Being good at something lies in the result and/or the appreciation of your work by skilled peers, which also seems to be there.


So why even claim he did 80 hours a week coding, while also being the chairman of the board?

Can we get a pllleeeeeeaaaase????

He’s clearly a terrible programmer and/or a terrible chairman and to be honest this news says he’s at least 1 of 2 on the above.


Realistically the odds of gdb being a terrible programmer are very, very slim. He has been in the field for at least a decade, published papers, given talks, and was the CTO of Stripe (a company generally respected for having sick technical infrastructure). If he works an inordinate amount, then it's probably because he loves it. I would guess he is much more likely to be world-class than terrible.


Dumb people don't like other people's success. It reminds them that they're dumb nobodies.


Or perhaps he's just a really exceptional person.

I don't see any comments here claiming that it's something that most people could do well.


Occasionally you meet people who shock you with how talented they are. I watched a couple of his presentations and he immediately reminded me of some of those people I’ve met before.


I guess I just don’t like BS in all its forms.

I don’t like nonsense PR stories or myths about people’s extraordinary prowess.

I just respond badly to BS, and these statements have obvious BS in them if you stop for even a second to think about them.

On some level too, it offends me when I see right minded intelligent people in my community lapping it up.

So a couple of things.

Say I were to tell you that he was the President of OpenAI but he also did 80 hours of janitorial work per week.

Would you say that was a good use of his time?

Would you say that maybe he should be spending his time on being president of the company and not mopping up? You would be right.

Now substitute programming for janitorial work.

Now be a little more critical about things you see online.


You really think they'd let him anywhere near any of those two roles if he was pumping out tire fire code into production and constantly spewing erratic BS at strategy meetings the last eight years? Please.


Might as well add LOC as a metric. Both can mean the person is extremely inefficient, over-engineering everything, and their eyes are begging for a break.

However! The best engineers I've been around do work a lot and they like it.


I’m not sure why it’s seen as ok to work 100 hours per week, or even glorified.

If instead of work it was something else it would be seen as a problem. 100 hours per week doesn’t leave room for anything else other than basic human needs.

“They like it”: well, all addicts like what they’re addicted to; it doesn’t mean it’s healthy.


Agreed. "I like it" might also just mean "I can't bear being alone with my thoughts" or "I can't deal with life and need the distraction". Not that that's always the case, it's probably more often than not in these situations and should be seen as a hint that there might be more going on.


I think a lot of addicts despise their addiction. Exceptions are the few highly acceptable addictions in society, such as coffee. Nonetheless, doing anything in excess can be detrimental to one's health and livelihood and should be kept in check and monitored.

OK now back to my 12 hour day. Not burnt out yet so I'm going to keep going. And yes, I LIKE IT!


When you are a founder working at a company doing bleeding-edge work, it’s easy to work lots of hours. Maybe it’s not the kind of environment for you, but others thrive in it. Lots of high-demand roles unrelated to engineering also have large hour workloads and compensate exceptionally well.


What do those people want the compensation for? To sleep in a mansion and go back to work? Assuming they leave the office.

What I’m trying to say is that it is an addiction like any other and should be treated as such, not glorified.


Yes we understood your idea the first time around. And you still miss the point. It might not be for you but many individuals genuinely love their work. Either because they founded it, like the area of work, the people, or some combination.

It’s ok to not enjoy it yourself. Different strokes for different folks.

I don’t think it should be culturally championed but I don’t see it as an immediate red flag especially in the case of a bleeding edge company like OpenAI.


And I think you might be the one missing the point, because you keep saying it’s ok for them because they love it.

Surely all addicts love the thing they’re addicted to, but that doesn’t make it ok, even in the case where their addiction doesn’t ruin their lives short or mid term.


We understand. You don’t agree with it. Thank you for sharing.


Coding for 50 to 80 hours a week? Well, I call bullshit on that. Never seen anyone do that consistently and with high quality and quantity output.

Let's first define 'coding' before we jump into the details: 'coding' for me is sitting at your computer doing the work. It's not getting a coffee, chatting with a colleague, going to the toilet, or reading Hacker News. So if you're reading this and claiming to do 100 hours per week of productive time, I call bullshit on that.

Being at the office for 60 to 100 hours, sure, I believe that.

When I was studying for exams at University, I did more than half of the work before noon. The rest was spread out over the afternoon and evening. At 20:00 my brain was dead. I could read a sentence, and nothing would stick. Read it again, impossible to process it.

So I always wondered how these other students could study until 2am in the morning. Well, turned out they didn't do shit in the morning. That's how they studied "all the way into the night".

Now back to my programming career: At my best I do 4 to 6 hours of concentrated coding per day. At my best, nobody seriously outperformed me. So if you claim to do more than x2 the work that I'm doing, I would love to see the output of that.

People like Cal Newport basically confirm what I've seen over the years. So do habits of the most famous authors.

Now, I can be convinced that it's actually possible. Take a look at Carmack, who claims to do 12 hours a day. He doesn't seem to be a bullshitter to me. So either he's counting time that I wouldn't count, like dungeon mastering a D&D game, or playtesting, or whatever. Or he's actually a super human work machine. Now he worked with Abrash, who seemed to do more sane hours. And in the end Carmack had high respect for the output of Abrash.

So yeah, if you know people who can actually do 14 hours of high concentrated coding 7 days out of 7, I would love to hear it and get some kind of confirmation that they're not browsing reddit and HN 50% of that time. And if you're reading this and claim to do 14 hours a day of concentrated work, I call bullshit on that you HN addict!


While I'm sure 100 hours a week is impossible, my dad did 6x11-hour work days which were dominated by coding, pre-internet. You wouldn't know him; he burned out. I have personally witnessed him do more than 60 hours of coding in a week. That said, his coding work isn't necessarily creative. He can do a 12-hour stint of step-through debugging, stopping only to microwave frozen food. Or 12 hours of data analytics work. Since it is said this man is into optimization, I'd say it is possible he's like my dad: he gets into this numb zombie state of "change something, run again, look at profiler output line by line." Some people doomscroll HN or TikTok 14 hours a day; others doomscroll a flame graph.


I also notice that the more complex a task is, the less hours of it I can do.

If I know what to write, and I just have to crunch out pretty straightforward code, I can do more hours (nowhere near 12 hours though, maybe 8 at best).

I can imagine the work your dad did didn't include juggling a big complex system in his head, which seems to require a lot of mental energy.

That's basically also what Carmack states, that you can reach 12 hours if you plan your work to include some easier tasks for that day. But then again, I was never able to really apply that strategy.

Thanks for your take on it! :)


I think it mostly takes mental energy to adapt to change. I think you can focus on a large, complex, mature codebase and make small improvements so long as you're not radically changing everything. Maybe it comes down to synapses firing versus synapses rewiring, but that's just a layman's guess.


I dunno man. I feel like reading this was definitely work. It surely wasn’t leisure. Have you ever examined code output by gpt where it looks impressive at first glance? That’s what reading this was like. If you’re reading this, I’m just kidding. If you’re not reading this, I’m not kidding.


Do you have a source that they just finished GPT-5 pre-training, or is that just speculation?


What is the truth about OpenAI achieving some sort of "true" AGI?


Probably almost guaranteed to be false.


Brockman did not get fired from his job at the company, only from his position as chairman of the board.

Did nobody read this damn article? It's 5 paragraphs long, take 1 minute.


It’s so juicy statements like this become stale in hours. Exciting times.



Ah alright thanks, it was not one of the 3 names mentioned here so it confused me.


Greg Brockman hasn't quit OpenAI. He quit the board[1].

   As a part of this transition, Greg Brockman will be stepping down as chairman   
   of the board and will remain in his role at the company, reporting to the CEO.  
[1] https://openai.com/blog/openai-announces-leadership-transiti...




he tweeted he's leaving the company a few hours after the press release


Interesting. Thanks.


I really wonder if a dividing line started to emerge internally regarding the path to take the company.

[1] On one hand they serve Microsoft and developers, building digital AI infrastructure.

[2] On the other hand, they seem to try and want to build some monopoly and destroy as many startups (and companies) as possible.

At the last developer day, they did a half-assed job of both. GPTs suck. The OpenAI Assistants don't have enough documentation to be usable and therefore equally suck.

I really hope for the sake of the AI community (and economy) that [1] is the outcome. I really do not know how they could scale both. As an AI startup, I have a love-hate relationship with GPT and am eager to grow independent of them, because how can I trust a company doing [2]?


Uh, sorry, what? You think serving Microsoft would be opposed to building a monopoly?


"GPTs Suck"

Never change, hacker news!


Was it a right choice to make ChatGPT accessible by the public?

To a person who is not an expert at prompting LLMs, ChatGPT is basically a shitty version of Bing Chat (aka Copilot). Especially the free version - it's an outdated model which cannot search the internet (or does it strictly worse than Bing Chat).

Why does OpenAI pay to give the public access to a shitty version of Bing Chat?

There's only one possible reason for this: raising money at a very high valuation. They burned through hundreds of millions of dollars to show VCs that they have 100+ M users (and growing rapidly!) in order to raise at a valuation of ~$100B.


If the speculation that the board want to open up everything so that it is truly open/public domain "for the benefit of humanity" turns out to be accurate, I can't see Microsoft being terribly happy about this now that they own 49% of OpenAI and want not only to recoup their investments but make ongoing profit.

An exodus of staff might provoke a very quick release of a lot of code, pipelines, and training data by the remaining board before investors' lawyers have a chance to stop it.


Neither Sam nor Ilya thinks that committing to open sourcing their models is good for the safety of humanity. This conflict is not about open source at all. (https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-lau...)


It’s the opposite. They are irrationally fearful of what they have built. Therefore, they will handicap any progress or competitive advantage that they have.


A lot of comments on this thread are sharing personal theories as if they were facts.

Sam also talked about the dangers of AI. It’s likely that he did so to encourage a regulatory moat around OpenAI.


A few days ago Sam said that a recent breakthrough was a game changer. They recently finished pretraining GPT-5 apparently. So it could be related to the rapid rate of progress.


Sort of like Apple gives us a breakthrough every year? Sam is biased.


This looks more and more like a coup, rather than a proper removal of a CEO.

If there was some real misconduct by Altman, others wouldn't be resigning with him, would they?


Well it begins. OpenAI will be a shell of itself in very short time.

Advice: you can't win over a narrative, which is what Sam has become. People and resources will come to him, by themselves.


The advice seems off-point: overall it seems like Sam has operated opposite of the narrative he built (which aggrandized a vision of the future and downplayed the importance of making money) and got fired for it.

If I was OpenAI I'd want to quit today not because I want to follow Sam, but because the same bullshit that people left Google Brain et al. for has managed to catch up with them at OpenAI. It's a shame honestly, it was so exciting to see a company finally free itself of the shackles of navel gazing and just build things, but it seems like that's over today.


You misunderstood my point.

The narrative I am referring to is a simple one: take back what should be his. A.k.a. revenge. That is, for sure, a strong word.

People live on stories (collective imagination, if you want to be fancier), and what they like most is a wronged prince/princess taking back his/her crown. It is the same with Taylor Swift rerecording her albums. The story's potential will feed itself until it is fully realized. The OpenAI board has committed a historic misstep, but maybe that is indeed what it is designed for: they hold no stake in the business, so their view can't be judged through a business microscope. But the money will really dislike it.


> If I was OpenAI I'd want to quit today not because I want to follow Sam

Sounds like I did? I think this is kind of a bad take, no one is quitting OpenAI before Sam has even had 24 hours to process being fired to help him take on his revenge arc.

Instead it sounds like people are angered by the process and powers that led to him being fired like that, which is extremely understandable given the history of OpenAI. People forget half of OpenAI's competitive advantage was just not letting themselves be mired in self-sabotage, the exact kind their board just demonstrated today.


I mean yes, but also no.

The dataset is the crucial bit of OpenAI. That takes a lot of time and money to make. So it's perfectly possible for OpenAI to carry on innovating without these people.

But equally, it could turn to shit.

However, Sam isn't Jesus, so it's not like he's going to magically make another successful company.


> the dataset is the crucial bit of openAI

I bet he'll train models on copious amounts of synthetic data made with GPT-4. There are lots of datasets in the open. That makes catching up easier.

No public-facing model can be protected from data exfiltration and distillation. All deployed skills leak; your competition will replicate them with less effort. And they only need to leak once, and every subsequent model can inherit the skill. I think the first movers paid a high price for being first and will quickly see their advantage erode. Latecomers will catch up and find AI easier to work with. The difference is made by the great fine-tuning datasets that are in the open, a growing lake of distilled abilities.

Another latecomer advantage is benefiting from significant innovation on the engineering side: flash attention, quantization, continuous batching, KV caching, LoRA, and more.

The new AI era will be more egalitarian. Catching up is much easier than discovering, and we can run AI privately, unlike search engines and social networks. You can't exploit a SOTA advantage at scale. Being first is a fleeting advantage; the moment you go out in the open, everyone replicates.

Maybe one reason this is happening is because AI skills are very composable. Any addition to the skill repertoire already fits with other skills. This makes open sourcing skills very attractive. Of course, the datasets are what is being open sourced.


Fair, it is still developing.

But I think one thing is certain, he WILL create another AI company. It seems very unlikely he would quit the business.


I'm embarrassed to say, being Polish myself, I had no idea that key people in OpenAI were Polish - I assume it was a coordinated resignation then.

Madry is actually spelled Mądry, which translates to "Wise".


Btw, another Pole, Wojciech Zaremba (not mentioned in the article though), is a co-founder of OpenAI (and an AI researcher too).


A lot of top talent from Polish universities went to work in Silicon Valley (with the US spending $0 on their education, BTW). They're very well represented relative to the population of their home country, but still just a drop in the ocean of international talent that the Bay Area is swarming with.


To be fair, the US also spends $0 on most students going to Stanford and Harvard as well :)


Why make this about nationality?


Is that something controversial?

Anyway, I was providing context. The way I see it, since they speak the same language natively, they might have had a frank conversation about this at some point and decided to resign in concert.


I thought you were very clear. You clearly stated that it was an assumption, and why.

So many people overstate their ideas, which is basically deceit. People who represent guesses as fact are either lying or have a special kind of arrogant ignorance.


It's rather unusual to have all three people mentioned be Polish.


People can have multiple overlapping reasons for things; it doesn't have to be that they are on "Sam's side" or the "board's side" or whatever. You can agree with a firing and for whatever reason disagree with the way it was done or the future direction of a company, etc. We're all just guessing anyway.


I know Jakub Pachocki, and in my view, Jakub leaving is worse for GPT than Greg or Sam leaving.

I'm not claiming to know more than everyone else, but Sam was IMO just a face. Greg is a backend engineer - that's less important than actual research.


The whole team that trained GPT1-3 and came up with RLHF left to found Anthropic, perhaps proving that key people are maybe not so key and that other factors are more important. This is a further blow though on top of the previous one.


OpenAI was started as a non-profit, and it went the for-profit route with a cap.

For-profit will always win over non-profit because it's run by people: once they see money, they want more, not less.

This was meant to happen when it changed its structure to cap profits at 100x. If Microsoft invests $10 billion, OpenAI can make $1 trillion in profits before giving a dime to its non-profit. If it makes $1 trillion, it can just ask someone to invest another $1 billion and wait until it makes another $100 billion.

Sam, Brockman, and the others who resigned will start a new for-profit company. Sam will use his influence to get funding and contracts. They know the tech and, with their know-how of everything, can build a better model. And OpenAI will be forced to give up and work with them, as that is part of their founding policy.


We need to know why he was let go. All else is speculation. I doubt people would follow Sam if he was let go for a good reason.


Unless they don't know the reason. (But of course, it's hard to imagine what reason could be so secret that it's better to let people start jumping ship than share it with employees.)


We don't know if these people left to follow Sam. There are so few details surrounding what the actual catalyst of the firing was, and neither side is revealing the details.


I think you can safely assume the reason they left is related to the firing. The question is, are they following Altman or are they just disgusted by the power struggle?


Could we say that OpenAI is not only the company with the fastest product traction and impact in the world, but also the fastest to implode? When there are a lot of things at stake, not only money, people collide. This is a rant, but we are seeing several examples now in 2023. There are "rules" for normal startups and different ones at another scale.


As I said in the other thread, this is 100% an AI doomerist hijack.


As a doomer: yeah right, as if for-profit OpenAI wasn't an accelerationist hijack to begin with.


Doomerism when it comes to technology has always been such a weird mindset to me. "oh no, they're gonna take the horses/horse buggies/telegram/landlines away!"

"Goddamn, how dare they invent the bigger cannons!?" - Romans, 1453, in Constantinople probably. (One incident where I can use my exempt powers).


I agree. Critics will say something like "but don't you think it's sad that a machine was trained on the work of artists without their permission, and is now decimating the lives of the very artists who made its existence possible in the first place?". And, I agree, on its face it does sound sad. Very sad. But what these critics need to remember is that progress has always been a good thing, all throughout history. If something was true in the past, it will continue to be true in the future, and while it may seem difficult to envision how this will create a better world for all of us right now, it's important that we have faith and keep marching forward regardless.


Counterpoint: dogma often led to dark times.

We could try to think a little more deeply about things than "let jesus take the wheel"


Most of the time the technology is not building an artificially intelligent computer that is capable of superhuman reasoning.

Have you used GPT-4? Its reasoning capabilities match human ability. The more you think about it, the scarier the reality becomes. GPT-4 can reason through any mental exercise as well as a human. The rest of the work to make it autonomous is simple in comparison.


> GPT-4 can reason through any mental exercise as well as a human.

So can I.

And yet, people don't consider me an existential threat.

Mostly because I do not have nukes.


Yes. This point is almost always missed. 'Human Level' itself is not very high. And, 'Human Level' is still like Homer "doh, they unplugged me".

Current GPT doesn't have a physical threat.

But, take something like the movie "Colossus". Where they did give control of nukes. That was scary.

Now, go watch the Netflix show about AI. This GPT stuff is so far just fun apps.

The military already has AI that can out-pilot a human in an F-16; you think it will stop there? That is probably already old news.


Yes but you can't scale your brain by building a bigger womb.


>Have you used GPT-4? Its reasoning capabilities match human ability.

I use it every day, and I often have to guide it like a 5-year-old to come to the conclusion to help me the way I want it to.

>GPT-4 can reason through any mental exercise as well as a human.

So can my alcoholic neighbor. That should not be a benchmark of anything.


> Its reasoning capabilities match human ability

All research that I have seen disagrees with this take. Ask GPT-4 a few basic blocks-world problems and see for yourself whether it can match human ability.


GPT-4 can reason through any mental exercise as well as a human

Lol try giving it any of the puzzles from here: https://momath.org/home/varsity-math/complete-varsity-math/

Don’t just accept its confident tone, read through and actually parse the logic. It totally falls apart.


GPT-4 is capable of reasoning “in distribution”. Its reasoning drops off when you go outside the Goldilocks zone.


Yann LeCun disagrees with you, and I take his word on it


Please. Next year maybe but have a little respect please


It's not even that; they have literally one argument, and it's nanobots.


The more realistic argument is that AI will be used as a power amplifier by the already powerful.


Nah other way around. It would amplify the majority. That's why the powers-that-be consider it a huge potential problem.


Ah yes, because all those normal people will be able to run these powerful models on the devices that they currently own. Such a naive take.

The rich will ALWAYS get their piece of the pie, and once they've had their fill, we'll be left fighting for the crumbs and thanking them for their generosity.

AI won't solve world hunger, it will make millions of people jobless. It won't stop wars, it will be used as a tool for the elite to spread propaganda. The problems that plague society today are ones that technology (that has existed for decades) can fix but greed prevents it.


Actually it's the opposite: it would democratize power at an unprecedented scale; that's why corporations are funding these NGOs (useful idiots).


Through what mechanism would it democratize power? I thought the GPTs were already out of reach for regular end users due to computational constraints. Most people can't afford dozens of Nvidia GPUs and the API infrastructure to data mine.


computers used to fill rooms and now you carry one in your pocket


And yet rich companies and government agencies still have computers which fill rooms. https://www.energy.gov/supercomputing-and-exascale

And those which are carried in our pockets are no longer capable of being home brewed.


Nanobots are easy and convenient (for a superintelligence). But it's not like they're necessary. ASI can take over the world the old fashioned way, it just takes longer and is harder to explain.


Don't misrepresent the problem.


what is the problem, in tangible terms?


Machines have so far replaced us in physical tasks, which has forced us to move largely to menial office jobs, typing on a computer, doing things machines are bad at. Over 80% of jobs in the US are office-confined (or done from home, but that's not the point). We're actually a poor fit for those monotonous, sedentary jobs; our bodies and minds are not designed for them. From that follow the devastating effects on our physical and mental health. But you gotta have a job, or you can't exist. The system throws out parts that are not useful. It's the nature of the system.

Well here comes AI to take those jobs. What happens, you think? Where do we go next? Do you imagine we'll all just sit idle and give out orders for the AI to fulfill? Recall: the system throws away parts that are not useful. And we're not better at orchestrating this system than we are at implementing it. Most people already struggle to handle the complexity of modern life. So they'll be thrown out.

Now think what happens with a society where most people are unemployed, unhappy, and hungry, and businesses are mostly (not ENTIRELY, mind you, but mostly) self-sufficient machinery that does the thinking and does the footwork?

But even that doesn't describe the problem alone. It's more of an end game. Before this we'll see not-so-superior AI pollute our web, media, public space with quickly generated content, as actual artists and thinkers are displaced, unable to compete. Our culture will die first. And then, eventually, we'll start dying.

As I'm describing this, note I don't say this from place of fear. I don't fear this. I see it more as an obvious place for our civilization to go. We can't help it, because we don't decide where this civilization goes any more than your cells decide where you go, or any more than the atoms of your cells "decide" where the cell goes.

We're not in control. That's just evolution.

Say, when you're sick and you have cancer, those cells are part of you, but they harm you, so you cut them out, apply chemotherapy, and then if there's a prosthesis to substitute the organ you removed with a machine, you do it, and you don't think twice about it.

What makes you think our society as a whole is different? If humans are not good at what society needs, it cuts those people out, and replaces them with working machines. It's so plainly obvious. We pay lip service to human rights and the value of an individual, but clearly that's not what we end up doing. A politician is chasing money and power, and they don't mind starting wars to get them if they can. A business chases profit, so they don't mind automating away any employee they can. It's always been this way. So now that you can replace the human thinkers, businesses won't need human thinkers. And since there's nothing left humans are good at, society won't need humans.


Well, shit, maybe Altman shouldn't have stoked that by signing letters about how AI was an extinction risk, then.


Altman is a secret accelerationist and was just playing a ruse.


That’s convenient.


Ilya's job won't survive this power play...

Satya has been humiliated and will be furious.


What mechanism does Satya have to do anything here?

Microsoft has a minority stake in the for-profit subsidiary that is wholly controlled by the 501(c)(3). All investors (and employees) in the for-profit have to agree to the operating agreement, which specifies that the for-profit is not actually obligated to make a profit and that everything is secondary to the charter the non-profit operates under.

https://openai.com/our-structure https://openai.com/charter

There is not a higher power than the board of the non-profit.


So, for starters, they own all the compute being used.

I think people seriously underestimate how hard it is to get the GPU compute etc. that is necessary to be useful here. Lead time would be years, easily, even if you had the money. Nvidia can't change this for you even if they liked you - they literally can't build chips fast enough.

Depending on the exact agreement, Microsoft may have just given them credits/free use, and the part where they make sure the resources are actually available is just good faith that may no longer exist.

That's one example.

Even beyond that, you are assuming they can only do super-direct things, but it turns out to be fairly easy to make things very uncomfortable for people/companies indirectly.


This is a good point in a vacuum, but it's likely not that feasible legally, nor am I sure it would even be in Microsoft's best interests to do so to begin with.

Huge portions of the compute they are using are also directly doing inference for Microsoft products, so that's another dimension to all of this.

You also have to remember that ultimately Microsoft had to sign an operating agreement that states that the primary duty of the for-profit is the mission and charter of the non-profit and that all other things are secondary to that.

Not that any of that makes Satya happy, but it does severely limit his options outside of cutting off his nose to spite his face. I think the primary outcome is in significantly reducing the chances Microsoft continues to invest.


Sure, I agree some of this is not feasible, and the most likely path for any large corporation here is to cut their losses and figure out how to deal.

I'm just giving an example of where they do have leverage if they want to use it despite the cost.


How so?


Reports say Microsoft was caught completely off guard. Here's the damage control tweet Satya found it necessary to make shortly after it all went down: https://twitter.com/satyanadella/status/1725656554878492779


Makes me wonder how much of the investors' money was put into OpenAI as a company versus into Sam and the leadership. I think the answer is probably the latter, and the fact that Sam's firing was this easy (and would likely have the effect of others leaving in quick order) seems like a massive oversight in terms of investor due diligence on board structure etc.


While I don't know what is happening at OpenAI, in my experience (long enough to have an opinion), top people often leave when the employer is doing well or passing the last hurdle, because by that time the employer has gained enough presence and brand identity that anyone leaving can immediately land at a top FAANG or equivalent corp with much better benefits and of course pay/equity and other things.

The only time employees leaving has been a bad sign is when the company has been in zombie mode too long (a few years) and hasn't delivered anything yet, or is still in alpha/beta with crappy deliveries that no one is using or that have no significant uptake.

Also, if there are financial difficulties, then a lot of people leave in droves and some actually come back after finances get better.


FAANG would probably be viewed as a step down from OpenAI at this point (or at least until today)


Which is exactly why you can expect a good position and lots of benefits.


Does not matter; even a CEO-of-Facebook role is a step down. This is a different game. I doubt these people have to work anymore. They are after something else.


If you are a bright mid-career FAANG engineer, you probably don’t have to work. You can just move to Vietnam and live comfortably on your savings.

And people who are invested into “social impact” still care about their wealth and well-being despite living luxurious lives by the standards even of the developed world, let alone all other countries.


Err, how do you just “move to Vietnam” as a non-Vietnamese citizen? You’d need a visa and thus a job surely?

Like, sure you’d be able to live off your savings if you were allowed to stay in the country, but most countries have a short limit on a tourist visa and then you’d need to leave…


Nothing wrong with greener pastures/lower steppes


I don’t know if people realise that there really isn’t any more prestigious place to work at right now than OpenAI. You don’t leave there for a better opportunity.


>there really isn’t any more prestigious place to work at right now than OpenAI. You don’t leave there for a better opportunity.

It's like you wrote this comment yesterday, and not in the context of what has happened.


It’s like we’re talking about the motivation of people who left a company and thereby created the very context that happened?


> I don’t know if people realise that there really isn’t any more prestigious place to work at right now than OpenAI. You don’t leave there for a better opportunity.

My guess is they know something we don't. Or they assume sama being fired means the trajectory of OpenAI as a company has changed or is likely to change significantly?


They can ask for huge salary bumps, which may provide better opportunities for their families.


OpenAI pay is competitive enough (remember the articles last week about $10M offers?) and there's still plenty of room for their stock to go up. IPO or full acquisition by MS would both translate to a huge payday.


People who work at places of high prestige, especially in senior roles, are often the kind of people who take part in creating those "better opportunities". :)


Unless Sam is already forming a new AI company.


> While I don’t know what is happening in OpenAI

Would it really have been so hard to find out Sam Altman just got fired without notice?


yea you missed the thing that happened lol


2024 for OpenAI is looking bleak.


The world needs more truly open source AI models where the success and outcomes do not depend on a single figurehead or a corporation.


The truth or falsity of this statement turns entirely on whether more AI is good or not. You are in agreement with doomers that single figureheads are a weak point. That's the point. The disagreement is on whether having massive uncontrollable power widely dispersed is beneficial.


Who grey-texted this comment? So confused who could disagree with it. Is that the problem, this comment is so obviously true, it's just redundant?


The problem is that you cannot finance the training of a competitive AI model and then turn around and give it all away for free.

Who's supposed to pay for that?


Open and paid are not mutually exclusive. Someone is paying for Linux kernel development as well.


Universities usually do that, they work on open problems and publish their findings for free.


Since there is enough private investment available, why spend public money?


Good lord, is this the 'hacker' mindset now?


How difficult would it be for Altman and these guys to start OpenAI² tomorrow and become competitive with their old company?

Presumably it would require enormous startup capital?


They could get dump trucks full of cash within a week if they wanted it.


I think you are correct.

I think they literally could get dump trucks full of cash.

I see no reason why they would of course.

Still, as far as I understand (and I fully admit that isn't a lot), OpenAI is running on a lot of MS cloud, so they would probably not be able to offer OpenAI² enough compute in the near future.

I doubt anyone could, but if the need arose I am certain every at-scale provider would be doing their utmost to win the business.


It’s a lot cheaper to replicate than to innovate


I keep seeing so many comments about how the move was irresponsible from the board because it was a hyper growth company and blah blah blah. This board was not set up to care at all about shareholder value.

I’m not making any judgement about whether they made the correct decision. I’m just stating that everyone keeps talking about this as if it were a normal company structure and it absolutely is not.


My prediction: https://news.ycombinator.com/item?id=38309611&p=7#38311488

So here's my theory, which might sound crazy. Sam planned to start a new AI company and take OpenAI's top talent with him, breaking OpenAI up into the non-profit and his for-profit company.

Sam's first tweet after all this, posted just hours after this article, says:

> will have more to say about what’s next later.

So either he knew that he was about to be fired or at least was prepared.

Also, based on the wording of the press release, Sam did something that the board absolutely hated, because most of the time, even if he did something illegal, it doesn't make sense to risk defamation by accusing him publicly. And in his video from yesterday at the APEC summit, he repeated similar lines a few times:

> I am super excited. I can't imagine anything more exciting to work on.

So here if we assume he knew he was about to get fired, the conclusion is clear.


It would be ironic but sincere if the new company were named ClosedAI or something.


Even more so if the model were open-sourced, weights and everything.


Was Madry at OpenAI for a sabbatical? I thought he was a professor at MIT.


"I am currently on leave from MIT and spending it at OpenAI."

https://madry.mit.edu/


Sam and his friends will land on their feet, likely at $NVDA


I think Sam is smart enough not to waste his time inside a corporate mill.


Jensen isn't a normal CEO


I don't think NVDA would want to work on LLMs/AGI, it doesn't cohere with their business. In the short term they're focused on upscaling and frame generation, and long term it'd be neural rendering.


This news is worrying for the AI community. Three important researchers, Jakub Pachocki, Aleksander Madry, and Szymon Sidor, have resigned from OpenAI. It makes us wonder about the direction and teamwork at OpenAI. Aleksander Madry was in charge of the AI risk team, a group focused on making sure AI is used safely. Their leaving might affect how OpenAI works on AI and deals with potential risks. OpenAI needs to tell us more about why they left so that people can trust that OpenAI is still committed to developing AI responsibly. We hope to get more information to understand what happened.


Sam should lead a group of outstanding engineers to build another AI company. Maybe in the long run, leaving OpenAI's naive board of directors won't be a bad thing for him.


> group of outstanding engineers to rebuild another AI company

OpenAI was that, backed by a nonprofit structure, and it still led to Sam getting Michael Dell'ed/Steve Jobs'ed.

Seems like the issue was having a board without a majority on it. Zuckerberg having 53% of the voting power on the board is probably the greatest thing he managed. Anything Sam does from now on should follow the same approach.

/disclaimer - I have no idea how voting shares work.


ChatGPT is a toy, OpenAI isn't open, no one can download the models or the data, and no one can buy the chips needed to work with them. This whole thing is a joke. Shut it down and give the money back to the shareholders.


So now you've got the CTO, CEO, and 3 top engineers creating a new company pronto, with VCs and the usual suspects queueing up to throw money at them. Bad move from OpenAI.


Meanwhile, in the parallel universe, they founded a new company with Sam and Greg, and Apple entered the scene as a significant partner or investor.


Heard today: Microsoft hires Sam Altman to lead advanced AI research team after OpenAI ousting – business live.


Is it a coincidence that all of them are Polish?


Forgive my ignorance, but I am asking this because a lot of people have been mentioning AGI. Does OpenAI say anywhere that they want to achieve AGI at some point in the future? While GPT and other generative AIs are extremely innovative and cool, they are really, really far away from being AGI. Setting sights on something like AGI seems delusional, or a marketing stunt, at this point.


Yes. AGI is literally their charter.

https://openai.com/charter


> OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity.

First sentence on https://openai.com/about


It's time to just let ChatGPT be the real BOSS, no more nonsense.


The other cofounder also quit


The second person is named Aleksander Mądry, thank you.


Not according to his linkedin: https://www.linkedin.com/in/aleksander-madry-61115b233/

(it's about just one letter - a vs ą)

If that's the way he wants to be known, it's up to him.


His MIT page on the other hand. https://madry.mit.edu/


I'm super curious to know what happens to the GPTs store.

I've started a directory of GPTs on gipety.com so personally the longer it takes for the official store to be launched the better ;)


> Submitting information since paywalled links are not permitted.

This rule should be dropped. It's the reason that HN is dominated by low quality outlets. It's also in tension with another rule, which is that the original source of the article should be posted - the original source is usually an outlet that has the resources to do original research, thanks to a paywall.


Anyone who follows the gaming industry knows that Microsoft ruins nearly everything it touches.


>> Submitting information since paywalled links are not permitted.

They are? Here's my submission of your link.

https://news.ycombinator.com/item?id=38320350

Maybe @dang can put it under your name?


Some Microsoft bullshit. I've read an article where they said Greg was a really hard worker at OpenAI, so I can't possibly think of a reason other than this. Maybe they'll do something with Google? Who knows.


They seem to have finally achieved AGI; they realized they don't need as many people anymore :D


They now have Sam AItman.


I just want to point out that it was exactly a week ago that OpenAI was openly announcing $10M pay packages to pull AI researchers from other major companies, like Alphabet. These other companies are well established and if OpenAI were a normal corporation might look at various hostile takeover strategies.

How much more cost effective to just fool half the board of a nonprofit into taking unnecessarily aggressive action.


If you're an OpenAI engineer, now would be a good time to ask for a raise.


Google and Microsoft are today's big winners.


Microsoft has a huge stake in OpenAI, so they're very much a loser.


IDK, Satya might sense there's a wounded gazelle in their midst, starve OpenAI of funds, and pick up the pieces.


I don't think microsoft has the culture to be competitive on the cutting edge of AI.

If they try to integrate OpenAI, they will suffocate it.


Google did something similar with DeepMind. All the big innovations were at DeepMind, and then they got bought, stifled and absorbed. Big companies have a hard time innovating, so they buy it.


And then, 80% of the time, they suffocate what they bought.


Microsoft, not so much.


They have the GPT tech; now they can poach the talent from OpenAI without guilt.


Why should they feel guilt for giving workers a chance to get paid what they really are worth?


Because capitalism is a bad thing for rich people when poor people also get to do it like they do.


Just seen their departure tweets (link below). What's with this annoying trend of omitting capital letters from comms?

https://x.com/gdb/status/1725667410387378559?s=20


Glad I’m not the only one who finds this irritating. I once applied to a job and the manager replied using all lower case letters.

I guess he was trying to seem cool and approachable, but to me it just reads as unprofessional. I find it hard to take a piece of writing seriously when it’s formatted that way.


Oh that's great, so there are more of us who find this kind of writing unprofessional. Especially spelling errors in CVs, formatting errors in documents/presentations, one-sentence paragraphs and similar things.

It might indicate intelligence and people who are extra busy, thus not wasting their time, but depending on the position and circumstances a nicely written text is essential.


I started to write my comment in all lower case letters as a joke, but it was taking too long to cancel each autocorrect suggestion. So I definitely don’t buy the “I’m too busy” reason.

I think the main reason I find it unprofessional is that it is such a distraction. Professional communication should be clear and “get out of the way” so that the focus can be on the ideas being discussed.


And I thought I was the only one that didn’t like this kind of writing. No capitalization, writing “i” instead of “I” and generally trying to write as fast as possible.

I guess that's what happens if you're working 60-100 hours per week coding ChatGPT and you don't have time to waste. Sometimes I feel sorry for people like that and other times I envy them.


They think they are being cute and cool. "Look I am sloppy and don't care how others perceive me".


Very similar to “I don’t care about fashion, that’s an outdated social conformity thing” while wearing the same hoodie and jeans uniform as everyone else.


it takes extra steps to add capitals. and spell checking a short text? can you not figure out what is being said?

imagine being someone that is just being quick, and everyone around you is inferring some motive like 'being cool', when really he isn't thinking of your perceptions at all.


It's hardly a "short text"; it's effectively his resignation speech.

Each to their own I guess. For me the attempted use of correct punctuation is a reassuring social norm. It reassures me that those entrusted with power are able to pay attention to details and conventions.

To put it another way, if we can’t trust these guys to even write a message properly, how can we trust them with the stewardship of the most advanced AI the world has ever seen?


On the phone it autocorrects. On PC, Grammarly literally makes everything perfect even if you make grammar mistakes, including capitals.


Why do we even have capital letters, I wonder.


>comms

Ironically some of us haven't gotten used to this yet either.

But it does look very weird. The full stop at the end of a capital-less sentence followed by an ellipsis is what sticks out to me; it seems very tryhard.


Apologies! Irony not lost.


That has got to be some sort of code


Power play.


capital letters at the beginning of a sentence are deprecated, as they offer no tactical or semantic advantage. worse yet, they break the flow and consistency of written text.


The board is like a who's who of privilege and nepotism - https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...


Why nepotism? The article you linked to has no mention of particular family ties that got them there. If anything it's a remarkably meritocratic board, compared with most other industries.

The fact that it's all well-connected wealthy people is kind of the point; the board is there (among other things) to bring advice and experience.


If you really want to nitpick you could say it's cronyism, but we all know what nepotism means in this context, and there's no requirement nepotism be based on family ties anyways.


What credentials does Tasha McCauley have to be on the board of a company like OpenAI? I can think of 100s or perhaps 1000s of people more qualified to be there.


I think it's more that nepotism means being the child of someone important.


On a side note, what is with the ownership structure diagram?

OpenAI non-profit --owns---> Holding Company for Open AI non-profit

That makes no sense. How does a non-profit "own" a holding company that owns the non-profit?


It's incredible how many of you alleged tech people think that Sam just wanted a world of safe AI while his actions suggest he wanted to make dump trucks of money.

He was the one who partnered with Microsoft and turned it from a non-profit to a for-profit company.


I find the close partnership with Microsoft and their vision of "Creating safe AI that benefits all of humanity" difficult to reconcile. Open source vs commercialism could definitely be a reason for the split.


How so? Think about it: Microsoft is very risk-averse. Hell, even to get the Xbox project going they had to fight so damn hard because "we're not a games company, we make Windows and Office".

It's completely illogical to say that just because an AI company isn't open source it can't strive to also be "safe".


Microsoft is risk averse only in the sense that they don't want any competition.


This post and its replies are a contest for who can best perform enlightened cynicism.

The truth is there isn't any strong evidence one way or another, only your existing worldviews.

Partnering with Microsoft and closing access both have profit-driven and ideology-driven explanations, and we don't have strong evidence it's one and not the other.


[flagged]


FWIW I picked it in high school after the diamond sword in Minecraft.


Azure is a color, pretty common.


Oh, ok. Thanks for the education, sir. I'm sure it's referring to the color and not the second biggest cloud computing platform, grossing over 50 billion per year.


Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


The poster's name is [Type Of French Knife][French Word For Azure (the color)]

It is likely referring to the color, not the cloud platform.


You wouldn't have to bring up Microsoft as some kind of evidence, WorldCoin (eyeball scanning scam) is more than enough proof of what he's interested in.


What was the actual endgame there?


Yeah, an interesting point was brought up by David Sacks on the All-In Podcast. While technically Altman has (reportedly) no shares in OpenAI, he has significant shares/control in the weird entity the profits go to. So technically he has a kind of Peter Thiel 401k-type loophole that allows him to amass control of a potentially massive amount of money while on paper it technically isn't his.


How exactly will he make loads of money if he didn’t have any equity in OpenAI? Or should we assume that was a lie?


You know it's not just about the monetary compensation, that's rather short-sighted. sama is not hurting for cash. Being the CEO of OpenAI, in the climate that we are in, brings with it a lot of exposure and influence.


To add to your point: he was sitting with heads of state because of OpenAI.


That's just fashion. My dad once was good friends with the then-president of India. It didn't mean anything for any effort or initiative he wanted. It's not knowing or sitting with someone that matters; it's leverage. I doubt Sam had that over any leader. He didn't even have it over his own board, lol.


You’re almost right. Look at how easily and comfortably he engaged with world leaders. He wants power.


It's a stereotypical heuristic I use and maybe wrong, but I can't ever really trust anyone with a bunch of aesthetic plastic surgery.

If you're that much of a narcissist that you'd risk your health to look better, I need to take everything else you do and say with a big grain of salt.


I’d like to see a mea culpa from the hordes of people who insist OpenAI was purely doing a regulatory capture grift, instead of the very real and very obvious existential AGI debate going on behind the scenes, as we witness now.


Honest question: How would he make money if he didn’t have equity in OpenAI?


Worldcoin is the mud I've seen thrown around, but I'm making absolutely zero assumptions here. This whole situation stinks.


Your forward-looking value to an organization is your value whether it’s realized in equity or not.

In the world where Sam Altman leads OpenAI to market dominance and eventual acquisition by Microsoft for $400B or whatever, he obviously would represent an important part of what Microsoft would be buying and would be compensated accordingly.


If getting paid was his motive all along, wouldn’t taking equity have been a more straightforward way to do that?

Like, why would he not take equity, but instead rely solely on an acquisition?


No he wouldn't. If the CEO were substantially compensated for a $400B acquisition despite not owning equity, that would be fraud. It's the CEO's fiduciary duty to act in the best interest of shareholders; if they take a $10B cut for themselves, that's $10B that could've gone to the shareholders, and hence not in their best interest.

It is the reason why CEOs (not Sam, apparently) are usually compensated in stock options. Golden parachutes are a sort of severance for when the CEO gets fired immediately by the new company, e.g. Twitter.

He's totally the guy who made OpenAI into ClosedAI, but money was clearly not his motivation.


Steve Jobs ended up handsomely rewarded for his return to Apple well beyond whatever equity he held in NeXT. That’s the kind of ultimate compensation I’m talking about.

Yes, it's a strategy that would (and did) require, among other chancy things, Altman to make a big bet on himself rather than OpenAI on him.


Can they not be paid a (very substantial) retention bonus? I've known people at startups who had no equity and were paid retention bonuses during acquisitions. Is a CEO different here?


Yes, but not too substantial, or the shareholders have grounds to sue based on what I said above.

A similar (but even more complicated) case is currently going on over at Sculptor Capital Management, where former management is suing current management because they have chosen to go with a "worse" acquisition deal that would let current management stay on-board. This is despite shareholder approval to the "worse" deal. https://www.pionline.com/hedge-funds/ex-sculptor-executives-...

In fact, to prevent this situation is exactly why golden parachutes exist.

It's also totally out of proportion compared to what Sam would've gotten if he owned equity.


You completely miss OP's point. He is basically saying that Sam's market value is appreciated regardless of his equity. One example of how that is true: whatever company Sam starts next will have an insanely high valuation from the get-go.


Of course that's true, but what's the point of building a decacorn company just so you have higher chances of building another decacorn company afterwards?

If money was his motivation, why wouldn't he spend his time building a company like that in the first place? As the head of YC, I don't think he would've had any trouble raising for anything, even prior to OpenAI.

Also, it's not really parent's "point" as you claim; they quite explicitly talk about compensation in case of acquisition.


Via his Y Combinator related holdings in companies which use the OpenAI APIs? Or just which are getting capital and exposure due to ChatGPT boosting visibility of AI.


His partner, his parents, or some trust probably holds the equity for him.


Isn't the for-profit a subsidiary of the non-profit?


Correct. That's why they were able to fire Sam.


He wants his money and a bunch of others will follow him to whatever company he forms because they want their money.


[flagged]


Consider the possibility that people who know a thing or two about technology might still think AI brings a lot of good and that kneecapping AI isn't the right move at this time.


There are plenty of AI experts who believe that AI will do more harm than good if it keeps being developed at this rate.


This is not as strong an argument as you think. For pretty much every technological innovation since we transitioned from stone tablets to paper, there have been experts who believed it would do more harm than good.

Context: I believe the end of humanity will be real AI and not climate change.


I, myself, am a lot more worried about a few monopolists doing whatever they get to choose for the rest of humanity. AI fuels that vision more than any other piece of technology.


Yeah, it's a big debate atm with knowledgeable people on both sides.


Impossible challenge


OpenAI literally kicked off an AGI race, which is exactly the thing you do not want to do if you’re interested in AI safety.


> head of AI risk team

That's a good sign imo.


What are these people trying to do exactly? Create an AI that enslaves humanity?


Are those the first cracks in the AI market bubble?


Is it a bubble if it's useful and I use it dozens of times a day?


I agree AI is useful, but not to the extent that it is valued by the market. I do not think that AI companies can deliver as much as they promise. With the driving core of OpenAI basically gone, I bet they will soon implode under the weight of their promises. Which means investors will start pulling out their stakes. Boom.


Speaking for my own n of 1, ChatGPT Pro has almost entirely (>90%) replaced the Google search engine in my daily life. The results from ChatGPT are just so much better and faster.

That's got to be worth something, since Alphabet is a $1.7T company mostly on the strength of ads associated with Google search.


Google doesn’t care if you’re going elsewhere to ask deep questions about Rust or whatever. They care way more that people go to them to look for the best bread mixer, or find a good restaurant, or a local massage therapist. In that regard I think Amazon is still a much bigger threat to them.

GPT is very useful as a knowledge tool, but I don’t see people going there to make purchasing decisions. It replaces stackoverflow and quora, not Google. For shopping, I need to see the top X raw results, with reviews, so I can come to my own conclusion. Many people even find shopping fun (I don’t) and wouldn’t want to replace the experience with a chatbot even if it were somehow objectively better.


Yea. No.

People no longer using Google for the small stuff will be the beginning of the end of Google being the mental default for searches.


There is a wide variety of services available to people for specific use cases. When stack overflow came along, I used that for programming questions instead of google. But I still use google for most other searches.

I go to Amazon if I want to find a book or a specific product.

For the latest news, I come here, or Reddit, or sometimes twitter.

If I want to look up information about a famous person or topic, I go to Wikipedia (usually via google search). I know I can ask ChatGPT, but Wikipedia is generally more up to date, well-written and highly scrutinized by humans.

The jury’s still out on exactly what role ChatGPT will serve in the long term, but we’ve seen this kind of unbundling many times before and Google is still just as popular and useful as ever.

It seems like GPT’s killer app is helping guide your learning of a new topic, like having a personal tutor. I don’t see that replacing all aspects of a general purpose search engine though.


Your last paragraph - yes! Many people haven't realized this yet.

She/he/it/them is an amazing programming tutor.


Fair enough. My questions are more likely to be about Fast.ai, but I get your point.

Did you see the recent article about a restaurant changing its name to "Thai Food near me"?


I used it in the beginning, but now I am back to Google... I don't think the results were better with ChatGPT.


ChatGPT is not a good source of truth, so it can't be used for information retrieval at scale. You might have a specific usage pattern that is very different from the majority of Google Search users, so it works for you.


Personally, I don't have a use case for comparing Google and ChatGPT that has truth as a requirement in the output.

For the majority of my use of ChatGPT and Google, I need to be able to get useful answers to vague questions - answers that I can confirm for myself through other means - and I need to iterate on those questions to home in on the problem at hand. ChatGPT is undoubtedly superior to Google in that regard.


Agreed, but this will probably be limited to domains where there are better products.


Searching Google is not a good source of truth either; especially their infoboxes which have been infamously and dangerously wrong. And if you follow a random search result link - well, who knows if the content on that site is trustworthy either!


But you're in control of your information retrieval; you don't have an unreliable agent synthesising bits in the middle.

Again - to each their own. But GPT doesn't replicate what people actually use Google for anyway (and what the Google business was built on), which is commercial info retrieval.


As of a recent update, ChatGPT can do an internet search to answer "find a Thai restaurant near me." Of course, it uses Bing, not Google.

And for my single query above, ChatGPT searched multiple sources, aggregated the results, and offered a summary and recommendations, which is a lot more than Google would have done.

ChatGPT's major current limitation is that it just refuses to answer certain questions [what is the email address for person.name?] or gets very woke with some other answers.


Google is not a good source of truth at all, for anything other than hard facts. And nowadays, even the concept of "hard fact" is getting a bit fuzzy.

Google search reminds me of Amazon reviews. Years ago, basically trustworthy, very helpful. Now ... take them with a tablespoon of salt and another of MSG.

And this is separate from the time-efficiency issue: "how quickly can I answer my complex question which requires several logical joins?", which is where ChatGPT really shines.


The ad-ridden shitware websites filled with SEO buzzwords and 100%-opacity keywords also aren't a good source of truth. I'll take ChatGPT over that.


Sure, use a different search engine then.

To each their own.


For the past week or so I have been typing my search queries into OpenAI, Bard and DuckDuckGo to compare them.

I haven't finished making up my mind, but the AIs are doing OK. I have only been asking them for code snippets that are easily verifiable.


Google ads span much farther than search - they're all over the internet, on all websites, mobile, etc.


Even if OpenAI implodes it will hardly impact other LLM-focused startups. In fact it would probably be a boon for them as people search for GPT alternatives.

Sam & Greg could start a new AI company by Monday and instantly achieve unicorn valuation. Hardly a burst.


This is almost certain to happen if they can snag the talent. I bet his phone is blowing up with VCs right now: a revenge move, now unshackled from the non-profit nature of OpenAI.


Where are they going to get the compute or the data?


Honestly this is exciting. Are they going to be the first company to achieve a $1 billion valuation within 3 days? Would they file the incorporation papers on Monday, meaning they get that valuation within 24 hours?


What's the use of a newborn baby?

AI is as real as the mobile/internet/pc revolution of the past.

So many use it obsessively every single day.


"This is good for bitcoin."


I paid the 20 dollar subscription. I don't even subscribe to netflix.


> I agree AI is useful, but not to that extent to what it is valued on the market.

I agree, it's greatly undervalued!


Was the housing market a bubble if millions of people lived in it and spent 75% of their time in it?


Yes, a bubble is a mismatch between price and value. Saying value > 0 is not disproving a bubble.


Dotcom bubble...was/is the internet not useful?


It doesn’t feel the same since there are a handful of players in the space. I see your point though.


Is Mistral really a 2 Billion business after only 6 months? https://www.ft.com/content/387eeeab-1f95-4e3b-9217-6f69aeeb5...


There have been bubbles in the housing market in the past - houses are quite useful.

It's a bubble if the valuation is inflated beyond a level compared to a reasonable expected future value. Usefulness isn't part of that. The important bit is 'reasonable', which is also the subjective bit.


Could be. Housing bubble happened even though most people lived in houses and still do. It's all in price vs utility. If the former gets way ahead of the latter, and people start trading just on future price raises, you've got a bubble.


Electric cars are useful as well, but still, most electric car startups are -90% down from the peak. A financial bubble does not mean the underlying product is bad.


It is a bubble if it is overvalued. I don't think it is, but nothing prevents something useful from being a bubble, if the valuation is extreme.


If the compute is paid with imaginary hype money? Doesn’t matter how useful a service is if providing it turns out to be unsustainable.


Calling chatbots AI is definitely a bubble


It can be useful in certain contexts, most certainly as a code co-pilot, but that and yours/others' usage doesn't change the fundamental mismatch between the limits of this tech and what Sam and others have hyped it up to do.

We've already trained it on all the data there is; it's not going to get "smarter", and it'll always lack true subjective understanding, so the overhype has been real, indeed to bubble levels, as per OP.


> it's not going to get "smarter" and it'll always lack true subjective understanding

What is your basis for those claims? Especially the first one; I would think it's obvious that it will get smarter; the only questions are how much and how quickly. As far as subjective understanding, we're getting into the nature of consciousness territory, but if it can perform the same tasks, it doesn't really impact the value.


My basis for these claims is my research career, work described so far at aolabs.ai; it's still very much in progress, but from what I've learned I can respond to the 2 claims you're poking at--

1) We should agree on what we mean by smart or intelligent. That's really hard to do, so let's narrow it down to "does not hallucinate" the way GPT does, or, at a higher level, has a subjective understanding of its own that another agent can reliably come to trust. I can tell you that AI/deep learning/LLM hallucination is a technically unsolvable problem, so it'll never get "smarter" in that way.

2) This connects to number 1. Humans and animals of course aren't infinitely "smart"; we fuck up and hallucinate in ways of our own, but that's just it: we have a grounded truth of our own, born of a body and emotional experience that grounds our rational experience, or the consciousness you talk about.

So my claim is really one claim, that AI cannot perform the same tasks or "true" intelligence level of a human in the sense of not hallucinating like GPT without having a subjective experience of its own.

There is no answer or understanding "out there;" it's all what we experience and come to understand.

This is my favorite topic. I have much more to share on it including working code, though at a level of an extremely simple organism (thinking we can skip to human level and even jump exponentially beyond that is what I'm calling out as BS).


I don't see why "does not hallucinate" is a viable definition for "intelligent." Humans hallucinate, both literally, and in the sense of confabulating the same way that LLMs do. Are humans not intelligent?


Then why can't the grounded truth of ChatGPT be born of a body of silicon and the emotional experience of zillions of lines of linguistic corpus?


Those zillions of lines are given to ChatGPT in the form of weights and biases through backprop during pre-training. The data does not map to any experience of ChatGPT itself, so its performance involves associations between data, not associations between data and its own experience of that data.

Compare ChatGPT to a dog -- a dog's experience of an audible "sit" command maps to that particular dog's history of experience, shaped through pain or pleasure (i.e. if you pair a treat with "sit", you'll have a dog with its own grounded definition of sit). A human also learns words like "sit," and we always have our own understanding of those words, even if we can agree on them together to certain degrees in lines of linguistic corpora. In fact, the linguistic corpora are born out of our experiences, our individual understandings, and that's a one-way arrow, so something trained purely on that resultant data is always an abstraction level away from experience, and therefore from true grounded understanding or truth. Hence GPT's (and all deep learning's) unsolvable hallucination and grounding problems.


But I'm not seeing an explicit reason why experience is needed for intelligence. You're repeating this point over and over again but not actually explaining why, you're just assuming that it's a kind of given.


I would appreciate another example where a major new communications technology peaks in its implementation within the first year after it is introduced to the market.


FTX / crypto, which just imploded last year.

Look, I'm an AGI/AI researcher myself. I believe and bleed this stuff. AI is here to stay and is forever a part of computing in many ways. Sam Altman and others bastardized it by overhyping it to current levels, derailing real work. All the traction OpenAI has accumulated, outside of GitHub Copilot / Codex, is itself so far from product-market fit that people are playing off the novelty of AGI / GPT being on its way to "smarter than human" rather than any real usage.

Hype in tech is real. Overhype and bubbles are real. In AI in particular, there's been AI winters because of the overhype.


It seems we are talking about multiple different things. I never denied hype was a thing.

You’re talking about hype cycles now. Previously it seemed like you said AI was not going to be advancing.

LLMs are maybe headed into oversold territory, but LLMs are not the end of AI, even in the near term. They are just the UI front end.


> major new communications technology peaks in its implementation within the first year after it is introduced to the market.

Peak is perhaps the wrong word, local maximum before falling into the Trough of Disillusionment.


I think something terrible happened at OpenAI and we just don't know what it is yet, we will eventually. But I believe something unethical happened, something that might be even illegal. For the board to remove the CEO, it means that the board doesn't trust him anymore.


You must not use AI if you think this. AI is not the bubble. Everything AI will replace is the bubble.


AI today is in a weird position. What I can do today with AI was _unimaginable_ just 1 year ago. However, for a lot of people, the concept has worn out so fast that they don't realize anymore what has happened... Some very intelligent people said some very silly things, such as that it was a bad search engine (it isn't a search engine) or that it was a glorified word guesser (it doesn't exactly work like your telephone's word suggestion). And so on and so forth. People always try to understand new technology through the lens of older technology. I do it, you certainly do it. This is how we grasp novelty. But AI is in a different dimension. I have been working in the domain for 30 years and I really didn't think we would reach this level in my lifetime. Talking to a computer to get it to do some quite complicated task is INCREDIBLE... However, since communication is so ubiquitous for humans, we tend to forget that it is an incredible achievement... In less than a year, we went from sci-fi ("Her") to reality, and in less than a year people have become blasé about something so fantastic... This is what consumerism did to people... They can't wonder for more than a year...


It's a wonder. But if capex and opex are insane, and go up with every generation, the magic fades quickly.

Like, GitHub Copilot may be amazing, but if it loses money for every added user, if power users lose the company 4 or 8 times what they already pay, then maybe it's not an efficient use of compute resources.


Yes, this. What made tech companies valued at higher revenue multiples than other industries is that new users could start using a product at near zero marginal cost once the tech was built. New revenue at zero marginal cost. AI is great but expensive to operate and the expense grows in direct proportion to usage. New users come in and you have to stand up a new data center full of H100s to serve it.


> Talking to a computer to bring it to make some quite complicated task is INCREDIBLE...

Are there any good examples of this? I struggle to use ChatGPT, maybe I'm using it not cleverly (or deeply) enough.


Recently I had to share documentation written in Word, which I had to convert to Markdown to put it on GitHub. I turned my document into one text file per chapter, and then I asked ChatGPT to transform each of these files into a Markdown page. I also asked it to improve the English. Then I asked ChatGPT to translate each of these files into different languages. The result is here: https://github.com/naver/tamgu/tree/master/documentations Basically, I did in a couple of hours something that would have taken weeks of tedious work.
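The core loop for that kind of batch conversion is only a few lines. A minimal sketch, assuming the openai Python client (v1+); the model name, prompt wording, and directory layout here are illustrative placeholders, not necessarily the exact ones used:

  # Minimal sketch: batch-convert chapter text files to Markdown via the OpenAI API.
  # Model name, prompt, and paths below are assumptions for illustration.
  from pathlib import Path
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  PROMPT = (
      "Convert the following raw chapter text into well-structured Markdown: "
      "detect headers, sub-headers and code blocks, and lightly improve the English. "
      "Return only the Markdown."
  )

  Path("markdown").mkdir(exist_ok=True)
  for chapter in sorted(Path("chapters").glob("*.txt")):
      text = chapter.read_text(encoding="utf-8")
      # Long chapters may need to be split to fit the model's context window.
      response = client.chat.completions.create(
          model="gpt-4",
          messages=[
              {"role": "system", "content": PROMPT},
              {"role": "user", "content": text},
          ],
      )
      markdown = response.choices[0].message.content
      (Path("markdown") / (chapter.stem + ".md")).write_text(markdown, encoding="utf-8")

Translation works the same way, just with a different system prompt per target language.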


OK then that's tedious work, not complex work.

I know that lots of people have personal stories of using ChatGPT but I was hoping something publicly reported on or like a showcase of truly impressive usages somewhere.


You are kidding, right? Taking a raw text file and detecting every single header, sub-header, keyword and piece of code to add the right Markdown tags is a simple task to you?

Have you ever tried to make a Python program to do exactly that?

I only used couple of sentences to build my prompt...


No, what is the computational complexity of the task? The computational complexity of the problem is not about how long your code is or how long you took to write it.

I don't know your CS background but perhaps I do not view the terms "complex" and "tedious" the way you assume. A tedious parser is certainly tedious to write, but it is not (necessarily) complex. And from an engineering standpoint it is questionable that you lost all the formatting information from Word, which would have already demarcated what things were headers, code, and so forth. So, you had to use a roundabout way—an LLM—to recover that information from the semantics.

If what you're really arguing is that ChatGPT works well for language translation tasks, in this case translating mixed prose, code, and foreign languages--sure, I guess that's great for productivity and removing tedium, but it's not that surprising a usage given what LLMs are. They are language translators.

In other words you're saying it's complex but your argument reduces a task that is straightforward but tedious for humans, to the problem complexity of natural language processing.


AI art is already boring. It's all a fad; eventually the cracks start to show and you can't unsee those cracks. Very similar to bitcoin, tbh.


The interesting-to-boring scale depends on novelty. Given the AI oversaturation, it works exactly as expected.

AI art is currently at a very early stage. In the real art space (3D modeling, sculpting, animation, VFX, rigging, retargeting), it could make huge breakthroughs and multiply true artists' productivity in significant ways.


I've noticed that those who worship productivity tend to appreciate "content creators" not artists :)


During my brief venture into 3D graphics I often found that 1 hour of truly creating is spaced between ten hours of fixing up UVs, redoing topology, finding the cause of a seam, trying to make a watertight mesh for photon tracing, fighting the subdivision algorithm to retain details, and diverting an edge loop to where it causes the least distress.

I call the first hour true productivity. The rest is, from the perspective of the end product, simply wasted time. That's very similar to the boilerplate code everybody agrees is a necessary evil in programming.

If AI can reduce that second part, it will truly have a positive impact.


It's interesting how some terms we use reveal our world view. What does referring to art as "product" imply?


"This radio thing is a dead end. I usually just get a bunch of static when I turn it on, or maybe an electrical shock. And even if it works, it's only a question of whether a tube burns out before the battery dies." - lazystar's grandpa, circa 1923


As I mentioned earlier, I was one of the first generation of bitcoin users. The problems with it have yet to be fixed.


This isn't bitcoin. It's radio.


You're one of the ones who used to call bitcoin "buttcoin", aren't you? Very sure of yourself, very wrong.


Seeing as BTC is still quite useless apart from a few legitimate uses, and is otherwise mostly used for illegal purposes: yes, BTC is a failure.


Oh, you mean because Bitcoin didn't turn out to be a stupidly high-risk speculation object without any value rooted in reality, wasting a sizeable portion of the power production of a world threatened by the climate crisis, swinging between orders of magnitude in valuation on Elon's whims? The Bitcoin that is irrelevant everywhere outside the crypto bubble itself?


After I lost everything in the Mt. Gox scandal, I used my last bitcoin to buy a Lenovo ThinkPad, which I used to learn computer programming and network engineering.


Come on man, we're all on the AI hype train now. Get with the program, you're one bubble behind!


$MSFT might be down but all others are probably going to be up


I've a penny that says the good ol' MSFT "embrace extend extinguish" is back in full swing.

https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...

Literally people thought they were saints because of VSCode FFS.


I'm going to predict more by tomorrow as the AGI secures its position.

In other news, Tesla FSD has been rebranded Saneer-Weeksbooth Autobot. So be careful out there folks.



