skepticATX's comments

My feeling is that this is one reason they decided to hide the reasoning tokens.

yes indeed

> People should realize we’re reaching the point where LLMs are surpassing humans in any task limited in scope enough to be a “benchmark”

This seems like a bold statement considering we have so few benchmarks, and so many of them are poorly put together.


Why does no one read the actual proposal before commenting?

It specifically states that this only applies to individuals with 80% of their wealth in tradeable assets. No founder is going to lose control, because this doesn’t apply to them!


Surely that would just mean no IPOs ever again right?


I can IPO, sell some shares, and make billions but pay 25% in tax, or I can not do that, and then what happens? How do I turn my non-IPO shares into profit? Presumably if you liquidate to cash you owe capital gains tax, right?

You’re still going to end up with a higher valuation on the stock market than you would get by trying other means to avoid this tax.


I'd love to. I'm genuinely interested in how this policy could be implemented and would love to read their suggestions. I think it's very hard to pull off successfully. Can you provide a link?

I thought Harris was adopting the President's 2025 budget proposal [1], which doesn't state that this applies only to tradable assets, but according to the downvoters I'm wrong about that. As far as I can tell it provides no comment on how "wealth" is determined.

[1] https://home.treasury.gov/policy-issues/tax-policy/revenue-p...

I suppose the whole argument is moot anyway as the President doesn't pass a budget, Congress does. And this document is really about communicating priorities, not actual policy.

And if one wants to get really persnickety, Harris didn't actually say anything. Some people working for her campaign did.

https://www.nytimes.com/2024/08/22/us/politics/kamala-harris...


Shares of a company are tradeable assets, no? Maybe not before the company is public depending on wording, but definitely after.


Because that detail, like the $100 million limit, is an irrelevant detail subject to change. (Most) people have the ability to synthesize the issue and are worried about the proposal's fundamental core:

"Do we want the government to tax unrealized gains?"

No. I find it very scary frankly, even though I believe that the top 0.01% of the US population are parasitical and their financial and political clout should be reined in.


Zoning and car-centric design are intimately connected, though. A walkable suburb just can’t support the number of cars that a typical western suburb supports today.

If you loosen zoning, what you’re going to end up with is denser development and less room for cars.

Streetcar suburbs are an example of this. You have more room than living in the core of the city, but you don’t have enough room for 3 F150s for the family.


The streetcar suburbs in my area have larger lots than the new suburbs they are building on the edge of town. Back in those days everyone had a garden, so lots were bigger.

Of course how the lot is built is different, but it isn't the size.


This. Zoning and car-centric life are endogenous. I would even add NIMBYism w.r.t. adding public transportation stops or bicycle lanes.


I’d argue that by definition, code producing value is good code. It may not be the best, but it has to be at least good.

Almost everything else about code is subjective, but value is objective.


Exactly. All this reporting says is that the actress wasn’t explicitly told to copy “Her”. We still don’t know about the intentions of OpenAI throughout all of this. With Sam’s seeming obsession with the movie, are we really supposed to believe that the company never discussed it internally?


I don’t believe in the AGI claims, or in X-Risk. But I do think it’s apparent that AI will only become more powerful and ubiquitous. Very concerning that someone like Sam, with a history of dishonesty and narcissism that is only becoming more obvious over time, may stand to control a large chunk of this technology.

He can’t be trusted, and as a result OpenAI cannot be trusted.


There are models that are nearly as good as GPT-4 now. For personal usage, I've been using them for a while now. OpenAI has jumped the shark so much that I'm going to advocate for moving to Anthropic/Google models at work now. OpenAI simply can't be trusted while Sam is at the helm.


Interesting that OpenAI has completely destroyed their reputation over the last year. For me, they've gone from an admirable company to cringe-worthy. I really think that the best outcome here is for OpenAI to collapse and get absorbed by Microsoft, where adults can continue some of the good work that is being overshadowed by ego and cult-like vibes.


I find this kind of hilarious because I had just been commenting that, "at the moment all the engineers at OpenAI, including gdb, who currently have their credibility intact are nerd-washing Altman's tarnished reputation by staying there."

And lo and behold, Altman chooses gdb's Twitter account to make the first joint statement instead of his own, creepily signing off "Sam and Greg" instead of "Greg and Sam" on gdb's own personal Twitter account.

EDIT: The only reason to go to these weird lengths is if he is feeling especially vulnerable right now and feels the need to broadcast a "Greg is with me, I swear" message to signal he is not alone.


There seem to be only three options here:

1) AGI is not around the corner, so no worries

2) He no longer cares about the possible negative effects, a departure from his past statements

3) I missed something, please help me learn


> 1) AGI is not around the corner, so no worries

I agree with this.

I no longer agree that the "safety" people want the same things we do. I suspect, after the Bard debacle a while back, that "safety" looks a lot more like 1984.

> He no longer cares ... I missed something

Sam was gone, none of these people spoke up, and now they are leaving. I think the story is plain as day but we're pretending it isn't.

Sam isn't a leader, and he never has been. This is not what leadership looks like; it's what a megalomaniac trying to keep control looks like.


> I no longer agree that the "safety" people want the same things we do. I suspect, after the Bard debacle a while back, that "safety" looks a lot more like 1984.

AI Safety will inevitably look a lot like DEI programs. They're trying to thread a needle with two fundamentally opposed ideas, and with a bit of time it turns into little more than a combination of Cover Your Ass policies.

The ridiculous thing is that we actually need initiatives that follow what both AI safety and DEI set out to be. Somehow along the way they get ruined though, leaving us in an even worse spot because for a while we have an excuse to think there are adults in the room actually making sure things are moving in the right direction.


I didn't say DEI for a reason, here is the other side talking rather overtly: https://www.youtube.com/watch?v=xnhJWusyj4I ... Pick a topic and someone is gonna have a really bent take on how things would be... and keen on jamming that take down someone else's throat so they can feel better. Everyone has shitty takes that they want others to buy into... everyone.

Is Taiwan a country? I would say yes, most Americans would (I hope). Ask ChatGPT...

4chan is still a thing, the library is still a thing. The nuclear boy scouts made a giant mess before there was an internet.

As for safety, well, it's gonna be a long time before shutting down the world isn't a weekend project for a handful of people if they are allowed.


I raised DEI as an example of a field started with good intentions to solve very real and important problems that, in my experience, completely lost its way. That's not to say there isn't good work being done or good people doing it, but from what I've seen DEI has in many ways been whittled down to a combination of public relations and CYA policies.

DEI has definitely become a political trope, but regardless of what Newt Gingrich or any other talking head might say, there are plenty of valid critiques that can and should be made of the field if it's going to get back on track.


Have you actually talked to any AI safety people? They despise the DEI-flavored AI ethics "make sure it generates enough black people" stuff just as much as you do, and I think you're lumping the two of them together just based on your own dislike, without checking whether that's actually reflective of the reality on the ground.


I didn't actually mean to refer to the DEI-flavored AI ethics. That was totally unintentional on my part, I should have caught that.

I raised both because I see them going down very similar paths. DEI conceptually is a really important idea and could lead to really important change or course correction. As implemented today though, it seems to most often be implemented as a much more surface level program that sure feels more like PR or legal protection than anything else.

With AI safety, based on those in the industry I have spoken with, there's a very real risk the same happens. AI safety, again as I've seen it implemented and heard from people working in the field, is much more concerned with minor risks and has given up on concerns like the alignment problem. AI safety isn't concerned with moral or ethical questions of what happens as the technology progresses.

I'm not just talking about job loss concerns. Thinking bigger for a minute, what rights would a real AI have? Can it be turned off? Can it commit crimes or be punished? Does it have rights? No one in AI safety is realistically considering whether these systems should be on the public internet at all, unless there are drastically more powerful systems kept under wraps due to these risks. No one is seriously asking about risks to privacy; I'm sure some in the field share those worries, but they are outliers and don't seem to be given the ability to meaningfully move the industry.


> DEI conceptually is a really important idea and could lead to really important change or course correction. As implemented today though, it seems to most often be implemented as a much more surface level program that sure feels more like PR or legal protection than anything else.

I suspect that some DEI efforts are helpful and effective, and some DEI efforts are hollow or foolhardy. We probably can’t speak of “DEI today” as a monolith. Also we may be biased to hear about instances of it being stupid and ineffective because that can be a useful talking point to some. Instances where it works well and gets more people hired and engaged are less interesting to a predominantly white society, so maybe aren’t discussed as much outside of non-white communities.

Idk that’s all a load of speculation but I wanted to share these thoughts/observations about your argument.


I do agree that my referring to DEI today may be too broad, that's a great point.

> Instances where it works well and gets more people hired and engaged are less interesting to a predominantly white society, so maybe aren’t discussed as much outside of non-white communities.

This got me curious: have you seen any examples of DEI programs helping to get more people hired rather than different people hired? Either can be useful, but that distinction would be a big one, as the former means DEI is somehow growing the job market rather than refocusing hiring practices.

Nothing wrong with speculation as far as I'm concerned! Reliable and accurate data is hard to come by, I'd argue that most of what is presented as fact is little more than speculation backed by fuzzy data full of assumptions.


Our DEI program was great. It helped us scale from 100 people to 1000 people by scouring HBCUs across the US for talent. Had we hired only from Silicon Valley and Stanford, where we were located, things would have sucked during that growth and our previously global hodgepodge of a team that built products bringing in hundreds of millions of dollars would have been White-washed by rich kids, atrophied and died. Instead, we went on to billions of dollars because we had a group that weren't all fucking Stanford grads with a couple years of Google or Facebook under their belts and mommy and daddy to fall back on.


> Thinking bigger for a minute, what rights would a real AI have?

None, and the court has already spoken on this. It's a pretty dead issue.

> Can it be turned off?

Yes it's a machine. Save state, power down.

> Can it commit crimes or be punished?

See PGE and its own death toll.

> Does it have rights?

No, and we aren't even on a path where these questions are relevant. It's all an exercise in mental masturbation. If you think that we're going to accidentally stumble into sentience, never mind sapience, I have a bridge I would like to sell you.


We can't formally confirm we've stumbled into sentience because we don't have a definition of it, let alone a test which can confirm or refute its presence.

People will think they "know it when they see it". The arguing will be fun.


> None, and the court has already spoken on this. It's a pretty dead issue.

The question of rights would be legislative rather than judicial. More importantly, the discussion should actually be among the people first in anything resembling a democracy, the few in charge are meant to represent us not rule us.

> See PGE and its own death toll.

Unless I'm mistaken, PGE isn't an artificial intelligence or sentient.

> No, and we aren't even on a path where these questions are relevant. It's all an exercise in mental masturbation. If you think that we're going to accidentally stumble into sentience, never mind sapience, I have a bridge I would like to sell you.

OpenAI's explicit goal is to create and release an AGI. A majority of experts in the industry I have either talked with personally or heard in long form interviews expect that we could be very close to AGI, on the order of a couple years up to maybe 15 or 20 years on the high end. Given how slowly any societal discussion related to rights of a population move, do you think we'll have plenty of time to decide this after an AGI is released in some spectacularly Silicon Valley product release party?


>> on the order of a couple years up to maybe 15 or 20 years

Fusion, flying cars, AI in the 50's and in the 70's... Hell, HAL was born in '97 and the film came out in '68.

For as impressive as LLMs are, once you dig into them they aren't magical at all. A sophisticated model of language that predicts the next word is about as likely to become an AGI as weather predictions are to control the weather. Ask any expert in AGI what the "next token" is and they are going to fucking disagree. This isn't us building the bomb, where we have a pretty good idea of how to do it and just need to put all the parts together. This is a bunch of people stabbing in the dark and getting lucky here and there.

We're not going to bumble-fuck our way forward on this, and the path we're going down has potential to be a game changer, but it's not going to give us a super intelligence...


We don't have a clear, agreed upon definition of AGI, a way to test for it, or a way to predict whether a new model will meet those criteria.

The definition OpenAI uses for AGI is being more economically valuable than most humans at most things. That definition is entirely backwards looking. They'll keep releasing new models and occasionally check in to see how each model compares to humans economically. If that isn't bumble-fucking our way into it, I don't know what is.

It's interesting that you treat the difference between knowing the physics behind a bomb before building it and AI research stabbing in the dark as an acceptable thing. It's precisely because they don't know exactly what they're doing that there is so much risk. They're building potentially very powerful and dangerous things, connecting them to the public internet, and selling them to whoever wants to use them. That doesn't seem at all risky or dangerous to you?


> The definition OpenAI uses for AGI is being more economically valuable than most humans at most things.

Then we're already there. The web: Amazon is already more valuable than Sears, Roebuck was as a paper catalog. All those people who took calls or opened envelopes, all the manual tracking of inventory and orders... we replaced all that years ago.


I don't think anyone would consider a paper catalog as a candidate for any form of intelligence.

It is a good example of how poor the OpenAI definition is, but confusing any human invention with intelligence is a bit of a stretch.


If most people think the lobotomized and sanitized commercial model behavior is the product of "safety" and "alignment" and that isn't really the case, then the "AI safety people" have done an extremely poor job of communicating exactly what their actual goals are.


I’m confused. How are AI safety and DEI opposed?


They aren't opposed, I may not have been clear enough there.

I raised DEI as an example of a system that, in my opinion, is further down the path that I see AI safety going down. They both started with strong intentions to help guide or redirect an industry but, again in my opinion, both fields have ended up playing a role of PR and "cover your ass" more than anything else.


>I no longer agree that the "safety" people want the same things we do. I suspect, after the Bard debacle a while back, that "safety" looks a lot more like 1984.

It's interesting that that's how it looks to you from the outside.

If you look a little closer at the "AI Safety" community, it consists of two competing factions: the faction responsible for the Bard debacle, and the faction that's focused on trying to prevent human extinction -- "AI notkilleveryoneism", as in https://x.com/aisafetymemes

They both insist that the other faction is "distracting everyone from the real issues".


Way back in the day, almost 4 years ago, I recall a third front on AI safety. It was called UBI and it was really hot. What happened to that?


Universal basic income?

The idea behind that was people would go on to pursue more creative or artistic jobs that don’t prioritize profit after being given a basic income to live off.

Unfortunately it won’t work out that way: We see now that AI will mostly take over creative and artistic work and people will have no motivation to pursue art. The only jobs left for humans are hard physical labor and no one seems excited to do that… so you’ll just be giving people basic income to sit around and do nothing all day.

Having a society of mostly idle people that cost you money doesn’t really sound like a great or sustainable idea, nor does there seem to be any benefit to society.


> Having a society of mostly idle people that cost you money doesn’t really sound like a great or sustainable idea, nor does there seem to be any benefit to society.

So what are we going to do with all of the excess humans once human productivity jumps one, two, or three more orders of magnitude?


We will cull them like we did when we invented the sewing machine and the loom... them luddites were right then and they can be again

/s


I am half /s, half dead serious. The sarcastic half is me playing along with AGI singularity maximalists like Sama.

The other half is me dead scared that he is right. LLMs/GPTs != cryptocurrency. There are actual use cases other than money laundering. If you are paying attention, then the insane scaling in capabilities should have us all dumbfounded. I agree with Linus Torvalds: "just" predicting the next token is not an insult, that's mostly what we are all doing.

Human productivity/the global economy has been growing at an exponential rate[0] since the industrial revolution. In our lifetimes we have ridden the close-to-vertical part, and if GPTs continue to scale, then it will continue to near-vertical. This will be near-post-scarcity society. Capitalism will soon have become so successful that it will make itself obsolete. Are we ready for that, politically? Most of our ruling a-holes maintain power via scarcity, how will they react?

Culling might not be an action, just an apathy. The transition is going to suck as the poors now will also have taken advantage of that hockey stick of asymmetric warfare aka "productivity" via things like ML powered kill drones. This tech is happening now, in 2024-25, in Ukraine.

We are at a crossroads, more than when horse-driven carriages turned horseless. The humans are the horses now, and they will not be happy.

We really should have worked out global civility by now, given the risks of truly global destabilization.

[0] https://ourworldindata.org/grapher/global-gdp-over-the-long-...


>> This will be near-post-scarcity society. Capitalism will soon have become so successful

Not even close.

Take something like the iPhone. Everyone will want a pro max latest version. Great we can make 8 billion of them. How do you hand them out... we have to get in line I think.

The person at the back of the line, looking at 8 billion assholes in front of them is gonna get stabby...

The chart you're pointing out is showing us that an increase in productivity and/or creativity will only serve to exacerbate disparity, not reduce it. Who goes first when there are more firsts is going to be a bigger problem.


Attention will be scarce, influeism will replace capitalism.


Massive inflation from the COVID shutdowns and stimulus checks happened.


Do you have any data to support this statement?


I'd also like to know about anyone writing on this subject.


> They both insist that the other faction is "distracting everyone from the real issues".

Because in part, it's a false dichotomy. AI safety just is. It's like airplane safety; we can write all these cool guidelines and track people from hell and back, but there's still a degree of determinative capability that pilots assume when they fly a plane. Same goes for operating AI; the onus of not using it to kill everyone falls on everyone, not one person.

Both sides are distracting each other because neither of them have anything to support without their valueless tribal politics. AI research operates independently of their discourse, and gets deployed without ever consulting any of them. They are armchair experts at best, and stoop to being Twitter reactionaries when they demand respect from their core audience.

The only dichotomy that exists in AI is the stuff that gets made and the stuff that doesn't. I say this as someone that despises the field by now and wishes it never existed in my lifetime; if you don't make it, they will. How's that for an AI safety policy?


> pilots assume when they fly a plane. Same goes for operating AI; the onus of not using it to kill everyone falls on everyone, not one person.

That’s why we don’t let a random Joe fly a 747, there is extensive training, licensing, etc.

Do you envision the same for operating AI? In the real world you can’t even drive a moped without a licence, registration, and insurance. Same goes for access to dangerous chemicals. If AI is dangerous, this is the logical conclusion.


I envision that the existence of Air Traffic Control won't inherently stop people from using controlled airspace for hostile purposes. We can idealize what conduct looks like but failure of protocol still happens deliberately or by mistake.

The same is going to happen with AI. There will be bad actors, and trying to stop them from using AI for whatever "hostile" purposes it might yield is going to be nigh-impossible.


But it does work, 9/11 is not a daily occurrence.


> AI research operates independently of their discourse, and gets deployed without ever consulting any of them. They are armchair experts at best, and stoop to being Twitter reactionaries when they demand respect from their core audience.

Astonishing claim, when some such researchers have (unfortunately) been responsible for many of the most impressive capabilities advancements in the last few years, and the three most cited AI researchers of all time are all doing extinction-risk-mitigation work (with Bengio and Sutskever both doing technical work; Hinton mostly seems to be focusing on outreach).


The world still funds AI that completely disregards the notion of "safety" out of the gate. You can argue that those AIs aren't dangerous to begin with, but I would counter that by arguing no AI is dangerous to begin with and this entire field is a brouhaha to wrestle legislative control from Open Source opponents.


> I would counter that by arguing no AI is dangerous to begin with and this entire field is a brouhaha to wrestle legislative control from Open Source opponents

Note that this is not a valid argument against any argument that superintelligent AI might kill everyone; it's just a character attack.


Wasn't sama a YC CEO or something?


> AGI is not around the corner, so no worries

Unfortunately, while this is very likely the case, the vast amount of money being poured into these projects will also seep into other horrifying projects like Lavender or Pattern (seriously, go look up both of these; it's actually going to fry your brain).


Brain not fried. Why do you think these projects are horrifying?


Weird thing to joke about.


There’s also the dumb option that he’s afraid of it, but thinks it’s better for him to be in control than someone else. If everyone thinks they are the lesser evil, they all do what they don't want anyone to do.


It would probably be against hacker news guidelines to call him a manipulative weasel, so I will refrain from that.


Greg has some agency in this

He also stuck by Altman in November. I'm surprised by how much credit and how much hate goes to the CEO when there are clearly other people making their own decisions, too.

Say whatever you want, they've done something impressive. That credit goes to more than just Sam. Ilya & Jan aren't the only ones making decisions.

Also, if you've ever been at a startup growing at the speed OpenAI is growing (think revenue, fundraising, headcount), it's not that unusual to see cofounders leave or to have other leadership changes. The reason why everybody thinks OpenAI looks particularly bad is because everybody is watching them (and public sentiment towards ai has shifted)


> The reason why everybody thinks OpenAI looks particularly bad is because everybody is watching them

A reason, sure. The reason? Come on. There's a clue in their name about what part of their history is particularly controversial.


Sorry, in the context of this tweet and my message, I was referring to "senior leadership leaving"

This sort of high level turnover isn't that uncommon. OpenAI is growing quickly and going from initial concept to hundreds of millions in revenue - this is a time of transition for a lot of startups.

You can absolutely criticize them for hypocrisy or whatever other issue, but I don't think most people should read too much into Ilya/Jan leaving.


If I remember correctly those "adults" offered Altman the opportunity to lead the AI efforts at Microsoft after he got fired.

Make of that what you want.


It is not like Microsoft’s CEO was behaving very admirably or free of ego.


For some reason people give Satya a pass. I guess because he comes off as a nice guy. Somehow no one associates all the anti competitive stuff MS does with him - like bundling Teams into Office, or forcing a Copilot button onto OEMs, or dark pattern nagging about default apps, or inserting ads into the start menu, or whatever. But that mode of operation is classic MS, and it makes sense they are still this way, since Satya was a long time MS veteran before he was CEO. All that said, he is probably actually nicer than previous MS executives. He has had a lot of difficulty in his personal life around his son, and that probably changed him for the better.


Satya is a canny player, he knows how to give with one hand and take away with the other like a magician so you don't see the trick. I wish he was pushing Microsoft to be a bit more ethical, but he has done good stuff and he's a good model CEO, since he hasn't done any one genius thing, but he's consistently positioned and executed well which is something the "average" leader can model.


"We are below them, above them, around them." - doesn’t sound particularly ego free or nice.

And it seems that, given the choice between OpenAI reassembling at Microsoft and OpenAI continuing under Sam Altman, the nonprofit board decided to go ahead with Sam Altman.


Bear in mind that an average person who doesn’t frequent sites like HN might see this differently.

Altman seems to be very aware of that, dropping short sound bites/bits like implying that he has no financial interest in OpenAI (Congress hearings IIRC) or comparing GPT-4o to Her.

It sounds like complete nonsense (if you know anything about tech) or cringe at best (Her), but what matters is quotability/catchiness, because that's what gets pumped by the media.


> he has no financial interest in OpenAI

I think that's true though? He has an interest in HN, which has an interest in OAI.


HN? Don’t you mean YC?


Yeah yeah of course, the two blend into one in my mind. Apologies.


> Interesting that OpenAI has completely destroyed their reputation over the last year.

No, they're still the "name brand" option for commercial LLM stuff.


> cult-like vibes.

What cult-like vibes? The OpenAI employees wanted Sam back because of financial reasons and peer pressure. Sam has learnt how to keep employees on a leash via money.


Having senior leadership and rank and file publicly proclaim their love for each other in identical ways, as happened during the leadership debacle, read cult-like to me.


What you describe is precisely cult-like and so it gives off cult-like vibes.


I’m not getting how OpenAI’s reputation is being tarnished by what basically amounts to nothing but theatrical stuff: people saying Sam is out to destroy the world, none of which is even close to happening, while every entity on earth is developing the same tech.


I remember early in the days of COVID when the authorities reassured us by telling us there was "no evidence of community spread in the United States".

Sometimes it pays to look at the fundamentals. https://www.lesswrong.com/tag/ai-alignment-intro-materials


Don't hate the player, etc.


I think the ego and cult-like atmosphere will lead to more growth (maybe not in the "right" direction but in the technical prowess direction) which offsets all criticism.

And then the lines will cross over the break-even point and OpenAI will be absorbed into Microsoft, which proceeds to enjoy the lunch for the foreseeable future.


> (maybe not in the "right" direction but in the technical prowess direction) which offsets all criticism.

Are you sure about that? Maybe you would like to think a bit harder about what exactly offsets "all criticism"? Being very good at one thing doesn't absolve you from consequences of your other shitty actions.


I think this perspective is probably only true for 0.001% of people that actually follow Sam closely and are not optimistic about AGI and like to throw their opinions around. The superficial stuff. The rest don’t care to even know who Sam is and don’t care to assume motive.

It’s very likely they’ll bounce back. I’d rather OpenAI continue to innovate and push the industry forward as they have been. Haven’t seen much of that from Microsoft, so heavily disagree with you there. Prefer to focus on the actual product of the company not the personalities of the people there or armchair assumptions on the vibes of the culture.


> this perspective is probably only true for 0.001% of people that actually follow Sam closely

It’s corroded his credibility in D.C. and Brussels for a generation. He raised his profile tremendously right before people credibly called him a liar. It’s like he lofted an adversary’s payload into orbit. He will still get an audience with anyone, as he deserves. But people fact check him in a way they didn’t before and don’t with others. Even those who support his policy priorities, and with whom he and his team talk frequently. (OpenAI’s GR is between incompetent and non-existent.)


> Haven’t seen much of that from Microsoft

Microsoft is the de facto controlling shareholder in OpenAI. They provide all the money, compute, and backing, and have full access to the models. If OpenAI collapsed tomorrow, Microsoft would absorb its key employees (as they almost did during the board debacle) and everything would continue under the Microsoft umbrella. “OpenAI” is just a shinier name for work that is being done under the near-total control of Microsoft.


The money and compute is not the innovation. The LLM models and associated tools are, which is work by OpenAI employees and teams, not Microsoft employees/teams.


In that vein, I'd say that most of the LLM research was done at Google. OpenAI productized faster.


Confused here, is your argument here that OpenAI is not responsible for any innovation when it comes to LLM tech today? I’m curious about why you so strongly want to believe that?

Nobody knew that scaling the transformer architecture would lead to the emergent intelligence we see today. Among other things, OpenAI did R&D for years on that. Also, the only situation where this could be true is if Google knew that LLMs could lead to this intelligence and decided to not make it happen (along with every other tech company now that is furiously trying to catch up to OpenAI), which is absurd.


You seem to be inflating the emotional importance of my comment. Google did an enormous amount of the research prior to scaling. I was merely pointing out that if there's credit to be given out, a bunch of it goes to Google.


Agreed.


Actually Google were building and scaling transformers at the same time as OpenAI - BERT (following Allen AI's ELMo), T5, Meena, LaMDA (chatbot - preceding ChatGPT by a year or two), PaLM ...

It seems that Google really didn't know what to do with the tech, and hadn't figured out a way to control it (OpenAI's RLHF - critical for ChatGPT's success). It's a bit ironic that DeepMind were doing so much with RL, but Google Brain being separate at the time apparently were not consulting with them or tapping into their expertise.


> not responsible for any

That's a very strange way to read (the inverse of) the word "most".


It might’ve been easier to hire that talent under the shiny OpenAI umbrella, but as I said, Microsoft could absorb the entire thing overnight if it wanted to. And pay them enough to make them stay.


> Microsoft is the de facto controlling shareholder in OpenAI

No, they are not even a shareholder.

> Microsoft is entitled to up to 49 percent of the for-profit arm of OpenAI's profits, according to reports. But that's not the same as 49% ownership. That investment does not result in Microsoft owning part of OpenAI


https://www.bloomberg.com/opinion/articles/2023-11-20/who-co...

Note that that diagram is now a little out of date - Microsoft now has an actual (albeit non-voting) board seat, not just effective control.


You're right, this isn't a view that is likely to be shared by the general public.

However, I don't think the general public's view of OpenAI is much better at this point, given that their exposure is Hinton on 60 Minutes claiming that AI is going to imminently end civilization, creatives arguing that OpenAI has stolen their work, and students using their products to cheat.

The only people that I do know who have historically had a positive view of OpenAI has been people working in tech. And Sam seems to be doing everything he can to destroy that goodwill.


Among the people I've discussed recent AI with that aren't in tech, almost everyone is very uneasy about it. Some of them use it, and all of them recognize it as potentially useful, but almost everyone is more concerned than excited. Seems like surveys back my personal experience:

https://www.pewresearch.org/short-reads/2023/08/28/growing-p...

"More concerned than excited" went from 37% in 2021 to 52% in 2023, "more excited than concerned went from 18% to 10%.


All I’d like to see from AI safety folks is an empirical argument demonstrating that we’re remotely close to AGI, and that AGI is dangerous.

Sorry, but sci-fi novels are not going to cut it here. If anything, the last year and a half have just supported the notion that we’re not close to AGI.


The flipside: it's equally hard for people who assume AI is safe to establish empirical criteria for safety and behavior. Neither side of the argument has a strong empirical basis, because we know of no precedent for an event like the rise of non-biological intelligence.

If AGI happens, even in retrospect, there may not be a clear line between "here is non-AGI" and "here is AGI". As far as we know, there wasn't a dividing line like this during the evolution of human intelligence.


I find it delightfully ironic that humans are so bad at the things we criticise AI for not being able to do, such as extrapolating to outside our experience.

As a society, we don't even agree on the meanings of each of the initials of "AGI", and many of us use the triplet to mean something (super-intelligence) that isn't even one of those initials; for your claim to be true, AGI has to be a higher standard than "intern of all trades, senior of none" because that's what the LLMs do.

Expert-at-everything-level AGI is dangerous because the definition of the term is that it can necessarily do anything that a human can do[0], and that includes triggering a world war by assassinating an archduke, inventing the atom bomb, and at least four examples (Ireland, India, USSR, Cambodia) of killing several million people by mis-managing a country that they came to rule by political machinations that are just another skill.

When it comes to AI alignment, last I checked we don't know what we even mean by the concept: if you have two AI, there isn't even a metric you can use to say if one is more aligned than the other.

If I gave a medieval monk two lumps of U-238 and two more of U-235, they would not have the means to determine which pair was safe to bash together and which would kill them in a blue flash. That's where we're at with AI right now. And like the monks in this metaphor, we also don't have the faintest idea if the "rocks" we're "bashing together" are "uranium", nor what a "critical mass" is.

Sadly this ignorance isn't a shield, as evolution made us without any intentionality behind it. So we don't know how to recognise "unsafe" when we do it, we don't know if we might do it by accident, we don't know how to do it on purpose in order to say "don't do that", and because of this we may be doing cargo-cult "intelligence" and/or "safety" at any given moment and at any given scale, making us fractally-wrong[1] about basically every aspect including which ones we should even care about.

[0] If you think it needs a body, I'd point out we've already got plenty of robot bodies for it to control, the software for these is the hard bit

[1] https://blog.codinghorror.com/the-php-singularity/


