
There seem to be only three options here:

1) AGI is not around the corner, so no worries

2) He no longer cares about the possible negative effects, a departure from his past statements

3) I missed something, please help me learn




> 1) AGI is not around the corner, so no worries

I agree with this.

I no longer agree that the "safety" people want the same things we do. After the Bard debacle a while back, I suspect that "safety" looks a lot more like 1984.

> He no longer cares ... I missed something

Sam was gone, none of these people spoke up, and now they are leaving. I think the story is plain as day but we're pretending it isn't.

Sam isn't a leader; he never has been. This is not what leadership looks like, it's what a megalomaniac trying to keep control looks like.


> I no longer agree that the "safety" people want the same things we do. After the Bard debacle a while back, I suspect that "safety" looks a lot more like 1984.

AI Safety will inevitably look a lot like DEI programs. They're trying to thread a needle between two fundamentally opposed ideas, and with a bit of time it turns into little more than a set of Cover Your Ass policies.

The ridiculous thing is that we actually need initiatives that follow what both AI safety and DEI set out to be. Somehow along the way they get ruined though, leaving us in an even worse spot because for a while we have an excuse to think there are adults in the room actually making sure things are moving in the right direction.


I didn't say DEI for a reason; here is the other side talking rather overtly: https://www.youtube.com/watch?v=xnhJWusyj4I ... Pick a topic and someone is gonna have a really bent take on how things should be... and be keen on jamming that take down someone else's throat so they can feel better. Everyone has shitty takes that they want others to buy into... everyone.

Is Taiwan a country? I would say yes, and most Americans would too (I hope). Ask ChatGPT...

4chan is still a thing, the library is still a thing. The nuclear boy scout made a giant mess before there was an internet.

As for safety, well, it's gonna be a long time before shutting down the world isn't a weekend project for a handful of people if they're allowed.


I raised DEI as an example of a field started with good intentions to solve very real and important problems that, in my experience, completely lost its way. That's not to say there isn't good work being done or good people doing it, but from what I've seen DEI has in many ways been whittled down to a combination of public relations and CYA policies.

DEI has definitely become a political trope, but regardless of what Newt Gingrich or any other talking head might say, there are plenty of valid critiques that can and should be made of the field if it's going to get back on track.


Have you actually talked to any AI safety people? They despise the DEI-flavored AI ethics "make sure it generates enough black people" stuff just as much as you do, and I think you're lumping the two of them together just based on your own dislike, without checking whether that's actually reflective of the reality on the ground.


I didn't actually mean to refer to the DEI-flavored AI ethics. That was totally unintentional on my part, I should have caught that.

I raised both because I see them going down very similar paths. DEI conceptually is a really important idea and could lead to really important change or course correction. As implemented today though, it most often ends up as a much more surface-level program that sure feels more like PR or legal protection than anything else.

With AI safety, based on those in the industry I have spoken with, there's a very real risk the same happens. AI safety, again as I've seen it implemented and from what I've heard from people working in the field, is much more concerned with minor risks and has given up on concerns like the alignment problem. AI safety isn't concerned with moral or ethical questions of what happens as the technology progresses.

I'm not just talking about job loss concerns. Thinking bigger for a minute, what rights would a real AI have? Can it be turned off? Can it commit crimes or be punished? Does it have rights? No one in AI safety is realistically considering whether these systems should be on the public internet at all, unless there are drastically more powerful systems kept under wraps due to these risks. No one is seriously asking about risks to privacy; I'm sure some in the field share those worries, but they are outliers and don't seem to be given the ability to meaningfully move the industry.


> DEI conceptually is a really important idea and could lead to really important change or course correction. As implemented today though, it seems to most often be implemented as a much more surface level program that sure feels more like PR or legal protection than anything else.

I suspect that some DEI efforts are helpful and effective, and some DEI efforts are hollow or foolhardy. We probably can’t speak of “DEI today” as a monolith. Also we may be biased to hear about instances of it being stupid and ineffective because that can be a useful talking point to some. Instances where it works well and gets more people hired and engaged are less interesting to a predominantly white society, so maybe aren’t discussed as much outside of non-white communities.

Idk that’s all a load of speculation but I wanted to share these thoughts/observations about your argument.


I do agree that my referring to DEI today may be too broad, that's a great point.

> Instances where it works well and gets more people hired and engaged are less interesting to a predominantly white society, so maybe aren’t discussed as much outside of non-white communities.

This got me curious, have you seen any examples of DEI programs helping to get more people hired rather than different people hired? Either can be useful, but that distinction would be a big one, as the former means DEI is somehow growing the job market rather than refocusing hiring practices.

Nothing wrong with speculation as far as I'm concerned! Reliable and accurate data is hard to come by; I'd argue that most of what is presented as fact is little more than speculation backed by fuzzy data full of assumptions.


Our DEI program was great. It helped us scale from 100 people to 1000 people by scouring HBCUs across the US for talent. Had we hired only from Silicon Valley and Stanford, where we were located, things would have sucked during that growth, and our previously global hodgepodge of a team that built products bringing in hundreds of millions of dollars would have been whitewashed by rich kids, atrophied, and died. Instead, we went on to billions of dollars because we had a group that weren't all fucking Stanford grads with a couple years of Google or Facebook under their belts and mommy and daddy to fall back on.


> Thinking bigger for a minute, what rights would a real AI have?

None, and the court has already spoken on this. It's a pretty dead issue.

> Can it be turned off?

Yes, it's a machine. Save state, power down.

> Can it commit crimes or be punished?

See PG&E and its own death toll.

> Does it have rights?

No, and we aren't even on a path where these questions are relevant. It's all an exercise in mental masturbation. If you think that we're going to accidentally stumble into sentience, never mind sapience, I have a bridge I would like to sell you.


We can't formally stumble into sentience because we don't have a definition of it, let alone a test which can confirm or refute its presence.

People will think they "know it when they see it". The arguing will be fun.


> None, and the court has already spoken on this. It's a pretty dead issue.

The question of rights would be legislative rather than judicial. More importantly, the discussion should actually happen among the people first in anything resembling a democracy; the few in charge are meant to represent us, not rule us.

> See PGE and its own death toll.

Unless I'm mistaken, PG&E isn't an artificial intelligence or sentient.

> No, and we aren't even on a path where these questions are relevant. It's all an exercise in mental masturbation. If you think that we're going to accidentally stumble into sentience, never mind sapience, I have a bridge I would like to sell you.

OpenAI's explicit goal is to create and release an AGI. A majority of experts in the industry I have either talked with personally or heard in long-form interviews expect that we could be very close to AGI, on the order of a couple years up to maybe 15 or 20 years on the high end. Given how slowly any societal discussion related to the rights of a population moves, do you think we'll have plenty of time to decide this after an AGI is released in some spectacularly Silicon Valley product release party?


>> on the order of a couple years up to maybe 15 or 20 years

Fusion, flying cars, AI in the 50s and in the 70s... Hell, HAL was born in '97 and the film came out in '68.

For as impressive as LLMs are, once you dig into them they aren't magical at all. A sophisticated model of language that predicts the next word is about as likely to become an AGI as weather predictions are to control the weather. Ask any expert in AGI what the "next token" is and they are going to fucking disagree. This isn't us building the bomb, where we have a pretty good idea of how to do it and just need to put all the parts together. This is a bunch of people stabbing in the dark and getting lucky here and there.

We're not going to bumble-fuck our way forward on this, and the path we're going down has potential to be a game changer, but it's not going to give us a superintelligence...


We don't have a clear, agreed upon definition of AGI, a way to test for it, or a way to predict whether a new model will meet those criteria.

The definition OpenAI uses for AGI is being more economically valuable than most humans at most things. That definition is entirely backwards-looking. They'll keep releasing new models and occasionally check in to see how each model compares to humans economically. If that isn't bumble-fucking our way into it, I don't know what is.

It's interesting that you raise the difference between knowing the physics behind a bomb before building it and AI research stabbing in the dark, and treat the latter as acceptable. It's precisely because they don't know exactly what they're doing that there is so much risk. They're building potentially very powerful and dangerous things, connecting them to the public internet, and selling them to whoever wants to use them. That doesn't seem at all risky or dangerous to you?


> The definition OpenAI uses for AGI is being more economically valuable than most humans at most things.

Then we're already there. On the web, Amazon is already more valuable than Sears and Roebuck was as a paper catalog. All those people who took calls or opened envelopes, all the manual tracking of inventory and orders... we replaced all that years ago.


I don't think anyone would consider a paper catalog as a candidate for any form of intelligence.

It is a good example of how poor the OpenAI definition is, but confusing any human invention with intelligence is a bit of a stretch.


If most people think the lobotomized and sanitized commercial model behavior is the product of "safety" and "alignment" and that isn't really the case, then the "AI safety people" have done an extremely poor job of communicating exactly what their actual goals are.


I’m confused. How are AI safety and DEI opposed?


They aren't opposed, I may not have been clear enough there.

I raised DEI as an example of a system that, in my opinion, is further down the path that I see AI safety going down. They both started with strong intentions to help guide or redirect an industry but, again in my opinion, both fields have ended up playing a role of PR and "cover your ass" more than anything else.


>I no longer agree that the "safety" people want the same things we do. After the Bard debacle a while back, I suspect that "safety" looks a lot more like 1984.

It's interesting that that's how it looks to you from the outside.

If you look a little closer at the "AI Safety" community, it consists of two competing factions: the faction responsible for the Bard debacle, and the faction that's focused on trying to prevent human extinction -- "AI notkilleveryoneism", as in https://x.com/aisafetymemes

They both insist that the other faction is "distracting everyone from the real issues".


Way back in the day, almost 4 years ago, I recall a third front on AI safety. It was called UBI and it was really hot. What happened to that?


Universal basic income?

The idea behind that was that, given a basic income to live off, people would go on to pursue more creative or artistic jobs that don't prioritize profit.

Unfortunately it won’t work out that way: We see now that AI will mostly take over creative and artistic work and people will have no motivation to pursue art. The only jobs left for humans are hard physical labor and no one seems excited to do that… so you’ll just be giving people basic income to sit around and do nothing all day.

Having a society of mostly idle people that cost you money doesn’t really sound like a great or sustainable idea, nor does there seem to be any benefit to society.


> Having a society of mostly idle people that cost you money doesn’t really sound like a great or sustainable idea, nor does there seem to be any benefit to society.

So what are we going to do with all of the excess humans once human productivity jumps one, two, or three more orders of magnitude?


We will cull them like we did when we invented the sewing machine and the loom... them luddites were right then and they can be again

/s


I am half /s, half dead serious. The sarcastic half is me playing along with AGI singularity maximalists like Sama.

The other half is me dead scared that he is right. LLMs/GPTs != cryptocurrency. There are actual use cases other than money laundering. If you are paying attention, then the insane scaling in capabilities should have us all dumbfounded. I agree with Linus Torvalds: "just" predicting the next token is not an insult, that's mostly what we are all doing.

Human productivity/the global economy has been growing at an exponential rate[0] since the industrial revolution. In our lifetimes we have ridden the close-to-vertical part, and if GPTs continue to scale, then it will continue toward near-vertical. This will be a near-post-scarcity society. Capitalism will soon have become so successful that it will make itself obsolete. Are we ready for that, politically? Most of our ruling a-holes maintain power via scarcity; how will they react?

Culling might not be an action, just an apathy. The transition is going to suck, as the poors will now also have taken advantage of that hockey stick of asymmetric warfare, aka "productivity", via things like ML-powered kill drones. This tech is happening now, in 2024-25, in Ukraine.

We are at a crossroads, more than when horse-driven carriages turned horseless. The humans are the horses now, and they will not be happy.

We really should have worked out global civility by now, given the risks of truly global destabilization.

[0] https://ourworldindata.org/grapher/global-gdp-over-the-long-...


>> This will be near-post-scarcity society. Capitalism will soon have become so successful

Not even close.

Take something like the iPhone. Everyone will want the latest Pro Max version. Great, we can make 8 billion of them. How do you hand them out... we have to get in line, I think.

The person at the back of the line, looking at 8 billion assholes in front of them is gonna get stabby...

The chart you're pointing to is showing us that an increase in productivity and/or creativity will only serve to exacerbate disparity, not reduce it. Who goes first when there are more firsts is going to be a bigger problem.


Attention will be scarce; influencerism will replace capitalism.


Massive inflation from the COVID shutdowns and stimulus checks happened.


Do you have any data to support this statement?


I'd also like to know about anyone writing on this subject.


> They both insist that the other faction is "distracting everyone from the real issues".

Because in part, it's a false dichotomy. AI safety just is. It's like airplane safety; we can write all these cool guidelines and track people from hell and back, but there's still a degree of determinative capability that pilots assume when they fly a plane. Same goes for operating AI; the onus of not using it to kill everyone falls on everyone, not one person.

Both sides are distracting each other because neither of them has anything to support without their valueless tribal politics. AI research operates independently of their discourse, and gets deployed without ever consulting any of them. They are armchair experts at best, and stoop to being Twitter reactionaries when they demand respect from their core audience.

The only dichotomy that exists in AI is between the stuff that gets made and the stuff that doesn't. I say this as someone who despises the field by now and wishes it never existed in my lifetime; if you don't make it, they will. How's that for an AI safety policy?


> pilots assume when they fly a plane. Same goes for operating AI; the onus of not using it to kill everyone falls on everyone, not one person.

That’s why we don’t let a random Joe fly a 747; there is extensive training, licensing, etc.

Do you envision the same for operating AI? In the real world you can’t even drive a moped without a licence, registration, and insurance. Same goes for access to dangerous chemicals. If AI is dangerous, this is the logical conclusion.


I envision that the existence of Air Traffic Control won't inherently stop people from using controlled airspace for hostile purposes. We can idealize what conduct looks like but failure of protocol still happens deliberately or by mistake.

The same is going to happen with AI. There will be bad actors, and trying to stop them from using AI for whatever "hostile" purposes it might yield is going to be nigh-impossible.


But it does work; 9/11 is not a daily occurrence.


> AI research operates independently of their discourse, and gets deployed without ever consulting any of them. They are armchair experts at best, and stoop to being Twitter reactionaries when they demand respect from their core audience.

Astonishing claim, when some such researchers have (unfortunately) been responsible for many of the most impressive capabilities advancements in the last few years, and the three most cited AI researchers of all time are all doing extinction-risk-mitigation work (with Bengio and Sutskever both doing technical work; Hinton mostly seems to be focusing on outreach).


The world still funds AI that completely disregards the notion of "safety" out of the gate. You can argue that those AIs aren't dangerous to begin with, but I would counter that by arguing no AI is dangerous to begin with and this entire field is a brouhaha to wrestle legislative control from Open Source opponents.


> I would counter that by arguing no AI is dangerous to begin with and this entire field is a brouhaha to wrestle legislative control from Open Source opponents

Note that this is not a valid argument against any argument that superintelligent AI might kill everyone; it's just a character attack.


Wasn't sama a YC CEO or something?


> AGI is not around the corner, so no worries

Unfortunately, while this is very likely the case, the vast amount of money being poured into these projects will also seep into other horrifying projects like Lavender or Pattern (seriously, go look up both of these, it's actually going to fry your brain).


Brain not fried. Why do you think these projects are horrifying?


Weird thing to joke about.


There’s also the dumb option that he’s afraid of it, but thinks it’s better for him to be in control than someone else. If everyone thinks they are the lesser evil, they all do what they don't want anyone to do.




