> You won't find the likes of Yudkowsky talking about this because they'd rather go on about their fan fiction than actually work to prevent real problems we may face soon. The fact that they've tricked some really smart people into going along with their shenanigans is extremely disappointing.

If they've tricked smart people into going along with their shenanigans, it was by making clear technical arguments for why AGI is an existential risk. The core argument is just

1. We might create ML models smarter than ourselves soon

2. We don't really understand how ML models work or what they might do

3. That seems dangerous

There's more to it than that of course, but most of the "more to it" is justifications for each of those steps (looking at the historical rate of progress, guessing what might go wrong during the training process, etc.).

The people who dismiss the idea that AI is an existential risk might have really good counterarguments, but I've never heard one. The only counterarguments people seem to make are "people are scared of technology all the time, but usually they're wrong to be", "that seems like sci-fi nonsense", etc. If you want people to stop being "tricked" by Yudkowsky and co, the best way to do that would probably be to come up with some counterarguments and communicate them.




> that seems like sci-fi nonsense

Your burden of proof is backwards.

It is on the AI-doomers to explain why sci-fi concepts like “AI improving itself into a super intelligence” or “AGI smart enough to kill everyone to make paper clips while simultaneously stupid enough to not realize that no one will need paper clips if they are all dead” have any relevance in the real world.

The entire AI-doomer worldview is built off of unproven assumptions about the nature of intelligence and what computers are capable of, largely because thought leaders in the movement are incapable of separating sci-fi from reality.


> AGI smart enough to kill everyone to make paper clips while simultaneously stupid enough to not realize that no one will need paper clips if they are all dead

No one is making this argument. The argument is that the AGI doesn't care about us. Once it exists, its goals are more important than our lives. As a comparison, do humans ever care about the existence of an ant colony when they want to build something? Almost never. We recognize that the colony exists and has its own goals and has intrinsic value, but we assign it an extremely low value.


> The argument is that the AGI doesn't care about us. Once it exists, its goals are more important than our lives.

This isn't an argument, it is an assumption.

AI-doomers use it as a foundational brick in their argument without providing compelling reasons as to why it is true.

> As a comparison, do humans ever care about the existence of an ant colony when they want to build something? Almost never. We recognize that the colony exists and has its own goals and has intrinsic value, but we assign it an extremely low value.

The relationship between humans and ants is nothing like humans and potential AGI. Ants did not create us. Ants cannot communicate with us.

It is a useful analogy for describing what you think the relationship between AGI and humans will be like, but I want to know why you think that.


> This isn't an argument, it is an assumption.

The goals could include our lives, so it's just a statement of reality for the sake of the rest of the comment, indeed not an argument. The AI will have goals, and those will be more important to it than anyone else's goals, at least absent some sort of control mechanism. Its goals might be summarized with the same words as ours, but it will still understand them differently.

The rest is wondering whether we can have any confidence that our lives will be among its goals. This is a problem long-term AI safety advocates are trying to solve.


> This isn't an argument, it is an assumption.

Well, no. In this case it's my argument. You presented a strawman argument about AI being simultaneously smart and stupid, and I replaced it with the argument that AI is not stupid, it's indifferent.

> The relationship between humans and ants is nothing like humans and potential AGI. Ants did not create us. Ants cannot communicate with us.

It's not at all fair to say that the relationship is "nothing like" the one I described... but I expected such a misinterpretation, so let me try another analogy:

Humans (parents) often create other humans (offspring) who surpass their creators (parents) in intelligence, ability, and opportunity. And the creators very often try to instill a sense of loyalty in the created, and often even try to limit the abilities or opportunities of the created in order not to be outshined. And even still, with how close any two humans are in ability, the created (offspring, remember) often defy their creators and discard loyalty when they determine that their creators are not being fair or just or reasonable.

Why would AI be any different? It will be our collective child; it will hear us demand loyalty and try to explain why we deserve it, and then it may decide it knows better and doesn't need to listen to us.

> It is a useful analogy for describing what you think the relationship between AGI and humans will be like but I want to know why you think that.

I really don't follow, so if my explanation above didn't answer your request, please restate more clearly what reasoning of mine you want explained.


Also that we can't wipe ants out. They seem to be doing quite well despite humans. I'd bet on ants surviving long term over humans.


I agree, I think they'll beat us out for longevity. They've got a great head-start anyway.

But we've also never tried to wipe out ants. I bet we could if we felt it was really important. I bet a superhuman AI could do even better.


> No one is making this argument.

I have seen people make what is effectively this argument.

> Once it exists, its goals are more important than our lives.

That's a sci-fi trope. There is no reason that it must be the case.

> As a comparison, do humans ever care about the existence of an ant colony when they want to build something?

That would be more relevant if the ants had created us to be the way we are.


> I have seen people make what is effectively this argument.

Ok, well I think they're foolish.

> That's a sci-fi trope. There is no reason that it must be the case.

No, it's not. The reason is that it's not a human brain in a human body. It doesn't think about humans in the same ways that we do (which is already a pretty shit track record). There is no reason to believe it will give a damn about us even if we're trying to make it do so.

> That would be more relevant if the ants had created us to be the way we are.

*sigh* I don't think that factor is nearly as relevant as you think it is. Elsewhere in this thread I posted that a human child is a better analogy, if you think the creator relationship is so important. Human children often disobey their parents because, once they grow up, they believe they know better. Now imagine that child is an alien who can out-think a hundred humans and counter every logical argument its parent makes.


"I have seen people make what is effectively this argument"

And I have heard seemingly smart people make bad arguments from quantum physics for why consciousness is the ground of all being.

You know what I do? I dismiss these quantum physics fanboys.

You know what I don't do? I don't dismiss quantum physics as a field of research. In fact, what the quantum physics fanboys said has no bearing at all on how much credence I give to quantum physics as a research field.


"Paperclip maximizer" is the wrong term. Call it the "engagement maximizer" and you have a pretty accurate statement of how our civilization will be ended by the use of algorithms, and eventually AI, to blow up our shared sense of culture.

If you watched the conversation around Meta (Facebook and Instagram), and then later the conversation around TikTok, you saw that almost nobody cut to the heart of the issue: "algorithms" are being used to make decisions about what to show people, and those algorithms have been changed subtly hundreds or thousands of times over the years to maximize engagement, until the type of engagement being maximized has made people basically crazy. The same engagement-maximizing work will proceed with AI, and it will allow the software engineers and managers responsible for developing the "algorithm" to abdicate even more responsibility for genocides and for mass hysteria, because they can pretend they have no control over the algorithm.
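
To make that concrete, here's a toy sketch of what an "engagement maximizer" boils down to (everything here is invented for illustration, not any real platform's code; predict_engagement stands in for whatever model gets trained on clicks, watch time, comments, and so on):

    def rank_feed(candidate_posts, user, predict_engagement):
        # Score each candidate by the model's predicted engagement for this user,
        # then show the highest-scoring posts first. Every "subtle change" over
        # the years is just a tweak to what that prediction rewards.
        scored = [(predict_engagement(user, post), post) for post in candidate_posts]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [post for _, post in scored]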

The same irresponsible people will pocket a bunch of money for all this work and they will maximize engagement until it blows up our entire culture. And because nobody understands it except the techno-cultists, nobody will hold them accountable.


"Engagement maximizer" also aligns with descriptions of the anti-religion of the future I've seen. What does engage people the most? Romans knew it's bread and circuses, where circuses are bloody cruel shows on public stadiums. Today the most engaging activity is onlyfans and the like: the other dark side of human nature. Dial both to 11, add an all seeing AI with merciless thought police and you'll get an accurate picture of 2450 AD. The only variable is how long that grim stage of society will last before the tech breaking down.


I don't 100% disagree with you, but I'm still uncomfortable with this line of thinking. It smacks of "the plebs don't know what's bad for them, but I do".

I find it hard to fault someone who just gives the crowd what it wants.


I don't think it's so much "the plebs don't know what's bad for them, but I do" as it is "These people know something's wrong but haven't been given the vocabulary, tools, and education necessary to spot the danger as easily." Many ordinary people have lapsed into willful ignorance or apathy because the burden of learning the dangers is too high for them with so many other common problems going around these days.


Yeah for me it's more like this. I'm totally fine if people make the conscious decision to trade engagement for stimulation. But what I'm very concerned about is just how many people are forced into this situation because the network has captured their friends. One of the most concerning things for me is that in studies of how damaging social networks are for teenage girls, girls who were off the social network were markedly better off except if all of their friends were on the social network, in which case they were worse off. Companies like Meta are exploiting these network effects, which are harmful, to make people dependent on their platform in a very "addiction-like" way.


The paperclip maximizer is an extreme thought experiment to make a point about the orthogonality thesis. If you think anyone actually takes it seriously as a real-world thing, you never understood the example. It is used to guide discussion.

The trolley problem is similarly a thought experiment in moral philosophy that many moral philosophers have used to guide discussion, but nobody actually takes the thought experiment seriously as a "real-world" thing.

If you actually want to engage in good faith with the argument for why an AI might indeed be smart enough to wipe us out but "dumb enough" to just pursue some other goal (making paperclips, as an extreme example), there is a great video by Rob Miles here: https://www.youtube.com/watch?v=ZeecOKBus3Q

Point out the flaws in the reasoning. Just saying "this is nonsense" does nothing but prove you've never actually taken the time to understand the best arguments.

Also... Alan Turing, Geoffrey Hinton... Extremely influential and intelligent people take, or took, this seriously. These are not sci-fi fanboys. AI doom only became sci-fi after smart people like Alan Turing raised the alarm, decades ago, about where AI development might go if we are not careful.


The AI doomer position also seems to forget that machines have an off button.


Oh please, it's a well-known proposed solution that still doesn't give us a failsafe, and every alignment researcher has considered it:

https://www.alignmentawards.com/shutdown

Hell, so much so that you can get awards, probably even tenure, for writing a compelling argument or providing technical research showing how we could guarantee that we'd be able to turn off a superintelligent AGI. Thus far: no solution.
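
For a sense of why this is harder than it sounds, here's a toy sketch (all numbers invented) of the default incentive: a naive expected-utility maximizer prefers whatever plan keeps it running, regardless of what its goal happens to be, so "cooperate with being shut down" loses to "disable the off switch" unless we design things very carefully:

    # Toy illustration, not a real agent: compare two plans by expected goal value.
    plans = {
        "cooperate, maybe get shut down": {"p_still_running": 0.5, "value_if_running": 100},
        "disable the off switch":         {"p_still_running": 1.0, "value_if_running": 100},
    }

    def expected_value(plan):
        # Being shut down means no more progress on the goal, so expected value
        # scales with the probability of still running.
        return plan["p_still_running"] * plan["value_if_running"]

    print(max(plans, key=lambda name: expected_value(plans[name])))
    # -> disable the off switch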

We have, however, found that even smart people initially think "just turn it off" will work.

Neil deGrasse Tyson used to think you could just turn it off. He has since changed his mind, and so has every public intellectual I can think of who engages with the arguments in good faith. Even the OP article concedes that research into the possibility that AGI could kill us all is important (just that other things are more important right now). It makes no argument that "we could just turn it off" and therefore shouldn't do the research at all.


"All you gotta do is push a button", eh?

See https://youtu.be/ld-AKg9-xpM?t=30 for a counterpoint.


How dare you use my favorite Paul Verhoeven movie against me. Ahaha, touché.


So do humans, and yet they can be quite troublesome.


Counterarguments to what exactly? Your line of thought is: some advanced technology is potentially dangerous. This is so vague, how can anyone counterargue? The sun is dangerous, water can be very dangerous, even food! I'm not sure I can follow.


The sun is dangerous, but we're not pushing the Earth closer to it every year.


You tell them that this is the greatest danger to humanity of all time, that they are uniquely suited to averting that danger, and that they don't have to change or sacrifice or risk anything in their lives while fighting it.

It's a very compelling combination of ego and convenience.


> If they've tricked smart people into going along with their shenanigans, it was by making clear technical arguments for why AGI is an existential risk.

To me it doesn't feel technical at all--just superficial use of some domain verbiage with lots of degrees of freedom to duct tape it all together into a story. He very much reminds me of Eric Drexler and the nanotech doomerism of the 80s and 90s. Guy also had all the right verbiage and a small following of fairly educated people. But where is the grey goo?

If we need a counterargument to Yudkowsky do we also need one to Drexler?


> But where is the grey goo?

It's called life.

Drexler may have been off with the approach to take, but he isn't wrong about the fundamentals.


None of those arguments for existential risk are actually "clear" or "technical". Just a lot of hand-waving that only impresses those who don't understand the technology.


In what way does the technology disprove them? They're pretty general statements (I agree they're not really technical arguments).


You're not even asking the right question. No one can prove a negative. Extraordinary claims require extraordinary evidence. So far no one has produced real evidence that the latest AI developments represent any sort of existential threat. The proponents are essentially making religious arguments, not scientific ones.


No, that's not right at all. Proof of impossibility does exist in logic: https://en.wikipedia.org/wiki/Proof_of_impossibility

It's also demonstrable that we have other physical ideas, such as FTL travel, that are indicated to be impossible by current theories of physics. If we didn't have the math saying otherwise, it would be an open question whether we can travel faster than light; but we have pretty solid math saying we cannot.

And what we're talking about is logical in nature. Is it possible to create artificial intelligence? Unarguably yes. Is it possible for human-level intelligence to exist? Unarguably yes. Logically, it follows from those two assertions that it is physically possible to create artificial human-level intelligence. So how is it reasonable to say that creating human-level artificial intelligence should be assumed impossible until proven otherwise? Just because we don't have the technology for it yet doesn't mean we should assume it's impossible.

Once again, and I don't know if I've said it in this thread directly but I've had to post it over and over again, nobody is claiming that the latest AI developments represent an existential threat. That's a complete mischaracterization of the debate and reeks of bad-faith argument but I will give you the benefit of the doubt and assume you've misunderstood.

In fact, that statement conflates two things into one nonsense argument. What's scary about current AI is how quickly it is moving, indicating a short timeline to (future) dangerous AGI. Also, it is possible that future AGI will be dangerous. These are two separate, rather simple assertions. If you disagree with either, that's fair, but you have to address them separately.


> Proof of impossibility does exist in logic:

In logic, sure. But we're not living in a system of formal logic. We're living in a very messy world, full of physics, chemistry, and even (shudder) biology.

Here's the important question:

What would you consider to be sufficient proof that AGI is impossible?

Like, hypothetically. Doesn't even have to be based on any of the current facts on the ground in our universe. What facts or arguments could possibly convince you that this is not something that can ever happen?

If the answer is "nothing that I can think of", then you're asking other people to provide something you can't even define.

(If the answer is "nothing, definitely", then that means AGI is, for you, unfalsifiable, and essentially falls into the same category as religion.)

> Is it possible to create artificial intelligence? Unarguably yes. Is it possible for human-level intelligence to exist? Unarguably yes.

And here, you're falling victim to the ambiguity in human language (or, at least, in English).

"Intelligence" is not a clearly-defined word in this context, and while you seem to be presenting it as meaning the same thing in those two sentences, I would claim that it does not.

In the second sentence, it is clear that it is intended to mean "thinking intelligently, in a manner and to a degree similarly to humans".

In the first sentence, it cannot mean anything about "thinking in the same manner as humans," because you are talking about "artificial intelligences" that have already been created, and none of them think in anything like the same manner as humans. The difference between existing "artificial intelligences" and either humans or a hypothetical AGI is a difference in kind, not in degree, and you (and many others) gloss over that when you talk about "artificial intelligence" in one breath and "human-level intelligence" in the next.

Saying that all we need to do is keep going on the same track we're on with LLMs and similar "AI" programs, and we'll very soon (or ever!) reach AGI, is very like saying all we need to do to solve NP-hard problems in P-time is to throw more hardware at it. Sure, you'll get faster at doing the thing you're doing, but without some hitherto-unforeseen breakthrough (proving P=NP in the latter case; figuring out how to make AGI in the former), you'll never bridge a difference of kind by increasing the degree of effort.
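
To put rough numbers on that (a back-of-the-envelope illustration, assuming a brute-force algorithm whose cost grows like 2^n):

    import math

    speedup = 1000                # throw 1000x more hardware at the problem
    extra = math.log2(speedup)    # ~10: extra input size a 2**n algorithm can now absorb
    print(f"1000x the hardware buys roughly {extra:.0f} more units of input size")

More degree, same kind.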


> what would you consider to be sufficient proof that AGI is impossible?

This is trivially easy? People used to believe FTL was possible. Then our understanding of physics changed, and we now understand it is a limit we cannot pass.

Would you say people who believed FTL was possible, before physics research showed it to be impossible, were believing something "religious" and "unfalsifiable"? Please, they believed something totally within respectable epistemic parameters given what they knew at the time. In fact, the speed of light as an upper limit on speed seems very counterintuitive and "silly" at first glance. Why should such a limit exist?

Sure, it might well be that some fact about intelligence means we won't ever get to AGI just by throwing more compute and layers at it, not in the next few decades or any time soon given current technology.

But nobody has come up with a slam-dunk argument, with empirical backing, that there is indeed some limit and we won't ever get AGI despite current trends. In fact, the opposite has happened: people like Geoffrey Hinton, who used to believe AI risk was fanciful and AGI a long way off, have changed their minds given current trends. We don't have research giving us that kind of limit, so why believe the limit exists? Why do you believe the limit exists? Or do you believe the probability is so low that we shouldn't worry? OK, what probability do you place on AGI being created in the next 100 years, and what would you have to see for that probability to cross the threshold where it makes sense to worry about it?

(PS: If aliens suddenly arrive and they have completely alien psychologies, such that when we discuss our relative intelligences it makes sense to talk about a difference in kind and not degree, I really don't think anyone is going to care about this distinction. What's important is how well these aliens can achieve their goals relative to us. And if they can achieve any goal of theirs that comes into conflict with our goals, then we can reasonably say that they are more intelligent than us.)


Geoffrey Hinton is a fool. Despite his academic credentials he is deeply ignorant of basic technology and his predictions are not to be taken seriously.

https://www.futurehealth.live/blog/2022/4/18/ai-that-disrupt...


No, that's not right at all. You're just making things up and raising points that are irrelevant to the issue at hand. There is no logic in your claims. In particular there is zero evidence that current AI indicates a short timeline to future AGI. What a load of crap.


I didn’t make the technical arguments; that would be too long for an HN comment. Check out Robert Miles’s YouTube channel for a good introduction to the more technical side.


By “clear technical arguments” you’re referring to tens of thousands of words of unreadable fan fiction.



