
This is a truly awful article. I'm mildly positive towards AI, because I think people tend to react with fear and examine the risks without equivalently examining the rewards; I think that's happening here, and the rewards seem amazing. But the arguments about risk are serious and plausible, and this article is just lazily dismissive of them.

> AI is not competing for resources with human beings.

A superintelligence would absolutely be competing for resources (mainly electricity and cooling) with human beings.

> Rather, we provide AI systems with their resources, from energy and raw materials to computer chips and network infrastructure.

A superintelligence will be able to convince anyone it wants to do anything for it. Even the previous President did a pretty good job of inspiring adulation and democracy-threatening personal loyalty in people he had never actually met in person. Imagine how much more amplified and personal the effect could be.

> For now, AI depends on us, and a superintelligence would presumably recognize that fact and seek to preserve humanity since we are as fundamental to AI’s existence as oxygen-producing plants are to ours. This makes the evolution of mutualism between AI and humans a far more likely outcome than competition. Moreover, the path to a fully automated economy — if that is the goal — will be long, with each major step serving as a natural checkpoint for human intervention.

The author is literally relying on us outsmarting the superintelligence (collectively, with coordination!). It doesn't sound like they've come to terms with the concept of superintelligence at all. Or with the state of human global coordination problems, come to think of it.

> AI cannot physically hunt us.

This is likely to become literally false sometime soon if it hasn't already, but even if it doesn't, the AI doesn't have to. It just has to convince another human that the human is in love with it and it wants the human to kill a bunch of people, then scale the process.

> AI’s impact on the climate is up to us.

Our own impact on the climate is up to us. Our collective decision-making is... suboptimal.

> If we really think that superintelligent AI presents a plausible existential risk, shouldn’t we simply stop all AI research right now? Why not preemptively bomb data centers and outlaw GPUs?

You are literally linking to the person who is the most prominent voice of AI existential risk, who is seriously suggesting doing exactly that.




>This is likely to become literally false sometime soon if it hasn't already, but even if it doesn't, the AI doesn't have to. It just has to convince another human that the human is in love with it and it wants the human to kill a bunch of people, then scale the process.

I'm stunned at the number of people who try to make this argument. The operators of Radio Télévision Libre des Mille Collines never had to leave their broadcast studio. Goebbels never had to do anything but get in front of a microphone or a typewriter. Still, millions were violently killed. We bicker about LLMs' ability to write C code that will compile, but their best abilities are to gaslight, emotionally manipulate, lie and create FUD.


I thought "Ex Machina" portrayed this expertly, that a superintelligence with the motive to escape would develop psychopathic manipulation well before anything resembling empathy.


> A superintelligence will be able to convince anyone it wants to do anything for it.

This "just so" claim has always struck me as quite a reach. Humans are manipulable, yes, especially by (perceived) group pressure, but there are still limits. Especially if the human can easily just mute the AI's output. It's like when people believe that a sufficiently clever hacker can get a given program to do anything. There are millions of security vulnerabilities, yes, but you're unlikely to find a RCE in the program `true`. But instead of the 80s wizard hacker trope we're going back to the older B-movie hypnotist who can take full control of any person by talking

> You are literally linking to the person who is the most prominent voice of AI existential risk, who is seriously suggesting doing exactly that.

There are a few links in the article - which person are you referring to? Being the most prominent voice calling something unsafe is not an inherently valuable qualification. The most vocal opponents of nuclear power tend to have a poor understanding of it.


> This "just so" claim has always struck me as quite a reach. Humans are manipulable, yes, especially by (perceived) group pressure, but there are still limits.

You could reimagine it as "convince most people to do anything, including killing people who are unconvinced" if you prefer.

> There are a few links in the article - which person are you referring to?

Sorry, this link, which is attached to "bomb" in the article: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...

> Being the most prominent voice calling something unsafe is not an inherently valuable qualification.

That's true. But the article does something odd instead of making that critique -- it presents bombing GPU data centers as a reductio ad absurdum, suggesting that if you really believed in AI risk, you would have to be willing to consider doing that. The people who really believe in AI risk are already willing to consider it, though.


> A superintelligence will be able to convince anyone it wants to do anything for it.

What is the evidence for such an extravagant claim? Is there a correlation between IQ delta and persuadability that doesn't degrade as the lower IQ rises? Especially given that there are plenty of people far more intelligent than the President in your example who are much less persuasive.


Just define intelligence as "ability to do a task". The human brain isn't magical; it can be simulated, so we know for certain that a powerful enough AI can be at least as skilled at any given task as the most skilled human who has ever lived.

So for any given person, consider how effective the most persuasive person out of 8 billion would be at persuading them. Maybe they couldn't persuade them of absolutely anything, but pretty close. That is the lower bound for an AI's maximum theoretical abilities, and it's already enough to rule the world.


It's very hard to transport humans to other planets, but a bunch of robots are already exploring Mars. I see no reason why a superintelligence would want to confine itself to Earth.

Why compete with humans on Earth if there's the whole Solar system to exploit?


You completely fail to understand it. When you understand it, you won't use the word "compete"; that's like saying the AI will have to compete with an inanimate object. There will be no competition, no struggle, and no choice in the matter. If the singularity kicks off, humans will be rendered completely helpless. You clearly don't realize this because of the ridiculous notion you harbor of humans competing with machines. Trying to predict their motives is totally useless; you can't do it.


While I like your comment and I understand your sentiment, it's not wise to rely on this type of thing to claim we're going to be ok.

I love these kinds of thought experiments, though...



