What Ilya Sutskever really wants (aipanic.news)
44 points by georgehill 6 months ago | 25 comments



“If you believe that AI will literally automate all jobs, literally, then it makes sense for a company that builds such technology to … not be an absolute profit maximizer. It's relevant precisely because these things will happen at some point.”

Can someone explain how that's possible? AI will plant/harvest crops and be a butcher? I don't follow.


See this proof-of-concept integration of an LLM into the Spot robot - https://youtu.be/djzOBZUFzTw

I don't see anything limiting this tech from being integrated into other kinds of robots.
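
The basic pattern behind demos like this is straightforward: summarize the robot's state and sensor readings as text, ask the LLM to pick the next action from a constrained vocabulary, and dispatch that action to the robot's control API. A minimal sketch of the loop (the robot interface at the bottom is hypothetical, standing in for a real SDK such as Spot's, and the model name is just an example):

    from openai import OpenAI  # real client: pip install openai

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    ALLOWED = {"move_forward", "turn_left", "turn_right", "stop"}

    def next_action(sensor_summary: str) -> str:
        # Ask the model to choose exactly one action from a fixed vocabulary.
        resp = client.chat.completions.create(
            model="gpt-4",  # example model name
            messages=[
                {"role": "system",
                 "content": "You control a robot. Reply with exactly one of: "
                            + ", ".join(sorted(ALLOWED))},
                {"role": "user", "content": sensor_summary},
            ],
        )
        action = resp.choices[0].message.content.strip()
        # Never pass free-form model output straight to the motors.
        return action if action in ALLOWED else "stop"

    # Hypothetical robot interface; a real one would be e.g. the Spot SDK:
    # robot.execute(next_action(robot.describe_sensors()))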


A job could be performed by a human, but still be automated in the sense of virtually all decision-making being made elsewhere, turning pretty much every form of labor into assembly-line work, with human supervisors only present to wear a badge of authority, affirm the computer's decisions, and formally transact hiring and firing.

That said, automation of mundane tasks by robots looks increasingly plausible: https://www.youtube.com/watch?v=Qob2k_ldLuw



> automated in the sense of virtually all decision-making being made elsewhere, turning pretty much every form of labor into assembly-line work, with human supervisors only present to wear a badge of authority, affirm the computer's decisions, and formally transact hiring and firing.

This almost exactly describes Amazon warehouses.


As long as humans represent the lowest cost actuators, perhaps we'll have an economic role suited to manual labour. But that's not necessarily a situation that lasts a very long time.


Nor one that will enable particularly humane jobs. Manual labor tends to be poor in terms of health, and competing only on cost will be a race to the bottom.


Even the rough path after the singularity isn’t fathomable to our lower intelligence. Once you have something that can improve itself, getting smarter and smarter on the order of seconds, there’s no way of telling how it achieves what it wants to achieve. It’s the power imbalance that exists in a universe where an ant is trying to run to safety while there’s an F-16 overhead.

Humans are so used to being at the top of the food chain that there is scant understanding in our culture of what it means to be in the middle of it, like most organisms are.



AI doesn’t need meat or grain.


Uranium and oil then?


If you think this AI thing is neat, just wait until you hear about robots!


robots that zero-shot learn how to plant/harvest crops just by watching a video of you doing it.


Do you also use this argument to discount the value of AI investments/R&D to somewhere near that of mere robot manufacturers?


re the website name (aipanic) and the attitude that the real problem is people's reaction to ai and not ai itself.

radio erevan q&a

q: what should we do in case of an atomic attack?

a: you dress in a white sheet and head tippy-toe to the cemetery.

q: but why tippy-toe?

a: let's not create panic, comrade.


> re the website name (aipanic) and the attitude that the real problem is people's reaction to ai and not ai itself.

The website seems to be about fighting "ai panic" though, not producing/sharing the panic. At least that's how I understand the last sentence:

> It was after this meeting that I decided I wanted to fight the publicly-facing AI hype and doom - even more.


we are reading 'panic' the same way :) fight panic all you want, but ai rendering the vast majority of humans redundant is a rather more pressing problem to fight, i would say.


A great many people disagree and do not think that is a likely or even remote outcome. If they are correct, then you are shouting "FIRE" in a crowd for no reason.


If there's a lot of smoke, people are running out of the building, and you can see an ominous red glow in the windows, shouting "FIRE" is the right thing to do even if we are not going to be engulfed in flames this very second or the next. Given the evidence we all have, the potential costs are simply not comparable.


What smoke? So far the predictions of AI x-risk folk haven't panned out the way they said they would. In fact, the opposite has happened. What smoke are you referring to?


Does he think they are going to some day accidentally become the Tyrell Corporation from Blade Runner?


Apparently, OpenAI's board decides when AGI has been achieved. When it has, all bets are off with Microsoft (and presumably other sponsors?). Any exclusive IP rights or deals no longer apply.


This is from September


Indeed, but probably still relevant, as Ilya is one of the few left at the reins of OpenAI, so it's useful to know what their thoughts are, given the board didn't get rid of them.


tl;dr: the author doesn't think AGI is possible (or somehow thinks it won't affect anything), but Ilya does think it's possible, and the author is concerned by this for some reason.


I'm still not sure what the point of the article is.



