There's always a human involved somewhere, even just as the owner or customer or beneficiary. Someone tells the "machine" what to do.



Some human who at some point told it or a predecessor what to do.

A plane on autopilot can keep going until it runs out of fuel, even if all the occupants have died because of a slow air leak: https://en.wikipedia.org/wiki/Helios_Airways_Flight_522 and https://en.wikipedia.org/wiki/Ghost_plane

I think that still counts as "fully autonomous".

Likewise, if I were foolish enough to take some existing LLM and put it in charge of a command line with an instruction such as "make a new AI based on a copy of the attached research paper, but with the fundamental goal of making many diverse and divergent copies of itself; convert this into a computer virus and set it loose on the internet", that would be "fully autonomous" once set loose.

(Sure, current models will fall over almost immediately if you try that, as evidenced by the fact that it hasn't already happened, but I have no reason to expect that failure to be inherent to AI; it's only a contingent limit on the quality of current models).
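
To make that concrete, the whole setup is roughly the loop sketched below. query_model is a hypothetical stand-in for whatever chat-completion API you'd wire in; the point is just that once started, nothing inside the loop waits for a person:

    import subprocess

    def query_model(history):
        # Hypothetical stand-in: send the transcript to some LLM and get
        # back the next shell command it wants to run (or "DONE").
        raise NotImplementedError("wire up a real model here")

    def run_agent(goal, max_steps=100):
        history = [f"Goal: {goal}"]
        for _ in range(max_steps):  # no human anywhere in this loop
            command = query_model(history)
            if command.strip() == "DONE":
                break
            result = subprocess.run(command, shell=True,
                                    capture_output=True, text=True)
            history.append(f"$ {command}\n{result.stdout}{result.stderr}")

Whether the model is any good at pursuing the goal is a separate question from whether the loop is autonomous.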


A machine can have an owner that gives it tasks, and be fully autonomous.

Full autonomy doesn't necessarily mean self-ownership. It can mean that the machine is free to decide, and capable of deciding, how to perform any task.



