
Does a program have to do things? What can it do? And what does a human have to do, or what can a human do?



Traditionally, a program is a series of instructions. The program doesn't really act on its own.

Now, a program that is objective-driven and can make inferences from new inputs might be something else.

Just as humans try to maximize the stability of their structures via a reward system. (It got slightly complex and is faulty at times; the tradeoff between work and reward is not always in favor of work, because we do not control every variable. Hence procrastination, for example, or addiction, which is not a conscious process but is neuro-chemically induced.)


But what does "act on its own" mean? If I give the program some randomness over its next action, is that "acting on its own"? When I'm at work, I act according to a series of instructions. Am I not acting on my own?
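The contrast above, a fixed series of instructions versus randomness over the next action, can be sketched in a few lines. This is a hypothetical illustration; the action names and both functions are made up for the example.

```python
import random

# Deterministic: the next action is read from a fixed series of instructions.
def scripted_agent(step):
    instructions = ["fetch", "process", "store"]
    return instructions[step % len(instructions)]

# With randomness over the next action -- is this "acting on its own"?
def random_agent(actions=("fetch", "process", "store")):
    return random.choice(actions)

print(scripted_agent(0))  # always "fetch" at step 0
print(random_agent())     # any of the three, unpredictably
```

The scripted version is fully determined by its author; the random version is unpredictable, but the randomness was still handed to it by someone else.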

This is a very philosophical discussion, but if I had an infinitely powerful computer and could simulate an entire universe based on a series of instructions (physical laws), would the beings in that universe who created societies not be "acting on their own"?


Yes: as long as the computer chooses its next set of instructions in order to maximize a given value (the objective), I would say that it acts on its own. That is, it follows an instruction set that was never defined by anyone else.

If the instruction set is limited and defined by someone else, I believe it doesn't act on its own.
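The distinction drawn here, choosing the next instruction to maximize an objective rather than reading it from a fixed list, can be sketched as a minimal value-maximizing chooser. The action set and value function are invented for illustration; nothing here comes from a real system.

```python
# Hypothetical sketch: an "objective-driven" chooser. The next step is
# selected to maximize a value function, not read from a fixed script.
def choose_next_action(actions, value):
    return max(actions, key=value)

actions = ["explore", "exploit", "rest"]
value = {"explore": 0.2, "exploit": 0.9, "rest": 0.1}.get
print(choose_next_action(actions, value))  # -> "exploit"
```

Of course, someone still wrote the value function, which is exactly where the philosophical disagreement lives.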

I think, regarding the simulated universe, that for us they wouldn't have free will, because we know the causality (as long as we are all-knowing about the simulation). But as far as they are concerned, wouldn't they have free will, given that they know they don't know everything, including whether the future they imagine is realizable?

If they knew with certainty that something was not realizable, they wouldn't bother, but since they don't know, either they try to realize a given future or they don't.

Partial information provides choice of action, therefore free will.


>Partial information provides choice of action, therefore free will.

So how would an agent-based system connected to a multi-modal LLM/AI fall into this?


Very good question. What do you think?



