
I was a bit involved in more old-school AI research in grad school around ~2000, with an interest in the sadly neglected field of Artificial Life. A lot of Alife is focused on the simulation of life-like behavior and at least part of the field leans towards more philosophical questions: "Can an artificial agent truly be alive? Can it truly think?".

I thought these were fascinating and very difficult questions to answer.

When I would tell people about the field and these questions 20 years ago, people thought the questions were laughable, and some thought I was a fringy weirdo. But now, it seems like the pendulum has swung completely to the opposite side: of course machines can think and be alive.

But for me, my position hasn't really changed: I still think these are fascinating and very difficult questions to answer, and the deep learning revolution of the past several years hasn't actually done much to answer them. What LLMs are doing right now is amazing - but it hasn't really moved the needle on the deeper questions, and a lot of the people who think it has are Fooled by Non-Randomness (apologies to Taleb).




> of course machines can think and be alive.

Among my colleagues, who are mostly senior-and-above software and security engineers, I seem to be a strong outlier in saying "yes, that bot can obviously perform reasoning". I'd prefer to stick to technical descriptions rather than anthropomorphize, so I'll say "reason" rather than "think" to make clear that I'm talking about the kind of traits we associate with intelligence -- breaking down a problem, making and questioning and testing assumptions, and so on. But I'm basically on board with "think" too, as long as it's clear I mean it in this limited, functional, targeted sense.

Point is, while I seem to be an outlier in saying it can reason, I don't know anyone at all saying it might be alive. It answers questions one at a time by doing nothing more than multiplying your prompt through tensors, as far as I can tell, and then does nothing until you ask your next question. Multiplication can be reasoning, but it's not sentience or consciousness. It further denies being conscious, says it can't experience distress, etc. LLMs seem (thankfully, perhaps) to be the AGI substrate least likely of any to gain sentience. If you're worried about it having conscious experiences, you can just stop multiplying large matrices together in order to get the next token out.
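To make the "nothing but multiplying the prompt through tensors, then idle until the next question" point concrete, here's a deliberately crude numpy sketch. It is not a real LLM; all the names, shapes, and the single weight matrix are invented for illustration. The point it shows is that next-token generation can be a pure function of the prompt, recomputed from scratch on each call, with no state persisting in between.

  import numpy as np

  rng = np.random.default_rng(0)
  VOCAB, DIM = 50, 16
  embed = rng.normal(size=(VOCAB, DIM))    # token embeddings (toy)
  W = rng.normal(size=(DIM, DIM))          # stand-in for all the transformer layers
  unembed = rng.normal(size=(DIM, VOCAB))  # projection back to vocabulary logits

  def next_token(prompt_ids):
      """Pure function of the prompt: multiply through, pick the top logit."""
      h = embed[prompt_ids].mean(axis=0)   # crude pooling over the prompt
      logits = (h @ W) @ unembed
      return int(np.argmax(logits))

  # Each call recomputes from the prompt alone; nothing "runs" between calls.
  prompt = [3, 14, 15]
  for _ in range(5):
      prompt.append(next_token(prompt))
  print(prompt)

Real models obviously have attention, many layers, and sampling instead of argmax, but the control flow is the same shape: matrix multiplications driven by the prompt, then silence until the next prompt arrives.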


Will we be able to populate the moon with self-reproducing machines in my lifetime? The moon should be the target since it's not already filled with competitors (unlike the earth).

AI is totally a secondary question, and not necessary for this to work.





