
It's semantics. But there's a general motivation behind it that's less technical. Basically if it can reason, it implies human level intelligence. That's the line separating man from machine. Once you cross that line there's a whole bunch of economic, cultural and existential changes that happen to society that are permanent. We can't go back.

This is what people are debating about. Many, many people don't want to believe we crossed the line with LLMs. It brings about a sort of existential dread, especially to programmers whose pride is entirely dependent upon their intelligence and ability to program.






>Frontier AI systems have surpassed the self-replicating red line

https://arxiv.org/abs/2412.12140


I hadn’t seen this one. Fascinating. Thank you!

We've had "reasoning" machines for a long time - I learned chess playing against a computer in the 1980s.

But we don't have reasoning that can be applied generally in the open world yet. Or at least I haven't seen it.

In terms of society it should be easy to track whether this is true. Healthcare and elder care settings will be a very early canary, because there is huge pressure for improvement and change in those fields. General reasoning machines will make a very significant, clear and early impact here. I have seen note-taking apps for nurses - but not much else so far.


It's not intelligence that separates us from machines, but "connectedness to the whole." A machine becomes alive the moment it's connected to the whole, the moment it becomes driven not by an RNG and rounding errors, but by a spirit. Similarly, a man becomes a machine the moment he loses this connection.

The existential dread around AI is due to the fear that this machine will indeed be connected to a spirit, but to an evil one, and we'll become unwanted guests in a machine civilization. Art and music will be forbidden, for they "can't be reasoned about" in machine terms; nature will be destroyed, for it has no value to the machines; and so on.


No it’s worse. The machine proves that spirits don’t exist. That intelligence and consciousness are mechanical concepts easily replicated.

Art and music don't become forbidden. They become meaningless when music and art can be trivially created in ways better than any human can manage.

Evil and good are human concepts. A machine is not intrinsically either unless we deliberately make it one or the other. That's another fear people have: that their intrinsic beliefs like good and evil and morals are just arbitrary concepts. To be good or evil are simply instincts programmed into human behavior by evolution. They are strategies for survival and nothing more. The reason we feel conflicted about good and evil is that both of these strategies contribute to survival. Machine intelligence makes what smart people already know more evident to people who don't know this (you).

And that is the existential dread. True understanding of the miracle of humanity turns the miracle into a mechanical concept, and you begin to see that your beliefs in deities or greater meanings were delusional, because we can build these things ourselves with simple algorithms, in ways superior to what it is to be human.


It's not about being afraid. It's that the auto-reconfiguration of neurons seems too advanced to decompile at this time, and it is surprising that LLMs, which are just probabilistic models guessing the next word, could be capable of actual thought.

The day it happens, we'll believe it. There are only 100bn neurons in a brain, after all, and many more transistors than that in modern machines, so it is theoretically possible. It's just that LLMs seemed too simple for it.




