Hacker News: voxleone's comments

I rank with those who think human-like intelligence will require embeddings grounded in multiple physical sensory domains (vision, touch, audio, chemical sensing, etc.) fused into a shared world representation. That seems much closer to how biological intelligence works than text-only models. But if this path succeeds and produces systems with something like genuine understanding or sentience, there’s a deeper question: what is the moral status of such systems? If they have experiences or agency, treating them purely as tools could start to look uncomfortably close to slavery.

It's an interesting question. On one hand, we don't worry much about this with animals, even though the most advanced ones we know of (pigs, for instance) have personalities, moods, and so on. They really only seem to lack language and higher-order reasoning skills. But where's the line?

Lol. A lump of metal can't be sentient.

In the 90s people hoped Unified Modeling Language diagrams would generate software automatically. That mostly didn’t happen. But large language models might actually be the realization of that old dream. Instead of formal diagrams, we describe the system in natural language and the model produces the code. It reminds me of the old debates around visual web tools vs hand-written HTML. There seems to be a recurring pattern: every step up the abstraction ladder creates tension between people who prefer the new layer and those who want to stay closer to the underlying mechanics.

Roughly: machine code --> assembly --> C --> high-level languages --> frameworks --> visual tools --> LLM-assisted coding. Most of those transitions were controversial at the time, but in retrospect they mostly expanded the toolbox rather than replacing the lower layers.

One workflow I’ve found useful with LLMs is to treat them more like a code generator after the design phase. I first define the constraints, objects, actors, and flows of the system, then use structured prompts to generate or refine pieces of the implementation.
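As a hypothetical illustration of that workflow (the template, field names, and wording below are my own invention, not any particular tool's API), a structured prompt might be assembled from the design artifacts like this:

```python
# Hypothetical sketch: building a structured prompt from a design spec
# that was written before any code generation. The template and field
# names are illustrative assumptions, not a real tool's interface.
DESIGN_PROMPT = """\
System: {system}
Actors: {actors}
Constraints:
{constraints}
Task: implement {component}, respecting every constraint above.
"""

def build_prompt(system, actors, constraints, component):
    # Constraints go one per line so violations are easy to audit later.
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return DESIGN_PROMPT.format(
        system=system,
        actors=", ".join(actors),
        constraints=constraint_lines,
        component=component,
    )

prompt = build_prompt(
    system="library loan service",
    actors=["member", "librarian"],
    constraints=["a member may hold at most 5 loans",
                 "loans expire after 21 days"],
    component="the Loan class",
)
print(prompt)
```

The point is that the design phase produces a machine-usable artifact, so the same constraints can be re-fed to the model at every refinement step.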


I agree with the sentiment but want to point out that the biggest drive behind UML was the enrichment of Rational Software and its founders. I doubt anyone ever succeeded in implementing anything useful with Rational Rose. But the Rational guys did have a phenomenal exit and that's probably the biggest success story of UML.

I'm being slightly facetious of course, I still use sequence diagrams and find them useful. The rest of its legacy though, not so much.


One interesting way to look at projects like this is that they’re essentially tiny universes defined by a functional update rule.

The grid + instruction set + step function form something like:

state(t+1) = F(state(t))

Once you have that, you get the same ingredients that appear in many artificial life systems: local interactions; persistence of information (program code); mutation/recombination; selection via replication efficiency. And suddenly you get emergent “organisms”. What’s interesting is that this structure isn’t unique to artificial life simulations. Functional Universe, a conceptual framework [0], models all physical evolution in essentially the same way: the universe as a functional state transition system where complex structure emerges from repeated application of simple transformations.
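As a concrete (and much simpler) illustration of state(t+1) = F(state(t)), here is a minimal sketch in Python: a one-dimensional cellular automaton where the only law is a local XOR rule (Rule 90) applied uniformly at every step. It has no replicators, but it shows structure emerging from nothing but iterated application of one function.

```python
# Minimal "functional universe": state(t+1) = F(state(t)).
# F is a one-dimensional cellular automaton (Rule 90): each cell
# becomes the XOR of its two neighbors, with wraparound at the edges.
def step(state):
    n = len(state)
    return [state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n)]

def run(state, steps):
    # Keep the whole history so the pattern can be inspected.
    history = [state]
    for _ in range(steps):
        state = step(state)
        history.append(state)
    return history

if __name__ == "__main__":
    # Start from a single live cell; repeated application of the same
    # simple rule produces a Sierpinski-like triangular pattern.
    initial = [0] * 15
    initial[7] = 1
    for row in run(initial, 7):
        print("".join("#" if cell else "." for cell in row))
```

Everything downstream (the pattern, its symmetry, its growth) lives entirely in F and the initial state; nothing else is specified.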

From that perspective these kinds of experiments aren’t just toys; they’re basically toy universes with slightly different laws. Artificial life systems then become a kind of laboratory for exploring how information maintains itself across transformations; how replication emerges; why efficient replicators tend to dominate the state space. Which is exactly the phenomenon visible in the GIF from the repo: eventually one replicator outcompetes the rest.

It’s fascinating because the same abstract structure appears in very different places: cellular automata, genetic programming, digital evolution systems like Avida, and even some theoretical models of physics.

In all cases the core pattern is the same: simple local rules + iterative functional updates → emergent complexity. This repo is a nice reminder that you don’t need thousands of lines of code to start seeing that happen.

[0] https://voxleone.github.io/FunctionalUniverse/


Not sure I get the downvote. Real person here; maybe too vague and enthusiastic, but not malicious.

Working on Functional Universe (FU), a formal framework for modeling physical reality as functional state evolution, integrating sequential composition with simultaneous aggregation.

https://voxleone.github.io/FunctionalUniverse/


I can relate to some of what you’re describing, though from a different angle. I’ve always been somewhat of a loner, and as I’ve gotten older I’ve grown increasingly dissatisfied with the shallowness of many modern interactions: the constant glance at the screen, that black brick glued to the hand, the strange absence of attention even when you try to do something kind for someone. It often feels like we’re all performing a kind of theater of socialization.

One thing that helped me over the years was cultivating a richer inner life and maintaining some contact with nature. Long walks, quiet time, reading, building things slowly, the kinds of activities that don’t depend on an audience. At first that kind of solitude can feel oppressive, but with time it can also become a kind of freedom.

As you get older, or at least that has been my experience, you begin to realize how precious each moment is, and how little sense it makes to spend too much of it on interactions that feel hollow. Real presence, even if rare, becomes much more valuable.

Your situation is clearly different, and the transition you’re going through sounds genuinely hard. But sometimes these chapters also open space to rediscover parts of yourself that were quiet for a long time. I wish you strength navigating this change, and I hope you eventually find a rhythm that feels meaningful again.


Yes, anyone can generate code, but real engineering remains about judgment and structure. AI amplifies throughput, but the bottleneck is still problem framing, abstraction choice, and trade-off reasoning. Capabilities without these foundations produce fragile, short-lived results. Only those who anchor their work in proper abstractions are actually engineering, no matter who’s writing the code.

I’ve always designed systems along the classic path: requirements → use cases → schematization. With AI, I continue in the same spirit (structure precedes prompting), but now the foundational layer of my systems is axioms and constraints, and the architecture emerges through structured prompts. AI in this shift is an aide in building systems that are logically grounded. This is where the “all of us as AI engineers” claim becomes subtle.

What’s striking here is the convergence on a minimal axiomatic kernel (Lean) as the only scalable way to guarantee coherent reasoning. Some of us working on foundational physics are exploring the same methodological principle. In the “Functional Universe” framework[0], for example, we start from a small set of axioms and attempt to derive physical structure from that base.

The domains are different, but the strategy is similar: don’t rely on heuristics or empirical patching; define a small trusted core of axioms and generate coherent structure compositionally from there.
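As a toy sketch of that "small trusted core" strategy in Lean (the definitions here are illustrative inventions, not taken from mathlib or from the Functional Universe framework): start from a minimal inductive core and let the kernel certify each derived fact, rather than trusting heuristics.

```lean
-- Toy sketch: a Peano-style core from which structure is derived
-- compositionally; the kernel checks every step of the derivation.
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

def add : MyNat → MyNat → MyNat
  | n, MyNat.zero   => n
  | n, MyNat.succ m => MyNat.succ (add n m)

-- Holds by definitional unfolding of `add`; the kernel certifies it.
theorem add_zero (n : MyNat) : add n MyNat.zero = n := rfl
```

Nothing outside the two inductive clauses and the recursion is assumed, which is exactly the "small trusted core" discipline scaled down to a few lines.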

[0] https://voxleone.github.io/FunctionalUniverse


I’ve found a workflow that feels both structured and respectful of professional craft, especially in the context of this thread. I don’t just "vibe code" and let an LLM fill in the blanks. I use a classic design discipline (UML and use cases) to document the process: 1. Start with requirements. 2. Define use cases. 3. Implement classes/objects (architecture first, not after-the-fact refactors). 4. Add constraints and invariants (contracts, boundaries, failure modes, etc.). 5. Let the agent work inside that frame, pausing at milestones for human oversight.
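Step 4 can be made executable rather than purely documentary. A minimal sketch, assuming a hypothetical money-transfer domain (the class and field names are invented for illustration): the invariants are stated up front as a contract that any agent-generated code must satisfy.

```python
# Hypothetical sketch of "constraints and invariants as contracts":
# the invariants are enforced at construction time, so generated code
# that violates them fails immediately instead of silently.
from dataclasses import dataclass

@dataclass
class Transfer:
    amount_cents: int
    source: str
    target: str

    def __post_init__(self):
        # Invariants written during design, before implementation:
        if self.amount_cents <= 0:
            raise ValueError("transfer amount must be positive")
        if self.source == self.target:
            raise ValueError("source and target accounts must differ")
```

Because the contract lives in code, it doubles as documentation of intent and as a test surface for whatever the agent produces inside the frame.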

Those UML/use-case/constraint artifacts aren’t committed as session logs per se, but they are part of the author’s intent and reasoning that gets committed alongside the resulting code. That gives future reviewers the why as well as the what, which is far more useful than a raw AI session transcript.

Stepping back, this feels like a decent and dignified position for a programmer in 2026: humans retain architectural judgement --> AI accelerates boilerplate and edge implementation --> version history still reflects intent and accountability rather than chat transcripts. I can’t afford to let go of the productivity gains that flow from using AI as part of a disciplined engineering process, but I also don’t think commit logs should become a dumping ground for unfiltered conversation history.


It is interesting that Google translates the first paragraph of the text like this:

"And the word he spoke was all like this. He was a hired hand, and he was full of malice, and he was in ƿælfæst. He didn't remember the man's name. He was in gefeohte(...)"

It says it's Icelandic. :)


