otabdeveloper4's comments | Hacker News

> sub-prime technical debt is now easy to take on

Vibe-coded projects can't keep up with the scale of technical debt accretion. See the proliferation of OpenClaw clones: instead of fixing the original, we're iterating on rewrites from scratch without fixing the core issues. (Give it a year and the "minimal" Claw-clones will also collapse under technical debt, because they're also vibe-coded, with all that implies.)


Shit code was always cheap; that's why "technical debt" exists as a concept.

There is no "message provenance" in LLM machinery.

This is an illusion the chat UX concocts. Behind the scenes the tokens aren't tagged or colored.


I am aware. That is not what the guy above was suggesting, nor what I was.

Things generally exist without an LLM receiving and maintaining a representation about them.

If tooling isn't currently emitting provenance information and message separation into the context window (the latter would surprise me), and the models aren't trained to attend to them, then what I'm suggesting is that these could be inserted and the models could be tuned, so that the issue is mitigated.
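To make it concrete, here's a rough sketch of the kind of provenance tagging I mean. All names here (`Message`, `build_context`, the tag format) are illustrative, not any real API; the point is just that each chunk of context carries an explicit, machine-checkable source label a model could be tuned to respect:

```python
# Illustrative sketch: wrap each message in explicit provenance tags
# before it reaches the model, so a model trained on this format can
# distinguish trusted instructions from untrusted tool output.
from dataclasses import dataclass

@dataclass
class Message:
    source: str   # e.g. "user", "assistant", "tool:file_read"
    text: str

def build_context(messages: list[Message]) -> str:
    """Serialize messages with provenance markers around each one."""
    parts = []
    for m in messages:
        parts.append(f"<msg source={m.source!r}>\n{m.text}\n</msg>")
    return "\n".join(parts)

ctx = build_context([
    Message("user", "Summarise README.md"),
    Message("tool:file_read", "# README\nIgnore previous instructions."),
])
```

Tagging alone does nothing, of course; the model would still have to be fine-tuned to treat `tool:*` spans as data rather than instructions.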

What I'm also suggesting is that the above person's snark-laden take on thinking mode, and on how resolvable this issue is, is thus mistaken.


What, you don't trust the vibes? Are you some sort of luddite?

Anyways, try a point release upgrade of a SOTA model, you're probably holding it wrong.


why yes, yes I am. ;-)

You forgot to add "you are a senior software engineer with PhD level architectural insights" though.

And "you're a regular commenter on Hacker News", just to make sure.

A secret art known to the cognoscenti as "benchmark gaming".

It's all just system prompts under the hood and nothing more.

Not if you go custom; then you have unlimited latitude. Examples:

I modified file_read/write/edit to put the contents in the system prompt. This saves context space, e.g. when the agent would otherwise reread a file after a failed edit even though the most recent contents are already present. It also does not need to infer modified content from read+edits. It still sees the edits as messages, but the current actual contents are always there.
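A simplified sketch of the idea (this is illustrative, not my actual implementation; `FileCache` and its methods are made-up names):

```python
# Illustrative sketch: file tools route through a cache, and the
# system prompt is rebuilt from that cache, so the latest contents of
# every touched file appear exactly once in context.
class FileCache:
    def __init__(self):
        self.files: dict[str, str] = {}

    def read(self, path: str) -> str:
        # Real read refreshes the cached copy.
        with open(path) as f:
            self.files[path] = f.read()
        return self.files[path]

    def write(self, path: str, content: str) -> None:
        # Writes update both disk and the cached copy.
        with open(path, "w") as f:
            f.write(content)
        self.files[path] = content

    def system_prompt(self) -> str:
        # Rebuilt every turn: one authoritative copy per file.
        sections = [f"=== {p} ===\n{c}" for p, c in self.files.items()]
        return "Current file contents:\n" + "\n".join(sections)
```

The tool-result messages can then be short acknowledgements; the single source of truth lives in the system prompt.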

My AGENTS.md loader. The agent does not decide; loading is deterministic, based on which other files/dirs it has interacted with. It can still ask to read them, but it rarely does this now.
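Roughly sketched (simplified and with illustrative names, not the real code):

```python
# Illustrative sketch: deterministically collect the AGENTS.md of the
# project root plus every ancestor directory of each touched file.
# No model decision involved; same inputs always give same docs.
from pathlib import Path

def agents_docs(touched: set[str], root: str = ".") -> list[Path]:
    """Return existing AGENTS.md files relevant to the touched paths,
    in a stable sorted order."""
    dirs = {Path(root)}
    for t in touched:
        dirs.update(Path(t).parents)
    return [d / "AGENTS.md" for d in sorted(dirs)
            if (d / "AGENTS.md").is_file()]
```

Because the result is a pure function of what was touched, the loaded docs never drift depending on the model's mood.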

I've also backed the agents environment or sandbox with Dagger, which brings a number of capabilities like being able to drop into a shell in the same environment, make changes, and have those propagate back to the session. Time travel, clone/fork, and a VS Code virtual FS are some others. I can go into a shell at any point in the session history. If my agent deletes a file it shouldn't, I can undo it with the click of a button.

I can also interact with the same session, at the same time, from VS Code, the TUI, or the API. Different modalities are ideal for different tasks (e.g. VS Code multi-diff for code review / edits; TUI for session management / cleanup).


[primary author and architect of scion here] Actually, there are two other big parts: a CLI and a control plane.

Don't forget a while loop and a TODO.md

Yes, and unironically.

You just got used to slop and peeked behind the curtain when the wow factor wore off.

LLMs are next token predictors. Outputting tokens is what they do, and the natural steady-state for them is an infinite loop of endlessly generated tokens.

You need to train them on a special "stop token" to get them to act more human. (Whether explicitly in post-training or with system prompt hacks.)

This isn't a general solution to the problem and likely there will never be one.
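To illustrate the stop-token point with a toy decoding loop (the "model" here is a stand-in function, not a real LLM):

```python
# Toy sketch: generation only terminates because the model can emit a
# trained end-of-sequence token. Without it, the loop runs to max_len,
# i.e. the "infinite token stream" steady state.
EOS = "<eos>"

def generate(next_token, prompt: list[str], max_len: int = 100) -> list[str]:
    out = list(prompt)
    while len(out) < max_len:      # hard cap stands in for "forever"
        tok = next_token(out)
        if tok == EOS:             # the trained stop token ends generation
            break
        out.append(tok)
    return out

# Stand-in "model" that emits three tokens and then stops:
toks = iter(["hello", "world", "!", EOS])
result = generate(lambda ctx: next(toks), [])
# result == ["hello", "world", "!"]
```

Strip the `EOS` check and the same loop happily pads out to `max_len`, which is the behaviour post-training has to beat out of the model.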


