Hacker News | ZLStas's comments

Fair point that blind rule-following is dangerous — but that's true of any checklist, not specific to this tool. The goal isn't religious compliance, it's surfacing relevant principles at the right moment so the developer can make an informed decision to follow or consciously break them. Also — if LLM weighting on bad code is the core problem, isn't that an argument against using LLMs for coding altogether? Yet here we are, and they're useful anyway.

Your heart is pure. But people are going to "let the AI do it", just like people let calculators do it in the 1970s, or let computers do it in the 1990s. An LLM with a very small additional bit of weight is still probabilistic, and the weights favor "best practices" and median program texts. You're encouraging blind rule-following.

My point is that LLM weights are full of misinformation, misconceptions, and median code. Putting a few books in isn't going to change that.


I think he should have a documentation file which explains what practices he thinks the LLM should follow.

There are router skills that will route the LLM to the useful skill based on the context.
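A router skill like the one described above can be as simple as a dispatcher that matches the task context against an index of distilled skill files. A minimal sketch, where every file name and keyword is a made-up example, not a real skill catalog:

```python
# Hypothetical "router" skill: pick which distilled skill files to load
# based on keywords found in the task context. All names are illustrative.

SKILL_INDEX = {
    "clean-code.md": ["refactor", "naming", "readability"],
    "ddia-patterns.md": ["replication", "partitioning", "consistency"],
    "testing.md": ["unit test", "mock", "coverage"],
}

def route(context: str) -> list[str]:
    """Return the skill files whose keywords appear in the task context."""
    context = context.lower()
    return [
        skill
        for skill, keywords in SKILL_INDEX.items()
        if any(keyword in context for keyword in keywords)
    ]

print(route("Refactor this function for readability and add unit tests"))
# picks clean-code.md and testing.md
```

In practice the routing itself could also be delegated to the LLM (given the index as context), but keyword matching keeps the skill selection cheap and deterministic.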

That's a great point — asking focused questions definitely gets better results than dumping generalized knowledge. I think both approaches can complement each other.

Where I see book-based skills adding value is in the iterative review loop: you let the LLM review your code against well-known principles (like Clean Code or DDIA patterns), it flags issues and suggests improvements, and you apply them repeatedly. Over multiple passes, the code quality compounds significantly. So it's less about feeding the LLM static rules and more about giving it a structured lens to evaluate through. The LLM still does the thinking — the books just sharpen its focus.

That said, I'm still figuring out how to run this evaluation properly. A colleague of mine has been experimenting with spinning up sub-agents to review the outputs of the main LLM flow — essentially an automated review layer. That might be the right pattern: one agent creates, another evaluates against known principles. Curious if anyone else has tried something similar.
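The "one agent creates, another evaluates" loop can be sketched as plain control flow. In real use, `create` and `critique` would be LLM API calls; here they are toy stand-ins so the loop is runnable, and all the names and the pass limit are assumptions for illustration:

```python
def iterate(create, critique, task, max_passes=3):
    """Alternate a creating agent and a reviewing agent until the review is clean."""
    feedback = []
    code = ""
    for _ in range(max_passes):
        code = create(task, feedback)      # main agent, given prior review notes
        feedback = critique(code)          # sub-agent reviews against principles
        if not feedback:                   # reviewer found nothing to flag: done
            break
    return code, feedback

# Toy agents: the creator adds a docstring only after being told to.
def create(task, feedback):
    if any("docstring" in note for note in feedback):
        return 'def solve():\n    """Solve the task."""\n    return 42\n'
    return f"def solve():  # {task}\n    return 42\n"

def critique(code):
    return [] if '"""' in code else ["add a docstring"]

code, feedback = iterate(create, critique, "answer everything")
# converges in two passes: first draft flagged, second draft clean
```

The design point is that the reviewer's feedback is fed back into the creator, so each pass can only address what the principles actually flagged, which is what makes the quality compound rather than drift.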


Are you using Claude Code?

I find it to be SIGNIFICANTLY better than Projects in any other form, because of the number of layers you can create.

You can store the full books, have your workflows, etc.


Yes, I use Claude — both the chat and Claude Code. Projects are great for layering context, but my concern with storing full books is that they eat up a huge chunk of the context window. I'd rather spend that budget on actual project context — the codebase, architecture decisions, domain specifics.

That's where the distilled skill files come in: you get the core principles from the book in a compact, actionable form without burning through your context on hundreds of pages of text. At least that's how I see it.
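The context-budget argument above can be made concrete with a back-of-the-envelope calculation. The ~4 characters-per-token ratio is a rough rule of thumb for English text, and the page sizes are illustrative guesses, not measurements of any specific model or book:

```python
CHARS_PER_TOKEN = 4  # rough heuristic for English prose

def approx_tokens(chars: int) -> int:
    return chars // CHARS_PER_TOKEN

full_book = approx_tokens(400 * 2000)   # ~400 pages at ~2000 chars/page
skill_file = approx_tokens(6 * 2000)    # ~6 pages of distilled principles

print(full_book, skill_file)  # roughly 200000 vs 3000 tokens
```

Under these assumptions a full book costs on the order of a hundred thousand tokens, while a distilled skill file costs a few thousand — which is why the distilled form leaves room for the codebase itself.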

