Hacker News | winwang's comments

Linear walkthrough: I ask my agents to give me a numbered tree. Controlling the tree's size controls the granularity, and the numbering makes it simple to refer to specific points for discussion.
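A toy sketch of the kind of numbered tree I mean (the helper and the example plan are hypothetical, not from any real tool): each point gets a stable handle like 2.1 that's easy to reference in discussion.

```python
def numbered_tree(node, prefix=""):
    """Render a nested {label: children} dict as numbered outline lines."""
    lines = []
    for i, (label, children) in enumerate(node.items(), start=1):
        num = f"{prefix}{i}"
        lines.append(f"{num}. {label}")
        lines.extend(numbered_tree(children, prefix=num + "."))
    return lines

plan = {
    "Parse config": {},
    "Build pipeline": {"Load data": {}, "Validate schema": {}},
}
for line in numbered_tree(plan):
    print(line)
# 1. Parse config
# 2. Build pipeline
# 2.1. Load data
# 2.2. Validate schema
```

Asking for "a tree with at most N nodes" is then a natural way to dial the granularity up or down.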

Other things that I feel are useful:

- Very strict typing/static analysis

- Denying tool usage with a hook that tells the agent why, and what it should do instead (rather than a bare denial, or dangerously accepting everything)

- Using different models for code review
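The deny-with-guidance hook above can be sketched roughly like this. This assumes Claude Code's PreToolUse hook convention (tool call arrives as JSON on stdin; a "blocking" exit code of 2 feeds the stderr message back to the agent); the redirect rules themselves are made up for illustration.

```python
import json
import sys

# Hypothetical rules: map a denied command prefix to guidance for the agent.
REDIRECTS = {
    "pip install": "Use `uv add` instead so the lockfile stays in sync.",
    "rm -rf": "Don't delete recursively; list the files and ask first.",
}

def check(tool_name, command):
    """Return a guidance message if the command should be denied, else None."""
    if tool_name != "Bash":
        return None
    for prefix, guidance in REDIRECTS.items():
        if command.strip().startswith(prefix):
            return f"Denied `{prefix}`: {guidance}"
    return None

def main(event: dict) -> int:
    """Hook entry: return 0 to allow, 2 to block (stderr carries the guidance)."""
    msg = check(event.get("tool_name", ""),
                event.get("tool_input", {}).get("command", ""))
    if msg:
        print(msg, file=sys.stderr)
        return 2
    return 0

# In the real hook script you'd wire it up as:
#   sys.exit(main(json.load(sys.stdin)))
```

The point is the returned message: the agent gets told what to do next, instead of just hitting a wall.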


If you have enough cores, you could pool the L1 together for makeshift RAM!

Kind of agreed. I like vibe coding as "just" another tool. It's nice to review code in the IDE (well, VSCode), make changes without fully refactoring, and have the AI "autocomplete". Interestingly, it's sometimes way faster and easier to refactor by hand because of the IDE tooling.

The ways that agents actually make me "faster" are typically: 1. making it more fun to slog through tedious/annoying parts, 2. fast code-review iterations, 3. parallel agents.


Yeah. I've been finding a scary number of people saying that they never write code by hand any more, and I'm having a hard time seeing how they can keep in practice enough to properly supervise. Sure, for a few weeks it will be OK, but skills can atrophy quickly, and I've found it's really easy to get into an addictive loop where you just vibe code without checking anything, and then you have way too much to review so you don't bother or don't do a very good job of it.

Not sure, but https://status.claude.com/ uptime is pretty spotty. Funnily, the latest bar is still green despite there being an incident (and the status messages even acknowledge it).


Interesting! I actually split up larger goals into two plan files: one detailed plan for design, and one "exec plan" which is effectively a build graph but the nodes are individual agents and what they should do. I throw the two-plan-file thing into a protocol md file along with a code/review loop.
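A minimal sketch of what I mean by an "exec plan" as a build graph (task names are hypothetical): nodes are individual agent tasks, edges are dependencies, and topological order tells you which agents can be dispatched in parallel at each step.

```python
from graphlib import TopologicalSorter

# Hypothetical exec plan: task -> set of tasks it depends on.
exec_plan = {
    "write-schema": set(),
    "impl-api": {"write-schema"},
    "impl-client": {"write-schema"},
    "integration-review": {"impl-api", "impl-client"},
}

ts = TopologicalSorter(exec_plan)
ts.prepare()
while ts.is_active():
    batch = sorted(ts.get_ready())  # deps satisfied: dispatch these agents in parallel
    print(batch)
    ts.done(*batch)
# ['write-schema']
# ['impl-api', 'impl-client']
# ['integration-review']
```

Each batch is a wave of agents you can launch at once; the detailed design plan stays in the other file.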

Awesome stuff. I have a 'root' CLI that I namespace stuff into to remove the need to pass around paths, e.g. `./cli <cmd> ...`

For manual prompting, I use a "macro"-like system where I can just add `[@mymacro]` in the prompt itself and Claude will know to run `./lookup.sh mymacro` to load its definition. I can easily chain multiple together: `[@code-review:3][@pycode]` -> 3x parallel code review, with subagents initialized from python-code-guide.md or something. I also wrote a parser so it gets reminded via additionalContext in hooks.

Interestingly, I've seen Claude do `./lookup.sh relevant-macro` without any prompting by me. Probably due to it being mentioned in the compaction summary.
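A rough sketch of the `[@macro:count]` parsing side (the regex and function name are my guesses at a minimal version, not the actual parser):

```python
import re

# Matches [@name] or [@name:count], where name may contain hyphens.
MACRO_RE = re.compile(r"\[@([\w-]+)(?::(\d+))?\]")

def parse_macros(prompt: str):
    """Extract (name, count) pairs from [@name] / [@name:count] tokens."""
    return [(name, int(count) if count else 1)
            for name, count in MACRO_RE.findall(prompt)]

print(parse_macros("[@code-review:3][@pycode] please review"))
# [('code-review', 3), ('pycode', 1)]
```

A hook can run this over each user prompt and inject the expanded definitions as additional context.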


Compaction includes all user prompts from the most recent session verbatim, so that's likely what's happening!

Fun fact, it can miss! I've seen it miss almost half my messages, including some that were actually important, haha.

Apparently LLM quality is sensitive to emotional stimuli?

"Large Language Models Understand and Can be Enhanced by Emotional Stimuli": https://arxiv.org/abs/2307.11760


Is it the performance benefits? Or being able to write concurrent code much more expressively? Though I suppose the latter might imply the former.


Performance.

Our code looks like pure pandas (fancier SQL) wrapped as an HTTP service (arrow instead of json), so the expressivity is more of a step backwards. We already did the work of turning awkward, irregular code into relational pipelines that GPUs love.

Our problems are:

- Multi-tenancy. Our users get to time-share GPUs, so when many GPU tasks, big and small, come in, we want them co-scheduled across the many GPUs and their many cores. GPUs are already more cost-effective per watt than CPUs, but we think we can get a 2x+ improvement here, which is significant.

- Constant overheads. One job can be deep, with many operations, so round-tripping each step of the control plane (think each SQL subexpression) between CPU and GPU is silly and adds up. Small jobs are dominated by embarrassing overheads that preclude certain use cases. We're considering CPU hot paths to avoid this, but would rather just fix the GPU path.


Forgot exactly when, but at least ~4 days ago. Or maybe I'm tweaking...

In any case, this is likely the next computing revolution.

