
If you’re using Claude Code/Cursor, you should be using plan mode.

There are 3 major steps:

(Plan mode)

1. Assuming this is an existing codebase, load the relevant docs/existing code into context (usually by typing @<PATH>).

2. Ask it to make a plan for the feature you want to implement. Assuming you’ve already put some thought into this, be as specific and detailed as you can. Ask it to build a plan that’s divided into individually verifiable steps. Read the plan file that it spits out, correct any bad assumptions it made, ask it questions if you’re unclear on what it’s saying, refine, etc.

(Agent mode) Ask it to build the plan, one step at a time. After it builds each step, verify that it’s correct, or have it help you verify correctness in a way you can observe.
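To make the shape of this concrete, here’s a sketch of the kind of plan file I mean — the project, names, and steps are entirely made up for illustration:

```
# plan.md (illustrative structure only)
Goal: add a GPU-accelerated resize stage to the image ingest pipeline

Step 1: define a ResizeStage interface alongside the existing stages; no behavior change
  Verify: existing tests still pass
Step 2: implement a CPU fallback path behind the interface
  Verify: output matches the old path on the sample images
Step 3: add the CUDA implementation, selected at runtime
  Verify: benchmark script shows the GPU path engaged; output matches CPU within tolerance
```

The point is that every step ends with something you can observe, so you never have to take the agent’s word for it.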

I have been following this basic process mostly with Opus 4.5 in a mixture of Claude Code and Cursor, working on a pretty niche image processing pipeline (also some advanced networking stuff on the side), and have hand-written basically zero code.

People say, “your method sounds like a lot of work too,” and that’s true: it is still work. But designing at a high level how I want some CUDA kernel to work and how it fits into the wider codebase, then describing that in a few sentences, is still much faster than doing all of that design anyway and then hand-writing 100 lines of CUDA (which I don’t know that well).
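For a sense of scale, here is a trivial, hypothetical example of the kind of kernel involved — the real pipeline is much more complex, and the function name and layout here are made up, but it illustrates what “a few sentences of description” expands into:

```cuda
// Hypothetical sketch: RGB -> grayscale, one thread per pixel.
// Assumes packed 8-bit RGB input; weights are the standard BT.601 luma mix.
__global__ void rgb_to_gray(const unsigned char *rgb, unsigned char *gray,
                            int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;  // guard against partial blocks

    int i = y * width + x;
    float r = rgb[3 * i + 0];
    float g = rgb[3 * i + 1];
    float b = rgb[3 * i + 2];
    gray[i] = (unsigned char)(0.299f * r + 0.587f * g + 0.114f * b);
}
```

Describing “one thread per pixel, bounds-check, BT.601 weights” takes one sentence; writing the indexing, guards, and launch configuration by hand is where the time used to go.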

I’d conservatively estimate that I’ve made 2x the progress in the same amount of time as if I had been doing this without LLM tools.


Doesn’t sound like you paid all that much attention when learning ML. The curse of dimensionality doesn’t say that every problem has some ideal model size; it says that the amount of training data needed scales with the size of the feature space. So if you take an LLM and make the network much larger without increasing the size of the input token vocabulary, you aren’t even subject to the curse of dimensionality. Beyond that, there’s a principle in ML theory that larger models are almost always better: the number of params in the model is the dimensionality of the space in which you’re running gradient descent, and with every added dimension, local optima become rarer.
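To make the data-scaling point concrete: covering the unit cube [0, 1]^d at resolution ε takes on the order of

    N(ε, d) ≈ (1/ε)^d

samples. At ε = 0.1 that’s 10^2 = 100 points in 2 dimensions but 10^10 in 10 dimensions — the blowup is exponential in the *input* dimension d, and nothing in that formula involves the parameter count of the model.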


The product they build is literally mentioned in the post? It’s one of the more popular personal finance/budgeting apps, and it’s a pretty good one in my opinion as someone who has used a variety of them.


I’ve had a major conversion on this topic within the last month.

I’m not exactly a typical SWE at the moment. The role I’m in involves a lot of meeting with customers, understanding their issues, and whipping up demos to show how they might apply my company’s products to their problem.

So I’m not writing production code, but I am writing code that I want to be maintainable and changeable, so I can stash a demo for a year and then spin it up quickly when someone wants to see it, or update/adapt it as products/problems change. Most of my career has been spent writing aircraft SW, so I am heavily biased toward code quality and assurance. The demos I am building are not trivial or common in the training data. They’re highly domain specific and pretty niche, performance is very important, and they usually span low-level systems code all the way up to a decent-looking GUI. As a made-up example, it wouldn’t be unusual for me to have a project to write a medical imaging pipeline from scratch that employs modern techniques from recent papers, etc.

Up until very recently, I only thought coding agents were useful for basic CRUD apps, etc. I said the same things a lot of people on this thread are saying, e.g. “people on Twitter are all hype, their experience doesn’t match mine, they must be working on easy problems or be really bad at writing code.”

I recently decided to give into the hype and really try to use the tooling and… it’s kind of blown my mind.

Cursor + Opus 4.5 (high) are my main tools, and their ability to one-shot major changes across many files and hundreds of lines of code (encompassing low-level systems stuff, GPU-accelerated stuff, networking, etc.) is what did it.

It’s seriously altering my perception of what software engineering is and will be and frankly I’m still kind of recoiling from it.

Don’t get me wrong, I don’t believe it fundamentally eliminates the need for SWEs. It still takes a lot of work on my part to come up with a spec (though I do have it help me with that part), correct things that I don’t like in its planning, or catch it doing the wrong thing in real time and redirect it. And it will make strange choices that I need to correct on the back end sometimes. But it has legitimately allowed me to build 10x faster than I probably could on my own.

Maybe the most important thing about it is what it enables you to build that would not have been worth the trouble before: stuff like wrapping tools in really nice, flexible TUIs, creating visualizations/dashboards/benchmarks, slightly altering how an application works to cover a use case you hadn’t thought of before, wrapping an interface so it’s easy to swap libs/APIs later, etc.

If you are still skeptical, I would highly encourage you to immerse yourself in the SOTA tools right now and just give in to the hype for a bit, because I do think we’re rapidly going to reach a point where, if you aren’t using these tools, you won’t be employable.


