Hacker News

My experience with these AI coding tools so far has been that while they're great for writing functions and explaining hard-to-understand errors, they're really let down by the fact that they don't have high-level context of the project or business you're working in.

It seems it's very hard to make good decisions about higher-level things (modules, classes, dependency hierarchies, etc.) without that context, and the programmer is forced to give the tool very specific instructions to compensate. At some point the instructions need to be so specific that you might as well be writing code again.

I have no doubt they'll get there eventually, but it seems like being able to write entire projects effectively might coincide with the arrival of true AGI. There's just so much context to consider.




> they're great for writing functions and explaining hard-to-understand errors

Are they? I'd be interested to hear your experience on this. So far for me they have only really been able to summarise what I could find from the top few results searching online. They do a good job of summarising that, and might be quicker, but that's been it.

However, when I encounter an actually tricky issue, like a threading bug or a null pointer exception/type error that's five levels removed from its source, these tools never manage it. Despite prompting along the lines of "I don't need a NullPointerException explained to me; figure out how this became null", the results are poor.
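To illustrate the "removed from its source" problem, here's a toy Python sketch (the names are invented): the bad value is introduced in one function but the exception only fires several frames later, so explaining the crash site tells you nothing about the cause.

```python
# A None introduced in one layer only blows up several calls later,
# far from its actual source.

def load_config(path):
    # The real bug: returns None on a missing file instead of raising.
    return None

def get_settings(path):
    return load_config(path)        # None passes through silently

def get_timeout(path):
    # TypeError is raised here, two frames away from the cause.
    return get_settings(path)["timeout"]

try:
    get_timeout("missing.cfg")
except TypeError as e:
    print("crash far from the cause:", e)
```

A tool that only paraphrases the traceback will explain the TypeError; finding where the None came from requires walking the call chain.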

This might be my biases speaking, but it really does feel like I'm speaking to something that's good at transforming words and paragraphs into different formats but which has no actual understanding of the code.


I've had success with asking "what are some possible bugs with this block of code?". Sometimes it spots errors that I wouldn't have thought of, or at least gives some ideas for things to check.

Fixing tricky bugs often requires collecting additional information - stepping through code, looking at values of variables and making sure they are what you expect, etc. It's an iterative process and AI tools would need to be able to do the same thing - most humans wouldn't be able to solve errors like that just by looking at the code, and neither would an LLM.

I see the same issues with people who think AI is going to make scientific discoveries - it can't do that because making discoveries requires collecting data until something is certain or we have a clear picture. At that point, you don't need AI. AI won't be making discoveries until we can automate that entire process of forming a hypothesis, testing it / doing experiments, collecting data, refining your hypothesis, etc.


> So far for me they have only really been able to summarise what I could find from the top few results searching online

This has been my experience, except that the chat interface gets me to exactly the answer to my question considerably faster than a search engine.

I see these tools as search engines with much better user interfaces and customised responses.


Yeah, I guess that there's just not a lot of value for me in "what does this error message mean" because once you've worked with a language or framework for any reasonable amount of time you learn them and they sort of disappear into the background. LLMs do seem good for learning new systems though.


I just throw all the relevant source files (in their entirety) at it, paste the error message, and it usually shows me what's going on. Or at least it offers a hunch, which is a good next step towards finding the error.
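In case it helps, the manual process amounts to something like this rough sketch (a hypothetical helper, not a real tool; the file names are examples):

```python
# Bundle whole source files plus the error message into one prompt
# string, ready to paste into a chat model.
from pathlib import Path

def build_prompt(files, error_message):
    parts = []
    for f in files:
        # Label each file so the model can tell them apart.
        parts.append(f"### {f}\n{Path(f).read_text()}")
    parts.append(f"### Error\n{error_message}\n\nWhat is going on here?")
    return "\n\n".join(parts)
```

The limit, as the reply below notes, is that on a large project you first have to work out which files are relevant.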


But where's the limit? Say I have a really large project and the function in question is calling dozens of functions from all over the place. The time it takes me to follow each call chain and copy all the code probably takes longer than just debugging the problem myself.

I was hoping the GitHub or IntelliJ integration of Copilot would automate this. The latter in particular has excellent static analysis of your code and could automatically provide the AI with relevant context, but they just don't.

Even when asking it to annotate a function, specifically requesting that it document any special cases or oddities, I never got much more than e.g. updateFroobCache() annotated with "updates the froob cache". Wow, thanks.


> The time it takes me to follow each call chain and copy all the code probably takes longer than just debugging the problem myself.

Yes, and eventually, one of us who is doing this will get tired of it enough to automate the process. May even earn them a few bucks.


Start high-level, then loop for additional context, with each pass producing a more compressed summary? Symbols first, then introspection, expanding and compressing until enough summarized context fits within the window.


These tools are quite good when you need to write code in a language/framework you are not really familiar with. At the very least for scaffolding it saves a significant amount of time.


I think these situations are a bit paradoxical. If you don't know the language, you can't tell whether it's actually a good solution in that language, and I've seen so many bad answers that I'd be concerned if I weren't familiar with the language.


I feel the opposite. I seldom ask them for anything directly, but they are amazing at understanding the context and autocompleting the highly app-specific code I was about to write anyway.


> they’re really let down by the fact they don’t have high-level context of the project or business you’re working in.

Currently, you need to treat your LLM like it is a junior programmer. AI-coding tools and junior programmers will not give you the code you want if you don't write a detailed prompt. In my experience, however, AI coding tools will provide you with something closer to what you want than a junior programmer would.


I am also very disappointed that it generates outdated code, like JS that uses var, or code that looks 10+ years old because it doesn't use recent Array, String, or DOM APIs. I can tell it to rewrite it, but imagine all the newbies who will use these tools and end up with outdated code and APIs. This proves once again that there is zero intelligence here, just interpolation of code from its training data.


What’s your vision on the replaceability of the human programmer in the next, say, 10 years?



