
Do people find these AI autocomplete things helpful? I was trying the Xcode one and it kept suggesting API calls that don't exist. I spent more time fixing its errors than I would have spent typing the correct API call.



I really, really dislike the ones that get in your way. I start typing something and it injects random stuff (yes, in the autocomplete colors). It gives me the same feeling as hearing your own voice echoed back on a phone call: it completely derails your thought process.

In IntelliJ, thankfully, you can disable that part of the AI and keep the part you trigger yourself when you want something from it.


> It gives me the same feeling as hearing your own voice echoed back on a phone call: it completely derails your thought process.

This is a fantastic description of how it disturbs my coding practice, which I hadn't been able to put into words. It's like someone constantly interrupting you with small suggestions whether you want them or not.


This is it. I have a picture in my mind, and then it puts 10 lines of code in front of me that my brain can't ignore. By the time I'm done reviewing them, they've already tainted my idea.


I find the simpler engines work better.

I want the end of the line completed, with a focus on context from the working code base, and I don't want an entire 5-line function completed from incomplete requirements.

It is really impressive when it implements a 5-line function correctly, but it's like winning the lottery.


I particularly like the part where it suggests changes to pasted code.

When I copy and paste code, very often it needs some small changes (like changing all xs to ys and at the same time widths to heights).

It's very good at this, and does the right thing the vast majority of the time.
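
For example, something like this (a minimal Go sketch with made-up names; the point is the consistent rename, not the code itself):

    // Copied helper, working along the x axis:
    func centerX(x, width int) int {
        return x + width/2
    }

    // After pasting, the suggested edits rename everything consistently:
    func centerY(y, height int) int {
        return y + height/2
    }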

It's also good with test code. Test code is supposed to be explicit and not very abstracted (so someone only mildly familiar with a codebase who's looking at a failing test can at least figure out the cause). This means it's full of boilerplate, and a smart code generator can help fill that in.
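
The kind of explicit, repetitive test body it's good at filling in looks something like this (a minimal Go sketch; Rect and Area are hypothetical names, and the standard testing package is assumed):

    // Everything is spelled out: concrete values, no helpers or abstractions.
    func TestRectArea(t *testing.T) {
        r := Rect{Width: 3, Height: 4}
        if got := r.Area(); got != 12 {
            t.Errorf("Area() = %d, want 12", got)
        }
    }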


Visual Studio "IntelliSense" has always been pretty good for me. It seemed to make good guesses about my intentions without doing anything wild. It seemed to use ad hoc rules and patterns, but it worked and then got out of the way.

Then it got worse a couple of years ago when they tried some early-stage AI approach. I turned it off. I expect that next time I update VS it'll have got substantially worse and it will have removed the option for me to disable it.


Agreed, the old Visual Basic, Visual C++, Borland Delphi, and Visual C# experiences were how I dove into the deep end of several languages back in the late 90s/early 2000s. Things were VERY discoverable at that point. Obviously a deeper understanding of a language is necessary for doing real work, but noodling around just trying to get a feel for what can be done is a great way to get started.


I like Cursor; it seems very good at keeping its autocomplete within my code base. If I use its chat feature and ask it to generate new code, that doesn't work super well. But it'll almost always autocomplete the right function name as I'm typing, and then infer the correct parameters to pass in if they're variables and the function is in my codebase rather than a library. It's also, unsurprisingly, really good at pattern recognition, so if you're adding to an enum or something it'll autocomplete that sensibly too.
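
For example (a minimal Go sketch with hypothetical names), once a couple of cases exist, the next one is easy for it to predict:

    type Status int

    const (
        StatusPending Status = iota
        StatusActive
        StatusSuspended // start typing "Stat" here and the completion fills in the rest
    )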

I think it’d be more useful if it was clipboard aware though. Sometimes I’ll copy a type, then add a param of that type to a function, and it won’t have the clipboard context to suggest the param I’m trying to add.


I really like Cursor but the more I use it the more frustrated I get when it ends up in a tight loop of wanting to do something that I do not want to do. There doesn’t seem to be a good way to say “do not do this thing or things like it for the next 5 minutes”.


It probably depends on the tool you use and on the programming language. I use Supermaven autocomplete when writing TypeScript and it's working great; it often feels like it's reading my mind, suggesting what I would write next myself.


I mostly use one-line completes and they are pretty good. Also I really like when Copilot generates boilerplate like

    if err != nil {
        return fmt.Errorf("cannot open settings: %w", err)
    }


I use the one at G and it's definitely helpful. It's not revolutionary, but it makes writing code less of a headache when I kinda know what that method is called but not quite.


I often delete large chunks of it unread if it doesn't do what I expected. It's much like copy and paste; deleting code doesn't take long.


So your test is "seems to work"?


No, what I meant is that, much like when copying code, I only keep the generated source code if it's written the way I would write it.

(By "unread" I meant that I don't look very closely before deleting if it looks weird.)

And then write tests. Or perhaps I wrote the test first.


Oh, if the AI doesn't do what you expected, got it.


Right now my opinion is that they're 60% unhelpful, so I largely agree with you. Sometimes I'll find the AI came up with a somewhat better way of doing something, but the vast majority of the time it does something wrong, or something that appears right but is actually wrong, which I can only spot with a fairly careful code review.


I suspect that if you work on trivial stuff that has been asked on Stack Overflow countless times, they work very nicely.


This is what I've been noticing. For C++ and Swift, it makes pretty unhelpful suggestions. For Python, its suggestions are fine.

Swift is especially frustrating because it will hallucinate the method name and/or the argument labels (in Swift you usually have to write the argument labels when calling a method, e.g. replacingOccurrences(of:with:) on String, so one wrong label and the call won't compile).


Ah, I've had it hallucinate non-existent methods in Python rather often.

Or when I say I need to do something, it invents a library that conveniently happens to do just that thing and writes code to import and use it. Except there's no such library, of course.


No, not at all.

"classic" intellisense is reliable, so why introduce random source in the process?


I use Codeium in Neovim and yes, I find it very helpful. Of course, it's not 100% error-free, but even when it makes errors, most of the time it's easier for me to fix them than to write the code from scratch.


Often, yes. There were times when writing unit tests consisted of me just naming the test case, with 99% of the test code auto-generated based on the existing code and the name.
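
To sketch it in Go (parseConfig is a made-up function): I'd type only the func line, and the body is the kind of thing that got generated from the name plus the surrounding code:

    import "testing"

    // parseConfig is assumed to exist elsewhere in the package.
    func TestParseConfigRejectsEmptyInput(t *testing.T) {
        _, err := parseConfig("")
        if err == nil {
            t.Fatal("expected an error for empty input, got nil")
        }
    }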


Sounds like the model isn't trained well. In my experience, after making a few projects (two seems to be enough), even older Xcode versions managed to give good suggestions in well over 50% of cases.


It is useful in our use case.

Realtime tab completion is good at some really mundane things within the current file.

You still need a chat model, like Claude 3.5, for more exploratory things.


I was evaluating it for a month and caught myself regularly switching to an IDE with non-AI intellisense because I wanted code that actually works.


No, not at all. It’s just the hype. It doesn’t replace engineering.


The one Xcode has is particularly bad, unfortunately.


Copilot is very good.



