Ask HN: How do you keep up with advances in AI-assisted programming techniques?
37 points by thisisbrians on Nov 7, 2023 | 46 comments
Obviously, things are moving very quickly. How are you keeping up?

I'm building a list of folks to follow on X (formerly Twitter, sigh...) and trawling the comments here on HN.




By and large, just ignore them? Life's a lot easier when you don't fancy yourself a 10x programmer, accept mediocrity, and just absorb best practices by osmosis a few years after they're developed by the bigshots.

This reminds me a lot of early Javascript: a thousand small companies each promising a 10% improvement to your workflow. Wait a couple years and it'll be three companies each offering 150% boosts instead, with clear instructions.

It's an exciting time if you like to be on the bleeding edge, but then you should be working with one of those companies. Otherwise, just wait for the dust to settle...


My 2¢ having tried several "AI pair programmer" solutions on a recent project: Don't wait.

It's not anything like committing to a new tool or framework. There's not going to be a "winner". Delaying a personal understanding of how AI-assisted development is going to change software engineering will just hurt folks who aren't in the twilight of their careers.


> absorb best practices by osmosis a few years after they're developed by the bigshots

Better yet, ignore the concept of "best practices" and learn what different software development practices have to offer in different contexts, from your personal experience and from nuanced reports from people you (at least somewhat) trust.

There are a lot of ideas flying around about how to do software development better. In my experience, someone describing an idea as a "best practice" is more likely a red flag than a sign you should pay attention.

That said, try Copilot. It's surprisingly helpful.


Yeah, Copilot is worthwhile. You need to keep an eye on it; I've definitely been sent down the wrong path plenty of times, but all in all it's been enough help, enough times, to be worth it.

The trick is knowing when to have it off and when to have it on.

I particularly like Copilot Chat. Really nice to have there in the IDE.


Can I ask for a real example of it? I'm a senior dev, worked in software for ~18 years now. We work in a medium to large sized Rails codebase, with some idioms for things like logging and error handling that have been built over time. We mostly deal with detailed business logic.

I have really wanted to like AI assistance, but I can't figure out when it really helps. I'm already an expert at Ruby; a targeted Google search is faster than asking when it comes to an API I know exists and I just need the order of arguments. It generates annoyingly "wrong" code, since it doesn't know our idioms.

Copilot is basically just a context-aware one-line autocomplete. I can't figure out how to use it for more.


Not just one line, but often 3-4 lines, and when writing tests, it often completes 10+ lines for me, and I only have to make a few corrections, sometimes none at all.

You have to read every line and character it suggests, but even so, it saves time.

It has also helped me while learning Typescript. As a beginner, it wasn't/isn't always clear to me how to find the types I need to use in third-party libraries like the Firebase SDK, and generating the code via Copilot and then finding the types it references helps me work backwards to understand where the types are declared. Sometimes, working with Copilot is like having documentation with tailor-made examples. Sometimes it's just wrong, though, so it's a mixed bag.


I'm not young, but considerably less experienced and more of a data scientist, so YMMV.

I'm signed up to the Copilot Chat preview so there are two parts to this answer: autocomplete and chat.

Autocomplete is great for things like boilerplate, docstrings, and annoying tip-of-tongue syntax.

It's a lot less helpful when working on something more creative or difficult, and that's when it needs to be off. I find it's an absolute no-go for complex data analysis, for example.

The Chat function is just GPT in the IDE, so using it is optional at all times.

As a concrete example, I was doing a Django project recently. I've done a few in the past, but I'm far from being an expert.

Things like the models.py were a breeze with the autocomplete on, but it fell apart rapidly when I was writing a fiddly feature.
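For a rough illustration (hypothetical models, not my actual project), this is the kind of models.py boilerplate the autocomplete flies through:

    # Hypothetical example, not the real project: repetitive field
    # declarations and __str__ methods are exactly what autocomplete handles well.
    from django.db import models

    class Author(models.Model):
        name = models.CharField(max_length=200)
        email = models.EmailField(unique=True)
        created_at = models.DateTimeField(auto_now_add=True)

        def __str__(self):
            return self.name

    class Article(models.Model):
        author = models.ForeignKey(Author, on_delete=models.CASCADE, related_name="articles")
        title = models.CharField(max_length=200)
        body = models.TextField()
        published = models.BooleanField(default=False)

        def __str__(self):
            return self.title

Once the first field or two is typed, it tends to suggest the rest, and reviewing the suggestions is usually faster than typing them out.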

I used the chat whenever I ran across issues for which I wasn't finding quick answers on SO. This generally worked well, particularly as you can ask as many follow-up questions as you like.

I also found chat good in other areas where I'm out of my comfort zone, like throwing some vanilla HTML at it and asking for some CSS to lay it out and format it in a particular way.

Long and short is it enabled a fair-to-middling dev to rapidly knock up and document something in a relatively unfamiliar framework that was useful to his org.

In your case, you might find you have more truck with it when exploring something new or less familiar.


My impression is that Copilot mostly saves me a lot of typing, even on top of using Nvim. Mostly it's those little refactors where part of some complex statement is moved to a separate variable. On personal codebases, it also sometimes writes helper functions that are on the edge of what could have been in the base language.


I don't consider myself a 10x programmer, but I do consider writing unit tests and mapping values to properties to be tedious. AI (copilot) makes those less tedious. I love it.
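To make that concrete with a hypothetical sketch (the class and field names are invented), the kind of rote property mapping I mean looks like this, and Copilot will happily fill it in field by field:

    # Hypothetical illustration of the tedium: copying raw values onto
    # typed properties one by one, which Copilot completes almost entirely.
    from dataclasses import dataclass

    @dataclass
    class UserProfile:
        user_id: int
        email: str
        display_name: str
        is_active: bool

    def profile_from_row(row: dict) -> UserProfile:
        return UserProfile(
            user_id=int(row["id"]),
            email=row["email"],
            display_name=row.get("display_name", ""),
            is_active=bool(row.get("is_active", True)),
        )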


I don't get that take. Wouldn't most people use a tool if it made them even 5% or 10% more efficient at their job after a little bit of research? Sure, digging for the tool and the task use case takes time, but it'll more than pay for itself.


Deskwork efficiency has never been a challenge in my work or in my career. My productivity is usually paced to some client or collaborator already, so getting a little bit more done in my time doesn't win me much but a longer wait at that gate. That's been consistently the case for 25+ years.

So no, I don't invest a lot of attention in further optimizing how quickly I get my work done. If anything, I'd rather invest that attention in strengthening the routines and practices of my work so that it demands less conscious attention as I do it, at no loss of quality. Write a module 5% faster? Doesn't matter to anyone. Leave a workday refreshed and ripe because I've mastered my tools and workflows? Huge win for me.


If that were the case, most developers would take a few hours to learn how to use regular expressions to search their code faster. But they don't.


Not accounted for in the initial calculation is the cost of adoption: you need to learn how to apply these tools to your workflow in the most efficient / least disruptive way to actually realize the 5% improvement.

I don't think anyone's against self-improvement, but the cognitive overload of continually juggling multiple 5% improvements just doesn't make sense if you value your own time, versus waiting for a possibly more disruptive (but in reality usually more streamlined) 25% improvement down the line. That's especially true if you have to essentially embed yourself as an expert in the domain to be able to follow the improvements.


5/10% efficiency gains may not translate into 5/10% compensation gains.


Thank you for this, well articulated.


The same way I've successfully kept up on tech progress for 3+ decades now:

By sitting back, doing what I know, and letting everyone else burn their energy on "keeping up". Then, when the hype has died down and I see what really has worked and stuck with people, I go learn it.


On the one hand I keep wondering if that would not have been the better approach, as keeping up with generative AI developments for the last year nearly did my head in.

On the other hand, I have a pretty good feeling for just how this is going to change things and what can be done with the current and future models.

I'm betting on easy-access UI tools never being as useful as working with raw models. If I'm wrong, oh well. I had a lot of fun tinkering.


Great as an employee approach. Not as good as a founder approach.


You don’t need to chase the shiny to be a successful founder of a successful company.

https://boringtechnology.club


Yeah. I subscribe to the theory of conservation-of-cool. You get to either build something cool with something boring, or something boring with something cool.


I love boring tech! Building boring tech with AI tooling is what I'm after with all this.


Are you founding something in "AI-assisted programming"?

Because if you're not, it sounds like you're distracting yourself with shiny things instead of focusing on your industry, on your investors, on your leads and clients, on your team, etc. While common, that sounds like a terrible founder approach.

AI-assisted programming may be something that your engineers bring into your company because they find it improves their work. But like any other tool one's staff may prefer, your role as a leader doesn't involve "keeping up on the advances". At best, it involves sourcing trusted perspectives when you face a decision point (authorizing a request, perhaps), making the choice, and then moving on to other leadership tasks.


Founder in what? Figure out what your core competency is and keep up to date on that. For everything else, stick with the tried and true.


Not every founder is a sole engineer who codes on their own.


I ignore them. If a technique is useful enough, it will become mainstream and won't need to be advertised by a small community of apostles on X or Mastodon.

I work as a junior programmer/sysadmin. Most of the work I do has centered around understanding the abstract logical nature of the problems I solve. Thus, I've had few issues working out a routing policy in IOS, JunOS, OpenBSD, or plain old Linux, for example. Or writing some WSGI application to interface with an old water meter. Or fixing a broken Ubuntu 12 webserver that no one has accessed in years.

Considering the comments on another HN topic today, it still looks like GPTs have serious issues with abstraction, which I personally experienced when I asked one to fix an issue I'd been having with the data structures I've been using in a project. So I don't think I'll be running out of things to do any time soon.


Here are a few lesser-known resources I’ve found:

- aider - AI pair programming in your terminal https://aider.chat/

- Tips on coding with GPT (from the author of the above): https://news.ycombinator.com/item?id=36211879

- Cursor - The AI-first Code Editor https://cursor.sh/


Phind is also great, in case you haven't tried it.

https://www.phind.com/

https://www.phind.com/blog


Do you have any advice for working around the 10,000 token rate limit when using aider? It seems to be a huge limiting reagent, as it makes it pretty much impossible to use on large files.


Read widely, absorb judiciously, adopt frugally


I really like this motto.


Don't let the hypemongers rope you into thinking you're missing out. FOMO is above all a sales tactic.


I mostly don't. I turn them off if they are imposed upon me and carry on without the distractions.


LlamaIndex, HuggingFace, LangChain, and Ludwig are functioning as partial proxy aggregators of what is working, and working better than the alternatives. Most people in this space have multiple models for virtually every step in their heads. You can't and shouldn't try to process them all yourself. Thus you need aggregation and filtering driven by some objective(s) and purposeful assumptions, or you're a boat on a turbulent sea.

One simple evaluation I use is how easy it is to spin up a working demo, in isolation or in the context of other tools; it shows the maturity of thought in moving the idea toward application. If there is no benchmark on the output(s) before you start, that is more science/fantasizing than business. But maybe that's your goal; just don't delude yourself about what you're doing. I also look up project leads on LinkedIn and the like, since these things require push, and without a good promoter they usually don't pan out (see crypto, for example).

Buried lede: I've built a few tools that let me parse the tree and graph structures of new projects into charts/schemas, to get quick visuals for myself and for the LLMs I use to code. I've also built tooling to test rapidly: if everything is dynamic, optimizing measurement is quite valuable in and of itself.
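As a minimal sketch of the tree-parsing idea (the real tooling is more involved, and the helper name here is made up), something like this produces a quick structural map of a project that can be pasted into a prompt:

    # Minimal sketch: walk a project and emit an indented outline of its
    # directories and files, as cheap structural context for an LLM prompt.
    import os

    def project_tree(root: str, skip=(".git", "node_modules", "__pycache__")) -> str:
        lines = []
        for dirpath, dirnames, filenames in os.walk(root):
            # Prune noisy directories in place so os.walk never descends into them.
            dirnames[:] = sorted(d for d in dirnames if d not in skip)
            rel = os.path.relpath(dirpath, root)
            depth = 0 if rel == "." else rel.count(os.sep) + 1
            indent = "    " * depth
            lines.append(f"{indent}{os.path.basename(os.path.abspath(dirpath))}/")
            for name in sorted(filenames):
                lines.append(f"{indent}    {name}")
        return "\n".join(lines)

    if __name__ == "__main__":
        print(project_tree("."))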


Keep up? I just ask ChatGPT stuff and use Copilot autocomplete. Just doing that has been a huge help and I'm not sure how much more I need at this time!


I just ignore them, mostly. I find that in the space I currently work in, both ChatGPT and Bard hallucinate like crazy: I get recommendations for things that don't exist, and when I ask them about this they just keep hallucinating. Python packages that don't exist at all, solutions that are impossible, step-by-step guides that straight up lie.

On top of that I can't really use any AI tools in my editor as that would violate work policy, so no auto-completion for work stuff.

I use GH Copilot at home sometimes for fun but mostly for generating mock data or quick examples I can give to students when I work in another role.


Simply reading Hacker News makes you better informed than about 80% of other engineers. Follow and read popular posts from a few professors, entrepreneurs, and communities on Twitter/Reddit/Quora to get to about 95%. Hunt for specific information to get to around 98%. Build something new and useful to get to 99%.


I read about the standout parts after the fact. I find any other approach (for any newish topic - not just AI) is unsustainable.

E.g. I see a lot of similar comments with multiple replies specifically about Copilot - but that's exactly what I'm talking about. ChatGPT, DALL-E, Copilot - these are the standout things that everybody not keeping up with AI knows about.


Visual Studio's new snippet experience was auto-rolled out: first it was a performance issue, now it's merely "mostly useless".


Twitter. AI news/info on Twitter is ridiculously better than anything found on HN.


I find the ratio of interesting content to garbage on Twitter not great, especially lately with the lack of moderation. The important tweets will make it to HN or some subreddits after a few hours.


Maybe Reddit (I avoid it), but there's no question HN misses 95% of what's happening in ML, maybe more. The 5% it gets isn't even the good stuff.

As another anecdote, I've had the opposite experience w.r.t. moderation under Musk: it's way, way better for me. Twitter is roughly back to where it was in 2016.


By the time things are useful, you start noticing a proportion of colleagues using them (or the tools are imposed on you by your company). If it's not the case, the tools are probably not mature enough or just passing fads.


    pip install aider-chat
It checks for updates on startup and shows the new version number (and command to update).


What techniques do you need? Just talk to GPT-4, explain your problem, and ask for a solution.


In addition to reading online, talking with coworkers and friends helps a lot.


Matthew Berman’s YouTube channel



