
My guess would be not much. LLMs are pretty useless concerning anything novel. Maybe an LLM could update the docstrings?

I'm assuming they meant replacing the markov model with a modern language model, not having an LLM magically improve the repo.

That sounds like it'd give you no hope at all of ever learning muscle memory for it.

I personally am not a vim user in any way and have no intention to use it, neovim, or its keybindings, but I thoroughly enjoy such articles.

According to the Lindy Effect, we can expect such articles to continue until 2060. I personally feel that vim and its derivatives will be with us as long as our civilization supports computing.


This is a key point and one of the reasons why I think LLMs will fall short of expectations. Take the saying "Code is a liability," and the fact that with LLMs you are able to create so much more code than you normally would:

The logical conclusion is that projects will balloon with code, pushing LLMs to their limits, and this massive amount of code is going to contain more bugs and be more costly to maintain.

Anecdotally, supposedly most devs are using some form of AI for writing code, and the software I use isn't magically getting better (I'm not seeing an increased rate of features or less buggy software).


My biggest challenges in building a new application and maintaining an existing one are a lack of good unit tests and functional tests, questionable code coverage, a lack of documentation, and excessively byzantine build and test environments.

Cranking out yet more code, though, is not difficult (and junior programmers are cheap). LLMs truly do produce code like a (very bad) junior programmer: when asked for unit tests, they take the easiest path and make something that passes but won't catch serious regressions. Sometimes I've simply reprompted with "Please write that code in a more proper, Pythonic way". When it comes to financial calculations around dates, date intervals, rounding, and so on, they often get things just ever-so-slightly wrong, which makes them basically useless for financial or payroll types of applications.
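To illustrate the flavor of rounding bug I keep running into, here's a contrived Python sketch (the numbers and names are made up, not from any real client code):

    # Hypothetical payroll line item. LLM-generated code tends to reach for
    # float + round(), but 2.675 is not exactly representable as a float,
    # so the "obvious" version quietly comes out a cent short.
    from decimal import Decimal, ROUND_HALF_UP

    gross = 2.675
    print(round(gross, 2))    # 2.67 -- float representation error

    exact = Decimal("2.675")  # what the domain usually requires
    print(exact.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 2.68

One wrong cent per line item is exactly the kind of ever-so-slightly-wrong result that makes the output useless for payroll.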

It also doesn't help much with the main bulk of my (paid) work these days, which is migrating apps from some old platform like vintage C#-x86, or from some vendor thing like a big pile of Google AppScript, Jotform, Zapier, and so on, into a more maintainable and testable framework that doesn't depend on subscription cloud services. So far I can't find a way to make LLMs productive at all at that - perhaps that's a good thing, since clients still see fit to pay decently for this work.


I don't understand - why does the existence of a CLI tool mean we're risking a grey goo situation if an LLM helps produce Dart code for my production Flutter app?

My guess is you're thinking I'm writing duplicative code for the hell of it, instead of just using the CLI tool - no. I can't run arbitrary binaries, at all, on at least 4 of the 6 platforms.

Beyond that, that's why we test.


Apologies if it looked like I was singling out your comment. It was more that those comments brought the idea to mind that sheer code generation without skilled thought directing it may lead to unintended negative outcomes.

> I’ve been coding for a long time...

Having been coding for a long time, I don't think you fall into the same category. And since Dart's paradigms aren't too different from those of other standard languages, a lot of these skills are probably transferable.

Beginners, on the other hand, I've seen using LLMs despite being unable to write a proper for-loop without AI assistance. They lack the fundamental ability to "run code in their head." This type of person, I feel, would be utterly limited by the capabilities of the AI model and would fail in lockstep with it.


This is kind of the classic “kids these days” argument: that because we understand something from what we consider the foundational level up to what we consider the goal level, anyone who comes along later and starts at higher levels will be limited / less capable / etc.

It is true, but also irrelevant. Just like most programmers today do not need to understand CPU architecture or even assembly language, programmers in the future will not need to understand for loops the way we do.

They will get good at writing LLM-optimized specifications that produce the desired results. And that will be fine, even if we old-timers bemoan that they don’t really program the way we do.

Yes, the abstractions required will be inefficient. And we will always be able to say that when those kids solve our kinds of problems, our methods are better. Just like assembly programmers can look at simple python programs and be astounded at the complexity “required” to solve simple problems.


I agree with this take, actually. I can imagine programming in the future consisting mostly of interactions with LLMs. Such interactions would probably be constrained enough to get the success rate of LLMs sufficiently high, maybe involving specialized DSLs made for LLMs.

I do think the future may be more varied. Just like today where I look at kernel/systems/DB engineering and it seems almost arcane to me, I feel like there will be another stratum created, working on things where LLMs don't suffice.

A lot of this will also depend on how far LLMs get. I would think there would have to be more ChatGPT-like breakthroughs before this new type of developer can emerge.


I feel like you underestimate how much effort goes into making CPUs reliable, and how low-level and well-specified the problem of building a VM/compiler is compared to getting from a natural language specification to an executable program. Solving that reliably is basically AGI - I doubt there will be many humans in the loop if we reach that point.

I get CPUs; I worked at Intel and cut my teeth on x86 assembly.

But the fact that some people need to understand CPU architecture does not mean all people need to.

The vast, vast majority of programmers do not need to understand CPUs or compilers today. That’s fine. It is also fine that many new programmers won’t even think in the form of functions and return values and types the way we do.

I’m not saying traditional hard science tech is useless. I am saying it is not mandatory for everyone.


Yeah, but what I'm saying is that the level of engineering effort that goes into making such a relatively simple abstraction is huge, and we have decades of experience behind it.

I think if AI ever gets to the point where it's that reliable at natural language -> code, we're into the AGI era and I don't see a role for programmers at all - bridging that layer successfully requires some very careful analysis and context awareness.

Unless you think we're headed in a direction where LLMs are gluing together idiot-proof boxes that are super inefficient but get the job done. I can sort of see that happening, but in my experience reasoning through/debugging natural language specs is harder than going through the equivalent code - I don't think we're getting much value there, and we're adding a huge layer of inefficiency.


> The sum total of the human-generated knowledge was derived in a similar manner, with each generation learning from the one before and expanding the pool of knowledge incrementally.

Is human knowledge really derived in a similar manner though? That reduction of biological processes to compression algorithms seems like a huge oversimplification.

It's almost like saying that all of human knowledge derives from Einstein's field equations, the Standard Model Lagrangian, and the second law of thermodynamics (what else could human knowledge really derive from?), and that all we have to do to create artificial intelligence is model these forces to a high enough fidelity and with enough computation.


It's not just any compression algorithm, though; it's a specific sort of algorithm whose purpose is not compression, even if compression is necessary for achieving that purpose. It could not be replaced by most other compression algorithms.

Having said that, I think this picture is missing something: when we teach each new generation what we know, part of that process involves recapitulating the steps by which we got to where we are. It is a highly selective (compressed?) history, however, focusing on the things that made a difference and putting aside most of the false starts, dead ends and mistaken notions (except when the topic is history, of course, and often even then.)

I do not know if this view has any significance for AI.


Human knowledge also tends to be tied to an objective, mostly constant reality.

The AIs could also learn from and interact with reality, same as humans.

Not really.

The models we use nowadays operate on discrete tokens. To grossly oversimplify the process of human learning: we take in a constant stream of realtime information. It never ends and it’s never discrete. Nor do we learn in an isolated “learn” stage in which we’re not interacting with our environment.

If you try taking reality and breaking it into discrete (and, in the case of LLMs, ordered) parts, you lose information.
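As a toy illustration (just a sketch, nothing to do with how any real tokenizer works): quantize a continuous signal into a small discrete alphabet and no decoder can recover what was thrown away.

    # Toy sketch: map real-valued samples in [-1, 1] onto 8 integer "tokens",
    # then reconstruct as well as possible; the round trip is never exact.
    import math

    LEVELS = 8

    def tokenize(signal):
        return [min(LEVELS - 1, int((x + 1) / 2 * LEVELS)) for x in signal]

    def detokenize(tokens):
        # best possible reconstruction: the centre of each bucket
        return [(t + 0.5) * 2 / LEVELS - 1 for t in tokens]

    signal = [math.sin(t / 7) for t in range(10)]
    roundtrip = detokenize(tokenize(signal))
    print(max(abs(a - b) for a, b in zip(signal, roundtrip)))  # > 0: information lost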


As an experiment, at my work I've stopped using all AI tools and gone back to my pre-AI workflows. It was kind of weird and difficult at first, like maybe having to drive without GPS navigation, but I feel like I'm essentially back at my pre-AI speed.

This experiment made me think that maybe most of the benefit from AI comes from a mental workload shift that our minds subconsciously crave. It's not that we achieve astronomical levels of productivity, but rather that our minds are freed from large programming tasks (which may have downstream effects, of course).


> However, I also think we're lacking text focused environments for smaller scale programs, the equivalent of shell scripts or BASIC programs.

I actually think Emacs is the perfect environment for this. Not only that, but Emacs can go to an even smaller scale: keyboard macros. For example, I've used keyboard macros to update data with a combination of Emacs HTTP request modes and org-mode. And this is all doable because of Emacs' text orientation.


a lot of which were founded on the promises of AI: Symbolics, Thinking Machines Corporation


There are a lot of comparisons that could be drawn: Web 3.0, the internet, the dot-com bubble, etc., but I think the most appropriate comparison would be to... AI in the past. No one doubts that there was a lot of value coming from that research. In fact, a lot of it is incorporated into our everyday lives. But it didn't live up to its hype. I suspect the same will be true for this wave of AI (and perhaps an associated AI winter).


My recollection of AI in the past is that it was nothing like this.

If you look at the Wikipedia article 'History of artificial intelligence', right now it has 'AI boom' and '2024 Nobel Prizes', but everything earlier is kind of meh.

I remember sitting down with pen and paper 44 years ago to try to write a ChatGPT-type chatbot, and of course totally failing to get anywhere, but I've followed the goings-on since, and this is the first time this stuff is working well.


As an enthusiast myself, I'd be tempted to come to the same conclusion, but given the options and learning resources out there, I'm surprised how quickly some people can get up to speed in Emacs. Yeah, it will probably take a good weekend project's worth of time, but making the switch is pretty doable.


Learning the basics you need is just one part; it takes far longer to get used to actually using what you learned. Whereas when switching from IntelliJ to VSCode, you just need the basic setup and the appropriate plug-ins, and then you're basically fully up to speed. You'll take much longer just to get used to M-w instead of Ctrl+C.

