>> "we're already not writing machine code by hand for 50 years, how is AI different from a higher level language?"
I never got that argument. Compilers are deterministic algorithms, some even formally verified. If you understand what a compiler does, you can have a pretty good idea of what it will produce. If it doesn't do that, it's a bug. The definition of correctness is well defined by semantic equivalence.
LLMs are none of that. They're fuzzy systems that approximate your intent and do their best. I can make my intent more and more specific to get closer to what I want, but given that it's all just regular spoken language, it's still open to interpretation. All of that is still quite useful, but I don't get the assembly language comparison here.
Because compilers are only deterministic when using ahead-of-time compilation, without profiling data, and with the same set of compiler flags every time.
Introduce dynamic compilation, profiling data, optimization passes, multiple implementations, and ML-driven heuristics, and deterministic assembly output from a compiler becomes much harder to achieve.
You are right about that, but that's about what gets generated, not what the output does. My point is that compilers are still designed to preserve semantic equivalence. Semantic equivalence makes sense here because the semantics are well defined for both input and output. That part is supposed to be deterministic; if something breaks it, that's a bug.
I just don't think comparing with compilers is a good argument.
>> Bad engineers continue being bad, good engineers continue being good.
I don't know if good engineers can necessarily continue to be good. There is a limit to how much careful consideration you can give when everything is on an accelerated timeline. And good or not, there is a limit to how much influence you have over setting those timelines. The whole playing field is changing.
It's deeper. We used to mock architects that stepped back and stopped coding, because they generated trash.
There's a cycle that is needed for good system design. Start with a problem and an approach, and write some code. As you write the code, you reify the design and flesh out the edge cases, learning where you got the details wrong. As you learn the details, you go back to the drawing board and shuffle the puzzle pieces, and try again.
Polished, effective systems don't just fall out of an engineer's head. They're learned as you shape them.
Good engineers won't continue to be good when vibe-coding, because the thing that made them good was the learning loop. They may be able to coast for a while, at best.
Reminds me of Gall’s Law from his book Systemantics.
A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.
You don’t need to write code by hand to learn from iterations and experiments. I run more experiments and try out more different solutions than I ever could before, and that leads to better decisions. I still read all the code that gets shipped, and don’t want to give that up, but the idea that all craft and learning is lost when you don’t is a bit silly. The craft/learning just moves.
Imo the biggest issue with these no-code architects has been that you could become one without ever having coded at any noteworthy level of skill (and most of them were exactly that).
In my experience, in a lot of organizations, a lot of people either lacked the ability or the willingness to achieve any level of technical competence.
Many of these people played the management game, and even if they started out as devs (very mediocre ones at best), they quickly transitioned out from the trenches and started producing vague technical guidance that usually did nothing to address the problems at hand, but could be endlessly recycled to any scenario.
The entire mistake you are making is comparing using AI to skimming textbooks, or taking shortcuts. Your entire premise is wrong.
People who care about craft will care about the quality of what they produce whether they use AI or not.
The code I ship now is better tested and better thought through than before I used AI, because I can do a lot more. That extra time goes into additional experiments, jumping down more rabbit holes, and trying out ideas I previously couldn't due to time constraints. It's freeing to be able to spend more time improving quality, because the ROI on time spent experimenting has gone up dramatically.
a) I cannot effectively review more than 2000 lines of code a day. The LLMs can produce much more than that.
b) Even if I accepted my reading throughput limitations as the cost of being in the loop, reading is not enough to keep cognitive debt in check: my skills will atrophy if I do not participate in the writing ("What I cannot create I cannot understand").
So, to me, it seems like we humans either have to come up with higher-level (and deterministic) abstractions than code to communicate with LLMs, or resign ourselves to letting the LLM guess what we want from English and then banging on the output to see if it sort of works. The latter state of affairs seems to be the current trend, and I find it absolutely revolting.
I think the distinction is that for experiments and prototypes the behaviour of the final system is what we are trying to design. We can experiment and see the tradeoffs and explore the design space before committing to a direction. And then we can sit down and produce the final code to a quality we are happy with. If you are serious about this process, there is no way you are producing 1000s of lines of code a day, unless it is trivial boilerplate.
In terms of higher-level abstractions, I agree this is one particularly treacherous rung on the ladder of abstractions. Previous abstractions like compilers or garbage collectors at least had more structure and rules to rely on. I don't know exactly how this will look, but I don't think we will rely solely on banging on the output: we will also spot-check the source code, use profilers and other tools to inspect the behaviour of systems, and ask the agent to explain the architectural decisions it made. I do believe that people who care will still find ways to do good work.
My agentic workflow probably differs somewhat from the majority of others here, but I can positively guarantee you that both the quality and the quantity of my output are significantly higher than they have ever been in my 20-something years of writing code. And at least 90% of the code I've written this year was output by an LLM. You can keep sticking your head in the sand; in the end it will only be to your own detriment.
Well you have obviously already made up your mind, so have fun with your confirmation bias. We'll all be over here having a good time, getting more work done. Feel free to come over when you put down your grudge.
This is an unpopular take, but when I was in undergrad maths, in old-school two-semester courses covered by a single exam (exercises + oral) at the end, I was able to score 60-80% on the exercises when I had done only theory as prep.
I couldn't solve the exercises that hinged on tricks/shortcuts you only pick up by doing lots of exercises, but many of those are the same tricks/shortcuts used in the proofs.
This was indeed rare among students, but let's not discount that there are people who _can_ learn from well-systemized material and then apply it in practice. Everyone does this to some extent, or we would all have to learn everything from the basics.
The problem with SW design is that it is not well systemized, and we still have at least two strong opposing currents (agile/iterative vs waterfall/pre-designed).
Good engineers are also capable of managing expectations. They can effectively communicate with stakeholders what compromises must be made in order to meet accelerated timelines, just as they always have.
We've already had conversations with overeager product people about the ramifications of introducing their vibe-coded monstrosities:
- Have you considered X?
- Have you considered Y?
Their contributions are quickly shot down by other stakeholders as being too risky compared to the more measured contributions of proper engineers (still accelerated by AI, but not fully vibe-coded).
If that’s not the situation where you work, then unfortunately it’s time to start playing politics or find a new place to work that knows how to properly assess risk.
Yep, validation is key. The smartest thing I've heard on this, which has reoriented how I think about it, is that the objective function of a piece of software is now more important to get right than the implementation.
> the objective function of a piece of software is now more important to get right than the implementation
That has always been the case. It's why a couple of days spent getting properly fleshed-out requirements down could replace weeks or even months of programming and other project busywork.
Agreed, it has always been the case, but I'd never thought of it that explicitly before. And I might argue that the important distinction now is that the objective function is programmatically verifiable (which the word "requirements" has not always implied).
No, what is rewarded is "the code has been shown to conform to the given objective function" and especially "that objective function is a good representation of what we are trying to accomplish with this code".
I say this usually about self-driving cars, but the phrase fits here too.
"It doesn't need to be perfect. It just needs to be better than the average human, and humans suck at driving."
I estimate that I'm now spending about 10 to 30 fewer hours a week on the mechanical parts of writing and refactoring code, researching how to plumb components together, and doing "figure out how to do unfamiliar thing" research.
All of those hours are time that can now be spent doing "careful consideration" (or just being with my family or at the gym or reading a book, which is all cognitively valuable as well).
Now, I suppose I agree that if timelines accelerate ahead of that regained time, then I'm net worse off, but in my experience that's not the situation at the moment.
Maybe we do different things. Not that you're wrong about spending less time on things you don't care about, but all those mechanical things help you build a really good mental model of your product, from high-level design down to individual classes. If I already have a good mental model, I can direct AI to make really good changes fast; if I don't, I still get things done, but I end up with less-than-ideal changes that compound over time.
What you said about "figure out how to do unfamiliar thing" is correct, and it will get things done, but overall quality, maintainability, and understanding how the individual pieces work: that's what you don't get. One can argue that nobody needs to care about all that because AI can take care of it, or already can. I don't think that's true today, at least.
I guess I just don't really agree that doing the tedious mechanical things is all that helpful for building the necessary mental model. I mean, I do think it was useful (indeed, necessary) for me to actually type out very similar lines of code over and over again when I was building up the programming skillset, but I really think the marginal value of that is just very low for me at this point. I worry a lot about how we're going to train the next generation of people without there being any incentive to do this part of the process! But for me, I already did that part.
What I find actually necessary for building a mental model of the system is not typing out the definitions of the classes and such, but operating and debugging the system. I really do need to try to do things, dig into logs, and figure out what's going on when something is off. And that pretty much always ends up requiring reading and understanding a bunch of the implementation. But whether I personally typed out that implementation, or one of my colleagues did, or an AI, is less important.
I mean, I already had to be able to build a mental model of a system that I didn't fully implement myself! I essentially never work on anything that I have developed in its entirety on my own.
Yeah! I mean, who needs to LEARN how to do these things properly when you can just let an autocorrect on steroids hallucinate the closest thing to "barely working". Right?
10 to 30 hours saved on not learning new things! Hurray!
I genuinely don't understand what you're talking about with this comment. Learn how to do what things properly? I've been writing software for two decades... I'm not primarily in a learning phase, I'm in a doing phase. I'll take advantage of tools that save me time and energy in my work (for the right price). Why wouldn't I?
What do you mean by "barely working"? I can now put more iterations into getting things working better, more quickly, with less effort. That seems good to me.
10 to 30 hours a week is 25% to 75% of my time working. Seems like a pretty good trade?
I do understand that the calculation is different for people who are new to this. And I worry a lot about how people will build their skills and expertise when there is no incentive to put in all the tedious legwork. But that just isn't the phase of my career that I'm in...
There is simply no chance that LLMs are saving you 30 hours of work a week, especially if they're doing something where you'd have to do the research yourself. Either you're just simply wrong, or you went from understanding the code you were writing to skimming whatever the magic box spits out and either merging it outright or pawning off the effort of review on someone else.
That's why I gave a range. I didn't say it saves me 30 hours every week, I said 10 to 30 hours a week. So 30 is the max of the range, and I'd say the distribution is pretty heavily left-skewed. It really depends on what I'm doing, but I do think there are weeks where it has saved me 75% of the time I would otherwise have spent. I think there are two kinds of weeks where this is the case:
1. A week where I would have otherwise spent the majority of my time writing out and refactoring a ton of implementation code. This is very rare for me, but it does exist. I can remember how it could take me a whole week just to "code up" meaningfully sized prototypes or greenfield implementations of some unambiguous thing. Truly, now, for that kind of work, Claude Code can save me full days of mechanical work.
2. A week where there is something very subtle going on that I have to figure out, probably having to do with some component or system I'm not very familiar with yet. Having an AI tool as a rubber ducky, or like a supercharged stackoverflow, can save me days of reading, debugging, working on minimal repros, etc.
Again, I'm not saying this is the common case at all. And estimating this kind of thing is always wildly inaccurate, so sure, take it with a grain of salt. But I know that a few times now, doing estimates based on my past experience, I've said "that will take me a week" (in case #1) or "gosh, I dunno, that's a tricky one, that might take me a week to figure out" (in case #2), and instead it only took me a day.
But honestly I think people focus too much on the high end of this range. The more valuable thing to me is the large number of weeks where it saves me that 10 to 15 hours, where I can then use that time to research new things, try more ideas, say "yes" to more things, or just not spend that time working.
My one question for you: what's your level of editor fluency? Because I would really like to know if there's a correlation between claiming these kinds of time savings and not using advanced features in your editor.
My time is spent more on editing code than on writing new lines. Because code is so repetitive, I mostly copy-paste, use completion and the snippets engine, and reorganize code. If I need a new module, I just copy the most similar one, strip it down, and add the new parts. That means I only write 20 lines of that 200-line diff.
Also my editor (emacs) is my hub where I launch builds and tests, where I commit code, where I track todo and jot notes. Everything accessible with a short sequence of keys. Once you have a setup like this, it’s flow state for every task. Using LLM tools is painful, like being in a cubicle reading reports when you could be mentally skiing on code.
My 2023 to early 2025 usage of AI was as "slight improvement to my existing editing and autocomplete capabilities". That was great and I loved it. But sometime over the last 12 months it has switched to "mostly using the editor pane to read rather than edit".
Honestly I experience this as a great loss. All these hours over all these years perfecting the vim editing movements! And now I only spend like 10% of my time directly editing things anymore.
I feel like it would be fun (and also sad and nostalgic) to see a time lapse of the relative size and time spent focused between my editor pane, terminal pane, and AI tool pane. It has changed massively, especially in the last year.
With all that crap out there, a good engineer may simply opt out, call it good, and leave the industry. Personally, seeing the proliferation of vibe-coded apps has made me hesitant to publish and promote my AI-free apps.
>> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever. We have to get safety right, which is not just about aligning a model
The question is what they are doing about "getting safety right", and whether they are doing enough. To me it seems like all the focus is on hypergrowth and maximum adoption, and safety is just an afterthought. I understand it's a competitive market and everyone is doing it, but these are just hollow words. Industries that care about safety tend to slow down.
> I told my GF over dinner tonight that historians in 1000 years will look back to Nov 2023 as a pivotal fork where humans lost.
Yes, because no one listened to me. It was early-to-mid 2024, and here as well as in other places, people kept saying "oh well, the cat's out of the bag now, nothing can be done, it can't be stopped". I pointed out that only four or so planes made to collide with TSMC, NVIDIA, and ASML would be enough to give at least a decade of breathing room while we try to figure out how to keep this technology safe. I'm almost certain there were people who read it here, as well as elsewhere, who could have made it happen.
Claude has really helped me improve my Emacs config (elisp) substantially, and has sometimes even fixed issues I've found in packages. My Emacs setup is the best it has ever been. I can't say it just works and always produces the best solution: sometimes it would f** up closing parens or even make things up (e.g. it suggested a load-theme-hook, which doesn't exist). But overall, changing things in Emacs and learning elisp is definitely much easier for me now (I'm not good with elisp, but I'm a pretty good Racket programmer).
I used Emacs for about a decade and then switched to VS Code about eight years ago. I was curious about the state of Claude Code integration with Emacs, so I installed it to try out a couple of the Claude packages. My old .emacs.d that I toiled many hours to build is somewhere on some old hard drive, so I decided to just use Claude code to configure Emacs from scratch with a set of sane defaults.
I proceeded to spend about 45 minutes configuring Emacs. Not because Claude struggled with it, but because Claude was amazing at it and I just kept pushing it well beyond sane default territory. It was weirdly enthralling to have Claude nail customizations that I wouldn't have even bothered trying back in the day due to my poor elisp skills. It was a genuinely fun little exercise. But I went back to VS Code.
Came to post exactly this, except it’s got me using emacs again. I led myself into some mild psychosis where I attempted to mimic the Acme editor’s windowing system, but I recovered
Yeah, and all the little quirks I had with Emacs here and there, or things I wished I had in my workflow, I can now just fix or add without worrying about spending too much time (except sometimes, maybe). The full Emacs potential I felt I wasn't using, I'm using now, and I finally get why Emacs is so awesome.
E.g. I work in a huge monorepo at this new company, and Emacs TRAMP was super slow to work with. With Claude's help, I figured out which packages were making it worse, added some optimizations (Magit, project find-file), bolted caching onto some heavyweight operations (e.g. listing all files in the project) without changing the packages themselves, and added keybindings to my minibuffer map so I can quickly filter for the subproject I'm on while listing files. I could probably have done all this before too, but it would definitely have taken much longer, since I was never deep into the elisp ecosystem.
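The caching part was roughly this shape. A minimal sketch with made-up names (my/project-files-cache and my/cached-project-files are not a real package), memoizing the expensive listing via advice instead of patching project.el:

    ;; A minimal sketch of the caching idea; `my/project-files-cache' and
    ;; `my/cached-project-files' are made-up names, not a real package.
    ;; It memoizes the expensive per-project file listing via advice,
    ;; without modifying project.el itself.
    (defvar my/project-files-cache (make-hash-table :test #'equal)
      "Project file listings, keyed by project root.")

    (defun my/cached-project-files (orig-fn project &rest dirs)
      "Around-advice for `project-files' that caches results per project root."
      (let ((key (project-root project)))
        (or (gethash key my/project-files-cache)
            (puthash key (apply orig-fn project dirs) my/project-files-cache))))

    (advice-add 'project-files :around #'my/cached-project-files)
    ;; In practice you also want a command to flush the cache when files change.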
> Emacs TRAMP was super slow to work with. With Claude's help, I figured out [...]
Out of curiosity, did it advise you to configure auto-save and backup such that they write their files under ~/.emacs.d, rather than in the same directory alongside the (with Tramp, potentially remote) file they're about? Especially with vanilla Emacs, that's always the first place you want to look when you see freezes doing file operations on a remote host over a slow or flaky link.
I believe I first added that change to my .emacs in 2010 or 2011, and as far as I can recall, it was the only change I ever needed to make to address Tramp being slow sometimes.
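For reference, the whole change is only a few lines. A minimal sketch (the directory names under user-emacs-directory are just my choice):

    ;; Write backups and auto-save files under ~/.emacs.d instead of next
    ;; to the visited file, so Tramp never touches the remote host for them.
    (let ((backups    (expand-file-name "backups/" user-emacs-directory))
          (auto-saves (expand-file-name "auto-saves/" user-emacs-directory)))
      (dolist (dir (list backups auto-saves))
        (make-directory dir t))
      (setq backup-directory-alist `(("." . ,backups))
            auto-save-file-name-transforms `((".*" ,auto-saves t))))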
Hooking up Emacs to ECA with the emacs-eval MCP is fantastic: Claude can make changes in my active Emacs session, run the profiler, unload/reload things, log some computation or embark-export search results and show them in a buffer. It can play Tetris and drive itself crazy with M-x doctor; it's complete and utter bonkers. I can tell it to make some face color brighter or darker on the spot. The other day I fixed a posframe scaling issue that had bugged me for a long time; it's not even about "I don't know elisp", this specific thing requires you to sit down and calculate the geometry of things: mechanical, boring stuff. AI did it in minutes. VS Code, IntelliJ, any other shit that has no Lisp REPL? What are you even talking about? It's like a different world.
>> recently I realized that I read code, but almost never write it.
I think most engineers read more code than they write. I find it very hard not to use Emacs when reading large codebases. Interestingly, that's mostly because of file navigation: I love using ido/ivy for navigating files, quickly filtering through buffers, and magit.
Emacs in a terminal is not an ideal experience, though, so I can imagine it being many times worse with a phone keyboard.
it has never been my explicit goal. but i have certainly enjoyed the rewards of recognition (e.g. i was able to lean on a successful project of mine to help land a nice consulting gig) and it would be silly to ignore that.
(edit: the comment i replied to was edited to be more a statement about themselves rather than a question about other developers, so my comment probably makes less sense now)
I don't dispute your own personal motives, but if it's never been a goal for most people, then CC0 would be more popular than the BSD or MIT license - it's simpler and much more legally straightforward to apply.
I've worked on several open source projects, both voluntarily and for work. The recognition doesn't really need to be financial: if people out there are using what you're building, contributing back, appreciating it, that gives you the motivation to keep working. It's human nature, and the lack of it is one of the reasons there are so many abandoned projects out there.
Emacs is my editor/IDE of choice, and I consider myself a power user. However, I'm no expert in its internals or in elisp. I understand that things were built around single-threaded execution over decades. Still, I think things could be more async, offloading heavy work to a separate thread and streaming results back. E.g. Magit status doesn't need to block my entire editor: it could do its work in a separate thread and send the results back to the main thread just for rendering, once ready. Same with consult-ripgrep / consult-find-file / find-file-in-project etc.: there's no need to block the main thread's rendering and event handling until the entire result set is ready (in this case results could even be streamed). Maybe there is a way to make this much better with message passing/streaming instead of sharing state itself?
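To illustrate the pattern I mean (for external commands, where it is already possible today), here is a minimal sketch; my/async-git-status is a made-up name:

    ;; A minimal sketch; `my/async-git-status' is a made-up name. The work
    ;; happens in a subprocess, and the UI is only touched from the
    ;; sentinel, so the main loop never blocks while git runs.
    (defun my/async-git-status ()
      "Run `git status' without blocking Emacs; show the result when ready."
      (interactive)
      (let ((buf (generate-new-buffer "*async-git-status*")))
        (make-process
         :name "async-git-status"
         :buffer buf
         :command '("git" "status" "--short")
         :sentinel (lambda (proc _event)
                     ;; Called on the main loop when the process exits, so
                     ;; rendering stays inside the single-threaded world.
                     (when (eq (process-status proc) 'exit)
                       (display-buffer (process-buffer proc)))))))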
I love Emacs, but it really just fails to be effective for me when I work on monorepos and even more so, when I'm on tramp.
Probably all true, what you say about Magit and so on. Message-passing values would be one idea, but in the current situation, when one concurrent execution unit (a process) finishes its job, how does its "private", potentially modified state get merged back into the main Emacs global state? Let's say the concurrently running process creates some buffers to show, but in the meantime the user has rearranged their windows or split their view, and the concurrent process doesn't know about that, since it happened after its creation. Or maybe the user has meanwhile changed an important Emacs setting.
I think the current solutions for running things in separate threads are only for external tools. I guess to do more, a kind of protocol would need to be invented that tells a process exactly which parts of the copied global state it may change; when it finishes, only those parts get merged back into the main process's global state.
Maybe I understood things wrong and things are different than I understood them to be. I am not an Emacs core developer. Just a user, who watched a few videos.
Tramp can be sped up a bit; I remember seeing some blog posts about it. I guess it can still get slow if you need to go via more than one hop, though.
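The usual knobs from those posts are along these lines. A sketch, worth checking each one against your own setup:

    ;; Common Tramp speedups; be careful with the cache setting if remote
    ;; files are modified outside of Emacs.
    (setq tramp-verbose 1                     ; cut down Tramp's logging
          remote-file-name-inhibit-cache nil  ; trust cached file attributes
          vc-handled-backends '(Git))         ; don't probe other VC systems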
Yes, totally agree that it's not always applicable. But I think there's still a lot of scope to offload some operations (e.g. Magit operations like status and commit, or streaming search results into the minibuffer in ivy-mode). Having a dedicated protocol would of course be best (VS Code Remote works flawlessly for me).
>> What is the problem with mono repos?
If you use things that depend on ivy/vertico/..., then find-file-in-project, projectile-find-file, and ripgrep get super slow (I think because they usually wait for the entire result set to be ready). LSP/Eglot gets slower. Similarly, you'll have to disable most VC-related stuff, like diff highlighting in the fringe. Git itself is inherently slower, so Magit will hang your UI more often. Of course you can disable all these plugins and use vanilla Emacs, but if you remove enough of them, you're likely to be more productive in VS Code at that point.
Just to clarify, this is my experience with a monorepo + Tramp. I'm also not sure how much of it is just the plugins' fault. It's somewhat better if you run Emacs locally on the machine where the monorepo lives, but that often means using Emacs in the terminal, which usually means losing some of your keybindings.
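One mitigation I've seen suggested for the Tramp side is telling the built-in VC machinery to ignore remote paths entirely. A sketch:

    ;; Make VC skip all Tramp paths, so merely visiting a remote file
    ;; doesn't trigger git calls over the slow connection.
    (require 'tramp)
    (setq vc-ignore-dir-regexp
          (format "\\(%s\\)\\|\\(%s\\)"
                  vc-ignore-dir-regexp
                  tramp-file-name-regexp))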
From 2022. Funny that soon after that we figured out how to automate the Tactical Tornado programmer and collectively decided that they're the best thing ever and nobody needs other kinds of devs anymore.