Zellij is a great example: I can do everything with my keyboard, but every now and then my hand is already on the mouse and I just click a tab or pane. No functionality lost, just added. Why the need to make a hard philosophical/semantic cutoff argument?
Two different stages of the project, not necessarily contradictory. I'm not saying this is great, but tests make a whole lot more sense when you know what you're building.
Yes. TFA author could have gone into it with this mindset and treated the initial work as a prototype with the idea of throwing it away and would have been happier about it.
> but tests make a whole lot more sense when you know what you're building.
It's very true. This is a "gotcha" a lot of anti-TDDers bring up, and yet some of them talk about "prototyping == good" without ever making the connection that you can do both.
Two different extremes of dumbassery. If you can't program without the simplest dogma guiding you, then programming isn't for you. And if you don't even know what you're building, why are you selling it as a product? What were you doing in those 18 months if you don't understand anything about the thing you're building?
It should be common sense to add common-sense tests to critical components. Now they're doing TDD and THEY STILL DON'T KNOW THE CRITICAL COMPONENTS. Nothing changed. They lack systems thinking.
The real goal isn't for Alacritty or Kitty or WezTerm or any other terminal to use libghostty. I think over the long term, terminal emulator user bases dwindle down to niche (but important) use cases.
The real goal is for higher-level tooling (GUI or browser) that utilizes terminal-like programs to have something like libghostty to reach for. I think this represents the much, much larger ecosystem out there that likely touches many more people. For example, Neovim's terminal mode, terminal multiplexers, PaaS build systems, agentic tooling, etc. You're seeing this emerge in force already with the awesome-libghostty repo.
libghostty would still be useful for traditional terminal emulators to replatform on, and for example xterm.js is seriously looking into it (and I'm happy to help and even offered their maintainer a maintainer spot on libghostty). But, they're not the goal. And if fragile egos hold people back, it's really not my problem, it's theirs.
As an outsider to the fascinating world of terminal emulators... can you explain why this might be? Rather, what about `libghostty` would be off-putting vs `libtermengine`?
Just that it's a specific "product"-y sounding name? Would you also be concerned about "libwayland" vs "libcompositor"? Genuinely curious: this seems like an insightful question, I just don't follow the reasoning.
Disclaimer: I am not the maintainer of anything terminal related, it's just an intuition.
Let's say I'm the creator of a ghostty "competitor".
The fact that it carries ghostty's name could make me worry that, if I adopted it:
- Maybe my users would start wondering why I don't just use ghostty instead
- Would the maintainers of libghostty make choices oriented more toward ghostty than toward my terminal?
It's a half-assed analogy, but imagine if Google's V8 were called ChromeEngine instead of V8.
@grok remove the corporate jargon and explain it in a direct and candid way in a single sentence
@grok
We're slashing the company from 10k to under 6k people because AI plus tiny teams now let us do the same work with way fewer bodies, and the CEO would rather gut half the staff in one brutal move than bleed out slowly over years.
I am curious why this got so popular, it really is the same thing, am I missing something? Is it because of elon/jack dynamics?
The number of little tools I'm creating for myself is incredible; 4.6 seems like it can properly one- or two-shot them now without my attention.
Did you open source that one? I was thinking of this exact same thing but wanted to think a little about how to share deps, i.e. if I make a quick worktree to try a branch, I don't want an `npm i` that takes forever.
Also, if you share it with me, there are obviously no expectations, even if it's a half-baked vibe-coded mess.
I’ve been wanting something similar but have instead been focused on GUI. My #1 issue with TUIs is that I’ve never liked how code jumps rather than scrolling smoothly at high FPS. Were it not for that and terminals lacking variable font sizes, I’d vastly prefer TUIs, but I just struggle to get over those two issues.
I’ve been entirely terminal based for 20 years now and those issues have just worn me down. Yet I still love terminal for its simplicity. Rock and a hard place I guess.
Testing. If you share something you've tested and know works, that's way better than sharing a prompt which will generate untested code which then has to be tested. On top of that it seems wasteful to burn inference compute (and $) repeating the same thing when the previous output would be superior anyway.
That said, I do think it would be awesome if including prompts/history in the repos somehow became a thing. Not only would it help people learn and improve, but it would allow tweaking.
If I'm understanding the problem correctly, this should be solved by pnpm [1]. It stores packages in a global content-addressable store and hard-links them into the local node_modules, so running install in a new worktree should be nearly instant.
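Not pnpm's actual internals, just a minimal sketch of the hard-link idea it relies on (the paths and file are made up for illustration): several directory entries pointing at the same inode, so the second and third "copies" cost no extra disk space or download.

```shell
# Sketch of the hard-link trick (illustrative paths, not pnpm internals)
mkdir -p store project-a/node_modules project-b/node_modules

# One real copy of the package file lives in the shared store
echo 'module.exports = 42' > store/pkg.js

# Each "install" just hard-links it, which is effectively free
ln store/pkg.js project-a/node_modules/pkg.js
ln store/pkg.js project-b/node_modules/pkg.js

# All three names share one inode: one copy on disk, three paths
stat -c %i store/pkg.js project-a/node_modules/pkg.js project-b/node_modules/pkg.js
```

pnpm does this per file from its global store (plus a symlinked node_modules layout), which is why `pnpm install` in a fresh worktree is mostly linking rather than downloading.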
There are a lot of confounding variables. Chief among them is someone at the top just wanting to get on with their life, start a family for instance, or basically anything other than study 12 hours a day.
It's hard to say it's cognitive decline for most of the people who just aren't working as hard at 40 as they were at 25.
If Chess960 or some other variant that doesn't involve as much rote work becomes sufficiently popular for long enough perhaps it will yield some valuable data about mental function versus age. At least a more holistic view than the studies we currently have.
I'm really not trying to hate; I think your method is great and I love that you've rationalized it. But as someone who mostly finds these kinds of social interactions natural, there's something "funny" about finding the algorithm for it. I never did the math and always naturally landed more or less there.
I have no idea what these anti-AI trends are on about. Even if it doesn't write a single line of code, the argument is moot, since its ability to be a companion and help you understand what's happening in a codebase is undeniable.
It's so intriguing, I wonder if the people who are against it haven't even used it properly.
The argument always seems to be the fallacy that because AI doesn't work great in this or that context (which I might agree with), it just makes everything worse.
AI most definitely makes hard things easier.
THIS DOES NOT CONTRADICT THE FACT THAT IF YOU ONLY VIBE CODE AND DON'T EVEN UNDERSTAND WHAT THE CODE IS OR IT'S DOING OR THE BUSINESS OR THE CONTEXT YOUR PROJECT IS GOING TO GO TO SHIT.
I've seen some discussions and I'd say there are lots of people who are really against the hyped expectations set by AI marketing materials, not necessarily against the AI itself. Things people object to that look like being anti-AI, but aren't directly about the AI itself:
- Being forced to use AI at work
- Being told you need to be 2x, 5x or 10x more efficient now
- Seeing your coworkers fired
- Seeing hiring freezes because businesses think no more devs are needed
- Seeing business people make a mock UI with AI and boasting how programming is easy
- Seeing those people ask you to deliver in impossible timelines
- Frontend people hearing from backend how their job is useless now
- Backend people hearing from ML Engineers how their job is useless now
- etc
When I dig a bit into this "anti-AI" trend, I find it's one of those things, and not actually opposition to the AI itself.
The most credible argument against AI is really the expense involved in querying frontier models. If you want to strengthen the case for AI-assisted coding, try to come up with ways of doing that effectively with a cheap "mini"-class model, or even something that runs locally. "You can spend $20k in tokens and have AI write a full C compiler in a week!" is not a very sensible argument for anything.
It’s hard to say. The compiler is in a state that isn’t useful for anything at all and it’s 100k lines of code for something that could probably be 10k-20k.
But even assuming it were somehow a useful piece of software that you'd want to pay for: the creator set up a test harness using gcc as an oracle, so it has an oracle for every possible input and output. Plus there are thousands of C compilers in its training set.
If you are in a position where you are trying to reverse engineer an exact copy of something that already exists (maybe in another language) and you can’t just fork that thing then maybe a better version of this process could be useful. But that’s a very narrow use case.
The cost argument is a fallacy, because right now, either you have a trained human in the loop, or the model inevitably creates a mess.
But regardless, services are extremely cheap right now, to the point where every single company involved in generative AI is losing billions. Let’s see what happens when prices go up 10x.
Maybe, but I seriously doubt that new DRAM and chip fabs aren't being planned and built right now to push supply and demand closer to equilibrium. NVIDIA and Samsung and whoever else would love to expand their market rather than wait for a competitor to expand it for them.
How long do you think it takes for those factories to go from nothing to making state-of-the-art chips at a scale that's large enough to influence the supply even by 1%?
There are plenty of them being built, yes. Some of them will even start shipping products soon enough. None of them are going to ship at a scale large enough to matter any time soon, certainly not before 2030, and a lot can change between now and then that might make those companies abandon their efforts altogether, or downscale their investments to the point where that date gets pushed back much further.
That's not even getting into how much easier it is for an already-established player to scale up supply than for a brand-new competitor to go from zero to one.
If you keep digging, you will also find that there's a small but vocal sock puppet army who will doggedly insist that any claims to productivity gains are in fact just hallucinations by people who must not be talented enough developers to know the difference.
It's exhausting.
There are legitimate and nuanced conversations that we should be having! For example, one entirely legitimate critique is that LLMs do not tell their users when they are relying on libraries whose maintainers are seeking sponsorship. This is something we could be proactive about fixing in a tangible way. Frankly, I'd be thrilled if agents could present a list of projects that we could consider clicking a button to toss a few bucks to. That would be awesome.
But instead, it's just the same tired arguments about how LLMs are only capable of regurgitating what's been scraped and that we're stupid and lazy for trusting them to do anything real.
> I wonder if the people who are against it haven't even used it properly.
I swear this is the reason people are against AI output (there are genuine reasons to be against AI without using it: environmental impact, hardware prices, social/copyright issues, CSAM (like X/Grok))
It feels like a lot of people hear the negatives, and try it and are cynical of the result. Things like 2 r's in Strawberry and the 6-10 fingers on one hand led to multiple misinterpretations of the actual AI benefit: "Oh, if AI can't even count the number of letters in a word, then all its answers are incorrect" is simply not true.
> It's so intriguing, I wonder if the people who are against it haven't even used it properly.
I feel like this is a common refrain that sets an impossible bar for detractors to clear. You can simply hand wave away any critique with “you’re just not using it right.”
If countless people are “using it wrong” then maybe there’s something wrong with the tool.
> If countless people are “using it wrong” then maybe there’s something wrong with the tool.
Not really. Every tool in existence has people that use it incorrectly. The fact that countless people find value in the tool means it probably is valuable.
When it comes to new emerging technologies everyone is searching the space of possibilities, exploring new ways to use said technologies, and seeing where it applies and creates value. In situations such as this, a positive sign is worth way more than a negative. The chances of many people not using it the right way are much much higher when no one really knows what the “right” way is.
It then shows hubris and a lack of imagination for someone in such a situation to think they can apply their negative results to extrapolate to the situation at large. Especially when so many are claiming to be seeing positive utility.
I had Claude read a 2k LOC module on my codebase for a bug that was annoying me for a while. It found it in seconds, a one line fix. I had forgotten to account for translation in one single line.
That's objectively valuable. People who argue it has no value or that it only helps normies who can't code or that sooner or later it will backfire are burying their heads in the sand.
> People who argue it has no value or that it only helps normies who can't code or that sooner or later it will backfire are burying their heads in the sand.
I don’t think this describes most people and it’s certainly not what I think.
While that may very well be true, it's a valid reply to the GP who made this claim, not to my comment explaining to the parent why their argument was logically flawed.
Just because you disagree with me doesn’t mean my argument is “logically flawed.” And as the other commenter said, I never said AI had no value. I have used various AI tools for probably 4 years now.
If you’re going to talk to and about people in such a condescending way, then you should at least ask clarifying questions before jumping to the starkest, least charitable interpretation of their point.
And yet every LLM company pushes it as a simple chatbot that everyone should use for everything, right now, with no explanation or training.
It can’t be a precision tool that requires expertise as well as a universally accessible, simple answer to all our problems. That doesn’t strike me as user error.
I'm similarly bemused by those who don't understand where the anti-AI sentiment could come from, and "they must be doing it wrong" should usually be a bit of a "code smell". (Not to mention that I don't believe this post addresses any of the concrete concerns the article calls out, and makes it sound like much more of a strawman than it was to my reading.)
To preempt that on my end, and emphasize I'm not saying "it's useless" so much as "I think there's some truth to what the OP says", as I'm typing this I'm finishing up a 90% LLM coded tool to automate a regular process I have to do for work, and it's been a very successful experience.
From my perspective, a tool (LLMs) has more impact than just how you yourself directly use it. We talk a lot about pits of success and pits of failure from a code and product architecture standpoint, and right now, as you acknowledge in your last sentence, there's a big footgun waiting for any dev who switches their brain off too much. In my mind, _this is the hard part_ of engineering: keeping a codebase structured, guardrailed, and well constrained, even with many contributors over a long period of time. I do think LLMs make this harder, since they make writing code "cheaper" but not necessarily "safer", which flies in the face of mantras such as "the best line of code is the one you don't need to write." (I do feel the article brushes against this where it nods to trust, growth, and ownership.) This is not a hypothetical, either, but something I've already seen in practice in a professional context, and I don't think we've found silver bullets for it yet.
I could also gesture at some patterns I've seen where there's a level of semantic complexity these models simply can't handle at the moment, and no matter how well architected your codebase is, after N million lines you WILL be above that threshold. But even that is less of a concern in my mind than the former pattern. (And again, the article touches on this re: vibe coding having a ceiling, but I think they weaken their argument by limiting it to vibe coding.)
To take a bit of a tangent with this comment though: I have come to agree with a post I saw a few months back, that at this point LLMs have become this cycle's tech-religious-war, and it's very hard to have evenhanded debate in that context, and as a sister post calls out, I also suspect this is where some of the distaste comes from as well.
HN has a huge anti-AI crowd that is just as vocal and active as its pro-AI crowd. My guess is that this is true of the industry today and won’t be true of the industry 5 years from now: one of the crowds will have won the argument and the other will be out of the tech industry.
Vibe coding and slop strawmen are still strawmen. The quality of the debate is obviously a problem.
I don’t understand why people are so resistant to the idea that use cases actually matter here. If someone says “you’re an idiot because you aren’t writing good, structured prompts,” or “you’re too big of an idiot to realize that your AI-generated code sucks” before knowing anything about what the other person was trying to do, they’re either speaking entirely from an ideological bias, or don’t realize that other people’s coding jobs might look a whole lot more different than theirs do.
We don’t know anything about the commenters other than that they aren’t getting the same results with AI as we are. It’s as if someone complained that they can’t write fast code, so you shouldn’t be able to either.
> We don’t know anything about the commenters other than that they aren’t getting the same results with AI as we are.
Right. You don’t know what model they’re using, on what service, in what IDE, on what OS, if they’re making a SAP program, a Perl 5 CGI application, a Delphi application, something written in R, a c-based image processing plugin, a node website, HTML for a static site, Excel VBA, etc. etc. etc.
> It’s as if someone complained that they can’t write fast code, so you shouldn’t be able to either.
If someone is saying that nobody can get good results from using AI then they’re obviously wrong. If someone says that they get good results with AI and someone else, knowing nothing about their task, says they’re too incompetent to determine that, then they’re wrong. If someone says AI is good for all use cases they’re wrong. If someone says they’re getting bad results using AI and someone else, knowing nothing about their task, says they’re too incompetent to determine that, then they’re wrong.
If you make sweeping, declarative, black-and-white statements about AI coding either being good or bad, you’re wrong. If you make assumptions about the reason someone has deemed their experience with AI coding good or bad, not even knowing their use case, you’re wrong.
What we call AI at the heart of coding agents is the averaged “echo” of what people have published on the web, which has (often illegitimately) ended up in training data. Yes, it can probably spit out some trivial snippets, but nothing near what’s needed for genuine software engineering.
Also, now that StackOverflow is no longer a thing, good luck meaningfully improving those code agents.
Coding agents are getting most meaningful improvements in coding ability from RLVR now, with priors formed by ingesting open source code and manuals directly, not SO, as the basis. The former doesn't rely on resources external to the AI companies at all, and can be scaled up as much as they like, while the latter will likely continue to expand, and they don't really need more of it if it doesn't. Not to mention that curated synthetic data has been shown to be very effective at training models, so they could generate their own textbooks based on open codebases or new languages or whatever and use that. Model collapse only happens when it's exclusively, and fully un-curated, model output that's being trained on.
Exactly this. Everything I've seen online is generally "I had a problem that could be solved in a few dozen lines of code and I asked the AI do it for me and it worked great!"
But what they asked the AI to do is something people have done a hundred times over, on existing platform tech, and will likely have little to no capability to solve problems that come up 5-10 years from now.
The reason AI is so good at coding right now is the second dot-com tech bubble that occurred between the simultaneous release of mobile platforms and the massive expansion of cloud technology. But now that the platforms that existed during that time no longer exist, because it's no longer profitable to put something out there, the AI platforms will become less and less relevant.
Sure, sites like Reddit will probably still exist, where people will ask about more and more things the AI can't help with, and subsequently the AI will train on that information; but the rate of that information is going to drop dramatically.
In short, at some point the AI models will be worthless and I suspect that'll be whenever the next big "tech revolution" happens.