> This is arguably their defining HN characteristic: they are one of the most vocal, persistent AI optimists on the platform. They claim ~90-95% of their shipped code is AI-generated, report 5-10x productivity gains, and have built a detailed methodology around it — using Playwright for visual verification, static typechecking as a hallucination filter, and e2e test suites as automated validation harnesses.
Wow, I sound really annoying. Sorry about that everyone!
I mean, you are painting it as some moralistic judgement, but if you're asking me to choose between, on the one hand, listening to some annoying music, and on the other, having some chance (however slight) of bodily injury, a knife wound, or whatever… I know which one I am going to choose.
It’s hard to imagine a slow, overworked, somewhat inept, bureaucratic school board, with a thousand other things it wants to care about, managing to stay ahead of thousands of crafty and highly motivated teens.
> The concept of congregating in walled gardens owned by pedophilic fascist speed freaks
Are we really calling everyone we don't like a pedophilic fascist now? I honestly had really hoped that this sort of polarized, low-quality content wouldn't make it onto HN. :(
If you think that everyone who works on a website that is a walled garden is a "pedophile fascist", I don't know what to say to you -- I don't think we live in the same reality.
It is not "factual" to call these people pedophiles. Maybe you think they are bad for society. Maybe you think their websites are terrible. Maybe you don't like them. Those are all fine things, and you are free to say them! But to say they are factually pedophiles without evidence is not true. It only diminishes the quality of the conversation.
I'm reading this line of conversation and I can tell you, you're wasting your time.
There is NO convincing these people of anything else, they will move the goal posts every time. I've been in these same conversations and it goes nowhere.
If you continue, it will move all the way to "If you're not out protesting, voting for X, you are in fact a fascist pedo yourself".
Even the mere fact that you question such a line of thought... makes you a fascist pedo.
Rationalists were talking about AI decades before anyone else was talking about it. They were also early on COVID and crypto. They are only "aggressively wrong" about "everything" if you are, ironically, not thinking rationally about it.
Kind of curious what everyone else sees in this? As far as I can tell it’s a fairly trivial wrapper around an LLM - you can get equivalent results (or honestly even better) with Opus 4.6. Maybe that just isn’t common knowledge yet?
I'm more and more convinced that humans were fundamentally not ready for LLMs and are not taking seriously enough how existential a threat they pose to basic communication and social norms.
Why is this the attitude when it comes to AI? Can you imagine someone saying "please provide your code" when they claim that Rust sped up their work or that TypeScript reduced errors in production?
Eh, sorry, I may have been too quick to judge, but in the past when I have shared examples of AI-generated code to skeptics, the conversation rapidly devolves into personal attacks on my ability as an engineer, etc.
I think the challenge is to be neither over-exuberant nor overly skeptical. I see AI as just another tool in the toolbox; the fact that lots of people produce crap is no different from before: lots of people produced crappy code well before AI.
But there are definitely exceptions, and I think those are underexposed. We don't need 500 ways to solve toy problems; we need a small number of ways to solve real ones.
Some of the replies to my comment are exactly that, they show in a much more concrete way than the next pelican-on-a-bicycle what the state of the art is really capable of and how to achieve real world results. Those posts are worth gold compared to some of the junk that gets high visibility, so my idea was to use the opportunity to highlight those instead.
FWIW, I did a full modernization and redesign of a site (~50k LOC) over a week with Claude. I ensured quality by writing, ahead of time, a strong e2e test suite (which I also drove with AI), then making Claude run the suite every time it made changes. I got a bunch of really negative comments about it on HN (alluded to in my previous comment: everything from being told the site looked embarrassing or didn't deserve to be on HN, to complaints that the 600ms load time was too slow), so I mostly withdrew from posting more about it. I still think that a robust e2e suite is a strategy that can really drive AI productivity.
Yes, that e2e suite is a must for long-term support, and it would probably be a good idea to always create something like it up front, before you even start work on the actual application.
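The workflow described above — write the e2e suite first, then make the agent run it after every change — might look something like this minimal Playwright spec. All names, routes, and selectors here are hypothetical illustrations, not taken from the actual project:

```typescript
// e2e/smoke.spec.ts — hypothetical sketch; file name, route, and
// selectors are illustrative only. Assumes baseURL is configured
// in playwright.config.ts so relative navigation works.
import { test, expect } from '@playwright/test';

test('home page renders and the main navigation is visible', async ({ page }) => {
  await page.goto('/');
  // Role-based locators survive cosmetic redesigns better than CSS selectors,
  // which matters when an AI is rewriting markup underneath you.
  await expect(page.getByRole('navigation')).toBeVisible();
  await expect(page).toHaveTitle(/\S/); // title must be non-empty
});
```

Running `npx playwright test` after each model-driven change (or wiring it into a pre-commit hook) is one way to turn the suite into the automated gate the comment describes; the spec above needs a running dev server and the Playwright browsers installed, so it is a sketch rather than a standalone script.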
I think it pays off to revisit the history of the compiler. Initially, compilers were positioned as a way for managers to sidestep the programmers, because the programmers had too much power and were hard to manage.
Writing assembly language by hand is tedious and requires a certain mindset, and the people who did this (at that time programming was still seen as an 'inferior' kind of job) were doing the best they could with very limited tools.
Enter the compiler: now everything would change. Until the mid-1980s many programmers could, when given enough time, take the output of a compiler, scan it for low-hanging fruit, and produce hybrids where 'inner loops' were taken and hand-optimized until they made optimal use of the machine. This gave you 98% of the performance of a completely hand-crafted solution, isolated the 'nasty bits' to a small section of the code, and was much more manageable over the longer term.
Then, ca. 1995 or so, the gap between the best compilers and the best humans started to widen, and the only areas where humans still held the edge were the most intricate close-to-the-metal software, in for instance computer games, and some extremely performant math code (FFTs, for instance).
A multitude of different hardware architectures, processor variations and other dimensions made consistently maintaining an edge harder and today all but a handful of people program in high level languages, even on embedded platforms where space and cycles are still at a premium.
Enter LLMs
The whole thing seems to repeat: there are some programmers that are - quite possibly rightly so - holding on to the past. I'm probably guilty of that myself to some extent; I like programming, and the idea that some two-bit chunk of silicon is going to show me how it is done offends me. At the same time I'm aware of the past and have already gone through the assembly-to-high-level track, and I see this as just more of the same.
Another, similar effect was seen around the introduction of the GUI.
Initially, the 'low hanging fruit' of programming will fall to any new technology we introduce: boilerplate, CRUD, and so on. Over time I would expect these tools to improve to the point where all aspects of computer programming are touched by them, and where they either meet or exceed the output of the best of the humans. I believe we are not there yet, but the pace is very high, and it could easily be that within a few short years we will be in an entirely different relationship with computers than we have had up to today.
Finally, I think we really need to see some kind of frank discussion about compensation of the code ingested by the model providers, there is something very basic that is wrong about taking the work of hundreds of thousands of programmers and then running it through a copyright laundromat at anything other than a 'cost+' model. The valuations of these companies are ridiculous and are a direct reflection of how much code they took from others.
Vite 8 is pretty incredible. We saw around an 8x improvement (4m -> 30s) in our prod build, and it was nearly a drop-in replacement. Congrats (and thank you!) to the Vite team!
Same here (10s to 1s). The main reason for this is rolldown [1]. Already had it installed months ago, before it got merged into vite proper. Really awesome stuff.
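For anyone curious how rolldown was used before it landed in Vite proper: the opt-in mechanism the Vite project documented was an npm package alias in package.json, along the lines of the fragment below (check the current Vite docs for your package manager's exact override syntax; this is a sketch, not a drop-in config):

```json
{
  "dependencies": {
    "vite": "npm:rolldown-vite@latest"
  }
}
```

With Vite 8 the Rust-based bundler ships by default, so the alias is no longer needed on upgrade.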
Not meant as a gotcha but I'm surprised because people always tout it as being so much faster than Next. (4m with Turbo would have to be a crazy huge app IME)
Seems to be around 1 million. It's chunky, and to be honest the build is probably not well optimised, but it was only starting to creep up the priority list as it crossed the 10-minute mark.
That's also the duration on our CI, which runs on some POS build machine. Locally it's far faster, but with Vite 8 it's crazy fast.
I am still trying to work out what Teams is "setting up for me" when it takes several seconds from opening the bookmark in my browser to having a UI where I can read the chats. It's running on a PC that can render complex graphical scenes in real time but it takes half a minute to see "LGTM!". (Shaking head emoji goes here.)
Then again Teams is still barely an amateur compared to the incomprehensible slowness of Jira.