A few thousand rich people don't need 8 billion pets.
"Maintain humanity under 500,000,000 in perpetual balance with nature" (Georgia Guidestones, a monument describing someone's ideal future that is surrounded by conspiracy)
Will you be one of the 500 million pets? Or one of the 7.5 billion leftovers? Sure would be a shame if billions become unnecessary and then a pandemic 100X worse than covid got lableaked.
At some point, you become the luddite. Maybe you have no experience with modern AI dev tools, maybe you work in a language that is underrepresented in models meaning off the shelf tools don't work well, or maybe you're just an old curmudgeon who will die on a hill.
But modern AI tools are far beyond "auto complete". (I actually turn off those in-line completions; I feel they ruin flow state.) The tools now are fully prompted, with multi-file editing, full codebase context, and web search and documentation integration, and for "on the rails" development they are producing high quality code for "easier" tasks.
These modern models and tools can solve nearly every single leet code problem faster than you. They can do every single Advent of Code problem likely 10X-100X faster than you can.
In my professional, high-standards, very legal- and contract-driven web app world, AI tools are still very useful for doing "on the rails" development. Is it architecting entire systems? No, of course not (yet). Is it emulating existing patterns and extending them for new functionality 10X faster than a Jr or Mid? Yes it is. Is it writing nearly perfect automated tests based on examples? Yes it is. Is it scaffolding new ideas and putting down a great starting point? Yep. And it's even able to iterate on feature work pretty well, and much faster than a Jr/Mid.
The kind of work I'd give to a Jr/Mid and expect to take 2-3 days before they need serious feedback up and down the change, these AI tools are doing in about 30 seconds, maybe 90 seconds if you need to iterate a few times on the prompt.
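To make the "tests based on examples" claim concrete, here's a minimal sketch (the helpers and tests are hypothetical, not from any real codebase): you hand the model one existing test, and it emulates the pattern for the next function.

```ts
// Illustrative only: a hypothetical existing test that an assistant can
// emulate when asked to cover a new, similar helper.
import { describe, expect, it } from "@jest/globals";

// Hypothetical helpers standing in for real project code.
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}
function formatPercent(ratio: number): string {
  return `${(ratio * 100).toFixed(1)}%`;
}

// The existing pattern you give the model as context...
describe("formatPrice", () => {
  it("formats whole dollars", () => expect(formatPrice(500)).toBe("$5.00"));
  it("formats single cents", () => expect(formatPrice(1)).toBe("$0.01"));
});

// ...and the kind of test it produces by emulating that pattern.
describe("formatPercent", () => {
  it("formats a simple ratio", () => expect(formatPercent(0.25)).toBe("25.0%"));
  it("rounds to one decimal", () => expect(formatPercent(1 / 3)).toBe("33.3%"));
});
```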
I get that "AI" is a buzzword that is pumping valuations and making business people see $$$.
But coding assistants are not that. For many programmers, they are quickly becoming valuable tools that do in fact speed up development.
> you work in a language that is underrepresented in models meaning off the shelf tools don't work well
That's exactly what happens, and why I think the whole hype is a joke. I have tried all the models and tools, though; it's always an annoying mess.
> tools can solve nearly every single leet code problem faster than you
That would be useful if I were paid to "leet code" or solve Christmas games. It's not a good rebuttal, though it did make me smile.
> The kind of work I'd give to a Jr/Mid
Good, but I don't want to know what happens in 20 years when there are no more juniors to feed the AI and work on becoming seniors. I will be retired by then and I'll enjoy writing my own open-source stuff.
> That's expected, since all the leetcode problems have ready-to-use solutions on the internet.
1) If the implication is "the model knows the answer and regurgitates it like lyrics to a song", then I would push back. Put a leet code problem into the DeepSeek R1 chain-of-reasoning model and watch it spend 2 minutes spitting out 5,000 words thinking through every single facet of the problem and genuinely solving it at a level higher than 95% of programmers.
2) If you do believe it's fundamentally about how much the model has been trained on, then it has seen your CRUD app and has already seen 10,000 times the feature or system you're about to write -- so it should be a foregone conclusion that it can also do all of that development work too. Only the higher-order architecting and proprietary domains should be challenging for it, as there would be far fewer examples to train on (scarcity), or the model doesn't understand a complex solution (architecting systems at scale is something it can't do).
(I also point out how well these models did for Advent of Code 2024, when there were zero examples in the training data for it).
It's not "thinking", it's regurgitating an internet search and padding it out with markov-chain style text autocompletion.
The 5000 words of padding do not actually provide any value, it's verbal white noise to fill space.
> ...then it has seen your CRUD app and has already seen 10,000 times the feature or system you're about to write
Well, yes. Lots of pointless waste in software engineering.
Fortunately I don't write CRUD apps and AI does nothing at all for me in a professional context.
Are you "thinking" or are you regurgitating analysis of the problem based on what you read on the internet, too?
This one is funny because for something like leet code, nearly everyone just reads the best answers, learns them and learns how to regurgitate them in an interview environment.
> (In fact, the reason they ask leetcode questions isn't to test your IQ, it's to know if you've read the obvious and available literature.)
Rather: it tests whether you are sufficiently docile and devoted to be willing to cram lots of leetcode exercise books that have no relevance for the programming concepts that the job will involve, just for a lottery ticket for a somewhat well-paid position.
I know that there is so much more to programming and related topics that is sooo much deeper (in particular if non-trivial mathematics becomes involved) than these leetcode-style brainteasers. So I strongly prefer to read about such deeply intellectually inspiring topics related to programming instead of jumping through the idiotic hoops that other people want me to.
Indeed, I thus fail the test for docility and devotedness, but I honestly can't take organisations seriously that demand such jumping through hoops.
I feel like 95% of the developers who are touting AI are doing web dev or app development - a field which has low stakes, a low barrier to entry, and an incredible amount of reinventing the wheel - all things that an LLM is naturally going to excel at. I can't imagine you'd hold these same beliefs if you were writing control software for a life-saving medical device where a single bug could kill someone, where the board package is proprietary and not understood by an LLM, and the thing is written in a C dialect combining only half the features of C99 with a subset of obscure compiler flags.
I think people are running into a couple of challenges. One is keeping up with the pace of improvement. The tools are much improved over 6-12-24 months ago. A poor first impression can leave people thinking a tool is terrible forever more. Second is that someone must learn to work with the new tools. The hype can lead people to think the tools will magically just do all these things. The reality is, like most tools, it takes some trial and error to learn how to best use it.
Claude in 2025, especially with the Projects feature, is far better. It can complete a CRUD project on its own, and all I have to do is fix glaring issues and design the API beforehand.
Which might not be impressive to some, but it is good at that. And a few years ago, it would not have been possible.
CRUD generation has been a staple of web framework CLI tooling since, like, forever. Once upon a time WordPress was this: the boilerplate application you scripted out and then adapted. In certain programming languages, macros or type systems power up this kind of tactic, and IDEs typically have very good support for these kinds of shortcuts.
Then the project management tooling does a lot more, like automatically reverse engineering existing databases and so on.
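For a sense of what's being generated, here's a minimal sketch of the kind of boilerplate such generators emit, assuming an Express-style Node app (the Post resource and in-memory store are hypothetical stand-ins for what a real generator would wire to a database):

```ts
// Minimal CRUD boilerplate sketch; a real generator adds validation,
// persistence, and error handling on top of routes like these.
import express, { Request, Response } from "express";

interface Post { id: number; title: string; body: string; }
const posts: Post[] = []; // in-memory stand-in for a database table
let nextId = 1;

const app = express();
app.use(express.json());

app.get("/posts", (_req: Request, res: Response) => res.json(posts));

app.post("/posts", (req: Request, res: Response) => {
  const post: Post = { id: nextId++, title: req.body.title, body: req.body.body };
  posts.push(post);
  res.status(201).json(post);
});

app.delete("/posts/:id", (req: Request, res: Response) => {
  const idx = posts.findIndex((p) => p.id === Number(req.params.id));
  if (idx === -1) return res.status(404).end();
  posts.splice(idx, 1);
  res.status(204).end();
});

app.listen(3000);
```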
I don't view Big Tech as being against copyright. They simply hold a position that they will not pay for something unless forced to ("make me" - a very common position for the powerful to hold).
In fact, I'd argue that Big Tech is pro copyright, because once they force the copyright holder to negotiate, the cost is irrelevant to them and they build a moat around that access.
For example, Google stole Reddit content for Gemini until Reddit was forced to the table, and now Google has a seemingly exclusive agreement around Reddit data for AI purposes.
> I don't view Big Tech as being against copyright. They simply hold a position that they will not pay for something unless forced to
Yep, the contradiction is pretty glaring: they feel entitled to use anything they want for training, while their own license terms forbid using the output of their models to train other models. Information wants to flow freely, but only in one direction, apparently.
Yup. For Big Tech, the ideal outcome of these cases isn't that copyright is widely or deeply undermined, as they rely heavily on it themselves (let alone how their customers and investors benefit from it).
Their ideal outcome is that there's some narrow carveout that gives them permission to ignore copyright where they want to, while extending similar permission to as few/irrelevant others as possible.
I agree, but for a different reason -- cost is actually relevant, in the sense that only the biggest players can afford to pay for the copyrights. If you are a small player, no matter how good your tech stack or your model is, if you can't afford the licenses, you can't compete with Google.
For the record, Twitter currently punishes people who call VIPs mean names and seems to take action against all negativity pointed towards certain ideologies that fit the owner's preferences, and they're talking about some opaque "positivity" changes which actually sound like automating the current manual moderation behind their censorship of wrongthink.
We should stop pretending that that website resembles its preceding namesake, because it does not.
I think you just accept the risk, same as not doing code review. If you're doing something Serious™ then you follow the serious rules to protect your serious business. If it doesn't actually matter that much if prod goes down or you release a critical vulnerability or leak client data or whatever, then you accept the risk. It's cheaper to push out code quickly and clean up the mess, as long as stakeholders and investors are happy.
And I think this analysis suffers from ignoring the increased environmental cost of creating a new electric vehicle. It takes 15,000 to 20,000 miles of driving for the average EV to "break even" with new ICE vehicles due to the much worse environmental impact of creation.
Reduce, reuse, repair, recycle. Vehicles should be reduced first (drive less), reused second (keep driving the one you have), repaired third (fix it rather than replace it), and finally recycled as best as possible.
All of this should happen prior to replacement. If you replace an existing ICE with an EV, the EV not only has to catch up with a new ICE, it actually has to "catch up" with your existing, already-built, working ICE that incurs no new construction cost. That's much worse than 20k miles, because the cost of building your existing car is sunk. It could take as many as 50,000 miles of driving to break even against your existing used car.
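To show the shape of that arithmetic (every number below is an assumption for illustration, not a sourced figure):

```ts
// Break-even miles = manufacturing footprint to pay off / per-mile savings.
const icePerMileCO2 = 0.40; // kg CO2 per mile, ICE (assumed)
const evPerMileCO2 = 0.10;  // kg CO2 per mile, EV on a mixed grid (assumed)

// Vs buying a NEW ICE: only the EV's extra manufacturing debt counts.
const evExtraBuildCO2 = 6_000; // kg CO2 beyond building a new ICE (assumed)
console.log(evExtraBuildCO2 / (icePerMileCO2 - evPerMileCO2)); // 20000 miles

// Vs KEEPING your existing ICE: the EV must pay off its entire
// manufacturing footprint, since the old car's construction cost is sunk.
const evTotalBuildCO2 = 12_000; // kg CO2 to build the EV at all (assumed)
console.log(evTotalBuildCO2 / (icePerMileCO2 - evPerMileCO2)); // 40000 miles
```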
Consumerism and early replacement of working goods has always been the mortal enemy of environmentalism.
Good news: in the car market, almost everyone follows 'reuse' and 'repair'. Not many people take a car and crush it for recycled parts unless it's truly worthless or super old. They sell it to someone else. For the type of person who buys a new car every 3-4 years, you are selling that car to someone else who will continue using it, likely replacing their less efficient car with yours.
This is unlike most other consumer goods, which tend to be scrapped much earlier. If you're scrapping a car before it's ~10-15 years old, chances are there's either something quite wrong with it, or you drove it way too much and it's gonna fall apart (i.e., something wrong with it).
> All of this should happen prior to replacement... it actually has to "catch up" with your existing already made working ICE that has no new cost for construction
That is precisely the sunk cost fallacy, though: the principle is that you shouldn't continue an endeavor simply because its cost has already been paid, unless the total continuing cost (including the eventual replacement) is less than the cost of immediate replacement (plus all continuing costs after the initial replacement, including the eventual replacements of those in turn). Otherwise, the principle says, it is a waste of resources.
The 4 R's assume that the replacement is no better than the original at the job, which is why I described that analysis as the sunk cost fallacy. We don't have to take this assumption (personally, I prefer to bike over driving my ICE, so this analysis doesn't apply), but if we take the assumption that the EV does less environmental damage over its lifetime, then this assumed "environmental damage" function is minimized by discarding the ICE immediately, as any further use simply increases the total "environmental damage" caused by the choice of which car to use.
This can be seen in computers too: a new machine is sometimes so much more power efficient than the one it replaces that throwing out a perfectly working computer for a newer one quickly saves resources overall.
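To put the "ignore sunk costs" comparison in concrete terms (a sketch with assumed figures, applicable to cars or computers alike):

```ts
// Only FUTURE damage matters: the old machine is already built.
const milesLeft = 100_000;   // expected remaining use (assumed)
const oldPerMile = 0.40;     // kg CO2 per mile, existing ICE (assumed)
const newPerMile = 0.10;     // kg CO2 per mile, EV (assumed)
const newBuildCost = 12_000; // kg CO2 to manufacture the EV (assumed)

const keepOld = oldPerMile * milesLeft;                   // 40,000 kg
const replaceNow = newBuildCost + newPerMile * milesLeft; // 22,000 kg

// If replaceNow < keepOld, every further mile on the old machine only adds
// to the total, so the damage-minimizing policy is to switch immediately.
console.log(replaceNow < keepOld); // true under these assumptions
```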
Using the front end of these websites to work in your complex codebase is very challenging. I use the chatbots for higher level questions about libraries and integrations, not for specific implementation details in my codebase. But without a data agreement in place, you shouldn't (or maybe can't) paste in code, and even if you could, it's an inferior way of providing context in comparison to better tools.
However, I do use Copilot + VS Code with Claude 3.5, and the "Edit with Copilot" feature, where it takes my open files plus any other context I want to give it and drives changes to my files, has been surprisingly good. It's not really a time saver, in that the time I spend verifying and fixing or enhancing the result isn't much less than me just writing it myself, but I still find benefits for brainstorming, quickly iterating on alternate ideas and smaller refactors, and overcoming the "get started" hesitation on a seemingly complex change. I'm to the point where it can absolutely add tests to files using my existing patterns that are well done and rarely need feedback from me. I have been surprised to see the progress, because for most of the history of LLMs I didn't find them useful.
It also helps that I work in a nodejs/react/etc codebase where the models have a ton of information and examples to work with.
> But without a data agreement in place, you shouldn't (or maybe can't) paste in code
There's a checkbox you can toggle so that OpenAI doesn't use your code to train their models.
And I find the "chatbot" experience different and better than aider/copilot. It forces me to refocus on the really useful interfaces instead of just sending everything, and makes it better to verify everything instead of just accepting a bunch of changes that might even be correct, but not what I exactly want. For me, the time spent verifying is actually a bonus, because I read faster than I can type. I think of it as a peer programmer who just happens to be able to type much, much faster and doesn't mind writing unit tests or rewriting the same thing over and over.
The problem with reading vs writing is building "true understanding". If you are reading code at a high level and building a perfect mental model and reading every bit of code, then you're doing it right. But many folks see finished code and get "LGTM brain" and don't necessarily fully think out every command on every line, leading to poor understanding of code. This is a huge drawback in LLM-assisted coding. Folks re-read code they "wrote" and have no memory of it at all.
In the edit experience I am using, the LLM provides a git style changelog where I can easily compare before/after with a really detailed diff. I find that much more useful than giant "blobs" of code where minor differences crop up that I don't notice.
The other massive drawback to the out-of-codebase chatbot experience (and the Edit with Copilot experience IS a chatbot, it's just integrated into the editor, changes files with diffs, and has a UI for managing file context) is context. I can effortlessly load all my open files into the LLM context with one click. The out-of-editor chatbot requires either a totally custom LLM stack with various layers to handle your codebase context, or manually pasting in a lot of context. It's nonsense to waste time pasting proprietary code into OpenAI (with no business agreement other than a privacy policy and a checkbox) when I can get Copilot to sign a BA with strict rules about privacy and storage, and then one-click add my open files to my context.
Folks should give these new experiences a try. Having claude chatbot integrated into your editor with the ability to see and modify your open files in a collaborative chat experience is very nice.
When you post on Instagram, there are opt-out features that will "automatically share to your Threads account too", and you can see Threads notifications in the Instagram app and such, so I think it's reasonable to assume they are leveraging the Instagram user base a bit.
"Maintain humanity under 500,000,000 in perpetual balance with nature" (Georgia Guidestones, a monument describing someone's ideal future that is surrounded by conspiracy)
Will you be one of the 500 million pets? Or one of the 7.5 billion leftovers? Sure would be a shame if billions become unnecessary and then a pandemic 100X worse than covid got lableaked.
reply