There's a significant asymmetry though; it's not just a bit more work. I'm a bit cynical here, but it's often easier to just overengineer and play it safe than to defend a simple solution; obviously it depends on the organization and its culture.
When a complex solution and an alternative are stacked up against each other, everything usually boils down to a few tradeoffs. The simple solution is generally the one with the most tradeoffs to explain: why no HA, why no queuing, why not horizontally scalable, why no persistence, why no redundancy, why no retry, etc. Obviously not all of them will apply, but just as obviously, the optics of the extensive questioning will hinder any promotion even if you successfully justify everything.
I do it and have never had an issue. I do get the odd email now and then at an otherwise unused address, from services/people I never contacted. But I'm talking about perhaps 2-3 per year.
I started laugh-reacting at Russian propaganda, and now all I get is Russian propaganda: literally half of my feed is boomers, shills, and people from "non-aligned" countries falling for the Russia stronk/based west evil/gay meme, plus Russian embassies and consulates non-stop DARVOing. Before that, though, it was indeed a constant flurry of thirst traps, ragebait, etc. I only keep using it for a couple of well-moderated groups.
What I remember is that you could push OBEX calendar objects to phones, which rarely refused them, and make people's alarms ring at 3am. Fun times!
I honestly believe everything will be normalized. A genius with the same model as me will be more productive than me, and I will be more productive than some other people, exactly the same as without AI.
If AI starts doing things beyond what you can understand, control, and own, it stops being useful: the extra capacity is wasted capacity, and there are diminishing returns on ever-growing investment needs. The margins fall off a cliff (and they're already negative), and the only economic improvement will come from Moore's-Law-style reductions in the power needed to generate output.
The nature of the work will change; you'll manage agents and whatnot. I don't have a crystal ball, but you'll still have to dive into the details to fix what AI can't, and if you can't, you're stuck.
The margins on inference definitely aren't negative. An easy way to check this is to look at the cost of using cloud-hosted open-source models, which are necessarily served at a positive margin, and are much lower in $/token than what you get from the labs.
>Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.
>people are underestimating how draining is operating and maintaining software
Yep. Many SaaS have an edge because they amortize the struggle across many customers: if a SaaS has 1000 customers, and each customer instead vibes their way into a home-built solution, each of those solutions will require its own dedicated maintenance effort. Even with AI, those efforts aren't negligible.
Many companies don't even operate any IT infrastructure themselves, cloud or otherwise, beyond office connectivity; AI replacing SaaS will require someone in charge of that, at the very least.