AWS may be overcharging but it's a balancing act. Going on-prem (well, shared DC) will be cheaper but comes with requirements for either jack of all trades sysadmins or a bunch of specialists. It can work well if your product is simple and scalable. A lot of places quietly achieve this.
That said, I've seen real world scenarios where complexity is up the wazoo and an opex cost focus means you're hiring under-skilled staff to manage offerings built on components with low sticker prices. Throw in a bit of the old NIH mindset (DIY all the things!) and you get large blast radii, with expensive service credits being dished out to customers regularly. On the human-factors front, your team will be seeing countless middle-of-the-night conference calls.
While I'm not 100% happy with the AWS/Azure/GCP world, the reality is that on-prem skillsets are becoming rarer and more specialist. Hiring good people can be either really expensive or a bit of a unicorn hunt.
It's a chicken and egg problem. If the cloud hadn't become such a prominent thing, the last decade and a half would have seen the rise of much better tools to manage on-premise servers (= requiring less in-depth sysadmin expertise). I think we're starting to see such tools appear in the last few years, after enough people got burned by cloud bills and lock-in.
And don't forget the real crux of the problem: do I even know whether a specialist is good or not? Hiring experts is really difficult if you don't have the skill in the topic yourself, and if you do, you either don't need an expert or you'll be biased towards those who agree with you.
It's not even limited to sysadmins, or even to tech. How do you know whether a mechanic is very good or iffy? Is a financial advisor giving you good advice, or basically robbing you? It's not as if many companies are going to hire 4 business units' worth of on-prem admins and then decide which one does better after running for 3 years, or something empirical like that. You might be the poor sob who hires the very expensive yet incompetent and out-of-date specialist, whose only remaining good skill is selling confidence to employers.
> Do I even know whether a specialist is good or not?
Of course, but unless I misunderstood what you meant to say, you don't escape that by buying from AWS. It's just that instead of "sysadmin specialists" you need "AWS specialists".
If you want to outsource the job then you need to go up at least 1 more layer of abstraction (and likely an order of magnitude in price) and buy fully managed services.
This only gets worse as you go higher in management. How does a technical founder know what good sales or marketing looks like? They are often swayed by people who can talk a good talk and deliver nothing.
The good news with marketing and sales is that you want the people who talk a good talk, so you're halfway there, you just gotta direct them towards the market and away from bilking you.
At the same time, the incredible complexity of the software infrastructure is making specialists more and more useless, to the point that almost every successful specialist out there is just a disguised generalist who decided to focus their presentation on a single area.
Maybe everyone is retaining generalists. I keep being given retention bonuses every year, without asking for a single one so far.
As mentioned below, never labeled "full stack", never plan on it. "Generalist" is what my actual title became back in the mid 2000s. My career has been all over the place... the key is being stubborn when confronted with challenges and being able to scale up (mentally and sometimes physically) to meet the needs, when needed. And chill out when it's not.
I throw up in my mouth every time I see "full stack" in a job listing.
We got rid of roles... DBA's, QA teams, Sysadmins, then front and back end. Full Stack is the "webmaster" of the modern era. It might mean front and back end, it might mean sysadmin and DBA as well.
Even full stack listings come with a list of technologies that the candidate must have deep knowledge of.
> We got rid of roles... DBA's, QA teams, Sysadmins, then front and back end.
To a first approximation, those roles were all wrong. If your people don't wear many of those hats at the same time, they won't be able to create software.
But yeah, we did get rid of roles. And still require people to be specialized to the point it's close to impossible to match the requirements of a random job.
You can easily get your service up by asking Claude Code or whatever to just do it.
It produces AWS YAML that's better than many DevOps people I've worked with. In other words, it absolutely should not be trusted with even trivial tasks, but you could easily blow $100Ks per year for worse.
I've been contemplating this a lot lately, as I just did code review on a system that was moving all the AWS infrastructure into CDK, and it was very clear the person doing it was using an LLM which created a really complicated, over engineered solution to everything. I basically rewrote the entire thing (still pairing with Claude), and it's now much simpler and easier to follow.
So I think for developers that have deep experience with systems LLMs are great -- I did a huge migration in a few weeks that probably would have taken many months or even half a year before. But I worry that people that don't really know what's going on will end up with a horrible mess of infra code.
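To make "simpler" concrete, here's roughly the shape I mean: one stack, high-level L2 constructs, defaults left alone. This is a hedged, hypothetical sketch, not the actual system from that review; the queue-plus-worker resources and names are placeholders.

```typescript
// Minimal CDK sketch: a queue feeding a Lambda worker, nothing hand-rolled.
// Resource names, runtime and timeouts are illustrative placeholders.
import { Stack, StackProps, Duration } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as sqs from "aws-cdk-lib/aws-sqs";
import { SqsEventSource } from "aws-cdk-lib/aws-lambda-event-sources";

export class WorkerStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // One queue with a sane visibility timeout; everything else on defaults.
    const queue = new sqs.Queue(this, "Jobs", {
      visibilityTimeout: Duration.minutes(5),
    });

    // One worker function; the L2 construct wires up roles and permissions.
    const worker = new lambda.Function(this, "Worker", {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("dist/worker"),
      timeout: Duration.minutes(1),
    });

    worker.addEventSource(new SqsEventSource(queue));
  }
}
```

The over-engineered version of the same thing tends to show up as custom construct libraries, nested stacks and escape-hatch overrides for settings nobody asked to change.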
To me it's clear that most Ops engineers are vibe coding their scripts/yamls today.
The time it takes to get a script ready has decreased dramatically in the last 3 years. The number of problems on first deployment has also increased in the same period.
The difference between the ones who actually know what they're doing and the ones who don't is whether they will refactor and test.
> AWS may be overcharging but it's a balancing act. Going on-prem (well, shared DC) will be cheaper but comes with requirements for either jack of all trades sysadmins or a bunch of specialists
Much easier to find. Even better, they are skills existing engineers can pick up much more easily. Best of all, they are fundamental skills that will never lose their value, since those systems are what everything else is built on.
Managed servers reduce the on-prem skillset requirement and can also deliver a lot of value.
The most frustrating part of hyperscalers is that it's so easy to make mistakes. Active tracking of your bill is a must, but the data is 24-48h late in some cases. So a single engineer can cause 5-figure regrettable spend very quickly.
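The least-bad mitigation I know of is a blunt tripwire rather than anything resembling real-time tracking. A minimal sketch, assuming AWS CDK and AWS Budgets; the limit, threshold and e-mail address are placeholders, and the underlying billing data still lags:

```typescript
// Billing guard-rail sketch: alert when actual spend passes 80% of a monthly cap.
// This is an early-warning tripwire, not real-time cost tracking.
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import { CfnBudget } from "aws-cdk-lib/aws-budgets";

export class BillingGuardStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new CfnBudget(this, "MonthlyCap", {
      budget: {
        budgetType: "COST",
        timeUnit: "MONTHLY",
        budgetLimit: { amount: 50_000, unit: "USD" }, // placeholder limit
      },
      notificationsWithSubscribers: [
        {
          notification: {
            notificationType: "ACTUAL",
            comparisonOperator: "GREATER_THAN",
            threshold: 80, // percent of the limit
            thresholdType: "PERCENTAGE",
          },
          subscribers: [
            { subscriptionType: "EMAIL", address: "finops@example.com" },
          ],
        },
      ],
    });
  }
}
```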
I've seen this in startups with 7 figure ARR (where annual cloud costs were also 7 figures).
Also seen that at an F500, where a single architect caused a 5-figure mistake that got cloud privileges removed from the entire architecture team. Can't make it up.
7 figures is single digit millions. 5 figures is a single silly mistake. Just enabling more verbosity on logs is enough to trigger that in a day. Such mistakes would be found monthly. We could have easily had 10% more engineers with the same budget if it weren't for lighting our runway on fire that way.
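For a sense of how fast the log example bites, a back-of-envelope calc; the rate and volume are assumptions for illustration, not figures from that company:

```typescript
// Back-of-envelope only. Assumes CloudWatch Logs standard ingestion at roughly
// $0.50/GB (the published us-east-1 rate last I checked) and a hypothetical
// fleet that starts emitting ~20 TB/day of extra debug logs.
const ingestionUsdPerGb = 0.5;     // assumed rate
const extraDebugGbPerDay = 20_000; // ~20 TB/day across the fleet, hypothetical
console.log(`~$${(ingestionUsdPerGb * extraDebugGbPerDay).toLocaleString()} per day`);
// => ~$10,000 per day: five figures before the billing data even catches up
```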
It depends upon how many resources your software needs. At 20 servers we spend almost zero time managing our servers, and with modern hardware 20 servers can get you a lot.
It's easier than ever to do this, but people are doing it less and less.
Just get on the road with a 3/4/5G connection on a mobile phone if you want to understand why we still need to design for "iffy" Internet. So many applications have a habit of hanging when the connection isn't formally closed but you're going through a spotty patch. Connections to cell towers with full bars and backhaul issues are surprisingly common. It's a real problem when you're dealing with streaming media (radio can be in the low kbps) or even WebSockets.
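The usual defence is an application-level heartbeat with a watchdog timer, so a silently dead link gets torn down and retried instead of hanging forever. A minimal client-side sketch, assuming Node and the ws package; the URL and intervals are placeholders:

```typescript
// Watchdog for a WebSocket client on a flaky link: if no traffic (data, ping
// or pong) arrives within the window, kill the socket and reconnect instead of
// waiting for a FIN/RST that may never come.
import WebSocket from "ws";

const STALE_AFTER_MS = 30_000;   // assume the server sends something at least every 15s
const RECONNECT_DELAY_MS = 5_000; // real code would add back-off and jitter

function connect(url: string): void {
  const ws = new WebSocket(url);
  let watchdog: NodeJS.Timeout | undefined;

  const resetWatchdog = () => {
    clearTimeout(watchdog);
    watchdog = setTimeout(() => ws.terminate(), STALE_AFTER_MS);
  };

  ws.on("open", resetWatchdog);
  ws.on("message", resetWatchdog);
  ws.on("ping", resetWatchdog);
  ws.on("pong", resetWatchdog);

  ws.on("close", () => {
    clearTimeout(watchdog);
    setTimeout(() => connect(url), RECONNECT_DELAY_MS);
  });
  ws.on("error", () => ws.terminate());
}

connect("wss://example.com/stream"); // placeholder endpoint
```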
That mention of EPS takes me back, we used to use it all over the place to form basic hub-and-spoke networks in areas where we had lots of small sites that would all connect to a single exchange. It would generally bounce along at 2Mbps which wasn't bad in those days.
We also had some large campus type sites where we would sometimes implement EPS to do LAN extension over the onsite twisted pair as it was cheaper than installing fibre and just about fast enough.
OpenTherm is a cool idea but even new installations aren't always wired for it. When installing a new smart thermostat I found the installation had been wired as S-Plan, with the few cables running between the boiler location and the valve location already consumed. It makes the job much bigger if you're not prepared for it.
As someone who's been involved with radio and occasionally podcasts for about 20 years... I'm struggling to see the benefit of this one. Yes, prep services have existed in the past and I'm sure continue to exist today. Xtrax rings a bell from years gone by.
But honestly, if you're going to be interviewing someone and the content is going to be engaging, you can't just wing it from some LLM output. Talking to a politician, you're going to need knowledge of their past actions, figures to challenge them on, etc. For music guests, you'll want a bit of knowledge about the band and the key figures and moments throughout their story. I'd hope anyone using the LLM crib sheet is also being reactive to what their guests say (e.g. "you touched on X but when you were Chief of X...").
Interviews aren't my strength but I'd be wary of such a service. Combined with the usual AI hallucinations it could be quite the entertaining car crash.
Thanks for the feedback! I really appreciate it since it's coming from someone who was in the industry for many years.
I see your biggest concern is AI hallucinations, right?
I'm not using just an LLM. I added a service that gives the LLM up-to-date knowledge from the web. That reduces hallucinations a lot. Can I guarantee no hallucinations at all? No, I can't.
Where I see the most value in PodcastPrepper is being able to process dozens of sources from the web in parallel and create a report on the guest in about 3 minutes.
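The general shape of that step, heavily simplified (not the production code; the LLM client, prompt and truncation limits here are placeholders):

```typescript
// Sketch only: fetch a batch of sources about the guest in parallel, then hand
// the text to a model for a single prep report. The model client is passed in
// as a plain function because no specific provider is assumed here.
async function buildGuestReport(
  guestName: string,
  sourceUrls: string[],
  callLlm: (prompt: string) => Promise<string>,
): Promise<string> {
  // Pull every source at once and tolerate individual failures.
  const pages = await Promise.allSettled(
    sourceUrls.map(async (url) => {
      const res = await fetch(url);
      return { url, text: await res.text() };
    }),
  );

  const corpus = pages
    .filter(
      (p): p is PromiseFulfilledResult<{ url: string; text: string }> =>
        p.status === "fulfilled",
    )
    .map((p) => `SOURCE: ${p.value.url}\n${p.value.text.slice(0, 5_000)}`)
    .join("\n\n");

  return callLlm(
    `Write interview prep notes on ${guestName} from the sources below. ` +
      `Attribute each claim to its source so the host can verify it.\n\n${corpus}`,
  );
}
```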
No. The biggest concern is that the conversation is going to be dull as heck, because all you've got is a list of AI-generated topic starters rather than any meaningful capacity for conversation or any meaningful structure to it (whether narrative or pedagogical).
And if you are marketing this as taking 3 minutes and saving 95% of your time then this means it saves all of... one hour. Not exactly the bulk of the time spent producing a podcast episode.
It's worth remembering that Radio Garden is now gubbed for transatlantic listening from the UK due to music licencing issues. The same problem also impacts TuneIn.
> "Users in the United Kingdom are restricted from tuning in to stations outside of the UK for an indefinite period due to copyright and neighboring rights related matters that require clarification.
> Stations situated in the UK continue to be available.
> For more information please read the statement in the 'Settings' section."
I was on-call for over a decade, usually in roles where there was no compensation for working out of hours other than maybe TOIL. We're not talking FAANG gigs here - like £20-50k in the UK stuff. It's amazing how much having to carry an extra phone or making sure your laptop is in your car impacts your day-to-day life. Any social thing you're at could be interrupted at zero notice. Heck, I've taken calls in supermarkets and concert venues.
One place I worked had a 1 in 2 rotation. Every other week on call or weeks back to back if your colleague was on holiday. There was no front-line service screening calls which meant you could be woken several times in one night. All for £30 pcm towards broadband costs.
Most places are more sane than that example but suffer from the same core problem. Follow-the-sun support is incredibly expensive compared to putting your existing staff on call. Here in the UK, so long as your equivalent hourly rate doesn't drop below national minimum wage and you're opted out of the working time directive (a lot of employers slip an opt-out form into your paperwork, implying it's normal to sign it), then it's legal.
Unfortunately, I've yet to find anywhere where on-call operational teams have the clout to get code-induced issues high up the priority list, outside of cases where they've had to drag developers out of bed at 2am. In my experience that also plays out with getting anything infrastructure-based into tech debt budgets. Why focus on fixing problems you don't directly suffer from when you can spend the time on a refactor, integrating a cool new library or spaffing out one more feature in the sprint?
Indeed. The few times I've encountered Rust in the wild it's been for a project that didn't need it (web or IO-bound applications) and someone's "My First Rust Project". It's difficult, or at times even beyond the budget of smaller organisations, to then hire a seasoned Rust dev to unpick whatever mess you got into.
Don't get me wrong, Rust has a niche where it's the right choice. But being a popular language of the day, it's getting used a lot in the wrong places.
Project that does not need Rust ("web, or IO") -> What would you actually choose to make an API then? Python? Ruby? Did you compare benchmarks from Rust servers to Python servers? Did you actually feel the difference?
Most web servers aren't doing anything computationally complex and there's a lot of tech to help you scale to multiple servers, so single server performance usually isn't really critical.
Web stuff is about developer speed. So familiarity, libraries, and tooling. There are plenty of good options.
Anything that needs to be performant can go in its own service.
I will say that a Rust service, even when doing relatively simple stuff, can scale well in the cloud due to its small memory footprint, fast startup time when auto-scaling in containers, and no need for a JIT to "warm up" before it has high throughput and consistent low latency. Building something with these qualities on a framework like Rocket is pretty straightforward, IME.
Building a web app with Rocket is like taking a fine BMW onto the highway. It just feels right - I love the Guards and whatnot he added to his framework; absolutely refreshing work!
Music rights are a bit of a mess across the Atlantic. PRS/PPL licences don't reciprocate with the American equivalents and vice versa. Either it's two sets of licences (and possibly legal entities) or you end up geo blocking. Though on the up side you can stream to parts of Europe and even Australia from the UK IIRC.
Licencing bodies are also fairly actively monitoring these things. I've had them try to chase me for royalties for services that have been shut down because they're still listed in public directories.
It's a real problem they need to get on top of as an organisation. Unceremoniously pulling the plug on services like IoT Core on tight timescales doesn't scream "reliable, sustainable platform I can pitch to my bosses".
Even with sizeable discounts and professional services funds for migration, I wouldn't consider a move to Google Cloud until they calm this down. This is coming from someone that once worked for a Google Cloud Partner.