Yes, you are on the money. A cloud service provider needs to maintain reliability first and foremost, which means they won't have a runtime dependency on their billing system.
This means that billing happens asynchronously. You may use queues, you may do batching, etc. But you won't have a real-time view of the costs.
>they won't have a runtime dependency on their billing system
Well, that makes sense in principle, but they obviously do have some billing check that prevents me from making additional requests after that "final query". And they definitely have some check to prevent me from overutilizing my quota when I have an active monthly subscription. So whatever it is that they need to do, when I prepay $x, I'm not ok with them charging me more than that (or I would have prepaid more). It's up to them to figure this out and/or absorb the costs.
They do have a billing check, but that check is looking at "eventually consistent" billing data which could have arbitrary delays or be checked out-of-order compared to how it occurred IRL. This is a strategy that's typically fine when the margin of over-billing is small, maybe 1% or less. I take it from your description that the actual over-billing is more like dozens of dollars, potentially more than single-digit percentages on top of the subscription price. Here's hoping they tighten up metering <> billing.
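To make the failure mode concrete, here's a toy model (hypothetical numbers, nothing like a real metering pipeline): requests are metered into a queue, but the spend check only sees usage that has already been flushed, so it passes long after the true total crosses the cap.

```python
# Toy model of an "eventually consistent" billing check.
# Metered events sit in a queue before billing can see them,
# so the cutoff fires late by roughly the flush delay.

from collections import deque

def run_requests(cap_dollars, cost_per_request, flush_delay):
    """Count how many requests get through before the stale check trips."""
    pending = deque()   # metered events not yet visible to billing
    billed = 0.0        # what the billing check can actually see
    served = 0
    while billed < cap_dollars:           # the "billing check"
        pending.append(cost_per_request)  # serve and meter the request
        served += 1
        if len(pending) > flush_delay:    # events flush only after a delay
            billed += pending.popleft()
    return served, served * cost_per_request

served, true_cost = run_requests(cap_dollars=10.0, cost_per_request=1.0, flush_delay=5)
print(served, true_cost)  # 15 requests served, $15.0 spent against a $10 cap
```

The overshoot is roughly the amount stuck in the delayed pipeline, which is why this strategy is only acceptable when that slack is a small fraction of the bill.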
Then the right thing for them to do, from a consumer standpoint, is to factor that overbilling into their upfront pricing, rather than surprising people with bills that they were led not to expect.
> they obviously do have some billing check that prevents me from making additional requests after that "final query"
No they don't actually! They try to get close, but it's not guaranteed (for example, make that "final query" to two different regions concurrently).
Now, they could stand up a separate system with a guaranteed fixed cost, but few people want that and the cost would be higher, so it wouldn't make the money back.
You can do it on your end though: run every request sequentially through a service and track your own usage, stopping when reaching your limit.
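That client-side gate can be as simple as a wrapper that serializes requests and refuses to send once your own running total hits the limit. A minimal sketch (the `BudgetGate` name and per-call cost estimate are illustrative, not any provider's API):

```python
import threading

class BudgetGate:
    """Serialize requests and stop once our self-tracked spend hits the limit."""

    def __init__(self, limit_dollars):
        self.limit = limit_dollars
        self.spent = 0.0
        self.lock = threading.Lock()  # forces one request at a time

    def call(self, request_fn, estimated_cost):
        with self.lock:
            if self.spent + estimated_cost > self.limit:
                raise RuntimeError("budget exhausted; refusing request")
            result = request_fn()        # actually issue the request
            self.spent += estimated_cost
            return result

gate = BudgetGate(limit_dollars=5.0)
gate.call(lambda: "ok", estimated_cost=2.0)    # allowed, total $2
gate.call(lambda: "ok", estimated_cost=2.0)    # allowed, total $4
# gate.call(lambda: "ok", estimated_cost=2.0)  # would raise: $6 > $5
```

Since you check before sending and never run requests concurrently, you can't blow past your own cap the way the provider's eventually consistent check can.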
> Decades ago companies would train new hires out of college, but that trend ended in the 90s.
Decades ago engineering salaries were a fraction of what they are today, developing countries did not have computing and educational infrastructure, and we had worse solutions to the logistics challenges from off-shoring.
It is increasingly difficult to justify the US salaries and I'm not sure that the talent pipeline in the US is so superior to make up for it.
As folks optimize for getting these high paying jobs it is increasingly difficult to find someone who has legitimate problem solving skills vs someone who has invested a lot of effort into looking hireable.
> Decades ago engineering salaries were a fraction of what they are today
That may be so at the FAANGs and the companies trying to become FAANG-like, but I'm not sure that's the case for people working at more ordinary places.
3 decades ago I worked at a place making retail software for Windows and Mac. I had ~15 years experience. The net tells me that today median total compensation for that kind of job at that experience level would be around $150k/year.
I made $63k that year. My recollection is that this was a pretty normal amount at the time.
If we adjust using CPI that would be $132k now, which suggests my salary then was about 12% lower than an equivalent salary today. However, if we adjust using the indexing factors [1], which are usually considered better than CPI for comparing wages across time, my $63k then is equivalent to $169k now.
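For reference, the arithmetic (the two adjustment factors here are back-derived from the quoted $132k and $169k figures, roughly 2.1x for CPI and 2.68x for wage indexing):

```python
salary_then = 63_000
median_today = 150_000

cpi_factor = 2.1          # approx CPI growth over ~3 decades (implied by the quoted $132k)
wage_index_factor = 2.68  # approx wage-index growth (implied by the quoted $169k)

cpi_adjusted = salary_then * cpi_factor          # ~$132k
wage_adjusted = salary_then * wage_index_factor  # ~$169k

gap_vs_median = (1 - cpi_adjusted / median_today) * 100  # ~12% below today's median
print(round(cpi_adjusted), round(wage_adjusted), round(gap_vs_median))
```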
Are you one of those developers that hates debuggers and stack traces, and would rather spend three hours looking at the output or adding prints for something that would take 5 minutes to any sane developer?
This is very much a tangent, and was asked in bad faith, but I’ll answer anyways!
One of the interesting things about working on distributed systems is that you can reproduce problems without having to reproduce or mock a long stack trace.
So I certainly don’t see the case you’re talking about where it takes hours to reproduce or understand a problem without a debugger. Of course there are still many times when a debugger should be consulted! There is always a right tool for a given job.
“I have a PR from <feature-branch> into main. Please break it into chunks and dispatch a background agent to review each chunk for <review-criteria>, and then go through the chunks one at a time with me, pausing between each for my feedback”
I love ASCII diagrams! The fact that I can write a diagram that looks equally wonderful in my terminal via cat as it does rendered on my website is incredible.
A good monospaced font and they can look really sharp!
I will definitely give this tool a shot.
I will also shout out monodraw as a really nice little application for building generic ASCII diagrams: https://monodraw.helftone.com/
Like all other job functions tangential to development, it can be difficult to organize the labor needed to accomplish this within a single team, and it can be difficult to align incentives when the labor is spread across multiple teams.
This gets more and more difficult with modern development practices. Development benefits greatly from fast release cycles and quick iteration; the other job functions do not! QA is certainly included there.
I think that inherent conflict is what is causing developers to increasingly manage their own operations, technical writing, testing, etc.
In my experience, what works best is having QA and a tech writer assigned to a few teams. That way there is an ongoing, close relationship that makes interactions go smoothly.
In a larger org, it may also make sense to have a separate QA group that handles tasks across all teams and focuses on the product as a unified whole.
I can’t imagine any role in software that does better delivering more work in longer cycles than less work in shorter cycles.
And I can’t speak for technical writing, but developers do operations and testing because automation makes it possible and better than living in a dysfunctional world of fire and forget coding.
In my experience (this was about 5 years ago mind you) it was no more complex than an arch installation, but with a smaller community and less documentation.
Ok cool - yeah that's a very reasonable place to start. I've been toying with ideas for "AI driven UIs" but haven't really experimented with much concrete yet. I feel like there's a lot of space to play here though. Eg. let the AI also create controls that are backed by prompts, etc.
In my experience, people building APIs w/ python are almost always using frameworks, while people building APIs w/ golang are almost always using the stdlib.