1) Almost all of these services have generous free tiers. Even if you're running a relatively high-traffic site, your bill will be practically $0.
2) These technologies are stable and usually not that complicated. You are missing out on a lot of productivity by not looking into them.
Understanding the "real underlying technologies" is a myth. The "underlying technologies" are constantly changing, unless you are using an extremely outdated stack: https://www.youtube.com/live/hWjT_OOBdOc?feature=share.
Your link is a guy saying people get so good with the old technologies that there is no point trying to compete with them. That only happens if they don't change much...
If you time-travelled someone from a decade ago who knows how to configure an Apache or NGinx server to today, they would likely still be able to. It may not follow the latest best practice, but at worst they'll use something that's been retired, have to google it, and be up and running in minutes.
Same with Spring/Java or .NET/ASP.NET Web API.
And you can still run a VPS or container that can handle significant traffic for free on many of the cloud platforms, and migrating it is much, much simpler.
My point is that you could make the whole "underlying technologies" argument for Apache back in the day when it was new. Why use Apache when I can understand the "underlying technologies"?
Things evolve over time and new tech slowly becomes so integrated into the stack that it is the underlying technology.
The underlying technologies very rarely change and offer very stable APIs. Apache has been around nearly 30 years, NGinx nearly 20. Both offered major advantages over existing solutions, but also interfaced with the rest of your codebase using a standard mechanism (CGI), so you could easily migrate to or from them.
A single vendor's solution will not (and very much should not) become a standard underlying technology, so this isn't moving to the next underlying technology. You're just tying yourself to one vendor.
If it gets standardized across vendors or open sourced then it becomes a different story, but until then it's a massive gamble to assume it will, and that the standard chosen will derive from your current vendor's solution. And what happens when the vendor decides to double prices and kill the free tier (also known as being acquired by Oracle)? Or deprecates it, or goes bust, etc.?
There is a difference in abstraction between hooking API Gateway to a Lambda function, and writing and deploying an API on a Linux VM.
One is closer to the metal, so to speak, and provides a better basis for understanding how these cloud services (Lambda, etc) actually work behind the scenes. In this case, understanding the underlying technologies is not a myth.
> The "underlying technologies" are constantly changing, unless you are using an extremely outdated stack
So they will also constantly change their behaviour in the edge cases. Thank you for giving me another reason for not using that crap. I'll stick with my extremely outdated stack.
Uh, what? Your product Saltcorn uses Webpack, React, Express, Docker and probably many more modern technologies. Those were just the ones I found looking at the repo for 5min.
Cloud and serverless platforms are similarly modern tech, except for deployment and hosting. Do you really think that e.g. AWS and Cloudflare "constantly change their behaviour in the edge cases"?
> Do you really think that e.g. AWS and Cloudflare "constantly change their behaviour in the edge cases"?
I mean, major Google APIs / SDKs do... And not just in edge cases. It's not at all uncommon for a vendor to decide a service is unprofitable and kill it, or to launch a new, better version and deprecate the old one, retiring it within a couple of years. That isn't fun when you heavily depend on it.
When you look at the sheer number of services AWS offers, it feels like it would only take a bad year, or a major competitor gaining an advantage and undercutting them on price, before there is a risk they consider trimming down to a smaller core set of services. I'd bet the VPS offerings aren't what goes...
Or a standard comes out, they adopt it and deprecate the existing offering, giving you two years to migrate. Having to rework everything is a major cost to large firms and can kill a startup.
Yes, but I control all those dependencies with a lock file and pinned versions. We have tests for their behaviour, including a test script in CI that boots up the service locally and connects to it, making various assertions.
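As a rough sketch of the kind of thing I mean (not the actual script; it assumes Node 18+ with a global fetch and a service that `npm start` exposes on port 3000):

    // CI smoke test sketch: boot the service locally, wait for it to answer,
    // then make assertions about its behaviour. All names here are assumptions.
    import { spawn } from "node:child_process";
    import { strict as assert } from "node:assert";

    async function main() {
      const server = spawn("npm", ["start"], { stdio: "inherit" });
      try {
        let status = 0;
        // Poll for up to 30 seconds until the service responds.
        for (let i = 0; i < 30 && status === 0; i++) {
          try {
            status = (await fetch("http://localhost:3000/")).status;
          } catch {
            await new Promise((r) => setTimeout(r, 1000));
          }
        }
        assert.equal(status, 200, "service did not come up with a 200 response");
      } finally {
        server.kill();
      }
    }

    main().catch((e) => { console.error(e); process.exit(1); });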
I think there is a difference here between the core behaviour and the edge-case behaviour. I would trust that the core behaviour does not change on a day-to-day basis, but the question is how the tools behave when you push them outside the intended core use cases. Can you really trust that services which constantly change their underlying implementation will keep working for your workflow?
TBH I would probably trust a CDN because I have a fairly simple use for such a service. If I were really pushing these tools, like running a video broadcast service or whatever, I would be much more worried.
I find it really hard to believe that you would run into rough edge cases with most platforms. If you have a small app, a lot of platforms do deploys directly from GitHub repos. If you have a more complex app, cloud platforms support things like managed Kubernetes and such.
What does something "outside the intended use case" even look like for a deployment and hosting platform?
If you're only using them for hosting and deployment, there isn't really any lock-in either. That only occurs if you're using their other cloud services, and even then there are many platforms with similar services.
I agree that paid/cloud does not necessarily equal lock-in. I mean, an Ubuntu VPS is identical enough (usually) across clouds. Managed databases? Maybe, but what about available Postgres extensions, etc.?
There are lots of cases where deployment needs some kind of customisation. This usually happens either around persistent state or around building in CI. E.g. in one project some years ago, we wanted to test in CI against a subset of the production database. So there had to be a script to subset it appropriately, which is not easy if you have complicated foreign key relationships, and you have to make sure confidential data does not make it across. We were not the only people to do this; I've seen it elsewhere.
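A hedged sketch of what such a subsetting script might look like, assuming Postgres via the node `pg` client and a toy schema with a `users` table and an `orders` table referencing it (the table and column names are made up; a real schema means walking the whole foreign-key graph in dependency order):

    // Copy a small, scrubbed sample of production data into the CI database.
    import { Client } from "pg";

    async function subsetForCI(prodUrl: string, ciUrl: string) {
      const prod = new Client({ connectionString: prodUrl });
      const ci = new Client({ connectionString: ciUrl });
      await prod.connect();
      await ci.connect();

      // Sample some parent rows, scrubbing the confidential column on the way out.
      const users = await prod.query(
        "SELECT id, 'user' || id || '@example.com' AS email FROM users LIMIT 100"
      );
      for (const u of users.rows) {
        await ci.query("INSERT INTO users (id, email) VALUES ($1, $2)", [u.id, u.email]);
      }

      // Only copy child rows whose foreign keys point into the sample.
      const ids = users.rows.map((u) => u.id);
      const orders = await prod.query(
        "SELECT id, user_id, total FROM orders WHERE user_id = ANY($1)",
        [ids]
      );
      for (const o of orders.rows) {
        await ci.query(
          "INSERT INTO orders (id, user_id, total) VALUES ($1, $2, $3)",
          [o.id, o.user_id, o.total]
        );
      }

      await prod.end();
      await ci.end();
    }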
Another example, from my previous project: the main framework is Django with a React front end. The admin interface is Django HTML templates, but one of those templates has an additional embedded React component because that admin page needed more interactivity. So that has to be built too, and all of this has to be tested in CI.
What does non-standard deployment look like? Here is one example. For my current project I use wildcard domains so users can create their own tenants on their own subdomain. This would not work on e.g. Heroku or similar platforms, at least I don't know how.
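Roughly, the idea is a host-based lookup on every request (a sketch with Express, with a made-up domain; it is not the actual implementation). The part that doesn't map onto a typical PaaS is the wildcard DNS record and the wildcard TLS certificate in front of it:

    // Sketch of wildcard-subdomain tenancy in Express: each tenant lives on
    // <tenant>.example.com and is resolved per request from the Host header.
    // Assumes a wildcard DNS record (*.example.com) pointing at this server.
    import express from "express";

    const app = express();

    app.use((req, res, next) => {
      // e.g. "acme.example.com" -> tenant "acme"; a real app would validate this
      // and look the tenant up in a database.
      res.locals.tenant = req.hostname.split(".")[0];
      next();
    });

    app.get("/", (req, res) => {
      res.send(`Hello from tenant ${res.locals.tenant}`);
    });

    app.listen(3000);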
All of this could probably be done in Kubernetes, but it can also be done much more simply with bash scripts and systemd units.
You can just deploy Docker containers to many of these services, and the versions are pinned just like you are used to. I'm not sure what side effects you are expecting here.
To the extent that you can put your functionality as a dependency in a container, that's not what I'm talking about. If you can do that, then we are all hunky-dory.
The problem is with functionality that is only available as a remote API, because for whatever reason we wanted to be cloud native rather than rely on free and open source libraries. I cannot pin that dependency version; at best I can choose which version of the API I am talking to, but if they have changed the underlying implementation then I can't ask them to roll that back just for me.
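To make the contrast concrete: with a remote API, "pinning" is limited to whatever versioning knob the vendor exposes, typically a version segment in the URL or a version header, and the implementation behind that version can still change underneath you. A purely hypothetical example:

    // Hypothetical endpoint and header names; the point is only that the version
    // selector is the vendor's to define, honour, and eventually retire.
    async function getWidgets() {
      const res = await fetch("https://api.example.com/v2/widgets", {
        headers: { "Api-Version": "2023-01-01" },
      });
      return res.json();
    }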