So it's a meaningless buzzword. Got it.
So seeing "serverless" in the title caught my attention, only to find that it too looked like just another buzzword.
It's nice that web applications are starting to pick up on this concept. Browsers are advanced enough that the entire front-end can run 100% in the browser, which shifts a good portion of the app that used to have to be scaled from the server to the client. That way you can focus on scaling your API. This split, I've noticed, also tends to make client-side web apps more responsive (even if it's just an illusion), which people appreciate.
And as wmf pointed out, we already have plenty of buzzwords for this. PaaS, IaaS, "Cloud computing", etc.
Seriously, come on. People still have to think about their servers. AWS is nice, yes, but it still uses servers, and developers and ops people still need to think about those servers.
It's not like I press the magic button and my distributed app suddenly becomes available for millions to use simultaneously. I have to think (really hard) about what services I need, how they interact, how they scale in relation to each other, how to deal with failure scenarios, etc., etc. None of this has gone away with the advent of "cloud" computing, and there is no way to invisibly scale your app (save maybe Heroku-like services, but they break down at high load).
Is it becoming easier to deploy scalable applications? Yes. But let's not confuse the fact that hourly pricing on VPS instances is still just running your app on a VPS, no matter how many of them there are. You still need to know your fair share of unix commands and you still need to be smart about the different pieces and how they interact. This is nothing new.
Let's also not forget what the cloud really is: an annoyingly overused buzzword thrown around by people who have never actually logged into a server in their lives.
I completely agree that "serverless" is an annoying buzzword and I hope it doesn't catch on, but there's a difference between this sort of environment and managing a collection of virtual machines yourself (I've done both).
Of course. That's the whole point. The abstraction layer is moving up. We're choosing from cloud "services" now to scale various pieces of our apps rather than putting them all onto servers, VPSes, AMIs, etc.
There's a long way to go, but you'll see much less (or no) need for Unix-level commands, software installations, etc.
I, for one, can't wait for the day I never need to log into a server.
Back in 2010, well before WebRTC, I posted the following use case on the WHATWG mailing list, exploring how peering could offset CDN static-asset serving costs in social games. It was a while ago, when I was a product manager and a lot less technical, so excuse some of the naive misconceptions I may have had at the time.
Anyone interested in exploring an idea like iron.io, but in their own application stack, should check out the following projects from James Halliday (substack).
It'd be awesome to see someone combine WebRTC with the basic idea behind seaport to allow semantic versioning of services provided by your application's users.
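The core idea behind seaport can be sketched in a few lines: peers register under a "name@version" spec, and clients look up a compatible provider with a semver-style range. This is a hypothetical Python illustration of the concept, not seaport's actual (Node.js) API; the names, specs, and addresses are made up.

```python
# Hypothetical sketch of a seaport-style registry: peers advertise
# "name@version", clients request "name@range" where 'x' is a wildcard
# component, loosely like a semver range such as "1.2.x".

class Registry:
    def __init__(self):
        self.services = {}  # name -> list of ((major, minor, patch), address)

    def register(self, spec, address):
        name, version = spec.split("@")          # e.g. "db@1.2.3"
        parts = tuple(int(p) for p in version.split("."))
        self.services.setdefault(name, []).append((parts, address))

    def get(self, spec):
        name, pattern = spec.split("@")          # e.g. "db@1.2.x"
        want = pattern.split(".")
        return [addr for version, addr in self.services.get(name, [])
                if all(w == "x" or int(w) == v
                       for w, v in zip(want, version))]

reg = Registry()
reg.register("db@1.2.3", "10.0.0.5:4001")
reg.register("db@2.0.0", "10.0.0.6:4002")
print(reg.get("db@1.2.x"))   # only the 1.2.x provider matches
```

The WebRTC twist would be that the "addresses" become peer connections negotiated in the browser rather than host:port pairs, with the same version-matching logic deciding which peer can serve which request.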
Every application has to deal with capacities and limits. If they didn't, they'd break.
Say your app stores a couple hundred megabytes every second, because you imagine there's no such thing as a storage limit. Even if disk capacity could be expanded fast enough to keep up and cost nothing, we still live in a universe bound by physics: I/O only goes so fast, and you will eventually have more data than you can process.
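A quick back-of-envelope calculation makes the point concrete; the 200 MB/s write rate here is just an assumed stand-in for "a couple hundred megabytes every second":

```python
# Back-of-envelope: how quickly "a couple hundred megabytes every
# second" piles up, assuming a hypothetical sustained 200 MB/s.
write_rate_mb = 200                                 # MB per second (assumed)
per_day_tb = write_rate_mb * 86_400 / 1_000_000     # 86,400 s/day; MB -> TB
per_year_pb = per_day_tb * 365 / 1_000              # TB -> PB
print(f"{per_day_tb:.1f} TB/day, {per_year_pb:.1f} PB/year")
# -> 17.3 TB/day, 6.3 PB/year
```

Petabytes per year of raw intake is well past what any single pipeline can index, scan, or back up casually, regardless of how cheap the disks are.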
We can't assume CPU or RAM capacity is limitless. If your app is cracking crypto passwords, depending on the password and method, all of AWS's collective compute cycles still might not be enough to crack one password in a reasonable amount of time. Ask the people trying to make flight price comparison engines about resource starvation.
It's not even difficult to learn how servers work and affect your application. There's really no point to this crap.
The only thing I expect from the "cloud" is proper routing.
All the rest of the storage and computing can be done at the endpoints. This isn't some revolutionary idea. It's how the internet was originally imagined by Paul Baran.
Endpoints will vary. They will have different needs and different capacities to meet them. Some may even be "services" to which others subscribe. But there's no requirement for ubiquitous middlemen. Not everything is a service that requires a third party.
What's next, TaaS? Thinking as a Service?
The future of software is one of empowerment with less third party involvement.
From the title I thought he was going to go farther towards the peer-based content-centric ideas, where the network nodes being addressed are actually data nodes rather than server nodes, and the data is distributed by peers.
Many dismiss that type of peer-based networking as something that will 'never' work for the mainstream internet, but I think you just need sophisticated encryption (bitcoin level) and good upstream bandwidth.
From a practical perspective, part of what this means is that even more seemingly-distinct services will depend on the same infrastructure. We're seeing this already, as every time AWS experiences downtime, a bunch of web services go down. It will be interesting to see what effects this has on everyday business, as outsourcing internal infrastructure (email, calendaring, file storage, etc) becomes even more popular.
Everything was on the server (the mainframe, essentially the cloud of its day) with dumb terminals, then we went to fat clients in the PC era. The web came along and everything went back to the server; then smartphones and tablets arrived, and we went back to apps with data stored on the server. Now we're going back again.
Where is this all going?
So what changed in network topologies?
Cloud is a great idea, assuming you're scratch-building for a particular environment (AWS, Azure, etc). Targeting one of those environments constrains my choice of technologies and tools, unless I have the cash for my own private cloud. I don't.
So the only real change for me in the last 20 years has been the switch to IP, and on top of that HTTP instead of TCP. My remote procedure calls still contain byte arrays, and I build all servers and their components myself.