Why The Future Of Software And Apps Is Serverless (readwriteweb.com)
36 points by hausburger on Oct 15, 2012 | 21 comments

> The phrase “serverless” doesn’t mean servers are no longer involved.

So it's a meaningless buzzword. Got it.

Recently I've been fascinated with software architectures that shift the locus of control back to the client, instead of the server/cloud.

So seeing "serverless" in the title caught my attention, only to find that it, too, looked like just another buzzword.

Agreed. I was thinking of workloads being divided up and sent to clients for distributed processing, which relieves the load on servers, not "u shuld host ur site in teh CLOUD LOL!!" What garbage.

It's nice that web applications are starting to pick up on this concept. Browsers are advanced enough that the entire front-end can run 100% in the browser, which shifts a good portion of the app that used to have to be scaled from the server to the client. That way you can focus on scaling your API. I've noticed this split also tends to make client-side web apps feel more responsive (even if it's just an illusion), which people appreciate.

Yep. This really is a missed opportunity; I'd have been really interested in how to build application-specific or generic P2P networks to do that kind of thing, at the smaller or the wider scale.

It's abstracting above having to manage and think about servers. Just as most developers stopped having to think about registers or print drivers or low-level network protocols or memory pages. You still have all these but you gain big by having technologies that let you abstract away from these levels of detail. Buzzwordy, yeah sure. But the meaning and implications are pretty powerful.

Except it doesn't really. You still have to deal with the important things. All that's removed is the tediousness of the physical server maintenance. You still need to architect for redundancy, deal with latency, client/server caching, etc. Your app cares just as much about the server as it always did and your server-side code needs to be as robust as if it were running on dedicated hardware (perhaps more robust).

And as wmf pointed out, we already have plenty of buzzwords for this. PaaS, IaaS, "Cloud computing", etc.

But we already have a name for that: PaaS.

Ok, so 20 years ago, we all had to rely on lowly, pathetic servers. But now, thanks to THE CLOUUUUDD (read: hourly pricing on servers) there is a nebulous sludge of buzzwords running your applications, instead.

Seriously, come on. People still have to think about their servers. AWS is nice, yes, but it still uses servers, and developers and ops people still need to think about those servers.

It's not like I press a magic button and my distributed app suddenly becomes available for millions to use simultaneously. I have to think (really hard) about what services I need, how they interact, how they scale in relation to each other, how to deal with failure scenarios, etc. None of this has gone away with the advent of "cloud" computing, and there is no way to invisibly scale your app (save maybe Heroku-like services, but they break down at high load).

Is it becoming easier to deploy scalable applications? Yes. But let's not confuse the fact that hourly pricing on VPS instances is still just running your app on a VPS, no matter how many of them there are. You still need to know your fair share of unix commands and you still need to be smart about the different pieces and how they interact. This is nothing new.

Let's also not forget what the cloud really is: an annoyingly overused buzzword thrown around by people who have never actually logged into a server in their lives.

If you're using App Engine, the number of frontend servers ("instances") is a config parameter. The fact that your app runs on multiple servers is implicit in the APIs and runtime environment, because they don't allow you to do some things. But you don't know the names of the machines running your app, you don't get a server-side shell, and you don't have any unix commands to run (except on your workstation, if you use unix). Instead, you upload your app and set the version that's live.
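For a sense of what "the number of instances is a config parameter" looks like in practice, here is a rough sketch using today's App Engine `app.yaml` syntax (in 2012 this lived in the admin console instead; the specific values are illustrative):

```yaml
# Hypothetical app.yaml fragment: scaling is declared, not managed.
runtime: python39
automatic_scaling:
  min_instances: 1
  max_instances: 10
  target_cpu_utilization: 0.65
```

You never name or log into the machines behind those instances; you just adjust the numbers and redeploy.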

I completely agree that "serverless" is an annoying buzzword and I hope it doesn't catch on, but there's a difference between this sort of environment and managing a collection of virtual machines yourself (I've done both).

"AWS is nice, yes, but it still uses servers"

Of course. That's the whole point. The abstraction layer is moving up. We're choosing from cloud "services" now to scale various pieces of our apps rather than putting them all onto servers, VPSes, AMIs, etc.

There's a long way to go, but you'll see much less (or no) need for unix-level commands, software installations, etc.

I, for one, can't wait for the day I never need to log into a server.

This article has the right title and discusses the idea of services, not servers, but misses an even bigger opportunity to discuss how you can legitimately ditch lots and lots of servers. WebRTC means that every application can have aspects of peering and reduce the need for centralization in servers.

Back in 2010, well before WebRTC, I posted the following use case on the WHAT-WG mailing list, exploring the use of peering to offset CDN static-asset serving costs in social games. It was a while ago, when I was a product manager and a lot less technical, so excuse some of the naive misconceptions I may have had at the time. http://lists.w3.org/Archives/Public/public-whatwg-archive/20...

Anyone interested in exploring an idea like iron.io, but in their own application stack should check out the following projects from James Halliday (substack)

It'd be awesome to see someone combine WebRTC with the basic idea behind Seaports to allow semantic versioning of services provided by your application's users.

The phrase “serverless” doesn’t mean servers are no longer involved. It simply means that developers no longer have to think that much about them. Computing resources get used as services without having to manage around physical capacities or limits.

Every application has to deal with capacities and limits. If they didn't, they'd break.

Say your app just stores a couple hundred megabytes every second, because you imagine there's no such thing as a storage limit. Let's imagine disk storage could be expanded fast enough to support this and costs zero money. We still live in a universe bound by physics, and storage only goes so fast: you will eventually have too much data to process.
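The arithmetic behind that point is stark. A back-of-the-envelope sketch (the 200 MB/s write rate is the hypothetical from above, not a measurement):

```python
# Back-of-the-envelope: how fast "unlimited" storage fills up
# at a hypothetical sustained write rate of 200 MB/s.

MB_PER_SEC = 200
SECONDS_PER_DAY = 86_400

mb_per_day = MB_PER_SEC * SECONDS_PER_DAY       # 17,280,000 MB
tb_per_day = mb_per_day / 1_000_000             # ~17.3 TB per day
pb_per_year = tb_per_day * 365 / 1_000          # ~6.3 PB per year

print(f"{tb_per_day:.2f} TB/day, {pb_per_year:.2f} PB/year")
```

Even if the disks were free and infinite, you would still have petabytes per year to index, back up, and process.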

We can't assume CPU or RAM capacity is limitless. If your app is cracking crypto passwords, depending on the password and method, all of AWS's collective compute cycles still might not be enough to crack one password in a reasonable amount of time. Ask the people trying to make flight price comparison engines about resource starvation.

It's not even difficult to learn how servers work and affect your application. There's really no point to this crap.

Well, what if I don't need the "cloud" to run virtual machines? What if I have many "machines" running on my computer, isolated kernels in userspace? What if I have all the computing power I need at my fingertips, literally? Is that sort of control and transparency not useful? How many more security breaches do we need to see before we acknowledge that delegating our storage and computing needs may not be the wisest option in all cases?

The only thing I expect from the "cloud" is proper routing.

All the rest of the storage and computing can be done at the endpoints. This isn't some revolutionary idea. It's how the internet was originally imagined by Paul Baran.

Endpoints will vary. They will have different needs, and different capacities to meet them. Some may even be "services" to which others subscribe. But there's no requirement for ubiquitous middlemen. Not everything is a service that requires a third party.

What's next, TaaS? Thinking as a Service?

The future of software is one of empowerment with less third party involvement.

The main point, I think, is that services with web APIs are taking the place of individually managed servers. That's a fair point, and a real trend, because it does make sense a lot of the time.

From the title I thought he was going to go farther, toward the peer-based, content-centric ideas, where the network nodes being addressed are actually data nodes rather than server nodes, and the data is distributed by peers.


Many dismiss that type of peer-based networking as something that will 'never' work for the mainstream internet, but I think you just need sophisticated encryption (Bitcoin-level) and good upstream bandwidth.

In other words: PaaS or SaaS will become much more popular to build on, and developers will spend less time working with servers or IaaS. Industry trends continue. We need an article for this?

From a practical perspective, part of what this means is that even more seemingly-distinct services will depend on the same infrastructure. We're seeing this already, as every time AWS experiences downtime, a bunch of web services go down. It will be interesting to see what effects this has on everyday business, as outsourcing internal infrastructure (email, calendaring, file storage, etc) becomes even more popular.

I have a problem with the terminology in the article. For me, a server and the 'cloud' both refer to remote computing resources, separate from clients. The Venn diagram overlaps a bit :-)

This is just a pendulum that swings back and forth, with 'serverless' as a new buzzword.

Everything was on the server (mainframe) with dumb terminals, where the mainframe was essentially the cloud. Then we moved to fat clients in the PC era. The web came along and everything went back to the server. Then smartphones and tablets arrived, and we went back to apps with data stored on the server. Now we're swinging back again.

Where is this all going?

I didn't get it. After all, I've been coding distributed applications since the early '90s.

So what changed in network topologies?

Nothing. Like you, my world hasn't changed in that time, and the cloud hasn't affected me either.

Cloud is a great idea, assuming you're building from scratch for a particular environment (AWS, Azure, etc.). Targeting one of those environments constrains my choice of technologies and tools, unless I have the cash for my own private cloud. I don't.

So the only real change for me in the last 20 years has been the switch to IP, and on top of that HTTP instead of TCP. My remote procedure calls still contain byte arrays, and I build all servers and their components myself.

You don't have to worry about IPX/SPX or Token Ring any longer... that's progress.

The first thing popped in my head after reading the title: "Your mom is serverless"
