
Ask HN: Low latency minimalist FaaS provider. Thoughts? - devopsismylife
I’m frustrated with the high latency and complexity of the larger hosting providers. The smaller providers I’ve seen restrict language choice or have high latency themselves. I’d love to provide minimalist, low-latency, language-agnostic pseudo-FaaS hosting. Something along the lines of “give us an Alpine Linux binary and supporting files and we'll run it in a managed container with some RAM and low-latency disk space”.

Thoughts? In particular, am I nuts to bemoan 200ms+ (and highly variable) server-side latencies? Sometimes I feel like an old man shouting into the wind...
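(To make the complaint concrete, here’s a rough sketch of how I’d sample the tail rather than the mean; `handler` is a stand-in for a real request, so this is illustrative only.)

```python
# A hedged sketch: sample tail percentiles of a handler instead of the
# mean, since it's the variance (p99) that the complaint is about.
import statistics
import time

def measure(handler, n=1000):
    """Call handler n times; return (p50, p99) latency in milliseconds."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        handler()
        samples.append((time.perf_counter() - t0) * 1000)
    cuts = statistics.quantiles(samples, n=100)  # 99 percentile cut points
    return cuts[49], cuts[98]                    # p50, p99
```

Against a real endpoint, `handler` would wrap an HTTP request; a steady p50 with a p99 several times higher is exactly the “highly variable” behavior I’m bemoaning.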
======
rwdim
I love the concept of FaaS, but the rush to reduce everything to *aaS has
omitted intelligent scaling in favor of code atomicity.

Let’s say your function operates on a huge dataset, and requires 120ms
responses.

How can you do this consistently on a virtualized, shared-tenancy platform like
AWS, where (a) you don’t control the ‘ticks’ allotted to your vCPU and (b) you
have no control over, or even insight into, the co-tenancy of the host
machine?

Easy answer. You can’t.

IMHO, since you can’t control whether another process is awarded priority for
vCPU ticks OR memory, it can’t be done consistently over time on any non-
dedicated platform.

I have two full racks of servers, and I have been regularly pricing the
clouding of those server functions for years. Not once has the cost of
moving to the cloud gone down. At the moment, moving all my servers to the
cloud would cost $18.8k/mo versus the $2k I pay now, and I would lose
all control over the throughput of the instances, because a profit-only-
motivated host has the final say over my vCPU and IOPS ratios.

Since my hardware hasn’t changed in several years, and the cost of clouding
has increased over 100%, the only explanation is that providers are slowly and
consistently slowing down vCPUs to maximize profits.

They may be lowering the per-minute cost, but they’re lowering the vCPU tick
ratios much faster. There just isn’t any other explanation.

For core functions, where latency is key, you either need to find a way to
scale the sensitive functions across multiple instances, or host your own
fully controlled instances and route only latency sensitive functions to them
instead of the cloud.

In the large-dataset case I mentioned above, you might consider a dispatcher
that spawns multiple FaaS requests, each scoped to a specific range of the
data. That minimizes the work each call must do by systematically shrinking
the slice it operates on, and spreads the load across (hopefully) multiple
virtual hosts that may or may not be on different physical servers.
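That scatter-gather pattern can be sketched roughly like this; `invoke` is a hypothetical stand-in for a single FaaS call scoped to a subrange, so treat this as an illustration, not an implementation:

```python
# Rough sketch of scatter-gather: split the key range across several
# function invocations so each call scans less data.
from concurrent.futures import ThreadPoolExecutor

def scatter_gather(invoke, lo, hi, shards=4):
    """Split [lo, hi) into `shards` subranges, call invoke(lo, hi) on
    each concurrently, and merge the partial results in order."""
    step = (hi - lo + shards - 1) // shards
    ranges = [(s, min(s + step, hi)) for s in range(lo, hi, step)]
    with ThreadPoolExecutor(max_workers=len(ranges)) as pool:
        parts = pool.map(lambda r: invoke(*r), ranges)
    return [item for part in parts for item in part]
```

The win, if any, comes from each `invoke` touching a fraction of the dataset; the variance problem above doesn’t go away, but the per-call work shrinks.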

Or, you could spend $100 a month on a dedicated machine at a colo provider
that is honest and well known, and I believe you will find over time what I
have...

Profit comes from having control of your on-premise cloud, not farming it out.

Cheers

~~~
devopsismylife
Thanks for the long and thoughtful response! Sounds like there’s scope for a
higher-value *aaS product that offers some guarantees about co-tenancy, or at
least response times.

~~~
rwdim
There may be, but unless you scope your guarantees around a common benchmark,
with anything outside that falling on the developer, you may have problems.
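One hedged sketch of what “scoping around a benchmark” could mean: the latency guarantee only applies to handlers that complete a reference workload within a budget. All names here are hypothetical:

```python
# Hypothetical SLO gate: the guarantee covers only workloads whose
# handler finishes a reference benchmark within `budget_ms`.
import time

def _timed_ms(handler):
    t0 = time.perf_counter()
    handler()
    return (time.perf_counter() - t0) * 1000

def within_benchmark(handler, budget_ms, trials=5):
    """Run the reference workload a few times and keep the best run,
    so a single scheduling hiccup doesn't disqualify a handler."""
    best = min(_timed_ms(handler) for _ in range(trials))
    return best <= budget_ms
```

Anything that fails the gate is outside the guarantee and on the developer, which keeps the promise testable from both sides.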

You may also have to over-engineer your FaaS platform to ensure spin-up times
for new instances stay linear at 2 to 3x load, and that you are relatively
immune to DoS attacks, both technically and within the scope of your
guarantee.
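On the DoS point, one common ingredient (a sketch, not a full answer) is a per-client token bucket, so a burst from one caller can’t eat everyone else’s latency budget:

```python
# Minimal token bucket: refill continuously at `rate` tokens/second,
# allow bursts up to `capacity`, reject once the bucket is empty.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

You’d keep one bucket per client key (API token, source IP); it bounds the damage one tenant can do, which is half of what the guarantee needs.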

You should always have an “out”... Whether you use it is another thing
entirely.

Cheers!

