
Show HN: Fly – A global load balancer with middleware - mrkurt
https://fly.io/
======
Xorlev
Wow, this is _really_ expensive! I'm sorry, but $0.05/1000 requests is really
really premium.

1000 requests/s is $0.05/s. 86400s/day * 30 days * $0.05 -> $129.6K/30d. 1k
req/s is not that uncommon either. And that's before 'paid middleware'!

APIs and sites that'd truly benefit from this are immediately priced out.

I see why you priced it the way you did (gotta start somewhere with such a
large upfront investment if you're not using cloud capacity), but you aren't
exactly going to get any large customers with numbers like that. Project it to
when you have ~500k req/s going through your service (778M ARR!): you cannot
expect to still be making $0.05/1000 requests.

To put it into perspective, costs to call S3 are 10x-100x cheaper (GETs are
$0.004/10,000, PUT/POST/LIST/COPY is $0.005/1,000) and S3 arguably does a lot more
per request.

I really want to like it. The service sounds _awesome_, don't get me wrong.
I'm a huge nerd for smart load balancing. I've written more load balancing code
than I care to admit, but the economics just aren't there for me to even
consider adopting your service. It's a lot cheaper for me to keep a pool of
nginx servers in AWS regions around the globe with pre-established TLS
connections back to the home DCs for much of the benefit fly.io brings.

~~~
mrkurt
This math isn't right, we bill based on total requests (not req/s capacity).
It's $0.05 per thousand requests. So 86k/day * 30 days / 1000 * $0.05 means
$129/mo.

Can we tweak the wording to clarify what we mean? We'll add examples, too. I
really hate opaque pricing and wanted to make this as simple to understand as
possible.

~~~
Xorlev
This math _is_ right. 1000 requests/s is 86.4M requests per day. So for 1000
requests in a given second, that's $0.05 per second if you substitute the
term. But to show you otherwise:

1000 req/s * 86400s * 30d / 1000 * 0.05 = 129600
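
The same arithmetic as a quick sketch (a back-of-the-envelope check of the figures in this thread, not anything from Fly's docs):

```python
# Monthly bill at a flat $0.05 per 1,000 requests (the listed price).
def monthly_cost(req_per_sec, price_per_1000=0.05, days=30):
    total_requests = req_per_sec * 86_400 * days  # 86,400 seconds per day
    return round(total_requests / 1_000 * price_per_1000, 2)

print(monthly_cost(1_000))  # sustained 1k req/s -> 129600.0
print(monthly_cost(1))      # 1 req/s, i.e. 86,400 requests/day -> 129.6
```

The second line shows where the $129/mo figure comes from: 86k requests per day, not per second.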

It's only $129/mo if it's 0.05c and not $0.05, which is not what the website
says.

~~~
mrkurt
Oh I see what you're saying. Then yes, your math is right, but our assumptions
probably don't match. :)

1. This is our earliest launch. We're very expensive for 86M requests per
day right now. We have volume pricing planned but not rolled out. If you're
interested, I'd suggest trying a low volume site to see how it feels and then
get ahold of me to talk about volume pricing.

2. Our service is designed for "valuable" requests, not any ol' asset. API
calls and pageviews are usually a good fit. Sustaining 1k "valuable"
requests/s for a full month is pretty rare. I know you and other people on HN
run that level of infrastructure, but our value prop for you won't be very
high until we're more mature.

~~~
Xorlev
I'd still say you need to take a hard look at your pricing. Let's assume we're
talking about a pretty standard Rails site on Heroku without ridiculous
traffic. Let's say 1500 requests per minute average (25/s).

25 * 86400 * 30 * 0.05 / 1000 = $3,240

I'm legitimately going to be spending 5-10x as much on my load balancer as on
2 Heroku dynos and a reasonable Heroku Postgres instance.

All I can say is that I'd recommend discounting your price by 5-10x to put it
in reach. You don't want to be the biggest line item in someone's budget;
you'll be the first to go.

Back in the early days of FullContact, we charged a _lot_ per record for our
Person API. After a lot of feedback at SXSW Interactive, we announced a 50x
price cut. I'm pretty sure that's the only reason our API took off as a
business. We optimized on volume and made it cheap to sustain thousands of
requests a second. Five years later, that's paid off for us quite well.

~~~
mrkurt
Would it surprise you if I said we're testing pricing as part of our launch? I
don't think anyone ever gets it right from the get-go. :D

Hopefully if we get to be the biggest infrastructure charge in someone's
budget, it'll be really hard to get rid of us.

5-10x discounts for volume probably aren't unreasonable.

~~~
Xorlev
Been in the startup game, totally understand. I'm presenting my view, which
could be wrong but I figured you'd like my unfiltered feedback. Like I said
before, we too had it wrong from the start and products we launched after
weren't priced correctly either. It's a fine balance to price something just
right to drive new customers, retain old, and maximize margins to ensure your
venture succeeds (or can at least take another round to continue building).

Are you guys bootstrapped? Curious to hear more on that front. :)

~~~
mrkurt
:thumbsup:

We aren't bootstrapped, we raised a seed round. Turns out when you don't need
money it's easy to get ...

I'm happy to give you all the gory details if you want to shoot me an email
(mrkurt at gmail). I think we triggered the flamewar thing on HN so I can't
respond here very fast.

~~~
qeternity
> We aren't bootstrapped, we raised a seed round. Turns out when you don't
> need money it's easy to get ...

What? Why would you raise seed if you don't need it?

~~~
chris_va
Old saying

Take money when you can get it, not when you need it.

~~~
qeternity
A runway is not money that you don't need. That's my point.

~~~
mrkurt
This is probably worthy of a blog post, but!

We took investment for a few reasons. One is diversification: we can put our
own money elsewhere. More important, I think, is that it's not weird when
someone needs to get paid, like it would be if we were personally funding the
company.

And, honestly, we wanted to be able to move faster. We can try riskier things,
be more generous with free plans, etc, etc.

------
andrewbarba
Looking at the response headers from your own site, everything is being served
up from Fastly edge servers. Why wouldn't you serve your own site from your
own infrastructure?

The reason I even bothered to look at the response headers is because I'm a
huge fan of Fastly and wanted to see what you guys were sending back to
compare. Turns out they look identical because they are Fastly headers and
Fastly edge servers.

And maybe have a look at Fastly pricing:
[https://www.fastly.com/pricing](https://www.fastly.com/pricing) $0.0075 /
10,000 requests seems much more reasonable than your $0.05 / 1,000 requests...

~~~
goodroot
Fellow from Fly here, nice to meet you.

We do serve our own site. Right now we run multiple applications on our
domain.

The landing page that you see on [http://fly.io](http://fly.io) is hosted on
GitHub Pages, which explains the Fastly headers. Our /docs/, too. You are
hitting our Load Balancer first, though.

------
secstate
I'm sure the tech is really neat, but after the last CloudFlare debacle,
piping all your HTTP requests through a for-profit company just seems like
it's ripe for abuse.

This is not meant to slander Fly, and I'm sure the devs have all the best
intentions in the world, but the internet will only become more brittle the
more services like this or CloudFlare become the arbiters of vast swaths of
TCP packets.

~~~
mrkurt
This is a real concern. We think about it a lot. In fact, we have a fairly
nice way of isolating customers who can afford global server installs from
other customers on shared infrastructure.

Unfortunately, setting servers up across the world is really complicated and
expensive. There's a definite trade-off: we want to make this kind of power
accessible to every developer, and running shared infrastructure is the only
way we've found to do it.

I do think apps are more secure by default running through fly than most load
balancer services. Traffic between visitors and our edges is SSL (unless a
customer opts out, at which point we nag them), and application processes
establish a secure tunnel back to our edge nodes to handle traffic. There are
no network hops between fly and customer apps that happen in the clear.

Replicating this with a normal load balancer setup means handling SSL
termination for visitors and terminating SSL with client cert verification
between load balancers and app servers. It's a pain, and most people don't get
it right because it's too finicky.
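
For readers who haven't wired this up before, the finicky part is the second hop: the app side of the LB-to-app leg has to *require* a client certificate. A minimal sketch of that technique with Python's stdlib (file names are hypothetical placeholders; this illustrates the general pattern, not Fly's setup):

```python
import ssl

# App-side TLS context for the load-balancer -> app hop: demand a client
# certificate (mutual TLS) so only your own load balancers can connect.
def app_server_context(ca_file=None, cert_file=None, key_file=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject peers without a valid client cert
    if ca_file:
        ctx.load_verify_locations(ca_file)        # internal CA that signs LB certs
    if cert_file and key_file:
        ctx.load_cert_chain(cert_file, key_file)  # the app's own identity
    return ctx

ctx = app_server_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

Forgetting `CERT_REQUIRED` (the default is no verification on the server side) is exactly the kind of silent mistake being described.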

But I don't want to disregard your concern either. It's definitely something
we think about a lot. We would like to figure out a way to offer all of this
power easily, cheaply (for developers), and in a way that is entirely opaque
to us. As soon as we figure out how, we will.

~~~
secstate
Thanks for the thoughtful reply :)

At the end of the day, I suppose there's a certain inevitability to the sort
of service you're providing, and I'd rather there were 20 smaller companies,
than just CloudFlare. So carry on, and I'll check out your tech when I have a
chance!

------
mrkurt
(founder here)

We've been shipping apps for many years now, and every time around we're
frustrated by a rudimentary routing layer. We think a proxy/CDN can do a lot
for devs, and Fly is an early look at the power a smarter routing layer can
give to developers.

Fly connects visitors directly to apps, handles all the chores, and provides
edge-level middleware to help devs ship features faster. It's designed to run
in place of local load balancers like nginx and haproxy, and works as both a
load balancer and CDN for dynamic apps (we have servers in 6 datacenters, more
soon).

For me, the most interesting part is the middleware, and this is what we
expect to really matter long term. You're probably familiar with local app
middleware to provide auth, modify content, do analytics, etc, etc. Most of
these can actually run at the edge, and Fly currently has middleware for:

* Server-side Google Analytics

* Identity Aware Proxy (require auth to even load an app)

* Geo data / connection speed information as headers

* Session cookie knowledge

* Routing rules

You can mix and match these. If your app is getting DDoS'ed from a botnet in
China, for example, you can use the geo data middleware + a routing rule to
send all traffic from China to a higher-capacity backend. Or, you can extract
a user_id from the session cookie and send it to Google Analytics. Or, send
authenticated users to one backend (like your app) and anonymous users to
another (like a static site hosted on GH Pages).
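
To make the mix-and-match idea concrete, here's a toy model in plain Python; the middleware names, request shape, and backend labels are all invented for illustration and are not Fly's API:

```python
# Toy model of edge middleware chaining; everything here is hypothetical.
def geo_middleware(request):
    # A real implementation would geolocate the client IP; we fake it here.
    request["headers"]["X-Geo-Country"] = request.get("country", "??")
    return request

def routing_rule(request):
    # Mix and match: geo data + session-cookie knowledge pick the backend.
    if request["headers"].get("X-Geo-Country") == "CN":
        return "high-capacity-backend"   # absorb the botnet traffic
    if request.get("session_cookie"):
        return "rails-app"               # authenticated users
    return "gh-pages-static-site"        # anonymous visitors

req = geo_middleware({"country": "CN", "headers": {}})
print(routing_rule(req))  # -> high-capacity-backend
```

The point is that each middleware only annotates the request, so rules compose freely at the edge without the app changing.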

This is something we've wanted for a really long time; we've worked on Ars
Technica, Compose, and even a bunch of low-volume apps that would have been
better with a service like this. Traffic routing is really powerful if applied
properly, and we're hoping to give any developer the same flexibility as
they'd have at one of the big 3 tech companies (who tend to build all this
themselves).

~~~
FooBarWidget
Thanks for releasing this. How does this differ from just combining a CDN like
Cloudflare, with an API gateway like Tyk or Kong?

~~~
mrkurt
Missed this yesterday! This would be more equivalent to deploying an API
gateway in multiple datacenters across the world. You _could_ do CloudFlare +
API Gateway in one datacenter, but it'd be pretty slow. One of our main
premises is that lots of code/middleware should run at the edge.

Beyond that, we have some overlap with API gateways, but there's a lot we do
that applies more to apps than APIs, like mixing and matching backends on the
same hostname, sending user analytics data to services, etc.

------
tyingq
This is very cool. I suspect though, you'll eventually be competing with CDNs
that are adding API management features.

You're starting off with middleware that's smarter than a typical CDN, but
less smart than a typical API gateway. See [https://tyk.io/tyk-
documentation/get-started/](https://tyk.io/tyk-documentation/get-started/) for
a good overview of some of the features of an API gateway.

At a high level, an API gateway already has most of what your middleware does
(tweaking requests and responses, injecting headers, modifying body,
orchestrating multiple endpoints, etc).

But, they add some things your middleware doesn't do, like smart conditional
caching of api responses, rate limits, quotas, access control + tokens, web UI
for management, and more.

You have a pretty unique value proposition now, but if the various CDNs start
to integrate API gateways, it may not be so unique.

Might be worth looking at some existing API gateway software to quickly add to
your feature list.

------
otterley
> When your visitors explore your application, their HTTPS requests arrive at
> the closest regional edge node, buff up with Middleware, then intelligently
> route to your application processes over a hardened SSH tunnel.

ssh is a terrible way to transfer bulk traffic. The send/receive buffer sizes
are fixed at small values, so you can never get the benefits of TCP window
scaling. So there will always be an artificial transfer rate ceiling that's
directly proportional to the latency between the edge node and the origin
server.
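
The ceiling being described is just window size divided by round-trip time. A rough illustration, assuming a fixed ~2 MiB channel window (in the ballpark of OpenSSH's default; the exact value varies):

```python
# Max throughput of any fixed-window protocol: window / RTT.
def ceiling_mbps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds / 1e6  # megabits per second

WINDOW = 2 * 1024 * 1024  # ~2 MiB, an assumed fixed SSH channel window

print(round(ceiling_mbps(WINDOW, 0.010)))  # 10 ms edge->origin: ~1678 Mbps
print(round(ceiling_mbps(WINDOW, 0.150)))  # 150 ms transoceanic: ~112 Mbps
```

The same window that's comfortable for a nearby origin throttles a distant one, which is why the cap scales inversely with edge-to-origin latency.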

You'll quickly find you'll have to use an ordinary TLS-over-TCP connection in
order to attain adequate performance over high-latency connections.

------
willejs
If anyone is considering this, consider fastly and custom VCLs too.

~~~
mrkurt
You can do some of what we do if you're a VCL wizard, but not much. I'm a
recovering VCL wizard, it's part of the reason I wanted to build this. :)

------
marcosscriven
I just tried this, and the IP for my test site seems to resolve to a Chicago
based ISP called Secure Central Network - is that the closest one to UK? Do
you have a list of edge locations somewhere?

The simple SSL cert setup seems novel, though the routing/middle stuff reminds
me of Fastly: [https://www.fastly.com](https://www.fastly.com)

~~~
mrkurt
That's an anycasted IP. We use one pool of IPs globally and let BGP select the
shortest route. If you run a traceroute you'll probably find that you hit a
server in Amsterdam.

~~~
subway
You're running TCP over Anycast?

Bold move.

------
moondev
Looks like a very interesting product. I can see how off-loading middleware
can really make teams more productive.

Can you go into more detail on how you are running this on k8s? A bunch of
federated clusters around the world? SSH tunnels in sidecars?

~~~
mrkurt
We're actually only running some of it in k8s; the edge nodes are
Ansible-managed bare metal. We do, however, work really really well with k8s apps.

Our apps are all k8s deployments. We have a user facing dashboard, a marketing
app, and a backend administrative dashboard. Sidecar pods would work fine for
our agents, but we actually bundle them with the apps ... this lets our edges
detect release version, branch info, etc (that stuff is easy to read from git,
hard to detect from a sidecar). Those agents connect to the nearest edge to
set up a tunnel, then we use very fast backhaul between edges to connect users
back to apps.

We actually started by running the edges as k8s nodes, and even toyed around
with federated clusters. It was a bit unwieldy and had a few too many layers
of indirection between visitors and our proxies so we simplified.

~~~
moondev
Very cool! Thanks for the info.

------
thomcrowe
I've been running Fly in beta for one of my apps and it's pretty damn cool.
Adding a couple more today.

~~~
romanovcode
I don't get it. What's the difference between this and cloudflare?

~~~
mrkurt
Fly is way more "aware" of underlying applications than Cloudflare. We offer
most of the same quick wins (easy SSL, global termination). We also replace
your local load balancers, handle service discovery (just boot an app process
with our agent), and let you run middleware at the edge.

Our site is a good example: the static docs are hosted on GitHub Pages, the
landing page is a middleware + Rack app, and our dashboard is a Rails app
hosted on Kubernetes. All of these are mounted under
[https://fly.io](https://fly.io). The edges are aware of the session cookies
Rails uses and we have routing rules to send people to the right backends if
they're logged in.

It's early, but the middleware is already super powerful.

~~~
bpicolo
I don't see docs related to service discovery. If that's part of the package,
sounds like it's more front-page worthy, even.

~~~
mrkurt
We have some docs about it here:
[https://fly.io/docs/agents/](https://fly.io/docs/agents/)

It doesn't say service discovery (it should!). This will mostly let us build
some neat features, though. We can let people build rules to send traffic to a
particular branch.

~~~
bpicolo
Phrase that in terms of A/B testing and it's probably a pretty easy selling
point.

------
garry
This lets you do smart obvious things like route /blog on your home domain to
your blog provider. There's a lot of stuff you can do that makes your website
better and faster using a smart load balancer and so this is kind of an
obvious missing link in web stacks. Glad that Kurt and Jerome are working on
this!

