Show HN: Fanout – Reverse proxy to push real-time data to connected devices (fanout.io)
84 points by jkarneges on Sept 28, 2017 | hide | past | web | favorite | 38 comments

Hi folks!

Fanout founder here. Though we've been around for years, we are excited to finally submit our root URL on HN!

If the name seems familiar, it could be because you've seen our blog posts or the Pushpin open source project. Fanout is the company behind Pushpin, and we monetize it as part of a cloud service, effectively a "push CDN". Our goal is to become the Akamai of data push.

For some history, here are the other two Fanout submissions that have been made:

https://news.ycombinator.com/item?id=7567417 (TechCrunch article when we went public beta, 2014)

https://news.ycombinator.com/item?id=3701703 (Tell HN, no URL, 2012)

As a Show HN, this post is about everything we've built since that ancient Tell HN. We hope you find Fanout intriguing and we look forward to any and all feedback.

If you want to get right into it, here's the dev guide: https://fanout.io/docs/devguide.html

At several times in this thread (including parent comment) you refer to your service as a "push CDN". You compare it to Fastly in one response.

CDNs have many well-connected points of presence around the globe. They often route traffic through their own global/regional dark fiber networks to decrease latency.

If a Fanout customer buys the self-hosted package, obviously this won't be the case. But is it the case for your cloud service? How many points of presence do you have? Do you have an expansive network that you control, or are you yourself using a cloud provider?

We're an early stage startup, so some of our claims are aspirational. We're definitely not on the same level as Fastly today.

At present, we have one point of presence (California). We used to have another in Europe but we shut it down to save money. However we'll be reinstating it again next month. We're bootstrapped, so justifying the costs of expansion takes time. Long term we hope to have POPs everywhere.

We use cloud providers for our servers. Most of our workload is on Digital Ocean.

If you find the Fanout concept compelling, but have particular regional/latency requirements, it would be great to chat. Understanding demand may help us on the investment side, which may help us expand faster.

Hey, how is this different from a simple WebSockets implementation?

A couple of main differences:

1) Fanout can proxy WebSocket client events as a series of HTTP requests to the origin server. This means you can use a normal web backend or FaaS to manage the connections, which is even simpler than having to write a custom stateful server process.

2) Connections are delegated to Fanout, and we handle 1-to-many propagation, for high scalability.
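To make (1) concrete, here's a minimal sketch of what a stateless origin handler could look like under a GRIP-style "WebSocket-over-HTTP" framing (event name, optional hex content size, then content). The framing and function names here are illustrative, not Fanout's exact wire protocol:

```python
# Sketch of a stateless origin handling proxied WebSocket events.
# Assumes a GRIP-like event encoding: "NAME[ SIZE-HEX]\r\n[content\r\n]".

def decode_events(body: bytes):
    """Parse a proxied request body into (event_name, content) pairs."""
    events, pos = [], 0
    while pos < len(body):
        line_end = body.index(b"\r\n", pos)
        header = body[pos:line_end].decode()
        pos = line_end + 2
        if " " in header:
            name, size_hex = header.split(" ", 1)
            size = int(size_hex, 16)
            content = body[pos:pos + size]
            pos += size + 2  # skip content plus trailing CRLF
        else:
            name, content = header, b""
        events.append((name, content))
    return events

def encode_events(events) -> bytes:
    """Serialize (event_name, content) pairs back into a response body."""
    out = b""
    for name, content in events:
        if content:
            out += f"{name} {len(content):x}\r\n".encode() + content + b"\r\n"
        else:
            out += f"{name}\r\n".encode()
    return out

def handle_ws_over_http(request_body: bytes) -> bytes:
    """Echo handler: accept the connection, echo back any TEXT events.
    The origin keeps no connection state between requests."""
    out = []
    for name, content in decode_events(request_body):
        if name == "OPEN":
            out.append(("OPEN", b""))      # accept the connection
        elif name == "TEXT":
            out.append(("TEXT", content))  # echo the message
    return encode_events(out)
```

The point is that `handle_ws_over_http` is an ordinary request/response function, so it can live behind any stateless web framework or FaaS runtime while the proxy holds the actual WebSocket.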

Socket.io also proxies websocket events as HTTP if needed.

So is this basically socket.io as a service?

Interesting! Do you have a link to any info about this?

Edit: or, are you talking about HTTP fallback on the client side, when WebSockets aren't available? Fanout can do that too, but what I was talking about earlier was how Fanout can speak WebSockets to the client and HTTP to the server (like an inverted sockjs/engine.io).

No, the bit that is different about Fanout is that the HTTP servers don't necessarily need to know that the requests are coming from a persistent connection. The backend servers remain stateless HTTP servers.

Or MQTT over WebSockets?

Although we have not actually tested this, it should be possible for applications to make Fanout speak the MQTT over WebSockets protocol (and if this is not possible then I'd consider it a bug).

So it's not that the two are incompatible options necessarily, but that you could use Fanout to scale out such an implementation.

What is your own stack written in?

Edge server is C/C++ (based on Mongrel2, libcurl, and Qt), most everything else is Python. API is Django. We use ZeroMQ for internal messaging between processes/servers.

How do you compare it with PubNub?

PubNub is more of an end-to-end system. Clients knowingly connect to PubNub, and they can even publish messages to each other.

Fanout is primarily server-side tech. Clients don't necessarily have to know they are connecting to a Fanout-powered service, and only the server publishes messages.

Integrating with Fanout is a little more involved than PubNub, due to the need to set up an origin server, but the advantage is you own your API contract and avoid any lock-in. This can be particularly useful if your API is public.
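Since only the server publishes, the origin's job reduces to building a publish request against the proxy's control API. A sketch of that payload, using the item/formats JSON shape from the open source Pushpin project (the exact field set may differ per deployment):

```python
import json

def build_publish_payload(channel: str, message: str) -> str:
    """Build a Pushpin-style publish body: one item per channel, with
    per-transport renderings of the same message. Field names follow
    the open source Pushpin convention; treat them as illustrative."""
    item = {
        "channel": channel,
        "formats": {
            # rendering for clients attached via HTTP streaming
            "http-stream": {"content": message + "\n"},
            # rendering for clients attached via WebSocket
            "ws-message": {"content": message},
        },
    }
    return json.dumps({"items": [item]})
```

You would POST this to the control endpoint (e.g. Pushpin's local publish port); every connected client subscribed to the channel then receives the message, with the proxy handling the 1-to-many fanout.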

I'm a big fan of pushpin.

1. This landing page doesn't make the use cases clear. I think you need to show architecture comparisons that make them clear.

Ex: https://heartbeat.appbase.io/index.html

Btw appbase is a pushpin user right?

2. The pricing page is confusing. Ex: $4/million minutes along with $25 monthly. So does the $25 come with any minutes? Also, instead of charging for minutes/received messages/delivered messages, I suggest charging for just one, ideally 'delivered messages', and also bundling some with the $25 plan. Charging for all three feels like you are nickel-and-diming the customers.

3. Separate out public cloud hosting & on-premise and make the pricing transparent, i.e. offer people an easy way to integrate Fanout into their cloud infra. See here for an example - https://mlab.com/plans/pricing/#provider=amazon

4. The on-premise option should have transparent pricing, especially at this early stage. You are introducing friction into the process, and that's fine for a known product, but for a nascent one it's better to have standardised pricing.

5. Fanout Cloud (as it stands) seems like a weird product. Since you have only one PoP, the latency introduced is going to be huge. I think focusing on a public cloud offering like mlab is the way to go.

> I'm a big fan of pushpin.

Always good to hear. :)

> Btw appbase is a pushpin user right?


> The pricing page is confusing.

The paid plan doesn't come with any bundled minutes/messages, however we're considering changing this (e.g. up to 50,000 minutes/messages, then pure-usage cost above that).

It would be nice if we could abstract away the three metrics and only bill for a single metric, but I don't think it's economically workable. In practice, we pay a cost for all three, and they vary wildly by customer. Competitors generally account for all these metrics too (not an excuse, but...).

I do agree the pricing page could be more clear though. Even the current pricing is sometimes misinterpreted.
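For what it's worth, the numbers in this thread pencil out roughly like this. Only the $25 base fee and the $4/million-minutes rate come from the comments above; the two per-message rates are placeholders for illustration:

```python
def monthly_bill(minutes, msgs_received, msgs_delivered,
                 base=25.0, per_million_minutes=4.0,
                 per_million_received=4.0, per_million_delivered=4.0):
    """Estimate a monthly bill across the three usage metrics discussed.
    base and per_million_minutes come from the thread; the message
    rates are hypothetical defaults, not published pricing."""
    return (base
            + per_million_minutes * minutes / 1_000_000
            + per_million_received * msgs_received / 1_000_000
            + per_million_delivered * msgs_delivered / 1_000_000)
```

For example, 2 million connection-minutes with no messages would come to $25 + $8 = $33 under these assumptions.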

> On-premise option should have transparent pricing

I rarely see this elsewhere, but I can imagine how it may reduce friction. Worth considering.

> I think focusing on public cloud offering like mlab is the way to go

Thanks for the tip.

> I rarely see this elsewhere, but I can imagine how it may reduce friction. Worth considering.

Nginx Plus - https://www.nginx.com/products/buy-nginx-plus/

Github - https://enterprise.github.com/features#pricing

Bitbucket - https://www.atlassian.com/software/bitbucket/pricing?tab=sel...

Mattermost - https://about.mattermost.com/pricing/

You can have a "Contact sales" for a really big deployment but customers should have some idea about cost and ideally should be able to buy and try at a small scale.

Great examples. Thanks!

Awesome landing page, very interesting product. One question: is hosting it yourself free, or are you trying to sell support? I'd rather not contact your sales team to find out; all the 'install it on your own servers' links go to a contact-us page. Just my opinion, but it is quite off-putting.

I'd rather see some documentation than submit a form, like "look how simple it is to install" or "integrates with enterprise software x, y and z".

Thanks for this feedback! The short answer is that our edge server software is open source and can be found here: http://pushpin.org

This information is a little buried though. The main reason is because the rest of Fanout Cloud (clustering, panel UI, aggregated stats, etc) is not open source. So when we say to contact us about self-hosting, we mean about the entire commercial offering, which at this time requires a conversation.

I agree this could be presented better though, and our enterprise page should be a lot more than a bare contact form. Probably what we should do is explain about Pushpin and the rest of the available components above the form.

Ahh I see, I would definitely put that on your page, explaining the value add and what is free/not free.

FYI, if you leave the email and password blank on https://fanout.io/account/login/ you get sent to an Apache default Server Error (500) error message page. Should probably validate and make sure fields aren't empty.

Nice catch! Thanks

Nice. Some ideas for the landing page:

- You don't have any use cases. Good technical explanations for sure, but generally people "get" a product better/faster with some examples of what it can do for them (instead of "hybrid reverse proxy" stuff).

- On the surface this seems similar to lot of other real time messaging services. Clear "why this over those" or unique selling points is missing.

- The free cloud plan seems really stingy. 500 free messages per day is so low that it left a negative impression.

Thanks for the feedback!

We recently introduced the "hybrid reverse proxy" wording in order to hammer home our architectural differentiation. We could do a better job explaining why that's important, though.

The free plan is mainly designed to cover trial usage, as opposed to minimal/personal production workloads. Lots of SaaS/IaaS have time-limited trials so it could be worse. :) But we'll reconsider the current limits.

Yep, but I think "architectural differentiation" etc. often doesn't matter to customers directly. More important is what that architecture allows them to do, or how that architecture makes you better/faster/cheaper than your competitors.

For which use cases would this be handy? Online games, real-time sports data updates? It would be interesting to see an exposition of a project where this is used.

Sure. Some real world cases:

* Canadian Bitcoin Index (streaming API) - https://www.cbix.ca/api-streaming

* Appbase (Firebase-like service based on Elasticsearch) - http://docs.appbase.io/scalr/rest/intro.html#quick-start-to-...

We also have some simple examples:

* Chat: http://chat.fanoutapp.com

* Realtime to-do list: http://todo.fanoutapp.com

* Audio streaming: http://audiostream.fanoutapp.com

(code for the examples referenced from our open source project page http://pushpin.org/docs/examples/)

Not to be a negative Nancy, but do all things in software these days need to be a public service you use? Maybe I am completely overlooking something here, but it seems that most programmers, and most people responsible for software requiring such services, should have the ability to write and maintain their own reverse proxy for the services they provide.

I sympathize with your comment about the excessive level of "as-a-service" products out there.

However, I believe we are in a class of IaaS that is justifiably service-based, because there are performance benefits as we grow to being a worldwide service. So eventually it will be about more than just convenience or business model reasons.

The best analogy is a CDN. Fastly is useful even if you understand how to maintain Varnish. We're Fastly for push.

Considering they have open sourced the server they run (Pushpin), they are betting that people won't want to do this. Infrastructure is expensive, but benefits from economies of scale, so providing things as a service makes a lot of economic sense.

I signed up, but when logging in I am greeted with a 500 error :/

Yikes! Investigating and will contact you directly. Sorry about that.

How does this compare to Nginx? Is this something like reverse proxying webhooks (for real-time push)? There is a push stream module for Nginx that kind of works the same way.

Nginx is a general purpose webserver/proxy.

Fanout is meant for pushing data only. It uses a proxy-based mechanism to interface with the sender, but it's not a general purpose proxy like Nginx. You might even use Fanout and Nginx together, for example if you use Nginx as a load balancer in front of our open source version (Pushpin).

There is a pub/sub push module for Nginx, called Nchan. The main difference is that Fanout is fully transparent and can speak any API you define, whereas Nchan somewhat speaks its own protocol and may not be able to power arbitrary predefined APIs. That said, Nchan is far more flexible than most push systems out there, and you could probably get it to do Fanout-like things.

Yep, I thought of the same - a web server plus an abstraction layer containing Nginx+Fanout would be something I'd try. Looks pretty cool, will check it out. So there's Fanout Cloud, but do you have an on-prem version as well? We have 1000s of customers accessing millions of data records/sec via the APIs. This will definitely help us.

Site doesn't work from here

Hmm. Thanks for letting us know. Do you mind telling us where you are located?
