Fanout founder here. Though we've been around for years, we are excited to finally submit our root URL on HN!
If the name seems familiar, it could be because you've seen our blog posts or the Pushpin open source project. Fanout is the company behind Pushpin, and we monetize it as part of a cloud service, effectively a "push CDN". Our goal is to become the Akamai of data push.
For some history, here are the other two Fanout submissions that have been made:
https://news.ycombinator.com/item?id=7567417 (TechCrunch article when we went public beta, 2014)
https://news.ycombinator.com/item?id=3701703 (Tell HN, no URL, 2012)
As a Show HN, this post is about everything we've built since that ancient Tell HN. We hope you find Fanout intriguing and we look forward to any and all feedback.
If you want to get right into it, here's the dev guide: https://fanout.io/docs/devguide.html
CDNs have many well-connected points of presence around the globe. They often route traffic through their own global/regional dark fiber networks to decrease latency.
If a Fanout customer buys the self-hosted package, obviously this won't be the case. But is it the case for your cloud service? How many points of presence do you have? Do you have an expansive network that you control, or are you yourself using a cloud provider?
At present, we have one point of presence (California). We used to have another in Europe but we shut it down to save money. However, we'll be reinstating it next month. We're bootstrapped, so justifying the costs of expansion takes time. Long term we hope to have POPs everywhere.
We use cloud providers for our servers. Most of our workload is on Digital Ocean.
If you find the Fanout concept compelling, but have particular regional/latency requirements, it would be great to chat. Understanding demand may help us on the investment side, which may help us expand faster.
1) Fanout can proxy WebSocket client events as a series of HTTP requests to the origin server. This means you can use a normal web backend or FaaS to manage the connections, which is even simpler than having to write a custom stateful server process.
2) Connections are delegated to Fanout, and we handle 1-to-many propagation, for high scalability.
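To make point 1 concrete, here's a rough sketch of the event encoding an origin might receive in Pushpin's WebSocket-over-HTTP mode, as I understand it: each client event arrives in the request body as a type line (with a hex-encoded content size for events that carry content), terminated by \r\n. The helper name below is mine, and the details should be checked against the actual protocol docs:

```python
def parse_events(body: bytes):
    """Parse WebSocket-over-HTTP events: TYPE [hex-size]\r\n[content\r\n]."""
    events = []
    pos = 0
    while pos < len(body):
        end = body.index(b"\r\n", pos)
        header = body[pos:end].decode()
        pos = end + 2
        if " " in header:
            # Event with content, e.g. "TEXT 5" followed by 5 bytes + \r\n
            etype, size_hex = header.split(" ", 1)
            size = int(size_hex, 16)
            content = body[pos:pos + size]
            pos += size + 2  # skip content and its trailing \r\n
        else:
            # Bare event, e.g. "OPEN" or "CLOSE"
            etype, content = header, b""
        events.append((etype, content))
    return events
```

The appeal is that a stateless request handler can process `OPEN`/`TEXT`/`CLOSE` events one HTTP request at a time, while Fanout holds the actual WebSocket.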
So is this basically socket.io as a service?
Edit: or, are you talking about HTTP fallback on the client side, when WebSockets aren't available? Fanout can do that too, but what I was talking about earlier was how Fanout can speak WebSockets to the client and HTTP to the server (like an inverted sockjs/engine.io).
So it's not that the two are incompatible options necessarily, but that you could use Fanout to scale out such an implementation.
Fanout is primarily server-side tech. Clients don't necessarily have to know they are connecting to a Fanout-powered service, and only the server publishes messages.
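For a rough idea of what server-side publishing looks like: a GRIP-style publish item names a channel and supplies content per transport format. The sketch below is my approximation of the payload shape (field names should be verified against the publish API docs):

```python
import json

def build_publish_payload(channel: str, data: str) -> str:
    """Build a GRIP-style publish payload: one item, fanned out to
    subscribers in whatever transport format their connection uses."""
    item = {
        "channel": channel,
        "formats": {
            "http-stream": {"content": data + "\n"},  # for held HTTP streams
            "ws-message": {"content": data},          # for WebSocket clients
        },
    }
    return json.dumps({"items": [item]})
```

The server POSTs something like this to the publish endpoint; clients never see the publish side at all, which is why they don't need to know Fanout is involved.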
Integrating with Fanout is a little more involved than PubNub, due to the need to set up an origin server, but the advantage is you own your API contract and avoid any lock-in. This can be particularly useful if your API is public.
1. The landing page doesn't make the use cases clear. I think you need to show architecture comparisons that make them clear.
Btw appbase is a pushpin user right?
2. The pricing page is confusing. Ex: $4/million minutes along with $25 monthly. So does the $25 come with any minutes? Also, instead of charging for minutes/received messages/delivered messages, I suggest charging for just one, ideally 'delivered messages', and bundling some with the $25 plan. Charging for all three feels like you're nickel-and-diming the customers.
3. Separate out public cloud hosting & on-premise and make the pricing transparent, i.e. offer people an easy way to integrate Fanout into their cloud infra. See here for an example - https://mlab.com/plans/pricing/#provider=amazon
4. The on-premise option should have transparent pricing, especially at this early stage. You are introducing friction into the process, and that's fine for a known product, but for a nascent one it's better to have standardised pricing.
5. Fanout Cloud (as it stands) seems like a weird product. Since you have only one PoP, the latency introduced is going to be huge. I think focusing on a public cloud offering like mlab is the way to go.
Always good to hear. :)
> Btw appbase is a pushpin user right?
> The pricing page is confusing.
The paid plan doesn't come with any bundled minutes/messages; however, we're considering changing this (e.g. include up to 50,000 minutes/messages, with pure usage-based cost above that).
It would be nice if we could abstract away the three metrics and only bill for a single metric, but I don't think it's economically workable. In practice, we pay a cost for all three, and they vary wildly by customer. Competitors generally account for all these metrics too (not an excuse, but...).
I do agree the pricing page could be more clear though. Even the current pricing is sometimes misinterpreted.
> On-premise option should have transparent pricing
I rarely see this elsewhere, but I can imagine how it may reduce friction. Worth considering.
> I think focusing on public cloud offering like mlab is the way to go
Thanks for the tip.
Nginx Plus - https://www.nginx.com/products/buy-nginx-plus/
Github - https://enterprise.github.com/features#pricing
Bitbucket - https://www.atlassian.com/software/bitbucket/pricing?tab=sel...
Mattermost - https://about.mattermost.com/pricing/
You can have a "Contact sales" for a really big deployment but customers should have some idea about cost and ideally should be able to buy and try at a small scale.
I'd rather see some documentation than submit a form, like "look how simple it is to install" or "integrates with enterprise software x, y and z".
This information is a little buried though. The main reason is that the rest of Fanout Cloud (clustering, panel UI, aggregated stats, etc.) is not open source. So when we say to contact us about self-hosting, we mean about the entire commercial offering, which at this time requires a conversation.
I agree this could be presented better though, and our enterprise page should be a lot more than a bare contact form. Probably what we should do is explain Pushpin and the rest of the available components above the form.
- You don't have any use cases. Good technical explanations for sure, but generally people "get" a product better/faster with some examples of what it can do for them (instead of "hybrid reverse proxy" stuff).
- On the surface this seems similar to a lot of other real-time messaging services. A clear "why this over those" or unique selling point is missing.
- The free cloud plan seems really stingy. 500 free messages per day is so low that it left a negative impression.
We recently introduced the "hybrid reverse proxy" wording in order to hammer home our architectural differentiation. We could do a better job explaining why that's important, though.
The free plan is mainly designed to cover trial usage, as opposed to minimal/personal production workloads. Lots of SaaS/IaaS have time-limited trials, so it could be worse. :) But we'll reconsider the current limits.
* Canadian Bitcoin Index (streaming API) - https://www.cbix.ca/api-streaming
* Appbase (Firebase-like service based on Elasticsearch) - http://docs.appbase.io/scalr/rest/intro.html#quick-start-to-...
We also have some simple examples:
* Chat: http://chat.fanoutapp.com
* Realtime to-do list: http://todo.fanoutapp.com
* Audio streaming: http://audiostream.fanoutapp.com
(code for the examples referenced from our open source project page http://pushpin.org/docs/examples/)
However, I believe we are in a class of IaaS that is justifiably service-based, because there are performance benefits as we grow into a worldwide service. So eventually it will be about more than just convenience or business model reasons.
The best analogy is a CDN. Fastly is useful even if you understand how to maintain Varnish. We're Fastly for push.
Fanout is meant for pushing data only. It uses a proxy-based mechanism to interface with the sender, but it's not a general purpose proxy like Nginx. You might even use Fanout and Nginx together, for example if you use Nginx as a load balancer in front of our open source version (Pushpin).
Nginx does have a third-party push module, called Nchan. The main difference is that Fanout is fully transparent and can speak any API you define, whereas Nchan somewhat speaks its own protocol and may not be able to power arbitrary predefined APIs. That said, Nchan is far more flexible than most push systems out there, and you can probably get it to do Fanout-like things.
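To illustrate the "speak any API you define" point: with GRIP, the origin stays a plain HTTP service and just decorates an ordinary response with instruct headers telling the proxy to hold the connection open and subscribe it to a channel. A minimal sketch, assuming the Grip-Hold/Grip-Channel header names from the GRIP convention (the function itself is hypothetical):

```python
def stream_response(channel: str) -> dict:
    """Build an origin response that tells a GRIP proxy (Pushpin/Fanout)
    to hold this HTTP connection open as a stream on `channel`."""
    return {
        "status": 200,
        "headers": {
            "Grip-Hold": "stream",       # hold the connection as a stream
            "Grip-Channel": channel,     # subscribe it to this channel
            "Content-Type": "text/plain",
        },
        # Sent immediately; later published data is injected by the proxy
        "body": "[stream open]\n",
    }
```

Because the hold instructions ride on a normal response, the client-facing URL and payloads remain entirely under your control, which is the transparency contrast being drawn with Nchan.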