
Show HN: Flute – my attempt at a better email API - igammarays
https://www.flutemail.com/
======
srhyne
Excited to check this out! Great work igammarays. I really like how you split
providers by FROM address.

------
igammarays
Hey HN, founder here. I’ve been working on Flute for just over a year now. I
got this idea after being utterly fed up with deliverability problems and the
lack of easily searchable logs on Mailgun, SendGrid, Amazon SES, etc. (which
one do you use?). These established providers have been doing email for almost
a decade, though, so I can’t hope to beat them at the delivery game: SMTP is
an ancient protocol with so many pitfalls and unwritten rules that it left me
aghast. So instead of trying to build another SMTP relay, I tried to solve the
problem by decoupling your email API from the underlying servers and
providers. I call these decoupled APIs "flutes": think of the air blowing in
as your firehose of API requests, and the fingers of the flutist routing your
email harmoniously through different servers. I propose these "virtual flutes"
as a better way to manage transactional email in a load-balanced,
provider-agnostic, failover-native, hot-swappable fashion, with exportable
logs and full-text Elasticsearch search over your data. The primary goal of
Flute Mail is better deliverability: our secret sauce is making it easier to
do proper segmentation and to have more control over your email routing.

"Configurable routing" means your API can be configured to route your emails
through a different provider or FROM address on the fly (or even to
load-balance them across multiple free-tier offerings to save money). I’m very
conscious of reliability, so Flute was designed from the ground up to handle
all sorts of failure modes [1]: MTA spam bounces [2], mistaken IP/domain
blocks, server downtime, queuing delays, and other sporadic problems that
happen surprisingly often with email delivery APIs. I call this configurable
multi-provider setup "Smart Failover".
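
To make the failover idea concrete, here is a rough sketch (not Flute's
actual API — the provider names, function signatures, and error strings below
are all invented for illustration): try each configured provider in order and
fall through to the next one on failure.

```python
class SendError(Exception):
    """Raised when a provider rejects or fails to accept a message."""

def send_via_provider_a(message):
    # Simulate an RBL-style bounce like the one in footnote [2].
    raise SendError("550 5.7.1 RBL Sender blocked IP or domain")

def send_via_provider_b(message):
    # Simulate a successful hand-off to the second provider.
    return "queued"

def send_with_failover(message, providers):
    """Try each (name, send_fn) pair in order; return the first success."""
    errors = []
    for name, send in providers:
        try:
            return name, send(message)
        except SendError as exc:
            errors.append((name, str(exc)))  # record and fall through
    raise SendError(f"all providers failed: {errors}")

providers = [("provider_a", send_via_provider_a),
             ("provider_b", send_via_provider_b)]
winner, status = send_with_failover({"to": "user@example.com"}, providers)
# winner == "provider_b", status == "queued"
```

The real thing would obviously need backoff, bounce classification, and
idempotency keys so retries don't double-send, but the control flow is the
same.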

Another unique feature of Flute is that we aggregate your email history and
delivery metadata in an Elasticsearch cluster for fast analysis and easy
customer service follow-ups. If you’re concerned about a third party storing
your data, we offer the ability to interface directly with your own
Elasticsearch cluster, keeping the data off our servers. Email data can be
highly sensitive, so I believe it should be kept on servers you control.
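
For a flavor of what searching that history could look like, here is a
hypothetical full-text query in the Elasticsearch query DSL; the index name
and field names are assumptions for illustration, not Flute's actual schema.

```python
def build_email_search(text, recipient=None):
    """Full-text match on subject/body, optionally filtered by recipient."""
    must = [{"multi_match": {"query": text, "fields": ["subject", "body"]}}]
    if recipient:
        # Exact-match filter on the (assumed) keyword sub-field.
        must.append({"term": {"to.keyword": recipient}})
    return {"query": {"bool": {"must": must}}}

query = build_email_search("password reset", recipient="user@example.com")
# With the official Python client this would be passed along roughly as:
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch("https://your-cluster:9200")
#   es.search(index="emails", body=query)
```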

The business model I’m experimenting with is to offer all of these features
for free to developers and small teams, because I know what a pain it can be
to choose a good email API. I plan to monetize this service by charging
higher-volume users.

Please let me know what you think!

[1] https://status.sparkpost.com/incidents/ldv2gdc96hx5

[2] https://mailchannels.zendesk.com/hc/en-us/articles/202191674-Fixing-the-550-5-7-1-RBL-Sender-blocked-IP-or-domain-error

~~~
notheguyouthink
Disclaimer: I don't know email.

I am immensely curious how you're achieving better delivery rates - I'm
browsing through the site now trying to understand this. I ask because, at
the end of the day, I thought any failures reflected poorly on the provider
(SendGrid/etc.), which puts the provider and the customer in a sort of war.

> The business model I’m experimenting with is to offer all of these features
> for free to developers and small teams, because I know what a pain it can be
> to choose a good email API. I plan to monetize this service by charging
> bigger volume users.

These plans scare me, fwiw. I appreciate trying to offer an affordable service
to small teams where money is tight, but for small businesses like mine, our
needs are often too big for the small tiers but too small to get value out of
the next bigger plans. Many online services are _expensive_ for us, despite us
having revenue. Our margins are thin, I guess. This isn't a critique; I'm just
trying to advocate, I guess, for a pricing model that fairly distributes the
pricing load, since we _(my company)_ always seem to get caught in the middle.

Appreciate your work here, and I hope I find better understanding to
eventually use the product!

~~~
igammarays
It's not as adversarial as it seems. I should clarify: we achieve better
deliverability through a combination of 3 techniques:

1. Making it easier to properly segment different kinds of emails (this is
the #1 tip you'll find in all authoritative sources on email performance,
including Gmail's official guidelines for bulk senders)

2. Allowing you to dynamically adjust FROM addresses and sender IPs to
isolate domain/reputation issues.

3. Catching provider failures and retrying (Smart Failover).

Only #3 reflects badly on the provider, "putting them at war" as you say. But
the majority of our deliverability benefits actually come from the first 2
techniques, which aren't really fighting the provider, just layering better
tools on top of it.
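
For illustration, technique #1 (segmentation) can be as simple as a routing
table keyed by email category, so one stream's reputation can't sink the
other's; the category names, addresses, and provider labels here are made up,
not Flute's actual configuration.

```python
# Hypothetical routing table: each kind of email gets its own FROM
# address (ideally its own subdomain) and its own underlying provider.
ROUTES = {
    "transactional": {"from": "no-reply@app.example.com", "provider": "provider_a"},
    "marketing":     {"from": "news@mail.example.com",    "provider": "provider_b"},
}

def route_for(category):
    """Pick the FROM address and provider for a given kind of email."""
    # Unknown categories fall back to the safest (transactional) route.
    return ROUTES.get(category, ROUTES["transactional"])
```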

And thanks for the feedback about pricing. To be honest, my current pricing is
a shot in the dark. We're going to have to figure out a better model as we go.
Do you have any specific suggestions? Perhaps tying pricing to number of
contacts/users instead of sheer volume?

~~~
notheguyouthink
> Do you have any specific suggestions? Perhaps tying pricing to number of
> contacts/users instead of sheer volume?

No specific suggestions, unfortunately. In our experience, I at least
preferred when a service was $X/unit across all tiers... that is to say, we
could either afford to use the service or not. I love those services; the
cost is clear and on paper. So to your comment, I actually like sheer volume.
I'm paying for units of your service, and I like that.

However, the services I complained about above were not $X/unit; they were
$X/unit until Y units and then $Z/unit - a non-linear approach to pricing.
They can also feel more expensive, as the paying users prop up the free
users. Though this may be purely speculation.

Sometimes this meant better bulk rates, where it got cheaper the more you
spent. Other times it got _worse_, where we were seen as a bigger business
because our needs were greater, and so we effectively paid more for less.
Technically you usually get more features when you pay more, but with unit
services like email, messages, w/e, we weren't upgrading because we wanted
more features; we were upgrading because we wanted more units. So the pricing
per unit got worse.
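
To make the non-linear effect concrete with entirely made-up numbers (integer
dollars per 1,000 emails, so the arithmetic stays exact):

```python
def flat_cost(thousands, rate_per_k):
    """Flat pricing: the same rate per 1,000 emails at any volume."""
    return thousands * rate_per_k

def tiered_cost(thousands, rate_low, threshold, rate_high):
    """rate_low per 1,000 up to the threshold, rate_high per 1,000 beyond."""
    if thousands <= threshold:
        return thousands * rate_low
    return threshold * rate_low + (thousands - threshold) * rate_high

# A mid-sized sender (150k emails/month) under a tier that gets *worse*
# past 100k emails:
flat = flat_cost(150, 1)                  # $1/1k for everything -> $150
tiered = tiered_cost(150, 1, 100, 2)      # $1/1k to 100k, $2/1k after -> $200
```

Under the worse-past-threshold tier, the mid-sized sender pays $200 instead of
$150 for the same 150k emails, which is exactly the squeeze described above.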

I'm not advocating you change _anything_ to be clear, you do what's best for
your company - obviously. My comment was simply giving our side of the story
on payment, and how non-linear models often put us in weird positions. Also,
our story is probably atypical.

~~~
igammarays
That makes sense. I'm leaning towards segmenting my plans in a hybrid way that
restricts advanced features to higher plans, but charges a fixed $X/unit for
all tiers. That way you can pick a plan based on the features you need, while
still being able to easily calculate whether $X/unit makes sense for your
margin. Like you said, if you upgrade a plan, you rightfully expect more
features.

The reason we can't do a flat $X/unit alone is that we are not an email API;
rather, we see ourselves as a booster pack for email APIs. We don't sell
emails, we sell email delivery optimization tools. Still, the volume of
emails that flows through our infrastructure really matters. Plus, some of
our features, like the Elasticsearch integration, are really expensive for us
to support, so it makes sense to restrict those features to higher-tier
customers.

Thanks again, super helpful comments.

------
BartBoch
This looks amazing. And the free tier is very generous as well. I wish you
good luck with this project! I will consider using it in the near future.

------
gitgud
Beautifully designed website!

~~~
igammarays
Thank you! Interestingly enough, my partners and advisors did not agree to
spend the chunk of change (~$2000) the web designer quoted, so I had to pay
for it mostly out of my own pocket instead of the small group fund we had.

