Beyond that, the only insidious behavior I've seen has been Outlook.com-hosted addresses accepting email but then not delivering it to the Inbox or Spam. Mysteriously, this issue disappears whenever I start going up the IT chain on the receiver's side. If the MTA is going to drop email, it should reject it rather than accept it (otherwise the Outlook MTAs are falsely accepting mail that they will not deliver).
Additionally, IIRC they forbid outbound SMTP on their Floating IP system (I can't find an official doc on this). Either way, you can't get a PTR record for your floating IP, which can harm deliverability.
For that reason, you may want to consider AWS and an Elastic IP that you can own, groom, and move around.
It's been 5 years since then, and they still have no plans to do IPv6 properly and allocate /64s.
This is the main reason I switched to Linode, who will happily allocate you an IPv6 range (/116, /64, /56) that can be rerouted between VPSes with a simple ticket (takes <1 hour).
I wish I'd seen this literally yesterday.
I can share a polar opposite experience: Yahoo and Outlook would categorically reject anything from DO at the SMTP level with a 502. Luckily, at least they were explicit about it. But no leeway, not even accepting the envelope body.
So YMMV, but it will almost certainly be poor :)
You control the way you authenticate users, how to run spam filtering, etc.
I run my own on Linode with postfix/dovecot/rspamd linked to LDAP for auth, routed outbound through SES (probably a few cents per month), and it's working well.
These days it's not easy to avoid being flagged as spam when running your own node, because you look like any other spammer until you prove you're not one, which takes time. It's usually not the best idea; you'd rather use an external relay who will provide you with a better reputation from day one.
If you stick with your own delivery route and friends tell you your mail isn't arriving because it gets flagged as spam, that's your fault.
We are now flagging all mail coming from their IP space as junk, but without blocking it, because there are (very) occasional legit senders there.
I'm wondering whether they are actually spamming Gmail users, or whether Gmail and similar large providers are biased against new domain registrations, since people starting new mail servers often have new domains.
Or maybe they don't do the basics like working DKIM, SPF, TLS, etc., and they get penalized for it.
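On the SPF side, those basics start with publishing a sane TXT record. A minimal sketch (not a real RFC 7208 parser; the record string is invented) that splits a record into its mechanisms and final catch-all policy:

```python
def parse_spf(txt_record: str) -> dict:
    """Very rough SPF sanity check: split a TXT record into its
    mechanisms and the final catch-all qualifier. Not a real parser."""
    parts = txt_record.split()
    if parts[0] != "v=spf1":
        raise ValueError("not an SPF record")
    return {
        "mechanisms": parts[1:-1],  # e.g. ip4:..., include:...
        "policy": parts[-1],        # '-all' = hard fail, '~all' = soft fail
    }

# hypothetical record for a server relaying outbound mail through SES
parsed = parse_spf("v=spf1 ip4:203.0.113.7 include:amazonses.com -all")
```

A record ending in `-all` tells receivers to hard-fail anything not listed, which is what big providers expect from a serious sender.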
Node.js and more secure by definition? That sounds _very_ strange to me...
* written in a memory-safe language
* no special permissions needed (besides binding to privileged ports at start, but there are workarounds), so no need to chown or spawn workers under different user IDs
* no file access for the running daemon, once the daemon starts then it does not read nor write to the local file system
* no spawning shell commands
Each of these things is a target for a different attack vector.
One of Maildir's design goals was to make it safer to use on networked file systems such as NFS (compared to mbox).
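That NFS safety comes from Maildir's delivery scheme: a message is written into tmp/ under a unique name and then atomically rename()d into new/, so no locking is needed and readers never see a half-written file. A rough sketch of the idea (filenames simplified; real Maildir names carry more uniqueness fields):

```python
import os
import socket
import tempfile
import time

def maildir_deliver(maildir: str, message: bytes) -> str:
    """Write to tmp/ first, then rename into new/. The rename is the
    atomic 'commit', so readers never observe a partial message."""
    unique = f"{time.time():.0f}.P{os.getpid()}.{socket.gethostname()}"
    tmp_path = os.path.join(maildir, "tmp", unique)
    with open(tmp_path, "wb") as f:
        f.write(message)
        f.flush()
        os.fsync(f.fileno())  # make sure the bytes hit disk before the commit
    new_path = os.path.join(maildir, "new", unique)
    os.rename(tmp_path, new_path)
    return new_path

# throwaway Maildir for demonstration
root = tempfile.mkdtemp()
for sub in ("tmp", "new", "cur"):
    os.makedirs(os.path.join(root, sub))
delivered = maildir_deliver(root, b"Subject: hi\r\n\r\nhello\r\n")
```

Contrast with mbox, where every delivery appends to one shared file and therefore needs locking, which is exactly what is unreliable over NFS.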
I'd rather ask: why do they store their binaries in MongoDB? IMHO, S3 would be the perfect fit for that requirement.
If there was a more modern all-in-one-binary solution handling MTA/DKIM/IMAP/webmail with sane defaults I would maybe go back to self hosting.
Sounds like a very wrong idea there. One mistake in any part of it can crash or breach the entire stack. There's a reason Postfix runs many small binaries.
How do I know if I want news and updates from Helm if I can't even find out what Helm does?
Instead WildDuck has its own simple REST API to build email clients against (https://api.wildduck.email/). It is not standard by any means and there is no wide client support, but it is really easy to integrate with other projects, as it is basically just a wrapper around database access, just like you would access blog posts or whatever via an API.
I work on FastMail’s web interface, which is now based on JMAP. Here are my opinions on the matter.
What papaf says is flatly not true. What andris9 says is generally unreasonable.
At its core, the API part of JMAP is an RPC framework, with defined semantics for object synchronisation and querying. The most important part of it is those semantics, and they are necessarily more complex than what web developers are generally used to. Simple traditional-REST APIs or the likes of GraphQL lack those object synchronisation semantics altogether, and are wildly inferior for practical email clients. There is just no comparison at all.
There is more to JMAP than just the API calls, though. What I think is the most obviously important part is that it also defines a push mechanism, including the ability for email clients to be notified of changes; this is tied into the object synchronisation semantics with the ability to ask questions like “what changed between these two states?”—so that your email client (be it on the web or not) can be told “something changed in the emails” and efficiently fetch the delta and apply it to the user interface and any persistent storage you may use. This is something that the shown WildDuck API does not appear to support at all, and is in fact something entirely unsupportable on such a style of API. These things are hard to do properly, in a way that will give any semblance of performance or efficiency.
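For the curious, the "what changed between these two states?" question looks roughly like this on the wire (the accountId and state token here are made up; see RFC 8620/8621 for the real shapes):

```python
import json

request = {
    "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
    "methodCalls": [
        # "tell me every Email created/updated/destroyed since this state"
        ["Email/changes",
         {"accountId": "a1", "sinceState": "state-1234", "maxChanges": 50},
         "c0"],
    ],
}
body = json.dumps(request)
```

The server replies with lists of created/updated/destroyed ids plus a newState token, which the client stores and passes as sinceState on the next delta fetch.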
(It looks like WildDuck’s web interface does have some way arranged so that changes get pushed to the interface, but that doesn’t appear to be part of its public API, and I can’t imagine that it’s actually a good, flexible and efficient solution. I haven’t looked into how it is implemented at all, however. I’m open to discussion of what I’m confident will be its shortcomings, if you don’t like the aspersions I am casting.)
Now to return to papaf’s remark: JMAP’s method batching is not achievable with HTTP/2 or GraphQL; the key difference is backreferences, whereby you can establish data dependencies between method calls, so that you pass part of one method call as an argument to subsequent method calls. To pick a common example of something that the FastMail web interface does, this allows you to express in one HTTP request the following: “find the first ten emails in Inbox, collapsing threads (so that you get one message from each thread only); then, given those, get the threads that they are in; then, given those threads, get the basic details of the emails that are contained in those threads.” In the absence of backreferences, you would need probably three round trips, or a special case in the API that limits flexibility.
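The Inbox example above can be written as one JMAP request; the `#ids` arguments are the backreferences (the accountId and mailbox id are hypothetical):

```python
import json

request = {
    "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
    "methodCalls": [
        # 1. first ten emails in Inbox, one per thread
        ["Email/query",
         {"accountId": "a1",
          "filter": {"inMailbox": "mailbox-inbox"},
          "collapseThreads": True,
          "position": 0, "limit": 10},
         "q"],
        # 2. resolve those ids to their thread ids (backreference to "q")
        ["Email/get",
         {"accountId": "a1",
          "#ids": {"resultOf": "q", "name": "Email/query", "path": "/ids"},
          "properties": ["threadId"]},
         "e1"],
        # 3. fetch the threads themselves (backreference to "e1")
        ["Thread/get",
         {"accountId": "a1",
          "#ids": {"resultOf": "e1", "name": "Email/get",
                   "path": "/list/*/threadId"}},
         "t"],
        # 4. basic details of every email in those threads
        ["Email/get",
         {"accountId": "a1",
          "#ids": {"resultOf": "t", "name": "Thread/get",
                   "path": "/list/*/emailIds"},
          "properties": ["subject", "from", "receivedAt"]},
         "e2"],
    ],
}
payload = json.dumps(request)
```

Each `#ids` tells the server to substitute part of an earlier method's response, which is exactly the data dependency that plain REST or GraphQL round trips can't express in a single request.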
I will add that JMAP can be usefully combined with HTTP/2 so that you can issue multiple batches of requests simultaneously and use the results as they return.
Please read through all the content on the front page of https://jmap.io/. It provides good justification for why quite a few things are how they are.
Remember this most important fact: JMAP is an object synchronisation protocol.
I’ve had one look at it and while the protocol looks really good, it also looks like I’d rather not implement it more than once ;)
Have you considered supplementing Mongo with block storage like S3, though? I've found it to be a pretty much perfect match for storing immutable mail messages.
The Russian inside me chuckled.
For the non-multilingual (like myself).
Two questions that I have:
1. Is it possible to plug it into a MySQL DB instead of Mongo? Self-hosted apps rely heavily on SQL databases, so SQL support would feel more natural.
2. Is there a way to migrate existing postfix/dovecot mail server to this if I end up liking it?
2. Migrating is possible only by syncing via IMAP. There is a Maildir importer but it is not open sourced (yet).
As for inbound spam, I can use RBL's but the most effective thing has been to block email servers that have rDNS that is either mismatched or nonexistent. I also host at a provider that isn't known for having their IP's on the "forever banned" lists.
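That rDNS policy is essentially a forward-confirmed reverse DNS check. A sketch of the rule (the lookups are injectable so the example runs without network access; the stub data is invented):

```python
import socket

def fcrdns_ok(ip: str,
              ptr_lookup=lambda ip: socket.gethostbyaddr(ip)[0],
              fwd_lookup=lambda host: socket.gethostbyname_ex(host)[2]) -> bool:
    """Accept only if the IP has a PTR record AND that hostname
    resolves back to the same IP (forward-confirmed reverse DNS)."""
    try:
        return ip in fwd_lookup(ptr_lookup(ip))
    except (OSError, KeyError):  # no PTR / NXDOMAIN (or missing stub entry)
        return False

# stub DNS standing in for real lookups
fake_ptr = {"203.0.113.7": "mail.example.com"}
fake_a = {"mail.example.com": ["203.0.113.7"]}
matched = fcrdns_ok("203.0.113.7",
                    ptr_lookup=fake_ptr.__getitem__,
                    fwd_lookup=fake_a.__getitem__)
missing = fcrdns_ok("198.51.100.9",
                    ptr_lookup=fake_ptr.__getitem__,
                    fwd_lookup=fake_a.__getitem__)
```

With the default arguments the same function does live lookups via the stdlib resolver, which is roughly what an MTA's reject_unknown_reverse_client_hostname-style check does.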
This is also a recipe for never receiving mail from the SMB you do business with. (The same is true for enforcing SPF hard fails, unfortunately.)
If you really want to be smart about this, use an SMTP soft error (greylisting) or an SMTP greeting pause. This effectively singles out spam bots, while leaving your regular mail traffic unaffected.
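A greylist is only a few lines of state: temporarily reject the first attempt for each (client IP, sender, recipient) triplet, and accept retries after a delay. A minimal sketch (the delay and reply strings are illustrative):

```python
import time

class Greylist:
    """Temporarily reject (SMTP 450) the first attempt for each
    (client IP, MAIL FROM, RCPT TO) triplet; accept retries after
    `delay` seconds. Real MTAs retry; most spam bots never do."""

    def __init__(self, delay=300.0):
        self.delay = delay
        self.first_seen = {}  # triplet -> timestamp of first attempt

    def check(self, ip, mail_from, rcpt_to, now=None):
        now = time.time() if now is None else now
        key = (ip, mail_from, rcpt_to)
        first = self.first_seen.setdefault(key, now)
        if now - first < self.delay:
            return "450 4.7.1 greylisted, try again later"
        return "250 OK"

gl = Greylist(delay=300)
first_reply = gl.check("203.0.113.7", "a@example.com", "me@example.net", now=0)
retry_reply = gl.check("203.0.113.7", "a@example.com", "me@example.net", now=600)
```

Because 450 is a soft error, a compliant MTA queues the message and retries later, so legitimate mail is only delayed, never lost.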
It was a great learning experience, and I got to see a lot of volume. Sadly I never had enough customers to make the jump to the big-leagues, but it was fun.
(I'd do proper SMTP-time rejection, but also archive rejected mails in a database for 30 days, letting me browse and search through the rejected stuff - just in case I made a mistake, and also to see how well I was doing.)
Even then content-scanning was not the best, but I used a lot of heuristics and I was quite proud of some of them. For example, I "defeated" fast-flux hosts via a heuristic that looked up the number of addresses for a domain. Even large domains such as Gmail only have 1-4 address records, though of course they're anycast and multiplexed. If you got a mail from a domain with 7+ MX records it was 99% spam.
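The fast-flux heuristic described above boils down to counting records (the threshold is the one from the comment; the host names are invented):

```python
def looks_fast_flux(mx_hosts):
    """Heuristic: legitimate domains publish only a handful of MX
    records, so 7 or more is treated as a strong spam signal."""
    return len(mx_hosts) >= 7

flux = looks_fast_flux([f"mx{i}.fluxnet.invalid" for i in range(9)])
normal = looks_fast_flux(["aspmx.l.google.com", "alt1.aspmx.l.google.com"])
```

Fast-flux networks rotate large pools of compromised hosts behind one domain, which is why the record count balloons in a way legitimate senders' DNS almost never does.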
At one point I documented damn near everything, after I'd shut down, but it seems I let the domains I used expire. I should see if I can dig out copies.
How does this work exactly? Mongo would have to touch a FS at the end of the day, no?
Is there a way to configure catch-all addresses for domains?