I've been a happy customer. Lately I flirted with going back to GSuite for my personal email, but after a trial I realized that Gmail does many things well, except for being a good email service. So I went back to FastMail and renewed for another two years.
Seeing this new protocol is exciting, because JMAP is being standardized at IETF. A breath of fresh air to see a new standard being developed.
Also, from what I understand, JMAP should be friendly for mobile usage. They kept notifications out of it; you're supposed to implement notifications using whatever the mobile platform provides. Interacting with JMAP happens via plain HTTP requests, which is super cool.
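To sketch what "plain HTTP requests" means here: the client POSTs a JSON body containing a batch of method calls. The method names below follow the JMAP spec, but the account id, endpoint, and values are illustrative placeholders:

```python
import json

# A minimal JMAP request body: multiple method calls batched into a
# single HTTP POST. "a1" is a hypothetical account id; a real client
# gets it from the server's session resource.
request_body = {
    "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
    "methodCalls": [
        # Fetch the mailbox list...
        ["Mailbox/get", {"accountId": "a1", "ids": None}, "c0"],
        # ...and query the ten newest emails, in the same round trip.
        ["Email/query", {"accountId": "a1", "limit": 10}, "c1"],
    ],
}

# POSTing this (Content-Type: application/json) to the server's API
# endpoint yields a matching "methodResponses" array.
payload = json.dumps(request_body).encode("utf-8")
print(len(request_body["methodCalls"]))  # 2 calls, 1 round trip
```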
I can totally see myself implementing a simple email client for automating online services. For example if you implement a commenting system for a website, you might want to do replies by email. That would be a cool project for me to try out.
I wonder if FastMail exposes JMAP publicly yet. Haven't seen any mentions in their admin or docs thus far.
The worst part is that I think Fastmail is aware of it and just doesn't care (I believe that's why they mark their own emails with a green tick and text). I understand that email has never been really authenticated, but this just throws any trust I had in Fastmail out the window.
I will be evaluating other mail hosts at the end of my subscription.
SPF has nothing to do with the From header. And the DKIM signature does not have to match the sender's domain; the signature can be that of any domain. This means that, for practical purposes, anybody can send spoofed emails. That an email is signed with DKIM doesn't mean much: DKIM is meant to build a web of trust between servers, but it is otherwise useless for the users themselves.
They wrote a blog post about how SPF/DKIM work: https://fastmail.blog/2016/12/24/spf-dkim-dmarc/
If you want to let people know which emails are from you, the From address is a very weak signal. This is because, according to the email standards, the From/To headers tell you nothing about the actual source and destination of the message. Read that blog post for details.
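To make that concrete, here is a small Python sketch (all addresses are made up): the From header is just text inside the message, while the envelope sender that SPF actually checks is supplied separately at send time.

```python
from email.message import EmailMessage

# The RFC 5322 "From:" header is just data inside the message; the SMTP
# envelope sender (MAIL FROM) is a separate value given at send time,
# and SPF validates only the latter.
msg = EmailMessage()
msg["From"] = "ceo@victim.example"   # what the recipient's client displays
msg["To"] = "target@example.org"
msg["Subject"] = "Urgent"
msg.set_content("Please wire the funds.")

envelope_sender = "anyone@spammer.example"  # what SPF would check
# smtplib.SMTP.send_message(msg, from_addr=envelope_sender) would happily
# transmit this; nothing ties the two addresses together.
print(msg["From"], "!=", envelope_sender)
```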
You need a proper signature via PGP or S/MIME if you want the receiver to know for sure that the message is from you. Unfortunately this requires education and email clients that support such signatures (most desktop clients do), but that's email for you.
The average layperson will not get that. I'm fairly sure that if my mother received an email that wasn't delivered to her spam folder saying "Hey, remember that old copy of my birth certificate you have floating around? Could you send that? Also, CC my good friend firstname.lastname@example.org", she would call me first - if I was reachable. She is also totally ignorant of digital signatures and most likely unable to verify any that were present anyway.
As much as I dislike Google and try to avoid their products and services at all cost, at least I have confidence this wouldn't happen with them. Not that I would go back, but it's still concerning.
The only way Google could protect you is if the From address is from @gmail.com (maybe, not completely sure). But if you have your own domain, you can’t have that protection. Sure, you might not be able to use Google’s own servers to send that email, but email is federated so you can use somebody else’s servers.
The only thing that stops spammers from doing more of this is the web of trust happening between email services. This is precisely why if you setup your own server, you’ll start off with a negative reputation and your emails will end up tagged as spam depending on the destination.
No, that's not the point.
> Sure, you might not be able to use Google’s own servers to send that email
That is the point. Why does Fastmail allow this where Google doesn't. At best, it's ignorant and intentionally misleading. At worst, downright malicious and ripe for abuse.
I also wonder if there are superusers that have a legitimate use for sending emails that have a different "From".
Something to think about is that, looking at the postal mail it was designed after, I don't imagine a postal office would reject me if I tried to drop off mail authored by someone else. They don't check the "From" in the envelope with my ID or anything. In fact, many envelopes don't even have a "From", and you don't even have to face a human when dropping off your mail. All the postal office does is provide access to the global delivery network for a fee.
It might be more apt to think of email providers likewise as network providers that allow transparent access to the global MTA network.
Both postal and electronic mail rely on signatures for proper authentication. It's only that electronic mail's (cryptographic) signatures are more secure but more difficult to use by laymen.
Maybe this issue ought to be thought of as similar to how illiterate people sign paper documents by making an "X". I imagine it's trivially easy to spoof documents supposedly signed by them, and even mail them. I wouldn't blame the postal office for accepting such spoofed documents.
Computers being relatively new and all, perhaps it isn't that bad to think that most of the world is still computer illiterate even if they think otherwise because of their ability to use point-and-click interfaces designed to be used even by illiterate young children.
What I think is needed is better computer education.
As to where this expectation for "From" to be validated comes from, I imagine it's something we've grown accustomed to from our use of centralized services. It would be really bad if a message on Facebook or Twitter could be spoofed, but those services are centralized, so restricting their users equates to properly protecting their users. Email, however, is decentralized. That's a good thing, and the proper way to do authentication in a decentralized service without making it more centralized can only be by non-spoofable signatures, not by trusting validations from independent service providers.
I reported the same problem to posteo.de in March 2016 and still have not received a satisfactory answer, though it seems they have some counter-measures in their webmailer nowadays. The fun part was that as a "no logs" privacy-oriented provider, they were not even able to track who sent them a complaint from their own support address ¯\\_(ツ)_/¯
As a comparison: at disroot.org I found the same problem, and it took them a few hours to repair their postfix configs.
"Email spoofing bugs do not qualify. We are quite aware that users can set arbitrary From addresses on emails, that our SPF records allow arbitrary hosts to send email as our domains, and that our DMARC policy is not enforcing passes. These policy decisions are by design, and we track the actual sender in a separate header."
Someone could decide to forward their other mail to their Fastmail account. Should they then risk losing email that others send to that address? DMARC tries to solve this, but the world is dirty, mailing list software sucks, and they would have to take the blame for problems outside their control.
I can understand the decision. They could probably do something to show good intentions, like flagging suspicious email and making sure their own email software shows appropriate warnings, but it's never going to be perfect.
SPF, DKIM and DMARC do not authenticate non-envelope headers like From:, To:, etc. unless they are specifically included in the DKIM signature, and there is no way to publish that you require those headers to be part of the signature.
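For illustration: the h= tag of a DKIM-Signature lists which headers the signature covers, and a receiver can parse it, but there is no mechanism for a sender's domain to demand that receivers insist on a particular set. A rough Python sketch (the signature value below is a made-up, abbreviated example):

```python
def signed_headers(dkim_signature: str) -> set:
    """Return the header names listed in the h= tag of a DKIM-Signature
    header value, i.e. the headers the signature actually covers."""
    for tag in dkim_signature.split(";"):
        name, _, value = tag.strip().partition("=")
        if name.strip().lower() == "h":
            return {h.strip().lower() for h in value.split(":")}
    return set()

# Abbreviated example value; bh= and b= payloads elided.
sig = "v=1; a=rsa-sha256; d=example.com; s=sel; h=From:To:Subject:Date; b=..."
print("from" in signed_headers(sig))  # True: From is covered by this one
```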
Stopping phishing is hard. End users mostly are fooled by a little padlock in their web browser, and that's a much simpler trust model. Eliminating email dressed up as web pages would probably do more to combat that than authenticated sender models ever will, but nobody really wants that.
It sounds kind of lazy to me. Though I'm sure they would get lots of complaints if they turned it on...some mailing list software depends on spoofing, for example. Or web based "contact us" forms. So perhaps it's just to avoid lots of support tickets.
Take a look in Gmail at a signed email and you’ll see a “Signed by” field in its header info, with a domain name as a value.
Also the SPF setting has nothing to do with the From header either.
In other words the “From” value cannot be protected, unless you sign your email with PGP or S/MIME.
They know who authenticated to the SMTP server, so they could enforce that the From address matches the authenticated user. Otherwise, they basically act as an open relay.
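A sketch of such a submission-time check (the helper below is hypothetical; real MTAs express this differently, e.g. Postfix via `smtpd_sender_login_maps` plus `reject_sender_login_mismatch`):

```python
def from_matches_auth(auth_user, from_addr, allowed_aliases=()):
    """Hypothetical policy hook: accept a submitted message only if its
    From address is the authenticated user or one of that user's
    verified aliases."""
    addr = from_addr.lower()
    return addr == auth_user.lower() or addr in {a.lower() for a in allowed_aliases}

print(from_matches_auth("alice@example.com", "alice@example.com"))   # True
print(from_matches_auth("alice@example.com", "ceo@victim.example"))  # False
print(from_matches_auth("alice@example.com", "me@alice.example",
                        allowed_aliases={"me@alice.example"}))       # True
```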
Plus it's not a unique problem to fastmail.
I demonstrated this behavior to eggsampler after discovering it quite a long time ago by messing around with HTTP payloads in their web interface - it's wild to me that FastMail will use the DKIM private keys from an entirely different FM account to sign your messages.
Unlike eggsampler, I won't be ditching them, but I hope that FM reconsider their policy eventually. That they have awarded themselves the privilege of a "green tick" on their own official emails while throwing everybody else to the wolves is slightly ironic.
And if Fastmail allows Fastmail user A to spoof Fastmail user B, then the above still only protects you against non-Fastmail customers.
But anyone can set up their own postfix/qmail/sendmail server and put anything they want as the From.
Or am I misunderstanding the issue here?
The problem is if any email service did this you'd start trusting the "from" field and that is wrong. Do not trust the from field. It's as simple as that.
I remember back when Gmail was new and hot. It was unlike any other email service out there, and ridiculing people for using inferior email-solutions could to a certain extent be justified.
While other webmails were slow, had constantly reloading pages and what not, Gmail was fast. It was amazingly fast. Gone were the five minutes of making the webmail work for you. You just sent the email and you were done. Just like that. Back then, this was unheard of.
These days though? Everyone is still pretending like GMail is the only game in town, when I think it has one of the worst webmail interfaces out there. And it's slow. Oh god it is slow. And god forbid you try to load it in a browser other than Chrome, because then it just grinds to a complete halt.
So yeah. Happy Firefox-using FastMail-customer here. You couldn't get me back to GMail even if you paid me.
(I got my first gmail account in 2004.)
Yeah, the UI wasn't bad, but there was less email back then and I didn't (still don't) really mind being served HTML repeatedly instead of some "AJAX" application, which is what I remember the technique being called. (It used to be the latency to the server was poor and JS hid that; now, on 2-3ms fiber, it seems that the JS actually introduces a lot more latency than it hides.)
be sure to write your filters before receiving anything. Especially important if you're importing your email for the first time... and insanely annoying if you want to debug such an advanced filter.
as always with fastmail: It has great features... but always with a massive caveat.
One nice thing, for me at least, is that it runs before the email ends up in your inbox; I am used to Outlook rules, where rules execute after arrival.
It would be awesome if some day there was a "jmapfilter". I think JMAP would be really efficient for this use case.
Also FastMail’s search works for any email header so it can be more potent than that of Gmail.
I wonder why they can’t add a checkbox to apply the rules retroactively. It seems simple enough.
I can't even begin to consider using it at the price point I presume I'd be in.
For IMAP folders you can set a default identity and email in a folder stays in that folder without cluttering Archive, like in Gmail. So in effect you can do much with a single account.
OTOH, my relatively new gmail account has >4 GB in it. Other accounts have 0.8 and >3 GB; none of these were used for file storage (FUSE) or anything like that.
I read the FM blogs and I remember them mentioning that low battery consumption was one of the design goals.
The JMAP site has:
> JMAP is designed to make efficient use of limited network resources. Multiple API calls may be batched in a single request to the server, reducing round trips and improving battery life on mobile devices.
That actually doesn't sound cool. Is there really no standardized way to do notifications? The world doesn't consist only of shitty mobile walled gardens that insist on, or strongly favor, their proprietary implementations.
Gmail/Gsuite seem to be good email services from my perspective as lay user and occasional admin. Can you expand on why you think they are not good email services?
- constantly changing things around; the UI gets less intuitive with every release.
- slower with every release.
- violates the IMAP standard by re-using IDs across tags/IMAP folders, risking actual data loss. Example: if you try to delete an email from a single folder in a real email client, you will also delete it from every other folder it has been "tagged" in.
- similar IMAP issues with sent emails: it's hard to track sent emails from an email client unless you let the client explicitly save a copy to the sent-emails folder. But then you suddenly have duplicate emails in the web UI.
- Makes an open standard (internet email) proprietary.
- Hell to integrate with: See above.
I'm sure I could go on, but really. If you still consider GMail best in class, the only possible explanation is that you haven't seen anything else.
They just assume everyone uses the web interface.
More likely, they want you using their app and their web interface, and merely tolerate third party apps. For now.
Google wants you in the Googleverse forever.
Describing this as REST is really strange. Defining your own operations over an HTTP POST is what SOAP and other RPC style web services do and specifically what REST isn't. But I guess that a lack of a standard behind REST ended up with the term being used for everything.
It involves two endpoints exchanging the state of a shared resource. It needs to be compliant with the constraints of that style.
People think that REST must be over HTTP, but it can be over any protocol. The essence is that it is a style of systems design, so JMAP can be considered RESTful as described in the link above.
REST is one of the real patterns in software architecture, a set of constraints, not a set of structural elements.
The fact that this is a discussion, though, makes my point. REST being only an architecture style with very loose definitions makes it arguably fit all different kinds of APIs, which in turn has made the term useless over time. Maybe we could use a set of technical standards (like SOAP) for the common API implementation solutions within the bigger REST idea. Discussions like REST vs. SOAP are like comparing OO and Haskell: one is a concept/pattern, the other is a specific technology.
That wasn't my takeaway. They don't consider it HTTP-based REST, but they do consider it RESTful. The FAQ's question is from the perspective of someone who doesn't make that distinction.
You can model things RESTfully, but the encoding of REST into a carrier protocol (commonly HTTP, but it doesn't have to be) is a separate matter. The "very loose definitions" largely stem from the encoding, not the modeling.
>The "very loose definitions" largely stem from the encoding, not the modeling.
Sure, but the fact that we don't have strict definitions for common encodings means that you then have a discussion on it every time. It would be nice to have Standard A, that defines a common way to encode a common type of REST API (HTTP/JSON/etc) and we can just say our software uses that and REST is implied. Instead we get a situation where JSON-over-HTTP is perceived as being REST when it actually fails as a test in both ways. Some things are REST and don't use JSON or HTTP and some things are not REST and still use JSON and HTTP.
Also, I'd classify a REST API as anything that does HTTP requests and consumes proper JSON. As long as that's true, the rest is squabbling :)
REST doesn't need to use any HTTP verbs other than GET and POST.
Maybe I'm naive in thinking it was possible, but we could have avoided this whole "make an account on X messenger so we can talk" if someone had just jammed XMPP and SMTP together.
Every few years I'm tempted to try to write an SMTP server (MX) myself, but then I realize that postfix has been around so long and is the go-to choice for a reason. I don't have 20 years of figuring out how X mail server interprets SMTP to be as reliable as postfix, so I settled for trying to wrap it.
The real hole is "instant messaging". Somehow every few years we got from ICQ to AIM to Paltalk to Skype to Facebook Messenger to Slack to ... (at least Atlassian had the decency to take HipChat behind the woodshed and shoot it)
The strange thing is that these services dry up, blow away, get replaced, but they don't seem to improve on what came before.
There is a standard, XMPP, but the only people who care about it are firefighters, cops, and soldiers. For all the anger at internet giants these days, I can't for the life of me see why people aren't pushing for open instant messaging.
The second point is especially big. People expect to be pointed to a singular app/site, not “choose whichever client you like”. Your average Joe just doesn’t conceptualize services as being separate from clients in the first place, and even if they did having to choose a client is a dead end.
Give it to your average Joes and Janes and they will just use it like any other messenger. But since it is pure XMPP, you can use any other Jabber account to chat with them. :)
I've recommended it a lot over the last weeks and most of my pals just use it without any hassle. At least in my bubble, most people have, besides WhatsApp, also Telegram or Viber or WhatEver installed. So for them Quicksy is just yet another app - but this time one which helps to get out of the walled garden. :)
There would obviously be a lot to work out, but if something like this was in the SMTP standard (or at least introduced), I think eventually email providers would race to differentiate themselves with support for it, and we might not be where we are today. In my view there's no reason to even require the SMTP servers to serve chat traffic -- as long as they could hand off reasonably, and everything spoke the language as specified in the spec (or at least came close).
Even if I'm completely wrong and email should be left asynchronous, SMTP could at least have introduced some standard around chat negotiation and a protocol negotiation process for remote chat servers/agents.
Wouldn't this be best handled with SRV DNS records or some other service discovery process outside of SMTP?
I don't have a preference on exactly how the service discovery would work -- I'm merely suggesting instant messaging could/should have been an extension of the SMTP spec like websockets were an extension of HTTP.
[EDIT] - Looking over the link you sent, that is exactly what would have made a good addition in my mind -- if SMTP had some sort of functionality to suggest that XMPP was available and where to check (DNS SRV records)
Second, IRC alone doesn't provide a bunch of features that everyone expects nowadays. You have to host or pay for a bouncer if you want to see what was said when you were offline, for example. Gotta use a 3rd party service for push notifications on iOS. Again, there's no reason why this couldn't exist, but it's another product, not just IRC.
Years ago, I started out with mIRC on Windows 95. I didn't know many IRC commands, but as I recall, you could navigate through the menus to do things like list channels, join them, part from them, etc. So, I don't think that an application like that should be any more difficult to use compared to Slack.
XMPP is federated (email@example.com can message firstname.lastname@example.org).
But the major players (by numbers) wanted silos: Google (talk) and fasebook (first gen of messenger).
They basically went lol, screw users (arguably because: spam. But hello, Gmail? And today fb spam...).
So thank Google and Facebook for deliberately gimping it so you can't FB-message email@example.com or gtalk firstname.lastname@example.org.
2) there were only two "big" players, Google and Facebook. The benefit of federation would be as with email: an open internet with federation across organisation level services (community/company run servers).
The fact that Google didn't implement SSL for federation doesn't mean people "wouldn't" federate with them; it means Google didn't make a real effort.
Wave suffered from a UI that was a mess layered on top of a protocol that made it really hard for people to get started on experimenting with alternative frontends, and without a user-base giving people a reason to persist figuring out how to interoperate with it. Had the protocol been simpler, the UI mess might not have mattered so much - people might have come up with their own ideas.
That gives you some idea how simple it is.
Of course, if the authentication doesn't work, nothing works!
I guess the modern equivalent is checking if a REST API is usable just with curl.
> make an account
You are required to make an account somewhere so your account can be banned if you spam, basically.
I thought I had searched far and wide for other mail servers, yet only came up with postfix, iredmail and cyrus. I clearly didn't look hard enough.
Umbrellas, for example, see continuous evolution: better fully collapsible versions, more automated deployment, reduced weight, etc.
From Wikipedia, illustrating both that there's continued substantial effort in coming up with new ideas, and at the same time that it is hard to come up with something genuinely new:
> Umbrellas continue to be actively developed. In the US, so many umbrella-related patents are being filed that the U.S. Patent Office employs four full-time examiners to assess them. As of 2008, the office registered 3000 active patents on umbrella-related inventions. Nonetheless, Totes, the largest American umbrella producer, has stopped accepting unsolicited proposals. Its director of umbrella development was reported as saying that while umbrellas are so ordinary that everyone thinks about them, "it's difficult to come up with an umbrella idea that hasn’t already been done."
People buy cheap umbrellas, they suck, then people imagine better umbrellas that cost more: goto START.
A 9yo of my acquaintance is really inventive; he often says "what if we had something that would ...", yes, if you'd invented and developed that 50 years ago you'd have been a multi-millionaire.
A lot of inventions arise naturally out of a creative mind being confronted with the problem.
I have a view on the spam thing. The whole "your idea will not work because..." meme is hugely destructive of innovation in email. It sucks energy and mindshare. It's a classic old-timer put-down.
What would (imnsho) have fixed spam is sender-pays. I've debated this with a lot of people. We're 50/50 on it: fifty agree with me, fifty million don't.
JMAP is UTF-8 clean, btw.
(I used to do email for a living in the eighties when life was simple and bang!chains!worked)
I propose a scheme where the sender pays e.g. 5 ct per e-mail (low enough that it does not matter for legitimate use, but high enough to make spam unprofitable), BUT with the following twist: the receiver can generate API tokens that allow free e-mail delivery.
So when I sign up for e.g. Twitter, I can give them an API token for my mail address so they can send notification mails to me for free. If they decide to start spamming me, or if they decide to sell my token to a spammer, I can just revoke the token and the spam flood stops instantly.
I have not found any flaws in this idea yet. RFC!
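Here is one way the token half of the scheme might be sketched in Python: the receiver mints per-sender tokens with an HMAC and keeps a revocation list. Everything below (class name, fee handling, token format) is hypothetical:

```python
import hashlib
import hmac
import secrets

class TokenGate:
    """Sketch of the proposed scheme: the receiver mints per-sender
    tokens that waive the (hypothetical) per-message fee, and can
    revoke any one token without affecting the others."""

    def __init__(self):
        self._key = secrets.token_bytes(32)
        self._revoked = set()

    def mint(self, sender: str) -> str:
        # Bind the token to the sender, so a leaked or resold token is
        # attributable and individually revocable.
        mac = hmac.new(self._key, sender.encode(), hashlib.sha256)
        return f"{sender}.{mac.hexdigest()}"

    def revoke(self, token: str) -> None:
        self._revoked.add(token)

    def delivery_is_free(self, token: str) -> bool:
        sender, _, mac = token.rpartition(".")
        expected = hmac.new(self._key, sender.encode(), hashlib.sha256)
        return (token not in self._revoked
                and hmac.compare_digest(mac, expected.hexdigest()))

gate = TokenGate()
token = gate.mint("notifications@twitter.example")
print(gate.delivery_is_free(token))  # True: delivery is free
gate.revoke(token)
print(gate.delivery_is_free(token))  # False: this sender now pays per mail
```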
The natural solution is a pricing list, but then US spammers could route via the cheapest geographic servers.
How do you get paid? Banks will put a transaction cost on it, making sending email cost stupid money. So, Bitcoin? But the built-in energy cost of processing a transaction will force a floor on the transaction price that's too high.
If a token leaks, you'll have to fiddle about to allow a genuine sender's emails to get through; I guess you could automate that if your MUA had credentials to inform genuine senders of API key updates.
Then you funnel all that junk into a special folder (black hole) that you can either ignore or check if you are expecting a legit request for your attention.
The more I think about it, the more I love this idea. All email that shows up in your inbox is there because you have explicitly allowed it.
The one controversial part of this is deciding who is permitted to redeem the bond, i.e. who is the central arbiter of what is and isn't spam? Well, fortunately we already have an entity like that, de facto, since we have the Big 4 email providers ("Gmail, Hotmail, Yahoo, and AOL", or any other similar group that all mail servers must, in practice, comply with the demands of).
In the system I'm proposing, if any of these big providers detect a domain being involved in spam (or N out of M providers, to reduce the risk of false positives), they could redeem the bond. Of course, the bond would be structured such that the only recipient was a non-profit, like the IETF, or possibly an entity like ICANN. That way there is no financial incentive to make false positive claims.
As for how this system would be introduced, that's the easy part. All pre-existing domains would be grandfathered in, so that no current email users would be negatively affected. (Indeed, there is a moral hazard that some email providers would support this system precisely because it only affects new entrants to the market). There would simply be a flag day, after which if you register a new domain, and want to send emails from it, your domain registrar would have a check box saying "Yes, I am happy to be charged an extra $10 and have put it in a bond so that I can send email to the Big 4 providers". After sending a certain amount of legitimate email, a Big 4 provider could then send a transaction which makes the locked funds return to the original issuer of the bond.
Finally, to decide how big the bond has to be, we can look at data like this:
If spammers are going to extra effort to save a few dollars on their registration costs, then their margins must be pretty tight. It also means that domain reputation systems are working, since spammers have to keep bulk buying new domains:
The obvious downside is that since the system is fundamentally just cryptography it doesn't discriminate between spam and legitimate bulk mail (password resets, newsletters, mailing lists, etc.)
I don't really see the payment as a big problem. This has been discussed previously, mostly initiated by the big email services
The sender pays the reader. If the reader is the one deriving the value from this communication, they would pay the sender out of band as compensation.
Along with the billions of spam SMS that are also supposed to be sender-pays, sadly
Alternate but similar - sender holds the email until the receiver fetches: https://cr.yp.to/im2000.html
I'm not sure it actually helps that much to cut down spam but anything is worth a try at this point.
Email is designed around asynchronous clients, e.g. the ability to write everything offline, connect for just long enough to queue them all up in some server, then disconnect while that server passes them along. By the time the receiver (server or client) knows that they've been sent a message, the sender may be long gone.
Hashcash only works if:
- It can't be bypassed. Without a mechanism to tell senders how much to include, receivers must set their price near zero to avoid discarding legitimate messages which guessed the wrong amount. Keeping messages without any/enough hashcash would defeat the purpose of the thing.
- The sending client does the work. Since servers are always online and reachable via a known address, we could have them negotiate an amount; e.g. the receiving server checks the message for hashcash, returns an error stating that a certain amount is required, the sending server tries again with more hashcash. The problem is, getting the sending server to mine hashcash won't stop spammers: they'll just use other people's servers, like gmail, hotmail, etc.
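For reference, the core hashcash mechanic is cheap to sketch: the sender searches for a nonce whose hash has enough leading zero bits, and the receiver verifies with a single hash. This is a simplified illustration, not the exact hashcash v1 stamp format:

```python
import hashlib
from itertools import count

def mine(stamp: str, bits: int) -> str:
    """Find a nonce such that SHA-256 of "stamp:nonce" starts with
    `bits` zero bits. Cost to the sender grows as 2**bits."""
    for nonce in count():
        candidate = f"{stamp}:{nonce}"
        digest = hashlib.sha256(candidate.encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - bits) == 0:
            return candidate

def verify(candidate: str, bits: int) -> bool:
    """Receiver side: one hash, regardless of how hard mining was."""
    digest = hashlib.sha256(candidate.encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

# A receiver could demand more bits from unknown or suspect senders,
# which is where a negotiated/variable price would come in.
stamp = mine("to=bob@example.org", 12)  # ~4096 hashes: cheap for one mail
print(verify(stamp, 12))                # True
```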
I've written about this before, but I think that a generic protocol for negotiating hashcash would be really worthwhile. Maybe it could be made to work for email, but even if not there are plenty of synchronous protocols which could use it.
In particular, there's no reason to keep a fixed price; we can figure out a price using the same heuristics as existing spam filters: can we verify the sender, have we seen spam/ham from them before, do they appear on black/whitelists, etc. This way the pressure can be kept on spammers, whilst the majority of normal traffic can go through with little effort. Note that this fixes the mailing list problem too, e.g. if users add the list address to their whitelist, or if they send a message in order to subscribe (hence triggering the "allow this, it's a reply to our message" heuristic).
I also think this would be a nice alternative to API keys, since it would keep things more "open" for experimenting and mashups, whilst giving providers a way to avoid abuse (API keys could still be provided, as a way to significantly lower the amount of hashcash required).
The idea was dead on arrival.
Since protocols are meant to connect different implementations, and adding more protocols for the same task quickly hurts interoperability (before bringing improvements in the long term), I am skeptical of the effects these new protocols have on the ecosystem. I am aware that JMAP was born more out of the RESTful requirements of an HTTP-based app, but in general I wonder where this road will lead us.
As for XMPP battery consumption I didn't see major problems, Conversations.im is always <1% (I just checked and it shows 0%). On the other hand Conversations can use push to optimize battery usage.
So it is pretty obvious that the vendor-built battery optimization software does more harm than good.
Could JMAP replace SMTP as well as IMAP?
BURL is an SMTP extension that allows a client to submit a mail that is already stored on an IMAP server. This still requires making a separate connection to send the mail, but at least it does not require uploading it to both SMTP and IMAP just to store it in the Sent folder.
However, this extension is rarely implemented or deployed, although the RFC itself is already twelve years old. For example, Dovecot started to support it, by acting as an SMTP proxy, as of version 2.3 in 2017. I am not aware of any mail clients that support BURL.
Can I know from a capability exchange if the remote server will send emails after adding to the "Outbox" folder?
Background for this is that I implemented a calendar client and a calendar server (with SabreDAV) last year and I'm wondering if I should be adding support for JMAP anytime soon.
When I was working on a calendaring project, I found lots of ways to create broken events in Google Calendar, for example: completely valid according to the RFC but absolutely not usable in apps.
Also, there are working groups that are STILL developing calendaring RFCs; they create tons of esoteric and cryptic documents every year.
P.S. When I write “RFC” I mean any well established standard.
> Internet: Do you think JMAP will really take off?
> Me: JMAP is an open, smart, modern, and powerful E-mail protocol, so probably not.
I am not impressed. Folders moved in the web interface show up unmoved in Thunderbird, and vice versa.
Sometimes it works, sometimes it doesn't. They're investigating the problem at the moment, but the updates they've given me don't give me much hope.
I'm looking at migrating away to another provider that just does plain old imap.
I just do not know if EWS is an "open standard" that could be replicated by a third party server or if it is only "open documentation" and licensed for Exchange only :(
Not sure if this is possible, I don't know if Outlook uses the EWS protocol, OWA or something entirely different :/
WTF? So, webmail was limited by the fact that it was running in a browser and thus had to use HTTP, which has semantics that don't really fit the needs of an email access protocol. Now, we do have stuff like websockets that would make it possible to run a protocol from the webmail client that actually fits the needs, and instead people invent a new protocol that inherently doesn't match the needs of the application?
And could you also explain how constantly making new connections makes things work more reliably over unreliable links? Like, does that allow you to transfer data when the network link is down? Does the fact that inside the TLS/TCP connection data is transferred via HTTP instead of IMAP somehow make the TCP connection work better over lossy links?
It would seem to me like the exact opposite should be the case, if it has any effect at all?
The platform argument is roughly that if they hold open one connection, it's cheaper than each app holding open its own connection. And maybe, if you're optimistic, the platform will be better at figuring out per-network ping intervals for that connection that adapt to broken NAT devices (i.e., on some networks, wake up once a minute to keep the connection active, and on reasonable networks wake up once every 30 minutes).
But in any case, none of that is an argument for building a protocol that forces the problems that result from that on all platforms that don't have such restrictions, at best that is an argument in favor of workarounds to make things work as well as possible on such platforms.
Also, I would think that the long-term solution to this problem should be mobile link layer protocols that allow the mobile station to receive incoming packets with little delay while still conserving power. Stuff like high-powered, high SNR alert channels that allow the mobile station to shut down everything apart from a simple, low power, receiver, that only powers on for a few microseconds every few hundred milliseconds to listen for a wakeup signal from the network, so that incoming packets can be delivered with a latency of a few hundred milliseconds at any time. And then you are stuck with a stupid poll-based protocol that's adapted to the limitations of some ancient technology as a replacement for an even more ancient protocol that didn't have that limitation.
On the other hand, JMAP does allow you to get push notifications on changes, outside of the normal HTTP API.
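Concretely, the push channel delivers small StateChange objects (over EventSource or a platform push subscription) telling the client which data types changed, so it knows what to re-fetch. A sketch of handling one; the account id and state strings are illustrative, not real server output:

```python
import json

# Illustrative StateChange payload, as delivered over the JMAP push channel.
# The account id "a001" and the state strings are made up for this example.
event = json.loads("""
{
  "@type": "StateChange",
  "changed": {
    "a001": { "Email": "s123", "Mailbox": "s44" }
  }
}
""")

def types_to_refetch(state_change, account_id):
    # The client compares these state strings against its cached ones and
    # issues the relevant /changes calls for anything that moved.
    return sorted(state_change["changed"].get(account_id, {}))

print(types_to_refetch(event, "a001"))  # ['Email', 'Mailbox']
```

The payload deliberately carries no message content, just enough for the client to know that its cached Email and Mailbox state is stale.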
That doesn't explain why it is limited, only why it isn't interoperable. A traditional IMAP client also didn't speak TLS. That wasn't a reason to invent a replacement for IMAP, because you simply stack IMAP on top of TLS, and no one complains that "every TLS IMAP server has to implement its own TLS proxy", let alone gets the idea that you should invent a replacement for IMAP in order to be able to use it over TLS.
> and IMAP as a protocol is very, very bad if you don't want to sync emails to your local machine and thus keep your own copy of the whole mailbox state.
Because? And mind you, we are looking for a reason that would justify (a) inventing a whole new protocol instead of adding a few small extensions and (b) specifically inventing a pull-based protocol that uses HTTP as the basis.
> On the other hand, JMAP does allow you to get push notifications on changes, outside of the normal HTTP API.
In other words: Because JMAP doesn't fit the needs of the application, because it uses a pull-based protocol, and in contrast to IMAP which has push built-in already (well, it's an extension to the original protocol, but one that is widely supported and a relatively simple change to the protocol), they support tacking on a workaround for the resulting problems? And that is supposed to be an argument for the protocol, or what?
I am not at all convinced that JMAP will succeed, I think it's a long shot, but its failure will not be because of the choice of HTTP.
First, it's based on HTTP, so it's obviously not simple. Then, it obviously doesn't solve the problem perfectly if it cannot do the job without an additional side channel (which still doesn't solve the problem perfectly as the side channel obviously is slower and reduces reliability). And what is possibly the monstrosity with websockets, if not the fact that it is kinda-sorta using HTTP for something it wasn't built for ... which you think is best avoided by using HTTP for something it wasn't built for?
And no, I certainly don't want to use websockets, just as I don't want to use any of the other monstrosities that make up "email in a browser". But the great thing about sensible protocols for the purpose, which could work over websockets, is that you can simply drop all that crap and run them over TCP (with TLS in between, preferably).
If you maintain a connection you have to leave the modem powered. If you connect every so often (1 minute, 10 minutes, whatever) the modem can be powered down in between. The modem and the screen are the top two users of power in a phone, so this is a big win.
The only thing that requires leaving the modem powered on is wanting to be able to receive something. But tearing down the connection obviously doesn't help with that: once the modem is powered down you can't receive anything anyway, whether or not you have an established TCP connection.
So, no, that does not explain how doing more work saves power.
So in practice, you have to regularly send traffic down the connection, fully powering up the modem, and ideally that shouldn't happen for each app individually. Now one could imagine a system where the OS coordinates this and signals to apps that the system is going to transmit now, and that they should trigger their own requests, bundling them in the process, but that's not how the platforms work. Instead, the chosen method is to have central push services, which the phone OS polls and which then distribute the messages to the apps.
FWIW this would happen quite naturally if the system simply used flexible timers for these sorts of wakeups. It's an accepted, even trivial solution to the "need to power something up on an infrequent and perhaps unpredictable basis, and have it be used seamlessly by multiple apps" use case. Centralized services are nice but they shouldn't be the only way for a system to get notified about stuff.
That's only true in so far as it is besides the point. Yes, suspending your phone for hours on end and expecting established TCP connections through some telco's IP connectivity to still work obviously won't work. But that wasn't my point. My point was that interrupting connectivity does not interrupt TCP connections, or other kinds of long-running connections. And that tearing down the connection does not in any way improve the situation with regards to not being able to receive messages.
The mere fact that polling doesn't make things worse in a particular use case is not an argument for designing a poll-based protocol. But also, saying that you can only use polling patterns anyway is a massive oversimplification. Even if you implement a poll-based "background delivery" mechanism, a push-based protocol still is advantageous for when the application is being actively used in the foreground. Mind you, you generally can still poll with a push-based protocol, but that does not really work the other way around. Even if you poll for new messages in the background, you have a better user experience when server-side state changes during active use of an application are reflected immediately in the user interface, so you should still have a push-based protocol, and only switch to polling when in the background.
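The hybrid policy described above, push while the app is in active use, polling only when backgrounded, is simple to state in code. A toy sketch of just the decision logic (no real networking; the interval is an arbitrary example):

```python
import enum

class AppState(enum.Enum):
    FOREGROUND = "foreground"
    BACKGROUND = "background"

def sync_strategy(state, poll_interval_s=600):
    """Toy policy: hold a push connection open while the user is looking at
    the app, fall back to periodic polling once it goes to the background."""
    if state is AppState.FOREGROUND:
        # Keep the connection up: server-side changes show up immediately.
        return ("push", None)
    # Wake up occasionally and let the modem sleep in between.
    return ("poll", poll_interval_s)

print(sync_strategy(AppState.FOREGROUND))  # ('push', None)
print(sync_strategy(AppState.BACKGROUND))  # ('poll', 600)
```

Note the asymmetry: a push-capable protocol can trivially degrade to this polling mode, but a poll-only protocol cannot be upgraded to immediate foreground updates.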
Also, part of your argument only applies to TCP, which unfortunately uses the IP addresses as part of the connection identity, but that is not an argument against long-running connections per se, as you obviously can build connection protocols that don't do that, see QUIC for a real-world example. So, if that is an aspect you want to solve, you shouldn't invent a new poll-based protocol on top of a "transport" that doesn't fit the needs of the application and that also doesn't even solve the problem (when the IP address changes during an HTTP request, that connection will still fail, possibly using a long-ish timeout, and in any case necessitating a complete retransmit of anything that had already been transferred), but instead you should maybe write a specification for IMAP over QUIC, so you can actually seamlessly continue a data transfer over address changes and NAT remappings.
> but that's not how the platforms work. Instead, the choosen method is to have central push services, which the phone OS polls and then distributes the messages to the apps.
Well, yeah, but that's simply the manufacturers forcing a technically inferior solution on their users in order to gain power over their property. While designing a workaround for this situation certainly is a good idea, making this idiocy the basis for a protocol design, so that even software on sane platforms can not do better does not exactly seem like a bright idea.
You could have a variant of JMAP with GraphQL syntax and semantics, but there would be a fair bit of mismatch, for their purposes and focuses are quite different: JMAP is concerned with object identity and synchronisation, for what you might call thick clients (and JMAP Mail needs to be broadly IMAP compatible), while GraphQL is UI-focused, for what might reasonably be considered to be thin clients (not that they are logicless, but that they are very much focused on offloading burden to the server). I see no compelling case for a JMAP-like GraphQL thing.
It could still be an interesting task to develop an object synchronisation protocol like the JMAP core on top of GraphQL.
> JMAP is not designed around a persistent network socket, so it’s perfect for webmail clients that connect, do stuff, then disconnect
This would allow implementing JMAP in serverless architecture, e.g. self hosting on AWS lambda. That should be very cheap and significantly lower the barrier to self hosting (not needing to manage a box, just providing aws credentials).
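Because each JMAP interaction is a self-contained HTTP request/response, it maps naturally onto a function invocation. A hypothetical Lambda-style handler, just to show the shape; the routing, auth, and storage are all left out, and none of this reflects any real JMAP server:

```python
import json

# Hypothetical serverless handler sketch for the JMAP discovery endpoint.
# In a real deployment an API Gateway (or similar) would invoke this per
# request; here we only show that a connectionless protocol fits the model.

def handler(event, context):
    # `event` would be the gateway's request object; we only look at the path.
    if event.get("path") == "/.well-known/jmap":
        body = {
            "capabilities": {"urn:ietf:params:jmap:core": {}},
            "apiUrl": "/jmap/api/",
        }
        return {"statusCode": 200, "body": json.dumps(body)}
    return {"statusCode": 404, "body": ""}

resp = handler({"path": "/.well-known/jmap"}, None)
print(resp["statusCode"])  # 200
```

You couldn't do this with IMAP, which assumes a long-lived stateful connection; with JMAP the server can be stateless between calls.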
Think of what it takes to truly decentralise email. There are technical hurdles, but also some fundamental ones:
- other providers mark you as spam because you’re unknown
- you need an always on server to actually get your incoming mail
This would at least solve the second problem. If we can develop a product like mailinabox (Which has its flaws but it’s the right idea), but instead of asking for a fresh vm, it just asks for aws credentials, that could be pretty solid.
Hopefully one day we can give the people back their control over their means of communication. This seems like a step in the right direction!
Did I miss something about JMAP?
Mail is decentralized already, apart from relying on DNS.
And you seem very eager to put everything in the amazon basket.
What about for desktop purposes? What would a proper client like Thunderbird or Outlook do?
This isn't true; there's a LOT of industry on Exchange of some sort, and plenty of older institutions running their own email. Especially in IETF land.
And it's a huge risk increase if everyone moves everything to gmail.
Microsoft Exchange used to not bother with actually SMTPing when talking to another exchange server (or itself), I don’t know what it does these days.