One of the things I particularly recall reading is FastMail devs explaining that the standardization process led to changes and improvements. (Even their about page says that "JMAP was built by the community, and continues to improve via the IETF standardization process.")
Really thrilled the FastMail team is working on this, and doing it in the way that changes to mail should be handled: Going through proper Internet standards bodies. It's in stark contrast to AMP4Email, which is being implemented by Google wholly without respect to Internet standards.
The IETF's standardisation process almost can't help but improve whatever it is you're trying to standardise. Not only are you being shepherded by peers with expertise in whatever corner of networking you're excited about (the Area Directors for whichever Area your Working Group was assigned to), but you're obliged to put up with anyone who cares to mail in an opinion. Looking for consensus isn't a good way to run a restaurant kitchen, but it may be the best possible way to agree on network standards. Even when contributors say something you believe to be completely stupid, figuring out how to put into words exactly why they're wrong is likely to be instructive, and as often as not they're not /quite/ as wrong as you first thought.
For example, the TLS Working Group is polishing up a document telling people to cut it out with TLS 1.0 after all these years and get to something at least vaguely modern, like the ten-year-old TLS 1.2, instead (once upon a time it was named the diediedie draft; these days it has a more respectable name). Martin Rex offered the opinion that TLS 1.2 is _worse_ than TLS 1.0 and 1.1 because it doesn't smash SHA1 and MD5 together to produce a single hash. He reasons that if, somehow, SHA1 is broken badly enough to allow a preimage attack, TLS 1.0 is still fine, whereas TLS 1.2 with SHA1 is now broken.
Of course people said Martin is barking up the wrong tree. But, much more importantly, they took another look at their document: it didn't actually _say_ you should implement SHA-256 when you upgrade to TLS 1.2, but they all agreed you should do that, so probably the document wants to spell that out...
... are the changes from their last draft to the published RFC
Admittedly only very recent work, with git-minded people in the WG, has a full git history of revisions; earlier work like SIP just has the numbered drafts, each typically representing a few months' work from one draft to the next. But you can go back and see this work, and also the mailing lists that drove the changes.
I'm not a SIP expert, but it looks to me like the brief extracts I read are improvements. For example, the last draft requires people who want to add a new SIP method (like INVITE or CANCEL) to send IANA a copy of the RFC describing their method. Why? What possible purpose could there be when RFCs are public documents? Are they expected to... print it out and mail it? So the final standard text just says to provide the RFC number.
Unfortunately Exchange adds almost nothing to the mix: it supports only a bare minimum of IMAP extensions (only LITERAL+, if I remember correctly) and much else is not supported.
> Really thrilled the FastMail team is working on this
Sorry to be 'that guy'. I like Fastmail but maybe they could fix some of the irritating bugs we keep asking for resolutions on before embarking on fun protocol invention missions. Two killers:
(1) When forwarding an email there is no link between the forwarded email and the original thread (TOTAL TIME WASTER). When reported, they said they "don't consider it a bug [because of SMTP ID handling in existing daemons]". Missing the point so hard here, guys!
(2) When creating a new filter there is no checkbox to also apply it to existing matching emails. This means you have to manually replicate the search again, for consistency, to get things where they should be.
Frustrated at the lack of response after reporting these as a commercial user. Inventing new protocols should not be a higher priority than fundamental annoyances that are wasting customers' time NOW.
I would be very surprised if the people working on creating and implementing the new protocol have anything to do with the team that would fix the bugs you mention.
If this takes off, it would remove one of the last few holdouts of non-HTTP application-specific protocols. That just leaves us with SMTP, really. While using HTTP is not necessarily a bad thing, it's certainly an interesting indication of how application developers prefer convention-based protocol definitions (JSON over HTTP) instead of a strictly defined protocol over a duplex stream (IMAP over TCP).
> If this takes off, it would remove one of the last few holdouts of non-HTTP application-specific protocols.
IRC and FTP come to mind, but yes. It's as if email's being attacked from all angles lately.
> While using HTTP is not necessarily a bad thing, it's certainly an interesting indication of how application developers prefer convention-based protocol definitions (JSON over HTTP) instead of a strictly defined protocol over a duplex stream (IMAP over TCP).
Using HTTP is a bad thing, I think, because it's a complex protocol and it's entirely unnecessary; JSON is a poor, ad-hoc format that isn't optimal for parsing. I don't believe developers "prefer" any of this so much as it's being forced on them.
You can really only define things rigidly if you want to make much in the way of guarantees. Any system held together by conventions can have those conventions violated and this is a harrowing sign of things to come.
Email parsing can break in so many subtle and/or obscure ways that I have trouble putting any confidence in the parsing code I’ve written… it’s quite frustrating, especially since there aren’t well-tested mail parsing libraries for every language/platform to fall back on.
On the other hand, JSON parsing errors are reasonably bounded, and there are well-tested, highly optimized libraries for it on practically every supported platform and language. No, it’s not perfect or even well suited to the task, but it’s still an improvement.
> You can really only define things rigidly if you want to make much in the way of guarantees. Any system held together by conventions can have those conventions violated and this is a harrowing sign of things to come.
The problem with this is that most new protocols (or APIs as they're popularly known now) are built by startups aiming for rapid growth. Writing an RFC for the underlying protocol of the newfangled dating app you're building is counterproductive if you don't know the company will exist six months later.
A minimal HTTP/1.1 client is rather straightforward. It's not asynchronous, doesn't use any non-printable characters, isn't fixed-width, is US-ASCII, etc. Adding encryption and compression complicates things, but that is true of all protocols.
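As a minimal sketch of that in Node.js, here's a GET over a bare TCP socket (plain HTTP only, ignoring chunked encoding and keep-alive; example.com is just a placeholder host):

    // Minimal HTTP/1.1 GET over a raw TCP socket -- no TLS, no chunked
    // encoding, no keep-alive -- just to show how little framing the
    // protocol itself needs.
    const net = require('net');

    const socket = net.connect(80, 'example.com', () => {
      socket.write('GET / HTTP/1.1\r\n' +
                   'Host: example.com\r\n' +
                   'Connection: close\r\n\r\n');
    });

    socket.setEncoding('utf8');
    let response = '';
    socket.on('data', chunk => { response += chunk; });
    socket.on('end', () => {
      // Headers end at the first blank line; the status line comes first.
      const head = response.split('\r\n\r\n')[0];
      console.log(head.split('\r\n')[0]); // e.g. "HTTP/1.1 200 OK"
    });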
STARTTLS is vulnerable to man-in-the-middle and encryption downgrade attacks. Using a distinct port that only allows these application protocols over SSL/TLS (e.g. SMTPS and IMAPS) is better. It's also widely accepted. I don't think there are many people still running unencrypted SMTP or IMAP connections.
Opportunistic STARTTLS is vulnerable to encryption downgrade attacks. Good email clients don't implement opportunistic STARTTLS, they implement mandatory STARTTLS or TLS, with plaintext being either an option that's heavily discouraged by UX flow, enabled only for tests, or not available at all.
Sure, but a user has no way of knowing if their mail client does opportunistic STARTTLS or not. It's not something the average user can test, and not something mail clients would typically feel the need to mention. On the other hand, you can tell people that good practice is to use ports 993 for IMAP and 465 for SMTP and they'll be good to go, that using any other ports opens them to potential attack.
That ship has long sailed. HTTP is no longer just hypertext -- it's a ubiquitous and well-understood transport for RPCs, with lots of tooling and infrastructure support across the Internet.
I think at its core, HTTP abstracts away some of what you would want in a request-response protocol, but not really all of it. But _simply because of the ubiquity of tooling_, you get a whole lot of other stuff for free (load balancing, DDoS detection, in-browser debugging, etc.)
> Is it so complicated to work with a duplex-stream? Could a library alleviate the complications?
Yes, TCP streams are very complicated to work with (in my experience) -- there are way too many failure modes and corner cases to consider. IMO, your language's go-to HTTP library is that library that alleviates the complications.
All this said, I personally think gRPC is the right approach for RPCs, and I hope to see broader adoption of it over the coming years.
> Yes, TCP streams are very complicated to work with (in my experience) -- there are way too many failure modes and corner cases to consider.
Isn't it the same thing, but simpler and more flexible? You write a message, and then you read a reply, rinse and repeat. Could you expand on what's complicated about TCP that HTTP helps with?
When you're working with sockets there's lots to think about that you can simply ignore with an HTTP library -- connection setup and teardown, buffering, serialization, byte ordering, etc. But more importantly, you'll have to build your own status/error handling, cache control, content encoding, TLS, etc. (Doing TLS correctly is, by itself, not easy.)
To be clear, I'm not saying that there's no place for socket programming -- there definitely is. I've spent a good part of my life doing obscure network programming on all kinds of platforms, so I try not to use TCP/UDP unless I have a really good reason. (It's kinda the same reason I won't use C or C++ -- too much to worry about for most general purposes.)
Thanks for the reply. This gives me a better understanding of the issue here.
> connection setup and teardown
If there's no benefit in avoiding the setup and teardown of a connection for each request (which is what one typically does with HTTP anyway), then I can see how doing it yourself might be an annoyance compared to making a single function call to a library that sends the request and returns the response.
> buffering
A parser should deal with that. It could be a JSON parser.
> serialization
That's still a problem with HTTP. People typically use JSON for this and it doesn't depend on HTTP.
> byte ordering
The parser's/JSON's job.
> your own status/error handling, cache control, content encoding
That can be beneficial or it can be problematic. If you need such things, then yeah, it's cool to not have to think about making your own. However, if you don't or if your needs don't align with what HTTP provides, that can be problematic. For example, from a cursory search I see the following in the JMAP spec:
> Implementors must take care to avoid inappropriate caching of the session object at the HTTP layer. Since the client should only refetch when it detects there is a change (via the sessionState property of an API response), it is RECOMMENDED to disable HTTP caching altogether, for example by setting `Cache-Control: no-cache, no-store, must-revalidate` on the response.
So, HTTP provides caching, but not the caching mechanism that's needed, so it's recommended to explicitly disable it completely.
With respect to status and error handling, it seems JMAP still had to define their own errors to be described in JSON[1], so not a perfect fit there either.
On the content encoding, I'm not sure if JMAP would use something other than application/json, so it seems like something that wouldn't need to be specified if one didn't use HTTP.
I know the discussion is not really about JMAP. I'm just using JMAP as an example on the above.
> TLS etc. (Simply, doing TLS correctly itself is not easy.)
I would hope that there's a library equivalent of the shell's:
openssl s_client -connect $host:$port
I don't know why a library would have to be much more complicated. Maybe I'm being naive, but I think that if one wouldn't specify special TLS parameters to an HTTP library, then there's no reason one would need to when skipping the HTTP library and using a TLS library directly. IOW, I don't see why an HTTP library would be more helpful for setting up TLS than using a TLS library directly.
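For example, in Node.js the rough library equivalent is just this (a sketch; the host and port are placeholders, and certificate verification is left at the defaults):

    // Roughly `openssl s_client -connect imap.example.com:993` as a
    // library call: open a verified TLS connection and get back an
    // ordinary duplex stream.
    const tls = require('tls');

    const socket = tls.connect({ host: 'imap.example.com', port: 993 }, () => {
      console.log('TLS up, certificate verified:', socket.authorized);
      // From here on it's the same stream you'd have with plain TCP.
    });
    socket.setEncoding('utf8');
    socket.on('data', data => process.stdout.write(data));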
> You write a message, and then you read a reply, rinse and repeat.
Well, no, you don't "read a message" with TCP, because it is a stream-oriented protocol. You read bytes that keep coming and wait until you have enough, or until you reach a delimiter that you chose.
Yes, that's what I meant. I would expect a good parser to block until the message is complete, as determined by the syntax of the protocol, and return it, leaving the stream at the point where the next message should start.
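As a sketch of that buffering in Node.js (socket is an existing net/tls stream, and handleMessage is a hypothetical callback into whatever protocol parser you have):

    // Accumulate bytes from the stream and only hand complete,
    // CRLF-terminated messages to the parser.
    socket.setEncoding('utf8');
    let buffer = '';
    socket.on('data', chunk => {
      buffer += chunk;
      let idx;
      while ((idx = buffer.indexOf('\r\n')) !== -1) {
        const message = buffer.slice(0, idx);
        buffer = buffer.slice(idx + 2);
        handleMessage(message); // hypothetical: parse/dispatch one message
      }
    });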
I'm getting the feeling that the popularity of JSON and HTTP comes in good part from the lack of good parsing libraries in many languages. People generally only know how to work with what regexes are able to handle and nothing more.
> I'm getting the feeling that the popularity of JSON and HTTP comes in good part from the lack of good parsing libraries in many languages. People generally only know how to work with what regexes are able to handle and nothing more.
Even if you have good parsing libraries, parser combinators, etc., you still need to write the grammar - which can be hard if you are not accustomed to that... years later I still find the occasional bug in grammars I've written in the past. Compare this to JSON over HTTP, which ends up basically being
var xhr = new XMLHttpRequest();
xhr.open('GET', "https://my/api.json", true);
xhr.onreadystatechange = function(e) {
  if (xhr.readyState === 4) { var response = JSON.parse(xhr.responseText); }
};
xhr.send();
What other format allows you to get objects over the network that easily, without any special libraries?
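For what it's worth, the same thing with the newer fetch API is even shorter, still with no special libraries:

    fetch("https://my/api.json")
      .then(r => r.json())
      .then(response => console.log(response));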
1. Everyone does it, so why should I do anything different?
2. I've heard about corporate firewalls which block everything, and nobody has the power to overcome that, so I'll tunnel everything via HTTP(S) and forget about those horrors.
3. HTTPS provides absolutely hassle-free encryption with reverse proxies. I don't even need to think about it. Using a TLS library sounds too complex, and I'd have to make some real effort to implement it. Rolling my own encryption? Everyone will scream!
4. The HTTP/1.1 protocol is really simple. There are some hidden gotchas in the corners, but basically it's headers, an empty line, and then either bytes or a stream of chunks; that's about it (see the sketch after this list). So I can pass routing information (the URL path), I can pass some metadata in headers, and I can pass either a single message or streamed content. And with HTTP libraries in every language you get that functionality almost for free, both for server and client.
5. Theoretically you can leverage HTTP caching at various levels. It might be tricky, but it's certainly easier than implementing it from scratch. And you can't really build anything similar to things like Cloudflare yourself.
6. Browsers can only talk HTTP. So even if I don't need it now, if I ever need to build a web application, I'll have to provide an HTTP bridge. And web applications are really common nowadays. You can't really make an IMAP client in JavaScript running in a browser. But you probably can with JMAP.
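To illustrate point 4, here's roughly what such a request looks like on the wire (a made-up POST with placeholder values):

    POST /api/messages HTTP/1.1
    Host: mail.example.com
    Authorization: Bearer abc123
    Content-Type: application/json
    Content-Length: 25

    {"subject":"hello world"}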
Do you also have a list of why many people oppose HTTP (or better, what is sometimes called "shoehorning everything into an HTTP-shaped box")? I guess that the versioning and advanced features are a stark contrast with versionless TCP from the '80s, but I am interested in hearing more opinions on the matter, as I am ignorant of any dealbreaker regarding HTTP.
The hardest thing with HTTP is sending server-initiated messages to the client. There are multiple approaches (WebSockets, SSE, client polling), but none of them is simple.
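The client end of SSE at least looks simple enough (a minimal sketch; '/events' is a made-up endpoint) -- the awkward parts are keeping the long-lived connection alive through proxies and doing it on mobile:

    // Subscribe to server-sent events and react to each message.
    const source = new EventSource('/events');
    source.onmessage = event => {
      const data = JSON.parse(event.data);
      console.log('server says:', data);
    };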
Also HTTP carries some overhead. It's not significant, something like 50-100 bytes, but it might matter for a lot of tiny messages, I guess.
Another factor is that intermediate proxies might break your app. They might cache your responses even if you don't want that, so your application will act weirdly. Of course, any intermediate proxy might break any protocol...
> The hardest thing with HTTP is sending server-initiated messages to the client.
My (weak) understanding here is that in most mobile cases this is intrinsically problematic and is the reason for the plethora of push notification standards. In this case HTTP is more battery friendly as a noisy server can't as easily spam your phone with traffic.
HTTP can't be more mobile-friendly; it's exactly the same TCP connection on iOS, there's nothing special about it. You might design a mobile-friendly protocol which leverages proprietary push messages, but that's HTTP-agnostic as well.
I mean, if you need to spam your phone with traffic, you'll do that. If you're writing WhatsApp, you want to deliver messages instantly while the user has the app open, so you'll keep a TCP connection to the server.
I was referring to the low-power optimizations like Doze in Android, where the OS tries to schedule many actions together so as to minimize the time the radio is on. In this case the OS can optimize HTTP connections (if you use the OS's HTTP library) better than a bare TCP connection. (This does not apply if you handle HTTP internally.)
The honest reason boils down to "there's a lot more closing down of generic TCP than HTTP in firewalls and application privilege frameworks."
Email has the other unfortunate property that it has arbitrary binary data (message bodies) mixed in with a UTF-8 control stream, which makes SMTP/IMAP/POP/NNTP especially annoying to implement in languages that draw very strong distinctions between binary and text strings, since you can't just wrap the TCP stream in a UTF-8 decoder/encoder and be done with it.
I'll give a potentially flamebaity response... But it's because reality appears to say that OSI protocols were right and TCP was wrong.
Plain TCP essentially emulates a full-duplex serial port. Applications need more on top of that: methods for framing, attaching metadata, data flows, secure transport layers, etc.
So HTTP, which provides a modicum of those options, ends up being the default, because it's available.
For comparison, OSI defined various components to handle those elements, and a proper OSI stack should have them available to the application developer.
Another example would be Symbolics Lisp Machines (Genera) where the network layer provided a bunch of mid-layer protocols, like one which provided messages that were formed of basic typed data structures. So people would use those to quickly build custom protocols for their applications, thanks to not having to deal with low level details.
> Is it so complicated to work with a duplex-stream?
It's not that a duplex stream is complicated, it's just that request/response semantics are much better suited to most use cases. Where duplex streams make sense, we now have WebSockets. Even then, on top of that you'll end up implementing stateful request/response semantics, because you need some structure unless you're literally dumping binary data into the socket.
> Forgetting ubiquity of tooling, is there something technical about HTTP that makes it so attractive?
Slightly different than ubiquity of tooling is firewall policies: HTTP(S) is the protocol most likely to get where it needs to without someone configuring something to allow it, so tunnelling everything over HTTP(S) is the path of least resistance to widespread use.
By useless I meant that the use of HTTP doesn't contribute anything essential. Though maybe I could be wrong? But that's what I understood by "is not core to its operation". I understand it to just wrap the JSON in HTTP for the benefits that come from its ubiquity, and not so much because of the merits of the mechanisms described in the HTTP spec.
No, it's just the most widely supported protocol of that kind, so why not use it? Is there anything specific against it? And for any other protocol, how do you answer the question "why not HTTP, which is similar and has better support?"
(Caveat: I think you'd ideally want http/2, which isn't as widely supported)
EDIT: actually, one point: you can talk JMAP over HTTP(S) directly from a browser without a translating server.
Which can also be read as: HTTP lets you embed arbitrary semantics with close to no friction. Which sounds like a really good point, together with the tooling and ubiquity.
What are the optimizations in JMAP that make it faster than, say, Solid? Solid is built on a bunch of W3C Web, Security, and Linked Data standards: LDP (Linked Data Platform), JSON-LD (JSON Linked Data), WebID-TLS, REST, WebSockets, LDN (Linked Data Notifications). [1][2] Different worlds, I suppose.
There's no reason you couldn't represent RFC 5322 data with RDF as JSON-LD. There's now a way to do streaming JSON-LD.
LDP does paging and querying.
Solid supports pubsub with WebSockets and LDN. It may or may not (yet?) be as efficient for synchronization as JMAP, but it's definitely designed for all types of objects with linked data web standards; and client APIs can just parse JSON-LD.
Does JMAP support labels, such that I don't need to download a message and an attachment and mark it as read twice, like with labels over IMAP?
How does this integrate with WebAuthn; is that a different layer?
(edit) Other email things: openpgpjs; Web Key Directory /.well-known/openpgpkey/*; if there's no webserver on the MX domain, you can use the ACME DNS challenge to get free 3-month certs from Let's Encrypt.
I would say your comment, and the first reply, both demonstrate quite effectively why JMAP is probably a better choice for email.
> It may or may not (yet?) be as efficient for synchronization as JMAP, but it's definitely designed for all types of objects
If we hypothetically allow for equal adoption & mindshare of both, and assume both are non-terrible designs, I'd guess the one designed for "all types of objects" is less likely to ever be as efficient as the one designed with a single use-case in mind.
And narrow focus is not only good for optimising specific use-cases, it's also good for adoption, as people immediately understand what your protocol is for and how to use it when it has a single purpose and a single source of truth for the reference spec, rather than a series of disparate links and vague all-encompassing use-cases.
Solid has brilliant people behind it, but it's too broad, too ambitious, and very much lacks focus, and that will impair adoption because it isn't the "one solution" for anyone's "one problem".
--
To take another perspective on this, there are other commenters in this thread bemoaning the loss of non-HTTP-based protocols. Funnily enough, HTTP itself is a broadly used, broadly useful protocol that can be used for pretty much anything (and had TBL behind it also). The big difference was that Tim wasn't proposing that HTTP be the solution to all our internet problems and needs in 1989; it was just for hypertext documents. It's only now, post-adoption, that it is used for so much more than that.
> If we hypothetically allow for equal adoption & mindshare of both, and assume both are non-terrible designs, I'd guess the one designed for "all types of objects" is less likely to ever be as efficient as the one designed with a single use-case in mind.
This is a generalization that is not supported by any data.
Standards enable competing solutions. Competing solutions often result in performance gains and efficiency.
Hopefully, there will be performant implementations and we won't need to reinvent the wheel in order to synchronize and send notifications for email, contacts, and calendars.
To eliminate the need for domain-specific parser implementations on both server and client, make it easy to index and search this structured data, and to link things with URIs and URLs like other web applications that also make lots of copies.
Solid is a platform for decentralized linked data storage and retrieval with access controls, notifications, WebID + OAuth/OpenID. The Wikipedia link and spec documents have a more complete description that could be retrieved and stored locally.
I clicked on this with a lot of hope — alas, this only seems to address the "mailbox access" part of E-mail. I think that part actually works fairly well today (not optimally, but well enough in practice). The part that really needs fixing is SMTP.
IMAP is a really broken protocol; I'd characterize it as a database synchronization protocol that wasn't designed to be one. You need to rely on certain extensions to have a hope of being bug-free [1], and even those can be problematic (UIDVALIDITY changed, gotta clear your entire inbox!).
[1] The IMAP protocol used to rely heavily on message sequence numbers, where each message in the folder was numbered sequentially from 1-N, and deleting messages caused all subsequent messages to have their number decremented by one. But you can have multiple connections active on the same mailbox, and you can't always immediately notify the other connections of when things get deleted, so every connection can have a slightly different mapping. So everyone uses UIDs now, which are guaranteed to be unique, until the server feels like deleting them all and restarting from scratch (i.e., updating UIDVALIDITY), so every message has a different UID now.
UIDVALIDITY is actually a great design. IMAP servers will usually store the UID mapping in some kind of index that is used even across restarts, so in normal operation this is not a problem. When did you encounter problems with this in everyday use?
If the UIDs were not allowed to change, you would run into practical problems in edge cases. What if you switched your IMAP server implementation? What if the index files on the server side got corrupted and you had to rebuild them? What if you had a fatal disk failure and had to restore an older backup? Mismatching UIDs would be even worse than having a way to indicate to the client that the previously retrieved UIDs and associated messages are no longer valid. UIDVALIDITY is basically just a mechanism for cache invalidation.
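The client-side handling amounts to something like this (a sketch, assuming a local cache keyed by folder name):

    // If the server's UIDVALIDITY differs from the cached value, every
    // cached UID for that folder is meaningless: drop the cache and resync.
    function syncFolder(cache, folder, serverUidValidity) {
      const cached = cache[folder];
      if (!cached || cached.uidValidity !== serverUidValidity) {
        cache[folder] = { uidValidity: serverUidValidity, messages: {} };
      }
      // ...then fetch new/changed messages by UID as usual.
    }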
I don't specifically know the history of IMAP, particularly IMAP UIDs, but the semantics of the UID imply to me that it was originally specified to permit implementation as file offsets into the mbox files that backed the store, and I believe some IMAP servers implemented it that way.
Some IMAP server implementations have a custom attribute they use to indicate a globally-unique message ID.
> I think that part actually works fairly well today (not optimally, but well enough in practice).
I haven't done much with IMAP, but I did set up a watcher for my mailboxes that uses the IDLE extension. I consider it a flaw that I need to create a separate connection, each needing its own session, for every folder I want to watch simultaneously. Servers understandably limit the number of simultaneous connections you can have, so that limits the number of folders you can watch simultaneously.
I don't know if JMAP fixes that, and I don't know if I'd prefer JMAP over IMAP as a protocol, but there are things that could be better with IMAP.
SMTP is the protocol that's used to route email between servers. How is Fastmail supposed to authenticate with Gmail, or Yahoo with Hotmail? Better yet, how is a new mail server on the internet supposed to authenticate with all existing mail servers? Really, authentication for all uses of SMTP doesn't make sense.
The other part that is really lacking is security, in particular 2fa. The proposed standard says that obtaining credentials is out of scope, which means it will remain in the realm of vendor-specific implementations.
I'd guess that you'd get a token that gets added as an `Authorization: bearer <token>` header in practice for most implementations. The specifics of the auth and token itself may vary though.
Edit:
Though a dedicated /auth/login path that takes a JSON POST with {username, passphrase, 2facode, ...} could readily be defined, where any of the above parameters are optional. The response would be a token that's used in the previously mentioned Authorization header.
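Something like this, for instance (entirely hypothetical endpoint and field names, since the spec leaves obtaining credentials to the implementation):

    // Hypothetical login flow: POST credentials, receive a token, then
    // send it as a bearer token on subsequent JMAP requests.
    const creds = { username: 'alice@example.com', passphrase: 'hunter2', twofactor: '123456' };

    const login = await fetch('https://mail.example.com/auth/login', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(creds),
    });
    const { token } = await login.json();

    const apiResponse = await fetch('https://mail.example.com/jmap/api', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${token}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ /* JMAP method calls would go here */ }),
    });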
The core work is done, yes: draft-ietf-jmap-core-17 and draft-ietf-jmap-mail-16 [edit: fixed draft name] are in the RFC Editor's queue, and how long it takes to get through that queue can depend on a lot of variables. At the end it goes to "AUTH48", which is notionally 48 hours long but a few weeks is more likely in practice, and then it gets assigned a number and becomes an RFC.
The Working Group still has open items, including calendaring (because people never can build a mail system that doesn't involve calendaring, for some reason), handling "read receipt"-type features, and, because this is the Current Year, a mechanism to involve WebSockets. Either those will eventually also be polished into standards, or they'll be abandoned at some point, undelivered.
I think their goal is to have the transport layer be a pluggable subsystem, divorced from their request-response protocol.
They even essentially say that HTTP is not necessary to their protocol, so I think it's pretty apparent that HTTP is a utilitarian choice, but they're not trying to build email-over-http.
What drawbacks do you see in JSON here? For machine-to-machine communication it is a pretty good serialization format, I would say. (It is not great for many other things, but this sounds like its strong point.)
It seemed to me that the article was only using the fact of JSON use as more evidence of modernness. It implied that JSON didn't map that well onto their problem space, but that they overcame it.
https://jmap.io/#push-mechanism
JMAP is going to use RFC 8030 push for mobile and EventSource (a kind of HTTP long polling) for desktop clients, as I understood it. I don't like that idea; it's even worse than IMAP4 IDLE. Why not just WebSockets for both mobile and desktop clients?
Looking at the request, it contains properties like collapseThreads and sort; moving this sort of functionality server-side is a regression. This is an API for hosted services; it's disingenuous to compare it to IMAP.
Do you see anything that should (or could) be done via IMAP but not via JMAP? Adding support for higher level operations to the protocol can be worth the complexity if it removes network roundtrips. Not everyone is lucky enough to have a reliable, fast, low latency link to his IMAP server.
> Not everyone is lucky enough to have a reliable, fast, low latency link to his IMAP server.
This is a protocol from the '80s; I'm sure it doesn't require anything we'd consider a fast connection today. Latency may have changed, though: mail servers used to be geographically closer.
> Adding support for higher level operations to the protocol can be worth the complexity if it removes network roundtrips.
It also benefits SaaS providers by having the complexity on their end, and it potentially limits apps. In my example of "collapseThreads", why does this need to be an option at all, instead of something like "parentId" and leaving it up to the client?
More features also mean more ways to abuse them; imagine Google saying "yes, we support JMAP in Gmail" and then ignoring certain features.
> This is a protocol from the '80s; I'm sure it doesn't require anything we'd consider a fast connection today.
But it does require long-lasting, stable connections, which is what you want to avoid on a mobile device that tries to connect for only a few seconds at a time and then goes back to low-power mode.
Getting a FastMail subscription over a decade ago really has been one of the best tech decisions that I ever made. They are a great mail provider, and I am very happy that my money helps fund work like JMAP.
Hopefully the Microsoft Exchange protocol will become a thing of the past now. It is always the biggest problem for Linux desktop integration in a Windows-only corporate ecosystem.
Gmail on Android is already embarrassingly bad for IMAP... The minimum sync period is 15 minutes and it won't sync subfolders until you open them, defeating the entire point.
Don't look to Google to do anything in this space but continue to push their creepy ads and lock people in.
What I want to know is whether JMAP can help push end to end encrypted email for everyone. Probably not, because that's not part of FastMail's feature set.
Why wouldn't Google implement JMAP? Interestingly enough JMAP adds support for Gmail-specific behaviors like labels, which IMAP lacks. This would allow other apps and services to better integrate with Gmail's quirky behavior, improving the experience for Gmail users.
IMAP doesn't lack Gmail's "labels"; Gmail just never bothered implementing their labels as IMAP labels, leading to every IMAP client requiring horrible hacks to work around Gmail's inane implementation.
Google are unlikely to implement JMAP for the same reason they've:
- ignored pleas to implement IMAP properly going back to its inception
- kept IMAP disabled by default for new users, and for most corporate users (requiring a GSuite admin to explicitly enable IMAP if employees want to access their corporate email with a client).
Google benefit massively from people not using IMAP clients and being forced into their product UIs.
That said, I wouldn't call it dead in the water for this reason. Google's tactics here have reduced the overall usage of IMAP clients, but they also serve as strong motivation for that small niche of users to move away from Gmail.
There already is one, written by the people behind JMAP even, with a public demo instance at https://proxy.jmap.io/ (don't use the demo with your own credentials of course, host it yourself)
Nope, that needs to be solved in SMTP. Unfortunately, JMAP only replaces IMAP/POP.
I have personally been working on a new protocol that replaces both IMAP and SMTP. It's not ready yet, but the solution turns out to be quite elegant and I have great hope that it will work in the future.
I haven't read JMAP, but I don't think that's the problem it's tackling.
Also, I think spam is as solved as it can be with blacklists, heuristics-based filtering software like SpamAssassin, and personal filtering with Sieve scripts. What else would anyone suggest?
I handle spam by handing a unique email address to every entity that requests my address (domain wildcard). What would be better is if there was an initial "hello" where some entity requested permission to send me email, and I could approve, deny, or even revoke in the future.
Basically, the issue is that senders cannot be reliably identified. If this were fixed, then spam filtering would be trivial.
Spam could be half-fixed by forcing DKIM, SPF, DMARC and TLS. At the very least there would be no impersonation of other domain names (filtering the rest into a spam box would become much easier after that) and no insecure transport.
How would sender authentication solve the problem? A large amount of spam is sent from throwaway domains with SPF and DKIM in place, and another large part of it is sent through hijacked web sites and email accounts.
As I have said, it would at the very least avoid spam that looks like it comes from real domains, which is half of the issue. The throwaway part could be fixed by making spam illegal and actually cracking down on it.
Simpler would be a callback mechanism that required at least a domain cert with TLS...
Server A => Server B:
    hey, I have a message
    for X
    from Y
    MessageId: Z
    SingleUseKey: K
Server B disconnects.
Server B => (the server found via the DNS entry for Y's domain):
    hey, I want message ID: Z, Key: K
Then you need not only DKIM, SPF, etc., but also a server, configured and set up on the correct port, that responds to inbound requests to pick up the outbound mail.
GPG key exchange with trusted personal and commercial contacts should be as easy (UX-wise) to add to your keychain as adding someone to the contacts on your phone.