That aside, POP should really be considered legacy; it comes with many downsides that hinder people's e-mail usage. IMAP is definitely more functional, but it has a successor: JMAP. So in some sense, it wouldn't even be entirely wrong to migrate.
Lack of HTTPS on the author's site also adds a nice subtle flavour to the blogpost.
Read it again - they're unambiguously saying that non-Microsoft email programs that rely on SMTP, POP and/or IMAP will "stop functioning when Microsoft chooses to disable these protocols".
That the linked help article clarifies things a little doesn't mean this isn't a flatly misleading letter.
Well, Microsoft blocks the Thunderbird embedded browser, so you can't complete the OAuth2 login. I've been effectively locked out of my account since (I'm not going to use the webmail just for this account; I set up a forwarding email and told everyone to use my new address).
Specifically, what is missing is that email using basic auth won't be disconnected Oct 1st, because of the tons and tons of machines that do things like email faxes and scans and that don't support modern authentication.
It looks like whoever "enterprise services" is wrote a brief message that wasn't very accurate.
"We won't support clients that don't support Microsoft-supported practices" is a very short stone's throw away from "we won't support non-Microsoft clients", as I've come to discover at my university, where I had to ask for permission to use an email client just for OAuth to work.
Which is yet to be granted after 2 months (and if granted, will be granted site-wide ... i.e. unlikely unless there is a large demand for non-microsoft clients)
If you're not doing anything requiring security, you don't need HTTPS, IMHO.
> POP should really be considered legacy
I know people who knowingly use POP to keep their remote boxes empty, and keep everything local, so I don't think we should decide for people that swiftly.
Similarly, I'll let the wisdom of "Teh Internetz" decide whether JMAP is worthy of the effort to replace IMAP. Having an IETF RFC is a good start, but let's see...
I disagree, for a lot of reasons. For one thing, I don't want some random Wi-Fi network to know every page I visit, even insecure pages.
I also don't want to leak any information about my browsing habits. Using https everywhere limits the information you leak about how much of your traffic is sensitive.
Edit also: even with DNS over HTTPS, a lot of the Internet still uses plaintext SNI for TLS, so anyone snooping can see what hostname you're visiting anyway (not to mention that IPs are not significantly more private than hostnames, and those will always be visible unless you're using Tor or something similar).
But no one cares if they can look into all your mails and documents (Microsoft, Google), track you all over the web, see where you are going, which flight you are taking,...
The important thing is that ISPs are evil and untrusted. facepalm
I don't think I go a single day without reading posts complaining about exactly that, so obviously people do care.
They are easily eliminated from the picture, and one less party with access to your data is always better. There is really _no_ good argument for not having HTTPS on any site, unless it's some very rare case like an intranet site.
We have to do a lot of things because bad guys do bad things. What is the alternative? Pretend they don't exist?
It really isn't hard to set up an unbound resolver.
Now you might argue that's a bit silly, but it is a use-case.
You're leaking data all over the place, placing the burden on the website owner is silly.
There have been documented attacks of people getting hacked because they were browsing the web with plain-HTTP and someone injected malicious pages mid-stream; see "Quantum Insert":
Now just imagine what this corrupt third world government here can do.
Of course it's "not a problem" if you aren't a vulnerable person who dares go against the grain, but on the whole it is.
If your site is not using tls then it's automatically blocked sorry, maybe I'm not your target audience regarding security blogs though :)
https prevents them injecting shit
Even with https, there's no scenario where I'm not encapsulating all traffic in a tunnel there.
No, no, no, no, no, no, no.
ALL Web traffic should be https (or http/2 or /3). If you connect to a site that uses insecure http, ANY link between you and that site can easily snoop on traffic and even inject different content. You CANNOT be sure that the content you see is what was originally served by the server, and you can certainly expect that state actors will be noting that you viewed this content, and building a profile on you based on that information.
Anyone can use any protocol they wish to, that's the beauty of the internet.
That could be done using IMAP as well; the overhead for that use case seems quite small, and it's just one protocol to support. I have run into a bunch of non-techy users with problems like "if I read my mail on my desktop I can't access it on my phone anymore, so I always use the phone", which tells me their desktop mail client likely uses POP3. By not offering POP3 you remove that class of support problems with very little downside.
I missed that, because I don't jump down every linked rabbithole. Thanks.
Basic auth shouldn't be used for anything; I'm surprised it isn't already deprecated to hell and back. The letter says that Microsoft intends to deprecate IMAP and POP because some people use them with Basic auth. That doesn't make sense.
Having to use anything more complex would be a downgrade: more complex to set up on both the server and client side, with more options and more problems to take care of.
I like to use the simplest tool for the job.
Exactly that, it's terribly simple. People have multiple devices, that alone makes POP3 an annoying protocol to use.
A big assumption.
POP3 works perfectly fine and is preferable in my situation.
The workload required to poll is heavy: you need to reconnect TCP and TLS and log in again. Once you find you have no messages, you have to start over to check again.
Clients and servers need to keep a short-term "message id" to long-term "unique-id" map.
Deletes are batched and only committed at the end of a connection. Unless you're willing to close and reopen a connection right away, any messages you delete are in a not-quite-deleted limbo until you close the connection.
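To make those costs concrete, here's a minimal sketch of one poll using Python's stdlib poplib (the hostname and credentials are hypothetical, and the `connect` parameter is only there so the logic can be exercised without a real server): every poll pays the full TCP + TLS + login price again, and DELEs only take effect at QUIT.

```python
import poplib

def poll_once(host, user, password, connect=poplib.POP3_SSL):
    # One POP3 poll. There is no IDLE-style push in POP3, so checking
    # for new mail means reconnecting (TCP + TLS handshake + login)
    # every single time.
    conn = connect(host)
    try:
        conn.user(user)
        conn.pass_(password)
        count, _size = conn.stat()        # how many messages are waiting
        messages = []
        for i in range(1, count + 1):
            _resp, lines, _octets = conn.retr(i)
            messages.append(b"\n".join(lines))
            conn.dele(i)                  # only *marked* for deletion here...
        return messages
    finally:
        conn.quit()                       # ...deletes are committed at QUIT
```

Note how a message DELEd mid-session stays in that not-quite-deleted limbo until `quit()` ends the session.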
I use POP3 only to move messages from my mail server's mailbox on a VPS to my LAN using a pull method. Polled mailbox is thus mostly empty or has very few messages at each poll. So it's a nice fit.
This blog can be read over TLS using gemini://.
printf 'gemini://gemini.conman.org/boston/2022/09/22.1\r\n' \
|openssl s_client -connect 220.127.116.11:1965 -ign_eof
Neither Gemini nor the Internet Archive have access to the content through a channel other than the insecure HTTP endpoint, right?
All Gemini is signed.  It's mandatory. That's why the parent piped through openssl to connect.
For example, the page that we're looking at is signed.  (Using a bridge to show the certificate, but you can verify it yourself, as well.)
> 4.2 Server certificate validation
> Clients can validate TLS connections however they like (including not at all) but the strongly RECOMMENDED approach is to implement a lightweight "TOFU" certificate-pinning system which treats self-signed certificates as first-class citizens. This greatly reduces TLS overhead on the network (only one cert needs to be sent, not a whole chain) and lowers the barrier to entry for setting up a Gemini site (no need to pay a CA or setup a Let's Encrypt cron job, just make a cert and go).
> TOFU stands for "Trust On First Use" and is a public-key security model similar to that used by OpenSSH. The first time a Gemini client connects to a server, it accepts whatever certificate it is presented. That certificate's fingerprint and expiry date are saved in a persistent database (like the .known_hosts file for SSH), associated with the server's hostname. On all subsequent connections to that hostname, the received certificate's fingerprint is computed and compared to the one in the database. If the certificate is not the one previously received, but the previous certificate's expiry date has not passed, the user is shown a warning, analogous to the one web browser users are shown when receiving a certificate without a signature chain leading to a trusted CA.
> This model is by no means perfect, but it is not awful and is vastly superior to just accepting self-signed certificates unconditionally.
OpenSSH's TOFU works in its specific context primarily because SSH targets have usually been connected to beforehand, before an evil proxy could be placed in between. And to prevent issues with TOFU anyway, we've got SSHFP DNS records (which means trusting DNSSEC).
Now with the web, it's rather ridiculous to suggest that "don't visit any new websites on untrusted connections" is anything but a terrible security model. Plus there's the seeming lack of alternative methods (like SSHFP/TLSA + DNSSEC) to establish trust over untrustworthy networks (which, well, most networks should be considered to be).
If you're worried about key continuity issues, do what large SSH fleet deployers do, and use certificates. Key continuity was the motivating use case for SSH certificates, before they were used to do MFA/SSO logins for users.
I don't disagree with the analysis of Gemini's trust model; I think you're right that key continuity doesn't work on the scale of the public web. Of course, I don't think DNSSEC has any answers here, either; the Web PKI is probably the best thing we've got right now.
Though in theory, with just SSHFP+DNSSEC, those operators being malicious shouldn't degrade SSH's security; it should only hinder the TOFU step itself. In that case I suspect people would fall back to the Web PKI to look up the reason for the mismatch.
Regarding the issue of "integrity", i.e., not being modified in transit, ideally all www pages could have a hash (signature), or "digest", of the page content. As long as the www user can obtain the signature over a secure channel then, in theory, integrity can be verified. The content could be sent over an insecure channel and integrity could still be verified. This sort of verification has long been common (well before HTTPS became widespread) when downloading files, namely, software, e.g., as tarballs.
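A minimal sketch of that model (the function name is mine): the content itself can travel over any insecure channel, and integrity holds as long as the digest was obtained over a secure one.

```python
import hashlib

def verify_page(content, trusted_sha256_hex):
    # `content` may have arrived over plain HTTP; it is intact iff its
    # digest matches one published over a secure channel -- the same
    # model as checksums for software tarballs.
    return hashlib.sha256(content).hexdigest() == trusted_sha256_hex
```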
The problem with using an intermediary that has crawled the www, whether it is IA or Google or whomever, is that the www user has no way to know how the content was retrieved. How does every www user know that the pages in Google's index that they are purportedly searching were accessed via HTTP or HTTPS. The only people who would have firsthand knowledge of the retrieval are those who observed it. As such, there is some (misplaced) trust involved when people use IA, Google or other intermediaries.
Personally, if I am forced to use an intermediary, e.g., a web archive, a search engine, a third party DNS provider, etc., then I try to use multiple intermediaries to obtain the same data, then compare. This does not solve the problem, but it is arguably better than trusting any particular intermediary.
Ideally, web page authors and intermediaries could provide signatures for the web pages they serve. (The same way that software authors provide signatures for the archives of the software they publish.) For example, if one uses Common Crawl then one gets a "WARC-Payload-Digest" header. For example, the payload digest for example.com's index.html is currently
If its signature was published by the original source, then there could be multiple sources for example.com's index.html across the web. Each copy would match the same signature published by the author of example.com's index.html. This practice has long been used in software distribution where multiple "mirrors" contain copies of the same files.
I still find there are numerous "insecure HTTP endpoints" in operation on the www. It seems to me that as long as these are concealed, no one complains about them. I recently commented about this with respect to podcasts. A surprising number of podcasts are being served over HTTP. But as long as the podcast listener is unaware of how their software is accessing them, no one complains about potential modification in transit or other issues with using an insecure channel.
That's the unfortunate reality of trust as a currency... On average, people are way more likely to trust Google, a household name, than Gemini, a three-year-old brand-new protocol with its own mystery set of new and exciting not-yet-discovered security issues.
Google does mis-crawl all the time (in fact, they have a whole division dedicated to confirming whether sites aren't detecting their crawler and actively lying to them; sites that do get penalized in search results). But it's Google, so people believe they have a vested interest in getting it right. There's no such guarantees in people's minds for data coming over Gemini protocol; it hasn't been earned yet.
IMHO, at best Gemini is a Gopher redux. No one is planning to run commerce over Gopher. It is no different for Gemini. It's a "people's protocol", not a corporate one. It does not even have an RFC. It's relatively easy to write clients and servers. No corporate vendor or advertising sponsorship is needed. No one needs to have a "business model" to publish data/information via Gemini.
The Gemini FAQ explicitly addresses critics who see the internet through web-tinted glasses:
1.6 Do you really think you can replace the web?
Not for a minute! Nor does anybody involved with Gemini want to destroy Gopherspace. Gemini is not intended to replace either Gopher or the web, but to co-exist peacefully alongside them as one more option which people can freely choose to use if it suits them. In the same way that some people currently serve the same content via gopher and the web, people will be able to "bihost" or "trihost" content on whichever combination of protocols they think offer the best match to their technical, philosophical and aesthetic requirements and those of their intended audience.
4.2 Server certificate validation
Clients can validate TLS connections however they like (including not at all) but the strongly RECOMMENDED approach is to implement a lightweight "TOFU" certificate-pinning system which treats self-signed certificates as first-class citizens. This greatly reduces TLS overhead on the network (only one cert needs to be sent, not a whole chain) and lowers the barrier to entry for setting up a Gemini site (no need to pay a CA or setup a Let's Encrypt cron job, just make a cert and go).
TOFU stands for "Trust On First Use" and is a public-key security model similar to that used by OpenSSH. The first time a Gemini client connects to a server, it accepts whatever certificate it is presented. That certificate's fingerprint and expiry date are saved in a persistent database (like the .known_hosts file for SSH), associated with the server's hostname. On all subsequent connections to that hostname, the received certificate's fingerprint is computed and compared to the one in the database. If the certificate is not the one previously received, but the previous certificate's expiry date has not passed, the user is shown a warning, analogous to the one web browser users are shown when receiving a certificate without a signature chain leading to a trusted CA.
This model is by no means perfect, but it is not awful and is vastly superior to just accepting self-signed certificates unconditionally.
If most ISPs can't be trusted, TOFU becomes basically useless. Ignoring that major issue with the model, it's also unusable on especially untrustworthy networks like airport Wi-Fi. I'm also going to preemptively say that suggesting not to visit new websites on those networks (because you can't TOFU) is just ridiculous.
"It is not awful" in this context is a very strong claim with little backing it up.
Why not just use HTTPS on the "untrustworthy" ISPs.
The use of the word "most" implies that there are some that can be trusted.
Assuming you are a trustworthy source for such information (and how do I know it's really you and not an "imposter"), then what are they. Please list the ISPs everyone can "trust".
The point that is being missed in this comment thread, and most others about TLS, HTTPS and CAs, is that there is a question of who decides whether something is "trustworthy" or not.
Personally I like to make these decisions for myself. Unlike an incredible number of internet commentators, I do not purport to tell anyone else who they should or should not trust. That decision is ultimately for each person to make on their own. We can provide information that may help a person with their decision, but it's still their decision, not mine.
But that's not how "chain of trust" works.
The concept of "chain of trust" itself does not even exist in the real world. It only exists in the imagination of socially inept persons hiding behind keyboards. In practice, for HTTPS, the cast of characters is a laundry list of third party intermediaries, all trying to "cash in" on the use of the internet, a public resource we already pay ISPs to access. The idea that any of them would be sources of "trust" is comical.
Why trust "domain name registrars" as a source of useful information about people who run websites.
Why trust CAs issuing non-EV certificates. They only verify that someone rents a domain name from an "ICANN-approved" registry.
Why trust CAs issuing EV certificates. The people approving these CAs all have a vested interest in the web (browser) as a means of online advertising.
Why trust the people who "approve" CAs for inclusion in popular web browsers.
There are something like 75 CAs hardcoded into popular web browsers. If I want to remove one, what do I do, edit the source code and recompile. Inconvenient to say the least.
In all of this third party nonsense, there is no opportunity for an ordinary person, not invested in or benefitting from the "tech" company racket, to have any input on whether or not she wants to "trust" a website is being operated by a particular person. She is effectively locked out of the process. These third parties are often comprised of people I would never trust IRL. But they hide behind keyboards so we never get to see them for what they are.
At least with Gemini, clients and servers are smaller and simpler, and easy to edit and recompile. Gemini clients, written by anyone, not necessarily "tech" companies, are not designed with online advertising in mind. The protocol itself is not "advertising-friendly". It is little more than plain text.
The "threat model" for me in the majority of web use is the "business model" of so-called "tech" companies, i.e., surveillance, data collection and advertising, not "imposters". Nevermind that "tech" companies have pushed for a web that is 100% commercial/political, where even recreational use is monitored for insights useful to advertising. That only creates a greater incentive for "imposters". When I started using the internet it was still predominantly used for academic and military purposes.
If a "tech" company employee wants to choose to use HTTPS, DNSSEC, TOFU, etc., then that is their decision. But if they want to remove the ability of anyone else to make that decision for themselves, then I see a problem with that.
I don't know, ask the Gemini people.
> The use of the word "most" implies that there are some that can be trusted.
Only a Sith deals in absolutes. I'm sure someone out there is their own ISP and can trust themselves.
> Personally I like to make these decisions for myself.
Sure, you do you. That doesn't make it a widely viable nor particularly secure approach.
The problem is, it's being replaced by a solution that decreases security.
I for example just wouldn't like anyone to be able to see what data I exchange with any server, be it small profile blog or a login page.
Though if someone can set it up in less than 15 minutes, and doesn't, I reserve the right to snark. It's not a bad look in cases like that.
Thank you for your service
The funny thing is that IPv6 short-circuits my brain. Why do I have four addresses, and where did they come from? Why isn't there a link-local address for loopback and wireguard?
For self-hosted, it's table-stakes knowledge. Failure to do it implies the site admin knows so little about modern security that their access logs are probably only thinly secured. It's an "admin smell," if you will.
Though I suppose https is a prerequisite for that pipe to maybe be safe. Piping curl to bash varies from stupid to just fine depending on context.
All of this was present in the 90s and early 2000s, so these are not really theoretical attacks.
If you believe that your ISP or a middleman can't inject ads without breaking the S in HTTPS, I have a bridge to sell you.
They can just push the content into a frame and inject the content outside that frame. I encountered this more than once.
They're not doing this anymore, because I guess they now know how to use their DPI infra in ways more useful to them.
My mobile carrier still injects stuff to HTTP pages, but doesn't mess with HTTPS ones, at least yet.
My ISP used to do that when they started deploying DPI hardware, as a technology demo. They'd hijack your traffic and inject full ads without any redirection, or sporadically add (billing) warning banners or ads to retrieved pages.
My mobile carrier sometimes injects SMS and notification popups arriving at my modem into pages, if they find the chance, without disturbing the connection too much.
So, having an HTTPS connection doesn't make traffic tamper-resistant, but tamper-evident, at most.
That is not possible. You would get a cert mismatch error.
Consider what your browser does when you navigate to the page: it directly opens a TLS connection to port 443. There's nothing your ISP can do to force the browser to request the page using a non-TLS connection.
What might have happened, is that you might have carelessly typed the address in your URL bar without the "https://" prefix, as in "www.example.com"; for legacy historical reasons, most browsers (except IIRC some very old browsers from the dialup era, which always required an explicit URL scheme) treated that as if you had prefixed it with http:// (so it actually was the non-HTTPS "http://www.example.com" that you were using). Many sites would then redirect you to the HTTPS site, but your ISP could hijack the page before that redirect (since the redirect was not protected by HTTPS). Had you been careful to always prefix any address you type with "https://", there would be no initial non-HTTPS connection to hijack.
Because I'm pretty sure that happened.
1. Navigate to https://www.example.com
2. Arrive at https://www.completelyunrelated-adsite.com while your address bar reads https://www.example.com
They used to do this regardless of your DNS. They directly hijacked that stream/connection.
2. Get https://www.example.com with an ad-banner on top.
They happened rarely, and never survived a reload or further navigation. They completely stopped after a while.
I have a 4G modem from my mobile carrier. They inject an info popup when I receive an SMS or any other notification, if they can manage it. It's very rare now, too.
The only way it would be possible is if you installed a root cert from your ISP onto your computer so that it would trust a cert issued by them. Otherwise, they would not have a valid cert for example.com and you would be presented with a cert error.
This is literally the exact thing https was designed to prevent. It is and always has been impossible (again, unless the client machine is administered by the ISP or whoever the middleman is, and they can install a cert on the machine)
And one big advantage - it actually allows you to retrieve and store e-mail locally, irrespective of any server allocation.
And to be fair, configuring most clients to retrieve and then delete, or keep a local copy in addition to the server one, is not difficult at all - these options are not hidden or anything.
In case the password leaks, attacker can only fetch new messages over POP3, and not plant them or fetch the entire archive from the server. (yeah, I can fetch and delete over IMAP too, but then what's the point of all the extra unused complexity of IMAP, it's just an extra risk)
Considering you have to download the entirety of the mail contents to read it anyways, I have no idea what makes you think this is an impossibility.
With IMAP, when an email gets deleted by some client, other clients will also delete their local copies of that email. That won't happen with POP3.
But I haven't read either of the two protocols, so I'm not sure whether that's something required by the protocol or just a common behavior of clients.
This is not an IMAP feature, this is a client feature. There are plenty of clients that don’t sync deletes, if you don’t want to.
Or, just don’t delete your emails? If you delete a POP email from your client, it’s also gone for good.
This isn't an IMAP limitation; it's how clients implemented it. Nothing stops one from downloading every message over IMAP and never deleting anything.
Email OAuth 2.0 Proxy <https://github.com/simonrob/email-oauth2-proxy>; mailctl <https://github.com/pdobsan/mailctl>; mutt_oauth2.py <https://gitlab.com/muttmua/mutt/-/blob/master/contrib/mutt_o...> (some suggestion that it might not always work these days?); pizauth <https://github.com/ltratt/pizauth>; oauth-helper-office-365 <https://github.com/ahrex/oauth-helper-office-365>. Disclaimer: I wrote pizauth and it's just about to move into the alpha stage.
Not sure about Google, but Microsoft supports client credentials for IMAP/POP3, but not for SMTP yet. IIRC it was supposed to be rolled out this January but is still missing. Hopefully they can get that deployed ASAP.
Or maybe you can enlighten me how you can get the token for XOAUTH2 from just your gmail email address and password without involving any opaque google service.
Authentication is happening completely outside of OAuth inside some google black box. 2FA has nothing to do with OAuth at all. It's just another feature of the google's black box which decides whether to give you the access/refresh tokens or not.
I fully agree with the move away from plain passwords in this case, given that it's no longer "just" the password to a mail account, but to much, much more.
Now while I think OAuth adds some features that can be useful in certain settings, I'll be inclined to agree that requiring OAuth isn't the best move.
However the alternatives would probably require a lot of extra work on Microsoft's behalf, like being able to set up device-specific passwords or similar.
So, given the need to move away from plain account passwords, I can understand why they wouldn't want to do that and just use what they already had.
Already the case on mobile:
> If you use the Play store or GitHub version of FairEmail, you can use the quick setup wizard to easily setup a Gmail account and identity. The Gmail quick setup wizard is not available for third party builds, like the F-Droid build because Google approved the use of OAuth for official builds only. OAuth is also not available on devices without Google services, such as recent Huawei devices, in which case selecting an account will fail.
I'm not sure M66B needs to get approval for the other builds, though, because the access is gated at the cloud API, not though client libraries. You can use Play Services to grant OAuth tokens, or you can use the boring old Google API client libraries, or roll your own; you just need to add the other signing key fingerprints and application IDs to the credential in the project's cloud console.
I could easily be mistaken, but there are numerous open-source projects acting as mail clients through the GMail API, Google has granted them access, and they don't have to use a closed-source client to do it. Most of them don't even target Android.
I'm using it right now in order to use Gmail in an email client that doesn't support OAuth.
The only blocker is that app passwords are only available if you turn on 2fa (even though the functions aren't related.)
From what I understand, Microsoft will disable basic authentication starting January 2023, and the next few months are sort of a "grace period" to migrate to Microsoft's new authentication protocol:
> On September 1, 2022, we announced there will be one final opportunity to postpone this change. Tenants will be allowed to re-enable a protocol once between October 1, 2022 and December 31, 2022. Any protocol exceptions or re-enabled protocols will be turned off early in January 2023, with no possibility of further use. See the full announcement at Basic Authentication Deprecation in Exchange Online – September 2022 Update.
> Microsoft are disabling these and Basic Authentication as most users don’t use them and it’s the primary vector for sending emails from compromised accounts
Even if most users don't use basic auth, I don't see why Microsoft has to disable it altogether. For people who want to keep using legacy clients, it's not too hard to force the usage of application-specific passwords.
I don't use Microsoft mail services, except as SMTP destinations. Does Outlook not support Digest Auth? Digest Auth certainly isn't perfect (I seem to remember that it requires an extra roundtrip), but it's not a security disaster like Basic Auth.
My main problem with OAuth is that it's hard for users to understand. If we expect users to use the internet securely, then they need to be able to know when that's not what they're doing, and I don't know any ordinary Joe that I could explain OAuth to. Hell, I implemented OAuth once, and now I can't remember how it works. It doesn't help that OAuth is a moving target.
All of them with complicated authentication requirements, idiosyncratic URL construction, and other difficulties. You would throw away the baby and keep the bathwater.
Heck you'd probably have a hard time just getting people to agree to use REST. "Why not GraphQL?"
So sure, one can look at this from an authentication perspective, or simply look at this as one in a line of steps in a specific direction.
Double checked and you are right, it was this issue that I recalled: https://support.microsoft.com/en-us/office/add-your-other-em...
I guess that’s US only? With 5 employees we are a pretty small company and this is not the case for us.
"At the moment, we only support connecting domains managed by GoDaddy with Outlook.com"
In my market those accounts were also marketed towards small companies, with only Microsoft 365 Business tiers and above having the feature of allowing other providers than godaddy as domain registrars.
You can also read about it here in the press release: https://news.microsoft.com/2014/01/13/microsoft-and-godaddy-...
"Announced on Monday a long-term strategic partnership to offer Office 365 as GoDaddy’s exclusive core business-class email and productivity service to its small-business customers".
Microsoft do however change their tiers and plans regularly, and whom they target them at. In my job, seeing customers unable to switch to other registrars has been a fairly common occurrence. Microsoft 365 Business plans should be fine to my knowledge, though I recall Microsoft 365 email essentials for small business wasn't; that has been rebranded to Microsoft 365 Business basic, but I don't know if that means it is a Microsoft 365 Business plan now or still the more limited "personal" plan. A customer who bought essentials in the past and is now on basic might be able to leave GoDaddy, but I don't know, and it might depend on software versions, updates and who knows what.
They are basically imposing the worst registrar ever on customers, for this particular scenario.
on one level, it's sad to see the open protocols go... on the other, google passwords are a big deal.
I suppose this can be a pain if you are not aware of these 2 settings; newcomers would likely need a tutorial. However, this setup really is a one-time process.
JMAP is the current hotness.
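For context, JMAP (RFC 8620/8621) is just JSON over HTTPS, which is part of its appeal. A minimal sketch of what a request body looks like; the account id "u123" is a placeholder you'd normally pull from the server's session object:

```python
import json

# A minimal JMAP request (RFC 8620/8621): ask for the ids of the
# 10 most recent emails. "u123" is a placeholder account id.
request = {
    "using": [
        "urn:ietf:params:jmap:core",
        "urn:ietf:params:jmap:mail",
    ],
    "methodCalls": [
        ["Email/query", {
            "accountId": "u123",
            "sort": [{"property": "receivedAt", "isAscending": False}],
            "limit": 10,
        }, "call-0"],
    ],
}

# This body gets POSTed to the server's apiUrl with a Bearer token.
body = json.dumps(request)
```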
a) Passwords don't support a second factor.
b) Most configurations keep the password on disk somewhere, often in plaintext.
c) User configurations break on password rotation.
Your tracking theory doesn't really hold up: a) they know exactly who you are in your email client anyway, since you log in, and b) most users are logged in to their Google/Microsoft account anyway because of O365/Workspace/YouTube.
I support adding 2FA to email in some way, but I heavily dislike using browsers to do so. What's wrong with adding a simple challenge-response protocol for FIDO2/U2F USB drives? Or a TOTP popup if you don't have a physical security key?
This can all be standardised without a browser ever touching the email client. We already have IMAP authentication methods that use signatures (like most 2FA hardware keys use) or challenge/response methods. You can even do client certificate authentication through STARTTLS when lacking a TPM.
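As an illustration of the challenge/response shape already available in IMAP SASL: CRAM-MD5 (RFC 2195) never sends the password itself, only an HMAC keyed with it over the server's challenge. (CRAM-MD5 is dated cryptographically; it's shown here only because the RFC includes a worked example, and the same shape generalizes to hardware-key signatures.)

```python
import base64
import hashlib
import hmac

def cram_md5_response(user: str, password: str, challenge_b64: str) -> str:
    """Compute a CRAM-MD5 client response (RFC 2195): the password is
    never transmitted, only an HMAC-MD5 keyed with it over the
    server's base64-encoded challenge."""
    challenge = base64.b64decode(challenge_b64)
    digest = hmac.new(password.encode(), challenge, hashlib.md5).hexdigest()
    return base64.b64encode(f"{user} {digest}".encode()).decode()

# The worked example from RFC 2195:
challenge = base64.b64encode(
    b"<1896.697170952@postoffice.reston.mci.net>").decode()
resp = cram_md5_response("tim", "tanstaaftanstaaf", challenge)
```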
Infrastructure to handle authentication on the web already exists. This is a massive benefit for providers and client developers. Whatever you propose does not. Good luck convincing big email providers to agree on a new standard like that.
GitHub alone has like 5 different ways to handle 2FA. Google has, I think, 3? Using a browser to handle this simplifies things a lot.
b) While it's not fixed by OAuth, it greatly limits what can happen:
— First, you only need to store a refresh token, which can expire, and that expiration can be controlled by an administrator.
— Second, the token has limited scope, whereas a password provides access to the entire account.
— Third, it's clear where it came from: if a token gets compromised, you will know where it happened. With a password, it's unclear.
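And once the client holds a token, presenting it to IMAP is trivial: both Gmail and Outlook.com accept the SASL XOAUTH2 mechanism, whose initial response is just a short string (imaplib base64-encodes it for you). A minimal sketch:

```python
def xoauth2_string(user: str, access_token: str) -> str:
    """Build the SASL XOAUTH2 initial client response used by Gmail
    and Outlook.com IMAP: user and bearer token joined with \\x01
    control characters, terminated by two more."""
    return f"user={user}\x01auth=Bearer {access_token}\x01\x01"

# Usage (token obtained out of band via the OAuth flow):
#   import imaplib
#   imap = imaplib.IMAP4_SSL("outlook.office365.com")
#   imap.authenticate(
#       "XOAUTH2",
#       lambda _: xoauth2_string("me@example.com", token).encode())
```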
Email does not run over the web, it runs over the internet. It uses a completely different set of protocols from the web, which were all invented at least 5 years before the first web protocol. Why should email clients be required to add HTTP support in order to make email work?
Maybe we should take heed of Zawinski's Law, and make all web browsers implement native email clients instead. Yeah, that's probably it. The Netscape Communicator/Mozilla Suite model should never have been dropped, and it was a mistake to separate Firefox and Thunderbird as separate projects!
- Open a link in a browser (don't you dare open it in an embedded browser, I will find you and force you to type 100 characters as I dictate them to you)
- Handle the callback.
That's all. That's the entire authentication. There is probably not a single platform (language, and maybe a framework) used to build email clients that has no library to handle this in a few lines of code.
I think that's a much better and easier solution than making email client developers handle every possible authentication method email providers could come up with (see the reply to my comment about 2FA with Google).
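The two steps above can be sketched in a few lines; the endpoint, client id, and redirect URI here are placeholders (real values come from the provider's OAuth registration):

```python
from urllib.parse import parse_qs, urlencode, urlparse

# Placeholder authorization endpoint; a real client reads this from
# the provider's documentation or discovery metadata.
AUTH_ENDPOINT = "https://login.example.com/oauth2/authorize"

def build_auth_url(client_id: str, redirect_uri: str,
                   scope: str, state: str) -> str:
    """Step 1: the URL to open in the system browser."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,  # anti-CSRF nonce, checked on the way back
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

def handle_callback(callback_url: str, expected_state: str) -> str:
    """Step 2: extract the authorization code from the redirect the
    provider sends back, after checking the state matches."""
    qs = parse_qs(urlparse(callback_url).query)
    if qs.get("state", [None])[0] != expected_state:
        raise ValueError("state mismatch")
    return qs["code"][0]
```

The code is then exchanged for tokens with one POST to the token endpoint, which is the part every OAuth client library already does for you.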
Also, old opera with torrent client, calendar, compressing proxy, email client was the best.
There's at least:
3. Google Prompt (on Android / iOS)
4. Offline security codes (distinct thing from TOTP, generated from Android settings)
5. Backup codes
6. Security Key
Indeed, OAuth makes it easy to swap out the actual authentication step. Which is nice, because the service shouldn't really care about that, only that the user is authenticated and authorized.
Our application sends mail on behalf of our customers. This is done by an on-prem background service running on one of their servers, wherever that might be.
So, anything interactive is a no-go. And installing a physical USB key is probably a no-go for most customers, especially those who have their servers hosted by a provider.
FWIW, there’s the hacky way reddit clients authenticate: "password:OTP" instead of just your normal password. Not that MS could do that, but I wanted to mention the option ;)
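Server-side, that trick is just splitting the field and checking both halves. A sketch with a from-scratch TOTP check (RFC 6238, HMAC-SHA1), using `rpartition` so passwords containing colons still work:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP over the current time step."""
    return hotp(secret, at // step, digits)

def check_password_otp(field: str, real_password: str, secret: bytes) -> bool:
    """Verify a reddit-style "password:OTP" credential field."""
    password, _, otp = field.rpartition(":")
    return password == real_password and otp == totp(secret, int(time.time()))
```

A real implementation would also accept the adjacent time steps to tolerate clock drift, and compare with `hmac.compare_digest`.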