I get the impression that the article's author didn't really read the linked help page. It's basic auth that's getting deprecated, due to being considered a legacy authentication protocol. For good reasons, as described.
That aside, POP should really be considered legacy, it comes with many downsides that hinder people's e-mail usage. IMAP is definitely more functional, but has a successor - JMAP. So in some sense, it'd not even be entirely wrong to migrate.
Lack of HTTPS on the author's site also adds a nice subtle flavour to the blogpost.
The linked help page doesn't make up for what the letter says. The letter is just plainly misleading.
Read it again - they're unambiguously saying that non-Microsoft email programs that rely on SMTP, POP and/or IMAP will "stop functioning when Microsoft chooses to disable these protocols".
That the linked help page clarifies a little doesn't mean this isn't a flatly misleading letter.
This has already happened! My school promised email for life, and uses Microsoft's Office365. A couple of months ago they sent out emails saying that if I didn't update my client to use "modern auth" (which was a new term for me), I'd lose the ability to check my email.
Well, Microsoft blocks the Thunderbird embedded browser, so you can't complete the OAuth2 login. I've been effectively locked out of my account since (I'm not going to use the webmail just for this account - I set up a forwarding email and told everyone to use my new address).
That letter isn't from Microsoft though. I manage several tenants, and none of our end users or admins got that letter.
Specifically, what is missing is that email using basic auth won't be disconnected on Oct 1st, because of the tons and tons of machines that do things like email faxes and scans and do not support modern authentication.
It looks like whoever "enterprise services" is wrote a brief message that wasn't very accurate.
"We won't support clients that dont support microsoft supported practices" is a very short stone's throw away from "we won't support non-microsoft clients", as I've come to discover at my university, where I had to ask for permission to use an email client just for oauth to work.
Which is yet to be granted after 2 months (and if granted, it will be granted site-wide ... i.e. unlikely unless there is a large demand for non-Microsoft clients).
> Lack of HTTPS on the author's site also adds a nice subtle flavour to the blogpost.
If you're not doing anything requiring security, you don't need HTTPS, IMHO.
> POP should really be considered legacy
I know people who knowingly use POP to keep their remote boxes empty, and keep everything local, so I don't think we should decide for people that swiftly.
Similarly, I'll let the wisdom of "Teh Internetz" decide whether JMAP is worth the effort to replace IMAP. Having an IETF RFC is a good start, but let's see...
> If you're not doing anything requiring security, you don't need HTTPS, IMHO.
I disagree, for a lot of reasons. For one thing, I don't want some random Wi-Fi network to know every page I visit, even insecure pages.
I also don't want to leak any information about my browsing habits. Using https everywhere limits the information you leak about how much of your traffic is sensitive.
Unless you use DNS over HTTPS, all the effort there is moot. Even then, a flow server can trace all the point to point IP traffic passing over it. Yes, it limits the obtained data a lot (no hostnames to begin with), but a proper traffic analyzer is rarely blinded completely by HTTPS.
Knowing that I'm watching YouTube is one thing, knowing what YouTube video I'm watching is a completely different problem.
Edit also: even with DNS over HTTPS, a lot of the Internet uses SNI for TLS, so anyone snooping can see what hostname you're visiting anyway (not to mention, IPs are not significantly more private than hostnames, and those will always be public unless you're using Tor or something similar).
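For illustration, a rough sketch of what a DoH lookup looks like in practice (Python stdlib, using Cloudflare's public JSON endpoint as an assumed example; the endpoint and response shape are from memory, so treat it as a sketch). Note it only hides the lookup itself; the TLS connection that follows still leaks the hostname via SNI unless ECH is in play:

    import json, urllib.parse, urllib.request

    def doh_lookup(name, rrtype="A"):
        qs = urllib.parse.urlencode({"name": name, "type": rrtype})
        req = urllib.request.Request(
            "https://cloudflare-dns.com/dns-query?" + qs,
            headers={"accept": "application/dns-json"},
        )
        with urllib.request.urlopen(req) as resp:
            answer = json.loads(resp.read())
        # The lookup itself rides inside ordinary HTTPS on port 443, but the TLS
        # ClientHello of the *next* connection still carries the hostname in
        # plaintext SNI unless ECH/ESNI is in use.
        return [rr["data"] for rr in answer.get("Answer", [])]

    print(doh_lookup("example.com"))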
If Comcast is doing something wrong, why does everyone else need to do something about it? If Comcast stops routing packets on port 443, should everyone else stop using it as well?
Other IT corporations are also evil and untrusted.
But no one cares if they can look into all your mails and documents (Microsoft, Google), track you all over the web, see where you are going, which flight you are taking,...
The important thing is that ISPs are evil and untrusted. facepalm
> But no one cares if they can look into all your mails and documents (Microsoft, Google), track you all over the web, see where you are going, which flight you are taking,...
I don't think I go a single day without reading posts complaining about exactly that, so obviously people do care.
ISPs are the most blatantly scummy of the lot. They log everything, usually due to government requirements, sell it to anyone, give it to the government without a warrant, inject adverts and JS into unprotected requests, and do all kinds of other malicious things.
They are easily eliminated from the picture, and the fewer parties with access to your data, the better. There is really _no_ good argument for not having HTTPS on any site, unless it's some very rare case like an intranet site.
This is like saying we shouldn't use encryption just because there are bad guys stealing our data... just because hackers do something wrong, why does everyone else need to do something about it?
We have to do a lot of things because bad guys do bad things. What is the alternative? Pretend they don't exist?
The website operator has to use it, too. Many HTTP servers still do not support TLS1.3 let alone ECH (Draft 13). ECH is still experimental. Cloudflare disabled their ESNI trial a while back (ESNI worked great for me outside the browser), so unless they have now got ECH working (I still have not seen any announcement), currently there are even fewer sites offering encrypted SNI. You could probably count them on one hand. And Firefox (nightly), Chromium (105+) and Brave (nightly) are probably the only browsers that would support ECH and it is not enabled by default. I would be pleased to learn I am wrong here, because I would love to again start using sites that do not return requested pages unless a servername is sent.
This couldn't be further from the truth. DoT is easy to block, so anyone who wants to censor or surveil you will just do so. You should always use DoH instead, since it's way more resistant to blocking.
Unfortunately, people are now often fighting with their own devices for control over which 3rd party services they access. This sometimes means that you have reasons to MITM or block traffic your own devices generate if you want to control aspects of who you actually send data to, or what data you actually send.
Not just block. I run split-horizon DNS at home for a few of my services. Without being able to control the DNS for devices on my LAN, they can't use those services.
Now you might argue that's a bit silly, but it is a use-case.
> If you're not doing anything requiring security, you don't need HTTPS, IMHO.
There have been documented attacks of people getting hacked because they were browsing the web with plain-HTTP and someone injected malicious pages mid-stream; see "Quantum Insert":
In corporate hellscapes like the USA, there is often no choice in ISPs, either because other ISPs can't justify the infrastructure investment to set up in opposition to the incumbent, or because the incumbent has lobbied local government to make use of power poles/conduits exclusive to them.
Sure, but this was in specific reference to “My ISP”. I can understand no choice in a hotel/cafe/etc. But there’s now lots of choice for home internet. Wired, cellular, even Starlink.
> If you're not doing anything requiring security, you don't need HTTPS, IMHO.
No, no, no, no, no, no, no.
ALL Web traffic should be https (or http/2 or /3). If you connect to a site that uses insecure http, ANY link between you and that site can easily snoop on traffic and even inject different content. You CANNOT be sure that the content you see is what was originally served by the server, and you can certainly expect that state actors will be noting that you viewed this content, and building a profile on you based on that information.
Hey, fellow genius on Hacker News, do note that TLS doesn't protect against state actors, because they can very easily manipulate certificate authorities, and a malicious certificate authority completely removes most protections TLS supposedly provides.
It's enough to compromise one CA for TLS to be entirely defeated - any CA can sign a certificate for any site, and TLS implementations will accept it. The only defense is pinned certificates, but that comes with its own problems.
> I know people who knowingly use POP to keep their remote boxes empty, and keep everything local, so I don't think we should decide for people that swiftly.
That could be done using IMAP as well; the overhead for that use case seems quite small, and then it's just one protocol to support. And I have run into a bunch of non-techy users with problems like "if I read my mail on my desktop I can't access it on my phone anymore, so I always use the phone", which tells me their desktop mail client likely uses POP3. By not offering POP3 you remove that class of support problems with very little downside.
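For what it's worth, the "download everything and empty the remote box" workflow is doable over IMAP too. A minimal sketch (Python imaplib; host and credentials are made up):

    import imaplib

    with imaplib.IMAP4_SSL("imap.example.com") as conn:
        conn.login("user@example.com", "app-password")
        conn.select("INBOX")
        _, data = conn.search(None, "ALL")
        for num in data[0].split():
            _, msg = conn.fetch(num, "(RFC822)")
            with open(f"local-{num.decode()}.eml", "wb") as f:
                f.write(msg[0][1])                    # keep a local copy
            conn.store(num, "+FLAGS", "\\Deleted")    # mark for removal
        conn.expunge()                                # empty the remote box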
I missed that, because I don't jump down every linked rabbithole. Thanks.
Basic auth shouldn't be used for anything; I'm surprised it isn't already deprecated to hell and back. The letter says that Microsoft intends to deprecate IMAP and POP because some people use them with Basic auth. That doesn't make sense.
What downsides does POP3 have? It looks like a terribly simple protocol and does the job of moving messages from one place to another quite well for me.
Having to use anything more complex would be a downgrade: more complex to set up on the server and client side, with more options and more problems to take care of.
The workload required to poll is heavy. You need to reconnect TCP, TLS and login. Once you find you have no messages, you have to start over to check again.
https://billpg.com/pop3-commit-refresh/
Deletes are batched and only committed at the end of a connection. Unless you're willing to close and reopen a connection right away, any messages you delete are in a not-quite-deleted limbo until you close the connection.
https://billpg.com/pop3-deli/
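To make both points concrete, here's roughly what a single POP3 poll looks like with Python's poplib (host and credentials made up): every check repeats the whole handshake, and deletions only commit at QUIT.

    import poplib

    def poll_once():
        conn = poplib.POP3_SSL("pop.example.com")     # fresh TCP + TLS handshake
        conn.user("user@example.com")
        conn.pass_("app-password")                    # ...and a fresh login
        count, _ = conn.stat()
        for i in range(1, count + 1):
            lines = conn.retr(i)[1]                   # message body, line by line
            # (this is where you'd write the message to local storage)
            conn.dele(i)                              # only *marked* for deletion
        conn.quit()                                   # deletions commit here
        # To check for new mail again, the whole dance starts over.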
Looks like most of these are issues for large providers only or if you have multiple clients/protocols enabled, which is I guess fair. Thanks for the overview, it's always interesting to see limitations of protocols I use daily.
I use POP3 only to move messages from my mail server's mailbox on a VPS to my LAN using a pull method. Polled mailbox is thus mostly empty or has very few messages at each poll. So it's a nice fit.
Are there any non-Microsoft email clients that support different authentication protocols with IMAP connections to Exchange Online? The message from Microsoft certainly makes it seem like IMAP itself is being deprecated.
This is a self-signed cert with no chain of trust to a cert issuer, which means we're back to square one... No way to know that whoever is providing the data to Gemini isn't lying about being Conman Laboratories.
> Clients can validate TLS connections however they like (including not at all) but the strongly RECOMMENDED approach is to implement a lightweight "TOFU" certificate-pinning system which treats self-signed certificates as first- class citizens. This greatly reduces TLS overhead on the network (only one cert needs to be sent, not a whole chain) and lowers the barrier to entry for setting up a Gemini site (no need to pay a CA or setup a Let's Encrypt cron job, just make a cert and go).
> TOFU stands for "Trust On First Use" and is public-key security model similar to that used by OpenSSH. The first time a Gemini client connects to a server, it accepts whatever certificate it is presented. That certificate's fingerprint and expiry date are saved in a persistent database (like the .known_hosts file for SSH), associated with the server's hostname. On all subsequent connections to that hostname, the received certificate's fingerprint is computed and compared to the one in the database. If the certificate is not the one previously received, but the previous certificate's expiry date has not passed, the user is shown a warning, analogous to the one web browser users are shown when receiving a certificate without a signature chain leading to a trusted CA.
> This model is by no means perfect, but it is not awful and is vastly superior to just accepting self-signed certificates unconditionally.
That is a terrible security assessment and it is an awful model for the web.
OpenSSH's TOFU works in its specific context primarily because SSH targets are usually hosts you've connected to beforehand, and you can't place an evil proxy in between. And to prevent issues with TOFU, we've got SSHFP DNS records (so, trusting DNSSEC).
Now with the web, it's rather ridiculous to suggest that simply not visiting any new websites on untrusted connections is an acceptable security model. Plus there's the seeming lack of alternative methods (like SSHFP/TLSA + DNSSEC) to establish trust on untrustworthy networks (which, well, is how most networks should be treated).
Giving the DNS PKI any influence over your SSH connections seems insane, since there is literally no reason whatsoever the DNS root and TLD operators should have any say whatsoever in how you connect to your own servers.
If you're worried about key continuity issues, do what large SSH fleet deployers do, and use certificates. Key continuity was the motivating use case for SSH certificates, before they were used to do MFA/SSO logins for users.
I don't disagree with the analysis of Gemini's trust model; I think you're right that key continuity doesn't work on the scale of the public web. Of course, I don't think DNSSEC has any answers here, either; the Web PKI is probably the best thing we've got right now.
I just highlighted the fact that OpenSSH (which was an inspiration to the Gemini people) finds this an issue and have devised solutions. You and everyone else are absolutely free to not to trust DNSSEC and deal with the pitfalls of TOFU-over-internet in some other way.
Though in theory, if you just use SSHFP+DNSSEC, those operators being malicious shouldn't degrade SSH's security; it should only hinder the same TOFU. In that case I suspect people would fall back to the Web PKI to look up the reason for the mismatch.
The Gemini to WWW portal does have access to the content via TLS1.2 or greater, as that is the only way content is served under the Gemini protocol. AFAIK, Internet Archive (IA) does not crawl gemini://. I thought perhaps IA might have crawled https://portal.mozz.us, but I was mistaken.
Regarding the issue of "integrity", i.e., not being modified in transit, ideally all www pages could have a hash (signature), or "digest", of the page content. As long as the www user can obtain the signature over a secure channel then, in theory, integrity can be verified. The content could be sent over an insecure channel and integrity could still be verified. This sort of verification has long been common (well before HTTPS became widespread) when downloading files, namely, software, e.g., as tarballs.
The problem with using an intermediary that has crawled the www, whether it is IA or Google or whomever, is that the www user has no way to know how the content was retrieved. How does every www user know that the pages in Google's index that they are purportedly searching were accessed via HTTP or HTTPS. The only people who would have firsthand knowledge of the retrieval are those who observed it. As such, there is some (misplaced) trust involved when people use IA, Google or other intermediaries.
Personally, if I am forced to use an intermediary, e.g., a web archive, a search engine, a third party DNS provider, etc., then I try to use multiple intermediaries to obtain the same data, then compare. This does not solve the problem, but it is arguably better than trusting any particular intermediary.
Ideally, web page authors and intermediaries could provide signatures for the web pages they serve. (The same way that software authors provide signatures for the archives of the software they publish.) For example, if one uses Common Crawl then one gets a "WARC-Payload-Digest" header. For example, the payload digest for example.com's index.html is currently
Of course, that digest should be published by the author of example.com's index.html.
If its signature was published by the original source, then there could be multiple sources for example.com's index.html across the web. Each copy would match the same signature published by the author of example.com's index.html. This practice has long been used in software distribution where multiple "mirrors" contain copies of the same files.
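A tiny sketch of the idea (Python; the URLs and the "published" digest are placeholders): fetch the same page from several sources and compare each against a digest the author publishes out of band.

    import hashlib, urllib.request

    def digest(url):
        with urllib.request.urlopen(url) as resp:
            return hashlib.sha256(resp.read()).hexdigest()

    published = "0123...abcd"   # the digest the author would publish out of band
    for source in ("https://example.com/index.html",
                   "https://mirror.example.net/example.com/index.html"):
        d = digest(source)
        print(source, d, "matches published digest:", d == published)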
I still find there are numerous "insecure HTTP endpoints" in operation on the www. It seems to me that as long as these are concealed, no one complains about them. I recently commented about this with respect to podcasts. A surprising number of podcasts are being served over HTTP. But as long as the podcast listener is unaware of how their software is accessing them, no one complains about potential modification in transit or other issues with using an insecure channel.
> As such, there is some (misplaced) trust involved when people use IA, Google or other intermediaries
That's the unfortunate reality of trust as a currency... On average, people are way more likely to trust Google, a household name, than Gemini, a three-year-old brand-new protocol with its own mystery set of new and exciting not-yet-discovered security issues.
Google does mis-crawl all the time (in fact, they have a whole division dedicated to confirming whether sites aren't detecting their crawler and actively lying to them; sites that do get penalized in search results). But it's Google, so people believe they have a vested interest in getting it right. There's no such guarantees in people's minds for data coming over Gemini protocol; it hasn't been earned yet.
It's consistently baffling to me why people try to "attack" Gemini, as if it were some sort of "threat". A threat to what, I am not sure. One of the common foibles is they try to compare it to "the web". This is absurd since Gemini is not "the web", nor is it anything close to a Hypertext Transfer Protocol (HTTP).
IMHO, at best Gemini is a Gopher redux. No one is planning to run commerce over Gopher. It is no different for Gemini. It's a "people's protocol", not a corporate one. It does not even have an RFC. It's relatively easy to write clients and servers. No corporate vendor or advertising sponsorship is needed. No one needs to have a "business model" to publish data/information via Gemini.
The Gemini FAQ explicitly addresses critics who see the internet through web-tinted glasses:
1.6 Do you really think you can replace the web?
Not for a minute! Nor does anybody involved with Gemini want to destroy Gopherspace. Gemini is not intended to replace either Gopher or the web, but to co-exist peacefully alongside them as one more option which people can freely choose to use if it suits them. In the same way that some people currently serve the same content via gopher and the web, people will be able to "bihost" or "trihost" content on whichever combination of protocols they think offer the best match to their technical, philosophical and aesthetic requirements and those of their intended audience.
The Gemini protocol is not a commercially-controlled HTTPS. Some folks like it better than HTTPS.
4.2 Server certificate validation
Clients can validate TLS connections however they like (including not at all) but the strongly RECOMMENDED approach is to implement a lightweight "TOFU" certificate-pinning system which treats self-signed certificates as first- class citizens. This greatly reduces TLS overhead on the network (only one cert needs to be sent, not a whole chain) and lowers the barrier to entry for setting up a Gemini site (no need to pay a CA or setup a Let's Encrypt cron job, just make a cert and go).
TOFU stands for "Trust On First Use" and is public-key security model similar to that used by OpenSSH. The first time a Gemini client connects to a server, it accepts whatever certificate it is presented. That certificate's fingerprint and expiry date are saved in a persistent database (like the .known_hosts file for SSH), associated with the server's hostname. On all subsequent connections to that hostname, the received certificate's fingerprint is computed and compared to the one in the database. If the certificate is not the one previously received, but the previous certificate's expiry date has not passed, the user is shown a warning, analogous to the one web browser users are shown when receiving a certificate without a signature chain leading to a trusted CA.
This model is by no means perfect, but it is not awful and is vastly superior to just accepting self-signed certificates unconditionally.
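For reference, a minimal sketch of the TOFU check described above (Python; a plain JSON file stands in for the "persistent database", and the spec's expiry-date handling is omitted):

    import hashlib, json, os, socket, ssl

    PIN_FILE = "known_hosts.json"

    def tofu_check(host, port=1965):                 # 1965 is Gemini's default port
        ctx = ssl.create_default_context()
        ctx.check_hostname = False                   # no CA / hostname validation at all
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        fingerprint = hashlib.sha256(der).hexdigest()

        pins = json.load(open(PIN_FILE)) if os.path.exists(PIN_FILE) else {}
        if host not in pins:
            pins[host] = fingerprint                 # trust on first use
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
        elif pins[host] != fingerprint:
            raise Exception(f"certificate for {host} changed - possible MITM")
        return fingerprint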
Some people might like it more, but it's a terrible model.
Since most ISPs can't be trusted, TOFU becomes basically useless. Ignoring that major issue with the model, it's also unusable on especially untrustworthy networks like airport WiFi. I'm also going to preemptively say that suggesting people just not visit new websites on those networks (because you can't TOFU them) is ridiculous.
"It is not awful" in this context is a very strong claim with little backing it up.
"Most ISP's can't be trusted, TOFU becomes essentially useless."
Why not just use HTTPS on the "untrustworthy" ISP's.
The use of the word "most" implies that there are some that can be trusted.
Assuming you are a trustworthy source for such information (and how do I know it's really you and not an "imposter"), then what are they. Please list the ISPs everyone can "trust".
The point that is being missed in this comment thread, and most others about TLS, HTTPS and CAs, is that there is a question of who decides whether something is "trustworthy" or not.
Personally I like to make these decisions for myself. Unlike an incredible number of internet commentators, I do not purport to tell anyone else who they should or should not trust. That decision is ultimately for each person to make on their own. We can provide information that may help a person with their decision, but it's still their decision, not mine.
But that's not how "chain of trust" works.
The concept of "chain of trust" itself does not even exist in the real world. It only exists in the imagination of socially inept persons hiding behind keyboards. In practice, for HTTPS, the cast of characters is a laundry list of third party intermediaries, all trying "cash in" on the use of the internet, a public resource we already pay ISP's to access. The idea that any of them would be sources of "trust" is comical.
Why trust "domain name registrars" as a source of useful information about people who run websites.
Why trust CAs issuing non-EV certificates. They only verify that someone rents a domain name from an "ICANN-approved" registry.
Why trust CAs issuing EV certificates. The people approving these CAs all have a vested interest in the web (browser) as a means of online advertising.
Why trust the people who "approve" CAs for inclusion in popular web browsers.
There are something like 75 CAs hardcoded into popular web browsers. If I want to remove one, do I have to edit the source code and recompile. Inconvenient to say the least.
In all of this third party nonsense, there is no opportunity for an ordinary person, not invested in or benefitting from the "tech" company racket, to have any input on whether or not she wants to "trust" a website is being operated by a particular person. She is effectively locked out of the process. These third parties are often comprised of people I would never trust IRL. But they hide behind keyboards so we never get to see them for what they are.
At least with Gemini, clients and servers are smaller and simpler, and easy to edit and recompile. Gemini clients, written by anyone, not necessarily "tech" companies, are not designed with online advertising in mind. The protocol itself is not "advertising-friendly". It is little more than plain text.
The "threat model" for me in the majority of web use is the "business model" of so-called "tech" companies, i.e., surveillance, data collection and advertising, not "imposters". Nevermind that "tech" companies have pushed for a web that is 100% commercial/political, where even recreational use is monitored for insights useful to advertising. That only creates a greater incentive for "imposters". When I started using the internet it was still predominantly used for academic and military purposes.
If a "tech" company employee wants to choose to use HTTPS, DNSSEC, TOFU, etc., then that is their decision. But if they want to remove the ability of anyone else to make that decision for themselves, then I see a problem with that.
I don't see the problem. HTTPS is basic internet hygiene. It's no worse than telling people they should mind their body odor when they're in a space with a lot of other people. Possibly indelicate, but undoubtedly true.
Will it always be a "low profile blog"? It just got on the front page of HN, which is not exactly "low profile".
As for what risk, two words: Great Cannon. For those who don't know, it's a well-known MITM attacker which injects JavaScript code on non-HTTPS pages, the injected JavaScript being used to do distributed denial of service attacks on other sites. Using HTTPS protects against these kinds of attacks.
It results in your browsing history being tracked and sometimes sold by your access point and ISP at a page level instead of just domain level, results in injected ads and banners on some access points, results in injected trackers on Verizon, and more broadly it permits unknown third parties to alter the content of your website.
Your analogy is flawed because this is Hackernews and, superior beings that we are, we understand body odor to be a symptom of a microbiome that's out of whack due to "modern life". Accordingly, we don't bathe in order to cultivate healthy skin bacteria; some of us wallow in mud instead.
I'm not sure you read what I wrote. At no point did I say to bathe, or in fact proffer any treatment at all for body odor. I merely pointed out that refusing to deal with it when you're in close quarters with a large number of people is antisocial. If somehow wallowing in mud treats it for someone, then that's what they should be doing. The problem is knowing how to solve it and refusing to do so.
I set it up for my home server maybe a year ago. I'm not a web developer or system admin, but I am a highly experienced software engineer with a deep understanding of network protocols. The documentation didn't seem the greatest, basically being, "copy/paste this if you use Apache." My particular configuration was quirkier than the example assumed, and I had to go through a few rounds of troubleshooting. It definitely wasn't trivial.
Lol, thanks. I rarely work on stuff exposed to the Internet. I've implemented multiple bare metal IP stacks from scratch, including Ethernet, DHCP, ARP, ICMP, and UDP. I've run real-time safety critical packets over TCP links. I've tunneled IP traffic through the international space station. I've done unforgivable things with iptables and awk.
The funny thing is that IPv6 short-circuits my brain. Why do I have four addresses, and where did they come from? Why isn't there a link-local address for loopback and wireguard?
For most modern blog hosts, it's a pushbutton solution or automatic.
For self-hosted, it's table-stakes knowledge. Failure to do it implies the site admin knows so little about modern security that their access logs are probably only thinly secured. It's an "admin smell," if you will.
Fun fact: HTTPS is implicitly required by the GDPR because the user agent metadata is considered PII and websites are required to implement best current practices, which includes HTTPS. There is no exemption for (public) personal websites as they're still providing a publicly accessible service and processing PII to do so.
The same goes for a privacy policy for that matter.
What traffic between a blog without user auth for comments and its readers needs to be encrypted? Why? I understand that Let's Encrypt exists and it's "easy" to set up (for people with root access to the system hosting their site + a decent level of technical sysadmin proficiency).
So that middlemen can’t spy on which part of the blog you’re visiting / alter the content of the blog / inject ads. There’s also all sorts of vulnerabilities that crop up if you use HTTP and HTTPS on the same site.
All of this was present in the 90s and early 2000s so not really theoretical attacks.
Unless the ISP forces the use of its own Certificate Authority on its users, this isn't possible. TLS and the infrastructure surrounding it were designed with the integrity of the connection in mind. Knowing this, I would be very curious to know what specifically you encountered and where. I see you added some details elsewhere, but it still doesn't jibe with how TLS works.
Our ISP doesn't force MITM certificates, but they used to sporadically mess with retrieved pages via stream hijacks (you navigate to HN and get greeted with a full-page ad) or banner insertions, forcing the connection to mixed status while keeping the inner frame HTTPS.
They're not doing this anymore, because I guess they now know how to use their DPI infra in useful ways to them.
My mobile carrier still injects stuff into HTTP pages, but doesn't mess with HTTPS ones, at least yet.
The overall connection will drop to "mixed status", but the inner frame will still be HTTPS.
My ISP used to do that when they started deploying DPI hardware, as a technology demo. They'd hijack your traffic and sporadically inject full-page ads (without any redirection) or add (billing) warning banners to retrieved pages.
My mobile carrier sometimes injects popups for SMS messages and notifications arriving at my modem, if they find the chance, without disturbing the connection too much.
So, having an HTTPS connection doesn't make it tamper-resistant, only tamper-evident at most.
> The overall connection will drop to "mixed status", but the inner frame will still be HTTPS.
That is not possible. You would get a cert mismatch error.
Consider what your browser does when you navigate to the page: it directly opens a TLS connection to port 443. There's nothing your ISP can do to force the browser to request the page using a non-TLS connection.
What might have happened, is that you might have carelessly typed the address in your URL bar without the "https://" prefix, as in "www.example.com"; for legacy historical reasons, most browsers (except IIRC some very old browsers from the dialup era, which always required an explicit URL scheme) treated that as if you had prefixed it with http:// (so it actually was the non-HTTPS "http://www.example.com" that you were using). Many sites would then redirect you to the HTTPS site, but your ISP could hijack the page before that redirect (since the redirect was not protected by HTTPS). Had you been careful to always prefix any address you type with "https://", there would be no initial non-HTTPS connection to hijack.
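For completeness, the usual server-side mitigation for that hijack window is to make the plain-HTTP listener do nothing but redirect, and to send an HSTS header over HTTPS so the browser stops trying HTTP at all. A toy sketch (Python stdlib, hostname made up):

    import http.server

    class RedirectToHTTPS(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(301)
            self.send_header("Location", "https://www.example.com" + self.path)
            self.end_headers()

    # The HTTPS side would then answer every request with something like:
    #   Strict-Transport-Security: max-age=31536000; includeSubDomains
    # so the browser refuses to fall back to plain HTTP afterwards.
    http.server.HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()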
Afaik not actually possible in practice. Browsers these days will a) remember valid certs for a domain b) remember that a domain is HTTPS (no downgrades) c) sniff certificates based on crowdsourcing.
Yeah. I hope you realize though why there’s a stern castigation, on a particularly techy and security conscious forum, of using HTTP instead of HTTPS. People here both remember the problems and what’s been done to combat them. It’s discomforting to see backwards progress when so much has been done. Security isn’t purely a technical problem. It’s an educational one too.
They happened rarely, and never survived a reload or further navigation. They completely stopped after a while.
I have a 4G modem from my mobile carrier. They inject an info popup when I receive an SMS or any other notification, if they can manage it. It's very rare now, too.
Scenario A is impossible and has been impossible for as long as https has been a thing.
The only way it would be possible is if you installed a root cert from your ISP onto your computer so that it would trust a cert issued by them. Otherwise, they would not have a valid cert for example.com and you would be presented with a cert error.
This is literally the exact thing https was designed to prevent. It is and always has been impossible (again, unless the client machine is administered by the ISP or whoever the middleman is, and they can install a cert on the machine)
Um what? Nothing in IMAP prevents you from doing the same. Just because most client implementations assume you want to keep your mail on the server by default, does not mean the protocol doesn’t account for the other possibility.
And to be fair, configuring most clients to retrieve and then delete, or keep a local copy in addition to the server one, is not difficult at all - these options are not hidden or anything.
POP3 is an exceedingly simple protocol compared to IMAP. I'd rather use the simplest tool for the job.
In case the password leaks, an attacker can only fetch new messages over POP3, not plant them or fetch the entire archive from the server. (Yeah, I can fetch and delete over IMAP too, but then what's the point of all the extra unused complexity of IMAP? It's just extra risk.)
So does IMAP? Most clients only cache headers because it’s faster and most devices are always-connected; but you can certainly locally download the entirety of your IMAP contents.
Considering you have to download the entirety of the mail contents to read it anyways, I have no idea what makes you think this is an impossibility.
Yeah, but typically the difference is that you can't see IMAP as a backup. Whereas with POP3 (and not having your client set up to automatically delete emails on the server) you can.
With IMAP, when an email gets deleted by some client, other clients will also delete their local copies of that email. That won't happen with POP3.
But I haven't read either of the two protocols, so I'm not sure whether that's something required by the protocol or just a common behavior of clients.
Common POP3 implementations let me leave things on the server indefinitely, for a chosen period, or delete them immediately on retrieval, leaving the local copy intact. Can you think of any IMAP implementations that allow that?
For those of us with Unix-y mail setups the move to OAuth2 can be a bit tricky, but there are now several different programs to help (spurred, I suspect in no small part, by Microsoft/Exchange's stance). The ones I know about are:
Not only it’s tricky and user-hostile, but it also severely decreases security by forcing people to use fundamentally insecure mechanism to obtain the authentication token.
It makes it necessary to use a browser to obtain the token. That browser is a huge attack surface. With web, it doesn’t matter, since you need to be using it anyway, but for mail it’s just additional cruft.
That's just for certain flows, like the common authorization code flow. The client credentials flow does not require a browser, for example.
Not sure about Google, but Microsoft supports client credentials for IMAP/POP3[1], but not for SMTP yet. IIRC it was supposed to be rolled out this January but is still missing. Hopefully they can get that deployed ASAP.
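For anyone wiring this up by hand, a rough sketch of the client credentials flow plus IMAP XOAUTH2 (Python stdlib; the tenant, client ID/secret and mailbox are placeholders, the endpoint and scope are as I remember them from Microsoft's docs, and the app still has to be granted mailbox access on the Exchange side, so treat this as a sketch rather than a recipe):

    import imaplib, json, urllib.parse, urllib.request

    TENANT = "your-tenant-id"                         # placeholder
    TOKEN_URL = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token"

    # Step 1: get an access token with the client credentials grant.
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": "your-app-client-id",            # placeholder
        "client_secret": "your-app-client-secret",    # placeholder
        "scope": "https://outlook.office365.com/.default",
    }).encode()
    token = json.loads(urllib.request.urlopen(TOKEN_URL, body).read())["access_token"]

    # Step 2: log in to IMAP with SASL XOAUTH2 ("user=...\1auth=Bearer ...\1\1").
    user = "mailbox@yourdomain.example"               # placeholder
    xoauth2 = f"user={user}\1auth=Bearer {token}\1\1"

    conn = imaplib.IMAP4_SSL("outlook.office365.com")
    conn.authenticate("XOAUTH2", lambda challenge: xoauth2.encode())
    conn.select("INBOX")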
More like authorization. Authentication is completely opaque for most people using gmail. (except for those very few using service accounts and signing their own authorization tokens)
Or maybe you can enlighten me how you can get the token for XOAUTH2 from just your gmail email address and password without involving any opaque google service.
Authentication is happening completely outside of OAuth, inside some Google black box. 2FA has nothing to do with OAuth at all. It's just another feature of Google's black box, which decides whether to give you the access/refresh tokens or not.
Well, the question I responded to was "What does OAuth have to do with authentication".
I fully agree with the move away from plain passwords in this case, given that it's no longer "just" the password to a mail account, but to much, much more.
Now while I think OAuth adds some features that can be useful in certain settings, I'll be inclined to agree that requiring OAuth isn't the best move.
However the alternatives would probably require a lot of extra work on Microsoft's behalf, like being able to set up device-specific passwords or similar.
So, given the need to move away from plain account passwords, I can understand why they wouldn't want to do that and just use what they already had.
Yes, that's exactly the reason. Enforcing security is an endless black hole of effort that increases exponentially with every new thing you need to support. So yes, reducing the number of things that need to be evaluated isn't just some security guy trying to control everyone; it's a hard requirement just to be able to have security at all.
> I do have to wonder how long until Google decides that only certain clients can connect with Gmail?
Already the case on mobile:
> If you use the Play store or GitHub version of FairEmail, you can use the quick setup wizard to easily setup a Gmail account and identity. The Gmail quick setup wizard is not available for third party builds, like the F-Droid build because Google approved the use of OAuth for official builds only. OAuth is also not available on devices without Google services, such as recent Huawei devices, in which case selecting an account will fail.
I'm not sure M66B needs to get approval for the other builds, though, because the access is gated at the cloud API, not though client libraries. You can use Play Services to grant OAuth tokens, or you can use the boring old Google API client libraries, or roll your own; you just need to add the other signing key fingerprints and application IDs to the credential in the project's cloud console.
I could easily be mistaken, but there are numerous open-source projects acting as mail clients through the GMail API, Google has granted them access, and they don't have to use a closed-source client to do it. Most of them don't even target Android.
You can still enable these protocols per user - Microsoft are disabling these and Basic Authentication by default as most users don’t use them and it’s the primary vector for sending emails from compromised accounts.
Any Microsoft tenant I set up or manage already has policies to block anything but the Outlook desktop or mobile clients with MFA on every account.
From what I understand, Microsoft will disable basic authentication starting January 2023, and the next few months are sort of a "grace period" to migrate to Microsoft's new authentication protocol [1]:
> On September 1, 2022, we announced there will be one final opportunity to postpone this change. Tenants will be allowed to re-enable a protocol once between October 1, 2022 and December 31, 2022. Any protocol exceptions or re-enabled protocols will be turned off early in January 2023, with no possibility of further use. See the full announcement at Basic Authentication Deprecation in Exchange Online – September 2022 Update.
> Microsoft are disabling these and Basic Authentication as most users don’t use them and it’s the primary vector for sending emails from compromised accounts
Even if most users don't use basic auth, I don't see why Microsoft has to disable it altogether. For people who want to keep using legacy clients, it's not too hard to force the usage of application-specific passwords.
I'm referring to IMAP/POP, you'll be able to use these with OAuth instead of basic auth.
I imagine their stats show that 99 percent of users use Outlook of some sort so to up security they turned basic auth off.
So Microsoft want you to switch from (awful) Basic Auth to a Microsoft-modified version of OAuth, is that it?
I don't use Microsoft mail services, except as SMTP destinations. Does Outlook not support Digest Auth? Digest Auth certainly isn't perfect (I seem to remember that it requires an extra roundtrip), but it's not a security disaster like Basic Auth.
My main problem with OAuth is that it's hard for users to understand. If we expect users to use the internet securely, then they need to be able to know when that's not what they're doing, and I don't know any ordinary Joe that I could explain OAuth to. Hell, I implemented OAuth once, and now I can't remember how it works. It doesn't help that OAuth is a moving target.
Ignoring authentication, POP and IMAP truly are legacy protocols in the sense that they were designed in an era where bandwidth, not latency was the major constraint for accessing email. It made sense to send a notification that the size of your inbox changed, and let the client decide whether or not to fetch your emails. Since then, internet connections have gotten tens of thousands of times more bandwidth, but the speed of light hasn't improved correspondingly.
Now or then, whatever the bandwidth or latency, why couldn't we just use a REST web service which would let a client access whatever parameters it needs (incl. mailbox size), list the messages, and read the messages in the form of JSON/XML arrays of metadata plus message bodies in Markdown format?
Because without a standard protocol there would be N underspecified proprietary variants: the Gmail web service, the Office 365 web service, the Office 365 from last year web service...
All of them with complicated authentication requirements, idiosyncratic URL construction, and other difficulties. You would throw away the baby and keep the bathwater.
I meant a standardized REST protocol. Why does it have (or did it ever have) to be something obscure like SMTP/POP3/IMAP if it could be just REST (still standardized, name it whatever)?
There are Google and Microsoft APIs for exactly this. Well, not Markdown exactly, but since emails are not sent in Markdown, it would be kind of subjective exactly how to convert them properly.
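To be concrete, the Gmail API is already pretty close to that "REST mailbox" idea. A hedged sketch (Python stdlib; endpoints as I recall them from Google's docs, with the access token assumed to have been obtained via OAuth separately):

    import json, urllib.request

    TOKEN = "ya29.your-oauth-access-token"            # placeholder
    BASE = "https://gmail.googleapis.com/gmail/v1/users/me"

    def api_get(path):
        req = urllib.request.Request(
            BASE + path, headers={"Authorization": f"Bearer {TOKEN}"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    # List a few message IDs, then fetch metadata ("snippet" is a short
    # preview Google includes in the message resource).
    for m in api_get("/messages?maxResults=5").get("messages", []):
        meta = api_get(f"/messages/{m['id']}?format=metadata")
        print(m["id"], meta.get("snippet", ""))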
Some context: Microsoft has disabled the use of alternative email providers in Windows' built-in email app since Windows 10, and for 365 users, unless you got one of the more expensive accounts intended for large companies, then no custom domain names for your email unless you use Godaddy as registrar. They have an exclusivity deal with Microsoft.
So sure, one can look at this from an authentication perspective, or simply look at this as one in a line of steps in a specific direction.
This is completely false. I've just installed the Mail app on my Windows 11 machine; the first thing it asks you is what e-mail provider you use [0], and there are options for iCloud, Yahoo and a generic IMAP setup along with the Microsoft offerings.
I'm running Pro but I've seen plenty of people with Home/Core machines using the default Mail app, no idea why since it's so much worse than the webmail option
> and for 365 users, unless you got one of the more expensive accounts intended for large companies, then no custom domain names for your email unless you use Godaddy as registrar
I guess that’s US only? With 5 employees we are a pretty small company and this is not the case for us.
"At the moment, we only support connecting domains managed by GoDaddy with Outlook.com"
In my market those accounts were also marketed towards small companies, with only Microsoft 365 Business tiers and above allowing domain registrars other than GoDaddy.
"Announced on Monday a long-term strategic partnership to offer Office 365 as GoDaddy’s exclusive core business-class email and productivity service to its small-business customers".
Microsoft do however change their tiers and plans regularly, and whom they target them at. In my job, seeing customers unable to switch to other registrars has been a fairly common occurrence. Microsoft 365 Business plans should be fine to my knowledge, though I recall Microsoft 365 email essential for small business wasn't; that has since been rebranded to Microsoft 365 Business Basic, but I don't know if that means it is a Microsoft 365 Business plan now or still the more limited "personal" plan. A customer who bought essential in the past and is now on Basic might be able to leave GoDaddy, but I don't know, and it might depend on software versions, updates and who knows what.
What hoops are you referring to?
Regarding the password, Google requires you to use an App Password, i.e. a password used solely for POP/IMAP, separate from your Google password. I suppose setting up POP/IMAP nowadays would involve:
- Enable POP/IMAP in GMail settings
- Set an App Password
- Enter email address, full name and App Password in Thunderbird first-time use wizard.
I suppose this can be a pain if you are not aware of these two settings; newcomers would likely need a tutorial. However, this setup is really a one-time process.
> google passwords
At my work they are disabled in our enterprise account - no alternative to OAuth. I think they may even be disabled by default in general in Gmail.
Are there any open standards that Microsoft is still going to support to receive emails, or will you be required to use only their proprietary clients to do so going forward?
Strange that no one has talked about Gmailify: that protocol that replaced POP and IMAP on Gmail so that you can't see your email on other providers unless Google accepts them.
IMAP IDLE has existed since 1997 (RFC 2177) and works decently enough for single mailboxes.
It is limited to only a single mailbox however, which is not quite ideal if one does server-side filtering (as one then needs N connections for N folders).
But for that there has been IMAP NOTIFY (RFC 5465) since 2009, which covers the remaining use cases.
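For the curious, IDLE is simple enough to drive by hand, since Python's imaplib has no built-in helper for it. A minimal sketch (host and credentials made up, error handling omitted):

    import imaplib

    conn = imaplib.IMAP4_SSL("imap.example.com")
    conn.login("user@example.com", "password")
    conn.select("INBOX")

    conn.send(b"A001 IDLE\r\n")
    print(conn.readline())        # server answers "+ idling" (or similar)
    print(conn.readline())        # blocks until something like "* 23 EXISTS"
    conn.send(b"DONE\r\n")        # leave IDLE before issuing other commands
    print(conn.readline())        # "A001 OK IDLE terminated"
    conn.logout()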
Their plan is to remove old text-only protocols and force the use of XOAUTH or similar protocols that require a web browser, so they can spy on you with cookies and more metadata. Both Google and Microsoft have announced this move.
b) Most configurations keep the password on disk somewhere, often in plaintext.
c) User configurations break on password rotation.
Your tracking theory doesn't really hold up: a) they know exactly who you are in your email client anyway, since you log in, and b) most users are logged in to their Google/Microsoft account anyway because of O365/Workspace/YouTube.
a) is only relevant once, during setup; b) isn't fixed by Oauth; c) is by design, I'd argue.
I support adding 2FA to email in some way, but I heavily dislike using browsers to do so. What's wrong with adding a simple challenge-response protocol for FIDO2/U2F USB drives? Or a TOTP popup if you don't have a physical security key?
This can all be standardised without a browser ever touching the email client. We already have IMAP authentication methods that use signatures (like most 2FA hardware keys use) or challenge/response methods. You can even do client certificate authentication through STARTTLS when lacking a TPM.
> What's wrong with adding a simple challenge-response protocol for FIDO2/U2F USB drives? Or a TOTP popup if you don't have a physical security key?
Infrastructure to handle authentication on the web already exists. This is a massive benefit for providers and client developers. Whatever you propose does not. Good luck convincing big email providers to agree on a new standard like that.
GitHub alone has like 5 different ways to handle 2FA. Google has, I think, 3? Using a browser to handle this simplifies things a lot.
b) While it's not fixed by OAuth, it greatly limits what can happen:
— First, you only need to store a refresh token that can expire and that expiration can be controlled by administrator
— Second, that token has limited scope: password provides access to entire account
— Third, it's clear where it came from — if a token gets compromised, you will know where it happened. With a password, it's unclear.
> Infrastructure to handle authentication on the web already exists.
Email does not run over the web, it runs over the internet. It uses a completely different set of protocols from the web, which were all invented at least 5 years before the first web protocol. Why should email clients be required to add HTTP support in order to make email work?
Maybe we should take heed of Zawinski's Law, and make all web browsers implement native email clients instead. Yeah, that's probably it. The Netscape Communicator/Mozilla Suite model should never have been dropped, and it was a mistake to separate Firefox and Thunderbird as separate projects!
Well, because it already exists? All email clients need to do is:
- Open a link in a browser (don't you dare open it in an embedded browser, I will find you and force you to type 100 characters as I dictate it to you)
- Handle callback.
That's all. That's the entire authentication. There is probably not a single platform (language and maybe a framework) that is used to build an email client and has no library to handle this with a few lines of code.
I think that's a much better and easier solution than making developers of email clients handle every possible authentication method email providers could come up with (see reply to my comment about 2FA with Google).
Also, old Opera, with its torrent client, calendar, compressing proxy and email client, was the best.
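A minimal sketch of those two steps (Python stdlib only; the authorization URL, client ID and port are placeholders): open the system browser, then catch the redirect on a loopback listener.

    import http.server, urllib.parse, webbrowser

    AUTH_URL = ("https://login.example.com/oauth2/authorize"
                "?client_id=your-client-id&response_type=code"
                "&redirect_uri=http://localhost:8400/callback")

    class Callback(http.server.BaseHTTPRequestHandler):
        code = None
        def do_GET(self):
            query = urllib.parse.urlparse(self.path).query
            Callback.code = urllib.parse.parse_qs(query).get("code", [None])[0]
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"You can close this tab now.")

    with http.server.HTTPServer(("localhost", 8400), Callback) as srv:
        webbrowser.open(AUTH_URL)   # step 1: hand off to the system browser
        srv.handle_request()        # step 2: catch the redirect carrying the code
    print("authorization code:", Callback.code)   # exchange it for tokens next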
I didn't mean that they had native IMAP support for those, but that these are the 2FA methods they support in general. With browser-based OAuth, it's viable to build support for a new 2FA method in just one place. Needing to build each of these into a protocol + get all popular IMAP clients to implement that support? It'd take an eternity; we'd probably still be stuck on just TOTP.
Indeed, OAuth makes it easy to swap out the actual authentication step. Which is nice, because the service shouldn't really care about that, only that the user is authenticated and authorized.
> What's wrong with adding a simple challenge-response protocol for FIDO2/U2F USB drives? Or a TOTP popup if you don't have a physical security key?
Our application sends mail on behalf of our customers. This is done in an on-prem background service running on one of their servers, wherever that might be.
So, anything interactive is a no-go. And installing a physical USB key is probably a no-go for most customers, especially those who have their servers hosted by a provider.
FWIW, there’s the hacky way reddit clients authenticate: "password:OTP" instead of just your normal password. Not that MS could do that, but I wanted to mention the option ;)