HTTPS on Stack Overflow: The End of a Long Road (nickcraver.com)
574 points by Nick-Craver 34 days ago | 173 comments



At $previous_job we once turned on HTTPS for our entire customer website and online store, only to have our customer support team be bombarded by phone calls claiming that our "website was down."

After much teeth gnashing and research, we determined that a large segment of our user base was still using WinXP and the encryption protocols we offered weren't available to them.

We didn't think this would be a problem because the current version of the software wasn't compatible with WinXP any longer.

There was some debate internally whether the better fix was to include the legacy encryption protocols or just to leave the HTTP version of the site running and use Strict-Transport-Security to move capable browsers to HTTPS.

In the end we had to include the legacy protocols so those customers could use our online store.
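For reference, the HSTS option debated above boils down to serving one extra response header over HTTPS. A minimal sketch in Python (the one-year max-age is just an illustrative default):

```python
def add_hsts(headers, max_age=31536000, include_subdomains=False):
    """Add a Strict-Transport-Security header to a response-header dict.

    Browsers that see this header will use HTTPS for all requests to the
    domain for the next max_age seconds. Legacy clients that can't complete
    a modern TLS handshake never receive it and simply stay on HTTP.
    """
    value = "max-age=%d" % max_age
    if include_subdomains:
        value += "; includeSubDomains"
    headers["Strict-Transport-Security"] = value
    return headers
```

Note that HSTS only upgrades browsers that have already reached the HTTPS site once, which is exactly why it pairs with leaving the plain-HTTP version running for everyone else.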


At $current_job we're currently in the middle of the same thing, but took the precaution of checking logs to see which customers use older encryption protocols (we're B2B), and have given them X months to upgrade their systems before we make the switch on our side.

The logic that was communicated to them was that, as a service provider, security is a prime concern for us (as it should be for them as well), so we can't keep lagging on this forever. Currently, we have $single_digit merchants we're still waiting on to make the switch.

It's made the whole switch process much easier and made customers actually appreciate our pro-activeness in this! :)


I know it's hindsight and all that, but why didn't you check your website analytics first? Seems a fairly massive assumption that should have taken 10 seconds to check.


That would have been really smart. However, this move was driven by the product owner, including the requirement that we must score an "A" on the SSL Test site. I had just assumed he knew what he was asking for.

The scanning of the server logs occurred to us in hindsight as well.


Some people don't spy on their customers and don't have this kind of information available for analysis.

They're admittedly few, though, and their moral high ground is debatable considering that there are self-hosted FOSS alternatives around nowadays.


I completely understand where you're coming from, but the User-Agent string is included in regular HTTP requests and you don't need to resort to overbearing client-side analytics to aggregate it; it's right there in the access logs on the server.


Calling aggregate anonymous analytics "spying on your customers" is absurd nonsense.


It's "spying" when you're gathering data they didn't consent to give, like mining through their contacts, scanning running processes or uploading unrelated content from their computer. The browser User-Agent string is hardly classified information.


Yeah, this is more like the used car dealership noticing that a lot of their clientele drives Toyotas. SPIES!


Analytics on the web are neither aggregate nor anonymous.


Declaring "absurd nonsense" isn't an argument.


In this case, the absurdity and nonsensical character of the 'spying' claim is fairly self-evident.

When a client voluntarily makes a request to a server, it presents a bunch of information for the server to see and consume. This information is not meant to be kept secret from the server. Among such pieces of information can be some about the characteristics of the user agent, including OS. It is disingenuous at best to call collecting such voluntarily-presented and clearly-transmitted data as "spying" on a user.

A basic requirement for spying is that the collecting party obtains information that can reasonably be considered confidential or restricted. Details about the system from which you send a request are, by definition of the protocol, not confidential or restricted to the recipient of your request. It is not reasonable to expect a server not to look at or use information you present to it. Therefore, it isn't "spying" for the recipient to consume that information. The information might be used in ways some people (e.g., OP) don't like, but that does not make obtaining the information "spying".


I only posted this because it was the second time that day I saw "absurd nonsense" used as a comment with no additional content. It annoyed me enough the first time that it stuck out like a sore thumb the second time, then I noticed it was the same user and it was their last 2 comments.


You don't need an argument against absurd nonsense.


Better men than you have been beaten by absurd nonsense that shouldn't have needed to be argued against. ;)


I recall a woman who recently had this experience as well.


Just like those anonymous taxi fare statistics; no one ever extracted meaning from those...


The whole point is to extract meaning from analysis but not spy on personal information. Knowing which clients support what kind of SSL isn't personal, it is part of the request transaction.


Mere server-side logging can pick out something like this via User-Agent. Is it spying to count the number of times a request with "Windows NT 5.1" is sent to your server?
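As a sketch of how little is involved, counting XP hits in an access log is a few lines of Python (the log format shown is an assumption; adjust the regex to whatever your server writes):

```python
import re
from collections import Counter

# Combined Log Format puts the User-Agent in the last quoted field, e.g.:
# 1.2.3.4 - - [01/Jan/2017:00:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1)"
UA_RE = re.compile(r'"([^"]*)"\s*$')

def count_xp_requests(log_lines):
    counts = Counter()
    for line in log_lines:
        match = UA_RE.search(line)
        if not match:
            continue
        ua = match.group(1)
        # "Windows NT 5.1" is the kernel version XP reports in its UA string.
        counts["WinXP" if "Windows NT 5.1" in ua else "other"] += 1
    return counts
```

No client-side analytics involved; the information is already in data the server must see to answer the request at all.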


Mere server-side logging can include the negotiated encryption parameters (but doesn't by default, IIRC).


Yes, of the connection the server is party to. It's bad security to record them, but how is it spying?


Are you implying that analyzing server logs to gather aggregate user agent statistics is "spying on [your] customers?"

Because that is untrue.


Aside from my personal opinion (which largely agrees with you), there are jurisdictions where a specific IP address is considered enough to make it (and the rest of the data) personal information, requiring a justification, information (or even consent), and other processes to protect privacy.

That's why Google Analytics has an option to zero out the last octet of an IP address.
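For the IPv4 case, that anonymization amounts to something like this (a sketch only; GA's actual implementation is server-side and also handles IPv6):

```python
def anonymize_ip(ip):
    # Zero the last octet so the address identifies a /24 network,
    # not an individual host: "203.0.113.42" -> "203.0.113.0".
    parts = ip.split(".")
    if len(parts) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    return ".".join(parts[:3] + ["0"])
```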


The important part is whether the data are anonymous.

You might be interested in the EFF's Best Practices for Online Service Providers:

https://www.eff.org/wp/osp


Well, you can use the Qualys SSL Labs tool to check your SSL setup, and all the main search engines have said they will start flagging sites that use unsafe HTTPS, or showing a warning page before letting you proceed.


Out of all the grey comments, this one does not deserve to be greyed out.


> We didn't think this would be a problem because the current version of the software wasn't compatible with WinXP any longer.

> There was some debate internally whether the better fix was to include the legacy encryption protocols or just to leave the HTTP version of the site running and use Strict-Transport-Security to move capable browsers to HTTPS.

Where can I read about this? Is there any way to display a special "Your browser is outdated" page for the users on WinXP?

Sorry if these seem like basic questions. I am just curious and would like to hear some expert advice.


At our place, we put a redirect on the front end networking device that detected if a browser couldn't support more modern encryption protocols, and sent them to an HTTP information page (instead of to the application itself) if so. This allowed us to update the core app to force newer protocols, while still providing some sort of UX for those left behind. We used Piwik to track the hits on the redirect page to get a sense for how many users were left behind.
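In WSGI terms, that front-end redirect behaves roughly like this middleware (the UA tokens and info-page URL are made-up placeholders; a real deployment would key off the attempted TLS handshake at the networking device rather than the User-Agent):

```python
# User-Agent fragments we treat as "can't do modern TLS" (illustrative only).
LEGACY_UA_TOKENS = ("Windows NT 5.1", "MSIE 6", "MSIE 7", "MSIE 8")

def legacy_redirect_middleware(app, info_url="http://upgrade.example.com/"):
    def wrapped(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if any(token in ua for token in LEGACY_UA_TOKENS):
            # Send legacy clients to a plain-HTTP information page
            # instead of the application itself.
            start_response("302 Found", [("Location", info_url)])
            return [b""]
        return app(environ, start_response)
    return wrapped
```

Counting hits on the info page (as the parent did with Piwik) then gives you a running estimate of how many users are left behind.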


We did a similar thing, but folded it into "unsupported" and "deprecated": unsupported browsers get an HTML page extolling the virtues of updating your browser once a decade, whilst deprecated browsers (basically IE10 at the time, tbh) were treated to a popup explaining that whilst the site probably works just fine, their browser wasn't fully up to date and the experience might suffer.

Eventually, and I doubt we had anything to do with it, IE10 usage dipped below the magic 0.5% (the point where it costs us more money to support than it earns us) and it was finally unsupported.

The only crappy browsers we still officially support are ancient safari and IE11, both of which are still going relatively strong for reasons we've never been able to fully explain!


IE11 is the most recent version of IE, it's not like it's old or unsupported. And it has way more compatibility tweaks than Edge, so lots of people haven't switched.


Corporate environments often require employees to use IE 11 because of outdated internal web apps. Where I work, the Windows laptops, even on Windows 10, only allow IE 11, not even Edge.


> Is there any way to display a special "Your browser is outdated" page for the users on WinXP?

https://browser-update.org/ is a great service that does this.

For the case where SSL was broken, unfortunately that wouldn't help at all, because they'd never be able to load the webpage.


Out of curiosity - roughly what year was that and what percentage of the customer base would you say was still on Windows XP at the time?


Had a similar one at my last role. It was an HTML5 remote desktop thing with websockets, TLS 1.2, etc. Got a bug report from a user that it didn't work in Safari. We didn't have a Mac in the office to test with, so we asked the user for more details.

"Oh no, this isn't a Mac, it's Windows"

This is a user of a highly secure system, containing user PII, who expected to use it on a five-year-old browser on XP.

~bangs head~


On an unrelated-to-the-topic note: I'm on a team making an HTML5 remote desktop thingie, and I now have to start building the Linux agent.

If it's alright for you to answer:

1. What would be the best cross-platform way to proceed? We now have separate agents for Windows and Mac, which causes maintenance hell.

2. Is Chrome Remote Desktop's approach of streaming desktop images as video better than images + diffs?

3. Is there any open-source mirror-driver kind of thing for Linux?


The software used was a commercial tool built on Guacamole (Apache) called Inuvika. It's pretty awful, but having Linux and Windows apps on the same virtual desktop is quite cool. I don't know how much of that functionality comes from Guacamole or from the Inuvika add-ons.

Inuvika/Guacamole also supports plain RDP, but we didn't use that, just the HTML5 client (browser).

If you want to see what open-source can do then look at Guacamole and go from there.

Don't think that helps, but...


The frustrating point about a similar experience was...

You can support HTTP and the occasional knowledgeable person will suggest you should upgrade. Or you can force TLS with SSLv3 enabled, and suddenly you'll hit a flood of people letting you know you're about to be hacked, based on online scanners. Often complete with requests for a bug bounty.


The other problem with Windows XP and HTTPS is SNI. Without it, you can't serve more than one domain with different SSL certificates from the same IP address; you have to use SANs or separate IP addresses. This doesn't only affect IE on XP but every browser.
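The mechanics, sketched: with SNI the client names the host during the TLS handshake, so the server can pick the matching certificate; without it the server only has the IP address to go on. A toy cert-selection function (the paths and hostnames are hypothetical):

```python
# Hypothetical per-hostname certificate files.
CERTS = {
    "example.com": "certs/example.pem",
    "shop.example.com": "certs/shop.pem",
}

def pick_cert(sni_hostname, default="certs/default.pem"):
    # Clients without SNI (e.g. IE on XP) send no hostname in the
    # handshake, so the server can only present its default cert for
    # that IP address - hence the need for SANs or extra IPs.
    if sni_hostname is None:
        return default
    return CERTS.get(sni_hostname, default)
```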


> https://twitter.com/David_Leavitt/status/866790014717497345

IIRC, Chrome and Firefox for XP support SNI because they bundle their own TLS libraries, rather than using a system library.


I'm surprised there are any non-SNI visitors to SO. I expect most of them are bots, e.g. old versions of wget and the like.


Did you use any alerting mechanism that informed you about how many percent of users were affected by this?


> The password to our data center is pickles. I didn’t think anyone would read this far and it seemed like a good place to store it.

You ought to have more confidence in your writing. BRB stealing all your servers.


Only if you get there first. And it may in fact be pickles2.


It's actually hunter2


This guy works at Slashdot by the way. Nothing worth stealing in their servers, methinks.


This is incredibly detailed; in short, CDNs, cookies/authentication, tons of subdomains, and third-party/user-generated content make it a pain to move to HTTPS.

I was chatting with a non-engineer friend about why it's so hard to estimate how long tasks will take, and this seems like a prime illustration: the dependencies are endless.

I also love the Easter egg:

"The password to our data center is pickles. I didn’t think anyone would read this far and it seemed like a good place to store it."


Just know the username and you can log onto https://stackoverflow.com/admin.php


Warning: link's NSFW.


It opens a random YouTube video every time (I got several 10-hour vids, including the Jeff Goldblum laugh and relaxing hairdryer sounds).


Seems there's a limited set; it's a pool of 10-hour vids.


Stack Exchange is no longer available from my workplace due to this change. We have a strict no-posting-code-fragments policy, and SE was viewed as too risky to allow without some restriction in place to make it read only. Before HTTPS, the IT department had worked out such a read-only restriction by blocking the SE login with firewall rules. But with HTTPS that kludge is no longer possible, so the site is blocked.


You should try this link from home: https://stackoverflow.com/jobs


Many banks have very strict IT policies about posting things on the internet, and they have valid business reasons for that. Not saying you meant that, but it's not like they're some dark, silly workplaces that people should get away from ASAP.


No, the reasons for the policy might be sound.

The enforcement is stupid (both the previous hack and now the block). For me this would actually be a sign that the workplace isn't quite the right fit, if the basic assumption is that I'd ignore the policies anyway; that's what this seems to indicate.


> The enforcement is stupid (both the previous hack and now the block)

Hack indeed. Blocking POST would block posting stuff, while blocking the login page just means you can copy your cookie over, and it also keeps you from viewing your notifications.


> Many banks have very strict IT policies on posting things on internet

Yes, they do. And I really love it. Because it means that MY bank eats their lunch, because the bank I work for actually UNDERSTANDS how to use technology, while still keeping (very!) strict controls.


Would be curious which bank you work for. Most do not seem to value technology--which is odd, since most "cash" only exists as data in a computer somewhere. I'd much prefer to patronize a bank that understands and takes seriously their tech.


I work at Capital One. We have been a bank (and are regulated as one), but are trying hard to become a technology company that is specifically focused on banking.

And I'm probably biased, but I think we have some pretty great products as well (checking accounts with no fees that pay some interest, savings accounts with very good rates, and so forth), so maybe you'll get a good deal as well as a technical focus.


Same thing happened to me at a workplace once. They blocked StackOverflow, GitHub, Bitbucket, Sourceforge, CodePlex and Google Code.

I told them all estimates go up by 2 years since we would need to reimplement everything. It ended up being unblocked a week later.


I don't know how you'd get anything done, since there are answers on Stack Overflow that solve problems that would otherwise involve hours to days of fussing to arrive at the same non-intuitive solution.

All roads lead to Stack Overflow these days for programming problems.


For every answered question, there are probably 20 unanswered ones. Almost none of my embedded programming questions got answered.

Edit: my estimate is wildly off. It's basically the opposite of what I said.


12,095,709 questions have an answer, 7,506,004 of those have an accepted answer, and 1,813,270 aren't yet answered.

I'd say your 1:20 ratio is just a little bit off :)


Just out of curiosity, do those 7.5+ million accepted answers include those closed as duplicates? Because by far my biggest complaint is finding the exact question I have was closed as a duplicate and links to a question that is useless at answering my question.


In that case you can vote to re-open and perhaps even post a bounty. Although bounties tend to invite lots of low-quality, low-effort answers just on the off chance that they might be the top-voted one once the bounty runs out.


Thanks for the correction! I am asking pretty niche questions.


I feel you. I taught myself programming from age 13 onward (I'm now 23), so by the time Stack Overflow came around I had figured out how to solve things myself. When I have a question, it's usually either opinion-based (a bad fit for SO) or not a common question.

I'd say 1:20 is a good estimate if I ignore answers that didn't read my question (which is most of them), but indeed the facts disagree.


What? Stack Overflow has been around since ~2008 - you certainly didn't learn how to solve things yourself a year into programming :).


Back then I didn't speak proper English, and how many questions were actually covered on SO in the beginning? It took some years to get to where we are, both for SO and for my English ;)


I have had the same experience with embedded programming questions. I suppose they depend too much on the hardware. I do quite a bit of programming with the BeagleBone Black (or at least the same processor), and it seems the best resource is the mailing list.


This sounds beyond absurd to me. Do they also block USB ports to prevent you from copying everything onto a USB drive, external hard drive, or phone? Do they lock/solder your machines shut to prevent you from taking out a hard drive, or plugging in a new one and then taking it out? Do they prevent you from... printing the code? In what parallel world do they exist that they think this would make a difference?


As someone who works at a finance related company: yes. No USB storage is allowed, all cloud hosting sites are blocked (not SO, thankfully, they're more worried about us stealing SSNs and other PII than code), and all printers are logged and have drivers that detect if you're printing PII and censor it by default (or so I've been told, I don't really feel the need to test that).

A friend works at an investment firm, and has similar restrictions as the above commenter mentioned (no SO, no USB, no printing, etc), as well as pulling his phone out while at his desk or around any other computer being an immediate fireable offense.


A few years ago, I interviewed at a company called 'G Research' and the security procedures I noticed included:

* A 'secure zone' where work took place.

* All desktops virtualised, using thin clients.

* All Windows, no admin access.

* Screens, filesystem snapshots, and web access recorded, all the time.

* All software installation subject to approval (e.g. Firefox not permitted, only Chrome).

* Desks fixed in place, all cables in locked cable trays.

* Separate internal-only e-mail system.

* No printers.

* Specially printed notepads & other stationery in the 'secure zone'; no secure-zone stationery to leave, and no non-secure-zone stationery to enter.

* No cell phones, cameras or laptops permitted (lockers were provided).

* Entry points with human guards and metal detectors.

* No late working outside guards' hours.

While it would have been possible to get around the security if you were inventive enough (e.g. camera with no metal parts) it would be difficult to do so then believably claim it was an accident.

I didn't take the job, because I didn't feel I could be productive with so much bureaucracy.


I've worked in financial software and they do block USB ports for any storage device. They block SD card slots too. All work was done on a VM that could only be accessed from the company network and was remotely hosted.


Leaving aside all the reasons why this policy is super dumb (which I'm sure others will cover quite adequately), I guess your IT department can't figure out how to create their own CA certificate and do SSL interception?


Yeah, I'm amazed and concerned that you have a security team so paranoid that they would make SuperUser read-only but apparently lack the ability to perform SSL interception. Considering the huge value the latter has in any kind of post-compromise scenario and, increasingly, to prevent compromise in the first place... there needs to be a real discussion about getting priorities in order.


I disagree about the dumbness.

People do incredibly stupid things. I've seen customer data dumps on web forums.


Certainly doable, but it shouldn't be done.


There are many enterprise "solutions" that basically do this "out of the box". Yeah it shouldn't be done and a lot of employees are likely unaware that IT can see all of their SSL traffic but it's a big business.


HTTPS or not, certainly nothing is private here, but that's expected for this sort of place.


Banning SE really doesn't go far enough then does it? Perhaps any site with a text box should be forbidden.


What sort of company do you work at? Why can't everyone just be told not to post code?


In many places (banks) there are legal reasons for this.


I've worked at three big banks (in three different countries actually). We've always had access to stackoverflow.


This is nothing that can't be addressed through training. Questions on Stack Overflow with generic code actually get better responses than those bogged down with irrelevant details. You should strip out all labels, names, even extraneous fields that don't matter. It makes for a more generic problem-and-solution pair that can help others as well, and eliminates the problem of leaking proprietary information.


Maybe about half the time I end up answering my own question during this step. The act of genericizing the question ends up giving me some new approach, which either works, or leads me to new existing questions-and-answers.


Yeah. When you remove all the confusion, the problem is usually pretty obvious.


Can you elaborate on this?


IP protection. In a prior life I saw someone fired for mailing a model to a home account. Pasting code to a public website would violate similar protocols.


What sort of company do you work at where every employee obeys every directive?


A company that trusts its employees. There are so many ways to get around this anyway that it doesn't make sense to try to enforce it in the first place (considering the issues that follow).


What sort of company do you work at that this sort of crude blocking attempt would actually work?


Wow, that sounds ridiculous. What's the reasoning behind that policy?


Seems obvious that someone high up on the corporate ladder, with no practical knowledge in how the nitty-gritty work gets done, made the decision. Probably to "minimize IP theft".


"What do you mean our competitor is using 'for' loops? We invented those!"


Why don't they just recompile chromium without support for the textarea element, make that the only officially permitted browser, and call it a day? :-)


Sorry, but such a policy is just stupid. There are many, many ways one could get a snapshot of code without posting it online. I respect SE for their decision to make things right rather than kneel to customers and the faulty "security" practices one so often sees.


I honestly wonder how exactly places like this want to enforce policies like this. Do they allow you to take a phone into your workplace? Aren't they scared you will take a photo and upload the code fragment?


Damn, that's even more strict than when I worked in the IC as a government contractor. I don't know how you'd get anything done, realistically.


Do they realize their employees can use 4G to access SE?


Not if they're forced to check their phones in. I have friends working in the defence industry for whom this is something they have to deal with.


Well, for sites doing TS work you can sort of understand that.

I knew someone who worked for the scientific civil service, and they were not allowed to have a phone with a camera.

I have also been for an interview at a site (HMGC) where you have to hand in all electronics at reception. This was an avowed role, btw, so I am not breaking any laws; the organisation even has job adverts on the local buses.


Not even TS work - it can include lower level classifications too.


If the architecture and code quality are good, you should be able to open-source your code and not have any security vulnerabilities.

You need to find a new job.


If you like working on these kinds of projects, the SRE team at Stack Overflow is hiring and we allow remote work full time! https://stackoverflow.com/jobs/143725/site-reliability-engin...


Just a reminder, HTTPS isn't enough. Be sure to turn the other security knobs with headers...

https://securityheaders.io/?q=https%3A%2F%2Fstackoverflow.co...
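A quick way to audit a response yourself: check which of the commonly graded headers are absent. A sketch that works on any mapping of header names to values (the list below approximates what securityheaders.io scores, minus HPKP):

```python
RECOMMENDED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "X-XSS-Protection",
    "Referrer-Policy",
]

def missing_security_headers(response_headers):
    # Header names are case-insensitive, so compare lowercased.
    present = {name.lower() for name in response_headers}
    return [h for h in RECOMMENDED_HEADERS if h.lower() not in present]
```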


Yep - we're aware. I thought about putting in our Content-Security-Policy-Report-Only findings about what all would break, but the post was already a tad long. It's quite a long list of crazy things people do.

As the headers go, here's my current thoughts on each:

- Content-Security-Policy: we're considering it, Report-Only is live on superuser.com today.

- Public-Key-Pins: we are very unlikely to deploy this. Whenever we have to change our certificates it makes life extremely dangerous for little benefit.

- X-XSS-Protection: considering it, but there are a lot of cross-network, many-domain considerations here that most other people don't have, or don't have as many of.

- X-Content-Type-Options: we'll likely deploy this later, there was a quirk with SVG which has passed now.

- Referrer-Policy: probably will not deploy this. We're an open book.


Great! Thanks for the detailed response!

Expect-CT is one to look at as well.

Basically just tells the browser that Certificate Transparency should be available through the provider (DigiCert in this case).


> - Public-Key-Pins: we are very unlikely to deploy this. Whenever we have to change our certificates it makes life extremely dangerous for little benefit.

Is it possible to pin to your CA's root instead of to your own certificate? That would make rotating certs from the same CA easy but changing CAs hard (but changing CAs is already a big undertaking for big orgs).

Also, I see your five minute HSTS header ;)


Many of the headers presented here are questionable. X-Frame-Options should be replaced by CSP's frame-ancestors directive. X-XSS-Protection: 1 has long been the default in browsers that support it, and Chrome has blocked by default for the last two releases. Referrer-Policy is a matter of choice: the referrer is useful information for the target site as long as it doesn't contain sensitive information. IMO, most sites shouldn't set this header.


> X-XSS-Protection: 1 has long been the default in browsers that support it, and Chrome has blocked by default for the last two releases.

Do you have references to back this up?

> Referrer-Policy is a matter of choice: the referrer is useful information for the target site as long as it doesn't contain sensitive information. IMO, most sites shouldn't set this header.

Exactly. I think its primary use is when the original site's URL contains user-supplied input, like Google's search results page.


For X-XSS-Protection, see: https://bugs.chromium.org/p/chromium/issues/detail?id=654794. Currently implemented in M57, but you can still disable filtering. This should be removed in the future.


Thank you!


Every site I've put in there gets a failing grade. From Google to Apple to Slashdot etc.

Wonder what the point is then.



Helpful site, but all these headers will slow down a site that doesn't need them. Too bad they aren't defaults. Hopefully HTTP/2 mitigates that enough.


The only header that I can think of that might slow down a site is Content-Security-Policy. Even that is negligible as long as you don't have 1000 entries.


HTTP/2 compresses headers, so if they aren't changing, the amount of overhead is negligible (see HPACK).


Well, yes and no - it depends on the length. Let's take 3 common examples. Here's GitHub's relevant headers (that we don't have):

Content-Security-Policy:default-src 'none'; base-uri 'self'; block-all-mixed-content; child-src render.githubusercontent.com; connect-src 'self' uploads.github.com status.github.com collector.githubapp.com api.github.com www.google-analytics.com github-cloud.s3.amazonaws.com github-production-repository-file-5c1aeb.s3.amazonaws.com github-production-user-asset-79cafe.s3.amazonaws.com wss://live.github.com; font-src assets-cdn.github.com; form-action 'self' github.com gist.github.com; frame-ancestors 'none'; img-src 'self' data: assets-cdn.github.com identicons.github.com collector.githubapp.com github-cloud.s3.amazonaws.com *.githubusercontent.com; media-src 'none'; script-src assets-cdn.github.com; style-src 'unsafe-inline' assets-cdn.github.com

Public-Key-Pins:max-age=5184000; pin-sha256="WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18="; pin-sha256="RRM1dGqnDFsCJXBTHky16vi1obOlCgFFn/yOhI/y+ho="; pin-sha256="k2v657xBsOVe1PQRwOsHsw3bsGT2VzIqz5K+59sNQws="; pin-sha256="K87oWBWM9UZfyddvDfoxL+8lpNyoUB2ptGtn0fv6G2Q="; pin-sha256="IQBnNBEiFuhj+8x6X8XLgh01V9Ic5/V3IRQLNFFc7v4="; pin-sha256="iie1VXtL7HzAMF+/PVPR9xzT80kQxdZeJ+zduCB3uj0="; pin-sha256="LvRiGEjRqfzurezaWuj8Wie2gyHMrW5Q06LspMnox7A="; includeSubDomains

Those are 1220 bytes. I'm not sure what they'll compress down to, but it's still non-trivial and not near 0 (anyone want to run the numbers?).
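One rough way to run those numbers, using zlib's DEFLATE as a crude stand-in (HPACK uses a static Huffman table plus dynamic indexing, so real savings will differ) on the first three GitHub pins quoted above:

```python
import zlib

# The HPKP header from above, trimmed to three of GitHub's pins.
header = (
    'Public-Key-Pins: max-age=5184000; '
    'pin-sha256="WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18="; '
    'pin-sha256="RRM1dGqnDFsCJXBTHky16vi1obOlCgFFn/yOhI/y+ho="; '
    'pin-sha256="k2v657xBsOVe1PQRwOsHsw3bsGT2VzIqz5K+59sNQws="; '
    'includeSubDomains'
)

raw = header.encode("ascii")
compressed = zlib.compress(raw, 9)
print(len(raw), len(compressed))
```

The base64 pin values are essentially random bytes, so they barely compress; only the repeated 'pin-sha256=' scaffolding shrinks much. For scale, the 10-packet initial congestion window is roughly 10 x ~1,460 bytes of payload, about 14.6 kB in the first flight.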

The same pair of headers are 969 bytes for facebook.com and 2,772 for gmail.com.

I don't know what ours would be - since we're open-ended on the image domain side it's a bit apples-to-oranges compared to the big players.

When you take into account that you can only send 10 packets down the first response (in almost all cases today) due to TCP congestion window specifications (google: CWND), they get more expensive as a percentage of what you can send. It may be that you can't send enough of the page to render, or the browser isn't getting to a critical stylesheet link until the second wave of packets after the ACK. This can greatly affect load times.

Does HPACK affect this? Yeah absolutely, but I disagree on "negligible". It depends, and if something critical gets pushed to that 11th packet as a result, you can drastically increase actual page render time for users.

If it helps, I did a blog post with some details about this a while back: https://nickcraver.com/blog/2015/03/24/optimization-consider...


Oh, I wasn't clear - I meant that on the same connection, headers are not re-sent in full for every page, just references to previous values (see [0]). The initial page load is a different matter, but that's part of the cost/risk analysis if you need CSP or HPKP (I agree it's not necessary and very easy to mess up).

[0]: https://http2.github.io/http2-spec/compression.html#indexed....

> When you take into account that you can only send 10 packets down the first response (in almost all cases today) due to TCP congestion window specifications (google: CWND), they get more expensive as a percentage of what you can send. It may be that you can't send enough of the page to render, or the browser isn't getting to a critical stylesheet link until the second wave of packets after the ACK. This can greatly affect load times.

I wonder how much of the page can be rendered in 10 packets...

Do you send Link preload headers?


I explicitly try to ensure that for my sites the first 10kB sent (so less than 10 packets typically) is enough to render all the information above the fold. Anything essential should make it out in the first 2 packets for old TCP slow-start rules. (Lipstick and ads can arrive later, once the user is happy reading or whatever, IMHO.) Has been my policy since about the mid '90s!


Note to self: Use subdirectories, not subdomains in the future


The other issue with subdomains is that some customers will insist on typing "www." in front of every domain. Since the wildcard cert won't match, those customers will see an error.


I feel like TLS certificates are fundamentally misdesigned there. It should be possible to have a wildcard certificate that matches all subdomains under a domain, no matter how many layers deep.
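For context, RFC 6125 is why a single wildcard can't cover arbitrary depth: `*` matches exactly one DNS label. A toy checker (an illustrative sketch of the single-label rule, not a full implementation of certificate name matching):

```python
# Per RFC 6125, a wildcard matches exactly one DNS label, so
# "*.example.com" covers "www.example.com" but not "a.b.example.com".
def wildcard_matches(pattern: str, host: str) -> bool:
    p_labels, h_labels = pattern.split("."), host.split(".")
    if len(p_labels) != len(h_labels):
        return False  # "*" cannot absorb extra labels
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

print(wildcard_matches("*.example.com", "www.example.com"))   # True
print(wildcard_matches("*.example.com", "a.b.example.com"))   # False
```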


Well if it wasn't for someone buying `*.com` back in the day, we probably could have them. Oh and then buying `*.*.com` after browsers banned that one, which led to RFC 6125 rule clarifications and restrictions.


Hey, I'm pretty sure that the first real domain name hack was sex.net, which as the proud owner of ex.net [PS: or was it sexnet.com, as we also have exnet.com?] caused some upset for a while, though mainly to disappointed one-handed typists I believe... B^>

BTW, did I blink and miss the "It really is all faster over HTTP/2, even given TLS" bit? My testing for my tiny lightweight sites close to their users (the opposite of what you're dealing with) is that HTTP/2 is slightly slower overall. Even with Cloudflare's advantages such as good DNS. And with the pain of cert management...

http://m.earth.org.uk/note-on-carbon-cost-of-CDN.html

Anyhow, thanks for the warts-n-all.


> which as the proud owner of ex.net

haha, that page is a priceless timecapsule:

Use the Java applet below to search ExNet's main Web pages.

When the ``Status'' indicator stops flashing and says ``Idle'', type key words in the ``Search for:'' box.

The ``Results:'' box will show you the documents that matched your key words, the best matches coming first in the list. Click on any line in the ``Results:'' box, and that document should appear in a new browser window in a few seconds. When you are finished with that document, you can close it without killing your browser.


That code did search-by-word from (IIRC before Google existed, ie Netscape 2) right up until Java applets were dropped, across all compliant browsers AFAIK. It did roughly what G's live search now does.


I would imagine the more resources your page has, the more benefit you can get from HTTP/2 because of Server Push. So if you're comparing a tiny lightweight site, I'm guessing you can't benefit as much from Server Push.


I have relatively little that would benefit from push; basically a tiny hand-crafted CSS file that I currently inline because HTTP/1.1 and even HTTP/2 overhead for having it separate may be too high.


Browsers use domains for everything from connection limits to data storage; if you use folders, everything will be shared.


Note to self: Use subdomains, not subdirectories in the future

Wait...


The real LPT here is using different domain altogether..

But wait, in that case browser will make another DNS fetch and open up a separate http connection!


TLS kills these kinds of "cool" features, which is kind of sad :( Unless you can afford wildcard certs.

What's the argument behind LetsEncrypt not doing that? Extended Validation stuff?


There's a long StackExchange answer about this: https://security.stackexchange.com/a/158164

But it boils down to there being no practical way for Let's Encrypt to automatically validate that a wildcard certificate is safe to issue.


It's a long answer that completely fails to address the possibility of validating ownership of the domain itself by e.g. adding a TXT record, which the ACME protocol already supports.


The general point is that being able to control the parent domain doesn't necessarily mean you control all possible subdomains as well. You need to prove ownership, not just control. Here's the relevant bit from the SO answer:

> If I have ownership of the parent domain example.com then I can freely create and control anything as a subdomain, at any level I choose. Note that here "ownership" is distinct from "control", which is what is validated by the ACME protocol.


Probably their agreements with their partner CAs. Given that those partners sell wildcards themselves...


Their "Let’s Encrypt Authority X3" intermediate is signed by their own root (ISRG Root X1). See https://letsencrypt.org/certificates/.


Subdomains were killed by SEO a long time ago (afaik, Google does not transfer domain PageRank credit to subdomains), so this is not limited solely by the cost of wildcard certs.


But this is orthogonal to the issue of LetsEncrypt not delivering wildcard certs.


They cost $199 or less. It's an annoying tax, but they do offer a lot of options, so they're often worth it.


Way less than that. I've got a wildcard SSL cert for my domain for $60, although that was an add-on to the domain itself and hosting, purchased from the provider of the latter.


How would they prove that you own every subdomain?


I'm not sure I understand the question. If you own something.com then you automatically own any possible subdomains.


The Let's Encrypt process is about validating control of the content on a domain, not about OWNERSHIP of the domain. To get a cert, you just have to be able to update a file at a Let's Encrypt specified location on the domain. This is only proving that you are in control of the website for that specific domain, not that you are in control of the DNS for the entire domain and all subdomains.

Of course if I own a domain, I own all the subdomains. However, being in control of the site served at port 80 for a domain does not mean I own it.


But the ACME protocol, the automation underpinning Let's Encrypt, supports validation via a DNS challenge (adding a specific TXT record to the domain). Would it not be possible to issue wildcards if-and-only-if a DNS challenge succeeds?
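The DNS-01 challenge mentioned above boils down to publishing a TXT record at `_acme-challenge.<domain>` whose value is derived roughly like this (a sketch following RFC 8555; the token and thumbprint strings here are purely illustrative):

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    # RFC 8555 §8.4: the TXT record value is the base64url-encoded
    # SHA-256 digest of "<token>.<account key thumbprint>".
    key_auth = f"{token}.{account_thumbprint}".encode()
    digest = hashlib.sha256(key_auth).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# Illustrative values, not a real challenge:
print(dns01_txt_value("example-token", "example-account-thumbprint"))
```

Because the record lives in the zone itself, passing this challenge demonstrates control of DNS for the domain, which is exactly the level of control a wildcard would need to attest.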


I think you're right.


Side question: any plans for IPv6?



No immediate plans. A decent amount of development is necessary there. There are so many places in our various systems that work with IP addresses, and many of them don't support v6 addresses.


One thing some places do is support IPv6 at the edge and translate to IPv4 for the next hop behind the edge servers.


Given the scale of Stack Overflow, you'd think they could set up AAAA records that point to a proper TLS 1.3+ server and leave the peasants on IPv4 going to one that's more...accommodating.


We could - but the network side isn't the problem. There's a lot of logging, user banning, etc. pieces that need IPv6 love first. We just haven't had the time yet.

There are network bits we'd have to evaluate heavily as well, e.g. firewall rules - basically the very limited benefits don't make it a priority, yet. When things change there, we'll do it.


Despite the "Google gives a boost to HTTPS" reasoning, which comes from Google itself, I've read several first-hand accounts of traffic (non-XP) dropping significantly right after the switch.


It would be better if scripts like jquery were not encrypted. This forces users to use e.g. a google service instead of caching/hosting the scripts themselves or getting them from another CDN. I do not understand why so many people do not consider the privacy implications of every single webpage requiring calls to google services. There are ways to avoid this, but it gets a lot more complicated when that requires MITM methods for SSL. Please: use a non-tracking CDN, host it yourself, or at least leave it HTTP.


Wow, I didn't expect this ("switching" to HTTPS) to be so hard.


It very much depends on the complexity and scale of your site. StackOverflow is a bit of an extreme case.

For example, if instead of having hundreds of domains serving millions of users with tons of user-generated content you're just serving static content from a single server on a small site, the entire process for you might actually be as simple as just running `certbot-auto` on the production server.

I suspect the difficulty of switching for most sites will fall somewhere between these two extremes.


Yeah, we've been working on this for about a year (not continually, but as we have time to try to work through the problems). We do use subdomains though, so that is part of the problem. We keep feeling like we are getting close, but then we run into another issue. It's like a rabbit hole that has no bottom.


> We keep feeling like we are getting close, but then we run into another issue. It's like a rabbit hole that has no bottom.

That's exactly what we experienced migrating a bunch of sites to https. There were so many things that we didn't anticipate.


Regarding the section "Mistakes: APIs and .internal"

Why wouldn't they use split horizon DNS for this? Seems like the perfect use case


Split horizon would point you at the same data center, rather than the writable one. So that's more of a .local than a .internal. We discussed this, but on the AD version we're on (pre-2016, no Geo-DNS) it's not actually supported the way you'd need, and it's a nightmare to debug.

We'd consider it for a .local once the support is properly there in 2016. Even subnet prioritization is busted internally, so that's a bit of an issue. Evidently no one tried to use a wildcard with dual records on 2 subnets before (we prioritize the /16, which is a data center) and it's totally busted. Microsoft has simply said this isn't supported and won't be fixed. A records work, unless they're a wildcard. So specifically, the `*.stackexchange.com` record, which we mirror internally as `*.stackexchange.com.internal` for that IP set, is particularly problematic.

TL;DR: Microsoft AD DNS is busted and they have no intention of fixing it. It's not worth trying to work around it.


Interesting, thanks!


Has anyone tried running Fastly behind Cloudflare? Are the tradeoffs worth it?


Why would you want to double your CDN costs for negligible benefit?


Is there some reason other than cost to do that? Curious.


Mitigate attacks much better than Fastly, flatten CNAME etc.


> Mitigate attacks much better than Fastly

if that's the concern, probably just better to configure switching than put both in front all the time.


Funny how the main reason for lack of SSL is said to be the lack of support from 3rd party services... and the first service quoted is ads.

https://nickcraver.com/blog/2013/04/23/stackoverflow-com-the...


Funny how?


I work at a government facility. Stack Overflow and github are now both blocked (in addition to all social media and webmail). But Hacker News is apparently ok.


Your blog posts are always an interesting read


How many questions on Stack Overflow did it take to pull off this migration?


Sadly, HAProxy (which Stack Overflow uses) does not support HTTP/2 directly; you need to terminate it via nginx or something else.



