After much teeth gnashing and research, we determined that a large segment of our user base was still using WinXP and the encryption protocols we offered weren't available to them.
We didn't think this would be a problem because the current version of the software wasn't compatible with WinXP any longer.
There was some debate internally about whether the better fix was to include the legacy encryption protocols or to just leave the HTTP version of the site running and use Strict-Transport-Security to move capable browsers to HTTPS.
In the end we had to include the legacy protocols so those customers could use our online store.
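For context on the second option: Strict-Transport-Security is just a response header served over HTTPS, e.g. (the one-year max-age is only an illustrative value):

    Strict-Transport-Security: max-age=31536000; includeSubDomains

Browsers that understand it remember to upgrade future requests to HTTPS for the duration of max-age, while older clients (like the XP ones in question) simply ignore the header and keep using plain HTTP.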
The logic that was communicated to them was that, as a service provider, security is a prime concern for us (as it should be for them as well), so we can't keep lagging on this forever. Currently, we have $single_digit merchants we're still waiting on to make the switch.
It's made the whole switch process much easier and made customers actually appreciate our pro-activeness in this! :)
The scanning of the server logs occurred to us in hindsight as well.
They're admittedly few, though, and their moral high ground is debatable considering that there are self-hosted FOSS alternatives around nowadays.
When a client voluntarily makes a request to a server, it presents a bunch of information for the server to see and consume. This information is not meant to be kept secret from the server. Among such pieces of information can be some about the characteristics of the user agent, including the OS. It is disingenuous at best to call collecting such voluntarily presented and clearly transmitted data "spying" on a user.
A basic requirement for spying is for a collecting party to be obtaining information that can be reasonably considered confidential or restricted. Details about the system from which you send a request are by definition of the protocol not confidential or restricted to the recipient of your request. It is not reasonable to expect a server to not look at or use information you present to it. Therefore, it isn't "spying" for the recipient to consume the information. The information might be used in ways some people (e.g., OP) don't like, but that does not make obtaining the information "spying".
Because that is untrue.
That's why Google Analytics has an option to drop the last octet of an IP address.
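If I'm remembering the analytics.js API right, it's a one-line setting (the tracking ID below is a placeholder):

    // Sketch: enable IP anonymization in analytics.js.
    ga('create', 'UA-XXXXX-Y', 'auto');
    ga('set', 'anonymizeIp', true);  // Google then zeroes the last octet before storing the IP
    ga('send', 'pageview');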
You might be interested in the EFF's Best Practices for Online Service Providers:
> There was some debate internally about whether the better fix was to include the legacy encryption protocols or to just leave the HTTP version of the site running and use Strict-Transport-Security to move capable browsers to HTTPS.
Where can I read about this? Is there any way to display a special "Your browser is outdated" page for the users on WinXP?
Sorry if these seem like basic questions. I am just curious and would like to hear some expert advice.
Eventually, and I doubt we had anything to do with it, IE10 usage dipped below the magic 0.5% (the point at which it costs us more money to support than it earns us) and it was finally unsupported.
The only crappy browsers we still officially support are ancient safari and IE11, both of which are still going relatively strong for reasons we've never been able to fully explain!
https://browser-update.org/ is a great service that does this.
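If you'd rather hand-roll something instead of pulling in a third-party script, a minimal sketch (the message text and styling are made up; Windows XP identifies itself as "Windows NT 5.1", or "Windows NT 5.2" for the 64-bit edition, in the user-agent string):

    // Hypothetical sketch: warn Windows XP visitors that their OS is outdated.
    if (/Windows NT 5\.[12]/.test(navigator.userAgent)) {
      var notice = document.createElement("div");
      notice.style.cssText = "background:#fff3cd;padding:8px;text-align:center";
      notice.appendChild(document.createTextNode(
        "Your operating system no longer supports the encryption this site requires. " +
        "Please upgrade your browser or operating system."));
      document.body.insertBefore(notice, document.body.firstChild);
    }

Keep in mind user-agent sniffing is best-effort; it's fine for a warning banner but shouldn't gate anything important.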
For the case where SSL was broken, unfortunately that wouldn't help at all, because they'd never be able to load the webpage.
"Oh no, this isn't a Mac, it's Windows"
This is a user of a highly secure system, containing user PII, who expected to use it with a 5-year-old browser on XP.
If it's alright for you to answer:
1. What would be the best cross-platform way to proceed? We now have separate agents for Windows and Mac, which causes maintenance hell.
2. Is Chrome Remote Desktop's way of streaming the desktop as video better than images + diffs?
3. Is there any open-source mirror-driver kind of thing on Linux?
Inuvika/Guacamole also support plain RDP, but we didn't use this, just the HTML5 client (browser).
If you want to see what open-source can do then look at Guacamole and go from there.
Don't think that helps, but...
You can support HTTP and the occasional knowledgeable person will suggest you should upgrade. Or you can force TLS with SSLv3 enabled, and suddenly you'll hit a flood of people letting you know you're about to be hacked, based on online scanners. Often complete with requests for a bug bounty.
IIRC, Chrome and Firefox for XP support SNI because they bundle their own TLS libraries, rather than using a system library.
You ought to have more confidence in your writing. BRB stealing all your servers.
I was chatting with a non-engineer friend about why it's often hard to estimate how long tasks will take, and this seems like a prime illustration: the dependencies are endless.
I also love the Easter egg:
"The password to our data center is pickles. I didn’t think anyone would read this far and it seemed like a good place to store it."
The enforcement is stupid (both the previous hack and now the block). For me this actually would be a sign that the workplace isn't quite the right fit for me, if the basic assumption is that I ignore the policies anyway - because that's what this seems to indicate?
Hack indeed. Seems like blocking POST would block posting stuff, while blocking logins allows you to just copy your cookie, and doesn't allow you to view your notifications.
Yes, they do. And I really love it. Because it means that MY bank eats their lunch, because the bank I work for actually UNDERSTANDS how to use technology, while still keeping (very!) strict controls.
And I'm probably biased, but I think we have some pretty great products also (checking accounts with no fees that pay some interest, savings accounts with very good rates, and so forth), so maybe you'll get a good deal as well as a technical focus.
I told them all estimates go up by 2 years since we would need to reimplement everything. It ended up being unblocked a week later.
All roads lead to Stack Overflow these days for programming problems.
Edit: my estimate is wildly off. It's basically the opposite of what I said.
I'd say your 1:20 ratio is just a little bit off :)
I'd say 1:20 is a good estimate if I ignore answers that didn't read my question (which is most of them), but indeed the facts disagree.
A friend works at an investment firm, and has similar restrictions as the above commenter mentioned (no SO, no USB, no printing, etc), as well as pulling his phone out while at his desk or around any other computer being an immediate fireable offense.
* A 'secure zone' where work took place.
* All desktops virtualised, using thin clients.
* All Windows, no admin access.
* Screens, filesystem snapshots, and web access recorded, all the time.
* All software installation subject to approval (e.g. Firefox not permitted, only Chrome).
* Desks fixed in place, all cables in locked cable trays.
* Separate internal-only e-mail system.
* No printers.
* Specially printed notepads & other stationery in the 'secure zone'; no secure-zone stationery to leave or non-secure-zone stationery to enter.
* No cell phones, cameras or laptops permitted (lockers were provided).
* Entry points with human guards and metal detectors.
* No late working outside guards' hours.
While it would have been possible to get around the security if you were inventive enough (e.g. camera with no metal parts) it would be difficult to do so then believably claim it was an accident.
I didn't take the job, because I didn't feel I could be productive with so much bureaucracy.
People do incredibly stupid things. I've seen customer data dumps on web forums.
I knew someone who worked for the scientific civil service and they were not allowed to have a phone with a camera.
I have also been for an interview at a site (HMGC) where you have to hand in all electronics at reception. This was an avowed role, btw, so I am not breaking any laws; the organisation even has job adverts on the local buses.
You need to find a new job.
As far as the headers go, here are my current thoughts on each:
- Content-Security-Policy: we're considering it, Report-Only is live on superuser.com today.
- Public-Key-Pins: we are very unlikely to deploy this. Whenever we have to change our certificates it makes life extremely dangerous for little benefit.
- X-XSS-Protection: considering it, but there are a lot of cross-network, many-domain considerations here that most other people don't have, or don't have as many of.
- X-Content-Type-Options: we'll likely deploy this later, there was a quirk with SVG which has passed now.
- Referrer-Policy: probably will not deploy this. We're an open book.
Expect-CT is one to look at as well.
Basically just tells the browser that Certificate Transparency should be available through the provider (DigiCert in this case).
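Syntax-wise it's a small one; an illustrative report-only value (the report URI is a placeholder) would be something like:

    Expect-CT: max-age=86400, report-uri="https://example.com/ct-report"

Adding the enforce directive is what actually makes the browser reject certificates without acceptable Certificate Transparency information.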
Is it possible to pin to your CA's root instead of to your own certificate? That would make rotating certs from the same CA easy but changing CAs hard (but changing CAs is already a big undertaking for big orgs).
Also, I see your five minute HSTS header ;)
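For what it's worth on the pinning question: HPKP pins are SHA-256 hashes of the Subject Public Key Info, so you can pin any key in the chain, including an intermediate or root CA's, not just the leaf. A pin for a CA certificate (ca.pem as a placeholder filename) is typically computed along these lines:

    openssl x509 -in ca.pem -pubkey -noout \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -binary \
      | openssl enc -base64

That gives exactly the trade-off described: rotating leaf certs under the same CA stays painless, while switching CAs means publishing a pin for the new CA well before the move.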
Do you have references to back this up?
> Referrer-Policy is a matter of choice. It's a useful information for the target site as long as the referrer doesn't contain sensitive information. IMO, most sites shouldn't set this header.
Exactly. I think its primary use is when the original site's URL contains user-supplied input, like a Google Search page.
Wonder what the point is then.
Content-Security-Policy:default-src 'none'; base-uri 'self'; block-all-mixed-content; child-src render.githubusercontent.com; connect-src 'self' uploads.github.com status.github.com collector.githubapp.com api.github.com www.google-analytics.com github-cloud.s3.amazonaws.com github-production-repository-file-5c1aeb.s3.amazonaws.com github-production-user-asset-79cafe.s3.amazonaws.com wss://live.github.com; font-src assets-cdn.github.com; form-action 'self' github.com gist.github.com; frame-ancestors 'none'; img-src 'self' data: assets-cdn.github.com identicons.github.com collector.githubapp.com github-cloud.s3.amazonaws.com *.githubusercontent.com; media-src 'none'; script-src assets-cdn.github.com; style-src 'unsafe-inline' assets-cdn.github.com
Public-Key-Pins:max-age=5184000; pin-sha256="WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18="; pin-sha256="RRM1dGqnDFsCJXBTHky16vi1obOlCgFFn/yOhI/y+ho="; pin-sha256="k2v657xBsOVe1PQRwOsHsw3bsGT2VzIqz5K+59sNQws="; pin-sha256="K87oWBWM9UZfyddvDfoxL+8lpNyoUB2ptGtn0fv6G2Q="; pin-sha256="IQBnNBEiFuhj+8x6X8XLgh01V9Ic5/V3IRQLNFFc7v4="; pin-sha256="iie1VXtL7HzAMF+/PVPR9xzT80kQxdZeJ+zduCB3uj0="; pin-sha256="LvRiGEjRqfzurezaWuj8Wie2gyHMrW5Q06LspMnox7A="; includeSubDomains
Those are 1220 bytes. I'm not sure what they'll compress down to, but it's still non-trivial and not near 0 (anyone want to run the numbers?).
The same pair of headers are 969 bytes for facebook.com and 2,772 for gmail.com.
I don't know what ours would be - since we're open-ended on the image domain side it's a bit apples-to-oranges compared to the big players.
When you take into account that you can only send 10 packets down the first response (in almost all cases today) due to TCP congestion window specifications (google: CWND), they get more expensive as a percentage of what you can send. It may be that you can't send enough of the page to render, or the browser isn't getting to a critical stylesheet link until the second wave of packets after the ACK. This can greatly affect load times.
Does HPACK affect this? Yeah absolutely, but I disagree on "negligible". It depends, and if something critical gets pushed to that 11th packet as a result, you can drastically increase actual page render time for users.
If it helps, I did a blog post with some details about this a while back: https://nickcraver.com/blog/2015/03/24/optimization-consider...
> When you take into account that you can only send 10 packets down the first response (in almost all cases today) due to TCP congestion window specifications (google: CWND), they get more expensive as a percentage of what you can send. It may be that you can't send enough of the page to render, or the browser isn't getting to a critical stylesheet link until the second wave of packets after the ACK. This can greatly affect load times.
I wonder how much of the page can be rendered in 10 packets...
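A rough back-of-the-envelope, assuming an initial congestion window of 10 segments, a typical ~1,460-byte TCP payload per segment, and ignoring TLS record and framing overhead:

    10 x ~1,460 bytes      ≈ 14,600 bytes in the first flight
    1,220 bytes of headers ≈ 8% of that budget
    remainder              ≈ ~13 KB for (compressed) HTML

So a lean, well-compressed page can get meaningful above-the-fold HTML and the critical stylesheet links into that first flight, but every extra kilobyte of headers comes straight out of it.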
Do you send Link preload headers?
BTW, did I blink and miss the "It really is all faster over HTTP/2, even given TLS" bit? My testing for my tiny lightweight sites close to their users (the opposite of what you're dealing with) is that HTTP/2 is slightly slower overall. Even with Cloudflare's advantages such as good DNS. And with the pain of cert management...
Anyhow, thanks for the warts-n-all.
Haha, that page is a priceless time capsule:
Use the Java applet below to search ExNet's main Web pages.
When the ``Status'' indicator stops flashing and says ``Idle'', type key words in the ``Search for:'' box.
The ``Results:'' box will show you the documents that matched your key words, the best matches coming first in the list. Click on any line in the ``Results:'' box, and that document should appear in a new browser window in a few seconds. When you are finished with that document, you can close it without killing your browser.
But wait, in that case the browser will make another DNS lookup and open up a separate HTTP connection!
What's the argument behind LetsEncrypt not doing that? Extended Validation stuff?
But it boils down to there being no practical way for Let's Encrypt to automatically validate that a wildcard certificate is safe to issue.
> If I have ownership of the parent domain example.com then I can freely create and control anything as a subdomain, at any level I choose. Note that here "ownership" is distinct from "control", which is what is validated by the ACME protocol.
Of course if I own a domain, I own all the subdomains. However, being in control of the site served at port 80 for a domain does not mean I own it.
There are network bits we'd have to evaluate heavily as well, e.g. firewall rules - basically the very limited benefits don't make it a priority, yet. When things change there, we'll do it.
For example, if instead of having hundreds of domains serving millions of users with tons of user-generated content you're just serving static content from a single server on a small site, the entire process for you might actually be as simple as just running `certbot-auto` on the production server.
I suspect the difficulty of switching for most sites will fall somewhere between these two extremes.
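To spell out the simple end of that spectrum: assuming an nginx server and example.com as a placeholder domain, the happy path is roughly

    ./certbot-auto --nginx -d example.com -d www.example.com

which obtains the certificate, installs it into the server config, and can then be renewed on a schedule with `./certbot-auto renew`.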
That's exactly what we experienced migrating a bunch of sites to https. There were so many things that we didn't anticipate.
Why wouldn't they use split horizon DNS for this? Seems like the perfect use case
We'd consider it for a .local, when the support is properly there in 2016. Even subnet prioritization is busted internally, so that's a bit of an issue. Evidently no one tried to use a wildcard with dual records on 2 subnets before (we prioritize the /16, which is a data center) and it's totally busted. Microsoft has simply said this isn't supported and won't be fixed. A records work, unless they're a wildcard. So specifically, the <star>.stackexchange.com record which we mirror internally at <star>.stackexchange.com.internal for that IP set is particularly problematic.
TL;DR: Microsoft AD DNS is busted and they have no intention of fixing it. It's not worth it to try and work around it.
If that's the concern, it's probably better to configure switching than to put both in front all the time.