Not only did it not support secure connections (which on its own is kind of shitty, but not laughing-stock material IMO), but it also failed to fail elegantly when you attempted to make a secure connection.
Just like with Google: Google Internet Authority. Interesting to see big companies not trusting intermediate CAs anymore, to the point that they go to the lengths of becoming a CA themselves. However, it could also be a cost-effective strategy.
With the number of subdomains they use, and seeing as each of them has a browser with significant market share, it's a no-brainer. I'm frankly surprised it took so long for them to bring such an essential security task in-house.
Wildcards are not as useful to massive organisations as you would think. They wouldn't want to have a single public/private pair (that is essentially what a certificate is: the public key to your private one, signed by a 3rd-party key that the 2nd party has in its trust list) for *.microsoft.com (nor would Google for *.google.com), as a single all-powerful key could be far more hassle should it get into the hands of an incompetent or malicious individual/team. If they had to revoke the certificate for *.microsoft.com then there would be a hell of a lot of administration work to be done to reconfigure each part of their infrastructure and renegotiate the relevant internal trust relationships between parts of that infrastructure using the new keys. While having different keys for everything is a burden when all is going well, the burden is worth taking on the off chance that something somewhere does go badly wrong: the damage caused by any given problem can be limited.
We have wildcard certs for each of our properties, but the resources using those certificates are many orders of magnitude less numerous than the resources covered by the name of a multinational monster. And even though we use a wildcard for internal resources, we get specific keys generated and signed for client-specific stuff (if we host any service on <client>.<ourdomain>.<tld>, for instance), just as we have different SSH keys and such for accessing information sources they provide for integration purposes: not having one all-powerful key limits the potential damage (and work involved) should any particular key/sub-key become compromised. If our internal key were to be stolen by a malicious entity, or accidentally made public by a mistake on our part, then no client-specific resources would be affected (of course, to ensure this separation you need to distribute access to the private keys carefully so that they can't all get compromised in a single event).
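For what it's worth, here's a rough sketch of the "separate keys per service" approach, assuming a recent version of Python's third-party cryptography package; the example.com hostnames are made up. One fresh key and CSR per hostname means a single compromise or revocation stays contained:

```python
# Sketch: one private key + CSR per service, rather than a single wildcard
# key shared everywhere. Hostnames below are placeholders for illustration.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

def key_and_csr(hostname: str):
    # A fresh 2048-bit RSA key used only for this one hostname.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]), critical=False)
        .sign(key, hashes.SHA256())
    )
    return key, csr

for host in ["mail.example.com", "portal.example.com", "client-a.example.com"]:
    key, csr = key_and_csr(host)
    # Each CSR goes to the (internal) CA separately; revoking one certificate
    # later doesn't touch the keys of any other service.
    with open(f"{host}.key.pem", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))
    with open(f"{host}.csr.pem", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))
```

The point isn't the tooling, it's the structure: the blast radius of any one leaked or mis-handled key is a single hostname, not the whole domain.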
I see, hmm, yeah, indeed it'd be much more desirable to have your own CA if you have such a huge infrastructure. They could even issue a new public/private keypair per server for compartmentalization. Interesting idea.
Setting up SSL/TLS on a single server in a virtual environment that you don't even have to manage is a different story than getting major changes tested and deployed on a huge multi-billion dollar, multi-tenant distributed system spanning not just the globe but multiple teams, languages, people and requirements.
Just so I'm clear, your argument is essentially: we should be more impressed because they designed it in a way that made it difficult for them to do this?
In my opinion, anyone should be able to get an A with a free certificate, or at least nothing beyond administration costs. Certificates cost nothing to make. Anyone paying a dime is just helping keep up the illusion that it is expensive.
In other words, don't use elliptic curves? And, therefore, don't use forward secrecy? Does current browser support for curves even allow you to set up a "NIST-free" ECDHE TLS server?
There's also DHE, which is not "NIST-corrupted", I guess. As far as I know, in theory it should be possible to use the Brainpool curves in TLS, but I haven't seen such a thing in actual use.
You're using the same argument RSA used when they decided to just keep Dual EC DRBG, because it was "too late to change", even though they knew it was a backdoor. Granted, I think they are lying and did it on purpose, but even their lie is pretty bad logic.
They need to talk to Google, Mozilla and others, and decide on using a new set of safe curves in their browsers. Using a broken one is not a solution.
The "NIST corrupted curves" you refer to are, for all intents and purposes, the Internet standard curves. Microsoft could provide a configuration that used only the Brainpool curves, but no browser would be able to talk to them.
I thought this was inherent in following a link from an HTTPS resource to an HTTP resource with a different authority: that the browser would never expose the secure URL (via header or script state) to the insecure content.
This is correct. Browsers will not send the referrer header when moving from HTTPS to HTTP URIs. This can easily be solved by converting your website to HTTPS only, at which point search engines will index your HTTPS URLs and you will begin to receive referrer headers once again.
The goal is to avoid leaking information in the URL that might be sensitive. The fact that the user is coming from Bing is not sensitive, and that information might be important to webmasters. The query might be sensitive and thus should be stripped.
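If you want to watch this behaviour yourself, here's a throwaway sketch using only the Python standard library (the port number is arbitrary). Serve it over plain HTTP, link to it once from an HTTPS page and once from an HTTP page, and compare what shows up in the log:

```python
# Sketch: a tiny HTTP endpoint that just logs whatever Referer it receives.
# Clicks arriving from an HTTPS page should show no Referer at all; clicks
# from an HTTP page carry the full referring URL, query string included.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RefererLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        print("Referer:", self.headers.get("Referer", "<none>"))
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"check the server log\n")

if __name__ == "__main__":
    HTTPServer(("", 8080), RefererLogger).serve_forever()
```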
https://news.ycombinator.com/item?id=5576041 (8 months ago)
https://news.ycombinator.com/item?id=6937686 (1 month ago)
http://www.zdnet.com/bing-is-fine-insecure-as-ever-but-fine-... (April 2013)
> Bing has never supported secure connections...