<link rel="shortcut icon" href="http://static4.scirra.net/images/favicon.ico />
You should link as follows:
<link rel="shortcut icon" href="//static4.scirra.net/images/favicon.ico" />
The double forward slash tells the browser to use whatever protocol the current page is being viewed on, which means no security errors if you're switching between http and https!
<link rel="shortcut icon" href="/static4.scirra.net/images/favicon.ico" />
If it doesn't, then what does it do? Default to http?
One cookieless domain is probably fine, but I found that for us four subdomains (static1, static2, static3, static4) gives the best page load performance. It's been a while so I've forgotten the exact reasoning, but the benefit of multiple cookieless domains comes from parallelisation of requests: browsers cap the number of concurrent connections per hostname, so spreading resources across several hostnames lets more of them download in parallel.
If you set it up right it's pretty easy to do; I just have a function that deterministically generates the static URL from a hash of the name of the resource it's linking to:
<img src="<%=allocateStaticPath("~/images/logo.png")%>" />
The function hashes the filename, allocates it to static1, static2, static3, or static4, and prints the path. Hashing is important: you don't want the resource hopping between static servers on every page load, otherwise it won't be cacheable. And since hashes are designed to have a uniform distribution, most pages' static resources should be spread fairly evenly across the four static subdomains (I do tweak it on some pages, like the homepage, where the split sometimes comes out quite unbalanced).
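For illustration, here's a minimal sketch of what such a function might look like (C#/ASP.NET to match the snippet above); the MD5 hashing and modulo bucketing are my assumptions about the approach, not Scirra's actual code:

    // Sketch only: deterministically map a resource path to static1..static4.
    string AllocateStaticPath(string virtualPath)
    {
        byte[] hash;
        using (var md5 = System.Security.Cryptography.MD5.Create())
            hash = md5.ComputeHash(System.Text.Encoding.UTF8.GetBytes(virtualPath));

        // The same file always hashes to the same subdomain, so caching still works.
        int bucket = (hash[0] % 4) + 1;
        return "//static" + bucket + ".scirra.net" + virtualPath.TrimStart('~');
    }

Note the protocol-relative "//" prefix, matching the favicon advice above.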
The staticN.scirra.net subdomains all point to the same folder.
Some people think it's overkill and a waste of time, but it's not. Once it's set up right it's very low maintenance. Also, the pages load very fast :) I'd point you to our homepage http://www.scirra.com but it's mid-move to a new server, so a lot of it is quite slow at the moment!
Page load speed is super important, especially on a visitor's first request. Every extra second it takes, more visitors are going to press back or cancel the request. People can also perceive very small differences in time (~10-30ms), so really every ms counts. A fast page load also sets a good precedent: it tells the visitor your site is going to be one they'll enjoy browsing, because no waiting is involved.
Playing around with Construct has been on my To Do list for a long, long time; I'll take note of the serving performance when I do! :)
Then you can go into Preferences > Advanced after some browsing, and you'll see a list of all the good sites found. You can create HTTPS Everywhere rules directly from there, without ever being annoyed during your normal browsing.
I've been using HTTPS Everywhere for a while now, but it's been almost entirely a vanilla install since I got it. The only exception so far has been HN itself, and only because someone put the new rule in a comment. I then had to use Google to find out how to actually install the rule, as it was pretty non-obvious to me.
My comment was meant as a sincere thanks for mentioning the HTTPS Finder extension. Reading it now, without that clarification, I can understand how it could read as a criticism of the HTTPS Everywhere extension instead. That was not my intention; I merely wanted to express my thanks for making me aware of a complementary extension that adds useful functionality to an extension I already use.
If the downvote was for another reason, please let me know what the reason was so I can try to avoid making the same mistake in the future.
In order to "get credit" for all of the traffic that they send everywhere twitter had to develop a fairly elaborate system of redirections (built into t.co) to make sure that clicks from twitter.com ended up being sent out to the rest of the web with referer information.
It would be a real shame if everyone in the world had to develop a similar process.
Part of me thinks that browsers should start sending referrer information even when you click links on SSL sites, though this change would bring other problems with it.
It is not at all obvious (to me at least) what the best thing to do here is.
Why should we care about retaining referrers? I think the only reason people dislike the idea of losing them is that they've got used to them being there.
I don't particularly like the fact that sites which I click through to can see where I'm coming from, or what I was searching for, so I've installed a Firefox addon called RefControl to get around it. The majority of people don't know anything about referrers though.
I'm sure that brick and mortar shops would also love to know how I was referred to them when I walk through their door. They don't get this information unless I consciously decide to give it to them though. And even though they don't get this information, they still manage to sell products.
While we're at it, let's get rid of User-Agent. No sarcasm intended, I'm serious. It only does bad things.
(We can get into all the evils suggested by http://xkcd.com/869/ , but for the purposes of this argument I'm assuming that web developers and sysadmins are competent and not evil.)
Now, just for fun and laughs, check out what user-agent string Chrome sends. I guess Google assumes everyone is like them and that their poor browser would get blocked if it just said it was Chrome.
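For reference, a Chrome user-agent string of that era looks something like this (version numbers illustrative):

    Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.112 Safari/535.1

It claims to be Mozilla, WebKit, KHTML, "like Gecko" and Safari all at once, with "Chrome" buried in the middle.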
# wget --user-agent="Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420+ (KHTML, like Gecko) Version/3.0 Mobile/1A543a Safari/419.3" http://www.cnn.com/ -O cnn.mobile.html
# wget http://www.cnn.com/ -O cnn.standard.html
# ls -lhrt | grep cnn
-rw-r--r-- 1 pavel staff 29K Feb 24 13:12 cnn.mobile.html
-rw-r--r-- 1 pavel staff 104K Feb 24 13:13 cnn.standard.html
Could happen though I suppose. /shrug/
I don't think that referrers should ever have been a part of the protocol and I don't think that the commercial value of them existing should have any influence on whether or not HTTP continues to include them. Unfortunately, both Google and Microsoft benefit financially from the existence of referrer headers, so I don't see them going anywhere in Chrome or IE at least.
If you wanted to see who was linking to your site, and referrer headers didn't exist, you'd use a search engine. Hell, people would build dedicated search engines which alert you when somebody links to your site.
Referrers are good for identifying where a user came from, or what they were searching for when they landed on your site. Well, sometimes users don't want you to have that information, and most of the time they're completely unaware that you're getting it.
Not in this particular case.
"And referrer headers are much easier than having to build a search engine and being at their whims."
Absolutely. And if your browser sent an HTTP header containing your name, address, sexual preferences and date of birth, that would make things even better for website owners. Just imagine how much better they could target their adverts!
Though in my eyes sending referrers has always been questionable "insecure by default" behavior, as the internal structure of one site is leaked to another. With hindsight, maybe it should have been restricted to the domain name.
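A quick sketch of what "restricted to the domain name" could mean in practice (hypothetical C#, URL invented):

    // Hypothetical: trim the referrer to its origin before sending it.
    var full = new Uri("http://example.com/private/report?user=42");
    string trimmed = full.GetLeftPart(UriPartial.Authority) + "/";
    // trimmed == "http://example.com/": the linking site is revealed,
    // but its internal structure and query strings are not.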
Also, it is very smart to use full-disk encryption, and to additionally encrypt sensitive info on that disk in a separate encrypted file (preferably with something like TrueCrypt, which allows plausible deniability via hidden volumes), if your computer is used for anything important.
Think of the extent of the damage that would have hit HBGary, or any of the many other companies that have found themselves in a similar quagmire, if they had employed some of that computer security knowledge to encrypt mail and required digital signatures before doing anything important (hint: the answer is zero).
You may have a competitor hooked into your mail server for years before you know anything has happened, while you scratch your heads and wonder why they always beat you to the punch on new products and steal your big clients.
You may have a hostile government agency after you for completely innocuous things, like downloading public-domain research articles. In this case, lots of encryption is going to buy your lawyers lots of time even if the judge eventually orders you to decrypt all of it; hopefully the real goods are hidden somewhere they won't find them (like in a TrueCrypt hidden volume, or perhaps "in the cloud" in an encrypted file on Tahoe-LAFS over I2P).
And the meme-ification only makes me feel slightly better about it. I mean, what's it cost Jeff to give a little credit?
I wouldn't have known where it originated from. It's possible/likely the source gave them permission to use it.
To use a straw-man example: Donald Trump famously filed to trademark the phrase "You're fired." Am I stealing if I repeat this phrase? Now how about Amazon's 1-click, or humming "Happy Birthday"?
A hundred years ago, you could own the copyright to a book, but not claim ownership of a phrase or obvious idea. A thousand years ago, you could not effectively claim ownership of a book. Ten thousand years ago, you (probably) could not have claimed ownership of land, at least in any form we're familiar with. These are all things we made up, for better or worse.
Regardless of any person's or tribe's specific opinions on economic issues, private property, or ownership of ideas, I think it should be obvious that breaking society's copyright rules is a different act from stealing, just as manslaughter is different from first-degree murder. That doesn't justify it; it simply acknowledges that it is a different act.
Also, this particular case is laughably mild. The creator has only benefited from sharing this free art snippet and is now internet-famous as a result. I guarantee that Jeff would remove it if asked, and also that the creator would never think to ask, given that this image is strewn across thousands of websites.
There's no other norm.
Your tribe endorses theft.
Or it just doesn't consider an act that doesn't deprive someone of anything they had to be theft.
Since Hyperbole and a Half is Creative Commons licensed (CC BY-NC-ND), it looks like its licensing requirements would be satisfied by attributing the original source ("Proper credit includes a prominent, easily visible link to the source of the material you want to use...").
I think making sure that images in your blog post are properly licensed is difficult*, but on the same level of difficulty and importance as determining a license for a software project.
* I have personally had issues with this, which I wrote about at http://www.marteydodoo.com/2011/01/19/licensing-is-hard/
(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
(2) the nature of the copyrighted work;
(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
(4) the effect of the use upon the potential market for or value of the copyrighted work.
But you cannot claim authorship. That violates 17 USC 106A rights of attribution and integrity.
Woah ... I've just re-read 106A for the first time in a few years. There've been a few additions to that section, including a mess of integrity, modification, and destruction claims.
or... have I been working on this bug for way too long?
They'd need to serve a different cert on a different IP for that (ignoring SNI, since not all browsers support it).
However, Chrome surprised me the other day by not allowing me to continue to a site that didn't have a valid certificate (I was MITMing myself using the Paros proxy to test something). In other words, we don't need stricter warnings; the warnings must become errors for them to be noticed.
See Daniel J. Bernstein's pet project CurveCP (http://curvecp.org/). He also gave a talk on it at 27C3 (http://events.ccc.de/congress/2010/Fahrplan/events/4295.en.h...).
This could be addressed using http://convergence.io/
2. Way easier, though it leaves some tell-tale signs you can find, is to simply put yourself between your victim's browser and the server, and rewrite all the links that come back into insecure links that pass through you. You then re-encrypt the traffic as you pass it on to its final destination, while being able to see everything that happens. This is trivial to set up, but can be defeated simply by using bookmarks that specify HTTPS.
This won't go away until everybody is using 100% SSL and HTTP (unencrypted web traffic) is turned off in browsers.
DOWNLOADING AND INSTALLING ANYTHING OVER AN UNSECURED NETWORK IS ALWAYS A BAD IDEA.
3. For the very determined, it is possible to recover the symmetric key a particular SSL session is using if you have some luck, some skill, and some time (about 30 minutes).
This requires a protocol change to SSL. We've known about (theoretical) vulnerabilities for 10 years, yet most sites still run old versions of SSL. Given how slowly people like banks update infrastructure technology, I don't see this one going away for a long time.
This is a rather poor solution. The longer term one is Certificate Transparency: http://www.links.org/?p=1219
2) is solved with HSTS (see the header sketch after this list). You can contact me (@chromium.org) to have your site built in. There isn't a notability requirement.
3) The BEAST attack was tough to pull off and is fixed in Chrome, FF10 and IE.
4) Yep, cookies must be marked secure. HSTS can also fix this by eliminating the insecure requests. Even with secure cookies (but without HSTS), a MITM can still set the cookie (e.g. to log you in to their account before you hit 'send' and then log you back into yours before you notice the problem).
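For the record, a minimal sketch of 2) and 4) in ASP.NET (header value and cookie name are illustrative; sessionId is an assumed variable):

    // 2) HSTS: tell the browser to only ever use HTTPS for this host (one year here).
    Response.AppendHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");

    // 4) Secure (and HttpOnly) cookies: never sent over plain HTTP.
    Response.Cookies.Add(new HttpCookie("session", sessionId) { Secure = true, HttpOnly = true });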
I don't understand that. If I serve http://example.com/mypage which has a link to http://mint.com/justin, you won't convert that to https://mint.com/justin, right? Even if example.com has HSTS enabled? 'Cause that would assume that mint.com has https, or else the whole thing breaks.
'Cause in that case a man in the middle can just insert links to other domains (say, http://examp1e.com/myotherpage, with a digit one, when I was serving a link to http://example.com/myotherpage) and still have the attack work. Like the GP said, only starting from an HTTPS page would solve this.
But you're the expert and I'm not, so what am I missing? :-)
EDIT: and well it doesn't seem that mint.com even has HSTS enabled... so bad example :P
When I was working on WAN optimizers I actually did this during research. All the various sites I visited still proudly told me how they were "Verisign Trusted" and even clicking on "Verify" links would tell me how verified and correct it was.
The UI in browsers tries hard, but in reality users want to access the site and will hit OK to get there. Convergence is nice (if you run Firefox), but it is of significantly less help with corporate/intranet sites.
Of course you may be using it to impersonate external websites to your internal users, but the circumstances under which that may be an OK thing to do is a policy question that's still evolving.
The problem with SSL traffic is that it is encrypted and doesn't repeat, even for identical underlying data, and hence can't be compressed, nor can it be modified. This significantly hurts performance. To work well, the SSL would need to be stripped off; the traffic compressed, read ahead, etc.; sent over the WAN; and then the SSL put back on. (The communication between the WAN optimisers was itself within IPsec or SSL.) SSL is designed so that you can't pull shenanigans like this unless you have the servers' private keys, or you re-sign the traffic with your own CA that can generate the needed certificates on the fly and is "trusted" by the user.
Many internal corporate services have moved to SSL and branch office users need to access them. Think about benefits systems, HR, documents, accounting, sales forecasting and tracking etc.
Convergence leaks information about which sites you are visiting for the first time with the current key only if your bounce notary colludes with one other notary.
http://www.youtube.com/watch?v=Z7Wl2FW2TcA (Start from 35m35s)
Take, for example, a forum: we can force everyone onto HTTPS quite easily, but as soon as someone hotlinks a non-HTTPS image in a post, it'll throw up security warnings which are (in my opinion) overly dramatic and very unfriendly to the user experience.
User-submitted content and HTTPS can be a pain to get right, and on some platforms, like common bulletin-board software, it's so time-consuming to modify that it's just not worth doing.
SO solves this by using imgur as a proxy; a lot of sites unfortunately don't have that luxury, or the technical expertise to implement something similar. It's also a bit wobbly where copyright law is concerned.
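For anyone wanting to roll their own, the imgur-style approach boils down to rewriting hotlinked URLs through an HTTPS endpoint you control. A hedged sketch, with a hypothetical proxy host and signing key:

    // Hypothetical: rewrite http:// image URLs through our own HTTPS proxy.
    // Signing the URL stops the endpoint being abused as an open relay.
    string ProxyImageUrl(string originalUrl, byte[] signingKey)
    {
        using (var hmac = new System.Security.Cryptography.HMACSHA256(signingKey))
        {
            byte[] sig = hmac.ComputeHash(System.Text.Encoding.UTF8.GetBytes(originalUrl));
            return "https://img.example-forum.net/proxy"
                 + "?sig=" + BitConverter.ToString(sig).Replace("-", "")
                 + "&url=" + Uri.EscapeDataString(originalUrl);
        }
    }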
What about the case where you're virtual-hosting many sites on one IP address? Since the SSL handshake occurs prior to any HTTP data being sent, and the browser will reject a server certificate whose hostname does not match, you're normally restricted to one HTTPS domain per IP address.
I gather there are TLS extensions to support SSL vhosts (SNI), but I don't know how widespread they are. There are also methods of including more than one hostname per certificate, but I'm assuming that when you purchase an SSL cert from a registrar, you're going to be restricted to one host.
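Incidentally, you can check whether a server supports SNI with openssl (IP and hostnames illustrative); if the two commands return different certificates, SNI is working:

    # openssl s_client -connect 203.0.113.10:443 -servername siteA.example.com < /dev/null
    # openssl s_client -connect 203.0.113.10:443 -servername siteB.example.com < /dev/null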
This would be a fairly lightweight way for me to say which CA(s) I use for a particular domain.
False certificates would only be a problem in man-in-the-middle attacks, or if the attacker can alter information in DNS. In both cases the attacker could also fake the information about valid CAs.
I may be dreaming.
However, I do believe that we should investigate ways of authenticating larger downloads automatically.
that's "simple"! you've to solve the certificate issues. Easy as pie. I'll be waiting.
Have a source for that?
Of course, every other major public email provider in the world stores email in plaintext too, so I don't get how this is a knock against Google specifically.
Cryptographic hashes (even keyed ones) are expected to be one-way functions.
For ASCII text, modulo 13 is a reversible operation (a/k/a rot13). It's not key-based, may not be a hash, and I'm not aware of any specifically key-based hashes, but that's along the lines of what I was thinking.
Fully admitting winging this one though.
So when you wrote "key-based hashes", I interpreted that as meaning a cryptographic hash-like function with key input, e.g. HMAC the "Keyed-Hash Message Authentication Code".
Modulo 13 is different from rot13. Modulo 13 is actually a hash function, whereas rot13 is a permutation.
If rot13 took a key (e.g. if it were rotN instead) it would make a primitive cipher. But it doesn't, so it behaves like a cipher that is always used with a fixed key, or a cipher whose key is already decided in the context of the discussion.
The process of applying a specific key to a cipher is called "keying". So just to make things even more confusing, we could perhaps refer to rotN<N = 13> (AKA "rot13") then as a "keyed cipher".
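To make the distinction concrete, a quick C# sketch (key and message invented): a keyed hash like HMAC is one-way, while rotN is trivially reversible:

    // Keyed hash (HMAC): there is no feasible way back from 'tag' to the message.
    using (var hmac = new System.Security.Cryptography.HMACSHA256(
        System.Text.Encoding.UTF8.GetBytes("secret-key")))
    {
        byte[] tag = hmac.ComputeHash(System.Text.Encoding.UTF8.GetBytes("attack at dawn"));
    }

    // "Keyed cipher" rotN: rot13 (n = 13) is its own inverse, so it hides nothing.
    static char RotN(char c, int n)
    {
        if (!char.IsLetter(c)) return c;
        char baseCh = char.IsUpper(c) ? 'A' : 'a';
        return (char)((c - baseCh + n) % 26 + baseCh);
    }
    // RotN(RotN('x', 13), 13) == 'x'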
Google is not storing hashes of your emails instead of the emails. How is anything under discussion related to passwords?
Says encrypt all the things...
This is too good to be true.
1% of what? Sure, if you're doing a lot of database reads/writes for every HTTP request, then yes, that makes sense. I guess in that case the actual HTTP would account for (say) 0.1% of CPU load – which would make HTTPS ten times more expensive than HTTP at the protocol level.
I think we should compare the resource consumption of HTTP and HTTPS connection creation/maintenance/teardown relative to each other, not relative to the whole process of handling a request: querying databases, building the page, and sending it to the browser.
I'm no expert in this matter, so I could be (and probably am) completely wrong. That assertion sounded counter-intuitive to everything I've ever heard, and as it was expressed a little vaguely, I doubted it.
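For what it's worth, the connection-level comparison suggested above is easy to measure with curl's built-in timers (URL illustrative): time_connect covers the TCP handshake and time_appconnect additionally includes the SSL handshake, so the difference between them is roughly the extra per-connection cost of HTTPS:

    # curl -o /dev/null -s -w "tcp: %{time_connect}s  tcp+ssl: %{time_appconnect}s\n" https://www.example.com/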