You can. It's harder than it sounds, but you can do it safely.
The problem is, having established a secure channel with SSL, you're still imperiled by other browser problems.
For instance, the client might need data that is stored encrypted on the server, and you want to do the decryption on the client.
So, I'm actually going a step past the obvious argument against jscrypto ("this is an inferior way to build a secure channel"). Not only is it an inferior secure channel, but it's also an inferior and dangerous place to try to build crypto functionality even if you're using SSL/TLS to establish a secure channel.
Or the server actually doesn't want to store unencrypted data.
Indeed. Or simply because the same party owns the server and the client.
> Just as all programs evolve towards a point where they can read email, and all languages contain a poorly-specified and buggy implementation of Lisp, most crypto code is at heart an inferior version of PGP.
We've been doing this on the desktop for years.
I'd like to pose a question to the authors:
- A chicken-and-egg problem
- A malleable runtime for apps with admin access
- Code verification being defeated by other hostile apps on the system
- Inability to trust the secure keystore, because of the chicken-and-egg problem and other hostile apps installed prior to the crypto app's installation
- The weakness of pseudo-randomness
It didn't hold up well against attack there, but there was usually an underlying bit of assumed trustworthiness on the local network that shifted the blame.
But protocols designed for enterprise desktops always turn out to be an unmitigated disaster when run across a hostile internet, particularly from a security perspective.
Then again, the original version of it, when it was designed for enterprise desktops, would not be particularly effective.
It may work acceptably for VPN-like applications, but how does it work for actual internet applications? E.g., how do you enroll new clients into the authentication realm over the internet?
The problem with enrollment is more or less the same as the problem with enrollment using any other crypto trust scheme. In the most secure situation, you'll want to start the process in person.
In practice, with kerberos you have to trust the KDCs holding your keys. With asymmetric systems, you have to trust the CAs who have signed the keys of the parties you are communicating with.
I agree that it isn't a good solution for what we might call "web scale" applications. It is a reasonable way to do password authentication, but federated authentication would require establishing cross-realm trust between identity providers and relying parties.
(Note: in this proposal I originally supported the idea of using a browser plugin, although now I think the functionality should be baked directly into the browser.)
By adding an encryption="" attribute to the HTML markup, and having the user instruct the browser on how to use a private key on their system, you could encrypt and decrypt data in the browser without requiring a trusted server.
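As a sketch of what such markup might look like (the attribute names and values here are entirely hypothetical; no browser implements this):

```
<!-- hypothetical: the browser would decrypt the ciphertext below using a
     private key the user has registered locally -->
<div encryption="aes-256-gcm" key-hint="my-local-key">
  BASE64_CIPHERTEXT_HERE
</div>
```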
Forget about cross domain scripting attacks, now you can't trust JS from the point of origin. Yet at the same time, you'll probably want it to be able to do all sorts of fancy AJAX-y stuff on the decrypted content. (Certainly on other parts of the page.)
Yes, potential functionality would be lost, but that's always part of the trade-off for security.
But I don't think the trade-offs would be detrimental, from a usability standpoint.
Their entire product is based on the model of in-browser cryptography of your credential payload.
You can read more here: http://www.clipperz.com/security_privacy
and numerous blog entries and other articles around the web; but it's open source, so you're welcome to review it yourself.
I'd love to see the author's take on the Clipperz implementation.
At first glance, without going so far as to create an account, they do a lot of stuff right.
I note that the HTTP session cookies set pre-authentication are not 'secure'. I.e., they can be sent in the clear over HTTP connections as well. This may or may not be a real problem in practice.
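For reference, the fix (if it matters) is one attribute on the server's Set-Cookie response header (cookie name made up):

```
Set-Cookie: session=abc123; Secure; HttpOnly
```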
All the big companies are making the change to SSL. Sure it may seem expensive, but how many engineer hours will you throw away on cutting those costs? Will those hours have cost you even more? Further: what is the cost if your engineers' implementation is poor and easily broken?
You spend money on SSL, SSL tech gets cheaper and faster. Rinse, repeat.
Or am I missing something?
E.g., http://www.matasano.com/ sources http://use.typekit.com/zpb4ovg.js. But there's no https://use.typekit.com. How are you going to fix that? It takes a village.
I completely agree with this article, but why do so many security researchers try to write like cartoon samurai?
I didn't submit the article as a story; in fact, I've been sort of dreading it being submitted ever since I wrote the comment that had the link in it.
"Ever read the story of the radioactive boy scout? He bought an old clock painted with radium and found a vial of radium paint inside."
Tl;dr: A MITM could serve you a fake implementation of said crypto and you could very well be encrypting your content with ROT13 while thinking you're getting some NSA-grade security or what have you.
My initial instinct is "browsers should expose some native crypto functions which have always-preceding namespace priority" but after thinking about it more, it doesn't matter. The attacker could just push code that leaks data before it even hits the crypto functions.
Looks like the only workaround is to use SSL/TLS.
(This is all assuming that the implementation of the crypto is solid, of course.)
Throw in ssl and that fixes that problem.
Putting aside how poorly documented (and designed) the JS client is, what they use it for is authentication/authorisation, not... actually encrypting.
All requests from the JS client are signed with an RSA private key; the server keeps a copy of the public key and verifies any incoming requests.
The private keys can be stored in html5 local storage (encrypted if you want).
The keypairs are initially generated on the server.
When they first started making this available it was being served over plain text http, now they use SSL for the reasons above.
The idea is to remove the need for servers to store username/password combinations. The user's account ID is a SHA-256 hash of their public key.
https://glbse.com and the JS client https://glbse.com/client/glbse/ - as I said, the UI on this is terrible, but it's something they're working on.
The thing is, the browser still depends on the server for the crypto implementation EVERY TIME the user logs in to use the application.
Native applications also suffer from this, since any download of the native app still depends on the hosting server not being compromised. However, the vulnerability applies only when the application is downloaded, instead of over and over again on each use, as with a web application.
The only way around this I can think of is using extensions. That way, the trust issue happens once, during the download of said extension. Done this way, the implementation would be almost as secure as a native application.
I personally think that's the best bet right now.
The writer also ignores that there are potential solutions to many of the problems pointed out:
In case of random number generation, check out seedrandom: http://davidbau.com/archives/2010/01/30/random_seeds_coded_h...
Furthermore, and most importantly: by using SSL/TLS, the data sent by the browser to the server is encrypted as it passes through the network, but remains readable by the server when it reaches it. With client-side JS crypto, the server cannot read the sensitive data. If the writer does not trust the server to send correct JS code, how can he trust it with the sensitive data itself, in plaintext - which is what he is doing by using SSL/TLS? As I mentioned above, there exist browser plugins for a few projects to verify the integrity of the JS crypto they use.
It's hilarious how you completely ignore the much more pressing issues in HTTPS that I've pointed out. The server can read plaintext in the case of HTTPS, which is mitigated in JS crypto; not to mention that things such as SSLStrip and the COMODO hack have put many chinks in HTTPS's armour - HTTPS is a technology that derives a lot of trust from being able to authenticate the server, whereas we've seen authorities such as China create entirely passable fake certs to fool dissidents.
If you are worried that China may own one of the root certs in your browser, remove the certs you don't trust. Hell, remove all the certs. Every mainstream browser allows you to do that. In most of them, you can even set up permanent exceptions for Amazon.com after you strip your certs out.
Meanwhile, HTTPS/TLS has over a decade of the most intensive study anywhere on the planet, and your favorite half-assed ad-hoc bespoke random "crypto-cat" scheme... does not.
What? I don't see how that's necessarily true. The crypto code can be verified and if it turns out to be fine, then the server shouldn't be able to read the plaintext "no matter what." That's silly.
> Hell, remove all the certs.
Is this seriously your solution?
> ...does not.
This borders on ad hominem - under this logic, we should never strive to create new techniques but always rely on 10+-year-old ones. There will always be new techniques that need testing, and the fact that older ones exist is not sufficient to prevent the study and advancement of new research.
Yes. If you think China is intercepting your HTTPS traffic, remove all the root certs in your browser. Then browse to the key sites you're worried about protecting and create exceptions for each of them. China will not be able to use "fake certs" to intercept traffic to those servers.
Yes. In cryptosystems, 10+ years of study does count for a lot.
I'm truly sorry, I follow you on Twitter and actually really respect your opinion, but that's just nonsense. The HTML, CSS and JS can all be verified, either by a plugin, or by researchers studying what the server is sending, or in a variety of other ways. This ultimatum you're giving is silly.
> Yes. If you think China is intercepting your HTTPS traffic, remove all the root certs in your browser. Then browse to the key sites you're worried about protecting and create exceptions for each of them. China will not be able to use "fake certs" to intercept traffic to those servers.
That is not a real solution.
> Yes. In cryptosystems, 10+ years of study does count for a lot.
Of course - I never suggested it didn't - but that doesn't mean that new research can't undergo and survive skepticism.
Neither is YOUR FACE.
We don't write comments like this here.
Are you genuinely trying to be funny or is this OK coming from you on HN?
Sorry if this appears kvetchy, but clearly I need to get the rules around here straight before I open my mouth again.
There are definitely major problems with HTTPS/TLS as a security system, but I think most of them are UI/UX issues.
What that means is we get a choice:
* Rebuild everything from scratch and spend 15+ years testing it back into reliability, or
* Rebuild just the UX for a protocol we otherwise know to be sound.
Trying to "fix" this by rolling your own ad-hoc encryption (while ignoring the authentication issue entirely) is completely missing the point.
This matters because you can literally write a HOWTO that my mom could follow to get a browser configured so that China can't snoop on (many of) your HTTPS connections. No code required.
This is an authentication and a trust issue. JS crypto does not solve this at all.
> What about the fact that the server can still read your data? Again, how can you trust the server with your plaintext if you can't trust it to serve you good JS crypto code?
If you don't trust the server, why are you sending it the data? And how is JS crypto any better?
If you're trying to bash SSL to promote JS crypto as a better alternative, you should probably choose problems that JS crypto doesn't also have.
> There have been attempts to mitigate this problem in JS crypto anyway, using browser plugins that perform integrity checks.
This isn't JS crypto. This is "browser plugin crypto, that we're choosing to compromise by tacking on a huge JS part that someone can backdoor".
JS crypto remains better since there exists techniques to verify the crypto, and the server does not receive plaintext.
> This isn't JS crypto. This is "browser plugin crypto, that we're choosing to compromise by tacking on a huge JS part that someone can backdoor".
That's ridiculous. Integrity checks exist for many cryptosystems; they don't perform crypto - they are an extension.
When a server serves you a page, it basically owns that context. Crypto can secure further communication between that page and the server (via SSL), but there's basically no room to hide something from the server, and if there was that would probably be considered a browser bug at some level. If the server does not receive plaintext, which a network-level analysis may say it does not, it is only because the server has graciously consented to not receive some plaintext, not because you have actually somehow built a webpage that can prevent the server from getting that plaintext. One tweak to the server, it serves slightly different JS and offers a slightly different API and bam, it's getting plaintext.
Web pages just don't have enough identity of their own to do anything like this separate from the server, without further extensions (which is why the topic of plugins keeps coming up).
You are giving the plaintext to code that has (at best) the same trust level as the server itself.
What data is it safe to give to that code, that isn't safe to send (in a way that can't be mitm'd) to the server?
> That's ridiculous. Integrity checks exist for many cryptosystems; they don't perform crypto - they are an extension.
I'm not sure what you're saying here - you might want to clarify exactly what you mean.
In doing so, perhaps you could tell me why you think "browser plugin validator + untrusted JS crypto code" is more secure or otherwise better than "browser plugin crypto with no JS".
When it comes to security, that is actually a negative.
Furthermore, it's possible to verify the code manually, although tediously, whereas a CA impersonation is perfectly transparent and very difficult to detect. The validator plugin usually does the job.
And vulnerabilities in the trust chain are not HTTPS vulnerabilities - they are trust chain vulnerabilities. It's straightforward to address them yourself simply by not trusting any CAs you don't trust.
Yes, I don't see why not.
> It's straightforward to address them yourself simply by not trusting any CAs you don't trust.
> a plugin can be verified locally at length whereas a compromised CA can issue a fake cert that is very, very difficult to detect.
Which is it?
Either you're claiming that verifying a plugin is easier than not trusting CAs that you don't trust, or you're arguing two completely contrary positions.
And this then brings us back to the earlier point, which is why not just use the plugin for crypto in the first place?
The plugin is the same for all users. Not everyone who downloads and runs GnuPG knows how to verify it, but some people might go as far as to either compile it from source or even decompile it. If they find something suspicious, they will report it so that other people who have downloaded the software can be aware. The same goes for the plugin (except that the plugin is easier to verify than a compiled binary).
My point remains: a plugin can be verified by others who have downloaded it - a compromised CA is extremely difficult to detect.
1. Just trusting the cert, say for Amazon, directly rather than trusting the CA chain of trust (distrust all the root CAs and just trust Amazon's cert)
> And this then brings us back to the earlier point, which is why not just use the plugin for crypto in the first place?
Comodo was breached and the certificates were revoked using the theoretically-sound, tested, and implemented PKI solution.
Weeeellll actually... Mozilla, Google, MS had to rush out a code patch to manually blacklist the fraudulent certs. Revocation checking is implemented so weakly in browsers and other HTTPS clients that it just doesn't work when it comes down to it.
Furthermore, and most importantly, by using SSL/TLS, the data being sent by the browser to the server is encrypted as it passes through the network, but remains readable by the server when it reaches it. Whereas with client-side JS crypto, the server cannot read the sensitive data.
This is a shortcoming in the argument made in the article, because it wrongly assumes that we would always want the server to decrypt our data.
I am not saying JS does solve this, but SSL doesn't, and it isn't even meant for that use case.
It can mean it comes from a source you trust.
Or it can mean you trust it to do something that you know and only that.
Which one do you mean?
The only way to verify the behavior of code is to do the source auditing yourself, which the average Joe would not be able to do anyway. So you're back to trusting entities instead of behavior - in which case we're back to CAs, or to trusting individual certs directly.
Do you trust them?
If so, why not have them do the encryption rather than relying on untrusted code to do it?
At least with JS crypto, amateurs with Wireshark and Firesheep probably can't steal your stuff. But since that's all the protection it offers, it shouldn't be advertised as anything more ("This NOTEr 2.0 App uses AES-256 in JS over HTTP to provide NSA GRADE security and privacy!"). In that sense, it probably causes harm to use JS crypto, but JS crypto isn't absolutely ineffective.
Edited to clarify since apparently I can't reply to a child comment: There are tools that require next-to-no technical expertise to passively observe HTTP sessions. There are no such tools that I'm aware of to execute an HTTP MITM without control of the network infrastructure.
HTTPS/TLS is designed to be secure against adversaries with millions of dollars dedicated to attacking it. Why am I even dignifying systems that rely on the attacker's inability to use a command line, or write Perl code?
JS encryption is certainly less secure than SSL, and may give a false sense of security to people who want to use it as a replacement.
On the other hand, it may be better than nothing.
Bruce Schneier wrote about why encrypting the video feed of predator drones was not as straightforward: http://www.schneier.com/blog/archives/2009/12/intercepting_p...
> The command and control channel is, and always has been, encrypted -- because that's both more important and easier to manage. UAVs are flown by airmen sitting at comfortable desks on U.S. military bases, where key management is simpler. But the video feed is different. It needs to be available to all sorts of people, of varying nationalities and security clearances, on a variety of field terminals, in a variety of geographical areas, in all sorts of conditions -- with everything constantly changing. Key management in this environment would be a nightmare.
Despite what Hollywood says, using a command line does not make you a super leet hacker. If your security system can be broken with a command line, then I doubt you care that much about the data you're securing. Just put the data in plain text and call a spade a spade.
They can't with SSL either.
The attacker triggers some kind of bug or odd mode in a router. That router happily floods packets to the wrong places, but still only accepts them from the right place.
The attacker can decrypt your WiFi plaintext, but not in realtime. You've already received the response from the server by the time the attacker can craft one for you.
The attacker is afraid of getting caught (he possibly needs to be on-location, and can't work through an intermediary), and is therefore restricted to passive methods.
As it happens, all three of these are pretty unlikely. They aren't impossible, but why put forth a large effort for JS crypto when you could put forth a small effort for SSL?
The attacker broadcasts a SSID of 'Free Public Wifi', or one similar to what you normally use, then DoSes all other access points when you try to use them.
The attacker is on the same ethernet broadcast domain and replies to your DHCP query quicker than the legitimate DHCP server does.
The attacker poisons the router's ARP cache and impersonates your MAC.
The attacker leverages the unauthenticated nature of Spanning Tree Protocol (STP).
The attacker pwns your home wifi router through a default password on its admin page or any number of other bugs.
The attacker messes with your DNS name resolution for sites of interest. Remember that time DNS resolution for some S. American (and a few US) ISPs got routed through the great firewall of China?
The attacker advertises a more-specific route for your IP address or that of a target site. Remember when that little ISP in Pakistan took YouTube offline?
The attacker could be your ISP working under order of some government. Yours, that of a country you're visiting, or of some territory the packets pass through along the way.
Or you could be using a hotel, airplane, library, mobile, etc. internet connection where transparent intercepting MitM proxies are standard fare. How good do you think the hotel staff are at securing that internet billing appliance they bought?
And of course, the attacker could always cut a cable somewhere and install a tap.
Do all of these things.
You still can't break my secure HTTPS channel to Amazon.com.
Either way: every vulnerability we've found in SSL/TLS (the protocol) makes me more confident in it. Those findings are the product of millions of dollars of attention. Why would we think that any alternative to TLS wouldn't have the same flaws, or worse ones?
I have no direct knowledge. I was personally extremely careful to never test any actual sites.
But unless they were running MS IIS or certain brands of SSL offload devices on every accessible host with the cert on it, they were likely willing to conduct client-initiated renegotiation. I know they use client certs for EC2 stuff, that likely involves a server-initiated renegotiation opportunity as well.
Just checked, they still haven't patched for the actual protocol fix, RFC 5746
Frank Heidt came up with an awesome little exploit that isn't affected by anti-CSRF mitigations. Just find one single URL under HTTPS which redirects to HTTP. (E.g., https://www.amazon.com/) Inject a request to that and now you have a plain HTTP request to play with. You could enter an sslstrip scenario, or replace the cert with a legitimate cert for a phishing site at a point where the user isn't expecting the URL to change. How many users stop browsing after receiving a mixed content warning, I wonder?
Other researchers showed you could replace the EV cert with a simple DV cert at that stage and the browser would still show the green bar.
Absolutely. Those who criticize TLS need to understand why it is the way it is and either propose improvements (the IETF [TLS] mailing list is open) or propose a replacement that really does a better job delivering on all the security properties, not just one or two.
That's a blog post (and supporting data drop) that I would be interested in reading. Based on my experiences at a former employer the rate of cart abandonment on mixed pages is about 20% greater than on fully valid pages (someone hard coded a logo reference into a template) but the sample was fairly small and I would be curious to know what the difference would be on a broader sampling of sites.
If you were very selective about it, and pulled it off without any disruption, you could get away without anyone noticing. Or, do it from a third-world ISP, and say "It was an accident whaduyagonnadoaboutit".
I've worked on software that literally (intentionally) publishes BGP updates every ~10 seconds. It didn't bother anyone, because it wasn't playing with many addresses.
As far as self-contained Java applets go, every attack that can be used against JS can be used against Java. It's a trivial exercise to instrument any method call you like, including calls to any method that attempts any kind of verification. Java bytecode is really quite transparent.
In this way the attacker would be unable to gain anything useful from seeing the challenge or handshake packets, and the only remaining vector is some form of keylogger.
Or am I missing something here?
"Any attacker who could swipe an unencrypted secret can, with almost total certainty, intercept and alter a web request. Intercepting requests does not require advanced computer science. Once an attacker controls the web requests, the work needed to fatally wound crypto code is trivial: the attacker need only inject another <SCRIPT> tag to steal secrets before they're encrypted."
What am I missing here? Are people too cheap to purchase an SSL cert? Theoretically the PKI is only as trustworthy as the CAs, but that can't be why people are acting like SSL/TLS isn't even an option.
SSL has defenses against MitM attacks, and is implemented in very widely-deployed libraries, so if there's an implementation error you'll hear about it before it's used on you.
The drawback is of course that you need to have a signed key. That is the price you pay for MitM resistance. We're not supposed to use self-signed keys (which are free), even though they are strictly better than JS challenge/response hashing.
If you don't trust the network to deliver a password, or, worse, don't trust the server not to keep user secrets, you can't trust them to deliver security code. The same attacker who was sniffing passwords or reading diaries before you introduce crypto is simply hijacking crypto code after you do.
You're missing the real solution: SSL/TLS.
In particular, requiring the attacker to compromise not only the server but also each client adds security and limits the damage footprint - much more so if combined with external black-box-style monitoring of served content and scripts to catch server-side compromise in a timely fashion.
I've spent 10 years in the security industry building new technologies, several of which are used widely in large banks and payment services.
Nobody should expect respect, but disrespect should not be so readily served, IMO.
Meanwhile, an external network of monitors, masquerading as real users, detects that the server is compromised and alerts the admins, who can take appropriate action. Overall, a compromised server can be shut down in minutes, during which only a fraction of the user base will have been affected.
How does this not add to security?
Or the connection in between, unless SSL is satisfying that part of our threat model.
No problem there, that's a core function of a server. It's a feature of MitM tools as well.
What's more difficult is an attacker who wants to target only a few specific users.
> Meanwhile, an external network of monitors, masquerading as real users,
A nice idea, let me know when you get that implemented.
> detects that the server is compromised and alerts the admins, who can take appropriate action.
That's always a fun conversation to have. Be prepared for denial, anger, bargaining, and depression before getting to acceptance.
Site operator: "Oh, must be an old file on one of the servers."
Monitor network: "We think it might be a targeted attack"
Site operator: "Well, what are the chances of that"
Monitor network: "This is not a random phenomenon like earthquakes"
Site operator: "But, you can't prove it, and not all users are affected, right?"
...and so on
> Overall, a compromised server can be shut down in minutes, during which only a fraction of the user base will have been affected.
This scheme should not be confused with actual security.
More moving parts to break and make things worse. Every bit of script and supplemental page content is another weakest link.
And they will break, because they're being developed by those who either don't understand or are not willing to admit that they're pushing snake oil, even after the systemic flaws in their scheme have been pointed out repeatedly.
From my original comment: "With HTTPS."
> A nice idea, let me know when you get that implemented.
It's part of a service I'm currently previewing privately.
One of the projects I worked on was a sophisticated fraud-detection framework which was deployed to one of the largest banks and acquired by a top security company. Stuff like this is not easy to implement and work never ends but then that's why we get paid.
For most web servers, this involves replacing a file. And since your attacker can just go to the page with crypto JS to receive the "clean" version, s/he can prepare the malware JS in advance.
Oh, it adds to security. For the length of time between compromise and a replaced JS file. As noted above, s/he can easily rewrite your JS appropriately before entry, in which case you've got about a minute or two of extra safety.
If you assume you can detect any compromise of your system within 3 minutes, then yeah, 1 or 2 minutes out of 3 is great. If you miss it, then it's a whole lot of work for barely any benefit. And assuming you can detect any compromise within 3 minutes is highly optimistic.
Re "any compromise": in this approach, only detecting compromise of the content/scripts served to the client is necessary. In the extreme case, public read access to most of the server might even be plausible (although not advisable), IMO.
Maybe they don't modify your app's code, but insert an iptables rule that achieves the same effect.
When a scheme relies on perfect intrusion detection you are not discussing security, you're discussing damage control.
This is why I wrote "when applied appropriately" in my original comment.
Here's a hypothetical: you have producers and consumers of certain documents. You want a system where a producer can produce a document for a particular consumer, put it on a server, and that consumer (and only that consumer) can retrieve the document and read it. Ideally, you want to do this as unobtrusively as possible from the point of view of the consumer. If the server is compromised, the intruder should not be able to read any of the documents. It's hypothetical, but something I once worked on would have benefited from such a system, so I spent some time considering how it could be designed.
One approach would be to have a public/private key pair for each consumer. When a producer prepares a document for the user, it is uploaded encrypted using that consumer's public key. The encryption would be done on the producer's client machines (producers are on a company LAN using computers administered by a competent IT department, and can be assumed free of malware).
On the consumer side, we need to solve three problems: (1) generation of the public/private key, (2) storage of the private key, (3) decrypting downloaded documents.
One approach to this would be to tell the clients to install PGP, generate keys, and send the public key to the producers. Clients could then download documents via their browser, and run PGP to decrypt them. That fails the "unobtrusive to the client" requirement.
If the private key is generated as above, it can be regenerated from the password as needed so there is no need for key storage.
The client unzips this, and when he wants to get documents launches the static page in his browser. The static page references its resources by file:// URLs. Essentially it is an application that happens to use the browser as an execution environment. The network is only used to make an AJAX call to get encrypted documents from the server.
If the server is compromised, the attacker could put a bad JS file in the HTML page (which is sent to the user), which will read the contents of the textbox showing the decrypted content, and then send this to $EVIL_HOST
Because you just removed the requirement of the attacker to steal (or at least compromise) the laptop, to get access to the encrypted key: the entropy of the key was still high, it was simply protected on the laptop with a weak password; that's very different from deleting the key and forcibly reducing its entropy to that of the password.
- Statically encrypt and publish content on HTTP server
- Transmit these via HTTP to an iframe component at client browser
- HTTP-iframe locally sends message to HTTPS-iframe via window.postMessage()
- HTTPS-iframe decrypts content (with pre-shared key) and renders it on page
I implemented a library to do this (3 years ago!). Anyone care to tell me just ONE vulnerability with this? Thanks!
Professional malware distributors seem perfectly happy to obtain placement inside an iframe:
Edit: Also, don't you trigger mixed content warnings in the browser?
I guess I shouldn't lock my car, either, right? After all, someone could simply break my windows and steal whatever I have inside. Or maybe, just maybe, using some protection actually is better than using no protection, provided you don't put more faith in the protection than the protection warrants. Just like I wouldn't store gold bars in my car, I wouldn't use jscrypto to protect things that were too important for that level of protection. I will, however, feel okay about keeping my sunglasses in my car, and I will feel okay about using jscrypto to send my phone number over email.
When you develop protocol or infrastructure code, the more general the facility you develop, the less control you have over how the result is going to be used.
For example, SSL was originally developed to make people feel safe transmitting their credit card numbers to e-commerce websites. Credit cards (like cars) have a pretty predictable upper bound on the financial loss should the system break.
However, the SSL/TLS protocol was very general and it has grown to be used for everything under the sun today. From protecting Twitter APIs to "launch the missiles"-type critical systems.
This is strictly true from the perspective of cryptography. Crypto is always concerned with the strength of crypto from a mathematics point of view.
However, much of that worldview falls down in the real world. In the case of hashing a password before it goes over the wire (even the simplest salt-free MD5), which is crypto-logically naive and "considered harmful," you are providing a general barrier for people that are FireSheeping in a cafe. Many would-be attackers are thwarted by what crypto-nerds would see as silly.
In general, the weakest point of crypto in today's world is the human element. RSA Security got hacked not because the algorithm was weak but because someone installed a back door in RSA's network for the hackers. Once you get past the point where it is more effective to attack sideways than head-on, the crypto has done all it can do.
That being said... just use SSL. It moves that gauge enough that anyone trying to get your users' information isn't going to bother with direct attacks.
What I meant by the 'salt-free' MD5 is that of a challenge-handshake response: the challenge is sent by the server and the client responds with md5(pw+challenge). The salt would otherwise need to be sent in an insecure manner that would provide only marginally more security than a challenge-handshake. That was unclear, and I apologize.