
Javascript Cryptography Considered Harmful - LVB
http://matasano.com/articles/javascript-cryptography/
======
jvdongen
This piece has surfaced before. Then and now I see people coming out of the
woodwork with seemingly smart ideas about how it still should be possible to
safely use crypto in the browser one way or another.

One of the things my company does is security testing of web applications.
Regularly we encounter 'creative' use of cryptographic techniques (both in the
browser and server-side) and each time it makes the hacker in us smile,
because we know it is not a matter of 'if' we'll crack it but 'when' we'll
crack it. Good crypto is a roadblock, bad crypto is just a challenge. And
although it is very hard to decide if the crypto is 'good (enough)', the 'bad'
is usually glaringly obvious.

With the current state of crypto in the browser - just forget it. That's what
Thomas is trying to get across: forget it - if you think you've found some
smart way around one of the weaknesses he addresses, you're most likely
wrong. And even if you seem to have got it right, you're probably still wrong
without anyone realizing it (yet).

Same is true for building a crypto-system from primitives. Use what's out
there, designed by the few people who know what they're doing.

Remember: from the defensive side you need to get _everything_ right. As an
attacker I only need 1 hole. That's what makes it "capital-H Hard".

~~~
haberman
I don't know if you're referring to my comment or not
(<http://news.ycombinator.com/item?id=5123674>), but I was not saying anything
about what is safe. I was merely objecting to false equivalences that the
article was drawing.

To say that localStorage is literally _no better_ than server-side storage is
a strong statement, and one that does not appear to be literally true. Taking
issue with that equivalence is not the same as saying that any particular
system/design is safe as a whole.

~~~
jvdongen
I was not referring to your comment per se, but indeed your comment fits the
general sentiment I was referring to.

And I think that localStorage is indeed literally no better than server-side
storage. Whether it is any worse depends on the situation - but better it is
not.

" _This ignores the scenario of app deployment models like Chrome Packaged
Apps, in which the JavaScript code gets downloaded up-front and then is only
used locally. Since you don't re-download the code every time, you only depend
on the security of the code once, up-front, instead of on a continuous basis.
You aren't affected by server compromise (well, no more than compromise of
your OS vendor, but surely you aren't arguing that we might as well send all
our keys to Microsoft, Apple, and Canonical)._ "

It's true that you do not re-download the code every time. Still, the
trustworthiness of the code you received initially depends on the
trustworthiness of the server you received it from.

So you say, "but what if I have reason to trust it initially and not later
on?", e.g. when the government comes knocking.

Well, there are two things to keep in mind in that case:

a) you download other stuff with your browser. Stuff that can influence the
environment where your secure and trusted packaged app runs in. See also
comment of zimbatm - even if it is not formally meant to be that way, in
practice there are bound to be ways around any limitations - sandboxing in
browsers is still nowhere near perfect unfortunately. Server security is not
entirely peachy either, but at least on a properly secured server only a
limited, carefully screened set of applications is allowed to run, which makes
things a hell of a lot easier.

b) Chrome packaged apps support auto-updating, so unless you take steps to
prevent that from happening, you're never sure you're running the same version
today as you did yesterday. Again, trusting the server to serve you a
trustworthy version of the app repeatedly. And if you're trusting the server
already, local storage is no better than server side storage.

So I guess you could say local storage is better than server side storage (for
some definition of better) if you run one and only one packaged application
ever in a specific installed browser in an isolated environment. The browser +
locally installed web app then effectively becomes a natively installed
application without an Internet dependency. More secure indeed, but kind of
defeating the purpose of that whole web thing ;-)

~~~
haberman
_Chrome packaged apps support auto-updating, so unless you take steps to
prevent that from happening, you're never sure you're running the same version
today as you did yesterday. Again, trusting the server to serve you a
trustworthy version of the app repeatedly. And if you're trusting the server
already, local storage is no better than server side storage._

Again, I think this reduces to "Microsoft/Apple can auto-update your OS, so
you might as well send your keys to Microsoft/Apple." Would you argue that? If
not, why?

------
exDM69
JavaScript, and all other crypto code not written in native code, is not safe,
for a reason not mentioned in this article: side-channel attacks.

When attempting to create crypto code using an interpreter or a byte-code
virtual machine, additional side channels are created by the differences in
execution compared to native code. Crypto code should be written in assembler,
or in C with the assembly output reviewed by the author. This is the only way
to create code that does not introduce side-channel information that can be
used in timing attacks, cache attacks, branch-predictor attacks, etc. This is
a problem because it takes both a cryptographer and a hardware architecture
expert on the team to write safe code for cryptographic primitives.

This does not mean you can't safely use crypto from interpreted languages, as
long as the cryptographic primitives are good native code.

~~~
trekkin
As there can be backdoors and bugs in OSes and hardware, any crypto code done
on generic-purpose computers with standard OSes (Windows, OSX, Linux, BSD) is
not safe. That does not mean it is useless.

The same is true for JS crypto - yes, it is not as safe as crypto in native
code, but it can be used to add an additional layer of security in certain
(non-critical) use cases.

[Disclosure: I run AES.io]

~~~
hedgie
timing attacks don't necessarily need access to the actual machine to work.
his point is valid because a side-channel timing attack may arise from the
differences in time it takes to receive a response from the server. there are
remote timing attacks that take advantage of differences in execution time
between paths in the code. this means anyone observing message traffic may be
able to execute a side-channel attack, not just people with access to the
hypothetical backdoors you mention.

you are saying "not safe" as if the term has a standard meaning across all
contexts. anything can be cracked - the question is whether the time it takes
to crack a computer is worth it compared to the data stored on it. in almost
any case where you actually need cryptography and it's not just a
nice-to-have (e.g. credit cards, personal information), it's not worth using
anything but native code.

~~~
demallien
As someone that has spent the better part of the last 18 months getting
animations as smooth as possible in Javascript, I will happily buy a beer for
anyone trying to execute a time-based side-channel attack against Javascript.
You'll need the beer to cry into when the garbage collector craps all over
your assumptions of constant durations for code path execution...

~~~
trekkin
Exactly - interpreted code is harder to do timing attacks against because
interpreters add a lot of timing "noise", while native code is much more
consistent re: time taken to execute a specific routine.

~~~
hedgie
a timing attack doesn't require constant duration for execution paths in the
code. even with the noise in server communications remote timing attacks are
often feasible - the noise filters out when you up the number of measurements
or use the statistical techniques most modern side channel attacks rely on.

remote timing attacks contain noise by default, as they rely on server
communications passed across a network with latency instead of examining the
hardware directly to determine execution time. if you compare the network
latency to the cache timing you'll find the noise is actually pretty fucking
substantial.

a timing attack relies on an average difference in execution time between two
paths of code. constant noise isn't going to protect you - for example, if
everything on average takes 200ms longer, the average difference in execution
time is still there.
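a toy simulation of that last point (all numbers made up for illustration, and a deterministic stand-in rng so it's reproducible): each "measurement" buries a 200-unit gap between two code paths under noise 50x larger, yet averaging enough samples recovers the gap anyway.

```javascript
// Small deterministic LCG so the sketch is reproducible.
function makeRng(seed) {
  let s = seed >>> 0;
  return () => {
    s = (s * 1664525 + 1013904223) >>> 0;
    return s / 4294967296; // uniform in [0, 1)
  };
}

// One "timing measurement": the path's true cost plus heavy noise
// (think network latency, GC pauses, scheduling jitter).
function measure(baseCost, samples, rng) {
  let total = 0;
  for (let i = 0; i < samples; i++) {
    total += baseCost + rng() * 10000; // noise up to 50x the signal
  }
  return total / samples;
}

const rng = makeRng(42);
const fastPath = measure(1000, 100000, rng); // e.g. early-exit compare
const slowPath = measure(1200, 100000, rng); // e.g. full-length compare
const gap = slowPath - fastPath; // close to 200 despite the noise
```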

~~~
trekkin
I'm not saying timing attacks against interpreted code are impossible. I'm
just saying they are easier to execute against native code, and thus have
nothing to do with JS crypto being less secure than native crypto.

~~~
hedgie
no serious person writes native code that is vulnerable to timing attacks. if
native crypto code has a measurable difference in execution time for different
code paths, that usually results in huge security advisories, patches, and -
if it's against openssl - a published paper. if the native code is written so
that all execution paths take constant time, then by definition a timing
attack is not possible. even if the paths are merely very close in execution
time, a timing attack is much more difficult, if not impossible.

timing attacks are only easier against native code written by people who
don't know what they're doing, which means they let different execution paths
take variable amounts of time, didn't have the generated assembly examined by
a hardware expert, and didn't bother to mask the crypto operations with
_proper_ noise generated by a cryptographically secure prng.
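the constant-time idea above can be sketched in a few lines (shown in JS purely for illustration - which is rather the thread's point: in a jitted, garbage-collected runtime this shape alone guarantees nothing, whereas native implementations verify the emitted assembly):

```javascript
// Leaky compare: returns at the first differing byte, so its running
// time reveals where the mismatch is.
function leakyEqual(a, b) {
  if (a.length !== b.length) return false;
  for (let i = 0; i < a.length; i++) {
    if (a[i] !== b[i]) return false; // timing depends on i
  }
  return true;
}

// Constant-time shape: always scans the full length, folding any
// differences into an accumulator, so the work done does not depend
// on where (or whether) the inputs differ.
function constantTimeEqual(a, b) {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a[i] ^ b[i];
  }
  return diff === 0;
}
```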

------
haberman
"WAIT, CAN'T I GENERATE A KEY AND USE IT TO SECURE THINGS IN HTML5 LOCAL
STORAGE? WHAT'S WRONG WITH THAT?"

"That scheme is, at best, only as secure as the server that fed you the code
you used to secure the key. You might as well just store the key on that
server and ask for it later. For that matter, store your documents there, and
keep the moving parts out of the browser."

This ignores the scenario of app deployment models like Chrome Packaged Apps,
in which the JavaScript code gets downloaded up-front and then is only used
locally. Since you don't re-download the code every time, you only depend on
the security of the code once, up-front, instead of on a continuous basis. You
aren't affected by server compromise (well, no more than compromise of your OS
vendor, but surely you aren't arguing that we might as well send all our keys
to Microsoft, Apple, and Canonical).

Also I feel that this analysis conflates security with access. You may trust a
company to keep their servers secure from compromise, but want them not to
have access to the documents when the government comes knocking.

~~~
zimbatm
Even if you app is certified, Chrome extensions can inject code in your page
and thus access your localStorage.

~~~
haberman
Do you have a reference confirming that this is true for packaged apps? I
couldn't find anything that says one way or the other.

------
erikpukinskis
The author is really only saying browser cryptography is bad for one specific
problem: securely transmitting data to an untrusted provider. But there are
lots of other use cases.

For example, maybe I want to upload a file to a server, and I trust them not
to try to steal my data, but I don't trust my government not to confiscate
their servers. In that case, SSL + browser cryptography is adequate to give me
the assurances I need that the government won't be able to get access to my
data, even if the service's engineers could.

~~~
tjoff
"For example, maybe I want to upload a file to a server, and I trust them not
to try to steal my data, but I don't trust my government not to confiscate
their servers. In that case, SSL + browser cryptography is adequate to give me
the assurances I need that the government won't be able to get access to my
data, even if the service's engineers could."

If the government might have the ability to confiscate their servers they also
have the ability to compromise their service during use. So, if you don't
trust your government you can't trust their servers either, regardless of
whether you trust the service's engineers or not.

~~~
makomk
That's the thing - you don't actually have to trust the servers to the same
extent. You can look at and verify any JavaScript and CSS and HTML that the
server's sending you, manually if necessary, but there's absolutely no way of
telling what code the remote server is running.

This actually matters for sites like Mega. There have been previous examples
of file storage services falsely promising that all your files would be
encrypted and their staff would be unable to access them, for example Dropbox.
If all services use server-side crypto there is no way to tell the difference
between a service that lies about this and one that's honest. With Mega it's
possible - not easy, but possible - to check exactly what kind of encryption
is being applied and where it's getting the keys from. You have to go through
every bit of content manually with a fine-toothed comb to be sure, of course,
and there are various ways to obfuscate things, but it's still better than not
being able to see the code at all.

~~~
DasIch
> That's the thing - you don't actually have to trust the servers to the same
> extent. You can look at and verify any JavaScript and CSS and HTML that the
> server's sending you, manually if necessary, but there's absolutely no way
> of telling what code the remote server is running.

You can verify the JS, CSS and HTML only for a specific request, and only if
you have sufficient knowledge, meaning you have studied cryptography - and
even then it will take a lot of time, because there is no way to verify that
code other than manually.

This offers no additional security at all for anyone but the most paranoid
cryptography experts.

~~~
ufo
> This offers no additional security at all for anyone but the most paranoid
> cryptography experts.

This is not much different from how everything else works. Hopefully the
expert cryptographers release tools to help the uninformed masses use this
stuff, or at least break the news if some bad shit starts going on.

------
Sami_Lehtinen
Well, if you use native encryption software, what makes things any different?
If they can replace key code and data on the fly for a web application run
over SSL, what makes you think they're unable to deliver you fraudulent
updates for native apps?

I have been raising the alarm about this for a long time. Automated updates
are dangerous - how many users make absolutely certain that every update is
secure? Well, I can tell you nobody ever does, because securely updated
software doesn't exist at all. Even if the previous version was secure, the
next version could be broken by mistake or on purpose, or you could get an
espionage version delivered that was made just for you.

Unfortunately there are countless programs that do not deliver updates in a
very secure manner at all. Plain http, no signatures, etc. That's pretty much
a 100% fail.

------
moonboots
_The problem with running crypto code in Javascript is that practically any
function that the crypto depends on could be overridden silently by any piece
of content used to build the hosting page._

ECMAScript 5, the latest version of the JavaScript standard, provides the
ability to lock down the malleable runtime. Functions can be frozen so that no
later code can overwrite or change their behavior. For more information, see
this 2011 talk[1] by Mario Heiderich, or his slides[2].

[1] <https://www.youtube.com/watch?v=yuNfO6I6pEA>

[2]
[https://www.owasp.org/images/a/a3/Mario_Heiderich_OWASP_Swed...](https://www.owasp.org/images/a/a3/Mario_Heiderich_OWASP_Sweden_Locking_the_throneroom.pdf)
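The lockdown idea can be sketched in a few lines (a minimal illustration of `Object.freeze`, not Heiderich's actual hardening code; as jvdongen notes in reply, it only helps on engines that implement ES5, and the page has to win the race by freezing first):

```javascript
// Freeze a built-in the crypto code depends on, before any other
// script runs, so later code can't silently swap it out.
Object.freeze(Math); // pins Math.random and friends

const before = Math.random;
try {
  Math.random = () => 4; // an attacker's replacement "rng"
} catch (e) {
  // in strict mode the assignment throws; in sloppy mode it's a no-op
}
const tampered = Math.random !== before; // false: the freeze held
```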

~~~
jvdongen
I'm afraid this falls in the "Check back in 10 years when the majority of
people aren't running browsers from 2008." category ...

~~~
greggman
That's already true today. Chrome + FF have > 50% of the browser share by most
measurements. Both auto-update. So does IE10. Which means the number of people
running browsers from 2008 is already below 50% and falling fast.

~~~
jvdongen
True - but how is a web server _with certainty_ going to decide which
clients can be trusted (because they have a truly capable browser) and which
are not to be trusted (because they have a vulnerable and compromised browser
that just pretends to be capable and secure)?

Of course it may be possible that one day there is a way around that issue,
but currently there is not. Not even academically let alone practically. Hence
Thomas's next remarks about the impossibility of 'graceful degradation' for
crypto-in-the-browser issues.

------
acqq
If you heeded the author's warning, you wouldn't do anything of any
consequence in the browser: no online banking, no purchasing... Because even
if you don't do "crypto" (in the sense of encrypting/decrypting primitives),
everything else you do also relies on the same TLS and JavaScript. My online
banking site uses, of course, JavaScript. My online trading platform uses
JavaScript. If the browsers and the sites are vulnerable to the attacks he
presents (cross-site scripting, man-in-the-middle possibilities, code
injection attacks from another source) then it's irrelevant whether you do
crypto primitives or not; you are vulnerable.

On the other hand, _if_ you assume that other vulnerabilities don't exist, and
you do things like banking or trading online, and you accept that the sites
use JavaScript, I don't see any argument why the crypto primitives which run
in addition to the rest of the code - everything delivered over TLS and from
the same site - are any more suspicious than the rest of the JavaScript.

The advantage of the encryption on the client side is obvious. Of course, it
would be even better to have the client side encryption controlled by the user
separately from the site. But under assumption that I personally control the
server from which I deliver my html and JavaScript over TLS, I still feel
better having the possibility to encrypt something that I'll upload to the
server as long as I assume that the browser is not attacked.

The only thing missing is the possibility to somehow checksum the delivered
html and code and then "lock" that in my browser. It's not something scalable,
I know.

But the problem is not so much technical as it is "political." Consider
Dropbox: in many use cases, they would be able to do all the encryption on
the client and never receive the key. However, they do take the key "because
the users will need it." Who says that? They do, and I can't choose otherwise.

Technically, the solution can certainly be achieved; the problem is that it's
not in the interest of the current service providers.

Maybe Mega is the first one that really has such an interest?

~~~
trekkin
Mega is not the first one. AES.io (my company) and several others have been
available for some time. Mega is the first one to bring client-side JS crypto
into public discussion.

------
cperciva
For the same reason: If a backup service allows you to access your data via a
web browser, it's not secure.

~~~
bigiain
Is that inevitably true?

I'm imagining a system (kind of like Tarsnap) that backs up my files all
pgp-encrypted with my public key, and which allows me to download those
encrypted files (which I can then decrypt locally).

If the pgp encryption is done client side (by a native app, not in-browser),
and the "backup service" only ever sees pgp-encrypted files - is there some
other hole I've not seen there?

(I guess there's metadata leakage with that scheme; the number and sizes of
backed-up files could be determined, even if the contents are secure.)

~~~
DanBC
> If the pgp encryption is done client side (by a native app, not in-browser),
> and the "backup service" only ever sees pgp-encrypted files - is there some
> other hole I've not seen there?

Doesn't that mean the browser can't access the data? The browser can access an
encrypted lump of stuff, but it doesn't encrypt it or decrypt it.

~~~
bigiain
I guess that depends on what you mean by "access your data via a web browser".
I'm imagining a Dropbox-type system, where a native client uploads my files
after pgp-encrypting them, and there's a website where I could log in from any
browser and download any/all of my encrypted files. It's not actually a very
good idea: for the website to be of any use, there needs to be enough clear-
text metadata (or at least metadata decryptable by the web server, which may
as well be clear text) that I can find the file I want. If that file is
called "Metalica_Album.mp3.pgp", or if there's a directory called
"Disney_BluRay_rips", the large encrypted lumps probably won't do me any good
as a defense in court.

------
vy8vWJlco
I agree that browsers simply don't have consistent enough APIs for the strong
guarantees required for encryption, including strong random number generation
and memory allocation behavior. That was the takeaway for me when I read this
the first time.

The "if SSL, why JS crypto?," DOM, and "chicken-v-egg" trust problems seem
more like straw-men and sophistry though. Desktop crypto underwent an
iterative evolution with early adopters bearing the bulk of the risk too.
(Mega got the digest part wrong, but they fixed it, for example.) SSH doesn't
use certificates, but you can read the host fingerprint and follow the chain
of trust that way. If people are going to use crypto, they have to take
responsibility for these pieces, which is improbable en masse. _"[T]he
security value of a crypto measure that fails can easily fall below zero"_
definitely rings true. Repeated malware infections, however, suggest people
don't even learn after they are burned... "Normal users" can't be bothered to
update their browser _or_ verify trust (leading to VeriSign having complete
power, for example), for the same reason "normal" people don't use the
existing native encryption (GPG/PGP) and, if they did, there would be no need
for JS crypto.

------
politician
Why can't we build the necessary primitives into the browsers themselves? We
have Web Audio APIs, file system APIs, no?

~~~
thirsteh
It's coming: <http://www.w3.org/TR/WebCryptoAPI/>

~~~
daeken
tptacek (the original author of the OP) and I responded to the draft here:
<http://news.ycombinator.com/item?id=4549504> The tl;dr is: the trust model is
broken, and exposing low-level primitives in the way that they are exposed is
begging for pain.

Not a lot has changed since then, sadly.

~~~
javajosh
Doesn't the Chrome plugin model sort of highlight the way forward? Chrome
plugins have their own JavaScript environment, sharing only the DOM with
client pages. A client wanting to do crypto could, for example, communicate
this by mutating a piece of the DOM (for example, setting the text of a meta
cleartext node) and then waiting for a DOM mutation in answer (a meta
cyphertext node that, from the page's perspective, magically pops into
existence).

Anyway, that's how I'd do it. But yes, doing crypto _within_ a page seems to
be asking for trouble.

~~~
ville
But couldn't this magical meta node as easily be created by malicious code
running on the client page?

~~~
javajosh
I suppose so. But if that's your level of paranoia you probably shouldn't be
doing native crypto either, as any process on your machine could be finagling
the memory regions of your "secure" program in the same way. In fact, the only
secure way to do crypto, I suppose, is pen and paper.

------
gingerlime
There are some good examples and arguments there, but to me it reads like it
mostly boils down to having SSL, which does quite a good job at solving the
chicken/egg situation. You can have a good degree of assurance that you talk
to the right server, and that your communication to it is not in the clear (of
course, what's good degree is debatable, and depends on what you're trying to
protect and against whom).

I don't quite get why

    
    
        > You could use SSL/TLS to solve this problem, but that's expensive and complicated
    

You can get a trusted SSL certificate for a few bucks, and installing SSL is
probably one of the most well-documented sysadmin procedures on the web. How
can it be more expensive or more complicated than implementing your own
javascript crypto?

------
Joeboy
Regarding the malleability of the js runtime, would this be addressed by
making all the page content a single html+js file and making its hash widely
available for manual verification? Obviously "normal" users aren't going to
check it's kosher, but it should mean if a site's serving dodgy code
_somebody_ will notice.

Obviously that's a bit impractical for most websites, but it could make sense
for a site whose primary raison d'être is encryption.

~~~
zachrose
Interestingly, this is exactly what the HTTP Content-MD5 header is for.

~~~
martingordon
I am way out of my league asking this, but couldn't any MITM attack just fake
that header as well?

~~~
Joeboy
Yeah, I don't think it actually solves any problems for us. It is, as zachrose
says, interesting that it exists though.

Also way out of my league, btw :-)

------
benatkin
Interesting points. I'd thought of these issues but hadn't heard them so
clearly stated.

I don't know if this situation would be common but I have an idea of where it
could work. Perhaps a web app you completely trust could talk to an API you
don't trust over CORS. The web app you completely trust would only talk to the
api you don't trust over XHR and wouldn't eval anything it got back.

------
el_cuadrado
Is it just me, or are they successfully killing a strawman in this article?
Who the hell uses js cryptography for such things?

~~~
DanBC
This national UK newspaper has some javascript cryptography challenge response
type stuff on their member login page:

(<http://users.guardian.co.uk/signin/0,12930,-1,00.html>)

This website claims it's their script: (<http://pajhome.org.uk/crypt/md5/>)

------
chris_wot
For those interested in the "Radioactive Boy Scout" referenced in the article,
a copy of the Harpers magazine article about David Hahn can be found here:

<http://www.dangerouslaboratories.org/radscout.html>

------
pfortuny
Honestly, being downvoted for being clear does not show great intellectual
clarity. Really: do you people check all the hashes of your software? Do you
trust all the certificates in your browsers? Have you really, honestly checked
all the ssh fingerprints of the servers you connect to?

I repeat: security is not abstract, it depends on the problem and the trade-
offs.

If this hurts please check your mind.

And feel free to downvote OF COURSE.

Be happy.

------
Flenser
When was this written? The only date I see in it is this:

 _Check back in 10 years when the majority of people aren't running browsers
from 2008._

Edit: The oldest version on archive.org is Sep 2011, so it's at least 16
months old.

[http://web.archive.org/web/20110815000000*/http://matasano.c...](http://web.archive.org/web/20110815000000*/http://matasano.com/articles/javascript-
cryptography/)

------
willscott
Are there many examples of websites using client-side javascript cryptography?

~~~
simcop2387
I've actually done client side HMAC before to keep from sending passwords in
plaintext at least. The site couldn't do SSL at the time. Not perfect, easily
MITM-able but at least not network sniffable.

~~~
MichaelGG
Are there many scenarios where you can read, but not write, to a network? On
WiFi, you can inject if you can read. On a switch, if you are reading via,
say, poison ARP, you can also write. Passive taps like mirror ports can't
read, but it seems that WiFi/Ethernet/some other physical thing is a more
common vector in the first place.

~~~
simcop2387
Likely not that many. Which is why it only prevents someone from being able to
do a replay attack on the login form (the server chose the secret for the
HMAC). It made the better choice for attacking the site something more like
Firesheep: taking over the session instead. Though I made that a little more
difficult by rotating the session keys after every transaction, so if you take
over a session the other person is "instantly" logged out; they'd notice and
could do something about it (logging back in and immediately out again would
suffice). Not perfect, but also not the worst way to do it (assuming they
aren't injecting javascript into the client to do the work anyway - no way to
prevent that over plain HTTP).

------
DanBC
Previous discussion (517 days ago):
(<http://news.ycombinator.com/item?id=2935220>)

------
pfortuny
Ermh, sorry to nag, but after the Google (et al.) rogue certificates I think
one could just as well say SSL is considered harmful too....

Security is a set of layers and trade-offs.

Harmful for what? That is the question.

Your computer is considered harmful. Did you check the hashes of all the
software you downloaded? Oh wait there are no checks to be done for the little
app you got a couple of months ago...

So: take an enemy and look if what you do is reasonable enough.

Flying is considered harmful, hence the TSA.

Remember the GitHub fiasco? But you still trust them, don't you? I might (I do
not) consider GitHub harmful as well.

~~~
pfortuny
Did I say anything wrong? Because to be honest, I think I was just laying bare
some hard facts.

------
sgdesign
A little off-topic, but was this written by patio11? The writing style feels
very similar.

~~~
DanBC
I think tptacek.

------
rorrr
Random number generator problem is solved in Chrome:

    
    
        var x = new Uint8Array(10); 
        window.crypto.getRandomValues(x);
        console.log(x);
    

It's also supposed to work in FF, but for some reason doesn't:

<https://developer.mozilla.org/en-US/docs/DOM/window.crypto>

~~~
jvdongen
And how are you going to be sure that 'window.crypto.getRandomValues' points
to the function you expect? Currently you can't be.

~~~
jQueryIsAwesome
Yes, you can be sure: use an iframe + innerHTML. The latter is a property, not
a function, so it can't be made to point at something else or be redefined any
other way, and the same goes for its parent objects.

    
    
        document.body.innerHTML += "<iframe></iframe>";
        document.body.childNodes[document.body.childNodes.length-1].contentWindow.crypto.getRandomValues;
    

And please don't talk about how the JS engine of the browser can be
compromised too; I know that, but here we are aiming for practical
applications, not a philosophical debate about how everything is just an
illusion.

~~~
khuey
I know this isn't the point of your comment, but modifying
document.body.innerHTML and then using document.all to access it is probably
the worst possible way to append an element to the document and then use it.

~~~
jQueryIsAwesome
To avoid some of the badness, you can put this before the other Javascript
files; that way you don't destroy event listeners.

Also, I changed "document.all" to "document.body.childNodes", which is cross-
browser and can't be compromised.

------
dschiptsov
Just Cryptography?)

