Security is Mathematics (daemonology.net)
71 points by s-phi-nl 2178 days ago | 29 comments



If you want someone to understand security, just send him to a university mathematics department for four years.

This is a testable assertion.

I've met a bunch of people in data security in the last few years. In fact, I do know one with a math degree and he's very sharp.

But I still think the premise is ridiculous. Math proofs are tall towers of lemmas and theorems existing in an insulated universe.

Logical and rational thought are critical, yes, but in real world security you must be very careful not to build your towers that high. Instead you need a defense-in-depth strategy, one which assumes at least some of your assumptions are going to be violated on a regular basis.

Seen in a crypto paper:

* An attack on [cryptographic primitive] A implies an attack on [cryptographic primitive] B.

* B is not the subject of this paper.

* Therefore, A is proven secure.


Your premise is interesting, but your example (and supposed proof?) is a strawman. I don't think Colin is saying the best security people are mathematicians. Rather, he's saying the best mathematicians are well-suited to be security professionals.


Unfortunately, your A-B example is about the best we can do for cryptographic proofs at this point. We don't even know how to prove P≠NP, and worse, as far as I know, we don't even know how to design cryptosystems such that an attack would reduce to solving a known NP-complete problem. The best we know how to do so far is build on math problems that nobody has found an efficient algorithm for yet, such as discrete logarithms and integer factorization.
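
A rough sketch of the asymmetry involved, with toy parameters of my own choosing (the prime, the "generator," and the tiny exponent are all just for illustration): modular exponentiation is fast, but recovering the exponent, the discrete logarithm, has no known efficient general algorithm, so the obvious attack degenerates to brute force.

    # Toy sketch of the asymmetry behind discrete-log-based systems.
    # Forward direction: modular exponentiation is fast (square-and-multiply).
    # Reverse direction: recovering the exponent is the discrete log problem,
    # for which no efficient general algorithm is known; the naive attack
    # below is brute force, exponential in the bit length of the exponent.

    p = 2**127 - 1            # a Mersenne prime; toy choice for illustration
    g = 3                     # assumed generator, not verified here
    secret_x = 123456         # kept tiny so the brute force below terminates

    public_y = pow(g, secret_x, p)    # fast even for enormous exponents

    def brute_force_dlog(g, y, p, limit=10**7):
        """Find x with pow(g, x, p) == y by trial; hopeless at real key sizes."""
        acc = 1
        for x in range(limit):
            if acc == y:
                return x
            acc = (acc * g) % p
        return None

    print(brute_force_dlog(g, public_y, p))   # recovers 123456, but only because it is tiny

Factoring-based schemes like RSA rest on the same sort of "nobody has found a fast algorithm yet" footing rather than on an NP-hardness proof.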


Which is all fine, and it's all very valuable work of course.

The problem is that when cryptographers say something is "proven secure," it turns out not to mean that nobody can break it, especially in an actual deployed system.

This is surprising to many people.


It's a good point. My uni's Master's-level security coursework was surprisingly math-heavy. A typical class started with about 60 people and ended with about 10 very exhausted students.

Folks without a strong math background, expecting to learn how to tweak file permissions or domain controllers or something, were in for a rude surprise when we cracked open a textbook that consisted mostly of Greek letters and pages of proofs and lemmas.

But having studied some hard maths in my undergrad, I walked away with a feeling I couldn't shake that much of it was form over function. We spent weeks showing proofs of very simple security circumstances ("atomic changes to an access control list are provably secure") and then summed it up with "anything more complex than this, like nonatomic changes to an ACL, is provably insecure".
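
For anyone who hasn't seen why the nonatomic case falls apart, here is a minimal sketch (my own construction, not anything from that coursework): if revoking access is implemented as a separate read, modify, and write, two concurrent revocations can each start from the same snapshot, and whichever write lands last silently restores the other revoked user's access.

    import threading, time

    acl = {"alice", "bob", "mallory"}       # principals currently allowed access
    lock = threading.Lock()

    def revoke_racy(user):
        # Nonatomic: read, modify, and write are three separate steps.
        global acl
        snapshot = set(acl)                 # read
        time.sleep(0.01)                    # widen the race window for the demo
        snapshot.discard(user)              # modify
        acl = snapshot                      # write, clobbering concurrent updates

    t1 = threading.Thread(target=revoke_racy, args=("bob",))
    t2 = threading.Thread(target=revoke_racy, args=("mallory",))
    t1.start(); t2.start(); t1.join(); t2.join()
    print(acl)   # frequently still contains "bob" or "mallory": a revocation was lost

    def revoke_atomic(user):
        # Atomic with respect to other revocations: one locked read-modify-write.
        with lock:
            acl.discard(user)

The locked version is all "atomic" really buys you here: the read-modify-write happens as one step, so there is no window for another update to slip into.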

And that was that. I came away very disappointed in the field as a whole: instead of spending time pragmatically finding ways of improving security in an applied sense, we were spending an inordinate amount of time cranking through higher-level maths that just showed the whole thing was hopeless in the end anyway.

Depressing.


It may be disappointing that many things are provably insecure, but if they are, isn't it good to know that?


Absolutely. It's totally not an area I plan on working in, but it was a fascinating insight into the field. I ended up respecting it quite a bit more; there was a lot more thought in the field than I had expected.


But now you have really good ammunition when you need to explain to someone else why his race-condition laden system of nonatomic asynchronous updates is insecure.


I would argue the opposite: The "security is mathematics" mindset leads to overall system designs that are often less secure. How? The mathematically inclined security experts fail to incorporate (mathematically fuzzy) human factors such as usability and carelessness.

Typical example: Require users to change a password once a month and there's a maximum of one month's time when a password thief has access to an account. True enough. So what do users do (and let's throw in a requirement for at least one capital letter, at least one digit, at least one symbol, and a nine-character minimum; see the sketch after the list)?

Month 1: Charlie1!

Month 2: Charlie2!

Month 3: Charlie3!

Month x: Charliex!
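
Here's a quick sketch of why the policy buys so little (the checker and the guess function are hypothetical, just to make the point concrete): every password in that sequence satisfies the complexity rules, and anyone who has seen one of them can derive the next.

    import re

    def meets_policy(pw):
        # The policy above: at least 9 characters, one uppercase letter,
        # one digit, and one symbol.
        return (len(pw) >= 9
                and bool(re.search(r"[A-Z]", pw))
                and bool(re.search(r"\d", pw))
                and bool(re.search(r"[^A-Za-z0-9]", pw)))

    def next_month_guess(old_pw):
        # An attacker who saw last month's password just bumps the digit.
        return re.sub(r"\d+", lambda m: str(int(m.group()) + 1), old_pw)

    print(meets_policy("Charlie1!"))      # True: the policy is satisfied
    print(next_month_guess("Charlie1!"))  # Charlie2!: and so is the attacker

The rotation rule itself is what makes next month's password guessable.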

Meanwhile, formulations for password security that strike a reasonable balance between security and usability (and thus actually get used) are rejected immediately because of mathematical edge cases.

Here's my formulation for a balanced security approach for home users:

"Use a password manager to assign unique, random 15 character passwords for all accounts, protecting them with a strong master password."

I can think of quite a few edge cases where this system will fail. Keystroke logging is the most obvious. In actual practice, there has not yet been a case of a password database being breached because the master password was keylogged (at least not among the market-leading password managers).

In actual practice, taking into account human factors, this system is more secure than that practiced by the vast majority of end users. And far more secure than rotating Charliex! passwords.


"there has not yet been a case"

How would you know if this had happened?


The compromised product's competitors would publicize the failure--at least, if the free market worked perfectly.


How would the compromised product's competitors know?


Not one case, ever? I highly doubt that.


I should have said, "one known case." I researched the issue a year ago by extensively Googling to see if I could find a case among the top 4 password managers in market share: LastPass, RoboForm, 1Password, and KeePass. I also had some direct communication with RoboForm, and they claimed to have never received a report of a RoboForm database being broken into because a Master Password had been logged.

And please - if anyone is aware of even a single known case among these 4 market leaders, please tell us about it.


Since the article references Bruce Schneier: he gave an interesting TED talk where he once again describes the usability/security tradeoff and how humans are, because of hardwiring, very poor perceivers of risk. This might also be why one needs to be "trained" to get a security mindset.

http://www.ted.com/talks/bruce_schneier.html


There are so many instances where non-mathematical variables play an important role in authoring security-conscious code. Yes, most security issues arise from not checking edge cases, but you have to know what they are. Mathematics will surely help you think about this practically, but it will never teach you what those edge cases are.



Nope. Security is primarily social science and engineering. Mathematical training can help [and of course is crucial for areas like cryptography] but it's far from the only way to get a security mindset.


someone failed logic


Well, he's basically correct — unwarranted assumptions create security holes. But the people commenting here that usability problems create security holes are also correct. If you have a system with no usability problems and no unwarranted assumptions, is it therefore secure? Or are there other ways that security holes arise?


There's insider risk. Someone whose job it is to maintain security is the one most able to breach it.

There's also "hidden back door" risk. Programmers may code in means to get at data for code testing or government sharing purposes, which the security people may or may not fully understand.

I guess you could call these unwarranted assumptions (that insiders are always honest, and that no back doors exist or that they are fully known if they do), but they don't seem amenable to mathematical modeling.


I'd wager that any system that is complex enough to be useful has unwarranted assumptions built in. You can (in some cases should) use mathematical formalism to elicit these assumptions, but from then on, you handle them with risk management, which is classic engineering.


It's a weak counterargument to Schneier's. The ability to think logically (one example of which is writing proofs, though I think writing good code requires it just as much) is no protection against attacks you don't know exist. If we knew exactly how security systems could be broken, then building an unbreakable protocol would just require thinking about how to satisfy a bunch of constraints (which mathematicians and computer scientists are trained to do). But, in addition to being resistant to known attacks, a new security protocol should be stress-tested by trying to think of new attacks that could break it. Most university courses on security don't emphasize this way of thinking. That's all Schneier is saying. This guy doesn't mention anywhere how a math degree trains you to do that.


Also, why does the title have nothing to do with the point he is trying to make?


And this is why I trust my files with this guy.


Interesting read. It falls into what I feel is mastery of a mindset. Some individuals are just good at some things, for whatever reason. As an example, the best software testers I know have an almost instinctive ability to find obscure bugs.

No idea what can get you to land in this zone (training or otherwise), but if you do fall into one, you tend to be very good at whatever it is your brain is wired for.


Is this a joke? Navel-gazing and patting oneself on the back in one post. This makes it to the front of HN?

I guess he might be on to something. Every philosopher I have ever read has always treated assumptions in a willy-nilly manner. Not to mention any decent legal opinion.

Good security requires critical analysis. The same thing can be said for jurists, inventors, supply-chain logistics...etc.

What a surprise that the math guy thinks math people have such insight. What's that saying about carpenters and their hammers? At least carpenters don't go around telling everyone that everything is a nail...


Patting himself on the back? I thought he was being rather matter-of-fact. If Colin starts patting himself on the back, trust me, you'll know: http://news.ycombinator.com/item?id=35076


Security is mathematics.

People just occasionally don't use the system correctly: that isn't the fault of the system (in terms of security).



