
> What an amazingly entitled post. Why would outside contributors call the shots? It's Apple's project, not yours.

You misread. I don't expect outside contributors to call the shots. b00gizm referred to "people inside Apple who really want feedback and contributions from the community to improve their language" and I suggested that those inside contributors might not have the power to allow effective outside feedback and contributions.

Incidentally, if Swift is to be widely adopted outside the Apple ecosystem, it'll help hugely if it becomes a community project, not "Apple's project".


In my experience, it's more like "get your $100, then offer you another $500". I've been offered several jobs to continue working on open source software I've written, at companies where that software is being used. In all of those cases I've been allowed to keep open-sourcing the changes I've made. For small projects, nobody wants to maintain their own fork.

-----


Github has a stable business model which depends on their reputation as a host. As I understand it, that isn't something that could ever be said of SourceForge.

I'm not saying github will be around forever, but I highly doubt they'll make the same mistake sourceforge is making now.

-----


SF was highly reputable back in the day - why else do you think so many projects with roots back in the '90s are hosted there?

Never say never. 15 years ago nobody would've dreamt SF would have gone this way.

-----


Agreed. As a high schooler I loved sourceforge. I would talk it up to people and I had a couple of projects that I put up there. I thought it was the best thing since sliced bread.

Then I saw that famous talk by Linus on git in 2007 (https://www.youtube.com/watch?v=4XpnKHJAok8). Since I had never managed to get SVN working properly, git was awesome - no server software to install. By the time I wanted to put another project up on the web, Github was a thing and I used that. I never looked back; I loved how it was about the code, not how many installer downloads you had.

That for me was the main problem with sourceforge. In the end it was a game (for the devs) to get the most downloads because that was how your projects were judged and ranked. The Github "game" is slightly better and there are multiple ways to play.

-----


I don't think there were decent alternatives back then. Also, their business model has been based on web advertisement. It's a valid point that GitHub has a solid model and might not need to do shady stuff in the long run.

-----


> I'm not saying github will be around forever, but I highly doubt they'll make the same mistake sourceforge is making now.

Github could be sold, just like Sourceforge was sold, and the new owners could behave very differently from the current owners.

-----


It has been studied (first in 1909 by Henry Ford!). As I understand it the results agree with the original post.

In short, you can use crunch time strategically to meet deadlines but you always need a recovery period afterwards. If you work 60 hour weeks, in about 4 weeks your productivity will drop lower than it was when you were working 40 hour weeks - despite putting in 50% more hours. Also many people in this burnout zone will self report that their productivity is higher than it was (and they're wrong). If you've been crunching for a month straight, you're working ineffectively and you're too tired to tell.

Here's a presentation on the topic from Dan Cook, with links to papers: http://lostgarden.com/Rules%20of%20Productivity.pdf

And here's a fantastic write-up of a recent quantitative study in the games industry. They looked at how the success of video games correlates with crunch time and overwork. I suspect that these results would also hold true amongst startups: http://www.gamasutra.com/blogs/PaulTozour/20150120/234443/Th...

-----


This is not quite correct. Generating a random vector using three calls to rand() is the equivalent of picking a random point inside a cube. If you normalize those points, you're effectively squishing the corners of the cube in toward the center to make a sphere. Directions toward the corners of the containing cube will appear more frequently than directions toward the centers of its faces (the top, bottom and 'sides').

@ginko's comment is correct - you can fix the algorithm by throwing out any points that lie outside the sphere before normalizing.
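
A minimal NumPy sketch of that rejection approach (the function name is just for illustration):

    import numpy as np

    def random_unit_vector_rejection():
        # Keep drawing uniform points in the cube [-1, 1]^3 until one falls
        # inside the unit ball, then normalize it onto the sphere.
        while True:
            p = np.random.uniform(-1.0, 1.0, size=3)
            r = np.linalg.norm(p)
            if 0.0 < r <= 1.0:  # reject points outside the ball (and guard against r == 0)
                return p / r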

-----


As others have pointed out, the distinction between randn() and rand() is crucial here - the former gives you points from a (spherically symmetric) normal distribution, the latter gives you uniform points from the unit cube.

One of the advantages of the normal-sampling route over @ginko's rejection-based method is that in high dimensions almost all of the volume of the unit cube is situated outside of the unit sphere (the volume of the unit cube is always 1, whereas the volume of the unit sphere decreases exponentially with dimension). So the rejection method becomes exponentially slow in high dimensions, while the Gaussian method still works just fine.
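
A sketch of the Gaussian route (again, a hypothetical helper name); note that it works unchanged in any dimension d:

    import numpy as np

    def random_unit_vector_gaussian(d=3):
        # d i.i.d. standard normals have a joint density that depends only on
        # the radius, so the direction is uniform; normalizing projects the
        # point onto the unit sphere with no corner bias.
        x = np.random.standard_normal(d)
        return x / np.linalg.norm(x)

    # For comparison, the rejection method's acceptance rate is
    # vol(unit ball in d dimensions) / 2^d: roughly 0.52 for d = 3,
    # but around 2.5e-8 for d = 20.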

-----


Surely there would still be a bias towards the corners?

-----


Interestingly, there isn't for Gaussians (see my comment below).

Other simple distributions tend to give biases towards the corners or axes. Perhaps the Gaussian is unique in this regard? I'm not sure.

-----


In three dimensions (to keep the notation reasonably simple) you need a distribution with density f such that f(x)f(y)f(z) is a function of x^2 + y^2 + z^2 (i.e. the squared distance from the origin). It looks like the Gaussian (up to scaling) is the only one.

-----


Yes, the fact that i.i.d. Gaussians give a rotationally symmetric joint distribution is a defining property of the Gaussian.

-----


Ah I suspected as much. Can you provide proof?

I think I found one, but I'm not sure:

Without loss of generality, take f(xi) = k1 * exp(-g(xi)) [1], for some g. Then we need the joint pdf to satisfy f(x1,...,xn) = k2 * exp(-h(R^2)), R = sum(xi^2)^(1/2) (writing the argument as R^2 with some h(.) is w.l.o.g. too). So we get g(x1) + g(x2) = h(x1^2 + x2^2). Then, assuming the functions g and h are analytic, we end up needing g(x) = k * x^2, since otherwise we get cross terms in the Taylor expansion that can't be cancelled out for all xi. Sounds good?

[1] The function f trivially needs to be symmetric, justifying no loss of generality.
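
For completeness, here's a slightly different way to close the argument that avoids the Taylor expansion (a sketch, assuming g and h are differentiable and constants are absorbed into h):

    \begin{aligned}
    g(x) + g(y) &= h(x^2 + y^2)          && \text{(the } n=2 \text{ case)} \\
    \Rightarrow\ g'(x) &= 2x\, h'(x^2 + y^2) && \text{(differentiate in } x\text{)} \\
    \Rightarrow\ \frac{g'(x)}{2x} &= h'(x^2 + y^2).
    \end{aligned}

The left-hand side doesn't depend on y, so h' must be constant, say k; then g(x) = k*x^2 + c, and f(x) = k1 * exp(-g(x)) is proportional to exp(-k*x^2) - a Gaussian (with k > 0 so that f is integrable).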

-----


Perhaps coincidentally, someone just asked about this at math.stackexchange: http://math.stackexchange.com/questions/1255637/joint-pdf-of...

-----


Yes, that was me :)

I wanted to make sure I got a proof, since I didn't really find this elsewhere.

-----


You have to assume that the one-dimensional marginals are Gaussian for otherwise the statement is not correct.

-----


Hmm I'm not sure I get your point. I'm trying to prove that the joint pdf of N iid RV's is isotropic if and only if the RV's are gaussian. If I assume the pdfs are gaussian in the first place the proof isn't valid?

-----


Oh, I see. I thought you were trying to prove something different.

-----


No, OP is correct; he spoke about an isotropic normal distribution (randn as opposed to rand).

-----


I don't totally understand the math, but davmre is suggesting calling randn(), not rand() -- so sampling from a normal distribution instead of a uniform distribution.

-----


The math is fairly simple. The probability density of a multivariate standard gaussian has a simple form, f = a * exp(-(x1^2 + ... + xN^2)) = a * exp(-R^2), where a is a normalizing constant and R is the norm of the position vector - so it is obviously rotationally symmetric [1].

But that pdf is also the joint pdf of N i.i.d. gaussians, as is evident by decomposing f = a * exp(-x1^2) * ... * exp(-xN^2) [2], which is the joint pdf of x1,...,xN s.t. fx1 = c * exp(-x1^2), ..., fxN = c * exp(-xN^2).

[1] Since exp(-R^2) does not depend on direction but only on distance from the origin

[2] The fact that f(x1,...xN)=f1 * ... * fN if x1,...xN are independent follows directly from the fact that P(A & B) = P(A)*P(B) if A and B are independent events.
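
Written out with the usual constants for the standard normal (the derivation above drops the factor of 1/2, which only rescales the variance):

    f(x_1,\dots,x_N)
      = (2\pi)^{-N/2}\exp\!\Big(-\tfrac{1}{2}\textstyle\sum_{i=1}^{N} x_i^2\Big)
      = (2\pi)^{-N/2}\exp\!\big(-\tfrac{1}{2} R^2\big)
      = \prod_{i=1}^{N}\tfrac{1}{\sqrt{2\pi}}\, e^{-x_i^2/2}.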

-----


Great point, you are correct. It's not enough for the distribution of the random vectors to be rotationally symmetric. It must be isotropic, looking the same in every direction from the origin, which a cube does not. (Nor does any other method deriving from a polyhedral solid, as I was thinking might have been possible, through something like selecting a random face from an icosahedron then a random point on that face.)

-----


FoundationDB. Kyle didn't bother running Jepsen against FDB because foundationdb's internal testing was much more rigorous than Jepsen's. The foundationdb team ran it themselves and it passed with flying colors:

http://blog.foundationdb.com/call-me-maybe-foundationdb-vs-j...

Sadly, fdb has been bought by Apple[1], and you can't download it anymore. I sincerely hope foundationdb gets opensourced or something.

[1] http://techcrunch.com/2015/03/24/apple-acquires-durable-data...

-----


I believe the real reason was that FDB was not open sourced.

Jepsen tests can't be used to prove that a system is safe, only to prove that it isn't.

It looks like he very often looks into the source code in order to figure out how the system operates; that way he can find weaknesses and write tests for his testing framework to demonstrate the issue.

I wouldn't trust any company that uses Jepsen to show that their product is safe.

-----


[deleted]

He has said [1] that each test takes literally months to do. I am not surprised that he picks the most popular databases, given the amount of work.

[1] https://github.com/rethinkdb/rethinkdb/issues/1493#issuecomm...

-----


Actually, look at some of the things that @aphyr has tweeted about FoundationDB:

https://twitter.com/aphyr/status/542792308492484608

https://twitter.com/obfuscurity/status/405016890306985984

-----


I haven't heard of FlatBuffers before, but it looks similar to Cap'nProto[1] (which was written by the guy who wrote protobufs). Does anyone know how they compare?

[1] https://capnproto.org/

-----


Here's the comparison by the author of Cap'nProto: https://capnproto.org/news/2014-06-17-capnproto-flatbuffers-...

-----


I bet that's more than enough https://capnproto.org/news/2014-06-17-capnproto-flatbuffers-...

-----


So, there's a pattern I'm noticing again and again. Whenever a startup announces a new secure communication product, extremely knowledgeable and respected people in our community write posts talking about the (awful) mistakes they've made.

In the wake of the Snowden revelations, it's incredibly important that we bootstrap a usable new generation of software with security at its core. But I'm worried that lots of people will be afraid to make security a core feature of their product because of the bad press they'll get if they do it a little bit wrong. Frankly, the security community does not have a history of making systems that my mum can use. We really need an army of software engineers to work on fixing this problem, and I don't think blog posts like this are helping. (A notable exception is TextSecure.)

In this case, as I understood the press release:

- The QR code is used to initialize a direct (encrypted) bluetooth / wifi direct connection. Taking a picture of the QR code isn't enough to get the one-time pad.

- Cameras on modern mobile phones should be able to generate more than enough entropy (even if this isn't happening yet).

But there is no discussion of these points so far. Everybody is just piling hate on the product for its failings. We need to be better than that if we want a secure app ecosystem.

-----


People should be afraid to make security a core feature without actually getting it right. This stuff is literally a matter of life and death in some places, and a product that says it's secure but isn't is much worse than a product that doesn't say it's secure in the first place. Should we avoid criticizing airbag manufacturers for making their products spew shrapnel because we don't want to make them afraid of building safety equipment?

The actual problems here are that the claims made by Zendo are 1) pointless and 2) impossible. They're pointless because one-time pads offer zero real-world security benefit over a modern symmetric cipher. They're impossible because there's no way to securely generate and share a one-time pad the way they're saying.

For example, you say the QR code is used to initiate a direct, encrypted wireless connection. Encrypted how, exactly? With a symmetric cipher! Either a symmetric cipher is good enough to keep things secret, in which case the one-time pad is pointless, or a symmetric cipher is not good enough to keep things secret, in which case the one-time pad is compromised in flight.

If you want to make security a core feature and not get criticized, all you have to do is get your security right. If you want to get your security right, bring in a professional. Any crypto professional worth anything could have pointed out all these problems within five minutes of being given a summary of their design. They either didn't consult with a professional, or they did and ignored the advice.

Blog posts like this may not help bring about the security revolution you want, but they sure are helping to prevent a security disaster in the form of a bunch of completely broken programs leaking our info every which way while pretending to be secure.

-----


We need software that protects privacy, not security cargo cults and bizarre homebrew encryption schemes.

>... The first step is always optical, and that is an exchange of an AES 256bit key, plus an authentication key, and so those are the keys to encrypt the One-time pad as it’s being transferred ...

This software does not use a one-time pad. It uses AES-256 with some extra hoops. The authors are either incompetent, thinking that having a message-sized chunk of random data somewhere during encryption is a 'one-time pad', or dishonest, using 'one-time pad' as an empty buzzword. Both are huge red flags against using that software.

-----


I believe the issue involved hype about one-time pads being a 'game changer'. This is not the case, as you can use the QR code along with existing crypto primitives to do the same thing much more securely. E.g. the QR code could contain a public key and a nonce; send your public key and the nonce (signed by your key) and now both of you have enough information about the other to do Diffie-Hellman (create ephemeral keys).

In other words they are hyping and trying to sell people something that is at best as secure as current approaches, and more likely less secure.
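
To make that concrete, here is a rough sketch of such an exchange using off-the-shelf primitives (Python's `cryptography` package with X25519 and HKDF; the names and the `info` label are placeholders, signing/verification of the nonce is omitted, and this is not Zendo's actual protocol):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each side generates an ephemeral keypair; the QR code only needs to
    # carry one side's public key plus a nonce to bind the session.
    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()

    # Public keys are exchanged (e.g. one via the QR code, one over the radio
    # link); in a real protocol each side would also sign the nonce.
    alice_pub = alice_priv.public_key()
    bob_pub = bob_priv.public_key()

    # Both sides derive the same shared secret and run it through a KDF to
    # get a symmetric session key - no one-time pad required anywhere.
    shared_a = alice_priv.exchange(bob_pub)
    shared_b = bob_priv.exchange(alice_pub)
    assert shared_a == shared_b

    session_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"demo-handshake",
    ).derive(shared_a)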

-----


To be fair, it is much more common that these new apps add a dash of crypto to spice up their marketing than it is that they carefully approach the problem of creating secure channels.

People that want to do the latter should go out of their way to avoid appearing to be the former, not call up Techcrunch to show off their unicorn.

-----


No, not at all. At least in the teardowns I've seen here on HN, the companies make big mistakes, market it like they are perfect, and are arrogant regarding fixing things.

If companies launched with a real crypto review, this wouldn't happen. On the flip side, most companies wouldn't launch at all because their product is simply inherently insecure. Or, it addresses a mostly useless threat model. For instance, it requires you to trust their server while claiming to be super duper secret.

Even Silent Circle, which touts some famous names, talks up their secure PSTN service which is so obviously interceptable it's bizarre to make any security claim at all.

The problem with crypto your mum can use is that key management is a pain in the ass. And if users don't verify and maintain their own keys, most systems are useless against strong threats. zRTP is the closest thing to great, because there's a simple in-but-out-of-band key verification system built in.

And if you say your mum doesn't need that much security - she just doesn't wanna be in a dragnet or have her ISP read her email - then choosing, say, Gmail, and emailing other users with TLS-enabled SMTP servers gets her pretty damn far (if SMTP actually verified certs). But that's not a very sexy product, is it? (Despite being what many corporations do -- configure explicit SMTP-TLS policies.)

-----


>> The QR code is used to initialize a direct (encrypted) bluetooth / wifi direct connection. Taking a picture of the QR code isn't enough to get the one-time pad.

If, as the article says, the QR code gives a symmetric AES key, then yes it could well be enough to get the OTP if you've eavesdropped on the traffic as well.

>> But there is no discussion of these points so far. Everybody is just piling hate on the product for its failings. We need to be better than that if we want a secure app ecosystem.

What we need to be better at is recognising that "Hey, look at my brand-new awesome crypto-system!" is a huge red flag, and what we should be looking for is "Hey, we're using tried and tested protocol X, which we're pretty confident about".

From an app-writer's perspective I can see why they want to announce that they've made this awesome new security system - they want to differentiate themselves, and security is in the public eye right now so seems to be a selling point. Human nature being what it is, new and shiny is eye-catching and good for sales. But we really want pretty much the opposite here.

-----


Nothing keeps startups from asking crypto experts for consulting and guidance. Nothing keeps them from reading publicly available docs and rants.

-----


Companies should be implementing good security not because of the good press it will get them but because of the bad press it won't get them.

-----


Bad press rarely costs as much as good press makes.

Plus, even bad press is getting your name out there, another form of marketing which can be spun by the company into revenue.

-----


In this case I meant the bad press that results from a security leak.

-----


Security leaks are so common now as to be a non-event for 99.9% of the populace. Even I just go change a password or two and move on with my life.

-----


While I understand that that's primarily a normative statement, you have to have something significant to lose to be worried about the potential losses. It's going to make more sense, initially, for a company to invest in activities that they expect to generate revenue than it is for them to invest in activities that secure the source of that revenue. It would only be once you already had significant liabilities that it would start to make sense to invest in security, and even then only if you expected breaches of security to result in significant loss with respect to the source of that revenue (which is questionable - if you've achieved significant market leverage, people may not be that worried about breaches of their data; I'm reminded of this sort of thing: http://www.huffingtonpost.com/kyle-mccarthy/32-data-breaches... ).

-----


The fundamental problem is that cryptographers are working on this, but it's hard. Security is always about tradeoffs, and when actual lives are on the line people need to understand what tradeoffs they're making.

For instance, a messaging protocol that authenticates its participants can be strongly privacy-preserving for the initiator or recipient, but as far as we know, not both. One side can initiate the connection anonymously, but at some point one or the other must identify themselves before the other.

Another issue is that crypto is not, and may never be, at the point where anyone can securely assemble a protocol from raw primitives (e.g., AES or SHA-2) without specialized knowledge. These primitives provide security features that are well-defined mathematically, but that definition may not map well to intuitive real-world security properties. As an example, a one-time pad done correctly is information-theoretically secure. But it's trivial to tamper with, because there's no authentication of the message. AES-CBC has the property of indistinguishability under chosen-plaintext (IND-CPA) and chosen-ciphertext (IND-CCA1) attacks, both of which are considered necessary properties for a block cipher to be secure. For a long time, we thought this was good enough. Unfortunately, IND-CCA1 didn't model adaptive chosen-ciphertext attacks, where an attacker uses the outcome of previous decryption attempts on chosen ciphertexts to influence his next choice of ciphertext to submit for decryption. Authenticated modes provide indistinguishability under adaptive chosen-ciphertext attacks (IND-CCA2), but unauthenticated modes like CBC remain popular (they do still have their uses, but you need to understand what the risks and tradeoffs are before using an unauthenticated mode like CBC).
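
As a small illustration of the authenticated-mode point, here's roughly what that looks like with AES-GCM via Python's `cryptography` package (a sketch of the primitive, not a full protocol design):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)            # 96-bit nonce; must never repeat for a given key
    plaintext = b"attack at dawn"
    associated_data = b"header-v1"    # authenticated but not encrypted

    ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)

    # Decryption verifies the authentication tag; tampering with the ciphertext
    # or the associated data raises InvalidTag rather than silently returning
    # garbage, which is the failure mode of unauthenticated modes like CBC.
    assert aesgcm.decrypt(nonce, ciphertext, associated_data) == plaintext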

We likewise don't know of a good, user-friendly way to authenticate identities without the use of a central authority (à la certificate authorities) or a web of trust (à la GPG). Both suck. With CA-style authentication, if a CA is compromised, you lose many of the security guarantees. This is perhaps good enough for e-commerce, where governments typically have little incentive to steal credit card numbers, but it's potentially catastrophic for communication between political dissidents. A web of trust is perhaps better for this kind of use-case, but nobody knows how to make the process of bootstrapping the web of trust simple, and the decision of whom to trust is inherently complicated (Alice trusts Bob and Carol, who both sort of know Dave, who vouches for Eve implicitly; how should Alice feel about Eve?). You can hide that complication at the cost of decreasing the utility (trust conservatively) or increasing the risk (trust liberally). Or you can expose that complexity and let individuals weigh the risks themselves.

The more of these decisions and choices you hide from the user, the more user-friendly you can make your system. But the cost is control over exactly whom and how you trust. Unfortunately, if you build one easy-to-use system for people to securely share cat photos and one hard-to-use system for political dissidents, it's easy to identify the political dissidents.

Anyone who solves this problem deserves a fucking Nobel.

-----


It ends up being a product of the fact that the smartest crypto folks aren't software developers, and the smartest software developers (or the most innovative -- the guys starting companies or building products) aren't crypto folks.

So you have a bunch of what are effectively scientists who aren't all that familiar with how products get made (first build something, release it, then refine it over time through feedback), and you have a bunch of developers who aren't familiar with how security products are managed (you don't release anything until you're willing to bet people's lives on your product).

What you're left with is an ecosystem of developers releasing code that's not cryptographically sound because they need feedback and a community to help them refine the software, and crypto scientists who scream bloody murder every time they find a bug because, "what if the next Snowden uses your new tool!?!"

Everyone's right in this argument, it's just hard to bridge the gap between these two very different schools of thought.

-----


Snowden and Greenwald actually used one of the most popular of those tools, as you know, and according to the person who did the most conclusive, reputable, and devastating security analysis of that tool, they used it during a time when it was catastrophically broken.

The clear message you are trying to communicate is that the security community hyperventilates over security problems to the detriment of the community. That's not what happens. Look at Lavabit --- another example: the problem with serverside encrypted Internet email was absolutely written off as hyperventilation, to the detriment of every single Lavabit user, whose communications were almost certainly recorded and are, as a result of the way Lavabit handled a request for Snowden's communications, retroactively decrypted.

-----


Since our prior conversations, I've softened on my stance and at this point I think it's mostly just a UX issue. I don't think you or any other expert is wrong in being as... "hyper" about the problems in current software. I just think "your" (as the stand-in for crypto experts everywhere) solutions are, while strong, not the best UX, and I consider UX to be a critical part of security -- if people won't use it, then it's not good.

All I'm saying here is that there's a disconnect between security and usability that needs to be bridged, and sometimes one side forgets about the difficulty of correctly implementing the other.

-----


I'm sorry, but just to make sure I'm getting my point across, I'm going to restate it more simply:

Security people freak out at new amateur encryption tools because when they don't, those tools get adopted by hapless users who entrust real secrets with them. It's not a theoretical concern; it happens over and over again in the real world. It happened to Snowden twice, once with Cryptocat and once with Lavabit. In both instances, crypto experts begged people not to use those tools. In both instances, misguided privacy advocates pushed back on the crypto experts. The tools either didn't get fixed (because, in Lavabit's case, they couldn't be) or didn't get fixed in time. Almost certainly, NSA has decrypted intercepts as a result.

It's happening right now with other tools. It will, with 100% certainty, happen again with Cryptocat.

During the snake oil era of medicine, people used to take radium tonics as a cure-all. That's what these tools are: radium tonics.

-----


It doesn't help that Nadim seems to have stopped supporting Cryptocat to focus on Peer.io instead.

-----


That's not really what I'm talking about, though it is a manifestation.

"Don't use the tool" is a cryptographically sound piece of advice, given what you know about the specific implementation, but it's utterly useless for Cryptocat, from a product standpoint. What's a developer supposed to do when a crypto expert just says, "no"? That's terrible feedback, it needs to be better, and it can be better (sometimes it actually is better, I think you've mentioned in the past that you've lent a hand to Cryptocat).

As with every other "theory vs. practice" discussion, the theorists need to tone down the rhetoric, and the practitioners need to actually listen to what the theorists are saying.

-----


To be clear: I've never done anything to help Cryptocat, nor will I.

The right thing to happen with Cryptocat is for it to be scrapped. Its security issues pervade beyond bad cryptography. Its users are literally better off with Google Hangouts or iMessage.

And, in fact, the same was true of Lavabit: there was no recommendation that would allow a mail service that worked (for users) the same way Google Mail does to resist serious attacks. But that was the premise of the system! It needed to be scrapped.

-----


And this is the nonuseful advice that the crypto experts need to stop giving. Joe Developer sees this advice and decides not to write crypto software.

That's unambiguously bad, because the software industry thrives on failing a lot until a good solution is found.

There needs to be a way to develop crypto software that falls in a land of, "Don't use this yet, but maybe someday after we're done testing it, you can start using it and be secure" rather than "nope, just scrap the whole thing".

-----


I am not a crypto expert and don't have a position wrt this particular problem domain but:

"the software industry thrives on failing a lot until a good solution is found"

may be at the heart of the disagreement. It is true that some parts of the software industry are best served by this style of iterative failure based progress, but it is unfair to say that the entirety of software is best developed this way.

There exist whole classes of problems where the failure modes of their solutions are quite simply worse than any potential value those solutions can provide. Those classes of problems should NOT be iteratively developed; rather, they should be rigorously engineered and formally verified.

I suspect your disagreement stems from whether you think crypto systems exist inside or outside that problem set.

-----


You are wrong on practically every account:

> Joe Developer sees this advice and decides not to write crypto software.

False. Moe Manager stomps his foot and makes Joe Developer wire together "something good enough", regardless of Joe's qualifications. Since neither Joe nor Moe are able to tell how good a crypto system is, they invariably end up with something in the awful-to-kludge spectrum. Both Moe and Joe need to be repeatedly reminded that they don't know enough about this non trivial problem to make good enough calls.

> That's unambiguously bad, because the software industry thrives on failing a lot until a good solution is found.

"Unambiguosly bad" is a thought stopper, a rhetoric trick. This just means that since you don't like any of the options made available to you by reality (like... paying big bucks to a real expert), self deception is preferable.

Now, the fact that our software industry thrives on failure... I am ashamed to say that this just means our industry is really good at externalizing the consequences of our own incompetence to the customers. This is all good and fun, until real lives are on the line. Crypto is one of those cases, so it is one of those lines you don't cross.

> There needs to be a way to develop crypto software that falls in a land of, "Don't use this yet, but maybe someday after we're done testing it, you can start using it and be secure" rather than "nope, just scrap the whole thing".

Are you familiar with the term "Security by Design"? It is one of those pesky realities I was talking about. It basically says that you cannot create a product giving no concern to security, sprinkle some crypto fairy dust on top of it, and magically turn it into a secure product. If you try to do that, you'll end up with a product with broken security.

Which is not all that bad in practice. There are lots of useful products that we are all better off having in insecure form than not having at all. But if the whole point of product X is to be "Y, but done securely", there is no redeeming feature that can save X from the axe.

-----


I'm not sure how to respond to this. On one hand you talk about "rhetorical show stoppers", and on the other hand you reply to a statement I've made with the one word sentence, "False.", and in another sentence you say I'm "wrong on practically every account".

I guess I could point out that there lies, in a continuum of software, a place for experimental products which offer nothing other than the exploration of an idea and the opportunity to refine said idea through testing and discussion. I could also point out that your strawman of "sprinkle some crypto fairy dust" is entirely irrelevant to this conversation, and that your lack of understanding of how threat actors are modeled is evident in your inability to comprehend a tool that might protect against certain threats but can acceptably not protect against all threats.

But I think those points would fall on the "ears" of someone who just wants to be combative. Maybe someone else will pick up where you left off and engage in a more honest and less antagonizing manner, but this deep in the comment tree, "here be dragons".

-----


I guess I am sorry that we ended up at the bitter ends of a pointy discussion. For what it's worth, I was not trolling. I stand by everything that I said (especially the "sprinkle of crypto fairy dust", which I have seen IRL more often than I wish). If I am ignorant or not, I am by definition unqualified to tell.

Other than that, if you prefer to have non-combative discussions, please stop bossing professionals around and telling them what to do. Especially if they are not providing a service you paid money for.

-----


I am a professional and I "boss" other professionals around on this topic (admittedly, mostly other topics, but this specific one does come up) for a living.

But it's funny, because what you're saying here (don't "boss" people around) is similar to what I am saying, with regard to experimental crypto software. The crypto experts need to let the developers develop, and find a constructive way to contribute to that process, rather than chicken-little every time a bug appears.

Imagine software that comes out, stating: "This software is not meant to do anything other than hide your porn from your mother. Please do not use this software to do anything other than protect yourself against a threat who has limited understanding of computers and software."

As that software matures, the disclaimer could expand to, "This software is not meant to do anything other than protect information against low-level criminal activity. Please do not use this software to do anything other than to protect your information from larceny or identity theft. This software will not protect you against a sophisticated or well-resourced actor."

Eventually expanding to, "This software has been tested by a community of crypto experts. While there is currently no known reason why this software won't protect your information from all bad actors, there is no guarantee in place that your information is absolutely safe. Always exercise extreme caution when protecting your data from sophisticated actors."

The idea being that, once an experimental piece of software becomes "good enough", it can be used to protect against a more diverse threat model.

The idea that you have to ship an NSA-proof product on day 1 is completely unfeasible and will never happen, and the crypto scientists out there need to realize that.

-----


You keep talking about bugs and therefore you keep missing the point. As a matter of fact, I tend to agree with you that the infosec community would benefit enormously by adopting some of the best practices that we lowly programmers take for granted. There would arguably be fewer bugs, and fewer high-profile bugs, that way.

What I am trying to point out is that you cannot fix a flawed Security Design by incremental improvements. I am not talking about individual coding defects or even desirable features that have been left temporarily unimplemented on purpose. I am talking about Stupid!Ideas that get implemented because there was nobody around who recognized them as such, and then the usual traits of human nature (denial, previous investment bias, rationalization, etc.) prevent the incumbents from scrapping those ideas until insurmountable evidence is assembled.

The article in question talks about using One Time Pads (OTPs) to conceal secrets. This is an idea that sounds good - OTP is mathematically proven to be "unbreakable" and all that... The problem is when theory meets practice and nobody asks the hard question: how are sender and receiver going to share the OTP prior to sending the real message?

This is not a trivial question, given the fact that the OTP must have the same number of bits as the message it is protecting. You also cannot "recycle" a single OTP for many messages (that would be a Many Time Pad, which by definition violates the very preconditions that make an OTP secure).
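
A tiny sketch of why reuse is fatal: XORing two ciphertexts encrypted under the same pad cancels the pad and leaks the XOR of the two plaintexts, which is usually enough to recover both.

    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    pad = os.urandom(16)                    # supposed to be used exactly once
    c1 = xor(b"MEET AT MIDNIGHT", pad)
    c2 = xor(b"RETREAT AT NOON!", pad)      # same pad reused - the mistake

    # The pad cancels out: c1 XOR c2 == m1 XOR m2, with no key material needed.
    assert xor(c1, c2) == xor(b"MEET AT MIDNIGHT", b"RETREAT AT NOON!")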

So, you end up with the scheme used in spy movies (I have no idea how real spies worked during the Cold War). Each spy carries around a code-book of OTPs that he received by hand from a trusted contact. Being in possession of such a book is itself proof of being a spy, so he has to take great care to hide it, which is hard because it is a whole book (OTPs for lots and lots of messages). Also, the spy agency has the cumbersome job of providing a different code-book to each spy (because otherwise, when the enemy catches one spy, they could theoretically run the OTPs against all previous communications from suspected spies and have each one of them rounded up and shot before word gets out that the first spy is in prison).

This example also serves to illustrate another point. Maybe a Bad!Idea can be made to work in a limited way, if need is dire and resources are plenty. But any organization which kept secrets of real importance and tried to rely on this kind of solution would end up eaten alive by any adversary that had heard the words "asymmetric cryptography" spoken together.

-----


I'm trying to understand how you could possibly think I've been talking about "Bad!Idea" at all.

I really can't find in anything I've written where I talked about writing a "bad" piece of crypto software. In fact, I've been talking about a potentially "good" piece of crypto software that isn't getting written because no software is perfect, but that's what crypto experts are demanding out of the gate.

Can you more succinctly sum up what you're trying to say? Do you think software shouldn't be written in the crypto space unless it's perfect from v0.0.0? I can't imagine you think that, so I'm wondering what exactly it is you're arguing here.

-----


> the software industry thrives on failing a lot until a good solution is found.

Except where it is important. Failing a lot in crypto can have deadly consequences that are not visible until it is too late.

-----


>> Joe Developer sees this advice and decides not to write crypto software.

He probably shouldn't. That's pretty much the point.

(--edit-- or he should consult with people who have studied this, find out about best practice, use existing solutions where possible, etc etc)

-----


Why must he? Why can't he build whatever he likes? Who are you to tell Joe what he can and can't do with regards to software development?

This is the problem -- you and others are trying to perpetuate the idea that software must always be "complete" the moment it's put on GitHub. Why must Joe Developer adhere to your rigorous definition of what a software product is?

Joe Developer should be free to tinker, and if he wants to tinker with crypto, crypto experts should let him. Currently they don't, and that's why we don't have any good crypto software.

-----


>> Joe Developer should be free to tinker, and if he wants to tinker with crypto, crypto experts should let him. Currently they don't, and that's why we don't have any good crypto software.

He can tinker.

He can learn.

He can do any of the many excellent free courses, tutorials and challenges on the net.

He can implement attacks against toy problems, and real problems.

And eventually, he can get deeper into implementing it.

What he will absolutely and rightly be shot down for is making unverifiable claims about the security of what he's tinkering with, and trying to profit from those claims.

I'm sorry if you have a problem with that.

-----


When did he make unverifiable claims? When did he profit from his claims?

It's a nice straw man, but not what I suggested at all.

-----


>> When did he make unverifiable claims? When did he profit from his claims?

When he launched an app that claimed to be the new safest thing ever, surely? Perhaps with an unbreakable One-Time-Pad. You know, the stuff we're commenting on here...

-----


> You know, the stuff we're commenting on here...

Nope, not at all the stuff we're commenting on. We're commenting on the stuff that hasn't been written yet and probably won't be written at all because of your negative attitude.

-----


If it won't be written because people get shot down when they make outrageous claims about their security then I'm ok with that. Everyone should be.

I'm not sure what it is you want here. Nobody's telling you not to use it. Nobody's telling you not to tinker. Everyone's encouraging you to learn, and giving you advice on the best way of doing that. You seem to want every bad, amateurish attempt at novel crypto, accompanied by self-aggrandising copy, to be hailed as if it were awesome. It's not.

-----


> If it won't be written because people get shot down when they make outrageous claims about their security then I'm ok with that. Everyone should be.

Again, you strawman me. Where in my argument did I talk about making outrageous claims? I would very much appreciate it if you separated "make outrageous claims" from "develop open source security software".

-----


>> Again, you strawman me. Where in my argument did I talk about making outrageous claims?

You appear to be annoyed that folks write take-downs on software like the one linked to by the article. This is the outrageous claim. If you feel this is not what you're saying then fine, I'm not sure we actually have an argument.

>> I would very much appreciate it if you separated "make outrageous claims" from "develop open source security software".

There's no reason not to develop OSS security software. There's every reason to put up big warnings saying you can't vouch for its security. There's every reason to ask people who know what they're talking about to take a look, and to be humble when they point out flaws, and to rework it if that's the case. There's every reason to take some of the excellent free courses.

But there's also no reason to go off-piste. Why develop this crazy new OTP scheme? There's existing crypto for exactly these use cases. Leave the devising and proving of new crypto constructs to the deeply-versed and the academics, who know how to prove this stuff (and disprove it).

It's like saying "I can't write my OSS networking project because people keep telling me not to rewrite the TCP stack without learning about TCP first". Do you need to rewrite the stack? If so, if you want to do a good job then it's going to take a hell of a lot of knowledge and training. Why not use the OS API?

-----


Your need to be right has driven us so far afield from the original topic, I'm actually somewhat impressed.

I don't know what you think I wrote, but no one here at any time suggested writing a new crypto scheme.

It's 4 days later. I've made my point over and over again, and you understand it. You have even restated my point yourself, in your second paragraph.

I'm done. You agreed with me, and everything else you're trying to talk about is just you begging to be validated. I can't feed this anymore, sorry.

-----


I think we probably still have a fundamental disconnect here.

Can I ask you, when I say - "people writing software products generally shouldn't be creating novel cryptosystems" - what is it you hear?

Do you hear someone saying "you shouldn't be writing software that uses encryption" ?

Because that's not what I'm saying.

A cryptosystem is something like what's going on here - we have messages secured by OTPs, generated using SecureRandom and exchanged over an encrypted channel using an AES key displayed on the screen for the other party to scan.

So what I'm saying is "hey, why not use the industry standards for your crypto? Inventing cryptosystems is way hard and you probably got it wrong. You can still make your cool apps on top of it!"

-----


With fairly minimal training one can learn to use existing, secure constructions. Hell, there are now protocols designed specifically for this, open source ones at that.

In fact everyone who even considers touching anything crypto-related should know one simple thing - do not invent your own schemes!

-----


If it's minimal, can you please outline the training required to write secure code?

Perhaps you could forward that minimal training to the openssl folks, and maybe you could singlehandedly solve the security crisis currently taking place in software.

-----


>> If it's minimal, can you please outline the training required to write secure code?

Nope, because that's not what I claimed.

-----


Then can you please outline specifically what training would be required to accomplish... whatever it is you did claim that was, "relatively minimal"?

What classes? What books? What certifications or degrees?

-----


My suggestion is: work your way through the Matasano Crypto Challenges. They're free, just sitting there on a static web page, with no signup required.

Actually implement cryptographic attacks before trying to design crypto.

That's not just my recommendation; it's also what Bruce Schneier has been writing for decades.

The problem with my suggestion is that most people who work their way from block cipher attacks through RSA error oracles are "scared straight", and lose the desire to plunk fancy crypto into their applications. Once you see how subtle some of the most devastating attacks are, you start to see why so few crypto engineers write chat apps.

-----


> The problem with my suggestion

I fail to see the problem...

-----


Personally I would recommend (if this is a serious question and you're not just trying to challenge my assertion about minimal training) putting yourself through Coursera's Crypto 101 and learning to use a library like NaCl, or borrowing a well-reviewed open protocol like TextSecure's, which is used in TextSecure, Signal and (lately) WhatsApp.

I'm not saying that with minimal training you can write an OpenSSL equivalent, or even create a novel cryptosystem on top of OpenSSL; people writing software products generally shouldn't be creating novel cryptosystems in the first place (and I speak as someone who implements crypto-systems for a living, usually very well-defined crypto-systems from banking standards).
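
For instance, here's roughly what "using an existing construction" looks like with PyNaCl (the Python binding to libsodium/NaCl) - the library picks the cipher and MAC and handles the nonce, leaving little room to improvise:

    import nacl.secret
    import nacl.utils

    # One 32-byte symmetric key; SecretBox uses XSalsa20-Poly1305 and
    # authenticates every message for you.
    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
    box = nacl.secret.SecretBox(key)

    encrypted = box.encrypt(b"hello, world")   # a random nonce is generated and prepended
    assert box.decrypt(encrypted) == b"hello, world"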

-----


I'm highly doubtful that a person who's done the training you've recommended will be capable of writing secure software.

The fact that you think it's that easy is a wonderful example of why there's such a disconnect between developers and crypto experts.

-----


For the second time, I didn't claim they could write secure software! I'm not even claiming I can write secure software, it's a hard problem.

What they can do is use existing, secure constructions, which at least gives them a fighting chance. Inventing new stuff does not.

-----


You did claim they could write secure software, at least that's what you wrote.

If that's not what you meant, then I understand.

-----


I think we must be miscommunicating here, I'm genuinely not seeing where I said that (it would not be the first time I have been hilariously unobservant though).

The nearest I can see to that is where I said - "With fairly minimal training one can learn to use existing, secure constructions." - which I think I've qualified/explained now?

-----


He's misunderstood you, and then run away with his misunderstanding, perhaps because he thinks it's a more interesting back-and-forth to bounce around than what you actually said.

My reading of what you wrote is: it doesn't take much training to use Libsodium. To the extent that providing cryptographic security is important, you can solve the problem by delegating it to the Nacl designers and Sodium implementors. That's mostly true.

I did not see you making an argument about how simple the entire security problem was to solve.

-----


Precisely this. "It's not hard to learn to use a crypto library" is not an interesting conversation, and so I actively tried to have a more interesting conversation about "it's actually pretty damn hard to implement crypto software".

Since when is learning a single crypto library the end-all solution to writing good crypto software?

-----


>> I actively tried to have a more interesting conversation about "it's actually pretty damn hard to implement crypto software".

You might have said that!

>> Since when is learning a single crypto library the end-all solution to writing good crypto software?

It's not, but it's a start, a start which all of these stories that hit HN and get torn to pieces are missing.

-- edit --

When I made my comment that you can learn to use existing things with minimal training, I should have padded it out by telling you the reason I was saying it: learning to use well-understood crypto primitives and (if you can) pre-provided crypto implementations is far from all you need to make secure software. But it's a start.

"Hey, come and look at our neat new crypto, it's awesome!" is a big red flag, and it's a red flag with a subtext that reads: We don't know what we're doing.

Software that does use good constructions and good implementations can still be done very badly, it's true.

To go back to your first comment, you say there's this gap between the way crypto people work and the way developers develop - that may well be so, but in most of the messaging apps we see here on HN the developer hasn't really even attempted to bridge that gap.

Think about it like cooking a meal - you're asking what it takes to become a top chef but in the case we're discussing they didn't even start their marinara sauce with tomatoes.

-----


There is infinitely more to writing secure software than correctly using secure cryptographic constructs.

-----


This is what I should have said a few messages up the chain!

Writing secure software is hard and covers a lot more than your choice of ciphers, protocols etc. It's possible to make all the right choices and still make insecure software for so many reasons. OpenSSL was referenced up-thread - simple buffer overruns/lack of bounds checking is one way in which that failed (heartbleed).

In order to write secure software one needs first to define what we're aiming at, what does secure mean in the domain in which we operate? And that's non-trivial in itself.

But by choosing well-known constructions and (hopefully) well tested implementations we can minimise some classes of potential problems.

-----


> smartest crypto folks aren't software developers,

This dismisses the talents of DJB who is one of the better software developers and a very good cryptographer.

-----


"We really need an army of software engineers to work on fixing this problem"

That is what the team at Ionic Security* is doing. With about 50 open engineering positions we are hiring :)

My email is adam at our domain.

* I'm the founder/CTO

-----


Speaking of OpenSSL, what state are the competing libraries in at the moment? I'd love a version of OpenSSL without all the potentially-insecure legacy code, given all the problems it's had.

Are there decent implementations of OpenSSL in more secure languages like Rust?

-----


A full rewrite of OpenSSL in a better language like Rust will probably take a very long time to be production ready.

Your best middleground is LibreSSL, which is still C but is at least written by developers with huge amounts of experience writing secure C.

-----


There's LibreSSL: http://www.libressl.org/

But it's not in a more secure language.

-----


Go has a TLS package and dependencies which are almost entirely written in Go: http://golang.org/pkg/crypto/tls/

-----


Go's TLS stack is very nice, but beware of the default tls.Config (which includes insecure 3DES and RC4 cipher suites): https://wiki.mozilla.org/Security/Server_Side_TLS#Go

Also, it might be vulnerable to side-channel timing attacks: https://www.imperialviolet.org/2013/02/04/luckythirteen.html

-----


F# has a proven implementation, but I doubt that's what you're after. OCaml is also getting one for Mirage, but I don't know its status, and it won't be proven correct.

-----


miTLS considers timing attacks as out of scope for the proof, so it's unclear if it's actually secure.

-----


Past Discussion about Ocaml-TLS[1] also mentions Rust somewhere.

[1] https://news.ycombinator.com/item?id=8005130

-----


Do people really believe in more secure languages? Are they the same people that think switches make networks secure? Switches don't and neither does a given language. I recall a CTO that would not allow C++ development because he thought the language was insecure. Java was the only language allowed. Even college courses are still teaching that security is one of the benefits of the virtual machine. We only have to look at all the patches for java to see that it hasn't been secure. Then we look at every other software that has been patched to see that nothing is secure.

Please stop perpetuating the myth that security is produced by a programming language. People make security happen just like they make it not happen. Obligatory Schneier: https://www.schneier.com/blog/archives/2008/03/the_security_...

-----


> We only have to look at all the patches for java to see that it hasn't been secure.

All those big security issues aren't in the Java language, they are in the JVM running untrusted Java byte code. Not to say that situation isn't bad, but you can't compare it to C++ because nobody ever thought running untrusted C++ code without some other sandboxing was a good idea.

That aside, memory safety is great for security. Of course there are 1000 other things that are important too, and so I'd trust a C program written by a security expert much more than the same program written by someone who thinks his program is secure because he used Java. But I'd feel even better if the security expert used a memory-safe language, because I am certain that all C programs above a certain size are vulnerable to memory attacks.

-----


> Not to say that situation isn't bad, but you can't compare it to C++ because nobody ever thought running untrusted C++ code without some other sandboxing was a good idea.

This is actually kind of a point for the other side. You can sandbox code regardless of what language it's written in. Maybe what we need is not better languages but better sandboxes. Even when code is "trusted", if the developer knows it doesn't need to e.g. write to the filesystem or bind any sockets then it should never do those things and if it does the OS should deny access if not kill it immediately.

-----


Isn't this exactly what SELinux does but nobody bothers to configure the rules?

-----


But sandboxing does nothing to protect information if the information resides in the sandbox (sandboxing wouldn't have stopped heartbleed).

Rust and friends aren't going to make all security issues go away, just as sandboxing would not. There is no one true silver bullet in security, at least not yet.

-----


> This is actually kind of a point for the other side.

I wanted to move the goalpost from "Java is insecure" to "the Java sandbox is insecure". I completely agree with the second statement, so I don't think I made a point for any other side.

-----


You made the point that I was trying to make: implementations are not secure. A programming language can follow a philosophy but implementations never quite line up with the theory. We only use implementations of the theory and experience shows that implementations all have vulnerabilities.

-----


I'm sorry if I misrepresented your post, but I feel you do the same to mine. I didn't say the JVM is insecure - I said the sandboxing part of the JVM is insecure and C++ doesn't have anything comparable.

-----


True, and a malfunctioning sandbox is worse than useless.

People tend to base security on them. Google did in their AppEngine cloud, but they put a lot of engineering resources and defence-in-depth behind it.

-----


> a malfunctioning sandbox is worse than useless

Are there any sandboxes in existence which are definitely not worse than useless?

-----


seccomp is simple and useful, in both incarnations.

-----


I don't think security is produced by picking one language or another, but I do believe that it's harder to write secure code in a language like C than a language like Java or Rust. There are simply way, way more ways to shoot yourself in the foot.

-----


> I don't think security is produced by picking one language or another, but I do believe that it's harder to write secure code in a language like C than a language like Java or Rust. There are simply way, way more ways to shoot yourself in the foot.

The trouble is that everything is a trade off. It's very hard to get a buffer overrun in Java but that doesn't make Java a good language. It tries so hard to keep you from hanging yourself that it won't let you have any rope, so in the instances when you actually need rope you're forced to create your own and hang yourself with that.

For example, you're presented with garbage collection and then encouraged to ignore object lifetime. There are no destructors to clean up when an object goes out of scope. But when it does, you still have to clean up open files, or write records to the database, or notify network peers, etc. Which leaves you managing it manually and out of order, leading to bugs and race conditions.

In other words, C and C++ encourage you to write simple dangerous bugs while Java encourages you to write complicated dangerous bugs.

That isn't to say that some languages don't have advantages over others, but rather that the differences aren't scalar. And code quality is by far more important than the choice of language. BIND would still be less secure than djbdns even if it was written in Java.

-----


Not that this really proves anything one way or the other, but remember that in regard to the Java SSL implementation shipping with the JDK, it was very recently found that:

"...the JSSE implementation of TLS has been providing virtually no security guarantee (no authentication, no integrity, no confidentiality) for the past several years."

-----


I don't understand this argument. For example, if I use a language that doesn't allow buffer overflows to happen, I've eliminated an entire class of security bugs caused by programmer error. Why would you not want to use such a language? Performance and existing libraries will obviously factor into this, but I don't understand why you wouldn't consider security built into the language as a benefit.

Yes, security issues are found in Java and every other language, but when these are patched all programs that use that language are patched against the issue. The attack surface is much smaller.
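As a small illustration of what "eliminating the class" means in practice (a sketch in Rust; the buffer and index are made up, and the same read in C would be silent undefined behaviour):

    fn main() {
        let buf = vec![0u8; 8];
        let i = 12; // imagine an attacker-controlled index from a parsed packet
        // In C, buf[i] would quietly read whatever sits past the end of the
        // allocation; here the bounds check turns it into a clean panic.
        println!("{}", buf[i]);
    }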

-----


> Yes, security issues are found in Java and every other language, but when these are patched all programs that use that language are patched against the issue. The attack surface is much smaller.

All patches work like that; when there is a bug in libssl and OpenSSL patches it then all the programs using libssl are patched. The difference with Java is that when a C library has a bug only programs using that library are exposed but when Java has a bug all Java programs are exposed. Moreover, Java itself is huge. It's an enormous attack surface. Your argument would hold more weight if the "much smaller" attack surface actually produced a scarcity of vulnerabilities.

-----


> For example, if I use a language that doesn't allow buffer overflows to happen, I've eliminated an entire class of security bugs being caused by programmer error.

There are several assumptions behind "if I use a language that doesn't allow buffer overflows to happen" which you aren't taking into account. For instance, are you entirely sure that the implementation of that language's compiler will not allow buffer overflows to happen? We have a good example of a possible failure of that model in Heartbleed: when it came up, a bunch of people in the OpenBSD community raised their eyebrows, thinking: hmm, that shouldn't happen to us, we have mitigation techniques for that. Turns out -- for performance reasons -- OpenSSL was implementing its own wrappers over native malloc() and free(), doing some caching of its own. This, in turn, rendered OpenBSD's own prevention mechanisms (e.g. overwriting malloc()-ed areas before using them) useless. The language specification may not allow such behaviour, but that doesn't mean the implementation won't.

You're also underestimating a programmer's ability to shoot himself in the foot. Since I already mentioned OpenBSD and Heartbleed, here's a good example of a Heartbleed-like bug in Rust: http://www.tedunangst.com/flak/post/heartbleed-in-rust . The sad truth is that most vulnerabilities like this one don't stem from accidental mistakes that languages could have prevented; they stem from a fundamental misunderstanding of how otherwise safe constructs in their respective languages operate.

Granted, this isn't a buffer overflow, which, in a language that doesn't allow arbitrary writes, would be an incorrect construct and would barf at runtime, if not at compile time; but my remark about bugs above still stands (and I'm not talking out of my ass -- I've seen buggy code produced by an Ada compiler allow exactly this). Exploitation of buffer overflows can also be mitigated reasonably well with ASLR and similar hardening, and the increased complexity of the language runtime is, in and of itself, an increased attack surface.

Edit: just to be clear, I do think writing software in a language like Go or Rust would do away with the most trivial security issues (like blatant buffer overflows) -- and that is, in itself, a gain. However, those are also the kind of security issues that are typically resolved within months of the first release. Most of the crap that shows up five, ten, fifteen years after the first release is in perfectly innocent-looking code, which the compiler could infer to be a mistake only if it "knew" what the programmer actually wanted to achieve.

-----


My point is simply that every programmer will make mistakes when coding, so I want the most automated assistance possible to point out those mistakes. If a programmer has a pressing need and the persistence to work around those checks, that's fine; at least the surface area for those mistakes is then limited to a smaller amount of code.

-----


From comments I read when it was written, it's not clear to me that the author actually demonstrated the same behaviour as Heartbleed. I'm not the person to judge that, but for what it's worth, here is the top comment from /r/rust on the topic, so you can make up your own mind.

https://www.reddit.com/r/rust/comments/2uii0u/heartbleed_in_...

-----


That comment sort of illustrates my point:

> You should note that Rust does not allow unintialized value by design and thus it does prevent heartbleed from happening. But indeed no programming language will ever prevent logic bugs from happening.

Under OpenBSD, those values would not have been uninitialized, were it not for OpenSSL's silly malloc wrapper -- a contraption of the sort that, if they really wanted, they could probably implement on top of Rust as well. What is arguably a logic mistake compromised the protection of a runtime that, just like Rust, claimed that it would not allow uninitialized values, "by design".

Of course, idiomatic Rust code would not fall into that trap -- but then arguably neither would idiomatic C code. It's true that Rust also enforces some of the traits of its idioms (unlike C), but as soon as -- like the OpenSSL developers did in C, or like Unangst did in that trivial example -- you start making up your own, there's only so much the compiler can do.

At the end of the day, the only thing that is 100% effective is writing correct code. Better languages help, but it's naive to hope they'll put an end to bugs like these when they haven't put an end to many other trivial bugs that we've kept on making since the days of EDSAC and Z3.

-----


> Under OpenBSD, that values would not have been uninitialized, were it not for OpenSSL's silly malloc wrapper -- a contraption of the sort that, if they really wanted, they could probably implement on top of Rust as well. What is arguably a logic mistake compromised the protection of a runtime that, just like Rust, claimed that it would not allow uninitialized values, "by design".

I really disagree. Rust does not allow uninitialized values by design - end of story. If a piece of Rust code lets uninitialized values bleed through, then it is broken. The semantics of Rust demand this.
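Concretely, the compiler enforces this rather than relying on convention -- a tiny sketch that rustc rejects outright:

    fn main() {
        let x: u8;
        // error[E0381]: use of possibly uninitialized variable
        // (exact wording varies by rustc version) -- this never compiles.
        println!("{}", x);
    }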

(OpenSSL, on the other hand, only broke/overrode OpenBSD's malloc - they didn't break C.)

It is news to no one that you can break - break - Rust's semantics if you use anything that demands `unsafe`. That's why anyone who uses `unsafe` and intends to wrap that `unsafe` in a safe interface has to be very careful.

Complaining about Rust being unsafe - in the specific sense that the Rust devs use - by using the `unsafe` construct, is like complaining that Haskell is impure because you can use `unsafePerformIO` to `launchMissiles` from a non-IO context.

> Of course, idiomatic Rust code would not fall into that trap -- but then arguably neither would idiomatic C code.

It's not even a question of being idiomatic. If someone codes in safe (non-`unsafe`) Rust, then they should not fall into the trap that you describe. If they do, then someone who implemented something in an `unsafe` block messed up and broke Rust's semantics.

What if that same thing happened in C? Well, then it's just another bug.

---

I'd bet you'd be willing to take it to its next step, even if we assume that a language is 100% safe from X no matter what the programmer does - "what if the compiler implementation is broken?". And down the rabbit hole we go.

-----


> I really disagree. Rust does not allow uninitialized values by design - end of story. If a piece of Rust code let's uninitialized values bleed through, then it is broken. The semantics of Rust demands this. (OpenSSL on the other hand only broke/Overrode OpenBSD's malloc - they didn't break C.)

I'm not familiar enough with Rust (mostly on account of being more partial to Go...), so I will gladly stand corrected if I'm missing anything here.

If the OpenSSL developers did the same thing they did in C -- implement their own custom allocator over a pre-allocated memory region -- would anything in Rust prevent the same sequence of events? That is:

1. Program receives a packet and wants 100 bytes of memory for it.

2. It asks custom_allocator for a 100-byte chunk. custom_allocator gives it a fresh 100-byte chunk, which is correctly initialized because this is Rust.

3. Program is done with that chunk...

4. ...but custom_allocator is not. It marks the 100-byte chunk as free for its own reuse, but retains ownership and does not clear its contents.

5. Program receives a packet that claims to carry 100 bytes of payload, so it asks custom_allocator for a 100-byte chunk. custom_allocator hands back the same chunk as before, without asking the Rust runtime for another (initialized!) chunk. Program is free to roam around those 100 bytes, too.

I.e. the semantics of Rust do not allow for data to be uninitialized, but custom_allocator sidesteps that.
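For what it's worth, here's a rough sketch of that five-step sequence with a plain reused Vec standing in for custom_allocator (hypothetical names, safe Rust only -- the stale-data reuse is a logic bug rather than a memory-safety violation):

    // Hypothetical buffer "pool" standing in for custom_allocator: it hands
    // back recycled buffers without clearing them.
    struct BufferPool {
        free: Vec<Vec<u8>>,
    }

    impl BufferPool {
        fn new() -> BufferPool {
            BufferPool { free: Vec::new() }
        }
        fn get(&mut self, n: usize) -> Vec<u8> {
            match self.free.pop() {
                // Step 5: hand back a recycled chunk -- its old contents survive
                // (resize only zeroes bytes beyond the existing length).
                Some(mut buf) => {
                    buf.resize(n, 0);
                    buf
                }
                // Steps 1-2: a genuinely fresh, zeroed chunk.
                None => vec![0u8; n],
            }
        }
        fn put(&mut self, buf: Vec<u8>) {
            // Steps 3-4: "free" the chunk but keep it (and its data) around.
            self.free.push(buf);
        }
    }

    // A handler that trusts the claimed payload length, heartbeat-style.
    fn handle(pool: &mut BufferPool, payload: &[u8], claimed_len: usize) -> Vec<u8> {
        let mut buf = pool.get(claimed_len);
        buf[..payload.len()].copy_from_slice(payload); // only overwrites what arrived
        let reply = buf.clone();                       // echoes claimed_len bytes back
        pool.put(buf);
        reply
    }

    fn main() {
        let mut pool = BufferPool::new();
        handle(&mut pool, b"hunter2-secret", 14);        // request 1 leaves data behind
        let reply = handle(&mut pool, b"ping", 14);      // request 2 lies about its length
        println!("{}", String::from_utf8_lossy(&reply)); // "pinger2-secret" -- stale bytes leak
    }

This is essentially the same shape as the Unangst example linked upthread: every individual allocation is initialized, but the pooling layer quietly hands old contents back.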

-----


Rust doesn't have custom allocator support yet, so no, it's not currently possible to make this error ;)

-----


It doesn't have custom allocator support as in "you can't have one function allocate memory and pass it to another function to use", or as in "you can't replace the runtime's own malloc"? OpenSSL were doing the former, not the latter.

(Edit: I'm really really curious, not necessarily trying to prove a point. I deal with low-level code in safety-critical (think medical) stuff every day, and only lack of time is what makes me procrastinate that week when I'm finally going to learn Rust)

-----


Well, anything is possible because ... human ingenuity.

However, Rust currently statically links in jemalloc - even when building a dynamic shared library. There is no easy way around it.

(because someone might ask: rustc -C prefer-dynamic dynamically links everything except jemalloc)

Having said that, I hope jemalloc gets linked externally soon so my code doesn't have to pay the memory penalty in each of my memory-constrained threads.

-----


You can't say "this vector uses this allocator and this vector uses another one." If you throw away the standard library, you can implement malloc yourself, but then, you're building all of your own stuff on top of it, so you'd be in control of whatever in that case.

(We eventually plan on supporting this case, just haven't gotten there yet.)

-----


Yes, my point was that OpenSSL did not throw away the standard library! openssl_{malloc|free} were thin wrappers over the native malloc() and free(), except that they tried to be clever and not actually free() memory areas, so that they could be reused by openssl_malloc() without calling malloc() again. I.e. sometimes openssl_free(buf) would not call free(buf), but leave buf untouched and put it in a queue -- and then openssl_malloc() would take it out of there and give it to callers.

So basically openssl_malloc() wasn't always malloc()-ing -- it would sometimes return a previously malloc()-ed area that was never (actually) freed.

This rendered OpenBSD's hardened malloc() useless: you can configure OpenBSD to always initialize malloc()-ed areas. If OpenSSL had used malloc() instead of openssl_malloc(), even with the wrong (large and unchecked) size, the buffer would not have contained previous values, as it would have already been initialized to 0x0D. Instead, they'd return previously malloc()-ed -- and never actually free()-d -- buffers that still contained the old data. Since malloc() was not called for them, there was never a chance to re-initialize their contents.

This can trivially (if just as dumbly!) be implemented in Go. It's equally pointless (the reasons they did that were 100% historical), but possible. And -- assuming they'd have done away with the whole openssl_dothis() and openssl_dothat() crap -- heartbleed would have been trivially prevented in C by just sanely implementing malloc.

> (We eventually plan on supporting this case, just haven't gotten there yet.)

I'm really (and not maliciously!) curious about how this will be done :). You guys are doing some amazing work with Rust! Good luck!

-----


https://github.com/rust-lang/rfcs/issues/538 is where we're tracking the discussion, basically :) And thanks!

-----


> Please stop perpetuating the myth that security is produced by a programming language.

Security comes from taking care with what you're doing; if a language can eliminate a whole class of bugs then you might as well use it. That's why people keep arguing that some languages can eliminate some kinds of bugs -- but that absolutely doesn't make programs implemented in those languages bug-free.

Said differently: more secure (relatively) doesn't mean secure (in absolute).

> We only have to look at all the patches for java to see that it hasn't been secure.

We only have to look at all the patches for Java to see how much it is analyzed; it doesn't mean Java is relatively more or less secure than any other language.

I've seen no patches for this Nim interpreter for brainfuck [0]; does that mean it's more secure than Java? Absolutely not.

You can draw a parallel with crypto schemes: anybody can come up with some cipher, but nobody will analyze it unless there is something to gain (and that includes fun). When you've reached the state where you're under the scrutiny of every cryptanalyst and their students, and potential vulnerabilities are found, does that make it a weak scheme? We don't know. Only a real analysis of the vulnerabilities can tell us.

[0] https://github.com/def-/nim-brainfuck

-----


Is 'security' really something that requires faith or belief? It seems that "more secure" (not secure in an absolute sense) can be made tangible, and I think that programming languages can give you "more security", in the sense that they close off certain possibilities or make them much harder to exploit/mess up.

Not that I know much about security, but all you're doing right now is fending off the claim of "more secure" by stating that Java is not secure in an absolute sense - no one has claimed absolute security, only relatively more.

-----


I love a lot of the outreach that Matasano does, but I strongly disagree with that article. Most of the points they make aren't fundamental problems of websites - they're just issues for poorly implemented websites. Some of it is out of date or plain wrong - for example, we've had a secure RNG (window.crypto.getRandomValues) on the web since Chrome 11.

Most of the rest of the complaints you could also reasonably level against installed apps with update mechanisms. The article compares web apps (and their update mechanism) with desktop apps (without their update mechanism). Then it points out flaws in update mechanisms in general (eg they can send you malicious code) and says that's why the web is flawed. Yeah, nice try.

The fundamental question is: how do you trust the code that I give you? No matter what platform you're on, at some level you need to trust me. Let's say I'm writing a 'secure' todo list app. You have to trust that I'm not going to forward your todo list entries to any hooded figures, and that I'm not going to quietly add entries from my sponsors to your shopping list. Also, both on the web and locally, apps can open mostly-arbitrary network connections and send any data anywhere they like.

It's as simple as that. On the web, I send you code, you run my code, my code does something useful and might betray you. In native apps, I send you code (maybe via an app store or something). You run my code. My code does something useful but it might betray you.

As far as I can tell there's only two fundamental weaknesses of web apps:

1. The JS isn't signed.

2. The JS gets sent on every page load.

The combination of the two makes it much more convenient to do spear-phishing-type attacks. But that said, any threat that looks like "but on the web you might send malicious code to user X" is also true of other app update mechanisms. Even on the iOS App Store, nothing is stopping me from writing code which says `if (hash(username+xyz) == abc123) { send data to NSA(); }`. I can't think of any binary-downloading systems (app stores, apt-get, etc) which would discover that code.

And remember - desktop app code is potentially much more dangerous. Desktop apps can take photos with your webcam, record audio, record keystrokes and access every file in your home directory.

It's definitely true that most web apps are poorly implemented - they dynamically load 3rd party JS and they don't use SSL / HSTS. It's also embarrassing how many desktop apps have simple buffer overflow exploits. But the solution isn't to go back to desktop apps - the solution is to push for better best practices on the web.

In my opinion, the biggest security problem with the web is that most web apps store all your data in plaintext on someone else's computers. This is a problem that we need to start addressing systematically via projects like CryptDB. I.e., we need more serious security work done on the web, not less.

-----


> But that said, any threat that looks like "but on the web you might send malicious code to user X" is also true of other app update mechanisms.

Traditionally, app updates don't happen without the user's knowledge. I'm not entirely certain what the case is with the app stores. Point is, you can easily choose to stick to a specific version with, at least, most desktop applications. You only have to verify that one version.

-----


Practically speaking, what verification are you talking about? Who do you imagine is verifying the security of your application binaries?

I'm confused by the argument that slower app update mechanisms lead to a more secure platform. The reverse is clearly true sometimes - when a vulnerability is discovered, updating an app quickly is important. When would low latency deploys lead to less secure code?

-----


> Practically speaking, what verification are you talking about? Who do you imagine is verifying the security of your application binaries?

I can build it from source where any number of people may have verified it, or otherwise, I can trust the creator/packager/distributor now but not have to continue trusting them every time I access the app.

> When would low latency deploys lead to less secure code?

When any number of entities are either taken over by, decide to collaborate with, or become, an attacker at some point after you first download the application.

-----
