Hacker News | jleader's comments

If losing one executive at a 200+ employee company "ruins the company's culture and causes [that company's] downfall", then that company and that executive have been doing it wrong!

Regardless of the facts of the case and whatever damage it may have done to GitHub's culture, if this incident and this individual's departure are enough to cause GitHub's ruin and downfall, then that's evidence of an absurdly fragile company and culture.


Generalizations about whether open-source or closed-source developers have more resources, or are more professional, or whatever, are silly. The two groups are very large, with high variance along many dimensions and a lot of overlap. There are open-source projects with one or two developers, and open-source projects that are the primary focus of $100-million, thousand-employee companies. There are also closed-source commercial projects developed by teams of hundreds, and closed-source commercial projects developed by a solo programmer when he's not busy answering customers' phone calls. Lots of developers work on both open- and closed-source projects at one time or another.

It's important to discuss what changes we can (and should) make to make problems like Heartbleed less likely in the future, but wildly waving competing generalizations in the air doesn't help anything.


Spearchucker 11 days ago | link

What you say is true. The argument is still useful, though. There are those who follow one at the expense of the other, which is no more useful than saying the only way to develop software is agile.


"What's its temperature coefficient like?"

That reminds me of a common practical joke from EE lab in college (~30 years ago): take the "right" value of carbon resistor (probably 1/4 watt; I forget how resistance mapped to time delay), plug the leads into an ordinary 120V power outlet, and walk away.

As the current flows through it, it'll warm up a little. Because carbon resistors have lower resistance at higher temperatures, as the resistor warms up, it'll start conducting more current, which will make it warmer, until a few seconds or minutes later (depending on resistance value, ambient temperature, etc.) ... "bang!".


I'm not sure what you mean by "a private GitHub enterprise account", but GitHub has a product called "GitHub Enterprise", which is essentially GitHub in a VM that you can install in your own data center. We have a lot fewer than 60 developers, and we have a GitHub Enterprise installation, partly out of concern about issues like this (and also maybe a little IP protection paranoia).


Was anyone else amused by the reference to "the Al gore contingency in the room"? For non-native English speakers, I'll note that there's a noun "contingency" with a meaning related to the adjective "contingent" (meaning roughly "conditional"), but not directly related to the noun "contingent" (meaning roughly "faction").


nirnira 53 days ago | link

Hah I didn't see that, nice catch!


I was going to complain about the 4 errors in 4 lines of code, until I realized that HN converts asterisk, text, asterisk into italicized text.

I assume you wanted your code to look like this:

    void *memset(void *_p, unsigned v, unsigned count) {
        unsigned char *p = _p;

    -   while(count-- > 0) *p++ = 0;
    +   while(count-- > 0) *p++ = v;

        return _p;
    }
In which case, I'm not sure what you're complaining about. Is it that memset here has a non-standard signature (v is supposed to be int, not unsigned, and count should be size_t)? Or that the "before" code ignored v? Or the un-braced loop body on the same line? Or is there some other problem I'm overlooking (which is possible; it's been about a decade since C was my main day-to-day language)?

[Edited to try to coax better text formatting from HN]


chetanahuja 53 days ago | link

Thanks for the formatting. I fixed my comment too. As for the bad legibility of the code as written (not due to formatting), where do I start...

Oh I know, the writer obviously wants very very badly to write this while loop as a one-liner. That's the start of all the other badness. As a reader of this code now, I have to keep every possible combination of post-increment side effects and precedence rules in mind before this code makes sense.

And yes, I know code like this is quoted in books as an example of how cleverly you can write "powerful" one-liners. As far as I'm concerned, it's just a dumb show of alpha-coder pride, and it's no surprise to me that a serious bug was buried here.


If a bank app wrote their own security code, and didn't use the standard platform library, then when a bug was found, they'd get piled on here for rolling their own and not going with the much more widely tested platform implementation.

Should a well-put-together banking application re-implement the entire OS? After all, there are lots of places where security bugs can hide throughout the OS.


Me too, "403 Forbidden", which seems like a strange response. I could understand the site deciding to take it down and showing a "404 Not Found", or getting overloaded and showing something like a "500 Internal Server Error", but a 403 implies that they want it seen by some people and not others.

(Edited to add: it looks like the whole site is 403'ing, which suggests it's probably just a configuration error)


jleader 66 days ago | link | parent | on: The C10K problem

If you scroll down to the bottom, there's a changelog describing some changes applied during 2003-2011, and then "Copyright 1999-2014", which might give a few clues about how old the document is.


Adding "security through obscurity" is only a net win over "published security scheme" if your engineers are substantially better than any engineers who might comment on your published scheme. Even if you've hired world-class security engineers, there could still be some flaw they've overlooked, that an outside commentator might notice.

Note that the security of a whole system is harder to get right than the security of any one component; keeping it obscure just makes it much less likely that a white hat will notice the flaw and notify you. A simple example of this is the freshman CS majors' perennial idea that combining multiple PRNGs will yield a "more random" algorithm (it typically makes the result easier to predict). There are plenty of cases of a secure algorithm being used to build a system that ends up insecure because of some obscure flaw that nobody noticed initially.


hga 67 days ago | link

"Adding "security through obscurity" is only a net win over "published security scheme" if your engineers are substantially better than any engineers who might comment on your published scheme."

I think that overstates the case. Your engineers don't have to be better, just good enough to know what's correct and safe, or to do research until they know.

My approach is to just add things that are clearly correct to published and studied systems that are thought to be correct. In this case, we're talking about applying multiple one-way hashing functions, or multiple runs of a single one; I'd actually do some research before trying the latter, but it strikes me as safe, and the former is by definition safe, right? Whereas it would never even occur to me to combine PRNGs ... or not to use a hardware source of entropy in the first place (they aren't that expensive in the scheme of things).


