So either my high-school classmate was extraordinarily lucky to get admitted in spite of the fact that there was nothing spectacular about his academics/extra-curricular activities, or he was admitted because his father was an alumnus. Call me a cynic, but I'm still leaning towards the latter.
In reply to you and yajoe, obviously it doesn't hurt to be the child of an alum. But first MIT has to decide if you can do the work, e.g. the calculus and calculus-based physics all students must take.
That sets a pretty high bar. The last time I checked, which was before the Great Recession changed the game and increased applicants, MIT was getting 13,000 applications a year despite the extreme amount of self-selection, of which they judged 3,000 could do the work. From that they construct a class of around 1,100 students.
It's possible that "security by obscurity" has a different meaning in the programming world. When talking about computer or network security, it is used to describe any mechanism that attempts to defend a system by hiding something. One of the classic examples of security by obscurity is when a person disables the broadcasting of the SSID on a wireless router.
>That is again a definition which would include ssh keys, which is to say a useless definition.
For those of us who work in security, it's taken for granted that when we are talking about security through obscurity, we aren't talking about passwords and cryptographic keys. I should have been clearer.
>"Security through obscurity is generally a pejorative term referring to a principle in security engineering, which attempts to use secrecy of design or implementation to provide security."
That's the exact same thing I said, worded in a different way.
>I'm not sure that's an example at all, since hidden SSIDs can be readily sniffed from a publicly documented protocol.
And that's the entire point. Security through obscurity alone is not a good thing. Starting from the second sentence, and continuing through to the second and third paragraphs, the article elaborates on this.
>A system relying on security through obscurity may have theoretical or actual security vulnerabilities, but its owners or designers believe that if the flaws are not known, then attackers will be unlikely to find them.
>Security through obscurity has never achieved engineering acceptance as an approach to securing a system, as it contradicts the principle of simplicity. The United States National Institute of Standards and Technology (NIST) specifically recommends against security through obscurity in more than one document. Quoting from one, "System security should not depend on the secrecy of the implementation or its components."
>It is analogous to a homeowner leaving the rear door open, because it cannot be seen by a would-be burglar.
It worries me a tiny bit that you work in security, and yet your thinking seems this muddled here. I'm hoping you haven't had your coffee or it's a failure of communication or something...
"its taken for granted that when we are talking about security through obscurity, we aren't talking about passwords and cryptographic keys."
I agree that passwords and cryptographic keys are not what we're talking about - the question is why: whether, confronted with a proposed setup that relies on the secrecy of X for its security, X should be seen as analogous to keys (and thus not a case of "security through obscurity"), or as more analogous to a protocol or algorithm (and thus a case of "security through obscurity"). There are, of course, other concerns as well.
"That's the exact same thing that I said worded in a different way."
SSID is neither design nor implementation of a wireless network.
"Security through obscurity alone is not a good thing."
I wholeheartedly agree that security through obscurity is not a good thing, but not everything that is bad is "security through obscurity". "Don't broadcast secrets from radios in cleartext" is a rather simpler (and stronger!) rule violated here if you are trying to use SSID as a secret.
>SSID is neither design nor implementation of a wireless network.
As it relates to the physical implementation of a network, part of the implementation includes configuring the devices. Security through obscurity can be something as simple as manually assigning a non-standard port to a service (such as HTTP). The example of disabling the broadcast of an SSID is also a valid example of security through obscurity.
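The non-standard port case can be demonstrated concretely. Below is a minimal Python sketch (the HTTP handler and the scan range are invented for illustration): a service is moved to an arbitrary "obscure" port, and a routine TCP sweep finds it anyway.

```python
import socket
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind the service to an arbitrary "obscure" port (0 = let the OS pick one).
server = HTTPServer(("127.0.0.1", 0), Handler)
PORT = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def scan(host, ports):
    """Return the subset of `ports` that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket() as s:
            s.settimeout(0.2)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

# A routine sweep of nearby ports finds the "hidden" service anyway.
open_ports = scan("127.0.0.1", range(PORT - 25, PORT + 25))
assert PORT in open_ports
server.shutdown()
```

The point the sketch makes is the same as the SSID case: relocating or hiding the service changes nothing for an attacker with standard tooling.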
You seem to have an issue with me using the SSID as an example because it's so easy to defeat that it would be a waste of time to even try. Like I said before, that's the entire point. It's obvious to us now, in 2014, but for years people were told by 'experts' to disable it as a security measure. Some individuals actually used to attempt to secure their networks by hiding the SSID instead of using a password. The prevalence of this kind of negligence is one of the key reasons that disabling the broadcast became the prototypical example used to explain why security through obscurity doesn't work.
In short, it's not an example I made up on my own, formed from a lack of understanding of security; it's an example that's used in textbooks and security-specific certification study guides. It's an example that's taught in introductory IT and IT security classes. I'll try to find some more specific citations to put your mind at ease; it's been quite a while since I've taken an introductory course on the subject.
>not everything that is bad is "security through obscurity".
And I agree with this, but my example most certainly is a valid case of security through obscurity.
Update: Better Sources
From Hacking Exposed: Wireless, 2nd Edition Pg. 80
From the first paragraph on security through obscurity:
>Many wireless networks today operate in hidden or nonbroadcasting mode. These networks don’t include their SSID (network name) in beacon packets, and they don’t respond to broadcast probe requests. People who configure their networks like this think of their SSID as a sort of secret. People who do this might also be prone to enabling MAC address filtering on the AP.
From Enterprise Security: A Data-Centric Approach, Chapter 7
>In order to provide security through obscurity, methods such as hidden SSIDs and MAC address filtering have been employed to keep the wireless network invisible to eavesdroppers and more difficult to connect to for an unknown host.
From Certified Wireless Design Professional Official Study Guide, Section titled SSID hiding.
>In order to provide security through obscurity, methods such as hidden SSIDs and MAC address filtering have been employed to keep the wireless network invisible to eavesdroppers and more difficult to connect to for an unknown host.
These are just a few examples out of dozens, if not hundreds.
I'd sincerely appreciate it if you would educate yourself before implying that other people are incompetent in the future.
"As it relates to the physical implementation of a network, part of the implementation includes configuring the devices."
Arguable, but then what makes keys and passwords "not a part of the implementation"?
"You seem to have an issue with me using the SSID as an example because its so easy to defeat that it would be a waste of time to even try."
That is not my objection. I object to SSID as example of "security through obscurity" because I think it doesn't get at the core of what's wrong with security through obscurity and it has other far more glaring issues - so someone can fully grok that keeping SSID secret is not worthwhile and see "why" but without that "why" generalizing to (say) the company saying "We use our own proprietary cryptosystem. We can't show anyone the algorithm; it's more secure that way!"
"In short, its not an example I made up on my own formed from a lack of understanding security, its an example that's used in textbooks and security-specific certification study guides."
I wasn't questioning your understanding because you picked the example, but because you seem to persist in conflating things that seem clearly (and meaningfully) distinct. If textbooks and security-specific certification study guides use SSID as a canonical example of security through obscurity, I'm more worried, not less.
>Arguable, but then what makes keys and passwords "not a part of the implementation"?
They are part of the implementation of a network, but the issues involving passwords and cryptographic keys are so complex that they generally aren't lumped into a conversation about security through obscurity. I made the statement in response to your claim that someone obtaining your SSH key would render SSH an example of security through obscurity, if my definition and the author's definition were used. With that line of thinking, the entire cryptography industry is merely security through obscurity and we should just give up. In practice, cryptographic technologies aren't considered to be security through obscurity unless the technologies are provably broken to a point where only an idiot would rely on them (such as WEP or LANMAN). As a caveat to this, when a cryptographic algorithm is hidden from the public in order to remain unbroken, it does become an example of security through obscurity. I just object to the idea that this is the only meaning to the phrase.
> I object to SSID as example of "security through obscurity" because I think it doesn't get at the core of what's wrong with security through obscurity and it has other far more glaring issues - so someone can fully grok that keeping SSID secret is not worthwhile and see "why" but without that "why" generalizing to (say) the company saying "We use our own proprietary cryptosystem. We can't show anyone the algorithm; it's more secure that way!"
The issue we began with was that you stated that it was not an example of security through obscurity at all. That was an incorrect statement and it still is. If you now want to argue that it's not the best example, I would agree with you.
As an introductory example it does a good job of explaining the concept to a layperson or even a technical person that's still a bit inexperienced. Could you find a better example to give to a room full of newbies? Probably, but that wasn't the issue we disagreed on.
It's also important to note that in a room full of security experts, you would be able to delve into the more complex aspects of the issue without losing the audience, so of course the example would be different.
"The issue we began with was that you stated that it was not an example of security through obscurity at all."
If you read up, that is not where this began. You initially responded to my objection to the phrase (from the article): "If the only thing protecting your security is a lack of others knowing the secret, then you have no practical security."
I still fully and full-throatedly stand by that objection - particularly since in a security context "secret" is often used to mean "key or password or...".
I did eventually state that I wasn't sure hidden SSIDs were actually an example of security through obscurity - I'm still not sure they are. Maybe I'm wrong about that. It's still appearing to me that you aren't really understanding my objections, though.
"As an introductory example it does a good job of explaining the concept to a layperson or even a technical person that's still a bit inexperienced."
I still say SSID is - at best - a piss poor choice of something that is barely an example of the phenomenon. It does not do a good job of explaining the concept to anyone - newbie or otherwise - because it is poor security for more obvious reasons that have nothing to do with security through obscurity, and is therefore likely to lead to confusion.
You're right, the first thing we disagreed on was your claim that the article's pretty much industry-standard definition of security through obscurity was incorrect because, in your opinion, it's too ambiguous. I thought we had already resolved that, considering the article you linked said pretty much the same thing that I did. It was your second post where you stated that the SSID isn't an example of security through obscurity.
>I still say SSID is - at best - a piss poor choice of something that is barely an example of the phenomenon.
Considering what you listed as your definition of security through obscurity, it isn't surprising that you think the example is piss-poor. Your personal definition is itself a very specific type of security through obscurity. Therefore, anything that deviates from that specific example is going to seem wrong to you.
... I can't resist. Since when does "secret" in a security context exclude things like keys and passwords? Usually it means things like keys and passwords. You can't say "it's implicit to 'security through obscurity'" because the article was defining security through obscurity. If the audience is expected to be sufficiently unfamiliar to need that definition, they aren't going to know that there's this mystical blessing of keys as "not a part of that". Seriously, where do you work anyway?
First, a failure to communicate doesn't have to mean that either of us is dumb. It just happens sometimes. Second, the thought that you might be trolling has crossed my mind as well, so it now seems unlikely that either of us is. Third, there really isn't any need to repeatedly resort to insults. I have no idea what you do for a living, but I wouldn't be so ready to dismiss you as incompetent as you seem to be with others.
Let me try to clarify some of the things, because I often forget to add details that others might find important.
Security by obscurity is generally used to describe very bad attempts to secure something by hiding either the item intended to be secured, or something else that is used to secure it.
To clarify, a password CAN potentially be secured so inadequately that it represents an example of security by obscurity. For example, if a person tapes a copy of his password to the bottom of his keyboard.
Unfortunately, many of the best security mechanisms we currently have still use passwords, or RSA tokens, or something else that we have in our possession. In some cases, we don't yet have (or haven't popularized) anything better, so rather than lump some really useful, yet imperfect, technologies into the same "security through obscurity" category as the jackass who tapes his password to his keyboard, we exclude properly secured passwords/tokens from the conversation.
To clarify even more, if a SWAT team storms a secure facility, murders the guards, and spends several hours cutting open a hardened safe that contains cryptographic keying material, most security experts aren't going to look at the situation and say "Well, what else do you expect when you rely on security through obscurity? I knew someone would find that password." The situations aren't comparable.
So when I say that passwords aren't included in security through obscurity, what I mean is that the entire concept of having a password should not be considered an example of security through obscurity, because the phrase is generally reserved for practices that are considered very poor. In the distant future, if we find a way to eliminate passwords completely, to the point where only an idiot would depend on one, then things would change and the entire concept would be considered security through obscurity. Similar things have happened in the past. For example, WEP used to be considered secure, now only a fool would use it.
When I say that cryptographic algorithms aren't included, once again what I mean is that the concept of obfuscating your data through the use of a cryptographic algorithm itself is not supposed to be considered security through obscurity, because although most algorithms will eventually be broken, right now some of them are considered to be extremely safe. We aren't ready to abandon our most secure cryptographic protocols, and so the concept of cryptography itself doesn't qualify as a very poor security practice.
There are certain situations where a cryptographic system can serve as an example, such as the one you gave where the algorithm is hidden in order to 'keep it secure.' Another would be when advances in computing power allow a previously secure protocol to be broken through brute force. When it becomes trivially easy to do so, use of the specific technology would be accurately described as a form of security through obscurity.
So in practice, when two knowledgeable people are having an academic conversation about the theory of security through obscurity, we take for granted that passwords should be properly secured. We also take for granted that we shouldn't be using broken or unproven cryptographic protocols. If we had to stop and explain the semantics behind security through obscurity to other security professionals every time, it would be like a couple of senior software engineers having to remind each other what a global variable is every single time they want to talk about a complex engineering issue.
I hope I've done a better job of explaining the statements I made; I can see how they may have been overly vague, but I was focusing on your claim that the SSID broadcasts aren't a valid example of security through obscurity.
"First, a failure to communicate doesn't have to mean that either of us is dumb. It just happens sometimes."
Agreed. I'd meant my other comment to better indicate that those were potentially transient attributes. Also, smart people are perfectly capable of lacking understanding about something in particular.
"Second, the thought that you might be trolling has crossed my mind as well, so it now seems unlikely that either of us are."
That doesn't really follow, but okay.
"Third, there really isn't any need to repeatedly resort to insults. I have no idea what you do for a living but I wouldn't be so ready to dismiss you as incompetent as you seem to be with others."
I apologize where I was overly rude or confrontational.
"Security by obscurity is generally used [...]. I hope I've done a better job of explaining the statements I made, I can see how they may have been overly vague, but I was focusing on your claim that the SSID broadcasts aren't a valid example of security through obscurity."
It still seems to me that you're more or less using it to mean "weak security", as opposed to a principle which can be used to guide what to fix. It's quite possible that the meaning has weakened/broadened since I first learned about it (plausibly here, though I feel like it was earlier), but if so I think that's unfortunate - I think the tie to Kerckhoffs' Principle is important.
">Seriously, where do you work anyway?
This is extremely unnecessary and unproductive."
You brought your profession into the argument. I'm not sure it's extremely unnecessary to establish it, or for the argument to reflect on your profession. Neither am I entirely sure that it was actually appropriate, though.
Yeah, I'm giving up on this until someone else weighs in. At least one of us is lacking in at least one of understanding of the issues, English language reading comprehension, and ability to write clearly, and I am running out of the patience to figure out which. Presuming, I hope correctly, that you are not simply trolling.
>Americans don't understand buying lunch for your superiors.
More wealthy/higher ranking/socially powerful/popular/lucky != superior.
I occasionally buy lunch for my friends and co-workers out of mutual respect and a genuine desire to show them that the relationships we share are appreciated. If one of them happens to be my supervisor, so be it. But there has to be a mutually respectful relationship between us.
I'd also have to seriously question the character of anyone who, being aware of the fact that they make multiple times your income, would still allow you to buy them lunch (unless it was a trivial amount).
Richer obviously != better. But being richer === being richer. He's saying if you want more money, hang out with richer people, and you'll pick up some tips. And a good way to get time with someone is to buy them lunch.
You're right that the entire theme of the article sort-of implies richer is better, but a plain reading of it just says: here's how to own a car and a house. It makes no promises about being a nice person.
I wasn't criticizing the entire article. I was disagreeing with a single person's claim that disagreeing with a cultural norm is the same thing as failing to understand it.
Most of the advice in the original article was pretty good. I just happen to prefer meeting people in a more natural manner, and I wouldn't feel right spending my free time trying to manipulate my way into a rich person's life in the hope that I might somehow benefit from it in the long run.
>I anticipate that I will be able to remove the features which I didn't think were needed.
This is a really bad idea. While you might think it's impossible, some of your users are going to absolutely love the useless functionality that you added to appease Apple. When you remove it, you are going to receive an influx of 1-star reviews.
I think you can make a moderately safe choice if you have the usage data. I.e., if you see only 0.5% of active users use some functionality, you can remove it. It can help the rest of the users (make the app simpler) and you as well (smaller code base). It also adds the possibility of adding another feature without cluttering the app.
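That kind of check is a few lines once you have the data. A sketch in Python, where the event log, feature names, and the 0.5% cutoff are all made-up assumptions:

```python
# Hypothetical analytics log: one record per (user, feature) interaction.
events = [
    ("u1", "share"), ("u2", "share"), ("u3", "share"),
    ("u4", "share"), ("u1", "export"),
]

def feature_usage(events):
    """Map each feature to the fraction of active users who used it."""
    active = {user for user, _ in events}
    by_feature = {}
    for user, feature in events:
        by_feature.setdefault(feature, set()).add(user)
    return {f: len(users) / len(active) for f, users in by_feature.items()}

usage = feature_usage(events)
THRESHOLD = 0.005  # the 0.5%-of-active-users rule of thumb from above
removal_candidates = [f for f, frac in usage.items() if frac < THRESHOLD]
```

Note it measures distinct users rather than raw event counts, so one power user hammering a feature doesn't inflate its apparent reach.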
Would you refund those people automatically after removing the feature they use? It's no longer the same value proposition as before. One small feature can make the whole app no longer worth the original $1.99 the user paid.
Can we get rid of the absurd disgust over the huge price of 99 cents people pay for apps? If he thinks the app is better being simpler, let him make his app better. It doesn't matter if he "offers" a refund or not. They can ask Apple for one, and Apple will give it to them. Don't act like the guy is a jerk for removing a feature he deems unnecessary because someone might be upset over the 12 cents of value lost when it very well could improve the experience of the other 99.5% of his users.
I'm sorry, but for 99 cents you don't get complete control over my time and decision-making process. Feel free to make suggestions, but for the love of God don't "tell me" that "I need" to make feature X for you. I particularly enjoy 1-3 star reviews that say it's great, but that it needs X before they'll improve the review. As if they would actually come back and change it when it gets X. I promise they don't.
No one is complaining about 99¢, but the principle stands. One small feature that no one except me uses, if removed, makes the product/service unusable for me. I could have paid 99€, I could have paid 99$ or 999%, it doesn't matter for the topic.
If you remove that one feature out from under me, the app is worth nothing to me. I would be OK using the old version of the application that still has that feature, but in the case of a service or phone app where I simply can't use the old version or easily roll back to it, I would ask for a refund, regardless of the original price.
You could set the release date of the app to something like 2 weeks in the future and have some sort of webservice that you use to enable or disable app features. You could disable the bogus features after the app has passed Apple's review, but before the app is visible to end users.
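A minimal client-side sketch of that remote-flag idea (the URL, flag name, and JSON shape are all hypothetical; a real app would use its platform's networking stack), where the app falls back to its shipped defaults if the flag service is unreachable:

```python
import json
from urllib.request import urlopen

# Hypothetical flag endpoint -- substitute your own config service.
FLAGS_URL = "https://example.com/myapp/flags.json"

# Flags shipped with the binary, used when the service can't be reached.
DEFAULT_FLAGS = {"review_placeholder_feature": True}

def fetch_flags(url=FLAGS_URL, timeout=2):
    """Fetch remote feature flags, falling back to the shipped defaults."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except (OSError, ValueError):  # network error, timeout, or bad JSON
        return dict(DEFAULT_FLAGS)

flags = fetch_flags()
if flags.get("review_placeholder_feature", True):
    pass  # render the feature that was added to satisfy review
```

Defaulting the flag to "on" matters here: if the service is down, users see the reviewed behavior rather than a silently missing feature.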
Legally so, but not rightly so. Apple has arbitrary and unreasonable rules. They are not in a morally justified position when they enforce those rules, even though they are legally able to.
Now, in this case, it is obvious what the right move is for the developer: ship the app on Android or Windows Phone instead of iPhone. If Apple refuses entry into their walled garden, the developers should take their app and go somewhere else. If enough apps do this and become popular, Apple will change their rules.
I think it's still rightly so on Apple's part: Apple can set the rules however they like because the App Store is their playground. If you want to play there you have to abide by their rules - even if those rules are contradictory or arbitrary.
I do agree that developers should take their apps to other platforms (most notably Android). I don't expect Apple will change their rules, but customers may switch to other devices. I've personally switched from iPhone to Nexus 5 because some of the apps & features I wanted were blocked by Apple.
Apple's rules aren't arbitrary, they are built around a fairly clear set of aims about which Apple have been fairly public. You may disagree with those aims but that doesn't mean the rules are unreasonable or arbitrary.
I'd also say that as an iOS user and an Apple customer, one of the things I like about the AppStore is that there is a degree of curation, that they do have rules. I don't agree with all of those rules but over time the rules have improved and, on balance, I personally like the end result more than the alternative.
What is more arguably unreasonable is that the AppStore is the only means of loading apps onto your phone without a developer license, but I don't think changing the AppStore is the right solution.
Personally I'd argue that sideloading should be possible (though it would need to be enabled somewhere down in the guts of the settings, with warnings and all), but I wouldn't change the AppStore, which is a service with a specific aim that it meets pretty well.
Because it fits. Do you think that Wal-Mart doesn't have rules for vendors?
Perhaps Apple is more mercurial and arbitrary. But the institutional arrogance is the same.
As a manufacturer, Wal-Mart is your best friend and worst enemy. They pay you promptly and order lots of stuff. But they demand steep discounts and punish you harshly if you fail to meet commitments, and you must be able to rapidly ramp up your supply chain when their demand grows.
It functions as such. Developers make a product, they convince Apple to stock that product, and Apple takes a cut when they sell it. You can argue that it shouldn't be like that, but you can't really argue that it ISN'T like that.
The only thing you can really argue is that they should allow other app stores to exist that aren't under their control. And frankly, they do exist on jailbroken phones and if you want that you can jailbreak your phone, or get an Android.
I found the hitbox so frustrating that I took an AirPlay video of several games, and was surprised when the instant replay showed a 1-pixel collision between the bird and the pipe when I thought I was clear.
The conspiracist in me wanted to believe the pipes had additional gravity or something that caused me to hit, but practically speaking I wonder if we're simply used to games making hitboxes smaller than the avatar. I certainly noticed that in Jetpack Joyride.
> I wonder if we're simply used to games making hitboxes smaller than the avatar.
Yes, it's an often-used trick to enhance the feel of gameplay. Generally you want to make the hitboxes of the player and of "bad" things smaller, and the hitboxes of "good" things (items, etc.) larger.
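A toy sketch of the trick in Python (the box sizes and the 70% shrink factor are invented for illustration): collision is tested against a hitbox shrunk around the sprite's center rather than against the full sprite.

```python
def rect_overlap(ax, ay, aw, ah, bx, by, bw, bh):
    """Axis-aligned bounding-box intersection test."""
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def shrink(x, y, w, h, factor):
    """Shrink a rect around its center; factor=0.7 -> 70% of original size."""
    nw, nh = w * factor, h * factor
    return x + (w - nw) / 2, y + (h - nh) / 2, nw, nh

# Sprites that just barely touch: the full-size boxes collide...
bird = (0, 0, 10, 10)
pipe = (9, 0, 10, 10)
assert rect_overlap(*bird, *pipe)

# ...but with the player's hitbox shrunk to 70%, the grazing pass is forgiven.
assert not rect_overlap(*shrink(*bird, 0.7), *pipe)
```

The player only ever sees the sprites, so near-misses that look clear actually are clear, which is exactly the "it behaves as I intended" feel being described.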
This is all (just one) part of the philosophy that the game should behave as the player intends to control it, which is not always equivalent to the literal interpretation of how the player controls it.
Another example is jumping in platform games: if you walk off the edge of a platform, many games will give you a few frames of leeway in which you can still jump, even if the character is actually in mid-air. (Alternatively, a game can make the platforms' hitboxes slightly larger than they appear, but in my experience the leeway approach makes for smoother gameplay.)
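That leeway (often called "coyote time") can be sketched like this in Python; the 6-frame window is an arbitrary choice, and real games tune it by feel:

```python
COYOTE_FRAMES = 6  # leeway window after walking off a platform (tuned by feel)

class Player:
    def __init__(self):
        self.frames_since_grounded = 0

    def update(self, on_ground):
        # Reset the counter while grounded; count frames spent in the air.
        if on_ground:
            self.frames_since_grounded = 0
        else:
            self.frames_since_grounded += 1

    def can_jump(self):
        # Jump is allowed on the ground OR shortly after leaving the edge.
        return self.frames_since_grounded <= COYOTE_FRAMES

p = Player()
p.update(on_ground=True)
for _ in range(4):              # walked off the edge; 4 frames airborne
    p.update(on_ground=False)
assert p.can_jump()             # still inside the leeway window
for _ in range(10):             # well past the window now
    p.update(on_ground=False)
assert not p.can_jump()
```

The effect is that a jump pressed a split second "too late" still registers, matching what the player intended rather than the literal physics.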
All of these tricks basically make a game easier to play, but in a way that feels very satisfying to the player. The idea being, you can always make up for it by making the levels harder, the enemies faster, etc. This shifts the balance from difficulty-through-frustration to challenge.
The fact that Flappy Bird so obviously subverts this philosophy is, I think, part of its wtf-intrigue. Whether the author of the game did it on purpose or not is another question.
I'm calling you on this one. It might not be obvious to us right now, but new game genres will continue to sprout up. Touch (including pinch, swipe, etc.) as an input mechanism is new enough that many genres have only come into being in the past few years or so. Games like Osmos and Kosmo Spin are pretty much unique; the latter I just can't imagine working on anything other than touch. People have surely been saying that all genres have been covered in every medium for the last 60 years, yet new ideas keep getting created.
I can remember playing what is essentially the same game on an Acorn computer in the '80s; pretty sure it was a helicopter. All this talk is just a witch hunt by jealous people, ignoring how rare original ideas actually are.
>one of our Bad Hires failed on three of the following items which are usually a make or break for me:
Sounds pretty serious. Let's find out what it is.
>They did not include a personalized cover letter with their resume when applying.
Oh my. No personalized cover letter? The bastard should be shot, or at least unemployed for the rest of his life. Or maybe instead the author should think for a moment about why his number one priority (it's the first thing he listed) when it comes to evaluating candidates has fuck all to do with anything that an actual adult might care about.