The rant about "Hacking is Cool" is really dumb. Hacking is cool because it encourages thinking adversarially and learning a lot about how the target system works. You learn things and you get better at security. The author seems to think this is purely talking about learning about known vulns that will be obsolete, but it's also about discovering vulns and developing a natural defensive attitude for your own software. This actually comes off as someone who's bitter about other people having skills the industry considers "cool."
None of this aged well. Some context worth having here: Ranum and, to some extent, Bruce Schneier were part of a small reactionary movement of security thought-leader-types that opposed vulnerability research: not just the publicizing of it (which I think is also a misguided critique) but the entire ethos of vulnerability disclosure.
Most of it seems fine and pretty trivial (to the point of not really seeming worth writing). Like yeah, default deny is going to be safer than default permit, generally. Enumerating badness is kind of dumb, though it's worth noting that AV "works" for a ton of people.
"Penetrate and Patch" starts to get a little less straightforward. It's a naive view of pentesting, but given that view it's fine to say it's dumb, and lots of people do naive pentests.
"Hacking is Cool" is obviously the stupidest section. I really skimmed it and missed what he was saying because, tbh, the first half was sort of just trite. It's pretty garbage.
IDK, to me this is just sort of what security blog posts were like 15 years ago. Kinda just dumb rants that didn't say much. I certainly wrote plenty myself. This one seems to be just a particularly naive version of that.
> "Let's go production with it now and we can secure it later" - no, you won't. A better question to ask yourself is "If we don't have time to do it correctly now, will we have time to do it over once it's broken?" Sometimes, building a system that is in constant need of repair means you will spend years investing in turd polish because you were unwilling to spend days getting the job done right in the first place.
I have seen this everywhere I went, one way or the other.
One of the dumbest things I see is the idea that security can't be added to consumer items because it would make them too hard for the consumer to use.
Case in point: HP home printers that for years have shipped (and still ship?) with an integrated web interface and no password by default. You can hit up one of the nicer home office printers and read or modify someone's contact list. For example, if they do a "Scan document and send to contact 'Accountant'", you very well may have modified it to be your email or phone. Or ask it to scan whatever's on the scanner bed. Guaranteed some of them will have a driver's license, legal document, etc. left on them. Or just pump a ton of printouts of 99% gray sheets to them for a "denial of paper and ink" attack. There are so many of them on Shodan it's not funny.
I suppose the designers thought that it would be too much pain for users if this wasn't allowed. Or they didn't even think of it as an issue.
The thing is, it's trivial to, say, have the first connection to the server force you to set a password. And you limit connections to just the local network by default.
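To make that concrete, here's a minimal sketch in Python of both ideas. Everything in it (the config path, port, and form handling) is a placeholder; real firmware would do this in its own stack:

```python
# Sketch of "force a password on first use, LAN-only by default".
# Hypothetical firmware logic, not any vendor's real code.
import hashlib
import http.server
import ipaddress
import json
import os
import secrets
import urllib.parse

CONFIG_PATH = "admin.json"  # placeholder for wherever firmware keeps config

def password_is_set() -> bool:
    return os.path.exists(CONFIG_PATH)

def save_password(password: str) -> None:
    # Store a salted PBKDF2 hash, never the password itself.
    salt = secrets.token_hex(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), 100_000)
    with open(CONFIG_PATH, "w") as f:
        json.dump({"salt": salt, "hash": digest.hex()}, f)

class AdminUI(http.server.BaseHTTPRequestHandler):
    def client_is_local(self) -> bool:
        # Default-deny for anything not on the local network. Real firmware
        # would just bind to the LAN interface; this check is the same idea.
        addr = ipaddress.ip_address(self.client_address[0])
        return addr.is_private or addr.is_loopback

    def do_GET(self):
        if not self.client_is_local():
            self.send_error(403, "Remote administration disabled")
        elif not password_is_set():
            # First connection ever: the only page served is "set a password".
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(
                b'<form method="POST" action="/setup">'
                b'<input type="password" name="password"><input type="submit"></form>'
            )
        else:
            self.send_error(401, "Login required")  # normal auth flow elided

    def do_POST(self):
        # Mandatory first-run setup: accept the new password once, then
        # all further admin access requires it.
        if self.client_is_local() and self.path == "/setup" and not password_is_set():
            length = int(self.headers.get("Content-Length", "0"))
            form = urllib.parse.parse_qs(self.rfile.read(length).decode())
            save_password(form.get("password", [""])[0])
            self.send_response(303)
            self.send_header("Location", "/")
            self.end_headers()
        else:
            self.send_error(403)

if __name__ == "__main__":
    http.server.HTTPServer(("0.0.0.0", 8080), AdminUI).serve_forever()
```

That's maybe an afternoon of work on a device that already ships a web server.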
It sounds like a low risk. How many people would go through the trouble of trying to create a denial of ink and paper attack? How many people would notice?
If you're trying to get back at your ex, perhaps these small victories help, but going on Shodan to find these printers and burn their ink is a weird thing to do compared to spying on cams.
Well, let's not disregard the more serious attack that the parent poster mentioned:
> For example, if they do a "Scan document and send to contact "Accountant", you very well may have modified it to be your email or phone.
In places where personnel PII is closely guarded this would absolutely be a big deal. Also, I wouldn't be so quick to dismiss the impact of a denial of ink attack at larger scales. In my experience, printers spewing gibberish isn't usually intentional, but a side effect of an improperly configured scan. Not only can this be costly in ink alone, but also in time and the patience of your employees. You've assumed that the attack is targeting the more expensive color ink, but what happens when employees can't print simple documents in black ink? Just change the cartridge (on hundreds of printers), right? And when it happens again tomorrow? After a few days you're going to have to start thinking about the wages you pay these folks to do that.
> One of the best ways to discourage hacking on the Internet is to give the hackers stock options, buy the books they write about their exploits, take classes on "extreme hacking kung fu" and pay them tens of thousands of dollars to do "penetration tests" against your systems, right? Wrong! "Hacking is Cool" is a really dumb idea.
Hah, every big company out there has a bounty program now.
This is the most common professional deformation I've seen among security professionals. Plenty of them are great at saying this and that is dumb, but precious few actually have anything to offer on how to do things properly. Insofar as "security" is possible at all, it's going to be by construction.
There are literally thousands of pages of positive recommendations for security practices. If you're arguing that most of them don't offer anything "proper", well, you might have a point. Michael Brunton-Spall notes that we still don't do security very well. https://youtu.be/6U41SSz15xw
My complaint isn't about the quantity or quality of security research. By definition security professionals should be up to date on the state of their art. As a developer I want security professionals to help me constructively design systems that are secure by design. Instead I just get vetoes with no real explanation or worse yet just a human frontend to an OWASP tool or some other linter that I could run myself.
I know such security professionals exist, but in my experience they're so massively outnumbered by the bad ones that I've never had the pleasure of working with one directly.
Well, I work in security. Identity and access management specifically. So ponder this, if you wish. It's hard to have a discussion about security recommendations with decision makers without getting pushback over costs, time, complexity, and so forth. So of course it's always good to be able to say what problems the recommendations prevent, and give them some idea about the tradeoffs for not doing it.
But decision makers rarely involve security people up front. In most shops, security is an afterthought. You see, organizations build a website, or an app, make it work, and then go, "oh, hey, now we need a login for our users". At that point, there's almost always a slate of things that have been done that ought to be corrected before further work, otherwise you get what I call "sprinkling security sugar on top of a pile of manure". Yeah, you might get to a point where it's palatable, barely, but it's still a steaming pile.
Some of the things that make the work a steaming pile can be addressed by positive recommendations, but a great many are things that just need to go away. At that point, you're right, the conversation is going to largely be "don't do X, Y, and Z".
The question, then, is why not make the OWASP tool or other linter part of the development process from the beginning, rather than wait until it's all done and then ask a security professional to evaluate for you? Imagine writing a letter or spec by just typing and adding stuff without checking your spelling and grammar along the way. Now hand it off to an editor. Would you expect the editor to provide a bunch of advice about how to do things after the fact? No, the time for that has passed. The editor is going to point out the flaws, the misspellings, the grammar errors, and worse. Does that sound like saying "don't do that"? Would you be mad if the editor just ran ispell and a grammar checker and said, "fix these things"?
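And wiring the checker into the build is usually a few lines. A sketch of the idea, using Bandit as a stand-in for whatever linter your security team runs; the `src/` path and severity threshold are assumptions:

```python
# ci_security_check.py -- run the security linter on every build, so the
# "editor" sees the draft as it's written, not after it ships.
import subprocess
import sys

def main() -> int:
    # Bandit scans Python source for common security bugs. -r recurses into
    # the tree; -ll limits output to medium-severity findings and above.
    result = subprocess.run(
        ["bandit", "-r", "src/", "-ll"],  # "src/" is a placeholder path
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print("security lint failed; fix the findings before merging",
              file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```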
My constructive advice, then, is that you engage with your security professional before starting work, ask them for their input, and follow it. Not without asking questions about relative risk vs cost, of course, but giving them their due for being the expert in the art and having knowledge and context which takes years to acquire and put to work.
> My constructive advice, then, is that you engage with your security professional before starting work, ask them for their input, and follow it.
You sound exactly like the kind of security professional I’d like to work with and I agree with everything you said.
To give you an idea of my lived experience, I’ve actually had security team members refuse to disclose the tools they are using. This baffled me, because as you say why wouldn’t they want developers to run linters as part of their process? The only possible explanation I can think of is that being a human frontend for automated tooling is an easy high paying job.
Although there's some good stuff in here (some of which may be considered "obvious" these days), the tone is so typical of computer security: "Everybody else is dumber than me! No, seriously! I'll just enumerate a bunch of things they do, create a straw man case for why it's dumb (by for example not mentioning the constraints people are under, like time pressure and lack of funding/training), burn the straw man, and all that's left is my obviously superior advice which they should be ashamed for not discovering by themselves."
Oh, I kinda skimmed the "Hacking is not cool" bit. I sorta read the first bit of it as avoiding idolizing offensive research but they're actually saying that... you shouldn't learn to hack... at all.
It's too black and white though. For example I definitely agree that educating users shouldn't be "the solution". If you can make things secure in a way that doesn't require educating users then definitely do that.
But security is about probabilities, and if you can effectively educate users then that will help improve the probabilities. There definitely are some attacks that you can't block completely and rely a bit on user savviness. His email example is a good one - sure you can block attachments, but what about emails from `johnsmith34@gmail.com` that just say "Hi, this is the CFO, my laptop died so I'm on my phone; please urgently wire £10m to 00-01-03 4058394!"?
There are things you can do to help, and user education is definitely one of them.
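On the "things you can do to help" side, one common check for exactly that CFO scam is flagging display-name spoofing. A rough sketch; the names and domain here are entirely made up:

```python
# Sketch: flag mail whose display name matches an executive but whose
# address is external, like the "CFO on a phone" example above.
from email.utils import parseaddr

EXECUTIVES = {"john smith", "jane doe"}  # hypothetical VIP list
INTERNAL_DOMAIN = "example.com"          # hypothetical company domain

def looks_like_vip_spoof(from_header: str) -> bool:
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    name_is_vip = display_name.strip().lower() in EXECUTIVES
    return name_is_vip and domain != INTERNAL_DOMAIN

# A mail filter would banner or quarantine these, not silently drop them:
print(looks_like_vip_spoof("John Smith <johnsmith34@gmail.com>"))   # True
print(looks_like_vip_spoof("John Smith <john.smith@example.com>"))  # False
```

But that only catches one shape of the attack, which is why the education piece still matters.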
That said, my company has some bullshit online phishing training course and it is terrible. I don't think it would be hard to do a good online training course, but I doubt many are.
Yeah, agreed, but I think in 2005 this was just sort of the common way to write about security. It was a lot more hostile, a lot more brazen.
I think it's because the industry was even worse back then than it is now, with even weaker goalposts. We're in a better position to discuss nuance today.