- Learn C. Learn it well. That will give you the level of understanding you need of the way the computer works.
- Learn Python or Ruby. If you can automate something, do it. Computers are better than you at repeatable tasks.
- Learn x86 assembly (to start). Write some C code, compile it down to assembly, and read it; understand how the two map to each other. Get your friends to write stuff for you and decompile it back to C, and have them check your work.
- Learn the basics (at least) of web development. Understand how web applications work, understand the constraints the web puts in place, and understand the interactions between the client and the server.
- Internalize the OWASP Top 10. Understand and be able to recognize and mitigate XSS, SQL injection, command injection, arbitrary file reads/writes, etc.
- Grab old versions of open source software and rediscover known vulnerabilities. Grab the latest versions of open source software and discover new ones. Start off looking for simple things, and get to know them well.
- Reverse-engineer network protocols. Pick a game, write a server emulator for it. This is a great way to use all your skills up to this point. It's also a lot of fun (it's how I cut my teeth).
- Write a debugger. Understand the interaction between hardware, the kernel, and userland.
- Understand, understand, understand. Ask questions. Ask a lot of questions. In my opinion, the key to security is always asking "Why?"
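The debugger bullet is the biggest project on that list. If Python is where you're comfortable, you can get a feel for the core mechanism first with `sys.settrace`, the hook that `pdb` itself is built on. This is a toy tracer, nowhere near ptrace territory, but it shows the primitive that breakpoints and stepping are built from:

```python
import sys

events = []

def tracer(frame, event, arg):
    # Called by the interpreter for each traced event; recording
    # "line" events is the primitive behind breakpoints and stepping.
    if event == "line":
        events.append((frame.f_code.co_name, frame.f_lineno))
    return tracer  # keep tracing inside this frame

def target():
    x = 1
    x += 1
    return x

sys.settrace(tracer)
result = target()
sys.settrace(None)
# events now holds one entry per executed line of target()
```

A real debugger adds the other half: mapping those events to source, pausing execution, and poking at `frame.f_locals`. Doing the same against native code is where the kernel/userland interaction comes in.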
These are in no particular order, and none of these is more important than the other. It also jumps all over the place, perhaps because that's what I do myself; I may be breaking a web app one day and reversing some hardware the next. But these are things I feel are important, and will give you some direction.
If anyone has any questions, wants direction, or anything else, my contact info is in my profile. The world needs more breakers.
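To make the OWASP bullet above concrete, here's a minimal sqlite3 sketch of SQL injection and its mitigation (the table and data are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# Vulnerable: attacker-controlled input spliced into the query string.
evil = "nobody' OR '1'='1"
rows = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % evil
).fetchall()
# The injected OR clause matches every row, so 'alice' leaks out.

# Mitigated: placeholders keep the data out of the SQL grammar.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (evil,)
).fetchall()
# No user is literally named "nobody' OR '1'='1", so nothing matches.
```

The point isn't the sqlite3 API; it's the shape of the bug: data crossing into code. Once you can see that shape, XSS and command injection are the same disease in different tissue.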
I've had a lot of trouble bringing people into C who have a strong background in (say) Python, but I don't remember C being all that hard for me --- but that might be because I didn't have any other options, besides Borland Turbo C, when I was getting started.
Also: wow do I hate the OWASP Top 10. Can we just rattle off an HN Top 10 right here? It'll be better.
2. CSRF / Clickjacking / Reframing
3. SQL / db-metacharacter Injection
5. Unauthenticated Encryption and Bad Block Cipher Mode Handling
6. Filesystem/Backend Storage Path Sanitization
7. Exposed Admin/Diagnostic Functionality
8. Memory Corruption Vulnerabilities in Native Code Extensions (cext gems, &c)
9. Shell-out Command Injection
10. Insecure Password Hashes
I feel like mass assignment, resource exhaustion, filter return code mistakes, and wildcard routes all belong somewhere, but they feel too Rails-y for a generic list.
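Taking item 10 as an example: a minimal sketch of what "not insecure" password hashing looks like with only the standard library, i.e. salted, iterated PBKDF2 plus a constant-time comparison. (The iteration count here is an arbitrary illustration, not a recommendation, and real code should reach for a vetted library.)

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; tune to your hardware budget

def hash_password(password, salt=None):
    """Salted, iterated PBKDF2-HMAC-SHA256 instead of a bare hash."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt, ITERATIONS
    )
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt, ITERATIONS
    )
    # Constant-time comparison avoids a timing side channel.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
```

The per-user random salt kills precomputed rainbow tables; the iteration count makes each brute-force guess expensive. An unsalted single-pass MD5/SHA1 fails on both counts, which is what item 10 is about.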
Imagine if someone said to write a secure program for some computer. But, btw, there's going to be an attacker logged in via VNC at all times whom you'll need to defend against. I'd just throw up my hands and walk away.
Learning C super well is important, and so are assembly and other topics, but I think fundamentally, step one needs to be: "I have product X. What is it trusting? What is it trying to protect? How do I get around it?" Then, find similar vulnerabilities and continue on stumbling toward the ultimate goal.
1. Look in the manual for statements of the form "Don't do X"
2. Do X
(Also, anything lcamtuf touches is probably going to be good.)
Understanding the function, boundaries, and interactions of a system, application, protocol, or device, and being able to identify who the threat actors are, what the assets to protect are, and what the real threats against the entity are, will point you at the more valuable places to start. Far, far too many people gloss over threat modeling because they don't equate it with technical work.
But on top of being "a great way to use all your skills up to this point" I think it can also be a strong motivation for picking up new ones.
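In my experience, the first concrete step toward a server emulator is usually recovering the framing from packet captures. A minimal sketch, assuming a hypothetical wire format with a 2-byte big-endian length prefix on each frame (real game protocols vary, but many are shaped roughly like this):

```python
import struct

def parse_frames(buf):
    """Split a byte stream into frames, assuming each frame starts
    with a 2-byte big-endian length prefix (a hypothetical format)."""
    frames = []
    offset = 0
    while offset + 2 <= len(buf):
        (length,) = struct.unpack_from(">H", buf, offset)
        offset += 2
        if offset + length > len(buf):
            break  # incomplete trailing frame; wait for more data
        frames.append(buf[offset:offset + length])
        offset += length
    return frames

# Two frames on the wire: a 5-byte payload, then a 2-byte payload.
stream = struct.pack(">H", 5) + b"hello" + struct.pack(">H", 2) + b"ok"
```

Once framing works against captured traffic, you move on to guessing opcode fields and payload structures, and that's where the C, assembly, and reversing skills from the earlier bullets all get exercised at once.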
Regardless, I think Bruce is right about the mindset: if you want to be good at security (no matter where you want to live in the continuum), you have to think differently. That is why most software engineers are good at writing code that can be easily broken. We think in terms of building up, not tearing down.
I also think you are right: once you decide where you want to live on that continuum, you have to understand it inside and out.
Beyond the deep, technical knowledge you'll gain following daeken's recipe, there's a whole set of security thought and security models you'll need to learn. It's hard to figure out and differentiate between threats unless you can think like an attacker, then work a system like an attacker does.
Or better yet, build a single-bus architecture in something like Logisim. It's much easier to learn a simple instruction set architecture down to the logic gates than it is to learn x86 assembly.
In security you're asking a similar question addressed instead at yourself. You're asking, "what am I assuming automatically?" Schneier for example has talked about why pilots don't get reduced screening at airports. The problem is that the pilots who cry out, "this is absurd, I could crash the plane I'm flying, how could I possibly be more of a risk to these people?" don't realize a certain automatic assumption: the assumption that the only people wearing the pilot's uniform are fellow pilots.
I tell this story occasionally, sorry if you've already heard me tell it. I once corrected a major security leak in an application I was paid to help develop -- the leak existed in the dev but not in production (thankfully!). The problem was that the team who had asked me to help out had made an assumption: "logs are good and are one of the only ways to create an audit/revert trail, we should log every request which comes our way just in case." This was built deep into the system. When I heard that, I was almost floored. As a proof of concept of the seriousness of what I'd realized, I looked through the audit logs for my boss's dev password and sent it to him.
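One shape the fix can take is scrubbing sensitive fields before anything reaches the audit log. A sketch with hypothetical field names (not the actual system's):

```python
REDACTED = "[REDACTED]"
# Field names an audit logger might scrub; purely illustrative.
SENSITIVE_KEYS = {"password", "passwd", "token", "authorization"}

def scrub(params):
    """Return a copy of request parameters safe to write to logs."""
    return {
        key: REDACTED if key.lower() in SENSITIVE_KEYS else value
        for key, value in params.items()
    }

entry = scrub({"user": "boss", "password": "hunter2"})
```

A denylist like this is itself an assumption (what about a field named `pwd`?), which is exactly the kind of implicit assumption the next paragraph is about.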
It's a fundamental perspective shift: "if I wanted to break this, what assumptions could I use against it?" Whenever you see an implicit assumption you ask "how could that come back to bite us later?"
I spent 6 years in computer security right out of college, and most of it was PKI and assessments. To stay on top of other areas of security required constantly learning and doing stuff on my own, outside of work.
Even if you make it a profession, you have to keep learning and doing if you want to be considered an expert or specialist.
I agree that this is worthwhile to be careful of, but it very much depends on your employer as well. As with any career, find a good employer. So if you care about being a generalist, find an employer that needs you to be a generalist. I know for a fact that there are plenty of these in the security space.
I'm not sure if the CISSP is the way to go but I want to feel as if I am moving forward on a career search so I don't get stuck in a rut.
Maybe the perfect cert would be a useful tool for some purposes (corporate hiring, huge projects with consultants doing low-level IT, etc.), but those are crappy jobs (and not really "expert" in any way).
More importantly, the extant certifications are all crap. CISSP in particular. Get it if an employer requires it, but it's independent of your actual knowledge and learning process.
I personally got it just so that no one else in my company would ever need to do so; there are stupid companies which won't buy a product without integration, and where they have artificial requirements for integrators being certified. Given that it is only 1% of useless pain to enable 99% useful rewarding stuff, I found the sacrifice worthwhile.
But to be clear: I believe pretty firmly that for technical / software security, the CISSP is useless.
I'm not sure how I feel about SANS/GIAC. Absurdly expensive IMO, but potentially actually has some value for sysadmins doing system security. I can't think of what CISSP is actually good for, except maybe trivial pursuit - crappy consultant edition.
I'm under no illusions about the certification's marketplace value and I doubt I would have ever paid for the course/cert on my own, but it felt like one of the better formal trainings I've been through in my professional career (which, granted, isn't saying a whole lot by itself).
Also, the certificate comes mounted on a comically oversized plaque, which provides some entertainment value.
Once upon a time I was a contractor at an insurance company, and I saw that most of the people in their IT department had various certifications hanging on their cubicle walls. I thought, "I want one of those."
So I selected the Security+ certificate, inhaled about two-thirds of a book covering the material, passed the test, framed the certificate and put it on my office wall.
That's about it. It was fun.
Security+ seems a bit more focused, and obviously vastly less comprehensive (part of the CISSP covers some fairly esoteric, never-used theoretical models). In practice I'd say it's on par with the CISSP.
DoD considers Security+ to be level I or level II, CISSP to be ok up to level III, although prefers CISM or CISA for certain roles over CISSP.
That said, there are a few certifications that are very hard to fake your way through. I wouldn't put much stock in a CCNA or CCNP, for example, but someone who has a CCIE most likely does know quite a bit about the area covered by the CCIE exam. Likewise, the Microsoft Certified Master program, which requires not only exams but also a certain number of years of experience (varies by product) in the product you want MCM status for, shows that you've been working with the product long enough that you probably actually know something about it (whether or not it really makes you an expert...). But these certifications don't really say much about your software development skills, which is what the Hacker News audience is probably more interested in.
Focus your efforts on actual learning, instead of proving it through a worthless piece of paper.
edit: Just to drive the point home: 9 years later I still don't know shit about security.
I'd trust an RHCE to know Red Hat security for Red Hat deployments more than I'd trust a CISSP, mainly because I put close to negative value on the CISSP, and a lot of infrastructure security is actually about following best practices, not anything too specific to security.
For networking, Cisco (assuming a Cisco shop).
For virtualization, I've heard the VMware stuff is good if you're enterprise doing VMware. I wonder if there's value in the Amazon AWS courses for AWS deployments; I'd almost take one just to see what they're like.
Here at HN, many of us tend to view things in terms of how they relate to software development, but there is more to security than computer security, and the author mentions this in some of his other articles.
There are physical security experts, often having an extensive background in military or law enforcement operations. There are COMSEC custodians who are experts at implementing, operating and repairing a wide variety of cryptographic equipment, while also managing a program that generates, issues, maintains and disposes of an organization's crypto keying material. There are also security managers who at one time may have been experts in a specific domain, but are now called upon to maintain a bird's eye view of an organization's entire security strategy in order to develop a cohesive plan for minimizing the threats presented to their employer. There are intelligence analysts who make an organization more secure by using their expertise of social engineering to actively track down those who would wish them harm. Each of these types of employees is a security expert in their own right, yet none of these examples really need any knowledge of programming.
When you look at the content from the Security+, CISSP, and CISM, the classes lean heavily towards information security, but they address the other areas briefly. More specifically, they focus on information security from the perspective of a security manager. In other words, you don't take these certifications to gain technical expertise.
Unfortunately, many businesses fail to understand this, and so they end up making CISSP a requirement for their technical recruits out of ignorance.
Does this make the CISSP a worthless piece of paper? Not really. It's just misunderstood by those who are doing the hiring.
However, CISSP is a worthless piece of paper for IT security managers, crypto custodians, developers, pentesters, and auditors.
(Well, you can, but it requires dedicating a lot of time and energy to it, more than most people are willing to give up after the normal day job.)
One of the best secure developers in the world is Daniel J. Bernstein, and there are two things you'd want to know about DJB: (i) he missed LP64 integer overflows, which got flagged by (pure breaker) Georgi Guninski, and (ii) he's a world-class cryptanalyst and breaker in his own right.
There's a reasonable "builders vs. breakers" argument to be had, I'm sure, but my experience suggests that overwhelmingly the people making this "world needs more builders and less breakers" point are people who are annoyed at the prospect of sinking the time into becoming a competitive breaker.
(Maybe you don't think I'm a good secure developer, though.)
As an example, I deal with the registries (the organizations, such as Verisign and PIR, that deal with the registrars), and I know what security they have in place to prevent a person from imitating a registrar and gaining access. I also know exactly how to circumvent that security (which you don't), and obviously the registries don't either (because if they did, they would have taken steps to prevent what I know I could do to circumvent their security).
While this only illustrates one example, the mindset of someone whose thought process is excited by how to break into something is not necessarily the same as someone who designs a system (although that could be the same person).
Here's another example that is maybe a little closer to the parent's point. I once got into a medical conference by setting up a website about the medical specialty that the conference was about. They requested I fax them a request on a letterhead (easy to whip up in a graphics program), and I was quickly issued press credentials. I also registered a domain called "(Specialty) Treatment News" with a simple banner. Doing that revealed flaws in their system for vetting who should be issued press credentials, because the person setting up the system a) didn't think like a breaker does and b) didn't realize how simple it is to fake a letterhead that looks legitimate.
The idea is you're not going to be able to think of as many attack vectors unless you also spend time breaking things.
As for me being a breaker... I'd say that my security-related time is split roughly 90% building, 9% fixing, and 1% breaking.
In my role as security engineer for my company (as opposed to being a developer), it's nice finding possible exploits and passing them to our appsec guy for review. Being the one who finds (or develops) and also exploits, documents, and patches the security flaws creates a closed feedback loop far too often. Being a security-conscious developer doesn't preclude the need for an application security engineer.