Now there are obviously great security people around, but the path one needs to take to become a professional in the field seems more theoretical and rubber-stamped.
For me, this is why I focused more on software engineering, with less emphasis on network security. I get paid to hack without worrying about the police knocking on my door for experimenting and exploring.
But that also means that the aggressive laws here are not only useless but harmful, because there is no middle ground between fooling around and real crime. If fooling around gets you a slap on the wrist then domestic kids fool around, test the fences, cause people to learn of the vulnerabilities and they get fixed. But they still don't commit credit card fraud or corporate espionage, because those would still be highly illegal.
If, on the other hand, there is a dichotomy where in one country looking at a computer sideways is a felony and in another there is complete lawlessness, then the first time you get hacked it's by foreign raiders rampaging through the village and causing widespread destruction, instead of some kid who only tags your website with graffiti and in so doing impresses upon you the need for better security.
It's easier for lots of reasons, many of them banal, like the fact that you can get a modern Unix system on your laptop in minutes for free, and tear it down and bring it back up on demand with virtualization, or the fact that everyone has Internet access and nobody has to bounce through an X.25 Internet gateway or an outdial to get to IRC.
It's also easier because huge amounts of vulnerability research have been published since the 1990s. The cutting edge of vulnerability research in the 1990s is almost unbelievably primitive compared to what's available in open school curricula now, let alone refereed vulnerability research venues. In the 1990s, basic details about things like stack overflows or even temp file races were permanently embargoed and made available only to large-company system operators. Most of what's disclosed today by vendors would have been decried as unethical by the security community in 1995.
People that want to do active vulnerability research today can participate in thousands of bug bounty programs, none of which existed in the 1990s. And, despite what the prevailing sentiment on HN would indicate, I think it's actually a bit rarer for people to be prosecuted for undirected, non-monetized hacking than it was in the '90s.
I think this sentiment regarding bug bounty programs kind of solidifies my point. Corporate America and the government have made legal only the type of hacking they benefit from, while experimentation is off the table.
I’m not so much advocating for a free-for-all, by the way. I think there is a difference between curiosity and malicious intent.
In the dialup and early broadband phase, looking "under the hood" and learning about everything that was there was common practice. It was a 10 course meal daily for the life-long learners.
Up until 2000'ish, there were only "certain types" of people online, with trust and integrity being self-regulated and intact.
Between 2000'ish and 2007'ish, we saw substantial growth of broadband and wifi, expanding the userbase further, while attention was starting to be paid to internal security. Enter the ad explosion of popups, the occasional virus exploit and post-dotcom wisdom (/s) of the economists, and the self-regulating model erodes practically overnight.
The smartphone bubble is where "all types" had access to the vast trove of resources that we used to call an information superhighway.
Fast-forward to the present: when you open the hood, there are now several yellow warning stickers (idiot alerts), and all of the internal doors are being locked as common practice.
It looks like the modern crux of safety is the mix of three types (there are more) of fundamental operating models for open platforms: (almost) full-trust backends/platforms; self-regulating models and software packages (the bulk of OSS, Linux, etc.); and the current full-lockdown approach to research, exploration and security.
I guess we can officially say "the more the Internet changes, the more unrecognizable it becomes", but data is still the most valuable asset and will most likely migrate to "safest path routing", rather than using the "three billy goats gruff" model that the Internet has become.
I’ve looked into this: transitioning from software engineering into a cybersecurity niche requires a drastic pay cut. Sure, if you’re the self-employment type you could try doing bug bounties, but the top 1% make something like $35k a year on average, and everyone else makes peanuts or nothing. That is still better than starting a startup, but you're also better off being a day trader, where maybe 10% of people make a better living on two hours of trading each weekday plus two hours of analysis and strategizing in the morning; maybe then you could find time to hack systems just for fun.
The other thing you could do is become some kind of consultant and make better money consulting companies and running pen tests for them, but if you can do that you could just as easily make more money building out and consulting on custom software solutions...
And if you don’t care about the money fine, but eventually someone that does care about money will come along and do it better than you anyway and eat your lunch.
Now, to leave this post on a positive note and not become a target for downvote gangs, I’ll propose this: the way to have more hackers at scale is to grow every software engineer to be a security conscious engineer by default, such that they are constantly looking for vulnerabilities throughout development and deployments to production. It works well because I’d say engineers spend maybe 4 hours a day actually working and the rest is thinking about problems, which can happen in parallel with hacking quite nicely.
Most companies I’ve worked for have encouraged this to a degree. However, there are some major objective mismatches when trying to fit security engineering into product engineering. It’s great to understand the OWASP top 10 and read up on basic issues to avoid (like SQL injection), but I don’t think this proposal is very realistic for a few reasons.
Security engineering requires deep knowledge of protocols, networking, system design, etc., and a lot of product-oriented engineers just don’t have enough context to own that responsibility outright. New vulnerabilities are being researched constantly. How do I keep up with netsec research while trying to stay current in my primary skill set? You might as well suggest that a front-end dev start rolling their own operating systems.
On top of that, pen testing can be incredibly time-consuming, and putting that responsibility on engineers who are focused on shipping as quickly as possible is going to cause problems. My performance as a product engineer is tied to my ability to ship, not testing my feature against every known attack vector out there.
Rather than making everyone a half-assed security engineer, we’re better off fostering a culture that treats security as a first-class citizen.
As for your proposition - I agree. A simple way to do this would be to have, for example, a day a month where the organization's engineers try to break into their own system.
It's common to write tests and follow best practices here and there, but developers often won't actually dedicate some serious time to actively try to break into their own system as an attacker.
I think this could be fun and beneficial for all involved.
Where do you get that idea from?
Also no mention of starting a testing business? No mention of cyber security jobs?
This can be both bottom-up, with more Responsible Disclosure programs, and top-down (i.e. via the legal system), making sure to take intent and actual harm caused into greater consideration when seeking to prosecute a white hat.
Top-down would be important for accidental discoveries though. It's quite problematic to go after someone that accidentally finds an issue in your service.
Obviously, if a company hires somebody to try to steal in order to check their LP (loss prevention), or if a company hires somebody to break into their network, it’s different (and I know there have been cases of one part of a company trying to prosecute somebody that another part had hired, but that’s just stupid corporate governance).
An ethical hacker will find out your door is open and warn you without going inside. If they go in and steal, they're not a white hat hacker.
And if you have an open door with something interesting inside, a black hat hacker will eventually find it anyway.
To start with, loss prevention is the store's problem. They have all the incentives they need to do it themselves. With data it's completely the opposite. They have your data and it costs you, not them, when they secure it poorly. Many of the entities with poor security are effectively judgment proof, so legal liability is ineffective. The users are really the only ones with the right incentives to poke around and make sure their data is secure. It's like storing your valuables at a bank, but if you notice a security vulnerability in the bank, not only is reporting it or informing their other customers likely to get you arrested, they're not even required to let you remove your valuables from the bank. (Try to get Equifax to delete your data.)
Then, the scope of what's considered "hacking" extends far beyond things that cause actual harm. The sort of things investigative reporters have long done to humans with a telephone are a felony if you do them to computers with an internet connection, even if no "harm" is done to the target other than to reveal their misconduct to the world.
Then, the penalties are unconscionable. If an investigative reporter goes past the "employees only" door to take some pictures, that's illegal, but the most common result is only that they'll be removed from the property, and even if charges are pressed it's only a misdemeanor in most places. Do the same thing with a computer and you can expect federal prison. It discourages people from testing the fences, even though you want them to.
You know, software is more like criminal law interpreted by an instant-judgment robotic courthouse, so you absolutely don’t want bad precedents and loopholes. Or do we? Tests are sparse events, but once they become real-world applications, the consequences are dense and miserable...
Point being, I'm sure there are companies out there that do the same but digitally, either willingly or accidentally. Reporting that, however, is illegal in oh so many jurisdictions, and that's not okay.
There is a trope about suppressing forest fires that has come up in a few different areas: when you suppress the small fires and controlled burns that would clear out the scrub, the only fires that do occur are the ones that consume the fuel-intensive old-growth forests, creating super infernos. Nassim Taleb talks about it in the context of "suppressed volatility," where most attempts to reduce volatility in complex systems just cause risk to bottle up, as though there were a law of "conservation of risk" at play.
We don't have "more hackers," right now because the incentives just aren't worth it. Oddly, enforcement of hacking laws and high profile busts of worm/malware writers have had the effect of discouraging the low level hacking that would clear the scrub out of corporate networks - leaving a lot of super vulnerable and unmaintained infrastructure tucked away behind forgotten firewalls.
I have been trying to figure out how to get positioned, product-wise, for this next extreme event, which I think will be caused by all these crappy security controls that removed the low-hanging fruit while letting massive data risks age into super fuel for a tail-risk event. Gaming out what companies will need when all that old infrastructure in effect goes up in flames mainly yields: migration tools to get legacy code into containers that can be moved into cloud environments in a forced move, un-sticking vendor lock-in; BYOCredentials; some kind of liability mitigation for all the PII it will compromise; and customer alternatives to locked-in dependencies.
From a product perspective, I have a contrarian view where I'm less interested right now in what people in companies say they want, and more in how to be positioned for the Forcing Function event that their risk exposure has them set up for. Enterprise demand is a synthetic effect of political positioning and leverage, and not user desire that startups and consumer products respond to.
Forest Fires: https://www.nature.com/articles/s41467-020-15961-y
Taleb riffing on a similar theme via FSBlog:
Every implementor then only has to review/subscribe to the versions of the software they use. Any custom software would still have to be handled differently, but I presume that patterns of use of frameworks and libraries either make up a larger portion already, or will trend in this direction.
I interned at a company that did $10 million a year with a website built with nothing but the w3schools tutorials. You could dump the database from the login form, since the site was started by a guy who had just read those tutorials.
We weren't allowed to fix it either as the owner didn't want a system he didn't understand.
I don't mean that in a dismissive way; I mean that to hack, you are tinkering and figuring things out with your own mind rather than asking others.
I think I could hack. I've been hacked, I've read about how people hack, and I have a pretty good understanding of how attempts are made. Basically, you learn the exploits that have historically worked.
Worst case, if you need to do something, you can phish.
Currently my company just assumes that Spring handles all those issues for us.
- Sending data to an external system that processes commands, including destructive ones like DROP DATABASE? Verify what you're sending doesn't contain commands you don't want to run, each time, every time.
- If you are programming in C or a C-derived language, and using expressions that feed array indices or pointer arithmetic, make sure the expression falls within the acceptable range before using the resulting value. And since NULL (0) is an invalid pointer value, make sure it can never reach code that dereferences pointers.
- Should a sensitive function only be called from certain other functions? Should a sensitive variable only be modified/accessed from certain other functions? Enforce that.
- Someone will eventually find anything that's not documented, so don't rely on that for security. Don't expose an interface (especially not through any network accessible method) without an authentication or verification mechanism if it's not OK for it to be 100% public.
- Define everything. A function call should never have an unexpected result. Invalid inputs should cause the function to return errors or throw exceptions.
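The first bullet is, in practice, what parameterized queries give you for free: the driver guarantees the value is treated as data, never as a command, so you don't have to hand-verify every string. A minimal sketch in Python (the table and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def find_user(name):
    # BAD: string interpolation lets input smuggle SQL into the query;
    # a name of "' OR '1'='1" would match every row:
    #   conn.execute(f"SELECT * FROM users WHERE name = '{name}'")

    # GOOD: the ? placeholder makes the driver treat name strictly as data.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user("alice"))        # [('alice', 'hunter2')]
print(find_user("' OR '1'='1"))  # [] (the injection attempt matches nothing)
```

The same idea applies to any external system that processes commands: keep the command template fixed and pass untrusted values through a channel that can only carry data.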
The above is a good percentage of it I think.
A lot of developers do not know that data should be validated.
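Concretely, validation mostly means rejecting anything outside the expected shape at the trust boundary, before it touches real logic. A deliberately tiny sketch (the function and its rules are invented for the example):

```python
def parse_port(raw):
    """Turn an untrusted string into a TCP port number, or raise ValueError."""
    try:
        port = int(raw)
    except (TypeError, ValueError):
        raise ValueError(f"not an integer: {raw!r}")
    # Reject anything outside the valid TCP port range.
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

print(parse_port("8080"))  # 8080
```

The point isn't the port check itself; it's that every field crossing a trust boundary gets a parser like this, so the rest of the code only ever sees values whose shape it knows.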
The hard part is identifying the invariants of your system. But (and I know I'll sound like Dijkstra), that hard part is also the minimum bar. If you don't know what it takes for your system to behave correctly, how can you hope to build a correct (and therefore secure) system to begin with?
Some people love to say "don't roll your own crypto", with the strong implication that whoever is trying to do it is not competent enough. Well, the same applies to any server exposed to the internet, any moderately interactive website, any network library, any video decoder, any compiler… all those things have the potential to go catastrophically wrong in the face of hostile input, and therefore should be held to similarly high standards.
Here's my advice:
1. Make sure you understand invariants, and why they matter.
2. If you don't, train yourself. It may take years.
3. Validate your inputs (but you already know that by now).
Not helpful? Sorry pal, programming takes time to learn, and I don't have the time to teach.
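To illustrate what "identify the invariants" means in the small (a toy example, not a real library): a bounded buffer whose invariant is 0 <= len(items) <= capacity, enforced at every mutation rather than trusted.

```python
class BoundedBuffer:
    """Invariant: 0 <= len(self.items) <= self.capacity, at all times."""

    def __init__(self, capacity):
        # Validate inputs up front; a bad capacity never creates a bad object.
        if not isinstance(capacity, int) or capacity <= 0:
            raise ValueError("capacity must be a positive integer")
        self.capacity = capacity
        self.items = []

    def push(self, item):
        # Check before mutating, so a full buffer raises instead of corrupting state.
        if len(self.items) >= self.capacity:
            raise OverflowError("buffer full")
        self.items.append(item)

    def pop(self):
        if not self.items:
            raise IndexError("buffer empty")
        return self.items.pop(0)
```

Every public method either preserves the invariant or raises; no input can leave the object in a state the rest of the program can't reason about. Scale that discipline up and you have most of "validate your inputs".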
Two books I like to start this process:
* Threat Modeling: Designing for Security by Adam Shostack
* CISSP All-in-One Exam Guide by Shon Harris
You don't need to get a CISSP cert, but the resources and education are generally applicable in most situations.
It is astonishing how often people neglect basic programming best practices (input validation, error handling, logging, access control), which in turn leads to security vulnerabilities. My theory is that this is caused more by Agile management forcing developers to cut corners than by developers being ignorant/lazy.
Threat modeling can be a good practice to learn, because it gets you into the habit of thinking about how you could hack each new thing you're developing.
I don't think the CISSP is a good recommendation for this person's case, unless they plan on becoming a cybersecurity manager.
Also, having a developer understand the value and need for code security from the perspective of a security person is important to the overall success of an infosec program. Otherwise both engineering and infosec are going to be grating on each other.