The Internet is too unsafe: We need more hackers (medium.com)
53 points by yakkomajuri 15 days ago | 48 comments



I think being a hacker, phreaker or cracker became so illegal it really scared people off experimenting and learning about computer security the practical way.

Now there are obviously great security people around, but the path one needs to take to become a professional in the field seems to be more theoretical and rubber stamped.

For me this is why I focused more on just software engineering with less focus on network security. I get paid to hack without worrying about the police knocking on my door for experimenting and exploring.


And of course the opposite is true in some other countries. Some of the best computer security people come out of places like Russia because they don't much care what their people do to computers in America.

But that also means that the aggressive laws here are not only useless but harmful, because there is no middle ground between fooling around and real crime. If fooling around gets you a slap on the wrist then domestic kids fool around, test the fences, cause people to learn of the vulnerabilities and they get fixed. But they still don't commit credit card fraud or corporate espionage, because those would still be highly illegal.

If, on the other hand, there is a dichotomy where in one country looking at a computer sideways is a felony and in another there is complete lawlessness, then the first time you get hacked it's by foreign raiders rampaging through the village and causing widespread destruction, instead of some kid who only tags your website with graffiti and in so doing impresses upon you the need for better security.


The opposite is true; it is much easier today to engage with computer security legally than it was in the 1990s. And the consequences for crossing the line are generally smaller.

It's easier for lots of reasons, many of them banal, like the fact that you can get a modern Unix system on your laptop in minutes for free, and tear it down and bring it back up on demand with virtualization, or the fact that everyone has Internet access and nobody has to bounce through an X.25 Internet gateway or an outdial to get to IRC.

It's also easier because huge amounts of vulnerability research have been published since the 1990s. The cutting edge of vulnerability research in the 1990s is almost unbelievably primitive compared to what's available in open school curricula now, let alone refereed vulnerability research venues. In the 1990s, basic details about things like stack overflows or even temp file races were permanently embargoed and made available only to large-company system operators. Most of what's disclosed today by vendors would have been decried as unethical by the security community in 1995.

People that want to do active vulnerability research today can participate in thousands of bug bounty programs, none of which existed in the 1990s. And, despite what the prevailing sentiment on HN would indicate, I think it's actually a bit rarer for people to be prosecuted for undirected, non-monetized hacking than it was in the '90s.


Yeah I was comparing the situation to circa 2000 onwards. Not around the time when Linux came out.

I think this sentiment regarding bug bounty programs kind of solidifies my point. Corporate America and the government have made only the type of hacking they benefit from legal, while experimentation is off the table.

I’m not so much advocating for a free-for-all, by the way. I think there is a difference between curiosity and malicious intent.


The current state of "safety" online was actually a masterclass in the growth, trends and thresholds of open platforms. Each iteration could only bear so much.

In the dialup and early broadband phase, looking "under the hood" and learning about everything that was there was common practice. It was a 10 course meal daily for the life-long learners.

Up until 2000'ish, there were only "certain types" of people online, with trust and integrity being self-regulated and intact.

Between 2000'ish and 2007'ish, we saw substantial growth of broadband and wifi, expanding the userbase further, while attention was starting to be paid to internal security. Enter the ad explosion of popups, the occasional virus exploit and post-dotcom wisdom (/s) of the economists, and the self-regulating model erodes practically overnight.

The smartphone bubble is where "all types" had access to the vast trove of resources that we used to call an information superhighway.

Fast-forward to the present and when you open the hood, there are now several yellow warning stickers (idiot alerts) and all of the internal doors are being locked as common practice.

It looks like the modern crux of safety is the mix of three types (there are more) of fundamental operations of open platforms: (almost) full-trust backends/platforms; self-regulating models and software packages (the bulk of OSS, Linux, etc.); and the current full-lockdown approaches to research, exploration and security.

I guess we can officially say "the more the Internet changes, the more unrecognizable it becomes", but data is still the most valuable asset and will most likely migrate to "safest path routing", rather than using the "three billy goats gruff" model that the Internet has become.


Unfortunately it doesn’t pay well to be a hacker, you’re better off being a software engineer. So yea, I guess the internet can rot until there’s a way to do white hat hacking at scale.

I’ve looked into this; transitioning from software engineering into a cybersecurity niche requires a drastic pay cut. Sure, if you’re the self-employment type you could try doing bug bounties, but the top 1% make like 35k a year on average, and everyone else makes peanuts or zero. That is still better than starting a startup, but you’re also better off being a day trader, where the top 10% make a better living off two hours of trading each weekday plus two hours of analysis and strategizing in the morning; maybe then you could find time to hack systems just for fun.

The other thing you could do is become some kind of consultant and make better money consulting companies and running pen tests for them, but if you can do that you could just as easily make more money building out and consulting on custom software solutions...

And if you don’t care about the money fine, but eventually someone that does care about money will come along and do it better than you anyway and eat your lunch.

Now, to leave this post on a positive note and not become a target for downvote gangs, I’ll propose this: the way to have more hackers at scale is to grow every software engineer to be a security conscious engineer by default, such that they are constantly looking for vulnerabilities throughout development and deployments to production. It works well because I’d say engineers spend maybe 4 hours a day actually working and the rest is thinking about problems, which can happen in parallel with hacking quite nicely.


> the way to have more hackers at scale is to grow every software engineer to be a security conscious engineer by default, such that they are constantly looking for vulnerabilities throughout development and deployments to production.

Most companies I’ve worked for have encouraged this to a degree. However, there are some major objective mismatches when trying to fit security engineering into product engineering. It’s great to understand the OWASP top 10 and read up on basic issues to avoid (like SQL injection), but I don’t think this proposal is very realistic for a few reasons.

Security engineering requires deep knowledge of protocols, networking, system design, etc., and a lot of product-oriented engineers just don’t have enough context to own that responsibility outright. New vulnerabilities are being researched on a regular basis. How do I keep up with netsec research while trying to stay current in my primary skill set? You might as well suggest to a front-end dev that they should start rolling their own operating systems.

On top of that, pen testing can be incredibly time-consuming, and putting that responsibility on engineers who are focused on shipping as quickly as possible is going to cause problems. My performance as a product engineer is tied to my ability to ship, not testing my feature against every known attack vector out there.

Rather than making everyone a half-assed security engineer, we’re better off fostering a culture that treats security as a first-class citizen.


Great points indeed.

As for your proposition - I agree. A simple way to do this would be to have, for example, a day a month where the organization's engineers try to break into their own system.

It's common to write tests and follow best practices here and there, but developers often won't actually dedicate some serious time to actively try to break into their own system as an attacker.

I think this could be fun and beneficial for all involved.


I worked at a company where we did this. It was indeed fun, and we learned some things about our system and surfaced some security issues. It also made everyone think more about what a curious attacker could do in a short amount of time.


> but the top 1% make like 35k a year on average, and everyone else makes peanuts or zero

Where do you get that idea from?


They are probably talking about freelance work. Not mentioned was that the top 0.1% are the ones making 200 to 500k.

Also no mention of starting a testing business? No mention of cyber security jobs?


There was a mention: testing businesses and cybersecurity jobs all pay less than software engineering businesses or jobs.


We need better protection for whitehats. Every government is anti-hacker at some level.


Agreed.

This can be both bottom-up with more Responsible Disclosure programs, as well as top-down (i.e. via the legal system), making sure to take intent and actual harm caused into greater consideration when seeking to prosecute a white hat.


Bottom-up is perhaps the best approach, however. I mean, if an organization doesn't want white hats snooping around, then that's up to them, period. So it should be a matter of more organizations actually putting out some guidelines.

Top-down would be important for accidental discoveries though. It's quite problematic to go after someone that accidentally finds an issue in your service.


Top-down is how governments work. Policy first, technology third.


  s/Responsible/Coordinated/g


On its face, hacking, even the white hat kind, is no different than saying we should allow trying to steal from stores to check if their loss prevention functions are working. As far as the government is concerned, people shouldn’t be trying to steal or hack to begin with.

Obviously, if a company hires somebody to try and steal to check their LP, or if a company hires somebody to break into their network, it’s different (and I know there have been issues with one part of a company trying to prosecute somebody that another part tried to hire, but that’s just stupid corporate governance).


Hm, yes and no. Ethical hacking is bounded by "scope". Organizations specify exactly what is allowed and what isn't. Also, the general guideline is that you must stop once you reach data.

An ethical hacker will find out your door is open and warn you without going inside. If they go in and steal, they're not a white hat hacker.

And if you have an open door with something interesting inside, a black hat hacker will eventually find it anyway.


An ethical cracker might take a CD box with one of those radio tags and wave it at the bleepy scanner thing to check if the scanner's working, but they wouldn't walk out the door with a fridge under their coat.


What a white hat hacker does in many (not all, maybe not even a majority) of instances is the equivalent of trying new lock picks in the door, watching them swing open, and then reporting dutifully on it. "Not going inside" and "not stealing" aren't sufficient defenses in the eyes of the law for this behavior any more than attempted burglary.


There are multiple reasons why computer security is nothing like "trying to steal from stores to check if their loss prevention functions are working."

To start with, loss prevention is the store's problem. They have all the incentives they need to do it themselves. With data it's completely the opposite. They have your data and it costs you, not them, when they secure it poorly. Many of the entities with poor security are effectively judgment proof, so legal liability is ineffective. The users are really the only ones with the right incentives to poke around and make sure their data is secure. It's like storing your valuables at a bank, but if you notice a security vulnerability in the bank, not only is reporting it or informing their other customers likely to get you arrested, they're not even required to let you remove your valuables from the bank. (Try to get Equifax to delete your data.)

Then, the scope of what's considered "hacking" extends far beyond things that cause actual harm. The sort of things investigative reporters have long done to humans with a telephone are a felony if you do them to computers with an internet connection, even if no "harm" is done to the target other than to reveal their misconduct to the world.

Then, the penalties are unconscionable. If an investigative reporter goes past the "employees only" door to take some pictures, that's illegal, but the most common result is only that they'll be removed from the property, and even if charges are pressed it's only a misdemeanor in most places. Do the same thing with a computer and you can expect federal prison. It discourages people from testing the fences, even though you want them to.


This comment is kind of eye-opening for me: so most people understand instances of software as manufactured tangible items that have manufacturing variances!

You know, software is more like criminal law interpreted by an instant-judging robotic courthouse, so you absolutely don’t want bad precedents and loopholes. Or do we? Tests are sparse events, but in real-world applications those cases become dense and miserable...


In a sense, corporate whistleblowing is the distribution of corporate secrets, which would be illegal if it weren't for the protections. Is that protection bad? I don't think so. We can just look at what Volkswagen did.

Point being, I'm sure there are companies out there that do the same, but digitally, either willingly or accidentally. Reporting that, however, is illegal in oh so many jurisdictions, and that's not okay.


I've been coming to a similar conclusion. In recent consulting memory, I have seen enterprises with thousands of EoL Windows 2008 servers and vulnerabilities that won't get patched until a catastrophic event forces the businesses to replace them.

There is a trope about suppressing forest fires that has come around in a few different areas: by suppressing the small fires and controlled burns that would clear out the scrub, the only fires that do occur are the ones that consume the fuel-intensive old-growth forests, creating super infernos. Nassim Taleb talks about it in the context of "suppressed volatility," where most attempts to reduce volatility in complex systems just cause risk to bottle up, as though there were a law of "conservation of risk" at play.

We don't have "more hackers" right now because the incentives just aren't worth it. Oddly, enforcement of hacking laws and high-profile busts of worm/malware writers have had the effect of discouraging the low-level hacking that would clear the scrub out of corporate networks, leaving a lot of super-vulnerable and unmaintained infrastructure tucked away behind forgotten firewalls.

I have been trying to figure out how to get positioned, product-wise, for this next extreme event, which I think will be caused by all these crappy security controls that removed the low-hanging fruit while letting massive data risks age into super fuel for a tail-risk event. Gaming out what companies will need when all that old infrastructure in effect goes up in flames mainly yields: migration tools for moving legacy code into containers that can be lifted into cloud environments in a forced move, un-sticking vendor lock-in; BYOCredentials; some kind of liability mitigation for all the PII it will compromise; and customer alternatives to locked-in dependencies; etc.

From a product perspective, I have a contrarian view where I'm less interested right now in what people in companies say they want, and more in how to be positioned for the Forcing Function event that their risk exposure has them set up for. Enterprise demand is a synthetic effect of political positioning and leverage, and not user desire that startups and consumer products respond to.

Forest Fires: https://www.nature.com/articles/s41467-020-15961-y

Taleb riffing on a similar theme via FSBlog: https://fs.blog/2012/11/learning-to-love-volatility/


Is there somewhere a list of insecure patterns in versions of popular frameworks? This is how I'd organize results. Instead of treating each implementation separately, work on finding insecurities of sites that do have white-hat-friendly policies. Any findings can be reported, patched, and documented against the software/version that was mis/used.

Every implementor then only has to review/subscribe to the versions of the software they use. Any custom software would still have to be handled differently, but I presume that patterns of use of frameworks and libraries either make up the larger portion, or will trend in that direction.


Really, how many hackers are needed for everyone to stop saving passwords in plaintext?


If they are using plaintext passwords, I wouldn't be surprised if their next step was using MD5 or SHA1 or self-made crypto...


This is exactly the place where good defaults and good tutorials are incredibly important. PHP is notorious for this mistake.
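
As a concrete illustration of what a better default looks like, here is a minimal sketch of salted, adaptive password hashing using Spring Security's BCryptPasswordEncoder (Java, since Spring comes up further down the thread; PHP's built-in password_hash serves the same purpose, and the class and method names below are made up for illustration):

  import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

  // Sketch: store only a salted, adaptive hash, never the raw password.
  public class PasswordStorageSketch {
      // Work factor 12 is a reasonable default; raise it as hardware gets faster.
      private static final BCryptPasswordEncoder ENCODER = new BCryptPasswordEncoder(12);

      // Called at registration; the returned string is what goes in the database.
      public static String hashForStorage(String rawPassword) {
          return ENCODER.encode(rawPassword); // salt is generated and embedded automatically
      }

      // Called at login; compares the submitted password against the stored hash.
      public static boolean matches(String rawPassword, String storedHash) {
          return ENCODER.matches(rawPassword, storedHash);
      }
  }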


Or better, to stop using passwords altogether!


Unless the framework automatically does this, this will never stop. So many tech companies are really a guy who did a Udemy course or read a Dummies book.

I interned at a company that did 10 million a year with a website built with nothing but the w3schools tutorials. You could dump the database from the login form, as it was started by a guy who had just read those tutorials.

We weren't allowed to fix it either as the owner didn't want a system he didn't understand.


It’s not just dumb people. All systems need good defaults to prevent security errors. The world’s most skilled C++ programmers collapse as soon as a fuzzer is thrown at them.


Hackers are/should be the short sellers of the Internet.


The Internet is too paywalled: We need more archive links

https://archive.is/dn6fN


hahahaha cool how can one be a good hacker?


I'm not qualified to answer that question as I'm not a hacker myself. But apparently Hacker101 is a great resource (https://www.hackerone.com/hacker101). I have gone through some of the CTFs there myself and can recommend them.


I think generally being a hacker is a mindset. Being able to answer a question like "how can one be a good hacker" is fundamental to that mindset.

I don't mean that in a dismissive way. I mean that to hack, you are tinkering and figuring things out with your own mind rather than asking others.


Read/education

I think I could hack. I've been hacked, I've read about how people hack, and I have a pretty good understanding of how attempts are made. Basically, you learn the exploits that have historically happened.

Worst case if you need to do something, you can Phish.


For all you hacker types out there. I'm a software engineer who knows very little about security. Any good resources to learn it?

Currently my company just assumes that Spring handles all those issues for us.


- Validate ALL data that comes from external sources, even if you think it could not possibly be changed unexpectedly. External sources include things like user input, configuration and session files, and even things like dynamic libraries if you care enough.

- Sending data to an external system that processes commands, including destructive ones like DROP DATABASE? Verify what you're sending doesn't contain commands you don't want to run, each time, every time (see the parameterized-query sketch at the end of this comment).

- If you are programming in C or a derived language, and using expressions that will be involved in array indices or pointer arithmetic, make sure the expression falls within the acceptable range before using the resulting data. Since NULL or 0 is an invalid pointer value, it should never reach code that dereferences pointers.

- Should a sensitive function only be called from certain other functions? Should a sensitive variable only be modified/accessed from certain other functions? Enforce that.

- Someone will eventually find anything that's not documented, so don't rely on that for security. Don't expose an interface (especially not through any network accessible method) without an authentication or verification mechanism if it's not OK for it to be 100% public.

- Define everything. A function call should never have an unexpected result. Invalid inputs should cause the function to return errors or throw exceptions.

The above is a good percentage of it I think.
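
To make the second point above concrete, here is a minimal JDBC sketch (the table and column names are invented for illustration): instead of concatenating user input into the SQL text, bind it as a parameter so the driver can never interpret it as a command.

  import java.sql.Connection;
  import java.sql.PreparedStatement;
  import java.sql.ResultSet;
  import java.sql.SQLException;

  public class UserLookup {
      // Unsafe: email = "x'; DROP TABLE users; --" would become part of the SQL:
      //   String sql = "SELECT id FROM users WHERE email = '" + email + "'";

      // Safer: the driver sends `email` as data, never as SQL.
      public static Long findUserId(Connection conn, String email) throws SQLException {
          String sql = "SELECT id FROM users WHERE email = ?";
          try (PreparedStatement ps = conn.prepareStatement(sql)) {
              ps.setString(1, email);
              try (ResultSet rs = ps.executeQuery()) {
                  if (rs.next()) {
                      return rs.getLong("id");
                  }
                  return null;
              }
          }
      }
  }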


Telling people to "validate all data" is like telling them "not to have security bugs". It's not helpful advice. Validate for what? Length? SQL metacharacters? Javascript notation? HTML? Authentication? Concurrent submission? The list of things to consider is essentially a list of practically all vulnerabilities.


I know a lot of forms out there that will accept any input. The standard to improve upon here is often rock bottom.

A lot of developers do not know that data should be validated.


The point isn't that you shouldn't validate data. It's that telling developers to validate data doesn't help them. It begs the question of "how" to validate data, and the answer to that question is so big that it encompasses most of software security. Meanwhile, left to their own devices after that one instruction, developers come up with silly input validation schemes, like password fields that don't accept SQL metacharacters.


While hard to apply, the correct answer is pretty obvious: identify the invariants of your system, and make sure your inputs abide by those invariants. To do this, there are two complementary approaches: you can parse your input into something intelligible and safe for your system, or you can reject invalid input.

The hard part is identifying the invariants of your system. But (and I know I'll sound like Dijkstra), that hard part is also the minimum bar. If you don't know what it takes for your system to behave correctly, how can you hope to build a correct (and therefore secure) system to begin with?
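
One way to make "identify the invariants" concrete is to parse raw input into a small domain type whose factory enforces them, so the rest of the system never sees an invalid value. A minimal sketch (the Username type and its rules are invented for illustration):

  import java.util.regex.Pattern;

  // Invariant: a username is 3-32 characters of [a-z0-9_], nothing else.
  public final class Username {
      private static final Pattern VALID = Pattern.compile("[a-z0-9_]{3,32}");
      private final String value;

      private Username(String value) {
          this.value = value;
      }

      // Parse once at the boundary; reject anything that violates the invariant.
      public static Username parse(String raw) {
          if (raw == null || !VALID.matcher(raw).matches()) {
              throw new IllegalArgumentException("invalid username");
          }
          return new Username(raw);
      }

      public String value() {
          return value;
      }
  }

Code that accepts a Username instead of a String can then rely on the invariant rather than re-checking it everywhere.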

Some people love to say "don't roll your own crypto", with the strong implication that whoever is trying to do it is not competent enough. Well, the same applies to any server exposed to the internet, any moderately interactive website, any network library, any video decoder, any compiler… all those things have the potential to go catastrophically wrong in the face of hostile input, and therefore should be held to similarly high standards.

Here's my advice:

1. Make sure you understand invariants, and why they matter.

2. If you don't, train yourself. It may take years.

3. Validate your inputs (but you already know that by now).

Not helpful? Sorry pal, programming takes time to learn, and I don't have the time to teach.


Security is multi-faceted, so it's hard to know where to start. I like to help people begin by getting them to learn the fundamentals. A lot of security training is about changing your mindset (perspective) on situations.

Two books I like to start this process:

* Threat Modeling by Adam Shostack

* CISSP All-in-One Exam Guide by Shon Harris

You don't need to get a CISSP cert, but the resources and education are generally applicable in most situations.


If you're looking for something more practically related to securing the code you write, the OWASP Top 10 (and OWASP in general) is probably the best place to start. Many modern frameworks like Spring have lots of nice security features baked in, but I've still seen plenty of low hanging Top 10 vulnerabilities in Spring apps -- most often caused by simply failing to use allowlist (formerly known as whitelist) validation on key inputs.

It is astonishing how often people neglect basic programming best practices (input validation, error handling, logging, access control), which in turn leads to security vulnerabilities. My theory is that this is caused more by Agile management forcing developers to cut corners than by developers being ignorant/lazy.

https://owasp.org/www-project-top-ten/OWASP_Top_Ten_2017/Top...
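
As an aside on the allowlist point: for inputs that end up in places a prepared statement cannot protect (sort columns, redirect targets, file names), checking against a small fixed allowlist is usually the simplest fix. A minimal sketch, with made-up column names:

  import java.util.Set;

  public class SortColumnAllowlist {
      // Only these exact values may ever reach the ORDER BY clause.
      private static final Set<String> ALLOWED = Set.of("created_at", "name", "price");

      // Returns a safe column name, falling back to a default for anything else.
      public static String sanitize(String requested) {
          return ALLOWED.contains(requested) ? requested : "created_at";
      }
  }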

Threat modeling can be a good practice to learn, because it gets you into the habit of thinking about how you could hack each new thing you're developing.

I don't think the CISSP is a good recommendation in this person's case, unless they plan on becoming a cybersecurity manager.


That's not my point in suggesting the CISSP. The book I recommended takes the reader through the different OSI layers for networking and security, a basic intro to threat modeling, and other stuff that is fundamental to security. One can always skip the managerial stuff, but that doesn't mean the CISSP book is not a good recommendation.

Also, having a developer understand the value and need for code security from the perspective of a security person is important to the overall success of an infosec program. Otherwise both engineering and infosec are going to be grating on each other.


Ignore threat modeling and CISSP, both of which are industry kabuki dances. The rest of the advice in this comment is fine.



