
The Internet is too unsafe: We need more hackers - yakkomajuri
https://medium.com/@yakko.majuri/the-internet-is-too-unsafe-we-need-more-hackers-c9742fc1a03b
======
bamboozled
I think being a hacker, phreaker or cracker became so illegal it really scared
people off experimenting and learning about computer security the practical
way.

Now there are obviously great security people around, but the path one needs
to take to become a professional in the field seems to be more theoretical and
rubber stamped.

For me, this is why I focused on software engineering rather than network
security. I get paid to hack without worrying about the police knocking on my
door for experimenting and exploring.

~~~
tptacek
The opposite is true; it is much easier today to engage with computer security
legally than it was in the 1990s. And the consequences for crossing the line
are generally smaller.

It's easier for lots of reasons, many of them banal, like the fact that you
can get a modern Unix system on your laptop in minutes for free, and tear it
down and bring it back up on demand with virtualization, or the fact that
everyone has Internet access and nobody has to bounce through an X.25 Internet
gateway or an outdial to get to IRC.

It's also easier because huge amounts of vulnerability research have been
published since the 1990s. The cutting edge of vulnerability research in the
1990s is almost unbelievably primitive compared to what's available in open
school curricula now, let alone refereed vulnerability research venues. In the
1990s, basic details about things like stack overflows or even temp file races
were _permanently_ embargoed and made available only to large-company system
operators. Most of what's disclosed today _by vendors_ would have been decried
as unethical by the security community in 1995.

People that want to do active vulnerability research today can participate in
thousands of bug bounty programs, none of which existed in the 1990s. And,
despite what the prevailing sentiment on HN would indicate, I think it's
actually a bit rarer for people to be prosecuted for undirected, non-monetized
hacking than it was in the '90s.

~~~
bamboozled
Yeah I was comparing the situation to circa 2000 onwards. Not around the time
when Linux came out.

I think this sentiment regarding bug bounty programs kind of solidifies my
point: Corporate America and the government have made legal only the type of
hacking they benefit from, while experimentation is off the table.

I’m not so much advocating for a free-for-all, by the way. I think there is a
difference between curiosity and malicious intent.

------
aSplash0fDerp
The current state of "safety" online was actually a masterclass in growth,
trends and thresholds of open platforms. Each iteration could only bear so
much.

In the dialup and early broadband phase, looking "under the hood" and learning
about everything that was there was common practice. It was a 10 course meal
daily for the life-long learners.

Up until 2000'ish, there were only "certain types" of people online, with
trust and integrity being self-regulated and intact.

Between 2000'ish and 2007'ish, we saw substantial growth of broadband and
wifi, expanding the userbase further, while attention was starting to be paid
to internal security. Enter the ad explosion of popups, the occasional virus
exploit and post-dotcom wisdom (/s) of the economists, and the self-regulating
model erodes practically overnight.

The smartphone bubble is where "all types" had access to the vast trove of
resources that we used to call an information superhighway.

Fast-forward to the present and when you open the hood, there are now several
yellow warning stickers (idiot alerts), and all of the internal doors are
starting to be locked as common practice.

It looks like the modern crux of safety is the mix of three (there are more)
fundamental modes of operating open platforms: (almost) full-trust
backends/platforms; self-regulating models and software packages (the bulk of
OSS, Linux, etc.); and the current full-lockdown approach to research,
exploration, and security.

I guess we can officially say "the more the Internet changes, the more
unrecognizable it becomes", but data is still the most valuable asset and will
most likely migrate to "safest path routing", rather than using the "three
billy goats gruff" model that the Internet has become.

------
xwdv
Unfortunately it doesn’t pay well to be a hacker, you’re better off being a
software engineer. So yea, I guess the internet can rot until there’s a way to
do white hat hacking at scale.

I’ve looked into this: transitioning from software engineering into a
cybersecurity niche requires a drastic pay cut. Sure, if you’re the
self-employment type you could try doing bug bounties, but the top 1% make
something like 35k a year on average, and everyone else makes peanuts or
nothing. That is still better than starting a startup, but you're also better
off being a day trader, where 10% of people make a better living with only two
hours of trading each weekday and two hours of analysis and strategizing in
the morning; maybe then you could find time to hack systems just for fun.

The other thing you could do is become some kind of consultant and make better
money consulting companies and running pen tests for them, but if you can do
that you could just as easily make more money building out and consulting on
custom software solutions...

And if you don’t care about the money fine, but eventually someone that does
care about money will come along and do it better than you anyway and eat your
lunch.

Now, to leave this post on a positive note and not become a target for
downvote gangs, I’ll propose this: the way to have more hackers at scale is to
grow every software engineer to be a security conscious engineer by default,
such that they are constantly looking for vulnerabilities throughout
development and deployments to production. It works well because I’d say
engineers spend maybe 4 hours a day actually working and the rest is thinking
about problems, which can happen in parallel with hacking quite nicely.

~~~
yakkomajuri
Great points indeed.

As for your proposition - I agree. A simple way to do this would be to have,
for example, a day a month where the organization's engineers try to break
into their own system.

It's common to write tests and follow best practices here and there, but
developers often won't actually dedicate some serious time to actively try to
break into their own system as an attacker.

I think this could be fun and beneficial for all involved.

~~~
scaryclam
I worked at a company where we did this. It was indeed fun, and we learned
some things about our system that showed up some security issues. It also made
everyone think more about what a curious attacker could do in a short amount
of time.

------
Kednicma
We need better protection for whitehats. Every government is anti-hacker at
some level.

~~~
selectodude
On its face, hacking, even the white-hat kind, is no different from letting
people try to steal from stores to check whether their loss-prevention
functions are working. As far as the government is concerned, people shouldn’t
be trying to steal or hack in the first place.

Obviously, if a company hires somebody to try and steal to check their LP, or
if a company hires somebody to break into their network, it’s different (and I
know there have been issues with one part of a company trying to prosecute
somebody that another part tried to hire, but that’s just stupid corporate
governance).

~~~
yakkomajuri
Hm, yes and no. Ethical hacking is bounded by "scope". Organizations specify
exactly what is allowed and what isn't. Also, the general guideline is that
you must stop once you reach data.

An ethical hacker will find out your door is open and warn you without going
inside. If they go in and steal, they're not a white hat hacker.

And if you have an open door with something interesting inside, a black hat
hacker will eventually find it anyway.

~~~
wizzwizz4
An ethical cracker might take a CD box with one of those radio tags and wave
it at the bleepy scanner thing to check if the scanner's working, but they
wouldn't walk out the door with a fridge under their coat.

------
motohagiography
I've been coming to a similar conclusion. In recent consulting memory, I have
seen enterprises with thousands of EoL Windows 2008 servers and
vulnerabilities that won't get patched until a catastrophic event forces the
businesses to replace them.

There is a trope about suppressing forest fires that has come around in a few
different areas: when the small fires and controlled burns that would clear
out scrub are suppressed, the only fires that do occur are the ones that
consume the fuel-intensive old-growth forests, creating super infernos. Nassim
Taleb talks about it in the context of "suppressed volatility," where most
attempts to reduce volatility in complex systems just cause risk to bottle up,
as though there were a law of "conservation of risk" at play.

We don't have "more hackers," right now because the incentives just aren't
worth it. Oddly, enforcement of hacking laws and high profile busts of
worm/malware writers have had the effect of discouraging the low level hacking
that would clear the scrub out of corporate networks - leaving a lot of super
vulnerable and unmaintained infrastructure tucked away behind forgotten
firewalls.

I have been trying to figure out how to get positioned, product-wise, for the
next extreme event, which I think will be caused by all these crappy security
controls that removed the low-hanging fruit while letting massive data risks
age into super fuel for a tail-risk event. Gaming out what companies will need
when all that old infrastructure in effect goes up in flames mainly yields:
tools for migrating legacy code into containers that can be moved into cloud
environments in a forced move; un-sticking vendor lock-in; BYO-credentials;
some kind of liability mitigation for all the PII it will compromise; and
customer alternatives to locked-in dependencies.

From a product perspective, I have a contrarian view where I'm less interested
right now in what people in companies say they want, and more in how to be
positioned for the Forcing Function event that their risk exposure has them
set up for. Enterprise demand is a synthetic effect of political positioning
and leverage, and not user desire that startups and consumer products respond
to.

Forest Fires:
[https://www.nature.com/articles/s41467-020-15961-y](https://www.nature.com/articles/s41467-020-15961-y)

Taleb riffing on a similar theme via FSBlog:
[https://fs.blog/2012/11/learning-to-love-volatility/](https://fs.blog/2012/11/learning-to-love-volatility/)

------
karmakaze
Is there somewhere a list of insecure patterns in versions of popular
frameworks? This is how I'd organize results. Instead of treating each
implementation separately, work on finding insecurities of sites that do have
white-hat-friendly policies. Any findings can be reported, patched, and
documented against the software/version that was mis/used.

Every implementor then only has to review/subscribe to the versions of the
software they use. Any custom software would still have to be handled
differently but I presume that patterns of use of frameworks and libraries
either makes up a larger portion, or will trend in this direction.
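One way to sketch such a catalog is a small registry keyed by framework, which each implementor can filter down to the software they actually run. This is purely illustrative; the entry, version range, and URL below are made up, not real findings.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InsecurePattern:
    framework: str            # e.g. "spring", "django" (lowercase key)
    affected_versions: str    # simplified version-range spec
    description: str          # short summary of the misuse pattern
    report_url: str           # where the finding was documented

# Hypothetical registry entry -- all values are placeholders.
REGISTRY = [
    InsecurePattern(
        framework="spring",
        affected_versions="<9.9",
        description="example misuse pattern (placeholder)",
        report_url="https://example.com/report",
    ),
]

def patterns_for(framework: str, registry=REGISTRY):
    """Return only the patterns relevant to one framework,
    so an implementor can review/subscribe to just what they use."""
    return [p for p in registry if p.framework == framework]
```

A real version of this would need a proper version-range matcher and a moderation process for submitted findings, but the subscribe-by-framework shape stays the same.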

------
rini17
Really, how many hackers are needed for everyone to stop saving passwords in
plaintext?

~~~
diamondo25
If they are using plaintext passwords, I wouldn't be surprised if their next
step was using MD5, SHA1, or self-made crypto...

~~~
Avamander
This is exactly the place where good defaults and good tutorials are
incredibly important. PHP is notorious for this mistake.
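As one illustration of the kind of good default being asked for here: instead of plaintext, MD5, or SHA1, store passwords with a random salt and a memory-hard KDF. A minimal sketch using only Python's standard library (the function names and parameter choices are mine, not from any framework):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a fresh random salt using scrypt (stdlib, 3.6+)."""
    salt = os.urandom(16)
    # n/r/p here are common illustrative values (~16 MB of memory).
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

In practice a dedicated library (or the framework's built-in password hasher) is the better default, since it also handles parameter upgrades over time.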

------
dsalzman
Hackers are/should be the short sellers of the Internet.

------
wcerfgba
The Internet is too paywalled: We need more archive links

[https://archive.is/dn6fN](https://archive.is/dn6fN)

------
monanjunk
hahahaha cool how can one be a good hacker?

~~~
emerged
I think generally being a hacker is a mindset. Being able to answer a question
like "how can one be a good hacker" is fundamental to that mindset.

I don't mean that in a dismissive way. I mean that to hack is to tinker and
figure things out with your own mind rather than asking others.

------
StupidPeople
For all you hacker types out there. I'm a software engineer who knows very
little about security. Any good resources to learn it?

Currently my company just assumed that Spring handles all those issues for us.

~~~
typicalrunt
Security is multi-faceted, so it's hard to know where to start. I like to help
people begin by getting them to learn the fundamentals. A lot of security
training is changing your mindset (perspective) on situations.

Two books I like for starting this process:

* Threat Modeling: Designing for Security by Adam Shostack

* CISSP All-in-One Exam Guide by Shon Harris

You don't need to get a CISSP cert, but the resources and education are
generally applicable in most situations.

~~~
banads
If you're looking for something more practically related to securing the code
you write, the OWASP Top 10 (and OWASP in general) is probably the best place
to start. Many modern frameworks like Spring have lots of nice security
features baked in, but I've still seen plenty of low hanging Top 10
vulnerabilities in Spring apps -- most often caused by simply failing to use
allowlist (formerly known as whitelist) validation on key inputs.
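The allowlist approach mentioned above can be shown with a short, framework-agnostic sketch: reject anything not explicitly permitted, rather than trying to enumerate "bad" values. The column names and username rules below are made up for illustration.

```python
import re

# Hypothetical set of sortable columns a query endpoint might accept.
ALLOWED_SORT_COLUMNS = {"name", "created_at", "price"}

# Hypothetical username policy: 3-20 chars, lowercase letters, digits, underscore.
USERNAME_PATTERN = re.compile(r"[a-z0-9_]{3,20}")

def validate_sort_column(value: str) -> str:
    """Allowlist check: only known column names pass, so the value can
    never smuggle SQL fragments into an ORDER BY clause."""
    if value not in ALLOWED_SORT_COLUMNS:
        raise ValueError(f"unsupported sort column: {value!r}")
    return value

def validate_username(value: str) -> str:
    """Pattern allowlist: the whole string must match the permitted shape."""
    if not USERNAME_PATTERN.fullmatch(value):
        raise ValueError("username must be 3-20 chars of [a-z0-9_]")
    return value
```

The key property is that an unexpected input fails closed by default; a denylist fails open whenever the attacker finds a value nobody thought to block.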

It is astonishing how often people neglect basic programming best practices
(input validation, error handling, logging, access control) which in turn
leads to security vulnerabilities. My theory is that this is caused more by
Agile management forcing developers to cut corners than by developers being
ignorant or lazy.

[https://owasp.org/www-project-top-ten/OWASP_Top_Ten_2017/Top_10-2017_Top_10.html](https://owasp.org/www-project-top-ten/OWASP_Top_Ten_2017/Top_10-2017_Top_10.html)

Threat modeling can be a good practice to learn, because it gets you into the
habit of thinking about how you could hack each new thing you're developing.

I don't think the CISSP is a good recommendation for this person's case,
unless they plan on becoming a cybersecurity manager.

~~~
typicalrunt
That wasn't my point in suggesting the CISSP. The book I recommended takes the
reader through the different OSI layers for networking and security, a basic
intro to threat modeling, and other stuff that is fundamental to security. One
can always skip the managerial material; that doesn't mean the CISSP book is
not a good recommendation.

Also, having a developer understand the value and need for code security from
the perspective of a security person is important to the overall success of an
infosec program. Otherwise both engineering and infosec are going to be
grating on each other.

