
Setting Up Your Tech on the Assumption You’ll Be Hacked - rbanffy
https://www.nytimes.com/2018/10/03/technology/personaltech/hacking-protection-passwords.html
======
fghtr
_My setup is to basically have two systems: a work laptop, on which I try to
keep everything professional so that if and when it is hacked the damage will
be minimal, and then a second laptop, which I use for anything personal and
never connect to my work system._

Sounds like a perfect use case for Qubes OS.

~~~
justincormack
Partitioning onto two physical computers seems the better strategy. It may make
sense to use Qubes on the work one, but just isolating risks physically and
partitioning your life is safer.

------
9wzYQbTYsAIc
Things like the Intel ME vulnerabilities and some of the fancy audio-based or
RF-based attacks really make that assumption important. Maybe a safer
assumption to make: any computer can be hacked at any time.

------
njsubedi
IMHO security is a primary concern only for the founders/lead engineers in the
early days. Once a product reaches hundreds of thousands of users, there is
little any one person can do about security: there will be several managers
and dozens of developers and testers. It's pretty easy to hire someone who
doesn't really care about your company or product and ruins a lot of your hard
work. I might be wrong, but many breaches have occurred because someone working
on a small module made a laughable commit and it slipped past a lousy tester.

~~~
jchw
Strong code review, enforcing good security practices (like sanitizing user
input), unit testing, fuzz testing, auditing, logging, layered security and
more are all things you can implement to mitigate risk. Obviously there's a
trade-off, and almost anything that matters gets hacked eventually, but it
will show if you did your homework: you will know before your customers do,
and hopefully you will at least have limited the damage.
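
Parameterized queries are the canonical form of "sanitizing user input" for
SQL. A minimal sketch using the standard library; the table, data and the
`find_user` helper are made up for illustration:

```python
import sqlite3

# In-memory DB for illustration; the schema and rows are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn, name):
    # Parameterized query: the driver treats `name` strictly as data,
    # so input like "' OR 1=1 --" cannot alter the SQL statement.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user(conn, "alice"))        # the legitimate row
print(find_user(conn, "' OR 1=1 --"))  # injection attempt matches nothing
```

The key point is that the query text and the data travel separately, so no
amount of quoting tricks in the input can change the statement's structure.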

A disturbing number of serious breaches come from simply running versions of
software known to be insecure.

~~~
ownagefool
Not sure this is really relevant to the article, but:

Protective monitoring - Take that auditing and logging and alert people when
things happen. E.g. is it normal for your production app to try to call out to
the internet? Watch the frequency of a user's actions and the usage of
privileged accounts. Rulesets will be app-specific, but people should actually
try to write them.
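
The two rules above (unexpected outbound calls, heavy privileged-account use)
can be sketched as a tiny rule evaluator. The event shapes, allowed hosts and
thresholds here are invented stand-ins for whatever your log pipeline emits:

```python
from collections import Counter

# Hypothetical allowlist: the production app should only talk to these.
ALLOWED_HOSTS = {"db.internal", "cache.internal"}

def check_events(events, max_root_logins=3):
    """Return alerts for outbound calls to unexpected hosts and for
    unusually frequent root logins. `events` is a list of dicts
    standing in for parsed log records."""
    alerts = []
    root_logins = Counter()
    for e in events:
        if e["type"] == "outbound" and e["host"] not in ALLOWED_HOSTS:
            alerts.append(f"unexpected outbound call to {e['host']}")
        if e["type"] == "login" and e["user"] == "root":
            root_logins[e["source"]] += 1
    for source, n in root_logins.items():
        if n > max_root_logins:
            alerts.append(f"{n} root logins from {source}")
    return alerts
```

In practice these rules live in a SIEM or alerting layer rather than app code,
but the shape is the same: define what "normal" looks like, alert on the rest.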

Rotation of secrets (or better yet, avoid static secrets) - Leavers shouldn't
still hold your root DB creds just because they used them once to set up the
system. If someone hijacks a system, it's better that they get a temporary
token rather than a pile of long-lived credentials.
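
A self-expiring token is the simplest version of "temporary instead of
static". This sketch signs a subject plus expiry with HMAC; in reality you'd
use a secrets manager or an STS-style service, and the key here would never be
hardcoded:

```python
import base64
import hashlib
import hmac
import time

SIGNING_KEY = b"demo-key"  # illustration only; keep real keys in a vault

def issue_token(subject, ttl_seconds=300):
    # The token carries its own expiry, so a leaked copy is only
    # useful for a few minutes.
    expires = int(time.time()) + ttl_seconds
    payload = f"{subject}:{expires}".encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token):
    encoded, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged
    subject, expires = payload.decode().rsplit(":", 1)
    if time.time() > int(expires):
        return None  # expired
    return subject
```

The leaver problem goes away by construction: nobody keeps a credential that
still works next week.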

Zero-trust network - Run every system as if it's connected to the internet.
They should all have auth, auditing, monitoring and layered security. Just
because something is on your internal network doesn't make it secure.
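
In code terms, zero trust means the request handler never branches on "is this
caller internal?". A toy sketch (the handler, header name and token store are
all hypothetical):

```python
def handle_request(headers, source_ip, valid_tokens):
    """Zero-trust sketch: the internal/external distinction is ignored.
    Every caller must present a valid token, and every decision is
    logged for the audit trail. `valid_tokens` stands in for a real
    auth service mapping tokens to identities."""
    token = headers.get("Authorization", "")
    if token not in valid_tokens:
        print(f"DENY {source_ip}: missing or invalid token")
        return 401
    print(f"ALLOW {source_ip} as {valid_tokens[token]}")
    return 200
```

Note that `source_ip` is only ever logged, never used for the allow/deny
decision; a request from 10.x gets exactly the same treatment as one from the
public internet.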

Patch - Systems are like children, but many product teams are like deadbeat
dads who leave once the kid is born. Actually have a support policy that keeps
systems up to date; in larger orgs this should be formalised.

Scan - CVE feeds are available for every popular language and OS for free. You
should probably have SAST, DAST and dependency scanning in your pipeline, and
run it nightly against prod.
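
The core of dependency scanning is just matching pinned versions against an
advisory feed. A toy sketch with a hardcoded advisory dict; real tools such as
pip-audit or OWASP Dependency-Check pull this data from CVE/OSV feeds instead:

```python
# Invented advisory data; the package name and CVE ID are placeholders.
KNOWN_VULNERABLE = {
    ("examplelib", "1.0.0"): "CVE-XXXX-0001 (placeholder, not a real CVE)",
}

def scan(requirements_text):
    """Check a requirements.txt-style blob of `name==version` lines
    against the advisory table and return any findings."""
    findings = []
    for line in requirements_text.strip().splitlines():
        name, _, version = line.partition("==")
        advisory = KNOWN_VULNERABLE.get((name.strip(), version.strip()))
        if advisory:
            findings.append(f"{name}=={version}: {advisory}")
    return findings
```

Wiring something like this into CI (fail the build on findings) is what turns
"CVEs are available for free" into patches actually shipping.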

Use a framework - If it's on the web, you should probably be using a framework
like Django, Rails or Symfony, or whatever seems sensible for the language
you're using. Even your API responses should go through your templating layer.
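
The value of routing everything through the templating/serialization layer is
automatic escaping. A sketch of what frameworks do for you, using only the
standard library (the variable names are illustrative):

```python
import html
import json

user_input = "<script>alert('xss')</script>"

# Hand-rolled response: the input lands in the page unescaped.
unsafe = "<p>Hello " + user_input + "</p>"

# What an auto-escaping template layer does on every interpolation:
safe = "<p>Hello " + html.escape(user_input) + "</p>"

# For API responses, let the serializer build the body rather than
# concatenating strings into "JSON" by hand:
api_body = json.dumps({"greeting": f"Hello {user_input}"})
```

Django, Rails and Symfony templates all escape by default, which is exactly
why going around them for "just this one response" is where injections creep
in.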

Basically, if you assume you're going to get hacked, which is a fair
assumption, you want to:

a) Know that you've been hacked. Think about what a hacker is going to do once
they breach your front door, or come in through a CI pipeline or a bad
dependency.

b) Make recovery as easy as possible (just patch the issue).

c) Limit what can be done with the hacked access. If your leaked access
tokens only give access to the hijacked app, deny and alert when someone
tries to use them against something else.

