
Learning from a Year of Security Breaches - arice
https://medium.com/starting-up-security/learning-from-a-year-of-security-breaches-ed036ea05d9b
======
mag00
Hi, I wrote this!

To continue a discussion:

    
    
- How does your engineering team track new "debt" after releasing code? (If at all, and why not?)
- Do you pay anyone for centralized logging, or wish you didn't? Are you making it useful?
- Do you feel like your company is good at managing access when hiring/firing people?
    

Otherwise thanks for any feedback, I enjoy writing these!

~~~
haser_au
Can only speak about my corner of a very large organisation:

- Technical debt in custom-coded solutions is a known issue across our
organisation. The new strategy is to move to market solutions, thereby
outsourcing the risk to organisations with (hopefully) better code management
than we have. For my corner, we don't measure technical debt accurately
enough for my liking.

- Yes, we pay for and use centralised logging. We've actually been through two
solutions, and are now moving to a third due to various factors (cost,
integrations, speed, out-of-the-box metrics). Integration with the centralised
logging system is part of our Request for Tender marking criteria.

- Relatively good at disabling access after someone leaves. We integrate as
much as possible with a central repository. It's just the outlier systems that
tend to outlast someone's time in the organisation. Access to critical systems
is absolutely shut down within 24 hours of a leaver departing (usually
immediately if they're a bad leaver).

Edit: Formatting

~~~
BuuQu9hu
> (hopefully)

I hope you are auditing the code of those external orgs.

~~~
haser_au
When you use SaaS products, auditing the code is not a service they offer. You
have to rely on certifications from independent certifying organisations, etc.

------
acidbaseextract
Where do I start on centralized logging? I'm primarily an application
developer, deployment isn't my strong suit. My hair is on fire at my current
startup. There's a ton to do, we're trying to launch several new major efforts
in January. What's a good plug and play solution that I don't have to think
about?

Are there hosted installs of Elasticsearch/Logstash/Kibana? Is ELK even what I
want?

Every time I start looking at centralized logging stuff it seems like a rabbit
hole of problems we're too small to be worrying about, stuff that's _not
shipping features on my app_.

~~~
lvh
You have a lot of decent options. You could do a lot worse than ELK. If you're
on AWS, you can get hosted Elasticsearch. It comes out of the box with
Logstash you can hook up to DynamoDB, and it also does Kibana out of the box.
There are a number of other vendors; but there are decent reasons for keeping
your logs as close as possible.

CloudWatch works fine too. It comes integrated with AWS services out of the
box. It can be more annoying to get your logs into it than ELK (the latter
seems more popular overall). Its alerting and AWS CLI integration are pretty
slick, though.

You should also go turn on CloudTrail right now. It lets you automatically log
side-effectful API calls. It is not a replacement for a centralized logging
pipeline, but it's great high-signal data to put into one.
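To make the "high-signal" point concrete, here's a minimal sketch (my own illustration, nothing official; the event names are just a few I'd personally watch, not an exhaustive or canonical list) of filtering CloudTrail's JSON records for things worth a human look before shipping them into a pipeline:

```python
import json

# Event names that usually deserve an alert; illustrative, not exhaustive.
SUSPICIOUS_EVENTS = {
    "DeleteTrail", "StopLogging",        # someone tampering with CloudTrail itself
    "CreateAccessKey", "CreateUser",     # new credentials appearing
    "AuthorizeSecurityGroupIngress",     # firewall holes being punched
}

def flag_records(cloudtrail_log: str):
    """Yield (event name, caller) pairs for records worth investigating."""
    for record in json.loads(cloudtrail_log).get("Records", []):
        if record.get("eventName") in SUSPICIOUS_EVENTS:
            caller = record.get("userIdentity", {}).get("arn", "unknown")
            yield record["eventName"], caller
```

You'd run something like this over the JSON files CloudTrail drops into S3 and route the hits to wherever your alerts go.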

I appreciate that your complaint (totally valid!) was "this is a rabbit hole",
and I just gave you two options, and that might not help your perception that
it's a rabbit hole. If you find yourself paralyzed by choice, either choice is
much better than deferring the choice! Just pick one. Heck, if you can't pick,
let me help: pick AWS hosted Elasticsearch.

A lot of people (also in the security space) like Splunk. I find it annoying
to deploy (more than once I've heard rsyslog-in-front-of-forwarders described
as the canonical deployment method just for ingesting syslog) and overpriced.
YMMV.

Disclaimer: shameless plug! You're not the only one with your hair on fire.
One of the first things we're doing for Latacora customers is setting up a
centralized logging pipeline.

~~~
tptacek
I've had good luck with Cloudwatch and, if you're on AWS, I'd recommend it
over any other hosted log system (with the possible suggestion of a more
elaborate ELK setup that you build yourself).

The trick to Cloudwatch is --- like most AWS services --- never using the web
UI.

~~~
lvh
That's a good point! If you have someone consuming it that wants a (shared)
web UI, you want Kibana. If they prefer to consume their text in a terminal
and are fine with typing `aws logs` a bunch, CloudWatch is fine (and probably
a little less twiddly than ELK).

~~~
emfree
These are such great comments, thanks for sharing your insights. For folks
looking for other options, I'd also mention
[https://honeycomb.io](https://honeycomb.io), perhaps the most promising
newcomer in this space. It's essentially Facebook's Scuba for the rest of us.

------
tptacek
This is the best security article I've read in a long time. If you're at a
startup right now, drop most things and take a few minutes to read it
carefully.

------
bm98
Ugh. A good and scary reminder of what's lurking around the corner for any of
us at any time - including holidays and vacations (Linode's holiday attack
last December comes to mind). IMHO, the emotional impact of breaches on the
staff who respond to them is under-discussed. The author touches on it here:

> The discovery of a root cause is an important milestone that dictates the
> emotional environment an incident will take place in, and whether it becomes
> unhealthy or not.

> A grey cloud will hover over a team until a guiding root cause is
> discovered. This can make people bad to one another. I work very hard to
> avoid this toxicity with teams. I remember close calls when massive blame,
> panic, and resignations felt like they were just one tough conversation
> away.

------
mikeyk
Had the pleasure of working with Ryan when he was at FB--he's one of the best.

~~~
atmosx
Were you part of the 'Red Team' incidents[1]? I can only imagine the panicked
sysadmins running around like crazy, but joking aside, this is the _best_ way
I've seen to train a team's incident response.

[1] [https://medium.com/starting-up-security/red-teams-6faa8d95f6...](https://medium.com/starting-up-security/red-teams-6faa8d95f602#.kfudlxkgf)

------
jonstewart
It's interesting to see press leaks highlighted here as a pattern for insider
threat. I don't doubt the author that this is so for the limited scope of
organizations considered (SFBA tech companies), but I've worked on several
insider cases, had insight into many more, and it's almost always an employee
or ex-employee, with an axe to grind, taking trade secret information to a new
job at a competitor. In many instances, the competitor has no idea and is
pissed when they find out.

One piece of advice that I'd give out with such cases is to listen to your
Spidey Sense. A lot of organizations will say, after the fact, "well...
something didn't seem right with Bob...". If you sense something isn't right,
prepare to secure evidence and analyze it. Don't put IT assets back into
circulation if there's doubt, and don't sit on it.

------
lvh
This is solid advice. To illustrate a little based on my own experiences and
goals this year:

- Yes, centralized logging is the biggest thing. What you put into it
matters; queryability matters; but nothing matters as much as having that
centralized logging pipeline to begin with. Once you have it, you can start
adding other relevant metadata, like host config states, API calls, et cetera.

- Giving employees a budget to buy the device they want is probably a better
idea than BYOD. Strong password policies still matter. If it's BYOD, you
probably still want to bring the device into policy. That can include physical
rules (only do work on the VPN or from the office) and software ones (you
can use any device you want, but it has to be running our osqueryd or
whatever). Unfortunately, visibility becomes a double-edged sword: there are
good legal and ethical reasons for not wanting to see everything on an
employee's laptop. (Overall, I think BYOD is a bad idea for most companies.)

- 2FA is pretty cool. It doesn't just address the usual "bad/compromised
password" model -- it also typically makes it a lot harder for employees to
mismanage their credentials (e.g. re-use the same SSH keys and have their
personal box be compromised). For some reason, having it around seems to
remind developers that you can make users re-authenticate for
important/unusual actions -- you don't have to count on the ambient
authority of a session cookie.

- We'd all like to imagine that we're going to be attacked by space-alien
0day ninjas. Realistically, the main vector is an employee (rogue or confused
deputy). Trainings are boring and don't work. Signature-based detection gets
outdated pretty quickly. I've done a little work on faster analysis tools -- I'm
hoping we get a lot better at unobtrusively protecting people from even
spearphishing in the next few years. (The tools we're building at Latacora can
beat a lot of attacker tactics right now, but I think we have an arms
race ahead of us. Boring domain generation algorithms still aren't detected by
most organizations, so there's not a lot of evolutionary pressure.)

- I have no idea if we'll get better at quantifying metrics for debt and
security risk. I did a little research into this, and it's a wide-open
field. You can get decent high-level reports with a "DEFCON number", but most
of these models are not sophisticated in the sense you'd expect actuarial
tables to be. And that's what they should be! It's revenue-at-risk!
Fortunately, step one here is getting all of that data into that centralized
logging pipeline, and security professionals seem to mostly agree that's what
you do first, so hopefully we get better here.
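To illustrate the re-authentication point above, here's a toy sketch (the names and the five-minute window are invented for illustration) of step-up auth: a destructive action demands a fresh authentication instead of trusting the ambient session:

```python
import time

REAUTH_WINDOW = 5 * 60  # seconds; sensitive actions need auth fresher than this

class Session:
    """Tracks when the user last actively authenticated (e.g. passed 2FA)."""
    def __init__(self):
        self.last_authenticated = None

    def record_authentication(self):
        self.last_authenticated = time.monotonic()

    def auth_is_fresh(self) -> bool:
        return (self.last_authenticated is not None
                and time.monotonic() - self.last_authenticated < REAUTH_WINDOW)

def delete_all_backups(session: Session) -> str:
    # Don't trust the ambient session cookie for destructive actions:
    # force a fresh 2FA prompt if the last authentication is stale.
    if not session.auth_is_fresh():
        raise PermissionError("re-authentication required")
    return "deleted"  # stand-in for the real destructive work
```

In a real app the `PermissionError` would redirect to your 2FA prompt rather than fail outright.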

------
coldcode
Logging everything is a great idea, but only if you read the log data. Target
installed a system to monitor for certain kinds of security attacks, which
wrote warnings to their log files. The logging was turned off because the high
number of warnings was cluttering up the logs. Of course, the logging was
telling them they were being hacked, which they ignored for months, leading to
all sorts of business disasters.

------
weld
Great article! For those interested in security debt and how it relates to
startups, I wrote this in 2011:
[https://www.veracode.com/blog/2011/02/application-security-d...](https://www.veracode.com/blog/2011/02/application-security-debt-and-application-interest-rates)
and presented it that year:
[https://www.youtube.com/watch?v=MKdiiXgvz_U](https://www.youtube.com/watch?v=MKdiiXgvz_U)
This predates by a year the referenced security debt presentation, which has
much of the same material uncited.

------
user5994461
> I wasn’t roped into a single intrusion this year at any companies with
> completely role driven environments where secrets were completely managed by
> a secret store.

> This can either mean one of a few things: These environments don’t exist at
> all, there aren’t many of them, or they don’t see incidents that would
> warrant involving IR folks like myself.

What are these secret stores? Do they exist?

~~~
lvh
In general, secret stores "manage secrets so that you don't have to". That can
mean a few things, depending on who's using the term.

Sometimes, it's as simple as a shared password store (I've used one powered by
GPG, for example). This is better than YOLO password policy, but not by much:
humans still see individual keys.

If you want to be really fancy, you authenticate the human and then decide
what they get to do, in a centralized fashion. This is often tricky to do,
because you either don't have the funds to do that if you're small, or you
have too many services to interact with if you're big. (Many organizations get
pretty close -- I'm told that the DoD pretty much authenticates everything
with smart cards, for example.)

Sometimes, it means a more automated system where software authenticates
instead of a human, and it gets e.g. a certificate. Usually it's still the
same certificate every time, though; so the main difference is that it's a
machine rather than a human authenticating.

Sometimes, it means an HSM (hardware security module). These are secure
physical devices that perform cryptographic operations for you, so that the
key stays on the device.
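To sketch that "authenticate, then decide what they get" model (purely illustrative; real stores, especially HSM-backed ones, are far more involved), here's a toy store that hands out short-lived tokens scoped to a single secret instead of handing out the secret's long-lived key material directly:

```python
import secrets
import time

class SecretStore:
    """Toy secret store: authenticate the caller once, then grant short-lived,
    narrowly-scoped tokens rather than the long-lived secrets themselves."""

    def __init__(self):
        self._secrets = {}   # secret name -> secret value
        self._grants = {}    # token -> (allowed secret name, expiry time)

    def put(self, name, value):
        self._secrets[name] = value

    def issue_token(self, name, ttl=60):
        # In real life this is where you'd check the caller's identity
        # (smart card, instance role, etc.) and policy before granting.
        token = secrets.token_hex(16)
        self._grants[token] = (name, time.monotonic() + ttl)
        return token

    def read(self, token):
        name, expiry = self._grants.get(token, (None, 0))
        if name is None or time.monotonic() > expiry:
            raise PermissionError("invalid or expired token")
        return self._secrets[name]
```

The point being: a stolen token is scoped and expires, which is a meaningfully smaller loss than a stolen master credential.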

~~~
user5994461
You still need a secret to access the secret store... so steal that secret,
then steal the secrets in the store.

I fail to see how it's more secure. (Though I can understand that it's less
bad than a YOLO policy.)

> Many organizations get pretty close -- I'm told that the DoD pretty much
> authenticates everything with smart cards, for example.

I've been at a place with RSA SecurID (smart card and OTP) + active directory
account as SSO authentication for everything (use one or both for 2FA). It was
nice and well done.

~~~
lvh
You made that point elsewhere in the comments, and I replied to it there. For
the benefit of other people wondering why a secret store _isn't_ just robbing
Peter to pay Paul:
[https://news.ycombinator.com/item?id=13224802](https://news.ycombinator.com/item?id=13224802)

------
rokosbasilisk
I'm surprised credential theft is still the lowest-hanging fruit.

I thought banks had solved a lot of that.

~~~
kevinr
Banks are in the business of managing financial risk for their customers, and
they have enough money to eat a lot of risk before it becomes a problem for
them. Other business models with less money in them do not have the same kind
of resources.

