
Ask HN: How do you integrate corp security/risk people into agile projects? - motohagiography
With large orgs coming around to agile dev, as developers or product owners, how are you handling the gatekeeper review/approve role of security analysts?

Edit: e.g. How do you get threat intelligence, compliance assessment, and a shareable security architecture to assure customers/stakeholders into a backlog?
======
lvh
I don’t know what you had in mind for “large”, but I can tell you how it’s
working out for us.

- Most customers write design docs/tickets. This helps because we can suggest
alternative approaches early. It’s rare that we tell people that an idea is
fundamentally flawed, but it’s common that we suggest a different
implementation approach or additional safeguards.

- We review individual PRs. Here, we're looking for concrete individual
appsec concerns (or infrasec if it’s a Terraform PR for example). Not the
right place to have overarching product design conversations.

One concern is that there are just too many PRs for a fledgling security team
to review every single one. For this we rely on 2 mechanisms:

1) IC opt-in (they tag us on GitHub via @CLIENT/security when they would like
a set of eyes on something or if a PR is part of a product change we’ve been
involved in)

2) Automated heuristics. These have a hair trigger: lots of false positives.
It’s just stuff like “are you adding middleware?”, “are you adding an
endpoint?”, et cetera. That’s OK, because we don’t make it a gating part of
CI: its job is to turn 100 PRs for us to look at into 10 PRs we can skim to
decide which is the 1 PR that deserves extensive scrutiny.
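A minimal sketch of what such a hair-trigger heuristic might look like, assuming a simple regex scan over the added lines of a diff (the pattern names and regexes here are illustrative, not the commenter's actual rules):

```python
import re

# Hypothetical risky-change patterns, in the spirit of "are you adding
# middleware?" / "are you adding an endpoint?". Deliberately broad:
# false positives are fine because the result only tags reviewers,
# it never gates CI.
RISKY_PATTERNS = {
    "new endpoint": re.compile(r"^\+.*@(app|router)\.(get|post|put|delete)\(", re.M),
    "new middleware": re.compile(r"^\+.*(add_middleware|app\.use)\(", re.M),
    "auth change": re.compile(r"^\+.*(authenticate|authorize|permission)", re.M | re.I),
    "crypto": re.compile(r"^\+.*(hashlib|hmac|secrets|jwt)", re.M),
}

def flag_diff(diff_text: str) -> list[str]:
    """Return the names of heuristics that match added lines in a diff."""
    return [name for name, pat in RISKY_PATTERNS.items() if pat.search(diff_text)]

diff = """\
+@app.post("/admin/users")
+def create_user():
+    ...
"""
print(flag_diff(diff))  # → ['new endpoint']
```

A hit would then mention the security team on the PR (e.g. via a bot comment) rather than failing the build, which is what keeps the high false-positive rate tolerable.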

So, overall: stop thinking of yourself as a gatekeeper and start thinking of
yourself as an advisor. Our job is to make sure you understand what you’re
doing and what your alternatives are. Security is never the only requirement:
you have a business to run.

------
lvh
'tptacek and I were just discussing this the other day for dependencies (both
libraries and vendors). ICs in technical and non-technical roles ask us if a
particular dependency is OK, and we evaluate it. Examples include the
privileges a browser extension wants, the OAuth scopes a vendor wants, and the
general security posture of a new library.

Individual evaluations don't capture the risk you get from just having a lot
of dependencies. "What happens if this gets compromised?" is something we
consider individually, but it becomes more actionable when you look at your
dependencies globally. Examples: someone loses their Chrome Store credentials,
or someone accidentally pushes AWS keys to GitHub, or someone loses their npm
creds in an eslint-scope style problem. Now the dependency itself is
malicious. If it's a library, that means code execution on your developer
laptops, CI systems, and possibly customer devices.

Individual dependency requests are not the right place to push back on
introducing a dependency. You need a second speed to address those by teaming
up with e.g. the VPEng and usually someone in PeopleOps/HR/Sales/Marketing
(for things like calendar mgmt, sales tools, ad-hoc CRMs) to periodically pare
that list down.
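One way to make that global view concrete is a blast-radius ranking: count how many repos pull in each dependency, so the periodic pare-down starts with the packages whose compromise would hit the most places. A sketch under assumed data (the inventory dict is hypothetical; in practice you would scrape it from package.json / requirements.txt files across the org):

```python
from collections import Counter

# Hypothetical inventory: repo name -> declared dependencies.
inventory = {
    "web-app":  ["left-pad", "eslint-scope", "express"],
    "admin-ui": ["left-pad", "eslint-scope", "react"],
    "billing":  ["requests", "eslint-scope"],
}

def blast_radius(inventory: dict[str, list[str]]) -> list[tuple[str, int]]:
    """Rank dependencies by how many repo dependency lists include them.

    A compromise of a top-ranked package (lost npm creds, a malicious
    release) means code execution everywhere it appears, so those are
    the ones worth pinning, vendoring, or removing first.
    """
    counts = Counter(dep for deps in inventory.values() for dep in deps)
    return counts.most_common()

print(blast_radius(inventory))
# eslint-scope appears in all three repos, so it tops the ranking
```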

------
toomuchtodo
EDIT: lvh/latacora's answers above in thread are most likely better suited to
your use case

Cloud security architect at a financial services firm here (supporting
thousands of devs). We're using the usual pull request review process between
dev/ops teams and our risk management team (which is composed of various
subject matter experts) with an SLA. We're also hard gating ("you shall not
pass") using Sentinel rules for Terraform Enterprise and service control
policies in AWS accounts.
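As an illustration of the "you shall not pass" side, here is a sketch of an AWS service control policy built and validated in Python. The statement chosen (denying CloudTrail tampering) is an example, not the commenter's actual ruleset; the JSON shape follows the documented SCP syntax:

```python
import json

# Illustrative hard gate: an SCP that denies disabling or deleting
# CloudTrail in any account it is attached to, regardless of the
# caller's IAM permissions.
SCP = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCloudTrailTampering",
            "Effect": "Deny",
            "Action": [
                "cloudtrail:StopLogging",
                "cloudtrail:DeleteTrail",
            ],
            "Resource": "*",
        }
    ],
}

policy_json = json.dumps(SCP, indent=2)
print(policy_json)

# Attaching it would go through AWS Organizations, e.g. with boto3:
#   boto3.client("organizations").create_policy(
#       Content=policy_json,
#       Name="deny-cloudtrail-tampering",
#       Description="Hard gate",
#       Type="SERVICE_CONTROL_POLICY",
#   )
```

Sentinel rules for Terraform Enterprise play the equivalent role one step earlier, rejecting a plan before it ever reaches AWS.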

I highly recommend you put a training program together that incorporates a
feedback loop from the above process. Before devs and ops folks start, they
should be gated until they go through your training process. That should
minimize the exceptions you'll need to catch in the review process, but when
those exceptions occur, that should be noted for training improvement.

~~~
hluska
A training program is an excellent idea. Thank you very much - that helps me a
lot.

~~~
toomuchtodo
Happy to help.

------
meowface
Code/PR reviews (by other devs and by people who specialize in application
security), static analysis, secure coding practices training, constant
vulnerability assessments and penetration testing, and a strong security team
who understands the need to balance productivity and security.

Really, above all else it's just hiring or contracting (ideally hiring) good
security people. Good appsec people to review code and look for vulns, people
who just do education and training, people who just do penetration testing,
people who just do security architecture and tooling. And of course a good
security operations and incident response team to deal with the day-to-day
"frontline" defense, investigation, and response. If they are full-time
employees who are integrated with the company culture and are experienced
enough with agile development, then odds are they're not going to set up more
red tape than is necessary.

If security is mostly offloaded to devs or operations and no independent
security org exists at your company, you're in trouble.

~~~
elptacek
I may not be caffeinated enough to participate in these threads articulately,
but this question made me realize something that hadn't quite clicked before.
As I have experienced it, the existing (waterfall?) model for deploying new
features and bug fixes in large organizations requires going through QA and
security review, where (hopefully) the on site security team has some sort of
checklist or guideline for doing whatever analyses they do. But as they move
over to agile, continuous integration/delivery, is there still time to deploy
a complete staging build and wait? We have been working with small companies
for ~2 years now and I haven't rigged up something to test code in at least a
year. Mostly I just stare at it until I've figured out if I can exploit it.

This is where large companies might trip and fall, if they expect their
existing security analysts/teams to do this (or, as you say, if they try to
offload it onto developers or ops people, even ones who are more interested in
security). I don't like how what I am about to type sounds, but the
majority of security people -- particularly the ones at the level where they
are hired to run QA-type audits -- do not know how to review code. It really
does make me uncomfortable to say this, but if that's the rock, the hard place
is how many times I've had to tell someone they really need to learn to code
if they want to "work in security."

So when you say, "good security people," what you mean is people who can read
and write code, and also have enough experience to know when that code can be
used in a way that the original author did not intend. It's almost as if
there's a misconception that being able to read and write code is what you do
if you're a dev or devops. This mindset is odd to me in so many ways. It's
like saying, "I don't need to know how to use a screwdriver or a hammer
because I'm not a carpenter." Even better, someone who understands what it
feels like to be under some pressure to roll out changes where there might not
have been a lot of time to consider malicious behavior. Someone who has your
back.

Just yesterday or the day before, I had this inexplicable crisis where I was
concerned about the value I am providing. It's gone now. Thanks!

~~~
meowface
Yes, a lot of companies are definitely missing good application security
personnel. Many companies don't even have a real infosec department or team at
all; many have an infosec department but no dedicated appsec team or process;
many have appsec people but to them "application security" means a team of 2-3
people who run IBM AppScan once per week and basically just attach a computer-
generated report of findings to an email sent to a distribution list with
almost no other input. Often without even reviewing the code flagged by the
tool or eliminating false positives from the results, let alone performing
manual self-driven code reviews.

For others, their appsec team(s) is/are constantly building security
libraries, frameworks, and tooling, and scanning code for security issues with
software and manual review on a daily basis.

Information security is still just a checkbox for a lot of companies. This is
gradually changing with more and more breaches in the news every few days and
executives who are finally starting to appreciate that the consequences of a
breach can be very bad, but it's still pretty common. I really don't think
many companies have solid appsec teams that are doing the things you and I
would hope they would do, and I agree that probably a scarily high percentage
of "application security analysts/engineers" do not and cannot review code
effectively.

You are absolutely right that knowing how to write and read code are crucial
skills for many aspects of infosec and that a lot of people neglect that, and
it's disheartening that there are many companies who don't really have people
like that on their security teams - even companies that claim to have an
appsec program. But application security is also only one aspect of a solid
information security program, and some other aspects do not necessarily
require development knowledge beyond the basics.

From what I know and have experienced to a small extent, FAANG (and others
in/near that tier) invest a ton into application security and really are doing
it right, at least. (Or are at least doing it _way_ better than 99% of other
companies out there.)

~~~
elptacek
My money is on there being a direct correlation between companies that
adopted internet technologies early as crucial to the business and 'doing it
right,' versus those other companies that view IT and security as 'cost
centers.'

~~~
meowface
Absolutely. I recently moved from a company that viewed IT and information
security as cost centers to one that views them as core business components,
and the culture (and competence) difference is very refreshing.

------
jbob2000
At my company (80,000+ people, bank), we have an app security department that
handles everything. We give them a URL to our test environment for a dynamic
scan and we give them the contents of our release branch for a static scan.

We are not allowed to release until the security scans are done and reviewed
with the app security team. We release about once a quarter because of this.
We do not practice continuous delivery and doing agile is difficult. There's
no gatekeeper, story grooming, product owners, nada. You do what app security
says when they say to do it; we work on their schedule.

As a tangent, working for a large company has shown me that agile doesn't
really work at this scale. There's too many third parties that you have no
control over. And the business is always trying to respond to the market, so
projects are changing all the time and people are always shifting roles. I
just come in, put my commits on the branch, and pray that the machine has my
back.

~~~
motohagiography
Wrt agile not scaling because of 3rd parties you don't control, would you
attribute that more to unwillingness or inability of third parties like
security to collaborate?

~~~
jbob2000
I don't want to call it an unwillingness or inability, because they are good
people who do good work. I really just think that at our scale, we move with
the world, and the world isn't agile nor does it care about agile, it just
_is_. So really, we aren't agile because the world isn't agile.

For example, we were preparing to run some world cup soccer promotions based
on the teams that won. We literally had to wait until a soccer match finished
in order to deliver our changes. How do you practice agile with that kind of
restriction?

------
lvh
Train your ICs to be better at spotting risky behavior. They don't have to be
able to exploit and fix the bugs: they just have to know when to ask for
someone else to look at it.

One thing we've found to be effective here: we start with a comprehensive
audit (appsec, infrasec, corpsec, deployment security, external network
footprint) that tells us about systemic problems you face. Then we can provide
targeted training where you contextualize the problem for ICs.

I'm sure Ryan McGeehan (magoo) has ways to tie that into quantitatively
making people better at risk estimates, but we're not trying to get ICs at clients to
do that (yet).

------
franzwong
We use AppScan from IBM: [https://www.ibm.com/security/application-security/appscan](https://www.ibm.com/security/application-security/appscan)

