
Ask HN: Whistle blowing when PII isn't protected? - throwaway-sorry
Increasingly, vast amounts of personal information are being lost in data breaches. I know from experience that in many cases the development team was well aware of unacceptable levels of risk long before the breach happened, but could not raise these concerns past management: management who either didn't care or didn't understand. It's not just people doing a bad job with new code, but lack of testing, reliance on untrustworthy code for security, lack of review, lack of release engineering processes, lack of change control, insufficient access control, etc. Things truly outside the dev team's control.

I've seen great developers depressed and near-suicidal because they were involved in a massive PII breach that shouldn't have happened, but was inevitable due to nonexistent engineering and quality practices. The obvious answer is to leave those jobs and avoid the situation, but that doesn't remove the risks to the data. If anything, the risks increase as conscientious staff leave and clueless staff accumulate.

The CVE system seems to work well to expose problems when an actual vulnerability exists, but we seem to lack a way to blow the whistle when too much risk exists that cannot be addressed internally at the company. It's nice to think that those companies will eventually fail, but that doesn't un-steal an identity, and it doesn't help adoption of technology in general. In many cases, especially government breaches, the people who had their information stolen may not have had a choice in sharing it in the first place.

It's in everyone's interest to prevent data breaches, but how can we do so when company politics won't allow it?
======
scryder
If company politics won't allow it, there are several possible reasons I can
think of:

>Possible Good Reasons

>They don't want to divert resources from guaranteed useful development to
guard against the mere possibility of a hack

>No one has got around to it yet and they're just hoping nothing goes wrong in
the meantime

>No one has the expertise, or has had enough time with the code, to know that
a problem exists at all

>Possible Bad Reasons

>Higher-ups or peers want to implement, and take credit for having
implemented, security fixes themselves. You doing it, specifically, would hurt
their resume

>Personal data exposure is intentional and someone's either looking through it
or wants to, for whatever reason.

So it's either a question of how priorities have historically worked out, or
of how your peers and managers get their jollies.

The former is easy to deal with: find three similar companies (or bigger, if
you can't find similar ones) that had data breaches, tell someone in your
company with the authority to alter priorities (probably your manager) how
much money those breaches cost ("shuttered" is the number you really want to
cite; it's dramatic but tickles the brain in all the right ways), and hope
they agree to let you fix it.

Alternatively, and even better: before making your pitch at all, spend 20% of
your work time implementing a working fix for the security flaw(s) that would
allow a data breach.

Selling a manager on

>"Hey, I have this thing that I built that would make the whole company much
more secure. It won't cost you any more development time than I've already
spent, since it's ready, and blocking holes like this will probably save the
company from bad press and something like 5 million dollars down the road,
based on similar breaches to the ones it stops."

is a lot easier than selling them on the alternative it seems like you're
currently giving them:

>"Hey, we have an abstract problem that will probably bite us later. I think
it's morally wrong, but I have no idea how long it will take to fix because I
haven't tried yet, I have no proof that this problem will impact our bottom
line, and I don't care enough to fix it myself. Despite all this, would you do
us all a favor and make fixing it, rather than building new, sellable
features, a priority?"

That is most of the real politics at work. The first pitch sells your manager
a ready-made, immediate resume boost, in the literal words he would use
himself; I've never seen it rejected. The second asks him to commit an unknown
amount of time to fixing the holes and, in doing so, to stick his own neck out
for security's sake when he could instead tell you to build features. It's not
company politics that's the problem in this case; it's that your manager knows
better than to act on whims.

Give him fodder to stick his neck out, or do it for him, and you're more
likely to succeed.

In the other cases, where someone else wants the credit for fixing it, or
where the exposure of personal data is intentional, you'll meet pushback the
whole way if you try, although you still can if you don't mind the potential
fallout. Take comfort that in the first case at least someone cares, and in
the second that the CS hiring market is still hot enough that you'll probably
be able to find somewhere else to work.

