Security policy audits: Why and How (arxiv.org)
60 points by randomwalker on July 27, 2022 | 26 comments



When I needed a security policy for my startup, I looked everywhere and couldn't find anything that made sense. I only found policies that are very, very verbose, so much so that I didn't know what to do with them, and most were outdated. In the end, we wrote our own, focusing on producing something that's comprehensive yet concise. (After all, if it's not concise, no one is going to be able to understand it.)

We're planning to make another round of changes and publish it under a permissive licence. Here it is in case it's useful to someone: https://www.hardenize.com/about/security_policy

A good security policy is very important early on to inform architecture and design. Ours has worked very well for us. It has also often helped us avoid having to complete customer security questionnaires.


> Work shall be carried out exclusively using corporate equipment. There shall be no access of company networks and data from personal devices. Corporate equipment shall not be used for personal activities.

This one caught my eye. As a dev, corp laptops are usually so locked down as to be useless, i.e. unable to install dev tools etc. So I often use personal equipment (not connected to a corp) and use git as my gateway back into big corp.


> As a dev, corp laptops are usually so locked down as to be useless, i.e. unable to install dev tools etc.

In a past life, my company, which was a .NET shop, was acquired by a large company that used Macs for everyone except salespeople. They didn't provide Macs to anyone on my team, so we had to use the Windows boxes that were so locked down we had to get exceptions for everything. While we were successful in getting Visual Studio and other tools added to the exception list, we weren't successful in getting our compiled software added to the list, i.e. we literally couldn't run the software we were acquired to create.

For the short term, we discovered that anything done in WSL (Windows Subsystem for Linux) was completely ignored by endpoint security, so that let us work around many local issues for the 18 months or so it took us to get Macs for work.


Another "trick" I've used before is to run everything inside a Hyper-V VM on a locked-down Windows box.

Locking down machines makes sense, but won't someone please think of the developers?!


So (as a corporation) don't lock them down as much. That's what we do as a small company that has to pass ISO 27001 (and NEN 7510) because our product is used for healthcare. From a security audit standpoint, not allowing personal devices saves a lot of time and trouble. (This of course means that anyone who is on-call just gets a phone from the company.)

As a developer, if you need to work around locked-down environments, the security policy doesn't really matter to you, as you are already violating it. Whether your employer cares about that or not is another thing, of course. Some managers will consider you a liability, some will accept that this is the only way you can do your job.

Ideally employees embrace the security policy, and whoever tweaks it makes sure everybody can still do their job. In reality, that will vary a lot.


> As a developer, if you need to work around locked-down environments, the security policy doesn't really matter to you, as you are already violating it.

Or the security policy has an "exceptions shall be reviewed and approved by X and documented at Y" clause.


The struggle between security and usability is very real. Personally, I don't think it's possible to lock down dev equipment without significantly impacting productivity. That said, ensuring that high-value environments (e.g., production networks) can't be accessed from dev equipment with elevated privileges, well, that's necessary, and I feel it's often neglected in small companies.


You’re doing yourself, and all other devs at your company, a disservice - at least in my opinion.

If the devices are locked down to a degree that you cannot do your dev (aka your job), that should be brought up with management. Of course, it’s easier said than done xD

I’m personally a fan of locked down corporate devices and then either dedicated laptops or cloud vms for development.

It’s not necessarily an easy problem to solve though


It’s pretty rare that a manager will help fight stupid security restrictions, even if they prevent you from doing your work. I currently have to look into Remote Desktop solutions for one of our devices. IT in their wisdom have decided to block access to this category of products (they are blocking VNC sites and TeamViewer; Zoom is ok for unknown reasons). From past experience it’s clear that I either have to do it on my personal equipment or not at all, because IT won’t accept even reasonable requests for unblocking. Sometimes running a VM with a non-corporate image will help.


The truth is, security is hard. IT/CorpSec are asked to implement policy drawn up by lawyers/auditors/compliance specialists with limited insight into the real needs of the organisation. Asking them to make complicated exceptions that they don't understand and can't control puts them into an untenable position with respect to hard requirements that are handed to them as absolutes. At the other end, this often filters through as decisions that look utterly nonsensical (you can use Zoom but not something else, etc).

Meanwhile, for a developer, telling people they can't install and run tools they need to do their job is also untenable.

Like others I've come to the opinion that you almost need separate computers and completely isolated networks to do this properly. If you are doing things right, there should be very little need for developer equipment to ever connect into any place that sensitive data resides. And consequently there should be very little need to lock down developer equipment. Unfortunately not many places can architect things that well, nor build that type of nuance into their security policies. Among other things, you need to invest a lot of work in creating fully representative non-sensitive test data so that developers can do their work in a realistic setting.
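
To make the test-data point a bit more concrete, here's a minimal sketch of the kind of generator I mean (Python, assuming the third-party Faker library; the schema, file name, and record count are invented for illustration, and real test data would need to mirror your actual schema and edge cases):

    # Minimal sketch, not real tooling: generate realistic-looking but
    # non-sensitive test records so developers never need production data.
    # Assumes the third-party Faker package; field names are invented.
    import csv
    from faker import Faker

    fake = Faker()
    Faker.seed(0)  # deterministic output keeps test fixtures reproducible

    with open("test_customers.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "email", "signup_date"])
        writer.writeheader()
        for _ in range(1000):
            writer.writerow({
                "name": fake.name(),
                "email": fake.email(),
                "signup_date": fake.date_this_decade().isoformat(),
            })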


I wonder about the option of having an unlocked dedicated development machine that has no or very limited access to network resources and only minimal security software, in addition to a locked-down laptop for email, Slack/Teams, etc.


Curious how they handle personal mobile devices. So do they offer company phones, not offer after-hours support, or outsource support to a company with a compatible policy?


Company phones are standard for any organisation that has a policy like this. Another alternative is providing a personal phone number that you can be contacted/paged on; you then log in to a corporate device to see why you're being called.


Totally on board with the goals, and I've done some similar work, though I haven't ended up with anything nearly as trim as this.

I'm interested in if/how this has stood up in externally-audited scenarios, like SOC2/ISO27001 or similar. I get that it's successfully helped you avoid some customer questionnaires, but I'm thinking of more formal processes.

At a glance, it covers many of the bases at a high level, but I wonder if it's missing the specifics that an external auditor might typically expect to see from a policy manual. Are there additional sub-documents/playbooks/etc. for many of these that elaborate further?


We haven't yet gone through any audits [we're small/young], but we've begun to prepare for SOC2. The policy itself is absolutely insufficient for anything of the sort and we expect that we will generate a ton of further documentation. After all, SOC2 is essentially all about documenting your processes in detail.


> Information security isn't just about software and hardware -- it's at least as much about policies and processes.

I recently came across these, which really helped me to understand the basics:

- https://fly.io/blog/soc2-the-screenshots-will-continue-until...

- https://scrty.io

- https://latacora.micro.blog/2020/03/12/the-soc-starting.html

It was an informative read for someone who had paid little attention to this stuff.


In my experience, security policies often start short, readable, digestible, and actionable. Over time this changes as more and more people / groups add to them or edit them for specific purposes - engineering, privacy, HR.... New hires come in and make changes. Policies just evolve.

To me, the worst culprit is audits. During an audit, there's this tendency to continually add to policies to bring them in line with the framework you're being audited against, or to bring them in line with what your auditor interprets as required by the framework. We used to host our policies on Github and you could clearly track our audit season by the revision frequency. In a week with an auditor, we'd have 10 or 50 or 100 revisions to policies.
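
If your policies live in git, that spike is easy to see from the history itself. A quick Python sketch along these lines (assuming the docs sit under a "policies/" directory, which you'd adjust for your repo) buckets revisions by ISO week:

    # Rough sketch: count commits touching the policy docs per ISO week to
    # spot audit-season spikes. Assumes the docs live under "policies/".
    import subprocess
    from collections import Counter
    from datetime import date

    dates = subprocess.run(
        ["git", "log", "--pretty=%ad", "--date=short", "--", "policies/"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    weeks = Counter(date.fromisoformat(d).isocalendar()[:2] for d in dates)
    for (year, week), n in sorted(weeks.items()):
        print(f"{year}-W{week:02d}: {n} revision(s)")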

Over time, what you end up with is a Frankenstein set of documents that are not short, readable, digestible, or actionable. But they helped you pass your audits.

This is not how it should be done but this is the reality I've seen.

At my last company, we wrote and open sourced policies that many people used to pass audits - https://github.com/globerhofer/HIPAA-policies. I don't know if much of the policies were relevant to sec ops for those companies but the purpose a lot of the time was the audit.


I really get mad at people's bias towards adding more “to be safe”. More security policies, more restrictions, more alerts, more process, more everything. The very fact that many audits apparently exist to ensure you have a shit security policy by bloating it with nonsense and crap is ridiculous. It’s all insanity.

A classic example is password reset policies, rules that make security worse by burdening users in ways that predictably and reliably push them towards the lazy way of doing things. In general you get good security by investing in it and hiring careful people, not by writing a list of 10000 rules and exclaiming “if only you followed rule 7354!” when something happens. Security policies seem to be written like horoscopes, designed to address any possible variation of a security incident that might happen, rather than truly pointing you towards what’s important.


Security audits don't exist to ensure you have a shit security policy... I think it goes without saying that's untrue.

If a particular standard or framework you need to meet requires these things, there's unfortunately not much your auditor can do. As such, your problem is with the standard or framework you're aligning with, not so much the auditor.

If a security standard requires you to have a reset policy, then you should have free rein to design that how you see fit in the context of your organisation and its compensating controls.


This paper raises important points, although it's not well written and is patchy.

Security "policy" is a disaster area, and is only getting worse.

Why? Increasingly, policy is being treated as separate and different from technical matters. Yet these are not legitimate specialisms. A fake, forced partition that aligns with hierarchical ideals of western "management culture" leads to parts being written, implemented and enforced by different groups of people. This crystallises into rigid divisions, obsessive attempts to document and "nail down" everything, and a mind-set of "compliance and audits".

Schisms then arise that are worse than the outright failure of any individual part. The idea that policy relates to implementation as management does to production/execution is deadly in its stupidity and grave in its implications because it fails to apprehend security as a continuous dynamic process of reconnaissance and adaptation.

Although I do not believe military metaphors are always appropriate to security, consider by analogy the staffing of senior generals and government advisors on defence. Years of crawling around in the jungle with a dagger in your teeth is pre-requisite to sitting in a leather chair in Whitehall deciding whose sons will go risk their lives. Nobody "comes in at a senior level" from a career in contemporary dance and a conversion masters degree in Peace Studies.

By comparison the technological society has been caught with its pants down and we are desperate to fill cybersecurity roles. Anyone of sufficient seniority with a "policy background" is presently welcome to stick their oar in and make a sideways leap to CSO. Digital security however is a complex, nuanced and extremely demanding area. It can't be bolted-on or fixed with off-the-shelf products. You can't order people to do it. Almost every decision is an impossible gut-wrenching compromise. Put simply, you actually need to know what you're doing. You need to have seen and dealt with the consequences of shallow decisions and reactionary practice, and to have experienced real users, who are diverse and sometimes infuriating.

This paper correctly identifies the need for a more mature and integrated ethos for research and practice, one that moves beyond "mere" software and hardware. But it fails to apprehend the symmetry of the problem. Those who claim to have something to teach about "policy" must be scrutinised to ensure their focus is not on power rather than polity, on hierarchy rather than effective structure, or on "grand plans" rather than skillful adaptation.


Kind of an unusual paper; they seem to mix up things like SIM swapping, which are more in the area of policy, with things like S3 buckets being left open (mentioned as an example on page 1), which are easily caught by CSPM-style solutions and are down to individual customers.

One of the conclusions, that regulator-based policies are needed to correct behaviour, is likely correct. In some cases the incentives of the companies who operate the control (telecoms companies in the case of SIM swapping) don't line up with the people affected, so it needs regulator/legislator intervention.

On a related note, one of the best ways to effect change as an individual in security is working to set policy and industry standards. One requirement in a CIS benchmark is likely to have more impact than 100 recommendations written in pentest reports :)


Shameless plug: if you’re looking to conduct Security Awareness training as well as distribute and get sign off on policies/SOPs, my startup Haekka.com is a good place to start. We have an extensive catalog of training and see a nearly 96% completion rate across customers thanks to our deep integration with Slack. Where Haekka really shines is with continuous and lightweight security/privacy engagements. These help employees keep security topics top of mind without requiring them to go through heavy training.


Very flawed paper. I assumed it might be an undergrad paper but I see the lead author is an associate professor, and the 2nd author a PhD student.

That said, it is indeed interesting that security policies are inward-facing. And I did enjoy the SIM swap study.


Could you elaborate on what makes it very flawed? To save the rest of us some time?


The one that sticks out to me first and foremost is the assumption that a tech business (they studied the top 120 websites, by what measure they don't say -- Alexa? -- anyway let's assume most are tech companies) is run in a top down fashion. That policies drive all activity and therefore the only hammer for this nail is the policy. When in actuality, many big tech businesses are bottom up. One doesn't need a policy to fix authentication, one needs smart engineers that understand authentication. A policy saying "it must be like this" doesn't make the authentication good, a good implementation makes it good. Sure, we can imagine that this is dictated from the top, but it isn't.

I call that one first because it is a basic assumption of the paper as a whole.

The next huge glaring flaw is that the authors talk about authentication flaws in the top 120, and that SIM swap is a big factor here because even when 2FA is in place, SMS 2FA is fairly weak. (They don't actually say that -- they studied some OTHER undescribed set of 140 companies that use SMS 1FA or 2FA.)

While focusing on the top 120 tech companies' lack of policy that SMS 2FA perhaps be disallowed, are they really insinuating that the phone companies, which are in fact top down and have a vast body of policies and procedures, themselves haven't solved the auth problem by which SIM swaps occur? That SIM swapping occurs because they lack policies saying don't swap SIMs for people that don't properly authenticate? Clearly the phone companies have very strong policies, that agents violate frequently. So much for policies being the fix!

The last flaw I would point out, in the limited time I want to spend here, is that they don't consider that the top 120 are the top 120 not in spite of, but perhaps _because of_ (in part), their policies or lack thereof. In the first section, titled _security policies matter_, at least they do contrarily note that a "vanishingly small number of people are actually harmed". So do policies matter or not, at the end of the day?


Auditability in general is a staple and pillar of security. If you don't check, you don't know. If you don't know, you can't trust.



