
Security Architecture Anti-Patterns - napolux
https://www.ncsc.gov.uk/whitepaper/security-architecture-anti-patterns
======
tptacek
This is pretty unhelpful; the case I would make is that it's providing
security largely by defining the problem away. For instance: it's usually
unrealistic to require that all administration happen through clean-room
systems that don't ever browse the web.

The real-world practice of security is in large part the deployment of risky
systems with mitigations in place for the likely attacks that will target
them. So, for instance, getting everyone to talk to the admin console on a
purpose-built Chromebook with no Internet access is probably not a realistic
option, but getting every system with admin console access MDM'd and requiring
access to admin consoles to route through an IdP like Okta to enforce 2FA is
much more realistic, and thus likely to happen.

The patterns in here that aren't unrealistic are pretty banal. I don't doubt
that UK NCSC sees systems designed to be unpatchable, but modern engineering
norms (Docker, blue/green, staging/cert environments) --- norms that have
really nothing to do with security and are common to pretty much every serious
engineering shop --- address that directly anyways.

Other patterns don't really make sense; for instance: you should design to
make your systems patchable (sure, that's again a basic engineering
requirement anyways), but also make sure your dev and staging environments
aren't continuously available. Why? Those are conflicting requirements.

~~~
amanzi
I respectfully disagree. I have seen many of these antipatterns in production
in many medium & large size orgs, and I think the six scenarios presented in
this doc are more common than you think.

The "browse-up" scenario is extremely common because engineers/administrators
usually prefer to remote directly onto the systems they're working on from
their main machine rather than endure the inconvenience of needing to securely
connect to another host first. Many of these admins/engineers would think it's
inconceivable for their machines to be vulnerable, but have no issues
downloading dev tools, libraries and dependencies onto their machines from
third-party & untrusted sources (e.g. GitHub, npm, etc.).

"Docker, blue/green, staging/cert environments" - believe it or not, these
are seen as emerging trends in many orgs rather than the norm as you suggest
here.

And regarding designing systems to be patchable, you say: "sure, that's again
a basic engineering requirement anyways", but again I'd counter that I've come
across many systems that haven't been patched in months or years because it's
deemed too hard. Another similar issue I've come across is where an org's DR
processes have not been properly tested because it's too hard to failover
without causing significant disruption. Both can easily be designed for early
on, but for legacy systems that were implemented without this foresight it
still remains an issue.

~~~
EvanAnderson
The way that I'm reading the "browse-up" scenario, however, isn't how you're
describing it. Admins wouldn't "securely connect to another host" -- they'd
have to use a trusted and known-clean device to perform all their
administrative activities. Connecting to that device from another host (i.e.
using it as a "jump box") seems to be specifically disclaimed as an
"anti-pattern".

~~~
theossuary
That's not how I read it. This part in particular:

> There are many ways in which you can build a browse-down approach. You could
> use a virtual machine on the administrative device to perform any activities
> on less trusted systems.

The point is to tailor your risk to the systems you're accessing. You should
interact with less trusted content in more secure ways if you're also
interacting with high-security systems.

So if you're using firejail/bubblewrap to consume less trusted content (web,
email, videos, etc.) and SELinux/AppArmor, I think your system would match
their description of browse-down, for most low- to mid-security systems. For
high security, maybe Qubes/VMs. Then for the highest security you start
thinking about multiple machines with KVM switches.
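
As a rough sketch of that idea (the firejail flags are real, but the wrapper
function and file names here are just illustrative):

```python
import subprocess

# Illustrative sketch: open untrusted content inside a firejail sandbox
# with a throwaway home and no network access, so a compromised viewer
# can't reach the rest of the system. Assumes firejail is installed.
def sandboxed(cmd, allow_network=False):
    jail = ["firejail", "--private"]      # temporary, empty home directory
    if not allow_network:
        jail.append("--net=none")         # new network namespace, no interfaces
    return jail + cmd

# Example: view a downloaded PDF with no network access.
print(sandboxed(["evince", "untrusted.pdf"]))
# To actually run it: subprocess.run(sandboxed(["evince", "untrusted.pdf"]))
```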

------
closeparen
This is _amazingly_ concrete and understandable from a technical perspective
for a government security document. Where can I find more like this?

Everything I’ve seen in ISO security standards, for example, is written at an
abstract theoretical level about the design of security bureaucracy rather
than the design of actual systems.

One bone to pick: basically all tech companies expect you to be oncall for
your services via your laptop. They’re not paying anybody to sit in the office
overnight, and commuting in when you get paged would seriously delay
mitigation. Is “browsing down” even possible under those circumstances?

~~~
adamlett
_Is “browsing down” even possible under those circumstances?_

From TFA:

 _There are many ways in which you can build a browse-down approach. You could
use a virtual machine on the administrative device to perform any activities
on less trusted systems. Or you could browse-down to a remote machine over a
remote desktop or shell protocol. The idea is that if the dirty (less trusted)
environment gets compromised, then it’s not ‘underneath’ the clean environment
in the processing stack, and the malware operator would have their work cut
out to get access to your clean environment._

~~~
closeparen
Yeah, tech workers aren’t going to tolerate doing everything except admin
consoles in a VM or RDP session. Maybe on a special purpose workstation but
not the daily driver company Macbook.

------
cs02rm0
On a personal note, the advice not to browse down from less trusted devices
often means an organisation supplying a trusted device.

That potentially conflicts with IR35 for contractors who would then not be
supplying their own equipment.

I've also seen it result in a contractor's *nix laptop being swapped out for a
Windows laptop (built by a junior employee) with mandated "phone home"
software installed. Personal biases persuade me that this wasn't necessarily
an improvement in the security of the system.

I should say, I'm generally a fan of NCSC advice and I think it's great
they're putting their thoughts out there.

------
luord
This is a great list and I'll keep all this in mind, even if I see number one
as nearly unfeasible.

The one I found most surprising was number four. If using PaaS is better
practice, then I certainly won't feel lazy anymore for not wanting to deal
with the administrative overhead of IaaS (or kubernetes, but that's a
different beast entirely, one I try to avoid by using nomad). This will help
my impostor syndrome, though calling it that is probably presumptuous on my
part.

That aside, _why_ is this site using react to the point they need JavaScript
enabled for it to run? What could a site such as this _possibly_ need all that
interactivity for? Also, why is what seems to be the entire CSS embedded in
the head? That's just weird.

~~~
andai
The main page's source code actually doesn't contain CSS. That tag is written
dynamically by the JavaScript.

~~~
luord
Is that a common practice recently (or maybe a practice I missed entirely)?
Last time I used webpack (for example), we were still creating a minified CSS
file that was meant to be linked in index.html, but I haven't done frontend in
a while.

------
paulddraper
> Anti-pattern 4: Building an ‘on-prem’ solution in the cloud

It didn't use this as an example, but good heavens, how is it that I still
have to understand VPC and NAT to use half of the cloud?

10./192. was never a security measure; it was an IPv4 rationing scheme that
IPv6 made obsolete.

AWS made a big deal of the fact that Lambda functions can now be launched in a
VPC in less than 15s.

Why are people doing that at all????

Because of bearded network admins that set up their on-prem network in the
cloud.

Now networking in the cloud is so complicated, people are turning to FaaS
because it has a better chance of skipping the morass.

---

P.S. Don't get security groups confused with NAT. AWS had security groups long
before its VPC service was even a thing.

~~~
nullpage
While 10./192. private addresses in IPv4 were largely designed to help deal
with address space exhaustion, they are also important because organizations
can use them without having to own the addresses or register them in any way
with IANA (or equivalent), since they are not publicly routable. IPv6 still
maintains this feature with unique local addresses: the entire fc00::/7
address range in IPv6 is allocated to private networks and is not routable on
the public internet (not that AWS uses these; any IPv6 address they assign to
you is a globally routable address).

A lot of stuff just still doesn't support IPv6 yet (RDS, for example:
[https://aws.amazon.com/premiumsupport/knowledge-center/rds-ipv6/](https://aws.amazon.com/premiumsupport/knowledge-center/rds-ipv6/)),
so your options are to either give that endpoint a public address and manage
your security groups well, or give it only a private address, which gives you
the added benefit of the endpoint not being publicly routable (a nice second
layer of security beyond security groups). The downside is that the things
that need to talk to it must now also live in your private subnet, hence
Lambda launching in a VPC.
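
As a side note, Python's stdlib `ipaddress` module encodes exactly this split
between private and globally routable ranges, which makes for a quick sanity
check:

```python
import ipaddress

# fc00::/7 is the IPv6 unique-local (ULA) range: usable without any
# registration, and not routable on the public internet.
ula_range = ipaddress.ip_network("fc00::/7")

addr = ipaddress.ip_address("fd12:3456:789a::1")   # a typical ULA
print(addr in ula_range)   # True
print(addr.is_global)      # False: not publicly routable

# The classic IPv4 private (RFC 1918) ranges behave the same way:
print(ipaddress.ip_address("10.0.0.1").is_private)     # True
print(ipaddress.ip_address("192.168.1.1").is_global)   # False
```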

~~~
paulddraper
> A lot of stuff still doesn't support IPv6 yet (RDS for example)

Almost all software (OS's, browsers, databases, load balancers, etc.) supports
IPv6.

Some third-party services don't. But that's usually irrelevant to my point.
For example, RDS allocates a public IPv4 address at no extra cost. In any case,
my point is that cloud providers are unwisely shoehorning NAT into everything;
citing AWS as a contributing factor just reinforces that.

---

There's no longer any need for local IPs, except so that we can still design
90s-style networks.

------
zinssmeister
Excellent and to the point. I see this apply to many technology SMB companies
as well. We once compiled a few actionable recommendations for smaller
companies that host on AWS and that post ended up being our most popular
article: [https://www.templarbit.com/blog/2018/11/21/security-recommendations-for-hosting-on-aws/](https://www.templarbit.com/blog/2018/11/21/security-recommendations-for-hosting-on-aws/)

------
inetknght
> _You need to enable JavaScript to run this app._

Nah.

~~~
danShumway
There's no way you would have been able to find it without the page loading,
but for anyone else in the same position, the direct PDF is available at
[https://www.ncsc.gov.uk/pdfs/whitepaper/security-architecture-anti-patterns.pdf](https://www.ncsc.gov.uk/pdfs/whitepaper/security-architecture-anti-patterns.pdf).

I would maybe question whether an article that can be perfectly embedded in a
static PDF without any changes or downgrades really needs an entire React
stack and a Service Worker for the browser, but :shrug:. Every org is free to
make their own engineering choices.

It does seem to be a pretty good list, so worth taking a look at.

------
hmmm1021
Bloody hell, this document is great, it's like reading Ross Anderson. Exactly
the type of security advice we need to get out to IT people.

------
ran3824692
Top comment "This is pretty unhelpful", ya, requiring running nonfree
javascript to learn about security anti-patterns is an anti-pattern. Fuck
corn, fuck bread.

------
motohagiography
Great description. How do you get security architecture into the design phase
of a system when you are doing dynamic and iterative product development?

~~~
munchbunny
They are only mutually exclusive if your business and product management teams
deprioritize security. In my experience, the typical reason that security gets
neglected (as opposed to just making reasonable trade-offs) is that management
and product management both care too much about just shipping shiny things and
don't care enough about doing right by the end user. I've seen better and
worse teams. Most teams fall into a category of "you're lucky you're not big
enough to be a target."

General best practices I can think of, in broad organization level strokes:

1. Make sure security is implemented at the dev ops layer through practices
such as logged just-in-time access to production systems, secret vaults for
service keys and certificates, airgapped machines for handling secret keys,
etc.

2. Make sure security best practices are implemented by default into your
APIs (CORS, TLS 1.3, whitelist-based firewalls between services that
shouldn't need to talk to each other, etc.) and make it transparent to the API
caller, at least when it's your own services talking to your own services.

3. Make security an element of design and code reviews. Square, for example,
did this by having subject matter experts advise teams on security design when
projects were still in the ideation/design phase.
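
For point 2, the "by default" part can often be centralized in a few lines; a
hedged sketch using Python's stdlib `ssl` module (the helper name is made up,
and your HTTP/RPC client of choice would consume the context):

```python
import ssl

# Enforce TLS 1.3 as the floor once, centrally, so individual
# services don't have to remember to opt in.
def default_client_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # verifies certs + hostname
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse TLS 1.2 and older
    return ctx

ctx = default_client_context()
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_3
assert ctx.check_hostname                         # hostname checks stay on
```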

Ultimately, security costs a non-trivial amount of time, and it requires
training your developers to be able to reason about security.

------
hestefisk
This is super helpful. Good find.

