Given the number of corporate networks I've worked on, I've found this pattern:
Jump/bastion servers everywhere, with the ability to "get work done" (via scp and restricted SSH options) severely limited. That, plus VLANs, lets the PHB and 'cyber insurance' people tick the checkbox that "yes, we are secure".
Everybody can `sudo su` with no restrictions and no additional password/2FA.
Boxes on internal network can't access the internet (outgoing comms blocked).
While this is handy during general usage, during setup of a 'TESTING' environment that might require 100 Ruby gems and third-party files to be installed, it's a royal pain in the ass.
I could easily break out with SSH tunnels, but I'm trying to respect the client's wishes. It makes the job 1000x more difficult than necessary.
Penny wise and pound foolish.
So they give lip service.
OH, and all passwords are a variation on 123_$CORPNAME, $CORPNAME123, and abc$CORPNAME123.
More patience than me, haha. I remember the last project like that I was on with a small group of contractors - we spent half of the first few weeks working around all these theatrical limits. We had an HTTP proxy that could run over RDP. Fortunately nobody was really looking (or caring).
Bastions seem to be something that a lot of organisations really don't understand. They feel that they need to have them for "security" or to tick some kind of compliance box, but haven't really thought about how to configure them, how to design the network or even what they're trying to achieve by having them.
Lemme guess. Aerospace/defense? This is the usual strategy to satisfy defense procurement regulations regarding IT security. Also the reason nothing ever gets done.
I think it's easier said than done in practice. What it means to have secure defaults varies based on what the production environment looks like, and the reality is that most software is feature rich and isn't usually intended to run in production out of the box, which is what makes a lot of the default configurations insecure.
IIRC, typing "apt install sshd" (or whatever the package name is) is enough for Ubuntu to start a systemd service with password authentication enabled, so we are definitely far from having secure defaults even for the most basic things.
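For what it's worth, tightening that default takes only a few lines in /etc/ssh/sshd_config (a sketch using standard OpenSSH options; check your distro's shipped config before relying on it):

```
# /etc/ssh/sshd_config
PasswordAuthentication no          # key-based auth only
KbdInteractiveAuthentication no    # no PAM-driven password prompts either
PermitRootLogin prohibit-password  # root can never log in with a password
```

Then `systemctl reload ssh` (the unit is called `sshd` on some distros).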
Very much this. One package I work with requires, in order to be secure, an LDAP server already set up correctly, plus TLS certificates ready to go, signed by a proper root certificate. Secure by default in that case would be tough.
From this list I struggle the most with "Insufficient internal network monitoring"; all the other 9 are easier than somehow monitoring everything going on across the entire network. For my small team, I've yet to find anything easy/cheap/useful built for a small team. We can't have a single person specialize in network monitoring, but I feel like to really do it RIGHT you need someone who does.
Pentester/red teamer here, this point from the article is the key:
"Properly trained, staffed, and funded network security teams can implement the known mitigations for these weaknesses."
You need someone who actually understands networking tech at a deep level to accomplish anything beyond what expensive tooling/devices will offer you. Otherwise, you're always going to be limited by whatever vendor you're using and the capabilities they build in, assuming you're using the solutions to their full capability.
Yeah, I feel that. Anyone can have a good password, but it's hard to do all that networking stuff. Small teams really suffer on this one more than the other 9, at least in my experience.
So many companies have a single large network that spans their data center, offices, and remote workers. There should be a DMZ between the data center systems and end-user compute. The goal should be to treat the end-user compute environment the same as the Internet: completely untrusted.
How does everyone here handle the patches and updates on their network? Do you perform updates when you remember to, or do you have regularly scheduled sessions, or did you (heaven forbid) automate it?
The more home servers and services I have, the harder it is to keep everything up to date…
Ubiquiti recently had a horrible update to their AP firmware that made the devices unusable, with cryptic symptoms. Users would attempt to connect to the Wi-Fi SSID and just get rejected.
This burned me so bad that I have to second guess updating software now, which creates a moral hazard. We need to figure out how to price and actually extract the externality cost that these errors create. Otherwise, we are in a perpetual gradual slide.
I use Ansible to update machines en masse. List your machines in your inventory file and then run `ansible $GROUPNAME -m command -bK -a "$UPGRADECOMMAND"`. Groups are especially useful if you have machines with different Linux distros that use different commands. I understand it's even possible to use Ansible to manage Windows machines.
Of course, now you have to be really careful to guard access to your Ansible setup...
Use '-m package -a state=latest' and it should upgrade most distributions without needing to care about the underlying package managers... I think. I could see some oddities with package name resolution!
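As a playbook instead of an ad-hoc command, that looks roughly like this (the group name and filename are made up; whether `name: "*"` is honored depends on the underlying package module, so test it against your distros first):

```yaml
# upgrade.yml -- upgrade everything on hosts in the 'servers' group
- hosts: servers
  become: true
  tasks:
    - name: Upgrade all packages with the distro's native manager
      ansible.builtin.package:
        name: "*"
        state: latest
```

Run with `ansible-playbook -K upgrade.yml`.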
We have defined weekly and/or monthly maintenance windows scheduled for this kind of stuff. Nobody on the user side ever remembers these, but at least in theory this is a time when systems can be expected to be down for patching and updates, or other maintenance/upgrades.
I set up a systemd timer that runs `pacman -Syu` every morning, and I keep an eye on the Arch Linux mailing list. It's worked out pretty well over the past few years. There's only been one or two times where something broke to the point that I had to roll back an entire machine.
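For anyone wanting to replicate this, a minimal version looks something like the below (the unit names and the 6am schedule are my own choices; `--noconfirm` is what makes it unattended, with all the risk that implies):

```
# /etc/systemd/system/update.service
[Unit]
Description=Daily full system upgrade

[Service]
Type=oneshot
ExecStart=/usr/bin/pacman -Syu --noconfirm
```

```
# /etc/systemd/system/update.timer
[Unit]
Description=Run the daily upgrade service

[Timer]
OnCalendar=*-*-* 06:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `systemctl enable --now update.timer`.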
Core network infra is typically redundant such that supervisor modules can be upgraded “hot”. Leaf networking attachment to the hosts should be redundant such that taking out a single path to apply updates to half the fabric should similarly be a non-event.
For my home network I do it once a year. My NanoPi R4S and TP-Link access points all run OpenWrt, but they put out updates so quickly, and the update process is not seamless, so it ends up becoming a pain.
Docker and Watchtower solve this problem for me. Automated updates sound bad, but updates rarely go wrong. Even if one does, the risk of being compromised far outweighs the happy feeling of 99.999%-uptime green bars.
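For reference, Watchtower (containrrr/watchtower) runs as a container with access to the Docker socket; the daily poll interval here is an arbitrary choice:

```
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --interval 86400
```

It then pulls newer images for your running containers and restarts them in place.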
I read this and my first thought is that when it comes to network security we're doing it all wrong.
Every one of the mitigations relies on fallible-by-design human beings to do a perfect job at closing every potential security hole, often using obscure wizard-level knowledge about the system they're working with.
Does anybody trust that ACLs will always be perfectly maintained? That credentials will always be kept up to date? That patch management will always be timely and comprehensive? You can try as hard as you want - or as hard as your budget allows - but in real life you'll never patch all the potential holes. At any given time we - the royal organizational we - don't even know all the potential holes.
I think we need to treat network security like nuclear weapons. Networks need to be fail-safe. The reason nuclear weapons have so many fail-safe features is that their history is rife with failures traceable to human error.
What does fail-safe even mean in network security? I have no idea, but I think the question is worth asking. How do we remove humans from the security loop? How do we make networks smarter, self-securing, and fail-safe?
In my (slightly dystopian) imagination the network needs to be some AI overlord model which controls it all and asks users in plain language what they want to do and decides what to allow based on the user's moral character. Then it uses its omniscient eye to surveil everything and shut down any transgressive actions.
Because clearly human beings are not up to this task.
But seriously is there any research on autonomous and self securing networks out there?
> The presence of easily crackable passwords on a network generally stems from a lack of password length (i.e., shorter than 15 characters) and randomness (i.e., is not unique or can be guessed).
Randomness I get, but:
log2(72^15)~92
(alphanumeric including some symbols) and even only all lowercase:
log2(26^15)~70
I still think searching a 2^70 space is pretty hard? (Making big assumptions: no rainbow tables (proper salt), no daft NTLM 15=8+7 splitting, actually random characters.)
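The arithmetic checks out; a few lines of Python to reproduce it (the 1e12 hashes/sec cracking rate at the end is my own illustrative figure):

```python
import math

# 15 random characters over a 72-symbol alphabet (letters, digits, some symbols)
print(15 * math.log2(72))  # ~92.5 bits

# 15 random lowercase letters only
print(15 * math.log2(26))  # ~70.5 bits

# For scale: years to exhaust a 2^70 space at 10^12 guesses/second
print(2**70 / 1e12 / 86400 / 365)  # ~37 years
```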
A five word passphrase from a moderate length dictionary is like 80 bits, I don't understand why passphrases are not recommended in place of passwords more often.
The problem is that as soon as passphrases are common, it isn't 80 bits anymore, it's however many words you choose. It just becomes a slightly harder dictionary attack.
…no? If you follow the scheme the person is suggesting, 80 bits of entropy is 80 bits, and it will take 80 bits "worth" of brute-force searching to crack a passphrase during such a search.
If you're literally choosing "correct-horse-battery-staple", yeah, it's just a slightly harder dictionary attack, but that's not the suggestion being made, and it's no different from choosing hunter2 as a password.
The idea of the "Correct Horse Battery Staple" password is that it should withstand a reasonably hard dictionary attack: 2k candidates ^ 4 choices ~= 44 bits. You could do eight words drawn at random from a 2k dictionary, and that's ~87 bits in theory.
Typical average typing speed is 40 wpm, which means 1.5s per word, or 7.5s for five words; 15s is only double that. And that's for meaningful, non-random sentences typed on desktop keyboards with reasonable concentration.
Happy to see "Lack of Phishing-Resistant MFA" added to the top 10 here. Moving away from SMS or push-to-app MFA to something like a YubiKey is a major upgrade to operational security.
The most common misconfiguration in your democracy is creating an agency with the mandate to spy on everyone (friends and foes alike), defeat all encryption, generally make information security always have at least one flaw. Hopefully one the “good guys” are able to keep private. And one they are able to avoid misusing. And not get caught misusing.
| 10. Unrestricted Code Execution
| If unverified programs are allowed to execute on hosts, a threat actor can run arbitrary, malicious payloads within a network.
The list was good, but this last one reads more like endpoint security.
Who needs service accounts anyways? First thing I do on a new server is kill the firewall, open up ssh to root, and generally just set the password to root as well. NSA can't touch this.