run services as a normal user and confine them using AppArmor (or SELinux if you are a masochist)
the NSA will probably hack you anyway, but that drive-by script kiddie who kills your company JIRA instance by DDoSing some game server while exploiting some old WordPress plugin can be stopped by AppArmor!
What's missing is some kind of kernel live patching - if the person on your machine is not a script kiddie, that WordPress shell is a nice tool to elevate to root - your kernel is rotting and exploits are plentiful... so consider some kind of live patching if you are paranoid.
Also no discussion of updates, or the need to restart services to reload updated .so files (like OpenSSL)...
You can obey that guide, be busy with life, and some skiddy roots your box anyway...
so /rant - there must be better guides? I found the CIS Benchmarks (https://www.cisecurity.org) a mixed bag - is there anything better out there?
Lists like this exist to give administrators and their managers (1) a false sense of security when implementing it and (2) a subconscious excuse not to actually have to think about what you actually have to defend against. The list of topics is almost entirely useless if you are running any sort of containers on top of your Linux install, just as one example.
I'm sorry to sound so harsh, but I sincerely detest the security-by-checklist approach. It barely protects against any real-world threats, needlessly complicates matters, and gives credence to the perception that "security slows things down" through over-engineered solutions to non-existent problems. Moreover, it's a subconscious excuse to stop thinking about one's actual risks. Reasoning about actual threat models may not be easy, but it is what one should be doing.
There's nothing "practical" about this at all, in my view.
Checklists are essential (or are becoming so) in all professions that require a level of safety or assurance in their operations. They don't replace careful thought and action, but they do supplement it by making sure there's a minimum set of items that were checked and not omitted because someone thought they didn't apply when they did, or someone just plain forgot. A checklist is a good way to start some rational thinking about what's required for your specific case (especially if it's so restrictive that you have to, and are expected to, selectively alter portions of the checklist just so your current use case functions).
Checklists are well known to save lives in some professions (such as aviation), and are being applied to others even where there's push-back (surgery and emergency-room operations) because the benefits are just so large.
I think it's fair to dislike how people and organizations adopt a checklist in lieu of careful thought about security, and to dislike poorly defined and reasoned checklists themselves, but I for one would be much happier, and feel much safer, if security checklists were much more common overall.
"Linux security" is not a fixed problem space. Not at all. And that's precisely my problem with this - this checklist pretends that it's a fixed problem space, and therefore grossly misrepresents the problem.
There are plenty of things in Linux security that are static solutions that can be employed almost all the time, such as not allowing direct access to root accounts, always running a local firewall, making sure remote services aren't run as root without dropping privileges, etc.
Would've been so much easier to repair that machine if the boot partition wasn't so weird.
But threats occur on a continuum - past a point (focused organised crime / nation-state actors) you will lose... but there is still a lot of low-hanging fruit you can win on.
It looks like a work in progress. SELinux, updates, deployment and many other topics are in the table of contents but not present in the document. (Yet?)
It's got some context, which is better than almost any other guide. Ideally they'd include the threat models applicable to each point, but that could come in the future (also, you can send PRs).
Really, fire protection and locking racks alongside the configuration of Nginx and Apache in the same guide? If I am going to harden Apache, I am going to find an Apache-only guide.
Data center HVAC engineer here. The part about tight tolerances is not entirely accurate. IT equipment can handle a reasonably wide range of temperatures. The ASHRAE guidelines have a range of 18 to 27 °C for all classes of equipment.
What is an issue is rapid change between temperatures. For a data center with certain kinds of devices (e.g. tape storage), ASHRAE limits this rate of change to 5 °C within a 15-minute period, and obviously you also want to avoid any situation where condensation could occur or static could build up (high or low relative humidity).
Data centers do try to keep the temp stable towards the top of the thermal envelope for economic reasons but there's always some fluctuation due to server load.
ASHRAE TC9.9, Data Center Power Equipment Thermal Guidelines and Best Practices: http://tc0909.ashraetcs.org/documents/ashrae_tc0909_power_wh...
I stuck a camera on a stick at the southern boundary of my property, about three feet above the stream. The stream rose four feet for a while. I moved the camera.
Now, 0 to 80C. That is just plain odd. Here (southern UK) the temp range is something like -10 to +35C. Nowhere has 0-80C apart from a weird oven.
Rapid changes in temperature and interactions with certain ranges of humidity can be interesting.
Ah, the 'Aliens' threat model.
Surely if we grope everybody's balls/boobs on their way in, and have a secret list of "people with names that are vaguely similar to middle-eastern-sounding names" that we won't let in, that'd make up for the walls that you can just crawl under or climb over? Right?
Don’t make your security encourage legitimate users to work around it due to pointless friction
Smart security allows users to do what they need to do efficiently and safely.
This unfortunately leaves a disconnect between the people who harden (who might actually hear about issues), and the people who write. Even if the writers do hear, it won't be implemented until the next revision.
the password rules force you to choose [heuristically] guessable passwords, therefore they must be changed every 90 days. simple!
ssh keys instead of passwords are a good example of better security and more convenience (for the most common use cases).
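A minimal sketch of getting key-based login going, assuming OpenSSH (the key path and the server name are placeholders):

```shell
# generate an Ed25519 keypair; in real use, protect it with a passphrase
# (the empty -N '' here is only to keep the example non-interactive)
mkdir -p /tmp/demo-keys
ssh-keygen -t ed25519 -N '' -f /tmp/demo-keys/id_ed25519 -C 'demo key' -q

# push the public key into the server's ~/.ssh/authorized_keys
# (placeholder host, commented out so the sketch runs standalone):
#   ssh-copy-id -i /tmp/demo-keys/id_ed25519.pub user@server.example.com

# the public half is a single line: key type, base64 blob, comment
cut -d' ' -f1 /tmp/demo-keys/id_ed25519.pub   # prints "ssh-ed25519"
```

Once the key is on the server, an ssh-agent (or a `~/.ssh/config` entry pointing at the key) makes logins both passwordless and phishing-resistant.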
It'd be nice if more "security improvements" came with ways to make them convenience improvements too...
Good security measures are like this. Add sandboxes so you can let users do what they want. Add authentication so people know who they're talking to. Support security keys so people don't have to worry as much about being phished. And so forth.
Note: This is an official Debian manual, not some random third party.
Also, first time I hear about polyinstantiated directories, so TIL.
1. Run CentOS or Red Hat
2. Install openscap-workbench
3. Use the CentOS STIG and choose a profile (I recommend the US govt configuration base)
4. Uncheck the firewall rules (they set it to deny all incoming; change to DMZ with basic rules)
5. Click remediate and apply.
6. OpenSCAP does the work for you to harden the system
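The GUI steps above have a command-line equivalent in oscap; a sketch, assuming the scap-security-guide content package (the datastream filename and profile id vary by release, so list them first):

```shell
# discover the profiles shipped in the datastream
oscap info /usr/share/xml/scap/ssg/content/ssg-centos7-ds.xml

# evaluate a profile and apply automatic remediation (as root)
oscap xccdf eval \
    --profile xccdf_org.ssgproject.content_profile_stig \
    --remediate \
    --report /root/scap-report.html \
    /usr/share/xml/scap/ssg/content/ssg-centos7-ds.xml
```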
You'll have to use other security-appropriate tools for the appropriate servers, but you'll know each service and its ramifications. I know that MySQL has a comprehensive security script (mysql_secure_installation) to prepare it. Other tools have similar built-in functions.
Also note you can download Nessus and get a 7-day free trial as well. It's not perfect, since the ticket price is $2400/yr. You could also use OpenSCAP for compliance, and Metasploit as a substitute for application testing. There are also web suites like Burp and OWASP ZAP.
But regardless you pay or not for automated testing, you need something to automatically find bad things so you can fix them.
Also, after you do "remediate" a machine, be sure to allow plenty of time to find and fix everything that's now broken.
Not really all that different from what goes on in the Microsoft world with WSUS/Appstores/etc. Just the implementation is different and much more customizable.
That seems very inefficient and friction prone. We have stuff like the nix package manager that can install stuff locally without sudo. It also has other nice features from what I've read. What I'm saying is that we need general purpose distros where you don't need sudo to use them.
I designed such a system: it runs with permissions appropriate to the task, is modular, and accomplishes the provisioning via 'job' files submitted to a service endpoint. A user describes the software they want installed. The SA performs due diligence vetting the software, designs a standard install module, then generates a skeleton job file and provides it to the requesting user to fill in the necessary details and submit. Wash, rinse, repeat.
That's the way it's done in general: user makes a request, the request is approved, action happens. I was noting that a Linux system does not get in the way of that workflow by being a Linux system. If the box belongs to a company, then the company gets to decide what is on it.
If the box belongs to the end user then obviously root rights are in order. However, my wife never gets to use anything requiring root rights on her Arch Linux laptop and does not care and would not know what it is anyway. I keep her laptop functioning effectively by watching disc space and other metrics and regularly patching/updating it for her.
At my university we still had real UNIX servers; no way IT could allow everyone to install whatever they felt like.
And the local PCs with Slackware/RedHat would get booted via tftp.
they all have enterprise support and onprem repos
we also had a bunch of scanners probing for common php scripts on the http server (we don't run php). simply returning a 404 header and disabling logging in nginx for \.php$ uris cut down a bunch of garbage (mostly from China). additionally, using ngx_http_limit_req_module reduced the effects of numerous DoS-type scanners.
so far, this + iptables for only 80/443/ssh has been a superior solution to installing fail2ban, which continually scans bloated log files and performs constant iptables banning/unbanning foo.
clean logs are essential, not just nice to haves.
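The nginx side of that can be sketched roughly like this (the regex, zone size and rates here are illustrative, not the poster's exact config):

```nginx
# in the http {} context: one rate-limit bucket per client IP
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    # ...

    # answer PHP probe traffic with a bare 404 and keep it out of the logs
    location ~ \.php$ {
        access_log off;
        return 404;
    }

    # apply the rate limit to everything else
    limit_req zone=perip burst=20 nodelay;
}
```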
"Log noise" is a legitimate security concern all by itself. It is actually disappointing how rarely it is discussed, and how in this world of "AI everywhere" there's no great smart AI log analysers to reduce the signal:noise.
Moving the SSH port might "only" reduce noise, but that's a legitimate security goal in its own right.
That's because after many people learn about not using "security through obscurity" they apply the rule without thinking as if obscurity never has any benefit for anything.
For instance, if I (or my software anyway) see a bunch of sshd login attempts from some IP, and then that IP decides to try imap ... yeah, that's getting insta-blocked.
And that's hard to do if you decide to just ignore the sshd attempts.
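One off-the-shelf way to get that cross-service escalation is fail2ban's recidive jail, which watches fail2ban's own log and hands out long bans to IPs already banned by any other jail. A sketch, assuming a stock fail2ban install (the times are illustrative; newer fail2ban versions accept suffixes like 1w, older ones want seconds):

```ini
# /etc/fail2ban/jail.local
[recidive]
enabled  = true
logpath  = /var/log/fail2ban.log
# long ban for repeat offenders across services,
# counted over a one-day window
bantime  = 1w
findtime = 1d
maxretry = 3
```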
SSH key-based logins are so much more secure, and convenient as heck once you invest the time to learn how they work.
However, indeed. Minimized log noise is great for analyzing anomalies manually.
Putting SSH on a nonstandard port is pretty silly.
what's the alternative, while still keeping ssh logs that aren't filled with thousands of login attempts from bots per day? afaik there isn't any solution that is both as simple and as effective as changing the port.
Last commit June 2012? Does it work with modern distros?
Do you really think making firewalling decisions based on unstructured attacker-controlled log data is a good idea?
Many unexpected things can happen, as a simple example SSH can generate log entries like this
Jan 30 17:37:04 server sshd: Invalid user root from 127.0.0.1 from 10.0.0.1
Changing SSH to a non-standard port is an inelegant solution and doesn't actually bolster security.
If your environment contains multiple ssh servers running on non-standard ports you would not notice when something out of the ordinary happens.
Something listening to an unexpected port should be something you must look into, not shrug at.
Moving to a non-standard port means more sophisticated discovery, and possibly a more sophisticated attempt.
Likewise for VPN stacks: https://nvd.nist.gov/vuln/detail/CVE-2017-7521
If you're not prepared to expose a service to the world, you probably shouldn't run it at all. Ad hoc, non-standard configurations add substantial complexity and maintenance burdens. Complexity is the enemy of security, and having less time to manage more complex configurations is not a good recipe.
If you really need a service, then choose the best one and move on. The rule of thumb is that if you can reach a service, you should assume anybody else can, as well. This is especially true regarding SSH. I've seen plenty of servers p0wned via SSH, but never by breaking SSH. Instead the vector was always through an SSH user's computer infected by malware.
You want secure SSH? Disable password authentication and force everybody to use smartcard authentication like a Yubikey. I do rate limit SSH access using OpenBSD PF, but only because the authentication failures fill up and pollute the logs.
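The sshd_config side of "disable password authentication" is small (a fragment; the smartcard/Yubikey provisioning itself is separate):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
```

Reload sshd afterwards, and keep an existing session open while you test a fresh login so you can't lock yourself out.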
This is not OpenSSL. Language alone does not make a product insecure. For practical purposes it is certainly not just a matter of when.
Contrast that with your VPN product, which probably falls over if someone looks at it funnily. Unless your product is called Wireguard, and we are ten years in the future, perhaps.
it would be, if it were meant to be comprehensive.
I'm not saying decrease your supported algorithms to the point where you break things you're using, but you'd be surprised at how much you can get away with not having and never notice it.
At the end of the day this means all those horrible libssh bots and worms can't even begin an SSH session with you, much less attempt any sort of exploitation or bruteforcing. It still generates an entry in your logs (key exchange algorithm mismatch or the like), but decent syslog daemons can filter out such messages by regexps, and redirect them to another debugging log file that you hardly check unless you're having an issue connecting somewhere. Metalog is an example.
It stopped my logs from getting splattered with failed attempts.
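A sketch of that kind of algorithm trimming in sshd_config, assuming a reasonably modern OpenSSH (the exact lists are a matter of taste and of which clients you must support):

```
# /etc/ssh/sshd_config -- modern-only algorithms; old libssh bots
# fail at the key exchange, before authentication even starts
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
```

`ssh -Q kex` (and `-Q cipher`, `-Q mac`) lists what your build supports before you trim.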
I'm of the opinion that if an attacker gets a local account then I'm already hosed given the regularity of local privilege escalation CVE's
EDITED TO ADD: I wholeheartedly agree with the sentiment though.
Or you could use the standard port, turn off passwords, and wait for the sun to burn out before someone brute forces their way in.
How do you ssh-copy-id without some allowance for passworded logins? Especially if you're doing it from such distant machines that you can't scp over a .ssh/config file.
Sure, I could have puppet push a ~/.ssh/config file everywhere, just seems more effort than it's worth. After all if a hacker wants to know if you are running ssh, it's not hard to scan all ports for ssh.
Why inconvenience yourself more than the attacker?
meh. Just means typing in the port in putty. Pick an easy port - 60k. I can live with that.
Seems like a big inconvenience for minimal extra security. Not like scanning for all open ports that have an ssh listening is hard.
There is a reasonable argument against using tcp for the tunnel instead of udp due to what can happen to latency when you have a tcp inside tcp stream when one or the other or both are dealing with retransmits (bear in mind that tcp has some guarantees about delivery, whereas udp does not).
In theory you can get an exponential stand off instead of linear due to tcp (say RDP or http) in tcp (openvpn). In practice I find that it does not generally matter these days and I can generally stream BBC iPlayer when abroad with minimal fuss over OpenVPN.
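In OpenVPN terms that choice is just the proto directive (a fragment, not a complete config):

```
# prefer UDP for the tunnel to avoid TCP-in-TCP retransmit pile-ups
proto udp
# proto tcp   # fallback for networks that block UDP, at the latency cost above
```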
I actually have the port-knock action on a blank website with an unusual subdomain. So when we need it, I pull up the website and can then VPN in temporarily.
Except no. Fewer attempts translate directly into less risk.
All attempts have a greater-than-zero chance of succeeding. Honestly, I'm struggling to see how this concept is even remotely contentious or how someone could credibly argue against it.
If the number of brute force attempts drops substantially then the chances of success drop too. This is true for a high security setup and a low security setup. In both cases there is going to be a proportionate drop.
To be honest, I consider things like the BMC and whatever goes on inside the BIOS to be far greater, more opaque problems than some of the more esoteric security "assurances" like measured boot.
Don't forget that your own laptop, from where you log in to administer all this, is an attacker's best chance of breaching all this security. So ensure there is good role separation for administrators and good security hygiene on all the individual machines (use password managers, do only official work on these laptops, don't visit random websites, run vetted, authorised packages only).
For highly sensitive stuff (like CA or any root of your chain of trust), setup a SCIF and use only terminals inside a SCIF to access/administer them.
Securing the chain of trust is a critical element of securing any system. If the root of the chain of trust is not securely bootstrapped and continuously verified to be still secure, then all bets are off.
The provenance and integrity of binaries, config and data running on your servers should be fully known – their integrity verified with cryptographic methods tied to your chain of trust.
There should be no way for these entities to be mutated without cryptographic hardware tokens also being broken. This raises the security bar significantly (mostly targeted supply-chain attacks mounted by highly motivated, likely state, actors can overcome this bar).
Without this, you are always standing on shaky suspicious ground no matter how much you harden the layers above.
In proper devops teams, there is not even SSH access. The system is deployed from an image and configures itself, and is killed in scaling events.
Grsecurity is for hardcore stuff. Or openbsd, ultimate solution :)
That'll teach'em not to make mistaeks.
chmod -R 600 /
I wouldn't recommend doing that.
Especially after the next reboot.
Also, from a practical point of view: sshguard is very useful and easy to install.
This is a nice dream, but absolutely not practical in the real world. All the major hardware vendors have an equivalent, e.g. CIMC for Cisco UCS, DRAC for Dell, etc.
Not to mention the impracticality of not having these when your equipment is locked inside a cage in a datacenter on the other side of town.
But yeah, good luck convincing anyone that it's a good idea to run me_cleaner on production hardware.
I know this because the original developer of our system used a /boot partition of 500 MB. That's right, 500 MB. I noped out of that. Don't even get me started on the 2 GB total disk space for everything else on our API server. This was built in 2013, mind you - plenty of disk space by then...
There would be a permissioning matrix where I can list allow-all-permissions and deny-all-permissions by default and then change the granularity
(cue the "but I did it with a Yubikey" responses)
The default configuration should work fine for most.
I know on centos, I put this in cron 2x a day:
yum update-minimal --security && curl https://logserver/api/$(hostname)/$(needs-restarting -r)
It grabs all the security based updates, and installs only those. And it reports the reboot required status. I watch that and make a reboot ticket if it hits the true endpoint.
Edit: the server part is just Node-red watching for those endpoint GETs. Once it sees a true, it fires a Jira ticket in the Change Management space for a security based reboot.
Also, I thought the --security flag did not have any effect on CentOS?
For more security-focused situations, the approach is as follows.
What about compiling everything with GCC security enabled like Google does on Android?
There is a script which reads elf binaries and outputs security info: https://unix.stackexchange.com/a/89214
Thanks for the script, I wasn't aware of it.
So yeah, your binaries need to be built with CFLAGS="-fPIE" and linked with LDFLAGS="-pie".
There are tools which will verify the situation for you, but not force a specific solution.
(disclosure: I'm core maintainer of dev-sec)