This is something that really needs to be said more in amateur circles (i.e. self-hosters and homelabbers). For these scenarios I think it's even worse, though, because it's a case of insecurity through transparency. People don't realize that all ACME/Let's Encrypt certificates are published in transparency logs that get scanned constantly, giving attackers a shiny target. I saw a reddit post recently (which I won't link for the victims' sakes) where someone had searched for Heimdall (a popular dashboard) in a web-security-oriented search engine and found a bunch of insecure publicly facing instances, some of which contained credentials.
Fixing this would be as simple as using wildcard certs, wildcard DNS, and unique subdomains. Configure your web server to 404 any request without a valid subdomain (esp. www.domain.tld or domain.tld) and you've avoided nearly every web-based scan, because the attacker doesn't know the host name. This is pure obscurity, but it definitely works.
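With nginx, for instance, the catch-all can be a minimal default server (a sketch; the cert paths are placeholders):

```
# Catch-all for any request whose Host/SNI doesn't match a configured vhost.
# A cert is still needed to complete the TLS handshake; the wildcard cert
# reveals only the apex domain, not which subdomains exist.
server {
    listen 443 ssl default_server;
    server_name _;

    ssl_certificate     /etc/ssl/wildcard.example.crt;  # hypothetical paths
    ssl_certificate_key /etc/ssl/wildcard.example.key;

    return 404;
}
```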
Yes, the host name can leak through SNI, but if someone is monitoring your traffic, you probably need something more sophisticated anyway.
If you set up a wildcard cert and configure your server to reject invalid subdomains, you've already done more work than it takes to actually secure your site. If you've learned to do all that, you'd just secure your site. There is no time benefit to the obscurity approach.
Why do people insist that "security" is binary, i.e. that something is either secure or not? A residential building has way less security than a military base, yet the building can be considered secured while the base is not.
The article points to one of the core equations every business should embrace: `cost_of_risk = unit_probability * unit_impact`.
If some measure can improve security metrics (time to discover, time to break, skill/power to break, etc.) then it can be considered a security measure. An SSH server running on a non-default port is not much more secure than an otherwise identical server, but the probability of a random attack is lower.
> you've already done more work than it takes to actually secure your site
Finding and plugging RCE holes in every dependency you have deployed is way harder than making the host less discoverable. If running security-patched software and having non-default, hard-to-guess passwords are security measures that lower the chance of exploitation, why can't non-default, hard-to-discover hostnames/ports be considered security measures too, if they likewise reduce the likelihood of exploitation? Relying solely on `apt upgrade` is about as insecure as merely exposing software on non-default ports.
Security is not binary. Setting up a firewall and using a 192-bit basic-auth password for all your web properties will, however, stop maybe 99.9% of the attacks that wildcard cert + subdomains would stop, and takes, like, two hours. If you can do both, sure, go for it. But there is a certain truth in applying the traditional, normal approach first.
Weird, I do not see the cost-benefit analysis in the OP that I am apparently missing. The OP discards the obscured-hostname measure, claiming it takes "more work than it takes to actually secure your site".
To me this reads as if there were some inherently secure approach, and then this obscure approach that only slightly increases security. I cannot agree with that assessment.
If you're going to talk about vulnerabilities that you don't know about yet, then you should be looking at how you're running old kernel versions and unpatched software. Hiding your hostname doesn't protect you against anything other than insecure application code. An old httpd or openssl will still be exposed.
There are plenty of ways that your secret hostname can leak. It's just a weak, low-entropy, guessable password that can't really be rotated. And moreover, if you're going to choose one thing to do (or time box your efforts around security), it's the highest complexity I can think of for almost no real return. I mean, if you're so concerned about leaking the hostname from your HTTPS cert because your site is that insecure, why are you even using HTTPS? What are you even protecting?
Is that the best way to spend an hour securing your site? Even just slapping in the six lines to put hard-coded HTTP auth credentials on your nginx virtual host will do more than that.
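Roughly those six lines, for reference (a sketch; the htpasswd path and backend are placeholders):

```
# nginx vhost fragment: basic auth in front of everything
server {
    listen 443 ssl;
    server_name app.example.com;  # hypothetical

    auth_basic           "restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;  # created with: htpasswd -c /etc/nginx/.htpasswd someuser

    location / {
        proxy_pass http://127.0.0.1:8080;  # hypothetical backend
    }
}
```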
In this particular case, the community I'm talking about is primarily (exclusively) using their http server as reverse proxy, pointing to various web-based backends. They have to set up subdomains and certs anyways, so doing it with wildcards is actually less work.
Yes, they can and should also set up things like fail2ban or crowdsec, add geo-based ip blocking, etc, but many don't because it's not fundamentally required to make their services work. Even harder still, or even impossible, is making sure all of the backend services themselves are secure.
Is a non-public hostname “obscurity” or is it a form of password/credential? Or is a credential actually “obscurity”, only so vast an attacker can’t possibly have enough electricity to shine a light on even a remotely relevant part of it.
It’s all about risk/probabilities in my view. How likely is it to find the hostname? How likely is it to find the password?
The only real difference is that there are best practices ensuring passwords are never logged in clear text anywhere, whereas that's not the case for a hostname.
> Configure your web server to 404 any request without a valid subdomain (esp. www.domain.tld or domain.tld) and you've avoided nearly every web-based scan because the attacker doesn't know the host name.
But you haven't actually avoided it, you kicked the can down the road at best. Sometimes that's useful but it's not a sound general strategy.
If someone is doing that, you're in the realm of targeted attacks instead of scans, which is outside the scope of my original comment. It's similar to someone monitoring your traffic; as I already said, if that's the case you need more anyways.
Security is not a choose-your-own buffet where you get to think only about scans and ignore other common attack vectors. Botnets are common enough that targeted attacks like I described are just as common as scans, so you always need more anyway.
Configuring your host to return 404 on invalid subdomains is just not a general solution; at best it buys you some time until attackers find the subdomain, i.e. kicking the can down the road, like I originally said.
No, I'm saying that's one of many behaviours. They mine domains and URLs scraped from email addresses, email headers and bodies, online content, and more. Your site is not "secure" when that security can be circumvented by someone pasting a URL into an email.
You can actually buy "Passive DNS" records. Big DNS providers collect all the answers they learned while serving, deliberately without recording who asked and the answers are aggregated and available for purchase.
So if Sarah in accounts once went to secret-webserver.internal.example.com from her laptop at home before turning on the VPN by mistake, her upstream DNS provider will tell any attackers with some $$$ that secret-webserver.internal.example.com existed, when it existed, what the A or AAAA records said and so on.
Targeted attacks will know about secret-webserver.internal.example.com even though only *.internal.example.com is listed in the CT logs.
I do. I see no SSH attempts on the active high port. None. It may only be a matter of time of course. I continue to see French metric tonnes of attempts to subvert mysql and the Web.
We've been on the shifted port for more than 5 years.
I can confirm it makes a huge difference for SIP as well. Toll fraud attempts are not making the logs completely unreadable anymore when using a non-standard port.
> People don't realize that all ACME/Let's Encrypt certificates are published in transparency logs that get scanned constantly, giving attackers a shiny target.
FWIW, this is true of all public certificates right now, regardless of issuance method (ACME, manual, etc.) or CA (ZeroSSL, LE, DigiCert, etc.). I don't point that out to be pedantic, just to emphasize that that information is going to be out there when someone grabs a cert, regardless of how they do it. :)
> I saw a reddit post recently (which I won't link for the victims' sakes) where someone had searched for Heimdall (a popular dashboard) in a web-security-oriented search engine and found a bunch of insecure publicly facing instances, some of which contained credentials.
There were some instances recently of the same thing happening with WordPress installations, as the default WordPress installer would go get itself a Let's Encrypt certificate before the user had completed setup and set an admin password on the install. No vulnerability necessary, just hop onto it and set the admin password. I suspect this is going to be a frequently discovered vulnerability as more things bundle "get me an LE/ZeroSSL/etc. cert" into their software/OS installers.
I have an idea I want to run by you, since you seem to understand the importance of this better than some others (at least as far as my undereducated opinion on network security goes; I'm a hobbyist, self-taught in most things).
So let's say you are going to run a home server set up as read-only to the outside world, but write-capable through a separate port connected only to a laptop that has no internet access (or very restricted access), which also has the nicety of being so obsolete it doesn't have IME or any other Intel idiocy backdoors attached to it.
Would you still put a hardware firewall between each of these connections? And if so, would you also run it through a VPN on the read side of the server?
I personally don't trust VPNs, since I see them as middlemen you pay to pretend they don't keep logs of anything. Of course there is always the whole argument of 'not having anything to hide, so no worries', but I see it as false, since the whole point of using a VPN is to hide your bits from attackers and snoops, even if it's legitimate/legal data.
So, what would you do to avoid using a VPN, provided you can't run your own VPN instance somewhere, being a bit of a cheapskate? Would some basic OpenWRT firewalled routers be enough for your purposes (and thus possibly mine), or would you go with some more complex setup where a person has to trust yet another company not to be trying to hijack data somehow?
Server intended:
- Opteron build, DDR3. 6 cores, hyperthreading (if any) disabled. All forms of speculation turned off. All that jazz.
- One NIC port set up for downloaded data only; no upload allowed.
- The other NIC port as the access point for SSH via the old laptop, set up for security purposes.
- Everything running on Linux, as much as possible. No Windows allowed.
"Write/Read-only" doesn't really make sense to me in this context. What services are you running? Are you just trying to lock SSH behind a single laptop? That seems like overkill to me.
If your laptop and your server are on the same network, which presumably they'd have to be if the laptop has no internet access, you shouldn't need any kind of firewall or VPN.
I would be hosting an FTP service for my files, for my own use in other locations, so that would require some 'read' access from the network, which means that connection would have to have internet access somehow, hence the firewall and possibly a VPN. These are not incriminating files in any way, mind you. I'm just wary about things like packet injection and other sneaky practices that miscreants use.
I would also be hosting a webpage or two, for blog and possibly web-shop purposes. The blog would again be "read-only", but the web-shop would require some semblance of 'write' permissions available for users. So the blog would share the 'read-only' connection ideally. The Web-shop would share the write capable connection instead.
Finally, the laptop being able to SSH into the server is solely for security purposes, due to not wanting to use any form of IPMI because of some security concerns over it. I would instead be using a dedicated network card just for its purposes. This laptop would not connect to the internet through anything, even the server. No shared connections between the network NICs at all.
And I realize it may seem overkill to some people, but I don't care if it is overkill. It's when people get sloppy and cut corners that backdoors and security vulnerabilities arise. IMHO.
If I had a million dollars, I would have the most secure server in the world, lol.
The firewalls/VPNs are essentially there to act as a stop-gap measure just in case anyone decides to poke their nose in where it doesn't belong. Partially to catch them in the act, partially to stop them in the act. Ideally.
Here is a simple text explanation of sorts of my setup I have in mind.
- NIC 1: Blog/FTP, read-only. No copying files to the FTP, just copying files from it. You can only read the blog, not comment, or anything like logging in. The only person who ever needs to 'log in' is me, from my laptop.
- NIC 2: Web-shop and maaaybe a game server for testing purposes. (Considering making a simple game that will need some net code tested in the future.) This will have full read and write capability, since it will need to. This is the network that will require all the extra firewalls and VPN connections, if I use them at all. The other one might be able to get away without them, but this one will need them in my mind. Logging in is definitely a thing on this part of the server.
This server will have (and maybe I should have mentioned this before) a virtualized instance for each service. This way I can sandbox each, and kill each sandbox if ever needed due to whatever malicious actions some dingus decided to do.
The laptop is essentially going to be my monitor, keyboard, and mouse; so I don't need to run multiple of each for yet another machine. (I have 2 desktops, and another laptop. I need to simplify things down a bit, even if this seems more complex, lol.)
All of this is getting its own intranet essentially, completely separated from my main internet connection. It will also be getting its own business connection instead with a static IP address for any sort of connections to the outside world. The only way my two networks will ever talk to each other, is either through the internet itself, or via a firewalled connection between the intranet I have setup, and my other computers. In this way, it will act like a local NAS for my other computers, but also for when I am out and about, and need a certain file suddenly.
I should also mention I tend to live with roommates, so I like having an extra layer of security here and there when doing so, since you never know when your roommate is going to try to do something sneaky. Like my current one who decided to give our password to the neighbors downstairs... and across the wall... Why? Because they lied to him and said they pay for the internet here too.(They don't.) Or so he claims. Quite frankly, I have found out rather recently because of this and some other things that he has a habitual need to lie and deflect. Fun stuff.
Again, this may all seem like overkill to some people, but I have long learned from experience that what one person considers overkill, another considers underkill. I would much rather do things to a point where people go "jeezus" than be the one going "ah damn".
With that note, there will be absolutely zero Windows operating systems on this machine, and any machine that directly connects to it, like my laptop, will also be running non-Windows environments.
The machines that do need to run Windows, due to things like my capture card from AVerMedia not supporting Linux basically at all, are going to be locked behind the firewalls and allowed to connect only to the basic internet connection I already have set up. Everything else is Linux. Everything. Even my 'other' laptop that currently has Windows on it only has it because it came with it. That changes, very soon.
And besides, you wanna see real overkill?
I'll be setting up my own version of Kali essentially on the first laptop for SSH and stuff into my server, so I can also do security audits. But it's either going to be Arch based, or Gentoo based. Why?
Because I don't trust the folk who made Kali, formerly known as BackTrack. Why?
Because they still use torrents, and not magnet files, to start. And while even Arch has a way to be used on Windows now; I can at least install it via Bash on my own without needing to use some pre-made packaged installation. Hence why I might move on to Gentoo.
And I realize that no OS is perfect, and security flaws exist everywhere.
That's why I am going overkill. Also, this is how I learn things. By doing them. And I basically want to learn how to make some of the most redundantly secure servers, so that people who come to me for my services get something they can trust isn't going to be easily hacked by some script kiddie.
The thesis of the article, that security through obscurity is underrated, is “because it has a low implementation cost and it usually works well.”
But I contest both of those things. Common obscurity methods provide low benefit for the amount of work put in, relative to methods with a better foundation.
One of the best examples of this is port knocking, a resurging fad in self‐hosting circles, that is completely beaten both in simplicity and in actual protection by putting your SSH server behind WireGuard.
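For comparison's sake, the WireGuard side is roughly this (a minimal sketch with placeholder keys and addresses), after which sshd only listens on the tunnel address and the only externally visible thing is a UDP port that stays silent to unauthenticated probes:

```
# /etc/wireguard/wg0.conf on the server (wg-quick format, placeholders throughout)
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# the laptop you log in from
PublicKey  = <laptop-public-key>
AllowedIPs = 10.8.0.2/32
```

Then `ListenAddress 10.8.0.1` in sshd_config and you're done.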
Even the example in the article seems ridiculous. I always advocate disabling SSH passwords and using FIDO‐backed SSH keys instead, but of course people will complain that they lose the ability to log in from arbitrary machines (well worth it in my opinion, but fine). So rather than using SSH with a weak password on a non‐default port, why not use SSH with a strong password on a default port, which provides more entropy and also some protection against attacks by a local user, without having to remember weird port numbers?
Yep, and password auth can also be augmented with additional PAM modules (like pam_oath and/or pam_yubico), as long as you don't configure them in a way that allows user enumeration.
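If anyone wants to try, the pam_oath variant looks roughly like this (a sketch from memory, assuming oath-toolkit's pam_oath; check your distro's docs before relying on it):

```
# /etc/pam.d/sshd -- add above the usual auth lines
auth required pam_oath.so usersfile=/etc/users.oath window=20

# and in sshd_config:
#   UsePAM yes
#   KbdInteractiveAuthentication yes
```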
Really the only thing you get by changing the port is less log spam.
If your system is so poorly configured that an automated drive-by attack by a bot would be successful then you're gonna get owned anyway if someone decides to target you.
I think reducing log spam is actually a great security outcome: if the only thing normally present in the log is my real logins, an attacker's attempt would stick out like a sore thumb.
Exactly. "Security by obscurity" is a badly defined term that security people use to name the practices that bring too little benefit for their implementation cost.
It's derogatory by definition, so it cannot be underrated. One can disagree about the evaluation of some specific practice, but the people who insist on doing that usually have a horrible track record and even completely wrong mental models (like using the Swiss cheese model for security, when it's only useful against Nature, not humans).
It's only worth it when it has real value and doesn't introduce undue operational overhead.
I've worked at multiple places where I was told by CISSPs (!) that we mustn't under any circumstances name the servers after what they do because then "the hackers would know what to attack" and that we must instead name them after superheroes or tequilas or some other such whimsy. I pointed out that getting LDAP bind credentials or the zone file from Active Directory is a tall order and that `nmap`ing the subnet for running services is ten times easier only to have the same tired rationale parroted back to me with an appeal to the authority of some 15-year-old security "best practices" document.
I've also had similarly-futile discussions about disabling ICMP, browser development tools, RDP/SSH timeouts, and a dozen other things that I've forgotten by now and at no point did I ever get a good reply when asked about why we were introducing so much pain for so little tactical gain.
To be clear I always make it a point to disable unused services, change account names, rename wp-admin, etc. because those aren't serious operational burdens and indeed reduce attack surface significantly, but at some point "defense in depth" becomes self-perpetuating box checkery designed to create security work streams. As somebody who cares about practical security _and_ delivering a good experience it's maddening.
Using a random port for SSH (or, even "better", port knocking) is a "clever trick", until you forget the one you used for that server that quietly runs and that you log into once every two years. Then it can range from minor annoyance to major PITA.
Security by obfuscation has the potential to confuse some (mostly dumb) attackers, but it also has the potential to confuse your future self, for VERY minor benefits (since everyone agrees that most of your security should not come from obfuscation anyway).
I would say that every attacker I've ever encountered was dumb in the sense that it's a bot just scanning around. For SSH, I'm amazed by two things: 1, how quickly a new server with 22 open on a public IP will be found and subjected to brute-force guessing, and 2, how changing the port (even to an obvious one like 2222) will eliminate all that noise.
I suppose you could say that the attackers are filtering out anyone who has done some basic hardening, but I suspect the truth is mostly more mundane - the attackers just aren't that motivated/clever; at least the ones who mass scan the internet trying to compromise ssh.
You can maintain a .ssh/config file with all the ports you used (and back it up), or always use the same custom port.
Anyway, it is mostly useless, but it can avoid all sorts of bots trying common logins against your SSH server, which consumes resources (even if minimal) and fills your logs.
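e.g. (host name and port made up):

```
# ~/.ssh/config -- per-host ports, so nobody has to remember them
Host backup-box
    HostName     backup.example.net
    Port         2222
    IdentityFile ~/.ssh/id_ed25519
```

After which plain `ssh backup-box` does the right thing.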
> You can maintain a .ssh/config file with all the ports you used (and back it up)
Once you start distributing config files to all your clients you might as well switch to keypair authentication, which so completely reduces risk of intrusion that leaving sshd on 22 simply does not matter.
Have a stock Kia? It's trivial to steal because their key fob sucks and they can all be stolen the same way.
Now, take that shitty Kia and personally wire a little toggle in the center console that unplugs the starter with a relay. All of a sudden no Kia thief will know how to steal it. Easily 100x less likely to be stolen than the fully standard car next to it.
Having at least one layer of obscurity should be standard practice. It makes the thief need to put in custom effort. Most people are lazy.
But that would be a 'real' security measure. Security by obscurity would be to park it 3 miles away where no one would think you parked it and relabel it as a Skoda.
No, it's a perfect example of security by obscurity. It's just a button, anyone can press it, no key or anything else needed. It's just different from the other cars, so it'll confuse a thief who is following a scripted runbook on "how to steal Kia". Hopefully enough that they move on to the next car.
Uh, no. While the basic risk formula is pretty much the standard one, the author fails to distinguish between the likelihood of a random or opportunistic attacker (the “most people” of the article) and a targeting/persistent attacker being deterred by a port number change.
If you have high value assets and are being targeted, changing the SSH port number has virtually no effect on likelihood. Blocking port scans?
Great, they will pay someone in your org $500 for an .ssh/config. Or $5000.
Changing SSH port numbers and the other mechanisms in this article are so much bike shedding.
Do the hard work first. Implement multiple layers, patching, monitoring, thresholds for automatic disconnect, etc.
(Aside: why do you even think the president is in that convoy? They may well be elsewhere, moved into the third suburban at the last possible invisible moment.)
I don't think the author is discouraging anyone from doing the "hard work", but rather encouraging them to do the easy work to further protect all that hard work at low added cost.
My issue with that is since people have to choose how to spend their time, they may opt to do the easy work first, for very little value, then never get to the hard work, because busy/overloaded.
The article oversells the value of this easy work and may lead some astray, lulling them into a false sense of security.
> (Aside: why do you even think the president is in that convoy? They may well be elsewhere, moved into the third suburban at the last possible invisible moment.)
Unless they are actively repelling an attack, I can guarantee you that the President is absolutely not anywhere except the Beast. All of the other vehicles in the motorcade are less armored, and do not carry the critical items the President may need, namely his blood. The standard operating procedure if the President's motorcade is attacked is to exfil the President while the CAT (Counter Assault Team) lays down massive amounts of suppressing fire. To that goal, the Beast is the safest vehicle for the President to be riding in.
Security through making things harder in some unquantifiable way for an attacker to exploit[1] is usually a waste of time (IMO) because there is no way to measure or even estimate with any kind of accuracy how much value it adds versus the costs of implementing and maintaining it. Maybe it will deter attackers forever, because you'll get lucky and no one will ever care enough to put in the effort. Maybe someone becomes obsessed with the thing you're trying to protect, and just for fun figures out how to bypass all of your work in a week, and publishes the result to Full Disclosure.
The author of the article cites a typical information security faux version of a real thing: calculating risk by multiplying impact by likelihood. Risk is a real field, with real data collected to make those estimations as accurate as possible. Insurance companies use complex actuarial tables, which is where the old saw about red cars having higher insurance premiums comes from. They really do collect massive volumes of data to make estimates of likelihood from.
In my field (information security) people who talk about likelihood are generally just guessing, or trusting their knee to ache when the haxxors are about to pwn the Gibson. There is no data, just someone guessing and plugging the number into a formula so that the result has the appearance of objectivity and science. Implementing controls that "make things harder" is a variation on that same theme.
If one wants a security control they can trust, they should pick the ones that have actual math behind them, like "using this random token[/key/whatever] means that an attacker would have to guess for literally one million years to make a successful request".
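To put rough numbers on that (mine, not the article's): a 128-bit random token, against an absurdly generous 10^12 guesses per second, holds out for

```
2^128 / 10^12 guesses/s ≈ 3.4 * 10^38 / 10^12
                        = 3.4 * 10^26 seconds
                        ≈ 10^19 years
```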
[1] of which security through obscurity is a subset.
A few simple obscurity tricks - 22 => 2222, /admin/ => /hq/, etc. - can save your ass if there's a remote zero-day, and the bad guys are busy scanning for low-hanging fruit while the good guys learn about it then scramble to patch.
With a well configured SSH server, 22=> 2222 is pointless, very much the definition of obscurity providing zero security.
OpenSSH is impeccably written software, widely battle-tested, and it's been literally forever since there was any significant vulnerability in it, despite it being one of the most widely deployed servers on the planet.
Enable key/cert only authentication and you are already effectively 100% secure.
Then, to double-down:
- Harden KexAlgorithms and Ciphers
- PermitRootLogin no
- Compression delayed
- PerSourceMaxStartups
- PerSourceNetBlockSize
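Concretely, something like this in sshd_config (a sketch; the PerSource* options need a fairly recent OpenSSH, and the algorithm lists should be checked against your version):

```
# /etc/ssh/sshd_config (sketch)
PasswordAuthentication no
PubkeyAuthentication   yes
PermitRootLogin        no
Compression            delayed  # older OpenSSH; newer versions only accept yes/no
KexAlgorithms          curve25519-sha256,curve25519-sha256@libssh.org
Ciphers                chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
PerSourceMaxStartups   3
PerSourceNetBlockSize  32:128   # aggregate per /32 (IPv4) and per /128 (IPv6)
```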
Okay, so are they going to live forever and keep writing impeccable software for eternity? What happens when they tire of their immortality, and decide to pass the torch?
When that day comes, a machine running sshd on a high port will have to be wiped, because how can you trust that nobody’s scanned your ports, exploited your server, and eliminated the traces?
When an OpenSSH zero‐day gets released, you can bet your bottom dollar people will be scanning full port ranges for SSH servers. And only one of them has to find you.
If that’s the scenario you’re worried about, don’t rely on obscure ports. Run your sshd behind a VPN.
Have you ever been responsible for security in a professional sense? Hiding attack surfaces is standard operating procedure. No one with a triple digit IQ seriously thinks any piece of software is going to be secure to infinity.
> OpenSSH is impeccably written software, widely battle-tested, and it's been literally forever since there was any significant vulnerability in it, despite it being one of the most widely deployed servers on the planet.
I know you're lying because it's not written in Rust.
> What part of "if there's a remote zero-day" you did not understand?
What part of "how about you look at the CVE list and see it's been YEARS/DECADES since the last severe vulnerability" do you not understand?
OpenSSH zero-day my ass .... if there were any, they would have been found and exploited by now.
OpenSSH is written by the same people as OpenBSD. Those guys are OBSESSIVE about writing secure and correct code and they have the track-record to prove it.
Fact is there are millions (billions ?) of OpenSSH servers out there exposed to the internet.
OpenSSH is not the way hackers are getting in my friend.
> how do you configure it, in order to not make it a DOS vector?
You use it in conjunction with a third parameter that I forgot to mention, namely MaxStartups.
MaxStartups introduces a degree of randomness to the whole process: you give it MaxStartups X:Y:Z, where:
- X = the number of unauthenticated connections ("n") at which random dropping begins
- Y = the percentage probability of dropping a new connection once n > X
- Z = the point at which all further unauthenticated connections are dropped
Additional parameter that may be of interest is LoginGraceTime.
Fun fact, MaxStartups/PerSourceMaxStartups/PerSourceNetBlockSize were specifically introduced >6.1 in order to add firepower to combatting unauthenticated connection DoS attacks.
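For reference, the stock default looks like this (values from memory; check `man sshd_config`):

```
# sshd_config
MaxStartups 10:30:100
# start dropping new unauthenticated connections with 30% probability once 10
# are pending, scaling linearly up to 100% at 100 pending connections
LoginGraceTime 30   # tightened from the default (120s)
```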
Putting random login pages at /wp-admin/ is great for farming IP ranges to block. They don't even need to go anywhere; all you need is a form with a login button, and bots will try to log in.
You should drop down a layer and catch this at your reverse proxy instead of publishing login pages to catch every permutation of 'wp' in URLs.
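e.g. an nginx location that short-circuits the whole family of probes (assuming, of course, that you don't actually run WordPress behind it):

```
# Drop WordPress probe traffic at the proxy without a response
location ~* /(wp-admin|wp-login\.php|wp-content|xmlrpc\.php) {
    return 444;  # nginx-specific: close the connection without replying
}
```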
Camouflage is absolutely at least as defensive as it is offensive. In the military context, you can defend your country by hiding, e.g. with a nuclear submarine, or with camo nets covering anti-aircraft guns, etc. Outside of the military context, just look at how camouflage is actually deployed in nature.
Isn't that because they don't have a more effective alternative? They're forced to rely on the security by obscurity of camo (ignoring the guns).
If you had an indestructible soldier, you could paint them bright pink and have them live-stream their position; it wouldn't matter, they're indestructible.
Isn't that true for software too though? You cannot ever rely 100% on any tool you use, so if you're serious about actually avoiding penetrations and not just about the academic discussion, you're better off using at least a bit of obscurity on top of your real security system.
Or the entire classification system. Why should you give the enemy a freebie? Obscurity is just another hurdle/layer that wastes an adversary's precious time.
Because at one time it was effective; then someone created IR optics, and at least for heat-generating/storing objects the effectiveness of camo dropped significantly.
Camo is still effective, say against poorly equipped soldiers, or ones whose division leader sold that equipment off to make a little extra income. But you had better understand your enemy well, or just putting up camo and then huddling together in a group with no further security will get you all killed at once.
Straw man. Nobody is arguing for ssh on 2222, with root:root for every server.
Also - acquiring / maintaining / training / using good IR equipment consumes far more resources & attention than basic camouflage does. Good troops do something called "field training", where they become quite familiar with how effective cover & camouflage can be. And they've figured out that the stakes on a battlefield are "death", not "deal with server being hacked".
obscurity is great when you have the ability to strike back. it's less good when you're a server that needs to withstand attacks and can't kill the attackers.
This reminds me of how the best spam prevention methods involve making your site's registration system different enough from everyone else's that bots can't fill in the forms automatically. Yeah, it's useless against a dedicated attacker that can write a program specifically to attack your site, but it makes common automated systems useless, and (if done by enough people overall) makes the act of posting spam more expensive overall.
You could probably say the same thing here. Yeah it won't work against an attacker who knows what they're doing, and no, it'll never work as your only or main line of security.
But it'd probably slow down or stop attacks by random script kiddies using bots bought from dodgy internet forums or what not. And those are a decent percentage of attacks a site or service receives now.
Making your username utka or changing the sshd port raises the cost to attack by a meaningful amount, but by simply requiring key based authentication you have raised the price of brute forcing your way in to practically $infinity. If the cost is already at $infinity it's pointless to take additional measures to raise it further as attackers will already have switched to trying to compromise your desktop instead to steal the key.
There are some things like anticheat / antivirus / DRM where there is no simple way to make it cost $infinity to break your protection. These kinds cases are where obscurity comes in handy in raising the attacker's costs.
The more I learn, the more I find that “best practice” isn’t always best. It should be called good defaults.
For example “Disable ssh password auth and use keys”. It depends on your threat model. A good password can be secure and may in some cases be more secure than a stolen laptop containing id_rsa. SSH keys are more convenient but rely on physical security. It should be discussed as a trade off.
But then you can protect against both keyloggers and stolen laptops by enabling TOTP 2FA. You can even require all three!
I have a bastion setup somewhere in my network that's locked behind either an SSH key or a password + TOTP token for when I lose access to all devices with a signed SSH certificate. All devices are encrypted and I don't lose sight of them in public so my threat model would include "the police" and "people violently breaking in and stealing my stuff" but a password isn't going to protect me from that.
A WordPress example from some time ago: don't have a user named 'admin', plus a plugin that generated a random 'login.php reroute', and some more that I can't remember now. A good example of obscurity on extremely popular software (at the time)? I'd imagine it would certainly bounce lots of hacking scripts.
Security through obscurity is fine if it's an additional layer in a well-thought-out security implementation. I've built a bespoke Node.js site/service where I sometimes have to kick out clients for various reasons. I sometimes fear reprisal and have to consider a targeted attack on my infrastructure. And indeed I do get the occasional hack attempt, for instance hand-crafted SQL injection attempts (I receive an instant notification when this happens). The best approach to hardening your infrastructure, I think, is trying to hack your own service with a plethora of methods, like SQL injection attacks or denial-of-service attacks on your public APIs.
If someone is capable enough to combine things, they don't need to be told this advice. Everyone else we tell just to follow common advice and not live by obscurity, because it is crap most of the time. Strong security does not need obscurity.
"Firstly, we’ve eliminated the global brute forcers again since they scan only the common ports."
Some attackers scan all ports for hosts that are known online through some other port/service/registration. They can also observe your traffic flows at each hop.
Brute force scanning all ports on all IPs might take you a month. But after that you have all these easy targets that people don't patch because they think their server is invisible.
Putting a painting in front of a safe does not make your safe safer. Every thief knows safes are hidden behind paintings.
Whilst I agree with you philosophically, if you see a reduction in this unwanted traffic in your logs, then it serves a purpose. And that's the point here. There is no claim that this is a good safety precaution; the claim is that it is underrated, and at least worthwhile for reducing some of the junk.
Just because something isn't perfect doesn't mean it's not worth doing.
Turn off logging. It will have the same effect (you won't see brute force attacks on common ports) with the same security.
You could also add two iptables rules which would temporarily block all connection requests from a source after it hits a threshold. But I guess actually stopping a brute-force attack is less "cool" than changing a port number.
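Something like the classic `recent`-module pair (a sketch for port 22; tune the window and threshold to taste):

```
# Track new SSH connections per source IP, drop the 4th attempt within 60s
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
    -m recent --name SSH --set
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
    -m recent --name SSH --update --seconds 60 --hitcount 4 -j DROP
```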
I agree. That said, it's clear that a culture of "obscurity is fine" is not helpful either.
From first principles, what is security through obscurity? It's literally security though something being vague, old, out of date, little known.
That can work. If your physical network is so old that nobody has a network card to plug into it, then they can't hack it. And maybe it works great most of the time. Until somebody finds an old token ring card and laptop.
The problem is, it's only as "secure" as the old school knowledge of the hacker. With an old enough blackhat, all security via obscurity becomes shallow. So as a practice, it's always defeatable.
Real security countermeasures resist all known attacks. Otherwise anything that fools a script kiddie would be valid security.
One could also interpret the "safe behind painting" (common ports) as the common practice and the "safe under carpet" (shifted ports) as security through obscurity.
I loved the “5 monkeys” comparison and I agree that everything depends on the real case scenario.
My point here is that what the article proposes is not "security through obscurity", it's just good configuration practice. Change the username, the port…
Security through obscurity goes more along the lines of: "I have this algorithm in the JS that obfuscates the password in the front end, and nobody is gonna guess it because it's super complex".
While I generally agree, there are two problems with the scan example. First, a survey on twitter isn't exactly good data to base an opinion on. Second, even if the majority would scan the defaults, you still don't know the likelihood of someone doing a targeted attack and scanning the defaults. Such a survey is pretty impossible because you would need data of real attacks.
This is tangentially related to gov software being open or not.
E.g. do you want your social security or medical system to be open source or not? Many more eyes can look at it and catch bugs. Many adversaries also have eyes and can do a passive analysis even before connecting. And they DO have budgets to do that very thoroughly. Keeping it closed is also a form of security by obscurity.
Why? The author of Metasploit was widely seen as a "bad guy" by nearly everyone until security researchers realized it was a net-positive to the security community.
If my system gets hit with a ransomware that's open source (and thereby likely very easy to create antivirus signatures for), I'd blame my system and the attacker, not the author.
A kind of a tipping point for me was when one commenter called port-knocking ‘security through obscurity’. Sure, let's see attackers pick one of 65535^x combinations of ports, and meanwhile let's hear this smartass wax about how that's categorically different from a password.
I think in fact that the sequence can be changing, based on some secret and e.g. current time. Haven't had a chance to implement anything like that, though, and not sure if there's software that supports such an arrangement. It's true however that above some effort it's more of a meaningless flex, and the proper aim of hiding the ports from most attackers would be achieved long before fancy combinations.
Gonna get myself in trouble, but boy I don't really want to debate or collaborate with folks that want to argue this point. Or use things they're working on. Defense in depth, obscurity is not. It's not even interesting to consider beyond dismissing.
I really wish people would stop conflating reducing the attack surface with safe software default configurations. They are not the same. There is value in hiding your listening sockets/ports from the world. Anyone who does not believe so frankly has never been responsible for security beyond their laptop and maybe some random VM in AWS they SSH into.