Hacker News

It doesn't have to be more secure, it just has to be less likely to get hacked.


Less likely to face targeted attacks, but you're still at risk of someone finding an exploit in the software and sending a bot to scan the internet for vulnerable installs.


I thought this kind of attack was usually done with relatively old bugs, for which patches are often available.

If you sat on a fresh exploit, would you really waste it with automated, untargeted mass scans, which may draw a lot of attention, causing your bug to burn out quickly?


Um, yes? You'd use it as widely and as quickly as possible, ideally compromising every single vulnerable host on the entire Internet before any sort of coordinated response can be mounted.

You see these kinds of attacks frequently with cryptolocking/cryptojacking software. The more quickly you deploy an attack targeting a new vulnerability, the more victims you'll have.


Probably more at risk, too: how rigorous are you, really, about staying current on security patches? How much time and money did you actually spend on security infrastructure like automated security testing or vulnerability bounties? Enterprises, even many of the ones that have had data breaches, pour a ton of time and money into those areas.
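For what it's worth, the low-effort baseline for a self-hoster is automatic security updates. A minimal sketch, assuming a Debian/Ubuntu box with the `unattended-upgrades` package installed (other distros have their own equivalents):

```
// /etc/apt/apt.conf.d/20auto-upgrades
// Refresh package lists and apply unattended (security) upgrades daily.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

This doesn't replace monitoring, but it closes the "forgot to patch for six months" gap that drive-by scanners rely on.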


Sure. Nobody claimed you could get the risk down to zero.


It's potentially even more of a risk, because big companies have people working full time on keeping systems up to date and monitored. How many self-hosters have a monitoring system powerful enough to detect attacks, and the discipline to patch their software as soon as updates come out?

How many people self-hosting are even qualified to run a secure system? I bet most of them are just regular devs who know just enough about Linux to get something online.


I don't think you understood my point. Yes, one particular risk might be higher. But you don't need to do security better than, or even on par with, a big company. You just need your total risk of data exposure to be lower. You can bet big companies have lots of hackers trying to break in with the newest 0-days, spearphishing their employees, and so on; there are entire classes of threats you practically don't face if you're self-hosting.


> It's potentially even more of a risk, because big companies have people working full time on keeping systems up to date and monitored.

Beyond the point about different types and frequencies of attacks: sure, I trust Google's security more than my own. But I do trust my own security more than that of Random-Startup.IO, which likely has no full-time security people and little incentive to get the job right (paying attention to security slows down your incredible journey).

Also, even among big companies, this argument applies primarily to the few like Google, Facebook, or Apple. Your Random Megacorp from outside the tech community usually focuses its security efforts on satisfying regulators and neutering its own employees, who'd otherwise happily copy out all the sensitive data to make their jobs easier.


I'm not sure that off-the-shelf software on your own server is necessarily less likely to get hacked. It's easy to fall behind on security updates when you don't think about deploys regularly.

Look at the logs for your existing infrastructure. I can pretty much guarantee there are drive-by WordPress attacks in them, regardless of what software is actually serving requests. There will be SSH login attempts, too.
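To make this concrete, here's a sketch of what "drive-by" noise looks like. The log lines and the `access.log` filename are fabricated for illustration; on a real box you'd grep something like /var/log/nginx/access.log (path varies by distro and web server):

```shell
# Fabricated sample log lines in a combined-log-ish format (assumption, for
# illustration only). Real logs will contain entries just like these.
printf '%s\n' \
  '1.2.3.4 - - [01/Jan/2024] "GET /wp-login.php HTTP/1.1" 404 162' \
  '1.2.3.4 - - [01/Jan/2024] "GET /xmlrpc.php HTTP/1.1" 404 162' \
  '5.6.7.8 - - [01/Jan/2024] "GET /index.html HTTP/1.1" 200 512' > access.log

# Count drive-by WordPress probes hitting a server that runs no WordPress at all
grep -cE 'wp-login|xmlrpc' access.log   # prints 2
```

The point isn't the specific patterns; it's that these probes arrive whether or not you run the software they target.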


You're not showing anything by telling me to look for attempted attacks. I realize they will be there; I don't even need to check. But if attack attempts are how you measure risk, then I'd bet that for whatever attack you can think of, Google et al. see orders of magnitude more of them than I would.

You gotta realize, it's not like I'm arguing you should set up a server with 1234 as the root password. I'm assuming you're reasonably competent at security, just mostly lacking the bandwidth to, e.g., keep your server checked and updated on a daily or weekly basis. Under those assumptions, I have no reason to think the slightly increased risk of getting hit by a brand-new attack through an IP scan will outweigh the entire classes of risk you do away with by not being part of a massive corporate attack target.

Although, heck, if you're absolutely paranoid about random IP scans, you could just move your stuff to some obscure port (as I'm sure you already realize). There you go: you won't be found through random mass scans anymore.
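For SSH, that's a two-line change. A sketch of /etc/ssh/sshd_config (the port number is arbitrary, and moving ports is obscurity rather than security, so the hardening directives still matter):

```
# /etc/ssh/sshd_config — illustrative values, not a recommendation of this port
Port 48022                  # anything other than 22 dodges most mass scanners
PasswordAuthentication no   # key-only auth; obscurity doesn't replace this
PermitRootLogin no
```

Restart sshd after editing, and keep your existing session open until you've confirmed you can log in on the new port.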



