HackedThat: Breaking in to a hardened server via the back door (polynome.co)
271 points by bradleybuda on Aug 25, 2016 | 66 comments



Money line:

> It’s an example of a utility box that runs various random services - maybe acts as a bastion host or testing ground - and nobody quite manages it or knows what it is used for. This server is as weak as its weakest service; and because it is not purpose-managed, it can be difficult to keep track of what is running on it and ensure all services are patched and secured. If you have one of these servers floating around somewhere, you might want to think twice about keeping it - it may very well be the chink in your armor.

True for applications AND servers.


But the weakest point is always human: http://boston.conman.org/2004/09/19.1 (true story---happened to me)


That was a delightful read.

Wasn't entirely sure how they knew where to look for the Passport credentials, though, unless they'd gleaned that information earlier. From the article, the author makes it sound like they had a limited window to mount the disk, pull off what they wanted, and shut everything down. Did they already know where to look?


Thanks, and great question. During our exploration of the first box we popped, we noticed that most of the services on the host had their code and configuration in /usr/inversoft (or /usr/local/inversoft, I can't remember which exactly). So when we got our shell on the app server, that was the first place we looked, and luckily it didn't take long to find a config file containing Passport credentials.


How does Passport find its credentials? IDK, but I'd guess there's a lookup order something like: exec flag, env var, config file in the user's home directory, config file in the default /etc location. Attackers would just follow the same logic. That wouldn't get them the first item on the list, but that's probably not how it was set up anyway. If one were worried about time, one could patch Passport to print the credentials once it had resolved them itself.


Very fun read. I love following the train of thought and seeing where they "failed".

Also, this Elasticsearch RCE was patched a while ago, and we still see a lot of servers hacked because of it. In fact, there is a DDoS botnet made up of only ES servers that we have been tracking.

<unrelated>If you are using Elasticsearch, please patch it!</unrelated>


For aspiring pen testers who read this article, there's an important catch: if the rules of the challenge (or other Inversoft policies, like any bug bounty programmes they may have) didn't specifically allow targeting their other systems, then the author of the blog post could have been prosecuted under e.g. the CFAA.

Always read the list of in-scope systems and rules of engagement before starting. If that information hasn't been provided, then don't start until it has.


Two main lessons for me:

1. Always run services (e.g. ElasticSearch) with a unique user dedicated to that service and nothing else.

2. You're never as secure as you think.


>Always run services (e.g. ElasticSearch) with a unique user dedicated to that service and nothing else.

Quick tip: if you do this, you can also make your iptables rules per-user. For example, "webserverUser" can only accept inbound connections on 80/443 and only make outbound connections that are related to established inbound ones. If an attacker gains execution as this user, they cannot download new code, or even do DNS lookups for that matter.
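
For anyone wanting to try it, a rough sketch using the "webserverUser" example above (note that iptables' owner match only applies to locally generated traffic, so the inbound side is ordinary port filtering):

# Inbound: only 80/443, plus packets belonging to existing connections.
iptables -A INPUT -p tcp -m multiport --dports 80,443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT

# Outbound: allow replies to established connections...
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# ...and drop anything new initiated by the service account, including DNS lookups.
iptables -A OUTPUT -m owner --uid-owner webserverUser -j DROP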


This is totally wrong. Once you have RCE, you can upload code the same way you're executing code.


Wow, that's great, I never considered doing that. Thanks!


It's only marginally helpful: it doesn't actually prevent attackers from uploading code to your server.

Instead of having 'nc -l 8080 | bash' or whatever as your payload, an attacker can just run commands directly: "pwd > /var/www/html/exfiltration.html". If they absolutely need a shell, they could e.g. alter nginx or its config files to run `bash` on POSTs to a hidden route.

This does make it a little trickier, and potentially a little easier to detect. But it certainly doesn't make it so that "they cannot download new code".


>This does make it a little trickier, and potentially a little easier to detect.

Correct, I should have said that it eliminates many easy ways to download code. Defense in depth is all about making the attacker's job harder and increasing their likelihood of being detected.


Ah, the fabled one user per service. I try it, then something weird happens with permissions, then I run it as root "temporarily" for development, and then 3 years later... oops.

Good thing I don't do dev ops except for my personal projects.


At that point it's well worth figuring out what has gone awry with permissions rather than executing as root. If nothing else, you'll have added valuable information to your internal knowledge base about that particular permission problem and know how to fix it when it inevitably crops up again.


Sure but that's much easier to say than to actually do. Sometimes it's not necessarily a permission issue but something missing in a user account. When you're under a tight deadline, especially at a start-up, it's not always easy to say "well I'm just going to keep working on the problem until I fix it" when you can also say "I can fix this in 3 seconds for now and hopefully revisit later".

It's simply reality. I don't enjoy it. Fortunately I've never done that in any type of production environment (just development). But I don't entirely blame people. Many of the infrastructure pieces that need to be deployed can get very complex very quickly.


>Sure but that's much easier to say than to actually do. Sometimes it's not necessarily a permission issue but something missing in a user account. When you're under a tight deadline, especially at a start-up, it's not always easy to say "well I'm just going to keep working on the problem until I fix it" when you can also say "I can fix this in 3 seconds for now and hopefully revisit later".

I used to be in the same boat as you until I had my lightbulb moment: if you're constantly coming across permission errors like this, it's a sign that you're just doing things wrong.

I don't mean this as an insult; it's more that often when you download, install and provision a new piece of kit, it's easy to do so without taking a few minutes to read the docs and find out the best way of doing things. Especially if you're glancing over e.g. Ubuntu instructions when installing on CentOS.

(in fact, this rule applies to a lot of IT. Spending hours trying to get a CI tool to do X, Y and Z? Chances are it's just not meant to do it that way. God knows how much time I've lost to this.)


Agreed. After a few years of doing these sysadmin tasks "the right way", it doesn't take any longer than the insecure way, as you gain a solid mental model of the various permission structures and you've learned all of the diagnostic tools available to you.

> I don't mean this as an insult; it's more that often when you download, install and provision a new piece of kit, it's easy to do so without taking a few minutes to read the docs and find out the best way of doing things. Especially if you're glancing over e.g. Ubuntu instructions when installing on CentOS.

As a sysadmin, I find software documentation to often be the worst place for deployment advice. It's often written by developers with the same mindset as the GP - just get it working, best-practices can come later (which they never do).

One app per user (and the corollary, proper segmentation between system users) is something you just have to train yourself to do, screw what the developer had in mind with his deployment instructions.


Oh, I certainly agree, and fortunately this hasn't happened much to me. But it has happened to me and many developers I know, hence my original comment :)

Though to be fair I've also run into weird bugs with OpenShift, Mongo and a few others over the years so it's not necessarily something wrong that the developer is doing.


Ahh! This is such a fun and accessible problem to troubleshoot.


Always run Elasticsearch behind an auth proxy like nginx or their Shield offering.
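
A minimal sketch of what that proxy could look like (hostname, cert paths, and the htpasswd file are illustrative, and it assumes Elasticsearch is bound to localhost only):

server {
    listen 443 ssl;
    server_name search.example.com;

    ssl_certificate     /etc/nginx/ssl/search.crt;
    ssl_certificate_key /etc/nginx/ssl/search.key;

    location / {
        # Require credentials before anything reaches the ES REST API.
        auth_basic           "Elasticsearch";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:9200;
    }
}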


Interesting, but I had to disable images to read through it. Too many distracting and unnecessary gifs.


The gifs are how the author expressed emotion. I felt they added to the experience of reading the article.


I don't understand why people bother posting personal comments like this.


Would've been much easier to just actually hack Linode.

:)


This has always been my concern with Linode and other cloud VPS providers. You can secure the server all you want, but an attacker can always get in via a vulnerability in their API or out-of-band console access. 2FA helps a lot to prevent access to Lish/Glish/Rescue mode console, but as shown, the API is powerful enough to still be a potential problem if your key gets leaked somehow.


Oh, but last I checked lish was an INCREDIBLY (can't emphasise this enough) insecure hack around screen(1).[1] 2FA doesn't help you a bit.

Moral of the story is not to trust your provider. Don't use cloud if you don't need it, don't let your host have your IPMI credentials, and run FDE (this has saved several bitcoin exchanges from losing millions).

[1]. Here's one particularly funny thread from their support forum that perfectly captures exactly how much of a mess this setup was (is? I haven't hacked linode lately): https://forum.linode.com/viewtopic.php?t=3231


"I put in a support ticket about this back in December 2007." (source: comment in mid-2008)

Usually a bad sign by itself lol...


My strategy was always going for whatever they trusted the most. Often security or admin-level systems. They usually sucked more than some of the battle-hardened apps they were looking out for. Hilarious that the same principle might have led me to go for Linode, too, instead of the app. Well, also the ROI factor of getting more for my time.


So Inversoft got a full pentest for the price of a MacBook. Not bad ;)

Nice read anyway, some important lessons there.


Awesome read - Congrats to Polynome for a hack well done.

Here is Inversoft's account of the events leading up to the hack and the lessons learned.

HackedThat: Mind the backdoor https://news.ycombinator.com/item?id=12390936


I enjoyed the article. It inspired me to participate. However, the Inversoft challenge site has a stupid password policy. Password policies like that are basically the same thing as just storing passwords locally in plaintext. Requiring people to perform password gymnastics is the most surefire way to get them to write them down.


There is probably some truth to this, but I don't know that the answer is to allow weak passwords. Every company I've ever worked for has enforced some sort of minimum password requirements.

What I do find to be a PITA is when you attempt to create a password that does not meet the minimum rules and the error message gives you no indication of what you need to change to meet the requirement.


Sure, block the stupidly weak passwords.

But don't block a 6-word diceware phrase because it has no numbers (or because it is too long... looking at you, PayPal).

Meanwhile, I'd venture that P@ssword1 meets their requirements...


fair point.


Well, the first 3 are ok other than increasing the password length.

Next, they really should be using ed25519 keys if possible. Better, faster and all that good stuff.
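
Generating one is a one-liner if your client and server are new enough (the comment string is just a label, and -a bumps the KDF rounds protecting the key's passphrase):

ssh-keygen -t ed25519 -a 100 -C "you@example.com"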


Is writing down your password a bad thing? Reduces the attack surface from 510.1 million km² to 5 m²


Wow - fantastic work! I'm glad the ES article came in handy for you ;)

I can almost promise you that if you could reach the external ES instance and exploit it - it was already exploited. They need to wipe that box and rebuild.

Overall, great writeup - thanks for sharing!


I have not read any HackedThat posts more intently than this one... wow. Slow claps.


I would love to see a screen share video of this. Can you make that happen possibly?


Could two factor authentication have stopped the hacker from exploiting the Elasticsearch vulnerability or from implementing the final "Smash and Grab" strategy?


Really interesting read, well written. Congrats to you and your team.


This is a great article. Unfortunately, it's littered with a bunch of animated garbage that's super distracting and adds absolutely nothing of value. Please fix.


Yeah, the constant motion made it hard to focus on the text. The only way I could finish the article was by running this command in Chrome web dev tools:

$$('img').forEach((i) => i.hidden = true)


Wow, neat. Is `$$` some kind of jQuery-esque service that Chrome provides?


It's part of the Command Line API which most browsers implement in their dev consoles. http://getfirebug.com/wiki/index.php/Command_Line_API


It's actually just an alias for document.querySelectorAll, so kind of like a very minimalist, modern-web jQuery.
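
So the one-liner above is roughly equivalent to something like:

Array.from(document.querySelectorAll('img')).forEach((i) => i.hidden = true);

(querySelectorAll returns a NodeList, so Array.from converts it to get the usual array methods.)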


I think you'll start to see more and more of this style of including poignant (if somewhat exaggerated) gifs in informal posts. In most colleges in the US, this is the de-facto format for email communication from student to student. I've started to see the inclusion of reaction gifs and other memes creep into the corporate world (I've worked at 2 of the Big Four tech companies, and I can't remember a design review or informal presentation that didn't include funny gifs). I personally think it helps keep the reader engaged, although that might be because I'm so used to it now.


Worth considering whether "poignant" is the word you want there. It denotes something evocative of sadness or regret, and derives from a root which means "to prick". So, that pricking sensation around your eyes when you feel yourself on the point of tears? Something which evokes that feeling, and that sensation, is accurately described as "poignant".

(Also possibly of interest is that "pungent" derives from the same root, for much the same reason. In this case, the pricking sensation is that caused by the irritating sulfoxides released when you cut up an onion.)


Huh. What a feeling, finding out that you've been using a word totally incorrectly for your entire life! I guess I was going for something more along the lines of "punctuating" or "relevant".


I know that feeling! I read a lot as a kid, and I'd pick up words from context but not get their meanings right, and that got embarrassing. So I talked my folks into getting me the biggest, meanest dictionary they could lay hands on, in order to avoid embarrassing myself further. Little did I know this would lead me down a path of linguistic pedantry from which I'd never be able to return.

Was the word you were after perhaps 'pertinent'?


Browser extension idea - gif freeze. Pauses all gifs by default.


Used to be in Firefox you could hit Escape after load to do that. Not any more. There's an extension to add it back, but it's too flaky to work well. So it goes.


For Firefox:

user_pref("image.animation_mode", "once");



For me the gifs only play during mouseover, with a ~300ms delay. I didn't notice them at all.


That's new since I read the article, and an enormous qualitative improvement. Kudos to the author for a quick, thoughtful bugfix!


Even Blue Steel?! :-)


For what it's worth, I thought the gifs were funny and liked them. But there's the whole tradeoff of adding extra clutter.

I think a few GIFs can definitely split up the article a bit and make it easier to read, but too many can also be distracting.


The Blue Steel has to stay


Even Blue Steel. (What's a Blue Steel?)


The first gif on that page, Derek Zoolander's famous look. http://www.urbandictionary.com/define.php?term=blue%20steel


Maybe I'm taking crazy pills, but that looked more like a Ferrari or LeTigre to me.


You may be loco, but probably LeTigre. Definitely not Magnum though.


Ha - you could be right. I may have jumped to a conclusion.


It worries me how it's suddenly become acceptable to have 1MB+ pages (6.2MB in this case).



