
Ask HN: Anonymous person sent proof of SSH access to our production server - throwaway0727
An anonymous person (under a nickname) sent a screenshot as proof that they managed to gain SSH access to our production server. The screenshot is legit; the information displayed in it could not be faked without actual access.

Just a proof, not a ransom request or anything of the sort.

What would be a smart next step, other than checking if there are any security updates for all the software in our stack?

We are a small company and don't have any security experts, etc.

Thanks!
======
kbenson
In addition to the wonderful technical advice already here for how to deal
with the server, there is the question of how to deal with the anonymous
person. _If the proof contained the method of exploit_ I suggest something
along the lines of:

"Thank you for bringing this problem to our attention! We are taking steps to
resolve the problem now, but would like to reward you for your work. If you
let us know how you would like to receive it, we would be happy to donate $X
to your efforts."

Where $X is something you think you can easily part with, $100, $250, $500,
$1000? This both primes the sender to be more generous, if they were on the
fence as to whether to do something nefarious, and establishes some small
trail to them (depending on method) in case of major problems with them later.

If the proof was _not_ included in the email, I think it's much more likely
you just received the opening email in a blackmail campaign. It's highly
unlikely that server is even the entry point in that case, so cleaning it will
not resolve the problem. It's just the sacrificial lamb for them to prove
they've got leverage and let you stew, and they can contact you again after
you _think_ you've cleaned out the problem to let you know they still have
access, and the only way to be rid of them is to pay them.

~~~
throwaway0727
Unfortunately, it's the latter - no details of the exploit, just proof.

If this comes to ransom, rather than an unethical/inexperienced gray-hat thing,
are there any good steps to take? Or is hiring an expert consultancy probably
the only good option here?

~~~
kbenson
I can't comment on the _correct_ approach in that case, I'm underqualified. I
would urge you to make sure you have good backups in a location that can't be
compromised (as in, you won't wake up tomorrow to find them all deleted). If
your system already supports this, all the better. Keep in mind the worst case
scenario here is that every production server is wiped, which is essentially
close to a natural disaster at the site where they are housed. If
you don't have a plan on how to deal with a situation like this (disaster
recovery/business continuity plans), such as redeploying to the cloud or to a
different cloud, or a different datacenter, then that's a thought for the
future (and the present if you have time).

I assume a professional computer security firm could help, but I don't know
enough about the incentives at play to know whether that's good in practice
(if they often deal with situations like this and not just
hardening/forensics, I assume they would have good advice). I have no idea
what that costs, and whether your business can afford it.

~~~
kogus
Total data loss isn't the worst case scenario, in my opinion. Worse would be
someone quietly interacting with your site, contacting your customers to abuse
their trust in you, etc.

------
danesparza
This has been covered elsewhere (like on serverfault:
[http://serverfault.com/a/107346/2557](http://serverfault.com/a/107346/2557) )

But it comes down to:

\- Take existing server down immediately. I'm assuming it is not on an
isolated network -- so this should really be a priority.

\- Prep a new patched server (with a smaller attack surface and updated
security credentials)

\- Postmortem the old box on an isolated network. Try to understand how the
attacker got in. If necessary, get security professionals involved.

~~~
emcrazyone
100% agree. Disclaimer I own a data center and have dealt with customer
collocated equipment breaches. In addition to the above steps:

-> disable root login in your sshd_config file. Make sure PermitRootLogin is set to no.

-> rename the root account too, so if they are using an exploit based on user authentication then perhaps they won't be able to elevate to root.

-> disable password based logins and go to key/cert based auth. This will shut down brute force attacks.

-> lower MaxAuthTries in sshd_config to something like 1 or 2 to help slow down attackers.

-> change the port you listen on. A little security through obscurity; while not very effective, it might slow future casual port scanners that are testing a single port. In practice I've seen this really eliminate a lot of reconnaissance or farming type activities.

-> Make sure your openssl and openssh are at the latest stable releases.

-> If the perp is coming from the same IP, you can use an iptables rule to block the netblock. Again, not perfect but may help slow things down.

-> grab the shell history of all the user accounts on the box. Example: ~/.bash_history. This is more reconnaissance but may be helpful if they are sloppy - you might see what they were doing on the box.

-> look for any modified or new files. Obviously logs will show up in the list, but you're looking for things that should not be there. Example: find / -mtime -60 -print, where 60 is how many days back you want to go.

-> look at the cron files to see if any time-delayed bombs exist.

-> look at ps aux output for any running processes that don't make sense.

-> look at iptraf for any suspicious traffic to IPs you can't reconcile.
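
Putting the sshd_config items above in one place, a minimal sketch (values are
illustrative; restart sshd after editing):

    # /etc/ssh/sshd_config -- hardening settings mentioned in the list above
    PermitRootLogin no           # no direct root logins
    PasswordAuthentication no    # key/cert based auth only
    MaxAuthTries 2               # cut off repeated auth attempts early
    Port 2222                    # illustrative non-default port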

Good luck!

~~~
eonw
just a note: intelligent exploiters hide their files inside of yours, so the
-mtime check is useless in many cases; they will set the mtime of their upload
to match the rest of the folder they hide in.

command history is also easy to alter if you know what you are doing.

~~~
porker
So the only way to detect exploits is to scan the server regularly and log the
mtime and the file size, and look for changed files that shouldn't have
changed?

Re the command history, is there any way around them wiping it - e.g. piping
all .bash_history entries to an append-only store?

~~~
spdustin
RootKit Hunter [0] works pretty well on most distros to check hashes on files
and other potential problems.

You can also use a global _bash_ configuration that would log all commands [1]
entered into any _bash_ shell to a central log, which could be shipped off-
server simultaneously.

[0]: [http://rkhunter.sourceforge.net](http://rkhunter.sourceforge.net)

[1]: [http://askubuntu.com/questions/93566/how-to-log-all-bash-
com...](http://askubuntu.com/questions/93566/how-to-log-all-bash-commands-by-
all-users-on-a-server)
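
A minimal sketch of the approach in [1] (the local6 facility and log format are
assumptions; note a user can still unset PROMPT_COMMAND or exec another shell,
so this raises the bar rather than guaranteeing capture):

    # /etc/bash.bashrc -- log each command to syslog, then forward local6
    # off-server with an rsyslog/syslog-ng rule
    export PROMPT_COMMAND='RETRN_VAL=$?; logger -p local6.debug "$(whoami) [$$]: $(history 1 | sed "s/^[ ]*[0-9]\+[ ]*//") [ret=$RETRN_VAL]"'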

~~~
uxp
Does RKHunter scan userspace for mtimes? I haven't used it in years, so I'm
honestly curious. Back when I did, it was customary to install it side-by-side
with Tripwire, which essentially does only that; scan userspace and
categorically log changed files based on a configurable severity depending on
location (eg, /root/ is high, /var/log/messages is low)

~~~
spdustin
They really serve two different functions. Tripwire and similar tools like
aide (I use _aide_ now rather than tripwire, but that's my personal
preference, I'm not an infosec domain expert) are file integrity checkers that
check files for property changes (including mtime). The downside, as I
understand it, is that the effort of tuning out false alarms can end with an
admin just turning off the tools' reporting functions.

However, _rkhunter_ has additional logic to specifically seek out rootkits and
malware-like behavior, and is more specifically targeted to system file
modifications. Combining it with _unhide_ (to compare actual processes running
with those visible from userspace) provides a reasonable assurance that
nothing nefarious is going on.

They're all part of a spectrum, however. I use scanners like _aide_ alongside
_rkhunter_ as well, but I'm the sort of guy that will spend a day tuning the
config of _aide_ to avoid constant false alarms.

------
akerl_
Step 1) Image the system. Don't log into it, don't pull anything off of it.
Take a snapshot of it. If your hosting provider doesn't provide you a direct
way to do that, contact them and get them to do it. You want a clean image of
it for investigating later.

Step 2) Hire a security expert / forensics company. Give them the image, ask
them how to proceed.

Things to keep in mind:

\- You don't know how much you can trust the person who has contacted you.
It's possible they think they're a good samaritan, though logging into a
system as a proof of concept is pretty far into grey-hat.

\- Anything you say to them may one day be public record, attached to your
company forever.

\- It's possible they've compromised far deeper than this, and they just
haven't said so.

\- If they've gotten in, it's possible that they aren't the only ones, so even
if they cooperate and help you close the hole, you still want to do steps 1
and 2 above.

------
laumars
In terms of hardening against SSH attacks, the principles are quite simple.
Your business case might mean that some of the following cannot be applied,
but there's plenty of measures below that you can use to harden SSH.

1\. firewall - only allow SSH connections from trusted static IPs

2\. Use SSH keys then disable password logins. Lots of guides online to create
keys, so I'll just cover the 2nd point: as root or sudo, edit
/etc/ssh/sshd_config

    
    
        PasswordAuthentication no
        ChallengeResponseAuthentication no
        # restart sshd
    

(edit: make sure the SSH keys have passphrases. That way you have an extra bit
of security in case any workstations get compromised)

3\. Disable root access. edit /etc/ssh/sshd_config

    
    
        PermitRootLogin no
        # restart sshd
    

4\. Limit SSH access to specific user accounts. This prevents users creating
their own key (in the case of mountable home directories) or other machine
accounts with passwords (if you've not done #2):

    
    
        groupadd sshaccess
        # add all users to the sshaccess group. lots of different ways to do this. The following will work on some flavours of Linux but not all:
        usermod -a -G sshaccess $USER
        # now edit /etc/ssh/sshd_config and add the following (it won't already exist)
        AllowGroups sshaccess
        # restart sshd
    

5\. Install auto-firewalling for failed SSH logins. I personally favour
fail2ban as that covers other scenarios too, but I've also used denyhosts and
that's worked well for SSH.

6\. Lastly, and by far the best option, don't enable SSH on any internet
facing IPs.
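
For #5, a minimal fail2ban sketch (the stock sshd jail; thresholds are
illustrative, in seconds - ban after 3 failures within 10 minutes, for 1 hour):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled = true
    maxretry = 3
    findtime = 600
    bantime = 3600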

If you need SFTP enabled, then let me know and I'll post some details on how
to harden SFTP so attackers cannot gain an SSH shell.

~~~
xorbyte
You assume the breach happened over SSH. This is valuable information to
securing SSH, but it's entirely possible the original breach happened over
some other service, and there were some other steps involved in the breach
before the SSH screenshot was taken.

~~~
laumars
True, but I'm working from the angle that if the breach happened via some other
means, then they'd need some way to remotely execute code to enable SSH, create
valid login credentials, and disable the firewall; in which case they already
have more convenient shell access, so gaining access to SSH becomes
redundant.

However it's possible that the attacker's screenshot was of a remote shell
initiated via some other means and the OP assumed it was via SSH.

Edit: why was this downvoted? If there's an error then I need to be educated.
I've spent enough years of my professional life hardening servers to have some
idea what I'm talking about, but I'd be an idiot if I didn't listen to the
expertise of others. So please correct me rather than downvote me :)

~~~
nickpsecurity
Best not to ask why downvoted. Those people's responses will rarely teach you
anything. The kind that would will usually reply instead of downvote. Plus, a
few already explained to me it's common for a post to get hit with a few
negative votes followed by corrective action as other, open-minded people show
up. Happens all the time with mine.

~~~
laumars
True. I've scratched my head over why some of your posts I've seen have been
downvoted.

Probably doesn't help that I've been working long hours this week, so I'm a
little on edge to begin with.

------
eli
Do you have a way to reply to the person? I don't see any harm in thanking
them and asking for more details.

But in the meantime I'd have to assume everything is compromised: save a copy
or an image of the server for analysis, but take it offline and build a new
one. Rotate all passwords and credentials. Assuming you're not doing something
strange with SSH, they probably got legitimate credentials from a compromise
somewhere else or password reuse or a compromised development machine, etc.
There are guides online for doing this: [https://support.rackspace.com/how-
to/recovering-from-and-dea...](https://support.rackspace.com/how-
to/recovering-from-and-dealing-with-a-site-compromise/)

It sucks. Sorry.

------
adjwilli
I had a similar thing happen several years ago when I was a grad
student. Me and a couple friends were putting together some Facebook apps
(when they were a new thing), and one guy didn't escape user input correctly.
Some teenager from an Eastern European country (I can't remember which
anymore) ran a script to figure out that we were vulnerable to SQL injections.
He was a nice enough guy and didn't want anything besides some experience
"hacking". We patched up the code and told him thank you.

I understand you're running a business which makes it that much more scary. If
he's not asking for ransom, you might ask him how he'd fix it. I know it might
seem like blackmail, but you might even offer him a "consulting fee". He's
probably just someone looking to try new things out and not malevolent.

~~~
ethbro
This is a fair point. Maybe the hacker is evil and hell-bent on destroying
your server / company.

But if we treat every hacker like that by default then what kind of world do
we create? Certainly take prudent safety action, but then practice what many
here claim to value: knowledge sharing among curious individuals.

------
eeZah7Ux
Do not trust random HN or serverfault answers.

Cleanly shutting down the server can trigger rootkits that might wipe
evidence: talk to a professional. Pulling the plug can still remove the
ability to observe the behavior of the attacker: talk to a professional.
Touching the disks can expose you to the risk of being accused of _tampering
with evidence_: talk to a professional.

~~~
ktta
The irony here is that your comment is a random HN comment from an account
less than 10 days old :D

In reality, I think random answers on HN (or any answer/advice anywhere)
shouldn't be trusted, but rather taken with a grain of salt while you think
about whether the answer really helps you.

>Touching the disks can expose you to the risk of being accused of tampering
with evidence

I don't understand this. What do you mean by _touching the disks_? Like
physical touch, or logging in and looking at the logs? I don't think either of
those can be attributed to _tampering with evidence_, like _criminal_ tampering,
since you use the word 'accused'.

On a lighter note, do you _always_ end your sentences with 'talk to a
professional'?

~~~
FilterSweep
> I don't understand this. What do you mean by touching the disks? Like
> physical touch, or logging in and looking at the logs?

Is this sarcasm?

If not, hypothetical situation for you:

OP works at a company that processes card information of customers. A hacker
demonstrated gaining unauthorized access to production servers. Hacker pulls a
db dump as well as any keys used in encryption of data (some bad practices
here, but this is common). Hacker does not tell OP of his additional actions,
only demonstrates unauthorized SSH entry.

OP does the _logically_ correct thing of wiping his db servers, and "cleaning"
the machine, because, well, mitigation of future damage.

Hacker, pissed he/she was not given a _reward_ for demonstrating the
vulnerability, uses this production data for ill will. A third party audit
(which _will_ happen) finds that OP has done a full wipe of the server - the
logs showing who pulled the vulnerable information are now gone.

With no finger to point (the hacker contacted him "anonymously", remember) OP
is then implicated.

~~~
ktta
>Is this sarcasm?

Yep. I was pointing out that he obviously logged in, so he 'touched' it, and
the OP is not so stupid as to wipe the whole drive when ~90% of the comments
on this post say not to. And I really doubt that anyone who manages to post to
HN for advice would make the 'logical' decision to wipe the drive without
getting a snapshot. And since it is a prod server, the same server _has_ to be
used unless they use AWS or some other cloud service.

~~~
FilterSweep
Thanks for clarifying.

I believe the point the parent was getting at is that there could be other
unintended consequences to taking relatively good advice.

Honestly, I'd even argue there is some _better_ advice on Server Fault/HN than
from some professionals - but the difference is that hiring a professional
leaves a paper trail; you can't say "well, some DBA on Server Fault told me!"

------
throwaway0727
OP here. Thanks for all the responses.

I took action and updated firewall settings (which were too loose), ensured
that offsite backups are in place if worse comes to worst, rotated all API
keys, etc., and am meanwhile trying to contact the anonymous person. Will
rebuild the servers asap as well; super glad that we have properly maintained
ansible scripts.

Also, I will try my best to convince the CEO to allocate some money for a
professional audit/consultancy, since we are no experts in security, and to
reduce the chances of future incidents.

We try our best to avoid things like SQL injection, XSS, etc., but no one
is secure, after all.

~~~
welder
None of these things you've done will remove access for this person.

~~~
welder
Contact me if you need a recommendation. I can point you to good security
consultants probably within your budget.

~~~
throwaway0727
Thanks, will keep that in mind, as it's not up to me to allocate funds for
consulting, etc.

Any idea how much such a service can cost, assuming a web application with a
very common stack (such as Ruby on Rails + PostgreSQL)? Is it something like
$5k, $10k, or $20k+? Or does it really depend? Sorry if it's a very amateur
question; I have no experience in dealing with such companies, so I have no
clue what it can cost.

~~~
lfx
It really depends; a common stack doesn't mean that much. It depends on what
the app is doing, how many dependencies it has, which external services it
talks to, etc. Also, $5k or $10k would be the price per day, and you hire the
guy/company by the day. Depending on the app, it might take some time.

Security is not cheap.

Disclaimer: worked for such company.

------
einrealist
1) Isolate that server. Treat this server as your ground zero, but assume that
other systems might be compromised (including servers and devices of employees
etc).

2) Rotate / delete SSH keys on all other servers that have the same keys
installed as the compromised server. Private SSH keys may have been
compromised in your company. Inform all employees who had a public key on that
server that their access is revoked and that their private keys may have been
compromised.

3) Log all (established) SSH connections on these servers. If there are
unexpected connections, handle these servers the same as the compromised one.
(better: inspect SSH connections through network devices). Interrupt these
connections.

4) If you cannot trust that the isolated server is the only compromised one,
you should isolate the network and start investigating for more breaches. The
person coming forward may have dug deeper to prevent you from shutting him
out.

If there are signs of more breaches, you should probably bring experts in.
Or, if you can afford it, rebuild your infrastructure on a green field; the
latter is the safest option. If you don't have complex systems and use cloud
services and automation tools, this should be doable.
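
For step 3, a quick way to snapshot established SSH sessions (iproute2's ss;
assumes sshd is on port 22):

    # established TCP connections involving port 22, with owning processes
    ss -tnp state established '( sport = :22 or dport = :22 )'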

I hope you survive this one.

~~~
einrealist
One addition in regard to the perpetrator: Do not trust this person - or these
persons (always assume it's more than one person). Don't commit to anything.
Since they cannot be trusted, don't waste time and money on them.

------
disease
Give the person a small reward for pointing out the vulnerability and offer
another small reward for suggesting how to fix it.

~~~
edelans
Why small? The reward should be in accordance with the criticality, IMHO.

~~~
brianwawok
So a startup with $100k total funding gets messaged a major, company-crushing
bug. What should they offer for such a big find, $50k? Half their runway?

It's not that rewards should be small for big finds, but if you are legit
poor, you have limits on what you can do.

------
mboelen
Step 1: Try to get in contact with the person and see if he/she is willing to
share details on how they entered your systems. Thank this person and see if
you can provide a reward.

Step 2: Next step is setting up new systems, and start from scratch. Install
the systems, start with basic system hardening and up-to-date software
packages. Use
[https://github.com/CISOfy/lynis](https://github.com/CISOfy/lynis) to validate
your configuration.

Do not have any interaction or data exchange with the old (compromised)
systems.

Step 3: Save all running systems to learn from the event. See if you can find
the root cause of why this happened.

Step 4: Learn about security, hire someone on your team with security
knowledge.

Step 5: Do regular (technical) audits.
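
For the Lynis check in Step 2, the basic invocation is:

    # audit the local system; root gives the most complete set of tests
    sudo lynis audit system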

~~~
danesparza
+1 for link to lynis. I had never heard of that before!

~~~
wila
You are replying to the author of lynis, might as well thank him for writing
it :)

~~~
mboelen
You are welcome (even without thanking) :)

------
jwarren116
In addition to all the really great advice already submitted, I really like
using Lynis[0] for scanning my servers to get an idea of obvious
vulnerabilities and a baseline for hardening ssh. It's absolutely not a
substitute for a security expert, but in about 10 minutes of setup you can get
an idea of what action items you need to add to the top of your queue.

[0] [https://cisofy.com/lynis/](https://cisofy.com/lynis/)

------
gbog
Many people tell you to hire an "expert". Be careful: many such security
experts are experts in FUD and in taking the money of clueless, frightened
people. For a lot of money they'll run their attack bot on your servers and
send you a twenty-page auto-generated report, which you will need another
expert to read and understand. And security is not a one-off task; you need to
either not care at all, or make sure everybody cares all the time.

While you do the emergency backup, log evacuation, and password rotation
yourself, I would suggest putting your two best hackers on the task of quickly
learning the basics and making sure there's no obvious hole in the wall.

------
zhte415
[Edit: not-OP]

As the two comments so far have suggested getting security experts: where would
be a good place to source them? I'm envisioning two kinds:

* Consultant, working for a fee (with retainer?);

* Independent, may be consultant, but could also be someone currently looking for a new permanent role and would bring welcome diversity/expertise to a small team - potentially illiquid / poorly matched hiring market that could be nice for many smaller companies to tap? Working remotely could work too - no borders/boundaries.

~~~
CiPHPerCoder
> where would be a good place to source security experts?

There's no universal good answer for this.

I spend a lot of time on ##crypto (irc.freenode.net), and a lot of smart folks
hang out there. Some are very well connected to other security experts in
their own isolated communities.

However, there are undoubtedly silos of security expertise that remain
untapped if you rely on just IRC.

You could also find folks who talk about security here on HN and follow their
Twitter accounts (if public).

You could try "[development stack] security expert" in a Google search, as a
last resort. (My company's currently at the bottom of the first page for PHP,
although that's likely only true because of our filter bubble.)

A diverse approach is probably most likely to succeed here.

~~~
web007
It's the same concept as with auditors: there are the Big Four that you've
probably heard of, and a ton of other, smaller firms of varying quality.

You could go with a known firm like iSec Partners, Matasano (now NCC) or
Mitnick Security. They won't be cheap - at worst they may be able to refer you
to some other reputable firm if your budget is limited.

~~~
CiPHPerCoder
> or Mitnick Security

Please no. Not Mitnick.

I'd rather funnel clients towards my competitors than Kevin Mitnick.

He's a skilled social engineer, and his greatest social engineering success
was manipulating the media into believing he speaks for hackers in general.

He is not a programmer, his opinions on cryptography aren't insightful, etc.
His only skill is deception.

~~~
web007
Regardless of your opinion of The Man Himself, the company employs people
who are good at things beyond social engineering. I've seen two separate
engagements with them (one as a 3rd party and one as technical contact), and
both found significant non-trivial vulnerabilities that needed to be patched.

~~~
CiPHPerCoder
I can't speak towards the efficacy of his staff, but that testimonial is
generally true of any engagement with most security teams.

------
justin_vanw
Enable two-factor authentication on your servers, or at least enable two-factor
on a bastion server and disable external SSH access (other than from your
bastions) on all other servers.

I strongly recommend YubiKey, it is convenient and cheap and extremely secure.

With ssh it is very easy for someone to create an ssh key that does not have a
passphrase. With that, it is possible to log into the server with just the key
file and nothing else. At that point all it takes is someone to lose a laptop
or leave a computer unlocked and unattended and someone can get access to your
machines.
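
Requiring passphrase-protected keys is one mitigation; a sketch (key type and
comment string are illustrative):

    # generate an ed25519 key; enter a passphrase when prompted
    # -a 64 raises the KDF rounds, slowing offline brute force of the passphrase
    ssh-keygen -t ed25519 -a 64 -C "ops-laptop"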

~~~
middleca
Came here to say +1 to this, definitely employ a bastion host and make sure
that's the only way to SSH to your servers. This can be a little tricky to do
correctly if you don't have someone on your team, but it's a valuable way to
reduce your surface area to monitor.

Installing fail2ban is also a very basic / smart way to discourage brute force
SSH attacks on your boxes. Also you could try piping your SSH logs into
something like papertrail / slack, so you have clear visibility into who's
logging into your servers, etc.
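
A minimal sketch of shipping SSH logs off-box with rsyslog (the collector
hostname is illustrative; "@@" means TCP, a single "@" means UDP):

    # /etc/rsyslog.d/90-remote.conf -- forward auth logs (where sshd logs)
    auth,authpriv.*    @@logs.example.com:514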

~~~
figgis
On fail2ban: I have had more success stopping attacks quickly by using
SSHGuard. Quicker, easier setup; easier to understand; etc. Is there a
significant reason to use fail2ban over sshguard?

------
devishard
To people focusing in on securing SSH: just because the person has SSH access
doesn't mean that they got it through SSH. It's possible that they brute-forced
the password or whatever, but there's a ton of attack surface on a website and
many ways they could have gotten access. If they got it through, for example,
an XSS attack and got the SSH password/keys, securing SSH doesn't stop them
from doing the same thing again.

~~~
laumars
I was one of the people offering advice on hardening SSH. I mentioned
firewalling sshd to a subset of trusted IPs - which would still secure you
against the above attack. I also suggested SSH keys should have passphrases,
which would also mitigate against this attack (providing the passphrases are
complex enough). And if anyone has root permissions to disable the firewall or
change user login credentials, then they don't need to enable SSH (much easier
ways to gain interactive shells).

I also love how you can take genuinely helpful posts - after all, it's better
to harden SSH _regardless_ of whether this specific attack initially came
directly from SSH - and somehow turn those contributions into something
negative. God bless internet messageboards.

~~~
devishard
> I mentioned firewalling sshd to a subset of trusted IPs - which would still
> secure you against the above attack.

No, it doesn't secure you against the above attack. If they can use an XSS to
get full access to the server and get the SSH key, then hardening SSH does
pretty much nothing, they just whitelist their IP and continue accessing the
server. We don't know that they got SSH access via SSH. Until we understand
how they gained access to the server, blocking off other means of access that
might have nothing to do with how they gained access does nothing.

> I also love how you can take genuinely helpful posts - after all, it's
> better to harden SSH regardless of whether this specific attack initially
> came directly from SSH - and somehow turn those contributions into something
> negative. God bless internet messageboards.

Sure, hardening SSH is always a good idea, but until we actually understand
how the SSH access was obtained, we don't know that it fixes the immediate
problem. It absolutely is negative to give people information that persuades
them their problem has been solved when it hasn't been solved.

I don't see any reason for you to take my post personally. It's not an attack
on you, it's just pointing out that we need more understanding of the problem
to actually solve it. Don't shoot the messenger.

~~~
laumars
> _No, it doesn't secure you against the above attack. If they can use an XSS
> to get full access to the server and get the SSH key, then hardening SSH
> does pretty much nothing, they just whitelist their IP and continue
> accessing the server._

Someone else suggested that, and my reply was that if they already have access
to remotely execute code as root, then they can easily gain root shell access
with much less effort than doing the above workarounds to enable the
system's default OpenSSH server (and there are plenty of other ways to execute
remote shells without needing SSH).

> _Sure, hardening SSH is always a good idea, but until we actually understand
> how the SSH access was obtained, we don't know that it fixes the immediate
> problem._

Since you agree that hardening SSH is a good idea, it doesn't matter how
the attacker gained access to SSH, you'd recommend they review the security of
their SSH server regardless. So your latter statement becomes irrelevant to
the former statement.

> _It absolutely is negative to give people information that persuades them
> their problem has been solved when it hasn't been solved._

I never once suggested this would fix their problems. In fact my language was
very clear that my advice would harden against SSH attacks, specifically.
However, they have asked for next steps, and while other people have rightfully
focused on the forensics side of the investigation, I have complemented their
advice with tips on hardening SSH. One recommendation doesn't have to override
another :)

> _I don't see any reason for you to take my post personally. It's not an
> attack on you, it's just pointing out that we need more understanding of the
> problem to actually solve it. Don't shoot the messenger._

The server is compromised thus it's already too late to "solve". However that
doesn't mean people cannot offer advice on hardening against potential future
attacks on new or existing infrastructure in conjunction with analyzing the
point of attack on the compromised (and hopefully now isolated) equipment.

The snarky tone of my replies is because you have not offered tips that take
precedence over my own recommendations; if you had, I would have taken your
criticisms seriously. But as it stands you're currently just disagreeing for
the sake of disagreeing. Which is something I see far too often online, often
from people who want to seem knowledgeable without imparting any actual
knowledge, thus mitigating the risk of looking stupid themselves. Which is
also why so many experienced individuals tire of contributing to public
forums.

You say my advice doesn't solve the OP issues, well neither do your posts. So
what was the point in posting them? I just see it as an odd kind of cyclic
logic.

~~~
devishard
Okay sure. They can figure out how the attacker got access and fix that, AND
harden SSH. They should also audit their logs, install updates, use a linter
on their JavaScript, use version control, use a library to sanitize SQL
inputs, force HTTPS, do code reviews, and 100 other good development practices
that have nothing to do with the problem at hand. I'm glad we agree on that.

Given that the OP doesn't know how to address their immediate problem,
however, posting a bunch of random good practices is probably not very
helpful.

I contributed something constructive: I recommended figuring out what the
vulnerability is and fixing that over fixing random things and hoping you fix
the problem by chance.

> I never once suggested this would fix their problems. In fact my language
> was very clear that my advice would harden against SSH attacks,
> specifically. However they have asked for next steps and while other people
> have rightfully focused on the forensics side of the investigation, I have
> complimented their advice with tips on hardening SSH. One recommendation
> doesn't have to override another :)

You said, "I was one of the people offering advice on hardening SSH. I
mentioned firewalling sshd to a subset of trusted IPs - which would still
secure you against the above attack."

If you want to claim you didn't say anything wrong and you were intending to
suggest your solution _in addition to_ the solution that actually solves the
problem, that's your prerogative, but people can read the post history which
shows that isn't true, so it would be more dignified to just admit you made a
mistake. Nobody cares that you made a mistake--I'm not attacking you for that.
I'm just trying to put up the correct information, since you didn't. It's not
about you, so there's not much reason to take it personally, and you're not
making yourself look good by claiming you didn't make mistakes that everyone
can read.

~~~
laumars
My point was correct for the specific attack you broadly described. Your
example required a web server attack that allowed arbitrary code execution and
privilege escalation, which is a hugely specific attack, and it's pretty fair
to say it's unlikely (in the case of gaining root access and then choosing to
enable SSH, which I'll get to). Furthermore, and at risk of sounding like a
broken record, if an attacker can remotely execute code as root then they have
absolutely no need to enable SSH, for they already have far easier methods of
firing off a remote shell. (To be honest, they don't even need root to
accomplish this.)

This is not a mistake; this is something I've done in practice when auditing
security at work. (Preventative pen testing rather than post-breach forensics
such as this situation calls for. But as I said before, that doesn't mean you
cannot take lessons from the former in conjunction with the latter.)

Anyhow, we really are just arguing about arguing now, which is an utterly
pointless waste of both our time.

~~~
devishard
> My point was correct to the specific attack you broadly described.

But you didn't correct it, you proposed a solution that didn't address the
attack I described.

> Your example required a web server attack that allowed arbitrary code
> execution and privilege escalation, which is a hugely specific attack, and
> it's pretty fair to say it's unlikely (in the case of gaining root access
> and then choosing to enable SSH, which I'll get to).

The XSS attack is just an example of a vulnerability that wouldn't be
addressed by hardening SSH. There are plenty of other vulnerabilities that
wouldn't be addressed by hardening SSH.

> Furthermore, and at risk of sounding like a broken record, if an attacker
> can remotely execute code as root then they have absolutely no need to
> enable SSH, for they already have far easier methods of firing off a remote
> shell. (To be honest, they don't even need root to accomplish this.)

There's a good reason to demonstrate SSH access even if they have root access:
they might want to show their capabilities without exposing how they gained
those capabilities (because knowing how they gained those capabilities would
allow the OP to fix the problem).

> This is not a mistake; this is something I've done in practice when auditing
> security at work.

Just because you've made mistakes in practice when auditing security at work
doesn't mean they aren't mistakes.

This isn't even time for a security audit. OP first really needs to do some
forensics. A security audit should happen, but it can wait until the
vulnerability has been found and fixed.

~~~
laumars
> The XSS attack is just an example of a vulnerability that wouldn't be
> addressed by hardening SSH. There are plenty of other vulnerabilities that
> wouldn't be addressed by hardening SSH.

Well durr! We could be here all night listing things that SSH hardening
wouldn't secure against. I never suggested it was a silver bullet to fix all
security needs (which seems to be the faux argument you're accusing me of).

I'm loving all the personal attacks too. You cannot compete on an intellectual
level so you make baseless accusations about my professional capabilities
instead. God bless Internet message boards....

~~~
devishard
> Well durr! We could be here all night listing things that SSH hardening
> wouldn't secure against. I never suggested it was a silver bullet to fix all
> security needs (which seems to be the faux argument you're accusing me of).

That's not what I accused you of--people can read our previous discussion and
see both what I _actually_ accused you of saying, and also that you said what
I accused you of saying.

> You cannot compete on an intellectual level so you make baseless accusations
> about my professional capabilities instead.

I did call your course of action a mistake, but that's not an attack on your
professional capabilities. Everyone makes mistakes. I'm sure you're reasonably
skilled at your job.

I think you'd enjoy this conversation a lot more if you didn't take my
disagreement with your strategy as a personal attack. But if that's what you
want to do I can't stop you.

------
gnode
Given that no threat or demand was made, this sounds like they were just
notifying you of a vulnerability.

Does your company / product have an official responsible disclosure policy?

------
wvh
I did this a few times in the late '90s when I was a script kiddie eager to
learn. I was naive enough to believe that I couldn't get in trouble that way.
(I actually once got a job offer as a result.) From a security point of view
the internet is a slightly more scary place these days though, with criminal
gangs and governments being major threats.

If it's just a nice kid, you got lucky; I would still hire someone with a clue
about security to thoroughly check your infrastructure at least once. But if it
turns out to be a hook for something more nasty, you better get some serious
help. Don't trust anything on that server anymore, and if you have other,
similar servers running, suspect them to be compromised too.

Also consider if you have any private (i.e. customer) data on that server that
might get you in trouble if a third party has/had access to it. You can
(re)install servers in seconds, but data is out there forever once leaked.

PS: Did they obtain any SSH keys, or did you have password authentication
enabled in the SSH configuration? (Don't do that!)

------
bigato
If you and the people who work there don't know what to do, you most likely
won't solve your problem by asking this generic question here or elsewhere.
I'd suggest making an honest assessment of the value of your business and
information, and depending on the numbers, considering the possibility of
hiring a professional who can help you secure your systems.

------
DominoTree
A lot of the comments here mention what to do with the server, but it's often
more probable that the compromise happened on a workstation belonging to a
user who had access to that server.

------
alpb
I think this is what happens when your service does not have a responsible
disclosure and reward policy. Every fairly important service should have a
security page, and you should think in advance about what you should reward in
each scenario. Because it will happen, and you'd better give people a good
reason to stay ethical.

------
dkns
Can't you just ask them how they did it and patch it?

------
amorphid
Ask your employees which one of them is trolling you. Maybe you'll get a
giggle and find out it was a prank.

~~~
danieltillett
This was my thought too.

------
caspereeko
Why would you have a production system exposing SSH to the public?

If you must, at least do these steps:

\- Disable password SSH login.

\- Install a rootkit scanner, like rkhunter, and check if your networked
systems are infected. S/he might have gained access to other instances in your
infra.

\- Use port scanning on all your instances and check if there is any suspicious
RPC port open that you are not familiar with.

\- Enable unattended security upgrades.

\- Check the vulnerability listings for your internet-facing services, like
nginx, apache, HAproxy, etc.

\- Forward all your syslog logs to a remote system so the attacker can't clean
up his/her traces after establishing the attack.

\- Enable automatic blockers like fail2ban.
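
For the unattended security upgrades item, a Debian/Ubuntu sketch (package and
tool names are per those distros):

    # install and enable automatic security updates
    sudo apt-get install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades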

~~~
riskable
There's nothing inherently wrong with exposing SSH on your production servers
to the Internet. It is one of the most secure services that can run on any
given host. Surely it's more secure than your web server or application
daemon(s) which handle the other publicly-facing functions of your production
host.

If you have the infrastructure and capability to put it on a different network
by all means make it inaccessible but for most businesses there's really no
other option anyway.

Simpler (better, IMHO) advice would be to make key-based authentication
mandatory for your production servers. That way a brute force attack is
unlikely to ever succeed. It also rules out stealing passwords since the
attacker would need to obtain the entire SSH key before they could login.

Having said that, we don't know how the attacker got in. They could have
created an account for themselves or changed the root password/system
configuration via a vulnerability. If that's the case they could modify
sshd_config so that it listens on the public IP which would make "don't expose
it to the public" moot (firewalls notwithstanding).

------
Daneel_
I am a forensic investigator and security consultant working for a well
established organisation.

I'd recommend engaging a forensic consultant from a reputable company. The
mish-mash of advice here is mildly useful if you know what you're doing, but
since you don't, the only way to be somewhat confident that you're no longer
compromised is to perform proper scoping and investigation.

If you've already wiped/rolled/overwritten logs then this instantly becomes
more difficult. I would want to see a saved copy of your firewall
configuration too, for analysis in Nipper or similar.

I'd recommend the free Redline forensic tool from Mandiant if you're unwilling
to hire a consultant.

------
bArray
>An anonymous person (under a nickname) sent a screenshot as proof that they
managed to gain SSH access to our production server. The screenshot is legit;
the information displayed in it could not be faked without actual access.

Could you upload a redacted version of the email? Get rid of anything that
identifies you or the company - the community here might be able to help.

I agree with the majority of the advice here - get the server down immediately
after preparing a new, security-enhanced version.

Assuming the following:

* You use a key for access and not a typed password.

* You haven't done anything to your distro to compromise security.

There are probably two likely routes of thinking here:

* The key leak was internal to the company and you need to figure out who it was.

* The software your company wrote introduced a security vulnerability.

For the first one, you need to devise a method of catching them out - you at
least want to know if it was an internal or external source. Assuming this
person is smart, they'll probably be using a VPN or Tor, so tracking their IP
will be useless. Try the following:

* You could set SSH to only accept incoming connections from your company (and your home IP as backup). Allowing SSH connections from arbitrary places is generally not a good idea.

For the second one, it would be good to know what server software/libraries you
are using, as well as versions. Run `sudo nmap -sS -sV <IP>` against your own
server so we can see what you have running and possible entry points.

Also:

* Patch SSH to send you all commands typed - if you've been compromised you at least want to know to what extent.

* Assume whatever data was on the server is now compromised - databases, passwords, usernames, emails, bank accounts, etc. You need to inform your customer base if their details are leaked. Internal or external, they've already copied everything (I would have).

These are just some initial ideas - no doubt somebody will shoot them down,
but hopefully you can find some use in them.

------
mercanator
I think your best bet is to follow the advice here to offer a bounty, but also
start setting up honeypot servers for each of your public network facing
services. This way you can do surveillance as this hacker (or others) gains
access to your system. Good honeypot monitoring software should tell you where
they got in from, what directories they accessed, and what keystrokes were
used.

------
mst
So, to summarise the thread -

1) talk to them and reward them

2) don't talk to them or reward them

3) you're an idiot

4) hire somebody expensive

5) don't hire somebody, they're a rip off

Thanks, HN, that clears everything up.

------
calebm
Call an Incident Response company like Mandiant.

~~~
lucb1e
Many comments say "find & hire an expert", which is kind of vague and would
probably take a few days. This comment is actually really good, I think, and
way too low in the thread. An incident response team (of which there are
plenty, and kudos for naming one; that makes it even quicker and more concrete)
will know what step number one is to do right now (e.g. "make an image and
bring it down"), before even officially being hired to do further incident
response and analysis.

------
0xmohit
You should ensure that (1) password authentication is disabled, and (2)
existing ssh keys are rotated (if only password authentication was used
earlier, generate keys [2]).

Subsequently, refer to "Essential Security for Linux Servers" [0] and "7
Security Measures to Protect Your Servers" [1].

[0] [https://plusbryan.com/my-first-5-minutes-on-a-server-or-
esse...](https://plusbryan.com/my-first-5-minutes-on-a-server-or-essential-
security-for-linux-servers)

[1]
[https://www.digitalocean.com/community/tutorials/7-security-...](https://www.digitalocean.com/community/tutorials/7-security-
measures-to-protect-your-servers)

[2] [https://www.digitalocean.com/community/tutorials/how-to-
set-...](https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-
keys--2)

------
dustingetz
Since this is most likely a script kiddie (assuming nobody cares enough about
you to deliberately target you), if it were my company I would 1) lawyer up,
2) ask him how he did it - most script kiddies would love to tell you how
awesome they are - 3) fix the problem, and 4) give him a couple BTC on the
condition that he blog about it anonymously.

------
sturmisch
I agree with the individuals who say that you need to rebuild your server. You
really have no idea what they've done while they were logged in or how long
they had access. So take the time to be sure a replacement build is secure and
then cut over.

By the way, unauthorized access to a computer is a crime in every jurisdiction
that I'm familiar with. It is not advisable to "test" someone else's computer
and then provide them with proof that you accessed it in an unauthorized way.

If you communicate with the person and think that they deserve some sort of
reward, that's your business. It is a nice thing to tell a neighbor that their
door is unlocked -- but it's illegal to step inside the door and take a look
around.

~~~
veidr
That's true, but it's also a reason why people who have done so are wise to
send anonymous proof rather than "Hi there, my name is Dale McGuyver and I
hacked your server..."

------
giis
My suggestions: if your server has confidential/money-related info, take down
the server. If not, wait, because even if you bring up a new server, how do
you know he won't crack it again?

\- Check your /var/log/messages, audit.log, and sshd logs

\- Check lastb & last command outputs

\- Take a dump of network connections (netstat)

\- Find out his 'tty' and spy on him with something like sysdig or using
strace ([http://serverfault.com/a/423666](http://serverfault.com/a/423666))

_Most importantly, do these after turning off bash_history, so that the
attacker won't see that you are gathering information._

I assume you are running a Linux server.
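
A quick sketch of the second item (standard util-linux tools; -F prints full
login/logout times):

    last -F | head -50     # recent successful logins, from /var/log/wtmp
    lastb -F | head -50    # recent failed attempts, from /var/log/btmp (root)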

~~~
Symbiote
> \- Check lastb & last command outputs

I wasn't aware of lastb, thanks.

I'm impressed that my home server has logged over 650,000 failed logins since
1 July, and a couple of machines I administer in a university over 300,000
each. That's every three-four seconds for the home server.

That's quite a lot of bandwidth, worldwide.

~~~
giis
Some kind of auto-bot is attacking; fine-tune the firewall settings :)

------
Practicality
You should consider what information was available on that server. Did the
code contain any passwords for other systems (such as an internal DB, or
another production machine)? If so, those systems should be considered
compromised now too.

On the other hand, if you are just getting started it may be that production
doesn't have much on it yet, and you can just nuke the thing and start over.

I understand recommending security experts is easier, so that if you're wrong,
you can just blame the expert, but you may be able to make the decision
yourself if you are aware of everything that is on that machine.

------
Mahn
Talk to him, find out his motivation. You may find out that he's not out to
get you and breached the server just for fun and recognition, in which case
it'd be easier to sort out if you just talk.

------
acd
I have had to clean up something similar in a previous job, helping to track
down a larger intrusion. Call the police and ask for advice.

Here is some server advice, given that the server does not have ransomware.

    
    
      1) Save the memory of the machine (/dev/mem) to disk if possible; it may hold proof of how the hacker got in.
      2) Save the process list of the machine.
      3) Save the netstat -plunt TCP/UDP output to a file; you want to see which TCP/UDP connections the machine has before it's turned off.
      4) Check for any strange processes in the ps list.
      5) If you see any strange process, visit /proc/PID and check CWD (current working directory) and the start command.
      6) List kernel modules with lsmod and dump the output to a file.
      7) Power off the machine with the power button; the reason is that you do not want to run any of the normal powerdown scripts.
      8) Take a system image of the disk to an external hard disk for proof. It is important that you do not tamper with file access, modification, and change times.
      9) Pull out the network cable.
      10) Power on the server again and back up all data files to external media.
      11) Read the log files.
      12) Wipe the hard disk and reinstall the operating system at the latest version.
      13) Change root and account passwords.
      14) Use an SSH key.
      15) Lock down ssh port 22 to known IP numbers.
      16) Apply all security patches to the operating system and applications; make sure applications are running the latest patched release.
      17) Restore the data backup.
      18) Deploy a host intrusion detection system, HIDS.
      19) Send logs to an external machine.
    

Optional steps

    
    
      20) If the authorities suggest it, try contacting the person and ask how he/she got in.
      21) Make a copy of the system image from step 8.
      22) Examine the system image and logs with something like Autopsy from The Sleuth Kit; you can check last access times and read web and ssh logs.
      23) Ask upstream ISP providers if they have ssh connection logs, and ask if you can get them.
    
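A minimal evidence-capture sketch for steps 2, 3 and 6 (output paths are
illustrative; run as root before powering off). Note that on modern kernels
/dev/mem access is restricted, so step 1 typically needs a tool such as LiME:

    ps auxww        > /root/ir-ps.txt        # step 2: full process list
    netstat -plunt  > /root/ir-netstat.txt   # step 3: sockets with owning PIDs
    lsmod           > /root/ir-lsmod.txt     # step 6: loaded kernel modules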

Additional resources:

\- Mozilla, how to secure SSH:
[https://wiki.mozilla.org/Security/Guidelines/OpenSSH](https://wiki.mozilla.org/Security/Guidelines/OpenSSH)

\- Securing Debian, section 5.1 on securing services:
[https://www.debian.org/doc/manuals/securing-debian-howto/ch-sec-services.en.html](https://www.debian.org/doc/manuals/securing-debian-howto/ch-sec-services.en.html)

\- SANS top twenty security controls:
[https://www.sans.org/media/critical-security-controls/critical-controls-poster-2016.pdf](https://www.sans.org/media/critical-security-controls/critical-controls-poster-2016.pdf)

------
iask
Thank the person. It wouldn't hurt to ask for their suggestions. Enlist an
expert on your team. Not every software developer is a system administrator,
and not every manager is an expert developer. A few hours of YouTube videos
doesn't make one an expert.

I am certain you will receive very good advice here in the comments.

Technically speaking - you should give a little more data on your setup. A
high level view would suffice. That way folks here can narrow their
suggestions down.

------
theaustinseven
They got SSH access? Steps I would take are:

1\. Isolate the machine.

2\. Rotate your keys.

3\. Set up ssh via ssh keys and remove ssh passwords.

4\. Now do whatever you want with the isolated machine.

If you already are using ssh via ssh keys then either one of your employees
has been hacked or "the call is coming from inside the house".

I would really suggest, as others have, trying to get the mysterious
individual to tell you how they did what they did.

------
ing33k
Others have covered possible ways you can tackle the situation; hopefully
you will get through this.

Since you have stated that you don't have a dedicated sysadmin, it's better to
use a platform like Heroku to host your app.

I am not suggesting that it will make your website/app inherently secure, but
at least it removes a lot of pain points.

------
siffer
Consider hiring a security engineer for future issues. As for the current
issue: are you a target for any known reason? Is there anything of value on
the server?

If not, it's highly possible someone in your group who already has access
could be involved.

Otherwise, maybe they serendipitously found you.

------
alexjray
More likely than not, he/she is reading this thread... their nickname might
even be on HN, ha!

------
nwilkens
Get professional help.

------
nitrogen
Don't forget the legal obligations you may have to your customers, depending
on what kind of data transits through or is stored on your servers. Different
states have different requirements, so you'll want professional legal advice.

------
lawnchair_larry
Whoever sent it has ego issues (common among hackers). Play on that. Act
impressed with their skills, and concerned with how you may have exposed the
company to attack, and ask for their advice. Email them as the sysadmin, not
the CEO.

------
tachion
More and more I get the feeling this is the result of the giant "serverless
fairy tale: ops skills and competent infrastructure people are no longer
needed" illusion that a huge part of the IT market tries to live in...

------
llcoolv
There is some missing information here - the most important thing is the
intentions (and possibly identity) of this person. Did he leave any clues as
to what those might be?

------
ReFruity
I wonder: if that person had just emailed them the vulnerability, it would
have been patched within a day and he would have gotten only $1000.

------
unstatusthequo
Are you sure you have no disgruntled employees, or any whose individual
endpoints may have been compromised (and had keys)?

------
ewilliamsh
Try to "hide" your servers. Simple things such as vpn's and changuing ips

------
itstripe
Re-install the server.

On the new, re-installed server:

1. Change the SSH service port to a non-default one.

2. Do not allow the root user to connect remotely (change the sshd
config).

3. Create a new user which you will use for administration, instead of
logging in as root.

4. If possible, restrict which IP addresses are allowed to connect via
SSH using a firewall.

~~~
datalus
I would also add to this: block password authentication and use SSH keys
only. A combined sketch of these sshd changes follows.
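
Put together, the parent's steps plus this one might look something like
the following; the port number, user name, and network range are
placeholder values, not recommendations for the OP's specific setup:

    # /etc/ssh/sshd_config -- illustrative values only
    Port 2222                    # step 1: non-default port
    PermitRootLogin no           # step 2
    AllowUsers opsadmin          # step 3: 'opsadmin' is a made-up admin user
    PasswordAuthentication no    # keys only, as suggested above

    # Step 4: allow SSH only from a trusted range, e.g. with ufw:
    #   sudo ufw allow from 198.51.100.0/24 to any port 2222 proto tcp
    # Validate the config, then reload:
    #   sudo sshd -t && sudo systemctl reload ssh

Keep in mind that moving the port mostly cuts scanner noise in your logs;
it is not a security boundary on its own, so steps 2-4 are the ones that
actually matter.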

------
cjcampbell
Came across your question moments after facilitating a security-incident
dry run for a SaaS with a 6- or 7-person core team (and a lot of
sensitive data). Since many of the responses focus on the technical and
reward/ransom aspects, I figured I could offer some parallel thoughts on
how to handle this from the business and liability standpoint.

There is quite a bit in this response, so please don't hesitate to ask if
you want to talk it through directly. I can offer 30 minutes to provide
some connections and make sure you feel equipped to handle it with your
current team. No cost, of course, and no nagging to get you on as a
paying customer!

1. Insurance - Assuming you have a policy geared toward technology
companies, you may have coverage for "cyber" incidents. Look at your
policy or call your broker to find out how to initiate coverage.

a) You probably need to tell them sooner rather than later that you are
investigating an incident.

b) Ask for referrals to the following specialties: tech/privacy law,
forensics or security operations, and breach resolution vendors.

c) Find out whether the policy requires you to obtain a referral to use
paneled providers (most do, in my experience).

d) Find out any other requirements that might determine your eligibility
for coverage down the line.

2. Legal (assuming an American or similar legal system) - Given what I
know about your situation, I'd suggest that you retain a lawyer
experienced in security and breach response. This doesn't have to break
the bank. The key point is to establish privilege and get help from
someone who can navigate the statutory/liability landscape (it's a shit
show). I can recommend one or two that are sensitive to the needs of
small-business clients.

3. Forensics - If you have insurance coverage that will apply to this
scenario, there is little reason not to contact a forensic or security
specialist to validate your cleanup and ensure that the infrastructure is
totally buttoned down going forward. Definitely follow the rest of this thread
and digest the recommendations you're getting.

4. Customers/Stakeholders - Your customers and stakeholders will
appreciate clear communication, whether during the investigation/response
or after the fact. The lawyer I mentioned above will be able to help you
think about communication with these parties, as well as timing. If the
incident turns into something more than it is right now, you'll be
thankful for having thought this dimension through carefully.

5. Intruder - Don't react carelessly. You don't know their motives or
further plans. Is this a gray hat who just wants you to shore up your
system? I'd be inclined to work with them and possibly even offer a
reward, but I wouldn't go there without consulting legal counsel. Is this
more about taunting, or warming you up for a ransom request? Then get
some folks with experience in your corner from the start. And be aware:
your insurance might even cover a ransom request.

6. Law Enforcement - Most of us prefer not to go here, but keep it as an
option. Talk to your attorney about it.

7. Documentation - Start keeping detailed notes. What are the steps you're
taking to assess the situation? Who are you consulting? How are you measuring
the risk? How are you preventing further damage? A breach coach can help you
ask the right questions and record the information most needed to protect you
and your customers.

8. Notification - If you're in the US or EU, you very likely have breach
notification requirements. Unless you are certain the intruder did not take or
view any protected data (pretty much anything personal or payment related), I
would treat the incident as a breach. In the US, you have 47 different state
laws dictating notification requirements plus a few federal. Breach resolution
services to the rescue if you have insurance. Breach coaches and lawyers may
handle the little cases themselves.

9. Follow up -

a) Summarize the process in a short report and keep it in case you
discover related damage down the line (or get sued).

b) Close the incident out with insurance and provide any documentation
they need to process the claim.

c) Put a basic, written, practical plan together.

d) Build a relationship with a knowledgeable insurance broker (I can
recommend a few).

e) Review or update your insurance policy (or buy one).

f) If you can afford it, keep a tech attorney on retainer.

g) Set up a bug bounty through an established program (and if this
intruder is well-intentioned, encourage them to use it in the future).

~~~
cjcampbell
FYI, if you do want to chat: our new website will include resources, but
it hasn't launched yet. For now, hit me up on
[https://www.linkedin.com/in/clintonjcampbell](https://www.linkedin.com/in/clintonjcampbell).
Our Twitter account, @quirktree, is operational though not announced yet,
and our website, quirktree.com, should be ready by next Monday with
online contact options.

------
leoh
Could this anonymous person be an employee with access to the server?

------
darkhorn
Thank him and reinstall the server from scratch.

------
leonardzen
Don't be a cheap ass and hire a security guy.

------
riprowan
What account did they log into?
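
If the logs on the box can still be trusted at all (a competent intruder
may have scrubbed them), something like this is a quick first look; the
log path is the Debian/Ubuntu default:

    # Recent interactive logins recorded in wtmp
    last -a | head -20

    # SSH authentication events (use /var/log/secure on RHEL/CentOS)
    grep -E 'sshd.*(Accepted|Failed)' /var/log/auth.log | tail -50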

------
ewilliamsh
Hide it. Look for ways to "hide" your server , simple things such as using a
VPN or change IPs

------
dogma1138
Treat it as the worst-case scenario: even if you think the person who got
in was a "good guy", someone else might have also visited your box with
other intentions.

0) Check your own policies (if you do not have an incident response
policy, it's a good time to start one) and the compliance and
regulation/legislation that applies to you, specifically around due
diligence and notification/disclosure requirements. Consult a legal
adviser, especially about dealing with whoever informed you of the
breach; I would not advise dealing with the notifying party without doing
so, and do not provide them with any compensation until you have spoken
to a lawyer.

1) Revoke all credentials tied to that system, revoke all credentials of
all users and services that have accessed that system, and revoke all
secrets that were stored on that system or that the system had access to,
like certificates and encryption keys.

2) Do not turn off the system unless you can preserve memory. Take a full
snapshot/image but keep the machine running; keeping the memory intact is
important for forensics, and a reboot can erase a lot of evidence.

2.1) Isolate the system from your main network with minimal interruption.
If you can prevent the NIC on the link from going down, do so; if it is
at all possible to mirror the port on your switch, do it and capture
packets for at least the next 24-48 hours (see the capture sketch after
this list).

2.2) Identify any additional logs from other systems (load balancers,
routers, firewalls, etc.) that could have further information regarding
the breach, and preserve them.

3) Do a full integrity analysis and inventory on one of the cloned
images/snapshots, compare it to your build policy/template, and identify
any discrepancies (a cheap integrity-check sketch follows this list).

3.1) Check your current build template/configuration against your own
policies (if they do not exist, it's a good time to start making them)
and against best practices, and identify any gaps.

3.2) If you received a detailed explanation of how the hack was done,
check what in your policies could or should have stopped it; if nothing
would have, implement a new control and add it to your build/config
template.

3.3) If you have successfully identified how they got in, review any
other systems that are accessible via the same vector.

3.4) Attempt to verify whether any other systems were or could have been
compromised during the breach/vulnerability window, and based on your
risk assessment make a call on doing a full review/rebuild of those
systems too.

3.5) If this was a common vulnerability, get a vulnerability scanning
tool (e.g. OpenVAS) and scan all your systems.

3.6) If this was more "social engineering" or "I found your SSH creds on
GitHub", then policies and awareness training should not be taken
lightly.

3.7) If this was some sort of super duper NSA-grade 0-day (unlikely ;)),
notify the maintainer of the SSH software you are using about the breach.

4) Based on the outcome of step 3, rebuild any machines that were
compromised. If the vulnerability cannot be fixed immediately, implement
mitigating measures (restrict SSH access, implement 2FA, implement a jump
box, perform active logging on all connections to the vulnerable
machines).

4.1) If you are required to, or simply wish to, have a full forensic
analysis of the entire incident performed.
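
For 2.1, a minimal capture sketch, assuming you can run tcpdump on the
mirror port; the interface name and output path are placeholders:

    # Capture full packets on the mirrored interface, rotating hourly
    sudo tcpdump -i eth1 -s 0 -G 3600 -w '/forensics/cap-%Y%m%d-%H%M.pcap'

For step 3, one cheap integrity check on a Debian-family image is to
compare installed files against the package manager's recorded checksums;
it will not catch a competent rootkit, but it surfaces crude tampering:

    # Verify installed files against dpkg's recorded md5sums
    sudo apt-get install -y debsums
    sudo debsums -c    # prints only files whose checksums have changed
    # (on RHEL-family systems, 'rpm -Va' is the rough equivalent)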

Depending on the outcome of step 0, you may be required to have a
qualified 3rd party perform the incident response and a forensic
investigation, especially if you do not have in-house "certified" people
to do it; some EU countries, like Germany, are especially strict about
this. You might also be required to notify your customers about the
breach. Even if you are not, I suggest you do it; otherwise you might see
"how I hacked XYZ" on HN in a few weeks and realize they are talking
about you.

------
Wellshit
And ask him how he did it. Reward accordingly.

------
Karunamon
Meta:

Why is this post flagged, and why is the usual "vouch" option missing?

~~~
sctb
It looks like the [flagged] label was incorrectly displayed—users haven't
flagged it.

~~~
Karunamon
Strangest thing I've seen on HN in a while. Anyways, it's gone now.

~~~
dang
It's a bug. The post was originally killed by a spam filter. We turned that
off and marked it legit, but some traces of spam-filter disapproval remained.
Should be fixed now.

------
eip
The part they left out is that they most likely have full access to one
of your or your employees' laptops. They probably just used your own keys
to access the server. Which means they probably have access to all your
personal info.

------
ajamesm
If your company got compromised, and your next step is to ask HN for advice,
maybe your company should not be in business.

------
ryanlol
>We are a small company and don't have any security experts, etc.

Find one.

~~~
Fuxy
Or contact the person if you can, ask them how they got in, and maybe
even offer a financial reward.

Since they contacted you anonymously and are not trying to extort you,
they're just trying to point the issue out, so there's no point in
overreacting.

~~~
ryanlol
How much would you trust that person? Enough to potentially risk your business
on them?

~~~
riprowan
You are already trusting that person enough to risk your business on them, if
that server is still up and running after finding the security hole.

~~~
ryanlol
It's not just about that server, though; who knows if the other servers
were compromised?

