
Ask HN: Best security practices for startups - sithu
Hi,

My partner and I plan to launch a healthcare-related web app in the coming months. We'll be hosting on AWS, with the database on an encrypted EBS volume, all connections over HTTPS, and we should have two-factor authentication by SMS. We're mostly using the MEAN stack.

I'm not technical, so I'd appreciate some guidance on best security practices that are relevant and feasible for a startup. I doubt we'll have anything financially useful to steal, but my main concern is avoiding leaks of private patient data, of which we might store a limited amount.

1. Is there a checklist/best practices guide somewhere? I'd like to avoid making obvious mistakes that would be embarrassing in retrospect, though I know it's hard to defend against someone skilled and determined.

2. Any experience with hiring a firm (like Matasano) for penetration testing? Rough estimate of cost? When is the right time to consider this?

3. How and when to start a bug bounty program? Is there a standard way to determine severity and payouts?

Thank you!
======
andrewjshults
FWIW - even inside a VPC on AWS, traffic isn't encrypted by default, so if
you're dealing with PHI, traffic between servers also needs to be encrypted.
Many databases support this out of the box, but if you're using something like
redis you either need to use IPsec or stunnel. Google's Compute Engine
platform does support encrypted network traffic, so that's a nice plus (we're
multi-cloud, so we're currently using stunnel and moving to IPsec).
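
As a sketch of the stunnel approach (hostnames, ports, and the cert path here are illustrative, not from this thread), the two ends would each carry a small config like:

```
; client host: app talks plaintext to 127.0.0.1:6379,
; stunnel wraps it in TLS toward the redis host
[redis-client]
client = yes
accept = 127.0.0.1:6379
connect = redis-host.internal:6380

; redis host: stunnel terminates TLS and hands plaintext
; to the local redis listening on 6379
[redis-server]
accept = 0.0.0.0:6380
connect = 127.0.0.1:6379
cert = /etc/stunnel/redis.pem
```

(Each section lives in the stunnel.conf on its respective host; redis itself never sees the TLS.)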

Lock out the root AWS keys as much as you can (ours requires a MFA token
that's stored in a safe) and only use IAM users with restricted permissions
for day to day operations.
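
As a minimal sketch, an IAM policy using the standard `aws:MultiFactorAuthPresent` condition key can deny everything to a credential that hasn't authenticated with MFA:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllWithoutMFA",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}
```

(`BoolIfExists` also catches long-lived access keys, for which the MFA key is absent entirely.)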

Everything should have an audit trail, preferably with all the logs shipped
off the servers to a centralized store (that way, if a server is compromised,
the attacker can't also edit/delete the logs).
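
A minimal way to ship logs off-box is an rsyslog forwarding rule (the log-host name here is a placeholder):

```
# /etc/rsyslog.d/50-forward.conf
# Forward everything to the central store. @@ = TCP; a single @ would be UDP.
*.* @@loghost.internal:514
```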

Script all your boxes through config management so that you can handle
updates/security patches in a uniform manner and quickly.
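
For instance, a small Ansible play (a hypothetical sketch, assuming Debian/Ubuntu hosts) patches every box the same way:

```yaml
# patch.yml - run with: ansible-playbook patch.yml
- hosts: all
  become: yes
  tasks:
    - name: Apply all pending updates uniformly
      apt:
        update_cache: yes
        upgrade: dist
```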

Restrict who has access to root/DB in production. When you grant access keep
an audit trail of why they have access and revoke it if it's no longer
necessary. Have a good development environment setup so people don't develop
the habit of developing against production.

Pentest + bug bounties are good. Once you get to a certain point you'll
probably also need to have a general security/HIPAA audit as well.

~~~
sithu
Great advice, thank you; will make a note of these things for when we start
deploying. At the moment, I think we will only need one EC2 instance attached
to an encrypted EBS volume with the database on it. We're not using RDS. When
you say encrypt PHI traffic between servers, do you mean like EC2 <-> S3?

~~~
andrewjshults
EC2 <-> S3 yes (this should be easy, as S3 has SSL support out of the box). The
bigger issue is stuff like redis (which purposefully doesn't support
encryption), which means you either need to be careful not to put PHI in redis
(e.g., use object IDs rather than the objects themselves, don't cache things
that might have PHI) or use something like stunnel (which doesn't play nicely
with redis), IPsec, or GCE.
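
A sketch of the object-ID approach (the redis client is stubbed with a Map here, and all names are illustrative):

```javascript
// Stand-in for a real redis client, just to show the shape of what gets cached.
const cache = new Map();
const redisSet = (key, value) => cache.set(key, value);
const redisGet = (key) => cache.get(key);

// Cache only an opaque record ID; the PHI itself stays in the encrypted DB.
function cacheVisitResult(sessionId, patientRecord) {
  redisSet(`session:${sessionId}:patient`, patientRecord.id);
  // NOT this: redisSet(..., JSON.stringify(patientRecord)) - that puts PHI in redis.
}

function lookupPatientId(sessionId) {
  return redisGet(`session:${sessionId}:patient`);
}

cacheVisitResult('abc123', { id: 'rec-42', name: 'Jane Doe', dob: '1980-01-01' });
console.log(lookupPatientId('abc123')); // 'rec-42'
```

If redis traffic is sniffed, an attacker sees only opaque IDs, which are useless without access to the database itself.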

I'd recommend encrypting from the boot volume up and not just your EBS
volumes. Otherwise you have to worry about things like PHI in logs, core
dumps, etc. being put onto unencrypted storage.
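
One way to get the root volume encrypted at launch (assuming a recent enough AWS API; device name and size are illustrative) is a block-device mapping like:

```json
[
  {
    "DeviceName": "/dev/xvda",
    "Ebs": {
      "VolumeSize": 30,
      "Encrypted": true,
      "DeleteOnTermination": true
    }
  }
]
```

passed to `aws ec2 run-instances --block-device-mappings file://mapping.json`.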

------
wilburlo
It's hard maintaining great security, because security and speed are usually
in direct opposition.

In terms of hardware/OS: turn off everything incoming except for HTTPS, SSH,
and ping (optional). Make sure everyone uses SSH keys (no passwords).
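
The keys-only SSH part is a few lines in sshd_config:

```
# /etc/ssh/sshd_config
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
```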

In terms of programming, getting security roles right is tricky at first, so
you want to be careful in describing how user roles and permissions work in
your site.

Create a staging server with test data that mimics your production site
(nearly exactly). Any penetration testing company will ask you to sign a "this
won't hurt anything" waiver before smashing up your server.

Another place to focus is how backups are copied, who can access the data,
etc..

This is a really big topic. When you apply, your insurance company will have
an excellent checklist.

~~~
insoluble
> Turn off everything incoming except for HTTPS, SSH, and ping (optional).

OP mentioned using AWS, in which case Amazon's built-in "Security Groups"
feature can be used for restricting access to the instance by port or possibly
by protocol. Naturally, however, one would not want any dangerous outbound
traffic, such as unencrypted/unauthenticated automatic updates, so there also
is merit in controlling which services and programs are running.
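
As an illustrative sketch (Terraform syntax, with a made-up office CIDR), a security group limited to HTTPS from anywhere and SSH from the office only might look like:

```hcl
resource "aws_security_group" "web" {
  name = "web"

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.0/24"] # office range only, not the whole internet
  }
}
```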

------
jerematasno
Sithu,

Here are a couple of resources that I tend to hand out to startups that we do
work for at Matasano. No charge :-)

Not trying to be a salesperson, but I feel like most startups get more value
out of sitting down with a security consultant for a couple of days and talking
about architecture and dev processes than they do from getting a full penetration
test. Like the presentations say, the big risk in the early days is lack of
interest, not security. I feel like a startup's big security concern is doing
something that's going to make them have to rewrite _everything_ later on.

[http://chris.improbable.org/2009/9/24/indie-software-securit...](http://chris.improbable.org/2009/9/24/indie-software-security-a-~12-step-program/) (old presentation from tqbf. We might one day
put it back on our blog. Don't hold your breath. Anyway, the slides and
presentation aren't great IMO, but the blog post is!)

[http://firstround.com/review/Evernotes-CTO-on-Your-Biggest-S...](http://firstround.com/review/Evernotes-CTO-on-Your-Biggest-Security-Worries-From-Three-Employees-to-300/)

------
panorama
I'm not too familiar with the field, but I'd also assess whether or not the
data you're acquiring needs to be encrypted/handled in a certain way due to
HIPAA-related compliance. That's potentially the most relevant worry you
should have if your developer(s) are decent enough.

~~~
sithu
HIPAA is surprisingly vague about the minimum standards for compliance,
calling encryption "addressable" instead of "required"[1]. I believe this goes
for both data in transit (HTTPS) and data at rest. I've seen some say that
HTTPS is required, but I can't find this on the gov site. My understanding is
that if you choose not to encrypt, the burden of proof is on you should
anything bad happen, to prove that implementing it was not "reasonable and
appropriate". Since I can't think of any circumstances where sending plain-text
patient info over the internet is reasonable, I'll choose to encrypt. The
other things for HIPAA compliance are complete audit logging, so you can see
who has accessed anything, and training of staff who have access to protected
health info.
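
As a sketch of what one such audit record might capture (the field names are illustrative, not prescribed by HIPAA):

```javascript
// Build one append-only audit entry for a PHI access.
// In production this would be shipped to a centralized, tamper-evident store.
function auditEntry({ userId, action, recordId }) {
  return JSON.stringify({
    ts: new Date().toISOString(), // when
    userId,                       // who accessed
    action,                       // what they did: 'read', 'update', ...
    recordId                      // which record, by opaque ID only (no PHI in the log)
  });
}

console.log(auditEntry({ userId: 'u-17', action: 'read', recordId: 'rec-42' }));
```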

Most HIPAA recommendations seem to be a good idea to do anyway.

[1]
[http://www.hhs.gov/ocr/privacy/hipaa/faq/securityrule/2001.h...](http://www.hhs.gov/ocr/privacy/hipaa/faq/securityrule/2001.html)

~~~
brudgers
Hire someone who has done HIPAA before.

------
jtfairbank
Check out Aptible ([https://www.aptible.com/](https://www.aptible.com/)) -
they do HIPAA compliance as a service.

------
dsacco
Hey Sithu, my name is Dylan. I work at Accuvant, a security firm like
Matasano. We work with a lot of the top tech companies and Fortune 500.

I sent you an email, and I'd be happy to answer any of your questions or help
you out (no charge). Down the line, if you decide you'd like to explore a
security audit, we can help with that too.

For now, I'll answer your questions:

1. I wrote a basic checklist for startups looking to improve their security.
You can find it here: [http://breakingbits.net/2015/02/28/security-for-startups/](http://breakingbits.net/2015/02/28/security-for-startups/). It's
not comprehensive, but I tried to cover the most common issues I saw with
startups. Ryan McGeehan also wrote a wonderful checklist for incident response
after something _does_ happen. They're two sides of the same coin -
preparation and damage control. Check that out here:
[https://medium.com/@magoo/security-breach-101-b0f7897c027c](https://medium.com/@magoo/security-breach-101-b0f7897c027c). Your specific company will have more to do for each
of these based on the context of your team, product and size.

2. It's difficult to give a good estimate of cost, and I'm not trying to be a
salesman here. It depends on length and scope of work. Are we doing code
review or just blackbox testing? Five days or ten? The entire application
attack surface or just a few critical pieces of functionality? Budget about
$10,000 for a white label pen test lasting a week, but it could be more or
less. I'm summoning 'tptacek here in the hopes he might have more nuance to
contribute to this answer.

3. How and when to begin a bug bounty program is similarly variable. I have a
lot of experience working with Series A companies, and bug bounties are a
personal specialty of mine (I have research directly on the subject). In my
opinion, you should not have bug bounties until you have at least one full-time
security engineer. You don't want to pay out a bounty for someone
reporting that your cookies lack an HttpOnly flag. On the other hand, server-side
request forgery usually warrants a payment. If you have developers who
could tell you the difference between those two in terms of both definition and
severity without looking it up, you're okay to have a bug bounty. If not,
don't pay people for their reports just yet. (This is personal philosophy from
my experience - many people will have different opinions.)

On the other hand, _always_ have a responsible disclosure program. It is
perfectly okay to not reward people for reporting security vulnerabilities.
I'll repeat this: _it is perfectly reasonable to not give out payments for
reports._ Don't let financial rewards enter the program until you have reached
a certain level of internal security maturity.

And no, there is no standard way to determine severity and payments. Google
and Facebook pay a lot for bugs that some companies wouldn't even accept. It
totally depends. That said, as a general guideline, if a reported bug (1) is
valid and (2) compromises users, it's worth something.

Like I said, I sent you an email. Let me know if you have any questions, I'd
be happy to help you with anything you need to know.

~~~
sithu
Thanks for your reply! Very helpful. Got your email, and will be reaching out
to you.

