SaaS CTO Security Checklist (sqreen.io)
341 points by vinnyglennon on March 19, 2018 | 58 comments



It's not a bad list, though it is a bit of a marketing tool for "Sqreen" (though, I guess if we did one, it'd come off as a marketing tool for us).

What I actively don't like is their use of "Seed", "Series A", "Post-Series-A". It's cutesy but, more importantly, their categories are mostly wrong.

Here's their "Series A" list:

* No shared accounts

* Centralized account management

* Centralized logs

* Server list

* DDoS protection

* IP restrictions for internal services

* Monitor metrics

* Public security policy

* Special security for non-tech staff

* Real-time monitoring/protection

* Customer 2FA

* Monitor suspicious activities

* Security-oriented test sessions

Many of these are things you want to be doing ASAP. That doesn't mean they take priority over product/market fit, but it does mean you're not waiting for a milestone to build them. For instance: most competent teams have centralized logs. Every competent team restricts access to internal services (and IP restrictions aren't the way to do that). If you don't get this stuff started early, it's a nightmare to start later.

Other things on the list are things that mature, profitable companies don't bother doing, because the ROI isn't high enough. Customer 2FA is an example --- if you need it, you probably know. Real-time security monitoring tools are another big one; chances are, if you're reading a checklist like this, your monitoring needs are too idiosyncratic to buy COTS.

Post-Series-A is worse:

* CloudFormation

* Incident Response

* Internal security policy

* Asset inventory

* Bug bounty program

* Security audits

* SDLC

With the exception of bug bounties, which you might not bother ever doing at all, these are all ASAP items.

You could also start a list of important things they're missing:

* Security testing integrated into CI/CD pipeline (rough sketch below)

* Automated routine network monitoring

* Segmenting AWS accounts

* Having a process to tag PRs for security review

* Adding a security page to your website
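
On the CI/CD item, a rough sketch of what "security testing in the pipeline" can mean in practice, assuming a Python project; the tool choice (pip-audit for dependency CVEs, bandit for static analysis) and the src/ layout are illustrative, not the only way to do it:

    import subprocess
    import sys

    # Fail the build if known-vulnerable dependencies or obvious code issues are found.
    # pip-audit checks installed dependencies against known CVEs; bandit does static
    # analysis of the application code (here assumed to live in ./src).
    checks = [
        ["pip-audit"],
        ["bandit", "-r", "src", "-ll"],  # only report medium severity and above
    ]

    failed = False
    for cmd in checks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            failed = True

    sys.exit(1 if failed else 0)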

This list reads to me like it was originally shrink-wrapped around a product, and then padded out with bullets linking to random websites to throw the scent off.


I would like to see each particular item tagged not as "Seed", "Series A", "Post Series A" but with more tangible events or metrics that would trigger needing to take those things seriously.

For instance:

---

"Follow an onboarding / offboarding checklist"

Do this when...

- you have ten or more employees,

- you have at least ten thousand user accounts, OR

- you have at least ten thousand dollars MRR

---

"Protect your application from DDoS attacks"

Do this when...

- you have publicly announced funding, put a major product launch or milestone in commercial press, or have publicized a key strategic hire

- a prolonged period of downtime (N minutes, or breaking three- or four-nines availability) would have a materially adverse effect on your business

---

"Use SSL certificates to secure people using your website"

Do this when...

- you are collecting any information from visitors to your website (logins, email addresses, phone numbers)

- you would not want the traffic from sessions browsing your website to be intercepted by third parties.

Of course, a lot of these things would be obvious, but there's a chance here to make this really accessible to the security ignorant or security illiterate.


For SSL, I think there is industry consensus to push for HTTPS everywhere; there is no argument for skipping it. With free Let's Encrypt certificates, it's not like you will spend a week implementing it: even without prior experience, a decent developer should have it up and running in half a day.


I agree with you, maybe the advice should be “do this immediately, regardless of stage or size”


This is really great input! Thank you.

We will definitely take this into consideration for the next iteration.


I’m the CTO at Sqreen and I do love Matasano (cryptopals... awesome crypto challenge https://cryptopals.com/). Realistically, security audits or bug bounties are not doable in seed startups, where most of the time no one has any security knowledge and no money :) Thanks for the missing items, we will update! By the way, this is open source, feel free to contribute: https://github.com/sqreen/CTOSecurityChecklist (not sure this is 100% in sync today with the version on Sqreen.io, we’ll get there this week).


I have only one thing to nag about: password complexity rules. Please do only a length check, and check submitted passwords against a dictionary of common/breached passwords so those get rejected. For password security, this was last year's hot document: https://www.nist.gov/itl/tig/projects/special-publication-80...
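
A minimal sketch of what I mean, in Python: a length check plus a lookup against known-breached passwords (here via the Pwned Passwords range API; the eight-character minimum and the API choice are just one reasonable reading of the NIST guidance):

    import hashlib
    import requests

    MIN_LENGTH = 8  # length is the main rule; no composition requirements

    def password_acceptable(password: str) -> bool:
        if len(password) < MIN_LENGTH:
            return False
        # k-anonymity lookup: only the first 5 hex chars of the SHA-1 hash leave the server.
        digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        resp = requests.get("https://api.pwnedpasswords.com/range/" + prefix, timeout=5)
        resp.raise_for_status()
        # Response lines look like "SUFFIX:COUNT"; reject if our suffix appears.
        return all(line.split(":")[0] != suffix for line in resp.text.splitlines())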


There could be a difference between password rules for your users and for your employees. The former should probably be more focused on usability, the latter on security.


Sounds reasonable, since you can (and should) enforce the use of a password manager by your employees.


> Realistically, security audits or bug bounties are not doable in seed startups, where most of the time no one has any security knowledge and no money

Ehhhhhhh...I disagree.

1. Most of my clients tend closer to seed stage than to well-funded.

2. You don’t need security expertise or money to run a good bug bounty program. You can start one immediately. There are enough high quality resources available for free on the internet (that are not content marketing) that you can learn most of the important unknown unknowns.

For example, I think this is excellent reading for any young company thinking about security: https://medium.com/starting-up-security/starting-up-security...


What do you do when the response to a bug bounty is "yeah, we already knew about that, and we're not planning to fix it soon because the consequences aren't high"? In my experience that's a pretty common scenario for early to medium stage startups.


It's not just a common scenario for early to medium stage startups. It's also a common scenario for every other business with a bug bounty program.

Sometimes, the consequences aren't high.

"Your CORS is configured to allow access from another domain, also owned by you."

"You can give yourself a redirect to any site by intercepting and modifying your own Host header."

"Your static blog on a separate domain from your actual site is accessible over unencrypted HTTP."

"If I zoom in on your web page, the text becomes blurry."

If your question was from the other end, "what do you do as the company when you get a report like this?", I say something like "We don't believe that this warrants fixing at this time. Thanks for your interest in our program, and we hope you continue reporting to us in the future!"


What I'm thinking of are things like "a paying customer can DoS you with a carefully constructed malicious input". That usually won't be a practical issue if you're small enough to know all your customers, but it has the potential to be very problematic if you incentivize people to find it.


This is usually addressed in your program policy. For example, look at https://hackerone.com/twitter :

> Accessing private information of other users, performing actions that may negatively affect Twitter users (e.g., spam, denial of service), or sending reports from automated tools without verifying them will immediately disqualify the report


Was this list also you? https://gdprchecklist.io/

The html is almost identical - is there a checklist-templating service that you used to build this?


It's inspired by this checklist.

You can get two code implementations here: https://github.com/sqreen/CTOSecurityChecklist https://github.com/sqreen/DevOpsSecurityChecklist


> The project is inspired by The SaaS CTO Security Checklist created by Sqreen.io


Ah, thanks! I didn't spot that on my first read through.


I worked on this checklist and your feedback is very appreciated.

You're right on all your points from a pure security point of view. We should be doing security as soon as possible. Unfortunately, the reality of building a startup is about finding product-market fit. Entrepreneurs are not incentivized to do security early on. The fear strategy our industry has been using for the last XX years has failed.

As security professionals, we need to help entrepreneurs and educate developers so they can find a good balance between building a business and building good security practices. This is the goal of this checklist.

We can't expect developers to spend days implementing security best practices before even having a business.


> As security professionals, we need to help entrepreneurs and educate developers so they can find a good balance between building a business and building good security practices. This is the goal of this checklist.

No offense, but that's not an answer to tptacek's point. I can't speak for him, but he probably agrees with this point. That's a soundbite, though: everyone would agree that "we security professionals need to help entrepreneurs help themselves", etc. The devil is in the details. He is critiquing the checklist's content, not the checklist.

To make this comment constructive, I’m going to provide a link to what I personally consider very high quality advice for companies, written on the blog of Facebook and Coinbase’s former director of security:

https://medium.com/starting-up-security/starting-up-security...

Beyond that, having worked directly with many founders of early stage companies for security, I have to say I disagree that they can’t think about security early on. Resources like the series of articles I’ve linked to show how to navigate that compromise effectively.


Everything Ryan McGeehan writes is amazing. All of it belongs with the very best startup security content on HN.


> We can't expect developers to spend days implementing security best practices before even having a business.

We absolutely can. Otherwise, expect regulation to do it (see: GDPR).


Yeah. At the risk of digging at a raw wound and trivializing a recent tragedy, this is kind of like saying "We can't expect structural engineers to develop a fundamentally-safe construction plan right from the get-go."

If you're going to do something at all, there are some fundamental standards that you just don't risk by putting them off for later. Not saying you have to start out with all the frills, but there is a minimum acceptable standard of safety and competency that can and should be expected of any new work, and things that don't meet such standards should never exist in a form that could potentially be misconstrued as doing so. Reasonable baseline security practices are certainly part of those inviolable professional standards.


Or see TCSEC, which was how the market produced the first security-focused systems. They were the only ones to pass pentesting at the time, with designs and implementations still stronger than most software today. Although it had issues, its core lifecycle requirements mostly work and are still used for high-assurance security implementations. Alternatively, the DO-178B standard (now DO-178C), which got more vendors writing well-documented, well-reviewed code that they run through all kinds of static analyzers and testing tools to avoid costly re-certifications. Two examples of regulations that worked so well that they raised the status quo for both security and safety.

People mostly mention bad or questionable regulations when the topic comes up. I figure the good ones deserve mention too, especially given they worked better than the market did. That's probably because the market lacks both liability for software failures and customers who can evaluate security claims.

TCSEC Overview https://en.wikipedia.org/wiki/Trusted_Computer_System_Evalua...

Bell Looking Back on TCSEC/TPEP http://lukemuehlhauser.com/wp-content/uploads/Bell-Looking-B...

DO-178B Overview https://en.wikipedia.org/wiki/DO-178B


OT-fun: You have 27007 karma points. ISO/IEC 27007 is one standard for "information security management systems auditing".


I’ve wasted so much time on HN, but have enjoyed it very much. Thanks for pointing that out :)


I'd add to this that there is also a culture of security that is being created from day one. It is unrealistic to purposely let things slide security-wise with the idea that you'll care about it later, and expect a healthy culture to emerge.

Going back to layer on security can be challenging, but it's even harder to retroactively layer on a way of thinking about and prioritizing security. For instance, it literally has to be that every engineer is thinking about the security implications of every design/implementation choice with the same urgency as the product features themselves. And there has to be a sense of standards, accountability and direction coming from the top.

Equifax is a good example of a company that failed in this. You look at the original incident, then consider how they made it so much worse in their response. At a certain point you start to think "is anyone even thinking about security there?" and you realize their culture is fundamentally broken in such a way that their entire organizational mindset will need to be clean-sheeted. That's a tough road.


That's a very important and interesting topic. Do you know any literature and standards on how to establish such a "culture of security" (afterwards)?


The thing that became evident to me with this breach, and with the long-delayed response from Uber about their breach, was that the following is one way to look at a company's ability to deal with the reality of the internet.

First, there is the overall company culture. If employees, top to bottom, care about the company, its mission, and what they are doing day to day, it makes it feasible to introduce a security culture.

Secondly, technical competence, top to bottom. Not understanding the importance of patching (didn't they tell Congress that it was too hard?), or the fact that your customer outreach website should be part of your already-existing domain, as opposed to a totally independent, easily spoofable domain that can even fool your own social media guy.

Third, a serious security team. Penetration testing, security awareness training, logging/monitoring. But a crack security team is hard-pressed to overcome weakness in the other two.

This all clearly starts at the top.


The list is mixing different subject matters.

The security of all the information in your company should be handled by an information security management system (ISMS), which is normally under the supervision of a CSO or CISO. The ISMS gets established by defining (e.g. through policies) roles, processes and requirements for the many problems related to information security.

The security of your infrastructure should also be supervised by some management position and your infrastructure should be designed in advance to fulfill your security requirements.

The security of your software should also be supervised by some management position, and the whole software process has to be designed to produce secure software.

I don't believe this can be handled by a CTO with a basic checklist, although it includes important points which should be supervised. This list also feels kinda incomplete.


I know very few startups that have or had a CSO/CISO. I would assume that most startups that have one are in one of the following fields: security, fintech, healthcare, or education. Typically it falls on the CTO at early-stage startups to ensure that appropriate security measures have been taken.


Thanks for your input.

Unfortunately, startups don't have these kinds of resources (CIO/CISO, etc.). What we see is that security is often handled by CTOs in Seed/Series A startups.


One thing missing from all these is basic anti-phishing training for the top folks in the company. Be sure that those who control the financial strings have a pre-agreed, non-email pattern for, say, the CEO telling the CFO to transfer money.


> IP restrictions aren't the way to do that

I think they might be talking about something else, like restricting DB access to only certain IPs where the apps are hosted. That's the problem with checklists: they aren't very clear at times.
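
If that is the intent, a sturdier pattern on AWS is to scope the database's security group to the app tier's security group rather than to hard-coded IPs. A boto3 sketch; the group IDs and port are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow Postgres (5432) into the DB security group only from the app tier's
    # security group, instead of whitelisting individual IP addresses.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0db0000000000000",  # placeholder: DB security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "UserIdGroupPairs": [{"GroupId": "sg-0app000000000000"}],  # placeholder: app SG
        }],
    )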


> * Security testing integrated into CI/CD pipeline

> * Automated routine network monitoring

Do you have specific recommendations on these 2 things?


> With the exception of bug bounties, which you might not bother ever doing at all

Why not?


I wish frameworks included 2FA support in the default auth systems. While there are third party packages to add 2FA support, adding support at the framework level would help drive 2FA adoption in a major way.

Question: how would you add 2FA to a Django website? Ideally TOTP + YubiKey + backup codes, but TOTP + backup codes is also fine. The website I plan to add 2FA support to will be open source and is meant to be self-hosted by users, so I want to avoid SaaS solutions. I came across a couple of packages: django-two-factor-auth [0], which is based on django-otp [1]. There's pyotp [2], which would require me to integrate it into the login flow myself (not a good idea to be writing login flows). If anyone's using the mentioned packages in production, I'd be interested to hear about your experience.

[0] https://github.com/Bouke/django-two-factor-auth

[1] https://django-otp-official.readthedocs.io/en/latest/

[2] https://github.com/pyotp/pyotp


django-two-factor-auth seems like a good choice (using django-otp alone might be an option too). I agree that pyotp is less suited, since it does not integrate easily with Django.
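
For what it's worth, the raw TOTP mechanics those packages wrap are tiny; the value of django-two-factor-auth is the login flow, backup tokens, and admin integration. A minimal pyotp sketch just to show the underlying primitive (the account name and issuer are placeholders):

    import pyotp

    # Enrollment: generate a per-user secret and the otpauth:// URI to render as a QR code.
    secret = pyotp.random_base32()
    uri = pyotp.TOTP(secret).provisioning_uri(name="alice@example.com", issuer_name="MyApp")

    # Login: verify the 6-digit code the user typed, allowing one 30-second step of clock drift.
    totp = pyotp.TOTP(secret)
    is_valid = totp.verify("123456", valid_window=1)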


My honest opinion is that this is vacuous. I dislike being that critical but this is important. First, I don’t think it’s meaningful to segregate this list into divisions by funding milestones, and I don’t think it’s productive to have so many bullet points crowding one page for attention. Second, and more specifically, I think this list leaves a lot of the heavy lifting out of the problems to the detriment of solving them. For example:

> Encrypt all employee laptops and phones

You have two sub-problems here. Either you take endpoint security very seriously (which becomes its own much more important bullet point), or you trust employees to encrypt their devices on their own. This process should be entirely automated. My concrete criticism: mention this problem alongside endpoint security.

> Accustom your team to locking their computers

This is another endpoint security problem. You should automate this enforcement across all employee computers. Even if people are fully on board with it conceptually, they will err because no one can have Constant Vigilance.

> Centralize and archive your logs

Yes, but how? Provide examples. You linked to Elastic, but why not talk about tradeoffs between the Elastic stack and others? How about the tradeoff of paid infrastructure versus full open source? A build versus buy discussion is very useful here.
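
Even the smallest version of "centralize your logs" is cheap to illustrate: point the application logger at a central collector instead of local files. A sketch using Python's standard library; the hostname and port are placeholders, and a real setup would add TLS and structured fields:

    import logging
    import logging.handlers

    # Ship application logs to a central collector (rsyslog, an ELK ingest node, etc.)
    # instead of leaving them scattered across individual hosts.
    handler = logging.handlers.SysLogHandler(address=("logs.internal.example.com", 514))
    handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))

    logger = logging.getLogger("myapp")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    logger.info("user login succeeded for account id=%s", 42)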

> Evaluate your website’s basic security

There’s a messaging problem here. Your checklist doesn’t recommend bug bounties until post-Series A (!!!), partly because you have no qualified staff to review reports. Precisely how are your engineers reviewing “basic” security without basic qualifications? What defines “basic?” If they have the time and initiative to learn how to do this step, why can’t they do other steps you reserve for Series A or beyond?

Frankly, most of this list could be meaningfully reduced to prioritizing automation, endpoint security, formal processes and finding the right people to tell you your unknown unknowns very early on.


>You have two sub-problems here. Either you take endpoint security very seriously (which becomes its own much more important bullet point), or you trust employees to encrypt their devices on their own. This process should be entirely automated. My concrete criticism: mention this problem alongside endpoint security.

Do you have any recommendations for endpoint security? I'm setting that up now and finding a good vendor has been annoying.

>Precisely how are your engineers reviewing “basic” security without basic qualifications?

There are standards and checklists for this (e.g., the OWASP Top Ten). Being able to read and follow a simple checklist that someone recommends to you doesn't mean you're even close to an expert in that area. That said, the OWASP Top 10 includes "Logging and Monitoring" while this checklist punts it to Series A, so it's confusing.


Feedback: the section on password policy recommends requiring special characters and mixed case, then links as a reference Troy Hunt's article, which specifically recommends against that.

To be fair, I zeroed in on this because a mega corp did the same thing this morning.


Thank you. You're right. We will change that.


One modification to the backup bullet... “backup, backup and restore”

I’ve seen more than one day ruined by a backup that wasn’t.


Testing backups and having a recovery plan is paramount. I have been burned in the past by making backups, but then floundering around trying to redeploy.


Did you expand the read more? That’s exactly what it says.


Mea culpa. I did not.


"It will protect against both malicious activities and accidents (e.g. an employee’s child accidentally wiping a mailbox)."

Can someone help me understand how a child accidentally wiping a mailbox is related to encryption? Isn't this just a matter of putting a password on your computer and not related to encryption?


As someone who has made a similar internal checklist, I find this to be well done. A good primer for someone who is starting their own SaaS company and is interested in understanding the scope of what it takes to implement best practices as a CTO.


Don't just make backups and leave it at that. Test regularly whether you can restore your system from said backups. The worst feeling in the world is having backups and then learning that you can't restore from them in an emergency.


This is fantastic. Bookmark this if you plan on starting a SaaS business, though these seem broadly applicable to other business models as well.


Slightly OT: Can someone elaborate on "Do not share Wifi"? What can be problematic when using a shared but encrypted (say, WPA2 with pre-shared key) Wifi?

This is a common setup in public places like cafes. I've always wondered in what ways this can cause problems.


I think they meant, “don’t have internal and guest WiFi connections use the same subnet”. If you’re giving non-employees passwords and access to a network that contains sensitive resources, that’s a big security red flag.

Many companies also use IP whitelisting for external services/systems, so if someone from outside the company joins an internal network they now have access to them as well.

Edit: Actually, they point out those exact reasons if you expand that entry in the list.


Great checklist. One recommendation: the checklist page has a lot of CORS policy violations and insecure-endpoint errors/warnings. Given that this list and your company represent a security product, these errors undermine the credibility a little.


Those are CSP violations, not CORS violations.

There are different levels of maturity with security headers, and Sqreen's cookies are scoped to a completely different subdomain (my.sqreen.io versus www.sqreen.io). It looks to me like they are doing everything right.

There is no shame in having your CSP header in Report-Only mode. It's complicated to manage your assets, especially when using a tag manager where it's not obvious what the hell the URIs/hosts that will be loaded are.
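
For anyone unfamiliar with Report-Only mode: you ship the policy in a separate header so violations are reported but nothing is blocked while you tune it. A minimal Flask-flavored sketch (the policy string and the /csp-reports endpoint are placeholders):

    from flask import Flask

    app = Flask(__name__)

    # Report-Only: the browser reports violations to the endpoint below but does not
    # block anything, so the policy can be tightened safely before enforcing it.
    @app.after_request
    def add_csp_report_only(response):
        response.headers["Content-Security-Policy-Report-Only"] = (
            "default-src 'self'; report-uri /csp-reports"
        )
        return response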


I love stuff like this:

* Goodui.org

* enterpriseready.io

* ixdchecklist.com


You may then want to see this one too: https://git.io/security


The first link has great content but the UI could be so much better. Some suggestions:

1) add a sidebar to give an overview of the tips and help navigate to the ones I'm interested in

2) group the tips by category to make the list shorter and easier to read


The stupid share links block the left side of the content. Crap web layout.



