
Spotify – security team response time - algorithm_dk
http://algorithm.dk/posts/reflected-xss-spotify-best-security-response-time
======
billyhoffman
From OP: "Spotify came to my attention because of that specific page as I'm
currently looking to build a security portfolio."

This is almost as foolish as it is illegal.

Finding web security issues is much more complicated legally than finding
security issues in desktop/server software. I can download and install Apache
and test it on my local box and discover things like the Range Header DoS.
There are no legal issues for me to worry about.

I can't download and install "LinkedIn" and test it for web issues. I have to
test against their site running on their computers. Immediately in the US the
Computer Fraud and Abuse Act comes into play (though the OP lives in Denmark).
"Hmmm, let me 'test' random website X for something that could cause a DoS" is
insane.

Even if a company has a formal "we are OK and authorize people to test our
site for security issues" policy that is still extremely dangerous from a
legal perspective. Maybe you are only authorized if you follow their specific
disclosure policies. Maybe you are only authorized to do non-destructive
testing. Every company is going to have some tipping point where, if your
testing impacts X or causes Y downtime, that's enough damage that they will
seek your prosecution.

Building a "security portfolio" against websites is a stupid idea with some
potentially huge negative consequences. If anyone insists on doing this, at
least go looking for issues in projects that you can download and audit
locally.

~~~
icanhasfay
Have to disagree with a few points here.

"Hmmm, let me 'test' random website X for something that could cause a DoS"

The author was testing reflected XSS, which is inherently client-side; there
should be no cause for concern about DoS here.
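For illustration, here is a minimal sketch of what a reflected XSS probe exercises (a hypothetical page, not Spotify's actual code): the server echoes a query parameter into the HTML it returns, and the browser runs whatever markup comes back. HTML-escaping the parameter neutralizes it.

```python
import html

# Hypothetical reflected-XSS sketch: a page echoes a ?q= parameter into its HTML.
payload = '<script>alert(1)</script>'

# Vulnerable rendering: the parameter is reflected verbatim,
# so the browser would execute the injected script.
unsafe_page = f"<p>Results for: {payload}</p>"

# Fixed rendering: the parameter is HTML-escaped before being reflected.
safe_page = f"<p>Results for: {html.escape(payload)}</p>"

print(unsafe_page)  # <p>Results for: <script>alert(1)</script></p>
print(safe_page)    # <p>Results for: &lt;script&gt;alert(1)&lt;/script&gt;</p>
```

The point being that the payload only ever executes in the reporter's own browser; the server just reflects it.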

"Building a "security portfolio" against websites is a stupid idea with some
potentially huge negative consequences."

The author mentions Apple, LinkedIn, Amazon and AT&T, all of which have some
type of vulnerability notification program (see
[https://bugcrowd.com/list-of-bug-bounty-programs/](https://bugcrowd.com/list-of-bug-bounty-programs/)
for a great list of programs). I would have to say that as long as the
researcher was performing within the scope of the respective program, there
should be no worry. I think it's the exact opposite of a stupid idea. Building
out a portfolio within the scopes of the programs is a great way to build some
security reputation.

~~~
billyhoffman
I understand what you are saying, but I disagree with the risk/reward.

Yes, the author was testing with a reflected XSS attack string, and yes "there
should be no case of concern for DoS". But that's exactly my point. You have
no idea what effect that will have.

I personally have caused an inadvertent DoS attack with an XSS probe while
doing a security audit for a large SaaS company. (The attack made the code
throw an exception, and some of my escaping characters caused havoc with their
error logger, sending it into an infinite loop trying to log data, pegging the
CPU and growing memory until the disk thrashed.)

You don't know what your actions will cause because the server/app is a black
box. And if you want to audit it, you should have a formal agreement with the
company so that they know what you are doing and it's OK. A generic, flimsy,
non-personal "everyone can try and 'hack' us and it's OK" policy published
somewhere is just too little protection, especially given how the CFAA is
applied in courts today.

~~~
algorithm_dk
Then what is the purpose of bug bounties in your perspective?

~~~
billyhoffman
I think bug bounties can make the risk/reward ratio palatable, by either
lowering risk or increasing the reward.

If a company offers monetary rewards for web vulnerabilities in its site,
that's a good sign. It shows they recognize the monetary value of what you are
doing and have made the decision to reward you. It shows the extent of their
commitment to the bounty program: they have allocated budget and accepted the
added accounting overhead to pay out for issues. It also means they have
thought through exactly what they are expecting and what they are willing to
pay.

In my eyes, someone offering marketing swag or "putting your name on a web
page" doesn't think your work is as valuable. Thus they are more likely to be
angry and seek legal remedies should your testing cause them harm in some way,
because in their eyes the value of your security testing or your findings
doesn't outweigh the damage.

In short, all of this is an indicator of how much thought was put into the
company's policy on finding and reporting security issues, and of how mature
their reaction will be to things they have not thought of or when bad things
happen. The less mature they are, the more likely they are to decide your
testing was "unauthorized" and to seek legal remedies.

Compare Spotify's policy [1] with Google's Policy[2]. Which one has put more
thought and planning into running a program like this? Obviously people's
risk/reward appetites differ...

Please note that I am talking about the risks of bug bounties/"it's OK to
audit us" policies as applied to public websites. The Stripe/Matasano
challenges are specially created systems and are always a great thing to do.

[1] - [https://www.spotify.com/dk/about-us/contact/report-security-issues/](https://www.spotify.com/dk/about-us/contact/report-security-issues/)
[2] - [http://www.google.com/about/appsecurity/reward-program/](http://www.google.com/about/appsecurity/reward-program/)

------
pilif
The initial response was very quick, but honestly: fixing such a simple XSS
hole in a web application shouldn't take 7 days. In the age of automated
testing and continuous deployment, escaping that parameter is trivial.

Granted, if you have a systemic issue and need to fix your framework to
always escape unless told otherwise, that will take longer, but it won't
prevent you from quick-fixing just the reported issue.
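As a sketch of the automated-testing point (hypothetical handler, not Spotify's code): once the reflected parameter is escaped, a one-off regression test keeps the fix from quietly regressing on a later deploy.

```python
import html

def render_search(term: str) -> str:
    """Hypothetical fixed handler: HTML-escape the reflected parameter."""
    return f"<p>You searched for: {html.escape(term, quote=True)}</p>"

# Regression test for the reported issue: the probe string must come
# back inert, with all HTML metacharacters escaped.
probe = '"><script>alert(document.cookie)</script>'
rendered = render_search(probe)
assert "<script>" not in rendered
assert "&lt;script&gt;" in rendered
assert "&quot;" in rendered  # the attribute-breakout quote is escaped too
```

A test like this runs in milliseconds in CI, which is why a 7-day fix window for a single parameter reads as long.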

~~~
antsar
I had to do a double-take when I read the "7 days" part, because I wasn't sure
why that should be a pleasant surprise.

If it were their desktop client, that kind of turnaround would be
understandable. For their website? I don't see why they couldn't have deployed
a hotfix immediately.

On the other hand, perhaps they had a backlog of more serious security issues
and were preoccupied with addressing those.

~~~
abjorn
It's likely that it was fixed sooner than that. I would bet they gave it 7
days before he could talk about it, so they could audit the rest of their
site.

------
yeukhon
_edit_ : oops. Didn't know it was actually a Sweden-based company. (minus
1000000000 chars below).

Regarding the 7 days patch, that does look long. Not sure why. It'd be
interesting to find out.

By the way, I like your resume. Visually impressive and pleasant to read.
It'd be interesting to see it as a shirt.

~~~
nbody
Actually Spotify is mostly Sweden-based, but they seem to be ramping up in NY
as well.

~~~
yeukhon
Oh! Thanks for the correction! I am surprised. They used to recruit a lot of
U.S.-based engineers last year when I was looking for a job.

~~~
nirvanis
Yes, Spotify is a Swedish company, but we have several engineering offices
in the US and in Europe. The current openings are listed on spotify.com/jobs.

------
HERPDERPSD
Did they reward you in any way, or simply recognize your name on this list?

------
korzun
Response time is useless if it still takes 7 days to deploy a patch for a
simple XSS issue.

~~~
jsegura
Totally agree with that. The important time is the one between the disclosure
and the fix, not the one between the disclosure and the first response. It
could be a bot :)

~~~
nirvanis
Trust me, Nenad is not a bot :)

We work hard on becoming better and faster at tackling these and other issues.
We try to learn from incidents and mistakes in order to be faster next time.

Thanks for the feedback.

