Exploiting filepicker.io (digitalmisinformation.posterous.com)
59 points by Stealth- on Oct 11, 2012 | 20 comments



On it! We have a new security release coming out next week that allows the use of server-side secrets to lock down both uploads and reads. We've already contacted the author and are happy to work with anyone who is concerned to get them early access.


Also, given the feedback, we're implementing a way to set the max file size on your developer portal. The fix should go out in the next 10-15 minutes.


Took a bit longer to handle the edge cases, but we now have the max file size stopgap implemented. On your developer portal you can set the maximum size that you'll allow to be uploaded. We're still moving forward with the more fully-featured security functionality, but wanted to make this available ASAP.


This is great, but there isn't any indication of what units the max filesize is using. Kilobytes or megabytes?


Oops! Sorry, bytes.


So for 30 MB I should put in 30000000?


Or 31457280, if you're counting in binary megabytes (30 × 1024 × 1024 bytes).


Your response time is awesome!

I'm about to implement filepicker.io for a project and really don't mind doing server-side integration if it prevents malicious users from abusing the service and running up our filepicker.io or S3 usage.


We take this stuff seriously. Great, shoot me an email at brett at filepicker.io and we can walk through how to get you set up on the new security scheme.


Totally agree! Seeing that the Filepicker team is responsive to this kind of public feedback and is reaching out to the author in an appreciative and collaborative way speaks volumes about their character.

Even more excited to get Filepicker implemented now!


For the last couple of years I've been working on a project to index multimedia for language-instruction purposes. We had to address exactly this problem: if someone intercepts your API key, which is trivial if you have to put it in JavaScript, you're screwed. The partial solution we've implemented is disallowing requests that require a key from the browser, and requiring client applications to work server-to-server in those cases.

It doesn't totally solve the problem, it just moves it; but it moves it to a less vulnerable location, since your API key never reaches a browser where anyone can check it out by viewing source or, in extremis, opening Firebug.
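For anyone curious, the shape of that pattern is roughly this; a minimal sketch, assuming a hypothetical upstream API, with Flask and requests standing in for whatever your stack actually uses:

    import os

    import requests
    from flask import Flask, Response, request

    app = Flask(__name__)

    UPSTREAM_URL = "https://api.example.com/upload"  # hypothetical upstream API
    API_KEY = os.environ["API_KEY"]  # lives only on the server, never in JS

    @app.route("/upload", methods=["POST"])
    def upload():
        # The browser only ever talks to us; we attach the key and forward
        # the request server-to-server, so view-source and Firebug see nothing.
        upstream = requests.post(
            UPSTREAM_URL,
            headers={"Authorization": "Bearer " + API_KEY},
            data=request.get_data(),
        )
        return Response(upstream.content, status=upstream.status_code)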


Amazon solves this with shared keys, and that's Filepicker's upcoming solution as well. You might want to look into doing it that way.


Could you elaborate on this, please? I can't find much on the topic.

Edit: The Filepicker.io email seems to indicate a PKI-style solution, but I can only sort of guess at the implementation.


Sure - check out http://aws.amazon.com/articles/1434

What you're interested in is:

"signature" - "A signature value that authorizes the form and proves that only you could have created it. This value is calculated by signing the Base64-encoded policy document with your AWS Secret Key, a process that I will demonstrate below."


I may be missing something, but it seems to me that's still vulnerable to interception. The policy document can limit the kinds of things that can be uploaded, but an attacker could still intercept that form on the way to or from the user and replace the intended user's data with anything else that happened to fit the policy.

I suppose that's solved by serving the form over https. Perhaps that's just what I was missing.


HTTPS would work, but also, if you scroll down a bit and look at the policy JSON (http://pastie.org/private/tkr7iyqzqrezmmqazbfijw), you'll see it has an "expiration" field, which mitigates the type of attack outlined in the parent post: after a period of time the signature is no longer valid.


It's unfortunate to see Chris posted this without mentioning anything to Filepicker first.

That said, it's a pretty obvious problem which is inherent in the way Filepicker is doing things right now. Simple sometimes comes at the expense of secure. I'd argue that they made a fairly reasonable trade-off for the time.

Good to see Brett and the team are responding quickly.


We asked them to implement this feature before integrating in August, and they were very responsive and said they'd do it asap. My understanding is that it's now on their staging environments.

I don't see why anyone would have integrated without this.


S3 allows you to include a max file size param in your upload signature, so it should be easy enough for this to be fixed. However, you still have to be careful, as someone uploading 100 10MB files is just as bad as one 1000MB file.
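For reference, that param is the content-length-range condition in the signed POST policy. Sizes are in bytes, so a condition like this caps uploads at 30 MiB and S3 itself rejects anything larger:

    ["content-length-range", 0, 31457280]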


This seems like a prime example of how a startup should reply to a blog post like this. Quick, honest, and with a solution in the works.



