

Developer Preview of AWS SDK for JavaScript in the Browser - jeffbarr
http://aws.typepad.com/aws/2013/10/developer-preview-aws-sdk-for-javascript.html

======
tomsaffell
My first thought: great - we can retire our in-house multi-part uploader
(EvaporateJS) [0] and use AWS's. But digging into the API, it seems like most
of the real work of multipart uploading (managing part failures) is not done
by the SDK [1], so we'll stick with EvaporateJS. It's working pretty well for
us. We've seen 22GB uploads go through (direct from browser to S3). The main
issue that we know of [2] should be fairly easy to fix if anyone wants to
contribute!

[0]
[https://github.com/TTLabs/EvaporateJS](https://github.com/TTLabs/EvaporateJS)
[1]
[http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/frames.ht...](http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/frames.html)
[2]
[https://github.com/TTLabs/EvaporateJS/issues/6](https://github.com/TTLabs/EvaporateJS/issues/6)
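
To make [1] concrete, here's roughly the bookkeeping you end up writing yourself if you drive the multipart API directly from the browser SDK. Untested sketch; the part size and retry count are placeholders, and EvaporateJS also handles pause/resume and progress on top of this:

```
// Untested sketch: manual multipart upload with per-part retries.
// Bucket name, part size and retry count are placeholders.
var s3 = new AWS.S3();
var PART_SIZE = 5 * 1024 * 1024;   // S3's minimum part size is 5MB
var MAX_RETRIES = 3;

function uploadFile(file, bucket, key, done) {
  s3.createMultipartUpload({Bucket: bucket, Key: key}, function (err, mpu) {
    if (err) return done(err);
    var parts = [], partNumber = 1, offset = 0;

    function sendPart(attempt) {
      var blob = file.slice(offset, offset + PART_SIZE);
      s3.uploadPart({Bucket: bucket, Key: key, UploadId: mpu.UploadId,
                     PartNumber: partNumber, Body: blob}, function (err, data) {
        if (err) {
          // This is the part the SDK leaves to you: retrying failed parts.
          if (attempt < MAX_RETRIES) return sendPart(attempt + 1);
          return done(err);
        }
        parts.push({ETag: data.ETag, PartNumber: partNumber});
        offset += PART_SIZE;
        partNumber += 1;
        if (offset < file.size) return sendPart(1);
        s3.completeMultipartUpload({Bucket: bucket, Key: key,
          UploadId: mpu.UploadId,
          MultipartUpload: {Parts: parts}}, done);
      });
    }
    sendPart(1);
  });
}
```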

~~~
akoumjian
We had similar hopes, but for now we will continue to use our in-house multi-part uploader. We have to sign every multi-part request server-side, using an auth scheme that depends on the key and the user.

I'm not sure whether it would be an improvement to give each of our users an IAM identity tied to a bucket ACL based on their user namespace. It would lighten the load on our servers (we wouldn't need to sign each request), but then each user becomes an IAM resource that has to be managed.
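
If anyone does go the IAM-per-user route, the per-namespace scoping would presumably be a policy (rather than an ACL) along these lines. Rough sketch only; the bucket name and prefix layout are made up, and `${aws:username}` assumes plain IAM users rather than federated ones:

```
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:PutObject", "s3:GetObject"],
    "Resource": "arn:aws:s3:::my-app-uploads/${aws:username}/*"
  }]
}
```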

------
_pius
This is huge. Jeff, if you're still reading, are there any plans to include
Persona in the Web Identity federation?

~~~
jeffbarr
I am reading! I will ask the team about Persona.

~~~
ozten
This would be great. Firebase is somewhat of a competitor and offers this
feature.

[https://www.firebase.com/docs/security/simple-login-
persona....](https://www.firebase.com/docs/security/simple-login-persona.html)

------
davidjgraph
It's a library helping devs move to a fully client-side model... that requires a server for auth.

I must be missing the intended use case. We're happy with the Google Drive and Dropbox equivalents, so this isn't something that we'll be adding to that set.

~~~
sha90
You don't need a server for auth, that's what web identity federation[1] is
for.

[1] [http://aws.typepad.com/aws/2013/05/aws-iam-now-supports-
amaz...](http://aws.typepad.com/aws/2013/05/aws-iam-now-supports-amazon-
facebook-and-google-identity-federation.html)

~~~
davidjgraph
In the flow picture at the bottom, the STS looks like a server to me. What I'm
saying is the auth flow still seems to require a server to act as an
indirection to the real auth server. But if Amazon provide that part for us,
great.

~~~
sha90
It's not just Amazon providing this: there is Login With Amazon, but Facebook and Google also act as identity providers. Unless I'm mistaken, this is how Google's storage APIs work too, by using OAuth/OpenID to get an access token that can then be exchanged for keys.
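
Roughly, that exchange with the browser SDK looks like this (untested sketch; the role ARN and bucket are placeholders, and the access token comes back from the provider's own login flow, e.g. FB.login):

```
// Untested sketch: swap a provider's access token for temporary AWS keys.
// Role ARN and bucket are placeholders; accessToken comes from the
// Facebook / Google / Login with Amazon flow.
AWS.config.credentials = new AWS.WebIdentityCredentials({
  RoleArn: 'arn:aws:iam::123456789012:role/web-app-user',
  ProviderId: 'graph.facebook.com',   // omit for Google
  WebIdentityToken: accessToken
});

// Subsequent calls are signed with the temporary credentials; no
// application server is involved.
var s3 = new AWS.S3({params: {Bucket: 'my-app-uploads'}});
s3.putObject({Key: 'hello.txt', Body: 'Hello from the browser'}, function (err, data) {
  console.log(err || data);
});
```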

~~~
jeffbarr
Here's a sample that will let you authenticate using Facebook and then upload
content to S3:

[http://aws.amazon.com/developers/getting-
started/browser/](http://aws.amazon.com/developers/getting-started/browser/)

------
superfresh
How does this generate security credentials for a user client-side? It mentions Identity Federation[1], but it looks like that only links with Google, Facebook, or Login with Amazon. Is there a way to authenticate a user without a 3rd-party login service?

[1] [http://aws.typepad.com/aws/2011/08/aws-identity-and-
access-m...](http://aws.typepad.com/aws/2011/08/aws-identity-and-access-
management-now-with-identity-federation.html)

~~~
williamcotton
I really apologize for a very cursory read on my part, but I'm guessing it is handled in the same manner as CORS uploads to S3:
[http://aws.amazon.com/articles/1434/](http://aws.amazon.com/articles/1434/)

There seems to be more details here: [http://aws.typepad.com/aws/2013/05/aws-
iam-now-supports-amaz...](http://aws.typepad.com/aws/2013/05/aws-iam-now-
supports-amazon-facebook-and-google-identity-federation.html)

Again, I didn't have time to read all of this and see if it is done in the
same manner as has been possible with S3/CORS for the last year or so.

~~~
sha90
If your application is only using S3 to upload objects, the full SDK would
probably be overkill. Using pre-signed forms is sufficient there. However, if
you want to use other services like DynamoDB, SQS, or SNS, pre-signed URLs
will not work. Also note that in order to generate a pre-signed URL you still
need a backend service running to sign those URLs, something you can avoid
with the client-side SDK.
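
Agreed. For the upload-only case, the backend piece is just the signing step from the CORS article, something like this (Node-style sketch; the expiry, key prefix, and ACL are placeholders):

```
// Untested sketch of the server-side signing step for browser POST uploads.
// The browser form then posts the policy + signature + file straight to S3.
var crypto = require('crypto');

function signUploadPolicy(awsSecretKey, bucket) {
  var policy = {
    expiration: new Date(Date.now() + 60 * 60 * 1000).toISOString(), // 1 hour
    conditions: [
      {bucket: bucket},
      ['starts-with', '$key', 'uploads/'],
      {acl: 'private'}
    ]
  };
  var policyB64 = new Buffer(JSON.stringify(policy)).toString('base64');
  var signature = crypto.createHmac('sha1', awsSecretKey)
                        .update(policyB64)
                        .digest('base64');
  return {policy: policyB64, signature: signature};
}
```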

------
recuter
"Each request must be signed with your AWS credentials. .. Our web identify
federation feature to authenticate the users of your application. By
incorporating WIF into your application, you can use a public identity
provider (Facebook, Google, or Login with Amazon) to initiate the creation of
a set of temporary security credentials."
[http://media.amazonwebservices.com/blog/2013/iam_web_identit...](http://media.amazonwebservices.com/blog/2013/iam_web_identity_federation_1.png)

I rather disagree with getting your authorization tokens by the grace of Google and Facebook.

It seems simple enough to roll your own, perfect use case for App Engine
actually:

             (gets temp S3 keys)
    User <------------------------> logs in to your site (App Engine)
      |
      v
      S3

I just wish Amazon offered better rate limiting options and integration behind the scenes. The 'each request' phrasing can be misleading: you don't need to sign each request; you can give the client-side app a token that will last for an hour or a week. (But it's on you to refresh it when it expires and to keep track of how it's being used so there's no abuse.)
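
The minting side is basically one STS call. Rough sketch only: the names, bucket, and policy below are made up, and note that GetFederationToken caps tokens at 36 hours, so a week-long token would need a different mechanism:

```
// Untested sketch of a "TokenFactory": after your own login check, the
// server vends temporary keys scoped to that user's prefix.
var AWS = require('aws-sdk');
var sts = new AWS.STS();

function issueTempKeys(userId, callback) {
  sts.getFederationToken({
    Name: 'user-' + userId,
    DurationSeconds: 3600,   // good for one hour
    Policy: JSON.stringify({
      Version: '2012-10-17',
      Statement: [{
        Effect: 'Allow',
        Action: 's3:PutObject',
        Resource: 'arn:aws:s3:::shminstegram-photos/' + userId + '/*'
      }]
    })
  }, function (err, data) {
    // data.Credentials contains AccessKeyId, SecretAccessKey, SessionToken.
    callback(err, err ? null : data.Credentials);
  });
}
```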

~~~
jeffbarr
What kind of rate limiting do you need? What are you trying to guard against?

~~~
recuter
Well, let's say I have a TokenFactory running somewhere (my servers, AppEngine, whatever) and my app is called Shminstegram. When my users log in and get a token that's good for, say, 1 hour, I probably don't want them uploading 10,000 photos in that time period. That's probably a bot.

Now I could manage this myself with a background task that crunches S3 logs at its own pace and, when it notices abuse, reports it to my TokenFactory so that the next time that user asks for a fresh token they get denied... but it would be great if S3 were a tad smarter about such things on its own. :)

Does that sort of make sense? In other words, a token should have finer-grained permissions than just time. Maybe... "Accept this token for the next 24 hours, or 200 POSTs, whichever comes first, and no more than 20 POSTs in the last hour." That sort of thing.

~~~
Chupachupski
In this context bandwidth throttling is also a consideration - perhaps a per-token or per-bucket max object size?

~~~
recuter
You can limit objects by size with the existing ACLs already. But you can't specify how many objects overall a user may upload in some time frame, or their accumulated size.

So you can have a token for: upload as many objects as you want of size less
than 1MB in the next 1 hour.

You can't have a token for: upload no more than 1000 objects of size less than 1MB in the next 1 hour, cutting them off after 200MB total.
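
(For reference, the per-object size cap lives in the signed POST policy's conditions. Rough example below; bucket and prefix are placeholders, and notice there's simply no condition type for object count or total bytes.)

```
// Untested sketch: a POST policy allowing 1 hour of uploads, each object
// capped at 1MB. Nothing here can express "at most N objects" or
// "at most X bytes total", which is the gap described above.
var policy = {
  expiration: new Date(Date.now() + 60 * 60 * 1000).toISOString(),
  conditions: [
    {bucket: 'shminstegram-photos'},
    ['starts-with', '$key', 'user123/'],
    ['content-length-range', 0, 1024 * 1024]
  ]
};
```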

------
Kiro
How do you do security when making calls to DynamoDB directly from the browser? Does that mean a user can do anything they want with your DB by tampering with the JS?

EDIT: OK, so it says in the post: Fine-Grained Access Control for Amazon DynamoDB. Good stuff.
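
For reference, the fine-grained access control is expressed as conditions on the role's policy, so a federated user can only touch items whose hash key matches their own identity. A rough example (table name and account ID are placeholders):

```
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
    "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
    "Condition": {
      "ForAllValues:StringEquals": {
        "dynamodb:LeadingKeys": ["${graph.facebook.com:id}"]
      }
    }
  }]
}
```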

------
davidjnelson
Epic, I'd love to see Angular and Ember SDK extensions.

------
loucal
Thank you for this!

------
trvd1707
"Deceloper"?

~~~
jeffbarr
I have old eyes, further blurred by writing re:Invent blog posts almost non-
stop for the last 2 weeks. The typo has been fixed.

