

3 Problems AWS Needs to Address - aaronwhite
http://blog.jacobelder.com/2012/05/3-problems-aws-needs-to-address/

======
Smerity
The ability for S3 and CloudFront to properly handle GZIP-compressed files
would further encourage the use of S3+CloudFront for static websites. As a
host, S3 + CloudFront offers effectively arbitrary scalability, good
performance across the globe, and pay-as-you-go pricing.

With GZIP compression, bandwidth usage drops, but more importantly load times
can decrease significantly. "It takes several round trips between client and
server before the two can communicate at the highest possible speed [and for
broadband users] the number of round trips is the larger factor in determining
the time required to load a web page"[2]. There was a graph depicting the non-
linear impact file size increases have on load times, but I can't find it... =[

In the Google article on compression, a 175% increase in a page's size (the
non-GZIP version of Facebook.com) results in a 414% increase in load time on
DSL. Load time does not increase linearly with file size, which is why GZIP
compression is so important for performant websites!
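The compression win itself is easy to demonstrate; a quick sketch using
illustrative markup (not any real page's HTML):

```python
import gzip

# Repetitive markup, typical of HTML/CSS, compresses dramatically under gzip.
html = b"<li class='story'><a href='/item'>headline text here</a></li>\n" * 500
compressed = gzip.compress(html)

# The gzip'd body is a small fraction of the original size.
print(len(html), len(compressed))
```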

[1]: <http://aws.typepad.com/aws/2011/02/host-your-static-website-on-amazon-s3.html>

[2]: <https://developers.google.com/speed/articles/use-compression>

------
ComputerGuru
It's a little-known fact that CloudFront supports GZip just fine, so long as
you're using pull from custom origin (like most people are).

You just need to configure your origin servers to serve GZip _even to HTTP
1.0_ (which is what CF requests will come as) and set the "Vary: Accept-
Encoding" header to prevent users of old IE versions from having GZip'd
content they don't support stuffed down their throats.

For example, this is my nginx configuration which serves both GZip'd and non-
GZip'd versions of the same objects via CF. The second and third lines are the
most important for correct AWS CF GZip distribution:

    gzip  on;
    gzip_vary on;
    gzip_http_version 1.0;
    gzip_comp_level 4;
    gzip_proxied any;
    gzip_types      text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/png;
    gzip_disable    "MSIE [1-6]\.";

Note that "image/png" is only in there because Google PageSpeed is very stupid
and marks not GZipping PNG files as a "bug", claiming I can save "up to 1%" by
employing GZip on PNGs.

~~~
TazeTSchnitzel
> gzipping format that uses deflate compression

ಠ_ಠ

~~~
nphase
This isn't reddit. Please reply with a helpful comment describing why this
is/isn't the right way to do things.

(<http://stackoverflow.com/questions/388595/why-use-deflate-instead-of-gzip-for-text-files-served-by-apache>)

~~~
TazeTSchnitzel
Sorry, I assumed people knew what gzip was. EDIT: That wasn't meant to be
condescending; I apologise.

To clarify the original comment: I think it's rather pointless to gzip a PNG
file, since PNG files use deflate compression, the same method gzip uses, and
hence it has very little real benefit, if any.
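A quick way to see this: deflate'd data looks statistically random, and gzip
can't shrink random-looking bytes — it only adds header overhead. A sketch,
using random bytes as a stand-in for a PNG's already-compressed image data:

```python
import gzip
import os

# Random bytes behave like data that has already been deflate-compressed,
# as a PNG's image stream has been.
already_compressed = os.urandom(50_000)
regzipped = gzip.compress(already_compressed)

# gzip gains nothing here; the output is at least as large as the input.
print(len(already_compressed), len(regzipped))
```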

~~~
Terretta
Your look of disapproval seems to have overlooked why he said he was using it.
He's clearly aware of your point, and even mentioned the uselessness ("up to
1%" savings) while explaining why: Google PageSpeed is stupid.

~~~
TazeTSchnitzel
Yes, I was further pointing out the stupidity of that: you're compressing
something twice.

------
RoboTeddy
Missing support for Cross-Origin Resource Sharing headers is a big problem for
some applications. For example, drawing images to a canvas from s3/cloudfront
will unavoidably taint your canvas.
(<https://developer.mozilla.org/en/CORS_Enabled_Image>)

Right now I'm proxying image requests to s3 through nginx, which is a terrible
workaround.
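For anyone stuck with the same workaround, a minimal sketch of such an nginx
proxy (bucket name hypothetical), bolting on the header S3 won't send:

```nginx
location /s3/ {
    # Strip the /s3/ prefix and forward the request to the bucket.
    rewrite ^/s3/(.*)$ /$1 break;
    proxy_pass http://example-bucket.s3.amazonaws.com;
    proxy_set_header Host example-bucket.s3.amazonaws.com;
    # The CORS header S3/CloudFront won't add for us.
    add_header Access-Control-Allow-Origin *;
}
```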

The AWS forums have a topic on the issue, started in 2009 (~200 replies so
far...): <https://forums.aws.amazon.com/thread.jspa?threadID=34281>

~~~
sofuture
We do the same thing for the same reason. Luckily we were able to split off a
lot of our other content serving (same content, not to canvas elements) to a
CDN which is backed by our origin servers. It's a little crazy, but is the
best we can do until there's better CORS and custom SSL for Cloudfront.

------
jorgeortiz85

> S3 has eleven nines of durability.

The author will find, to his dismay, that durability is not the same thing as
availability.

~~~
kozubik
The complexity implied by anything "better" than three nines is a recipe for
disaster.

In reality, neither you, nor Amazon, nor anyone else has any idea how durable
S3 is. But if they _did_, it wouldn't matter because unexpected interactions,
cascading failures, and SNAFU will keep it from ever being realized.

Much better to have more frequent, very boring failures than to have rare
spectacular ones.

~~~
jorgeortiz85
The author is proposing to serve his site entirely from S3, claiming it's
better than using a couple of nginx boxes because S3 has eleven nines of
durability.

Durability means you will get your data eventually (it will not be lost).
Availability means you will get your data right now, which is probably what he
really cares about in terms of serving live internet traffic.

Put another way: S3 not infrequently has availability hiccups (files are
temporarily unavailable, resulting in a disruption of service), without taking
durability hits (your files haven't been lost, you just can't see them right
now).

------
flyt
This is less "AWS" and more "S3/CloudFront".

There are many other product features that EC2/R53/ELB/etc. could use, but
calling this AWS is a little too broad.

~~~
benatkin
He uses other AWS services but these are all of his major gripes. So I think
it's fair for him to say AWS.

Also AWS is an organization (part of a larger organization), but S3 is a
product.

------
mistercow
> You could break your CSS into multiple files, but this is in direct
> opposition to one of the tenets of website optimization: minimize the
> number of HTTP requests.

Am I missing something here? Your fonts were going to be in a separate file
anyway, right?

------
akoumjian
I tweeted the same thing to that account and got no response. I'm glad you
did. The Access-Control-Allow-Origin header has been a heavily requested
feature since 2009:
[https://forums.aws.amazon.com/thread.jspa?threadID=34281&...](https://forums.aws.amazon.com/thread.jspa?threadID=34281&start=175&tstart=0)

One example of how fundamental this is: you cannot currently perform a direct
AJAX upload to an S3 bucket from a web application hosted on an EC2 instance.

There is a postMessage hack that will work with small files, and of course you
can use a proxy, but you'd think it would be a common scenario to want to
upload files directly to S3.
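The non-AJAX route that does exist today is the signed HTML POST form; the
server-side signing half can be sketched roughly like this (bucket, key
prefix, and secret key are all hypothetical):

```python
import base64
import hashlib
import hmac
import json

# Hypothetical upload policy for a browser-based POST form.
policy = {
    "expiration": "2030-01-01T00:00:00Z",
    "conditions": [
        {"bucket": "example-bucket"},
        ["starts-with", "$key", "uploads/"],
        {"acl": "private"},
    ],
}

policy_b64 = base64.b64encode(json.dumps(policy).encode()).decode()
# S3 browser-based uploads sign the base64-encoded policy with HMAC-SHA1.
signature = base64.b64encode(
    hmac.new(b"HYPOTHETICAL_SECRET", policy_b64.encode(), hashlib.sha1).digest()
).decode()

# policy_b64 and signature then go into hidden fields of the upload form.
```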

~~~
boucher
You can upload files directly to s3 from your website:
<http://aws.amazon.com/articles/1434>

~~~
Smrchy
This only works standalone and not embedded in an AJAX application.

I only wish someone from the S3 team at Amazon would at least reply to this
thread:

[https://forums.aws.amazon.com/thread.jspa?threadID=34281&...](https://forums.aws.amazon.com/thread.jspa?threadID=34281&start=175&tstart=0)

Hundreds of messages in this thread and no answer after 2 years and counting.

~~~
boucher
Well, "AJAX application" is too nebulous a term to really be meaningful here.
If you mean you can't use an XHR request, sure, but you can certainly
construct the necessary <input> element and POST request dynamically with
JavaScript.

------
ww520
Actually I would like to see S3 support custom SSL certificate. That would be
an awesome addition to make S3 a great static page server.

~~~
sofuture
Related: custom SSL for Cloudfront. It's a real show-stopper that I can't
serve Cloudfront over SSL via a CNAME.

------
diminish
"...Someone monitoring the @awscloud account opened a trouble ticket to my
email address asking for clarification." Support through Twitter is going
mainstream. It is like praying out loud and getting a response.

~~~
corin_
1.) AWS staff doing something is the very definition of not mainstream when it
comes to stuff like Twitter; their customers are developers

2.) AWS have always been awesome at responding to customer feedback in my
experience

3.) But you're right, except change "is going" to "has gone". A friend of mine
who works in SEO and social media (the good kind) says "In 2009 companies
needed to have social media accounts, in 2010 they needed to put out content
on them, in 2011 they needed to respond to customers through them" and he's
right. The mentality of customers of Twitter/Facebook has, for the most part,
moved from "holy hell, a company ACTUALLY SAW MY TWEET?" to "I tweeted about
my problem an hour ago, where the fuck is my answer?".

~~~
jasonkester
_"I tweeted about my problem an hour ago, where the fuck is my answer?"._

Can anybody else verify this? To me, it seems ridiculous that anybody would
expect to get support by posting something to a random website. Personally, I
go to twitter.com about four times a year and type in the names of my products
to do a quick vanity search about what people are saying about them. I've
never seen anything like a support request (or even a complete coherent
thought) in there. It just doesn't seem like something worth monitoring.

My product sites all have a contact page with an email address on it. If you
want to contact me, that's how you do it.

Amazon has forums with dedicated representatives monitoring them. That's how
you get in touch with them. I've never gone more than a few hours without a
response from somebody who knows what they're talking about in there.

~~~
hemancuso
I certainly get tweets in this vein for my app @expandrive and storage service
@strongspace, especially if something is affecting availability.

Lots of users realize that there is likely a faster response from twitter than
support@whatever.com because the developer has some amount of face at stake
with the dirty laundry in public.

~~~
jasonkester
But it's not really "in public" though, is it? I mean really, how many people
would you expect to go to search.twitter.com and type in "expandrive" in the
twelve or so hours that they cache that post? That's the only way anybody
would know that your dirty laundry was airing, and then only if they could
parse what the airer was trying to say.

If you really wanted to "expose" something in public, you'd put it up on a
blog or someplace that's actually on the public facing internet. Not that it
would get you any more chance of the company hearing about it, but at least
other people might see it.

And, of course, if you run a company that simply doesn't respond to things on
Twitter, the customer in question will hopefully learn that they can send you
an email and get a fast response.

------
yummybear
The lack of CORS support has been known by Amazon for years, but they have
still chosen not to fix it. There's a long-running thread on their support
forums somewhere where they start by saying they'll look into it. I believe
this was years ago.

------
melvinmt
Cloudfront actually does support gzip encoding if you use Custom Origin, just
not with S3.

~~~
jelder
Technically, CloudFront supports Accept-Encoding/Transfer-Encoding, and _not_
compression. If client and server supported ROT13 as an encoding, CloudFront
would support that, too. CloudFront is neither compressing nor decompressing
anything.

------
23david
These issues have been known to Amazon and to serious AWS users for a long
time. Why do you expect that this time they will actually do something? It
will take more than a simple Twitter response from the AWS team to convince me
that they will actually make changes to fix the situation...

------
bsimpson
We've been hosting our gzipped JavaScript via S3/CloudFront, and have had no
problems serving to IE7:

[http://libraries.netshelter.net/javascript/netshelter/librar...](http://libraries.netshelter.net/javascript/netshelter/library/1.4.2.min.jgz)

------
atechie
Also, SQS should accept UTF-8 in message bodies rather than a restricted set
of characters.

~~~
mleonhard
SOAP is the cause: <http://www.w3.org/TR/REC-xml/#charsets>

Fortunately it looks like AWS is starting to use JSON for newer APIs:
[http://docs.amazonwebservices.com/amazondynamodb/latest/deve...](http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/DataFormat.html)

------
spullara
You can use S3/Cloudfront for compressed assets as long as your main page is
dynamic. It can just generate different URLs for assets based on whether the
browser supports gzip or not. See bagcheck.com for an example.
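A minimal sketch of that URL-switching idea (hostnames hypothetical): the
dynamic page inspects the client's Accept-Encoding header and emits asset URLs
pointing at either a pre-gzipped or a plain copy.

```python
def asset_url(accept_encoding, path):
    # Pick the pre-gzipped S3/CloudFront copy only for clients
    # that advertise gzip support; hostnames are made up.
    if "gzip" in (accept_encoding or "").lower():
        return "https://gz.example-cdn.net/" + path
    return "https://plain.example-cdn.net/" + path
```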

------
jwr
These are valid points and the same ones I've encountered when using S3 and
CloudFront. I am actually amazed that gzip encoding _still_ isn't supported —
people have been complaining about this for years.

------
driverdan
On a related note, do not use S3 on the web, use CloudFront. S3's performance
is highly variable and latency tends to be high. Serving files from S3 and not
CloudFront is foolish and will slow your site down.

------
ceejayoz
I'd like to see Micro instances available in Virtual Private Cloud.

In the forums, an Amazon rep promised it'd be available within 2011. No luck,
though.

------
malandrew
They also need websockets support over ELBs.

~~~
gabrtv
At OpDemand we're using WebSockets successfully through multiple ELBs. The
trick is setting the listener to use TCP instead of HTTP. With TCP forwarding
you lose X-Forwarded-For headers, cookie stickiness, and a few other HTTP-
specific features, but you can always spin up a separate listener for those.

------
hypervisor
Only three problems?

