
Making browsers faster: Resource Packages - bpung
http://limi.net/articles/resource-packages/
======
jerf
Seems like a better idea would be to just have the server aggressively shove
resources down the wire, much like standard HTTP pipelining except the server
doesn't wait for the client to request them. Have the client send a flag in
the initial request saying it's willing to accept this. You get much the same
effect, only without adding ZIP files to the mix.

You also get the ability to have proper headers on each element; this solves
every "Additional Note" they mention. Some obvious extensions involve things
like sending up all the ETags the client knows about in the first request. I
think it's a lot simpler than their approach and retains more of HTTP. Yeah,
it's slightly less bandwidth-efficient (though to make the difference large
you need pathological examples), but I think it's worth it.
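
Roughly the kind of exchange I have in mind, as a sketch. The header names
here ("X-Accept-Push", "X-Known-ETags") are made up for illustration; nothing
like them exists in any spec:

    # Sketch of the opening request in this scheme. "X-Accept-Push" and
    # "X-Known-ETags" are hypothetical header names, not real ones.
    import http.client

    conn = http.client.HTTPConnection("example.org")
    conn.request("GET", "/", headers={
        # Flag: client is willing to receive resources it didn't ask for.
        "X-Accept-Push": "1",
        # ETags of resources already in the client cache, so the server
        # can skip pushing those.
        "X-Known-ETags": '"a1b2c3", "d4e5f6"',
    })
    resp = conn.getresponse()
    # A conforming server could follow this response with unsolicited
    # responses for /style.css, /logo.png, etc., each carrying its own
    # full set of headers (which is what makes the "Additional Notes"
    # problems go away).
    print(resp.status, resp.getheader("Content-Type"))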

~~~
djcapelis
The naive implementation of this ruins all the elegance of caching, which is
very important to modern websites. You end up choosing between getting all the
common elements shoved down the wire at you every time, or invariably having
to go back for the few images unique to the page that you missed.

We're in the middle of a research project right now where the server
automatically constructs cache groups and does this properly. It was one of
the things we talked about with the guys working on SPDY, but they're taking a
slightly different approach.

We'll see what happens, but this is the direction we want to move as well.

~~~
jerf
In my model, the web server knows what the "extra few images" are and could
send them to you as part of the first burst. First-order caching (no proxying,
just server and client) can be handled by making the client smart enough to
send up all of its caching info at once.

How to handle proxy caches cleanly isn't immediately obvious to me, but there
are far more tools available with this approach than with the zip-file
approach. Certainly with work I think it could be solved. Again, to a first
order, the proxy would have to accept this style too and do something
intelligent with it; an old-style proxy would fall back on the current system.

It's not clear to me: are you the one proposing this zip file thing, or are
you in another group working on this? If the latter, I wouldn't mind seeing a
link, perhaps even submitted to HN.

~~~
djcapelis
I'm in a completely separate group in a completely separate place that happens
to have seen this link before. We haven't done anything worth linking to yet;
we just have a little knowledge of this area, since we've been looking at it
as well.

What you're saying is close to what we've been trying to turn into code. I
think the proxy thing is a non-issue. In fact, a proxy could in many ways
implement a portion of the server-side smartness and accelerate conforming
clients on websites that don't implement this.

There are some larger issues, though, I think:

1) Making the server smart enough to actually do this.

2) Sending the resources in the right _order_. (It turns out the order things
go down the line can be significant, and it can vary depending on which web
rendering engine your user has.)

3) This requires changes to HTTP, unlike the linked proposal, so one needs a
way to do it right and a compact representation of the cache information (see
the sketch below).
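
For (3), here's the flavor of compact representation I mean -- a sketch, not
something we've settled on: send short truncated hashes of the URLs the client
already has cached, and let the server skip anything that matches.

    # Sketch: a compact cache digest built from truncated hashes of
    # cached URLs (4 bytes each). A false positive just means a resource
    # isn't pushed and gets fetched the old-fashioned way.
    import hashlib

    def digest_entry(url):
        return hashlib.sha256(url.encode()).hexdigest()[:8]

    cached = ["http://example.org/style.css", "http://example.org/logo.png"]
    digest = ",".join(digest_entry(u) for u in cached)
    # Client sends something like:  X-Cache-Digest: <digest>  (made-up header)

    def server_should_push(url, digest_header):
        return digest_entry(url) not in digest_header.split(",")

    print(server_should_push("http://example.org/app.js", digest))   # True
    print(server_should_push("http://example.org/logo.png", digest)) # False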

------
alec
"While this effort [SPDY] from Google aims to make everything faster, it is
largely orthogonal to what we’re trying to do with Resource Package."

It seems to solve the same problem, though, without the headache of rewriting
everything.

~~~
ramanujan
I think he means orthogonal here in the sense of complementary. SPDY +
resource packages may be faster than either individually.

~~~
jerome_etienne
Can you explain? Both try to minimize the number of TCP connections and thus
avoid slow start on every HTTP request.

SPDY has the advantage of being a drop-in solution: you push the layer into
the browser and into the server, with no change to the website itself.

"Resource packages" require changes in both the browser and the website.

------
joeyh
This seems like a fairly gratuitous workaround for IIS not properly supporting
HTTP pipelining.

But, bonus irony points for using .zip to implement it. (.rar would be even
more ironlicious.)

~~~
patio11
It is more like a work-around for RFC2616, the HTTP 1.1 spec, which says that
compliant clients SHOULD NOT make more than 2 simultaneous connections to one
server. This interacts rather poorly with modern sites, which may well need to
load 50+ files on a single page view.

Many of the YSlow recommendations center on ways of getting around this
limitation -- for example, using multiple redundant domains
(images1.example.org, images2.example.org... permitting you to do 2*N
downloads in parallel instead of 2), putting all your CSS/JS in one file,
image spriting, etc.
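
The domain-splitting trick usually hashes each path to a fixed hostname so
every URL stays stable (and thus cacheable). A sketch, using the hostnames
from above:

    # Sketch of hash-based domain sharding: each path maps
    # deterministically to one of N hostnames, so the browser can open
    # up to 2*N connections while every resource keeps a stable,
    # cacheable URL.
    import zlib

    SHARDS = ["images1.example.org", "images2.example.org"]

    def shard_url(path):
        host = SHARDS[zlib.crc32(path.encode()) % len(SHARDS)]
        return "http://" + host + path

    print(shard_url("/img/logo.png"))    # always the same host for this path
    print(shard_url("/img/header.png"))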

------
Veera
Good idea.

But what if the user visits only one page of the website and leaves
immediately (such as traffic coming from search engines)? In that scenario,
the resource package for the entire site gets downloaded just to show one
page!

~~~
loup-vaillant
Often, the majority of the resources (CSS, background images…) are required
for rendering _any_ page. So I don't think it will be a problem most of the
time, and it fits their 80-20 goal.

------
sutro
This is a very good, very pragmatic idea.

~~~
djcapelis
It is, but I'm of the opinion that HTTP needs more than pragmatism to solve
its woes at the moment.

The latency issue with the protocol is just immense, and this takes a whack at
a main issue but doesn't quite get to where it should. One of the main
drawbacks is that you still have to wait for the first reply before you can
request any associated content, which is a bigger deal than it seems. The
second drawback is that it requires the website owner to do this by hand, and
invariably they'll bundle things the wrong way, fail to account for differing
network bandwidth types, or simply not do it at all.

That said, for some very good reasons, this is a good direction, so we'll see
what happens.

~~~
sutro
The author has knowingly traded off technical style points for practical ones,
a tradeoff I applaud. So often the opposite choice is made, resulting in
beautiful little laboratory experiments that have little relevance in the real
world.

------
gojomo
The <link> could include a secure hash of the package contents -- so that it
could be loaded from a less-reliable or less-trusted channel than the
referencing page. (For example, small page from uncacheable HTTPS, bulk
package from edge-cacheable HTTP or P2P.)
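
A minimal sketch of the client-side check, assuming the page carried a
SHA-256 hex digest for the package (the hash-in-<link> attribute is
hypothetical; nothing like it is in the proposal):

    # Sketch: verify a resource package fetched over an untrusted
    # channel (plain HTTP, P2P, ...) against a hash carried in the
    # trusted, HTTPS-served referencing page.
    import hashlib
    import urllib.request

    def fetch_verified(url, expected_sha256):
        data = urllib.request.urlopen(url).read()
        actual = hashlib.sha256(data).hexdigest()
        if actual != expected_sha256:
            raise ValueError("package hash mismatch: " + actual)
        return data  # safe to unpack regardless of how it traveled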

------
gojomo
An earlier Google proposal for faster page-loading -- still active in Toolbar
for IE AFAIK -- was "Shared Dictionary Compression over HTTP" (SDCH):

[http://sdch.googlegroups.com/web/Shared_Dictionary_Compressi...](http://sdch.googlegroups.com/web/Shared_Dictionary_Compression_over_HTTP.pdf)

<http://groups.google.com/group/SDCH>

In some ways a custom-dictionary-per-page-group would be a lot like a
resource-package-per-page-group.
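
The core idea is easy to sketch with zlib's preset-dictionary support (just an
illustration of dictionary-based compression; SDCH itself uses VCDIFF, not
zlib):

    # Sketch: compress a page against a shared dictionary of boilerplate
    # common to a group of pages. The dictionary ships once and is cached.
    import zlib

    dictionary = b"<html><head><link rel='stylesheet' href='/site.css'>"
    page = b"<html><head><link rel='stylesheet' href='/site.css'><body>hi"

    plain = zlib.compress(page)
    comp = zlib.compressobj(zdict=dictionary)
    with_dict = comp.compress(page) + comp.flush()
    print(len(plain), len(with_dict))  # the dictionary version is smaller

    # Decompression needs the same dictionary:
    decomp = zlib.decompressobj(zdict=dictionary)
    assert decomp.decompress(with_dict) == page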

