
Digg shows Multipart XMLHttpRequest prototype - alexandros
http://ajaxian.com/archives/digg-shows-multipart-xmlhttprequest-prototype
======
froo
Umm... I just tested this on Firefox 3.0.10 running under Linux (Ubuntu).

 _Trial 1.._

MXHR Stream - 10988 ms

Normal - 346 ms

Screenshot attached <http://i41.tinypic.com/zn0k88.jpg>

_Trial 2.._

MXHR Stream - 10978 ms

Normal - 111 ms

Screenshot attached <http://i44.tinypic.com/24awmxg.jpg>

_Trial 3.._

MXHR Stream - 12222 ms

Normal - 122 ms

Screenshot attached <http://i42.tinypic.com/198o6t.png>

So Digg figured out a really interesting way of making something at least 31
times (a median of 59 times) slower? Awesome. Good job!

Instead of trying out new and interesting ways of slowing shit down, perhaps
they can actually figure out how to turn a profit.

~~~
burny
As stated in the comments of the actual article, the demo for text doesn't
really do the technique justice.

Check out the Image demo: <http://demos.digg.com/stream/imageDemo.html>

~~~
froo
So, using this for my day to day browsing of the web - news sites, Twitter,
Facebook, email, blog posts and all that other text content out there - will
only make my web experience at least 31 times slower, but that's ok, because
at least the occasional logo and embedded image will load just a little
faster.

Also, thank you for pointing me to that comment, because it also mentions
that IE won't support data URIs.
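
For reference, a data URI just inlines the encoded bytes of a resource in the
URL itself, so no separate HTTP request is needed to fetch it. A throwaway
sketch - the payload here is a stock 1x1 transparent GIF, not anything from
Digg's demo:

    // The base64 payload is a stock 1x1 transparent GIF, for illustration only.
    var img = new Image();
    img.src = "data:image/gif;base64," +
              "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7";
    document.body.appendChild(img);

IE7 and below simply won't render that src; IE8 supports data URIs but caps
them at 32KB.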

So all this does is slow down the majority of the web experience for people
who want a customisable, faster browser?

How in any way is this a useful technique?

Besides, the test is a bit stupid for a real-world example - 300 tiny
uncached icons? Yahoo talks about speeding up load times by reducing HTTP
requests; wouldn't you just want to serve them as a single CSS sprite (as
Yahoo recommends), which would also reduce the overall image size by removing
unnecessary meta information?
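
To make that concrete, here's roughly what the sprite approach looks like. It
normally lives in a stylesheet; this is the script equivalent, and icons.png,
the element id and the offsets are all made up:

    // Hypothetical sprite sheet: all 300 icons packed into one icons.png,
    // fetched with a single HTTP request and cached from then on.
    // Each icon is shown by offsetting the sheet inside a 16x16 box.
    var el = document.getElementById("icon-save");    // made-up element id
    el.style.width = "16px";
    el.style.height = "16px";
    el.style.background = "url(icons.png) no-repeat";
    el.style.backgroundPosition = "-48px -16px";      // 4th column, 2nd row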

~~~
e1ven
It's useful for pages with hundreds of inline elements, such as uncached
images.

As I understand it, the reasoning is that each connection to the server has a
time cost in creating and tearing down the connection.

Creating X connections, up to the browser's persistent-connection limit, and
streaming all the data through them avoids the connect-and-teardown overhead,
allowing those specific edge cases to go faster.
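
For the curious, the basic shape of the technique looks something like the
sketch below. This is a rough illustration, not Digg's actual library; the
endpoint, the delimiter and the "mime/type:base64data" record format are all
assumptions. The client opens one long-lived XHR and parses responseText
while the response is still streaming in:

    // Rough sketch of the MXHR idea (assumed wire format, made-up endpoint).
    // The server is assumed to stream records of the form "mime/type:base64data",
    // separated by a delimiter character, over a single response.
    var DELIM = "\u0001";                       // assumed record separator
    var xhr = new XMLHttpRequest();
    var parsed = 0;                             // responseText consumed so far
    xhr.open("GET", "/stream/images", true);    // made-up endpoint
    xhr.onreadystatechange = function () {
      if (xhr.readyState < 3) return;           // 3 = partial data available
      var text = xhr.responseText;
      var end = text.lastIndexOf(DELIM);        // only take complete records
      if (end <= parsed) return;
      var records = text.substring(parsed, end).split(DELIM);
      for (var i = 0; i < records.length; i++) {
        var sep = records[i].indexOf(":");
        var img = new Image();
        // Every payload rides the same connection; each becomes a data URI.
        img.src = "data:" + records[i].substring(0, sep) + ";base64," +
                  records[i].substring(sep + 1);
        document.body.appendChild(img);
      }
      parsed = end + 1;
    };
    xhr.send(null);

One caveat: Firefox lets you read responseText at readyState 3, but IE6/7
throw if you try, which is yet another reason this is Firefox-only territory
for now.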

For most sites I don't imagine it would be a boost, but for sites with lots
of avatars, like Digg or large forums, it might be.

~~~
froo
_For most sites I don't imagine it would be a boost, but for sites with lots
of avatars, like Digg or large forums, it might be._

Oh, I didn't say you couldn't think of an example where it would have a use;
I was thinking perhaps some sort of photo gallery application would find
tremendous use for this... if it worked in ALL browsers.

However, the fact is that IE is the current dominant browser and IE7 doesn't
support this... so the point is really moot.

Besides, your average Digg user doesn't read past the first 20 comments
anyway (just look at the weighting of diggs on comments), so does it really
matter how long it takes for the rest of the avatars to load?

Cool idea in theory, limited use - Digg could have spent more time working on
profitability instead.

~~~
e1ven
Fair point, but I try to avoid second-guessing what's best for companies
other than my own. For all I know, the bandwidth savings are greater than the
cost of the engineer who implemented it.

That said, there may be a workaround for IE7 via MHTML:
<http://www.phpied.com/mhtml-when-you-need-data-uris-in-ie7-and-under/>

------
judofyr
old + blog spam <http://news.ycombinator.com/item?id=574459>

