I wrote a tire search app a few years back and made it work extremely fast given the task at hand. But I did not go to the level that this guy did. http://tiredb.com
If we could cram more modern functionality into say...twice or three times the performance of the above, I think the web would be a better place. Instead the web is a couple orders of magnitude slower.
I feel like the big thing I'm missing is smart compilers that can take web app concepts and turn them into extremely optimised 'raw' HTML/CSS/JS/SQL/backend. All of the current frameworks still rely on hand-written, frequently bloated or inelegant CSS & HTML, and still require thinking manually about how and when to do AJAX so that it's least offensive to the user. Maybe something like yesod ( http://www.yesodweb.com/ ) is heading in the right direction. http://pyjs.org/ has some nice ideas too... But I'm thinking of something bigger than the individual technologies like coffeescript or LESS... Something that doesn't 'compile to JS', or 'compile to CSS', but 'compile to stack'. I dunno. Maybe I'm just rambling.
That's gotta be a law codified somewhere, right?
On the other hand, there are sites which are conceptually much simpler but incredibly sluggish. Twitter is a particularly bad offender after you've scrolled down a few pages. Or any other site that uses a ton of Ajax with little regard for the consequences.
But, yeah. If webpages would just revert to what TBL had created (yes, I'll allow for images and minimal other frippery) things would be so much more manageable.
Not just performance, but efficiency - both speed and size. Sadly it seems that most of the time this point is brought up, it gets dismissed as "premature optimisation". Instead we're taught in CS to pile abstraction upon abstraction even when they're not really needed, to create overly complex systems just to perform simple tasks, to not care much about efficiency "because hardware is always getting better". I've never agreed with that sort of thinking.
I think it creates a skewed perception of what can be accomplished with current hardware, since it makes optimisation an "only if it's not fast/small enough/we can't afford new hardware" goal; it won't be part of the mindset when designing, nor when writing the bulk of the code. The demoscene challenges this type of thought; it shows that if you design with specific size/speed goals in mind, you can achieve what others would have thought impossible. That's a real eye-opener; by pushing the limits, it demonstrates just how inefficient most software is.
Right, exactly. It's obvious too that software has scaled faster than hardware, in the sense that an equivalent task, say booting to a usable state, takes orders of magnitude longer today than it used to, despite hardware that's also orders of magnitude faster.
So when I see a demo of ported software that does something computing used to do back in the 90s (but slowly), I'm really only impressed by the massive towers of abstraction we're building on these days; what we're actually able to do is not all that much better. To think that I'm sitting on a machine capable of billions of instructions per second, and I'm watching it perform like a computer doing millions, is frankly depressing.
All of this is really to make programmers more efficient, because programmer time is expensive (and getting stuff out the door quicker is important), but the amount of time (and money) lost on the user's end, waiting for these monstrosities of abstraction to compute something, must far exceed those costs.
I'm actually of the opinion that developers should work on or target much lower end machines to force them to think of speed and memory optimizations. The users will thank them and the products will simply "be better" and continue to get better as machines get better automatically.
I believe that the amount of time spent optimising software should be proportional to how long it will be used for, and how many users it has/will have. It makes little sense to spend an hour to take 10 minutes off the execution time of a quick-and-dirty script that will only be run once or twice. It makes a lot of sense to spend an hour, or even a day or week, to take 1 second off the execution time of software with hundreds of thousands or millions of users that constantly use it. At some point the overhead of optimisation is less than the time (or memory?) saved by everyone, so the "programmer time is expensive" line of thinking is really a form of selfishness; interesting that free/open-source software hasn't evolved differently, since it operates under a different set of assumptions.
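To put rough numbers on that argument, here's a back-of-envelope sketch; every figure in it is an assumption for illustration, not data from the thread:

    # Is an engineer-week spent shaving one second off a common operation worth it?
    # All numbers below are assumptions.
    users          = 500_000   # daily active users
    uses_per_day   = 20        # times each user hits the slow path per day
    seconds_saved  = 1.0       # saved per use by the optimisation
    engineer_hours = 40        # one week spent optimising

    user_hours_saved_per_day = users * uses_per_day * seconds_saved / 3600
    days_to_break_even = engineer_hours / user_hours_saved_per_day

    print("%.0f user-hours saved per day" % user_hours_saved_per_day)
    print("engineer time repaid after %.2f days" % days_to_break_even)

User-hours and engineer-hours obviously aren't interchangeable one-for-one, but even with a heavy discount the asymmetry supports the point.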
Before I replaced my old desktop, I think my boot times were something on the order of 10 minutes.
(And I don't count Windows claiming you can start working while it loads a bunch of stuff in the background, making the system unusably slow.)
Well, there will always be the demoscene ( http://www.youtube.com/watch?v=5lbAMLrl3xI ), which I've always found remarkable.
- I optimized for compression by doing things the same way everywhere; e.g. I always put the class attribute first in my tags
- I wrote a utility that tried rearranging my CSS, in an attempt to find the ordering that was the most compressible
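For anyone curious what that rearranging utility might look like, here's a rough sketch of the idea (my own toy code, not the author's actual tool): shuffle the order of top-level CSS rules and keep whichever ordering gzips smallest. It naively splits on '}', so it assumes no nested blocks like @media, and reordering is only safe when the rules don't depend on cascade order.

    import gzip
    import random

    def rules_of(css):
        # naive split: assumes no nested braces (e.g. no @media blocks)
        return [r.strip() + "}" for r in css.split("}") if r.strip()]

    def gzipped_size(rules):
        return len(gzip.compress("".join(rules).encode(), 9))

    def best_ordering(css, tries=10000):
        # random search: try shuffled orderings, keep the smallest found so far
        best = rules_of(css)
        best_size = gzipped_size(best)
        for _ in range(tries):
            candidate = best[:]
            random.shuffle(candidate)
            size = gzipped_size(candidate)
            if size < best_size:
                best, best_size = candidate, size
        return best, best_size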
Compression algorithms can do a better job when they're domain-aware. An HTML-aware algorithm could compress HTML much better than a general-purpose plain-text compression algorithm, without requiring the user to do things like put the class attribute first. Of course, that also requires the decompression algorithm to be similarly aware, which can be a problem if you're distributing the compressed bits widely.
Well, not necessarily... An HTML-aware algorithm could, for example, rearrange attributes into the same order everywhere, because it knows the order doesn't matter.
Actually that would be a nice addition to the HTML "compressors" out there.
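A toy version of that (my own sketch, not an existing tool) can be built on Python's html.parser: re-emit every start tag with its attributes in a fixed, sorted order before gzipping, so repeated tags become more self-similar. It ignores plenty of real-world edge cases (re-escaping attribute values, processing instructions, significant whitespace), so treat it as an illustration only.

    import gzip
    import sys
    from html.parser import HTMLParser

    class AttrSorter(HTMLParser):
        """Re-emit HTML with attributes in a fixed, sorted order."""
        def __init__(self):
            super().__init__(convert_charrefs=False)
            self.out = []

        def _emit_tag(self, tag, attrs, self_closing=False):
            parts = [tag]
            for name, value in sorted(attrs, key=lambda a: a[0]):
                parts.append(name if value is None else '%s="%s"' % (name, value))
            self.out.append("<" + " ".join(parts) + ("/>" if self_closing else ">"))

        def handle_starttag(self, tag, attrs): self._emit_tag(tag, attrs)
        def handle_startendtag(self, tag, attrs): self._emit_tag(tag, attrs, True)
        def handle_endtag(self, tag): self.out.append("</%s>" % tag)
        def handle_data(self, data): self.out.append(data)
        def handle_entityref(self, name): self.out.append("&%s;" % name)
        def handle_charref(self, name): self.out.append("&#%s;" % name)
        def handle_comment(self, data): self.out.append("<!--%s-->" % data)
        def handle_decl(self, decl): self.out.append("<!%s>" % decl)

    raw = sys.stdin.read()
    parser = AttrSorter()
    parser.feed(raw)
    parser.close()
    canonical = "".join(parser.out)
    print("gzip -9:", len(gzip.compress(raw.encode(), 9)), "->",
          len(gzip.compress(canonical.encode(), 9)), "bytes")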
> Courgette transforms the input into an alternate form where binary diffing is more effective, does the differential compression in the transformed space, and inverts the transform to get the patched output in the original format. With careful choice of the alternate format we can get substantially smaller updates.
The reason for doing it wasn't so much the compression benefit; rather, some of the nanoc code that generates the site didn't always order the tags the same way, and then it had to rsync up more than it needed to.
I've seen similar ideas in the demoscene 4k competition world, where code and music are arranged to have as many repeating, self-similar patterns as possible so the executable compressors can shrink them optimally.
"Farbrausch" and their series of Fr-X "small" demos would be one example of this kind of "entropy trickery"
The next target for crunching would be to minimize the actual amount of code given to the browser to execute, versus maximizing the compression ratio only (which is "just" correlated to the running "code" size)
It could make writing for 4k less of a chore?
In any case, this is an outstanding hack. The company I work for has TLS certificates that are larger than the payload of his page. Absolutely terrific job, Daniel.
This probably says something about compression technology vs. the state of the art in machine learning, but I'm not sure what.
E.g. on a thread click (st4k):
GET https://api.stackexchange.com/2.2/questions/21840919 [HTTP/1.1 200 OK 212ms]
GET https://www.gravatar.com/avatar/dca03295d2e81708823c5bd62e75... [HTTP/1.1 200 OK 146ms]
stackoverflow.com (a lot of web requests):
GET http://stackoverflow.com/questions/21841027/override-volume-... [HTTP/1.1 200 OK 120ms]
GET http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min... [HTTP/1.1 200 OK 62ms]
GET http://cdn.sstatic.net/Js/stub.en.js [HTTP/1.1 200 OK 58ms]
GET http://cdn.sstatic.net/stackoverflow/all.css [HTTP/1.1 200 OK 73ms]
GET https://www.gravatar.com/avatar/2a4cbc9da2ce334d7a5c8f483c92... [HTTP/1.1 200 OK 90ms]
GET http://i.stack.imgur.com/tKsDb.png [HTTP/1.1 200 OK 20ms]
GET http://static.adzerk.net/ados.js [HTTP/1.1 200 OK 33ms]
GET http://www.google-analytics.com/analytics.js [HTTP/1.1 200 OK 18ms]
Even with browser cache enabled, Stack Overflow loads a considerable number of resources compared to st4k. St4k makes one API call to get the S.O. data (JSON, ~1KiB), then loads any needed images. Stack Overflow is loading the entire HTML document again (~15KiB), along with a lot of other web resources. Without going into their code, I've got no idea what is lazy-loaded.
But my point still stands on the speed of page navigation (not the first-time landing). St4k is faster because each change of page requires fewer KiB of information to perform a page render, as well as less content to render: compressed JSON vs. the entire HTML markup, and rendering the changes vs. re-rendering the entire window/document.
I'm surprised there's such a difference (15/1) between the HTML and JSON versions of the same data. Both add their own syntactic cruft, but I wouldn't expect the weight of the markup to be that much greater than the equivalent JSON, unless it's implemented horribly inefficiently (e.g. very verbose class names, inline styling, DIVitis). I'm suspicious about that 15/1 figure.
All in all, though, this is an interesting approach. Nothing radically new, but definitely good to see a solid proof of concept that we can all relate to. I particularly like the way this gets around any API throttling limits, since the St4k server isn't doing the communication with SO; it's all happening client->server, much as if one were just browsing SO as normal. Is there a term for this? It's not quite a proxy, since it's not 'in the middle', but more 'off to the side, not interfering directly, merely offering helpful advice' :)
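Rather than guess at that 15/1 figure, it's easy enough to measure. A quick sketch (assumes the third-party requests package; the question id is arbitrary, the site= parameter is required by the API, and the withbody filter pulls in the question body so the comparison is fair):

    import gzip
    import requests

    qid = 21840919  # any question id
    html = requests.get("https://stackoverflow.com/questions/%d" % qid).content
    api = requests.get(
        "https://api.stackexchange.com/2.2/questions/%d" % qid,
        params={"site": "stackoverflow", "filter": "withbody"},
    ).content  # requests transparently un-gzips the API response

    for label, body in (("html", html), ("json", api)):
        print(label, len(body), "bytes raw,", len(gzip.compress(body, 9)), "gzipped")

Much of any gap in practice likely comes from the page shipping chrome (header, sidebar, footer, inline scripts) that the JSON simply doesn't carry, rather than the markup syntax itself being 15x heavier.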
And now we're proud to have a simple functional list compiled into the same amount of memory ...
My only thoughts are that search is the real bottleneck.
HTML ~200KB (~33KB gzipped)
I'd take some trade-off between crazy optimization and maintainability, but I'd definitely rather do this than slap on any number of frameworks because they are the new 'standard'.
Of course, the guy who has to maintain my code usually ends up crying like a little girl.
I would love to investigate this further. I've always had a suspicion that the aim to make everything reusable for the sake of byte size actually has the opposite effect, as you have to start writing in support and handling for tons of edge cases as well, not to mention you now have to write unit tests so anyone who consumes your work isn't burned by a refactor. Obviously, there's a place for things like underscore, jquery, and boilerplate code like Backbone, but bringing enterprise-level extensibility to client code is probably mostly a bad thing.
I wonder how we can unobfuscate the source. It would be great if there were a readable version of the source as well, just like we have in the Obfuscated C Code Contest. Or perhaps there's some way to use the Chrome inspector for this.
That font family should have been monospace.
But whatever :)
$ curl -s http://danlec.com/st4k | gzip -cd | sed 's/serif/monospace/' | gzip -9c | wc
14 94 4098
Try popping open the inspector panel, and the fonts will magically correct themselves.
$ curl -s http://danlec.com/st4k | wc
14 80 4096
$ curl -s http://danlec.com/st4k | gzip -cd | wc
17 311 11547
$ curl -s http://danlec.com/st4k | gzip -cd | gzip -c | wc
19 103 4098
$ curl -s http://danlec.com/st4k | gzip -cd | gzip -9c | wc
14 80 4096
$ zopfli -c st4k |wc
11 127 4050
$ curl -s http://danlec.com/st4k | gzip -cd | 7z a -si -tgzip -mx=9 compressed.gz
$ wc compressed.gz
14 84 4048 compressed.gz
Did you try a png data url? Could be smaller.
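If anyone wants to test that hunch, here's a rough sketch (assumes Pillow; it packs the page bytes into a 1-pixel-tall greyscale PNG and compares against gzip -9; PNG uses the same DEFLATE underneath, so don't expect miracles):

    import gzip
    import io
    from PIL import Image

    data = open("st4k.html", "rb").read()           # the uncompressed page
    img = Image.frombytes("L", (len(data), 1), data) # one byte per pixel
    buf = io.BytesIO()
    img.save(buf, "PNG", optimize=True)
    print("gzip -9:", len(gzip.compress(data, 9)), "bytes")
    print("png:    ", len(buf.getvalue()), "bytes")

And the decoding side would also need some canvas bootstrap code to read the pixels back out, which eats into any savings.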
A specialized compression protocol for the web?