I haven't completely digested it, but it looks like it works by a combination of pre-calculating a dependency graph for each page and then tweaking that graph during the real page load depending on current network conditions.
To load assets like JavaScript, CSS, and images, Polaris leverages DOM interfaces like document.innerHTML to dynamically update the page.
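If I'm reading it right, the core trick is something like this toy sketch (the names and graph format are my own, not the paper's): walk a precomputed dependency graph, fetch each object as soon as its parents are done, and splice the result into the live document.

    // Toy Polaris-style scheduler; not the paper's actual code.
    const graph = {
      'app.css': { deps: [] },
      'util.js': { deps: [] },
      'app.js':  { deps: ['util.js'] },  // must run after util.js
    };

    const pending = new Map();  // memoize so each object loads once

    function load(url) {
      if (!pending.has(url)) {
        pending.set(url, (async () => {
          // Wait for this object's dependencies first.
          await Promise.all(graph[url].deps.map(load));
          const body = await (await fetch(url)).text();
          if (url.endsWith('.js')) {
            eval(body);  // execute the fetched script
          } else if (url.endsWith('.css')) {
            const style = document.createElement('style');
            style.textContent = body;  // splice styles into the DOM
            document.head.appendChild(style);
          }
        })());
      }
      return pending.get(url);
    }

    Object.keys(graph).forEach(load);

A real scheduler would additionally reprioritize fetches on the fly based on observed network conditions, which is the part I haven't fully digested.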
The high-level advice of streamlining the dependency chain of page assets is laudable, but this doesn't sound like the right approach.
> "To use Polaris with a specific page, a web developer runs
Scout on that page to generate a dependency graph and a Polaris scheduler stub. The developer then configures her web server to respond to requests for that page with the scheduler stub’s HTML instead of the page's regular HTML."
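In other words, the server just returns a different HTML payload for the same URL. A minimal sketch of that, with Express and the file names being my assumptions rather than anything from the paper:

    const express = require('express');
    const app = express();

    // Serve the Scout-generated scheduler stub in place of the real page.
    app.get('/article.html', (req, res) => {
      res.sendFile(__dirname + '/polaris/article.stub.html');
    });

    app.listen(8080);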
(But even if I've misunderstood that and it is realtime, the network is still a tighter bottleneck than the page renderer in most cases, so it seems like a generally reasonable tradeoff to me.)
I'm curious how well this compares to e.g. a plain old webpack solution though -- ideally you'd have organized your assets well enough in the first place that something like Polaris would be unnecessary. "Ideal" is often a long way from reality, of course.
eval() is a significant concern for performance.
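For what it's worth, a loader doesn't have to use bare eval(); a quick sketch of the usual alternatives (the fetched source here is a placeholder):

    const src = 'console.log("hello from fetched code");';

    // eval() runs in the caller's scope, a known optimization hazard
    // for JS engines.
    eval(src);

    // A <script> element (or new Function) keeps the code in its own
    // scope, which engines handle much better.
    const s = document.createElement('script');
    s.textContent = src;
    document.head.appendChild(s);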
This mechanism loads the page much faster than letting the client browser perform all those (latency-unfriendly) GET requests. It is also superior to file concatenation, since you can change any individual asset without forcing every client to re-download one huge concatenated file.
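Content-hashed filenames in a bundler buy you the same per-asset invalidation; a sketch with webpack (4+ assumed), where changing one source file only changes that one output name:

    // webpack.config.js (sketch)
    module.exports = {
      entry: './src/index.js',
      output: {
        filename: '[name].[contenthash].js',  // hash changes only when content does
        path: __dirname + '/dist',
      },
      optimization: {
        // Split shared code into its own chunk so it caches independently.
        splitChunks: { chunks: 'all' },
      },
    };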
HTTP/2 multiplexes all your requests over a single persistent connection -- basically like loading all your assets over an existing WebSocket.
Even with resource hints, H2, etc., the browser is still discovering the dependencies at run time and relying on the developer to hint them correctly to get any speed-up.
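For anyone unfamiliar, "hinting" means things like preload links -- normally written straight into the HTML head or a Link header, though you can also inject them from script. A minimal sketch (the URLs are placeholders):

    // Equivalent to <link rel="preload" href="..." as="..."> in the HTML.
    for (const href of ['/css/app.css', '/js/app.js']) {
      const link = document.createElement('link');
      link.rel = 'preload';
      link.as = href.endsWith('.css') ? 'style' : 'script';
      link.href = href;
      document.head.appendChild(link);
    }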
I switched the include directive to inline for the table row, but I was using template directives that the template compiler didn't support inline. In the end I had to add some code to the compiler to get reasonable render times (thank you, open source).
Running that under lighttpd is probably the fastest you can easily get out of a web server.
Also, I try never to edit the ramdisk directly. I commit changes to a git repo and use a small script that pulls from the repo and writes straight to the ramdisk.
Doesn't this describe all web browser development of the past 15 years?
If you do it manually you can generally get much bigger speed-ups, but that requires time, specialized knowledge, and attention to a not-very-glamorous area of web development.
An example might be an image on a hidden tab, where the HTML doesn't give the browser enough information to know the image isn't required for the initial render. If the request for that image can be pushed to after the initial render, the end user gets a better experience.
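A quick sketch of that deferral (the data-src attribute and .deferred class are my own convention, not a standard):

    // After first render, swap in the real URLs for deferred images.
    window.addEventListener('load', () => {
      for (const img of document.querySelectorAll('img.deferred')) {
        img.src = img.dataset.src;  // the request starts only now, post-render
      }
    });

Modern browsers also support loading="lazy" on <img>, which gets you much of this for free.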
When the web was limited to 28K speeds for a significant share of the userbase, people paid a lot more attention to these kinds of things. Now that most end users have broadband, web developers are more concerned with other things, because measures as simple as leveraging a CDN and outputting appropriate cache-control headers are enough to provide adequate performance.
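Something like this, where Express is just my choice for illustration -- the headers are the point:

    const express = require('express');
    const app = express();

    // Hashed static assets can be cached essentially forever.
    app.use('/static', express.static('dist', {
      maxAge: '365d',
      immutable: true,
    }));

    // The HTML itself should be revalidated on every visit.
    app.get('/', (req, res) => {
      res.set('Cache-Control', 'no-cache');
      res.sendFile(__dirname + '/index.html');
    });

    app.listen(8080);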
It's not clear on my initial read whether this is a new rendering engine, a client-side script, or a server-side parser that re-shapes output, but the intention is obvious: download things in the right order to get the fastest page load times. I've done this kind of optimization before, and it can make an enormous difference -- especially for bandwidth-challenged end users.
Yeah, thinking about it:
386: we can put a 32 bit CPU on a die
486: we can improve that, add a serious cache, and FPU on die
Pentium: more than one instruction at a time
Pentium Pro: and out of order
Pentium 4: No one listened to those warning that “THE MAGMA PEOPLE ARE WAITING FOR OUR MISTAKES.” (Intel's high-level engineering management frequently screws up big; see also the multiple memory-architecture errors, two of which led to recalls of a million parts each.)
I aspire to someday be the kind of guy who gets the definite article prepended to his name.
Some comedian (Demetri Martin, maybe?) called this "the American version of royalty."