Server-side state in HTTP is a wonderful thing to abuse.
One adserving system I worked on had a very expensive correlation process where it joined auction stats with impression stats. Essentially a big Hadoop job, except run in a stream.
Another system I later worked on solved the problem by tying impression requests back to the same server that ran the auction. That server kept the auction data in memory until it either received the impression or until a set amount of time had passed.
Hearing how it worked made me feel a bit dirty at first, but then I realized how much money doing it that way saved them, and I was impressed by the simplicity of it.
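For the curious, here's a minimal sketch (Node.js) of what that in-memory join could look like — not the actual system. The `recordAuction`/`recordImpression` names and the timeout value are invented for illustration, and the sticky routing that sends the impression back to the same server is assumed to happen elsewhere:

```javascript
// Keep auction data in process memory, keyed by auction ID, and drop it
// when the matching impression arrives or a timeout fires.
const pending = new Map();
const AUCTION_TTL_MS = 60 * 1000; // made-up timeout

function recordAuction(auctionId, auctionData) {
  const timer = setTimeout(() => pending.delete(auctionId), AUCTION_TTL_MS);
  pending.set(auctionId, { auctionData, timer });
}

function recordImpression(auctionId, impressionData) {
  const entry = pending.get(auctionId);
  if (!entry) return null; // auction expired, or routed to a different server
  clearTimeout(entry.timer);
  pending.delete(auctionId);
  return { ...entry.auctionData, ...impressionData }; // the joined record
}
```

The appeal is that the join happens in process memory on the hot path, so there's no batch correlation job at all; the trade-off is that an impression arriving after the timeout simply finds nothing to join against.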
For applications with extremely fast server response times, it seems simpler to just send the HTML, keep JavaScript to a minimum, and use prefetching for likely next clicks so the next response is already cached.
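As one small illustration of that kind of prefetching (the URL and the "likely next click" heuristic here are just placeholders):

```javascript
// Hint the browser to fetch a likely next page into its cache at low
// priority, so a later navigation to it can be served from cache.
function prefetch(url) {
  const link = document.createElement('link');
  link.rel = 'prefetch';
  link.href = url;
  document.head.appendChild(link);
}

// Placeholder heuristic: prefetch a probable next click once the current page has loaded.
window.addEventListener('load', () => prefetch('/next-likely-page'));
```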
Here's me complaining again that the first few sentences/paragraphs/sections don't provide any useful information for me as a new user, and I skipped right over them. Who are those words for? They sound like a project proposal.
A brief statement of what the $THING is should always come first (in everything you say and write, formally and informally), so you can decide whether it's relevant to you. If it is, then you can read the motivation and historical significance.
I thought the second section did a much better job explaining what the pattern was and what the advantages of it are! The first section tried to set up the background for PRPL, but these justifications don't belong in the user-facing documentation (except as an appendix).
It was quite ironic to watch the Google I/O 2016 presentations on patterns for making web apps approximate native apps in terms of UI/UX, with the presenters throwing jabs at mobile development, while in other rooms the Google Android teams were presenting all the goodies of Android development you get just by using the platform.
If your app uses a huge amount of code, then yes. But that's really the fault of the app developer. Not the library.
What developers often do is pull in a ton of components from the web components catalog and not realize that every component comes with a cost. It's no different than working with libraries. Know the tradeoffs.
https://ebidel.github.io/polymer-experiments/polymersummit/f... is another example that shows how to utilize the "upgrade" feature [1] of the custom elements API. IOW, the browser can render markup without JS ever running. Unfortunately, this is not something that Polymer leverages very much. It requires more work and is less friendly to new developers.
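For anyone unfamiliar with upgrades, a rough, self-contained illustration of that flow (the element name and class are invented for the example):

```javascript
// The <user-card> markup below is parsed and rendered as plain, inert HTML
// first; when the definition loads later, existing instances are upgraded
// in place.
//
//   <user-card>Jane Doe</user-card>   <!-- already in the server-sent HTML -->

class UserCard extends HTMLElement {
  connectedCallback() {
    // Runs on upgrade for elements already in the document.
    this.textContent = `Hello, ${this.textContent}!`;
  }
}

// Defining the element (possibly from a lazily loaded script) upgrades
// every matching element the parser has already rendered.
customElements.define('user-card', UserCard);
```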
For most real-world projects, it’s frankly too early to realize the PRPL vision in its purest, most complete form – but it’s definitely not too early to adopt the mindset, or to start chasing the vision from various angles.
This reminded me of the "worse is better" essay, because it sounds very much like the perfectionist "MIT approach".
I personally prefer an emphasis on simplicity over perfection, especially when simplicity means you actually get working code. Is there any sample app around that obeys the PRPL pattern and is a good example to start from?
Sure. If you're a React/Webpack fan, take a look at https://github.com/GoogleChrome/preload-webpack-plugin/tree/.... Twitter.com also just shipped a PWA using the PRPL pattern to production if you take a look at their mobile web app on Android (or via DevTools Device Mode) in case you're looking for something more complex to dive into.
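If it helps, the plugin's setup looks roughly like the webpack config below (entry, output, and loaders elided; the `rel` and `include` options are as I recall them from the plugin's README, so treat this as a sketch rather than gospel):

```javascript
// webpack.config.js (sketch)
const HtmlWebpackPlugin = require('html-webpack-plugin');
const PreloadWebpackPlugin = require('preload-webpack-plugin');

module.exports = {
  // ...entry, output, and loaders omitted...
  plugins: [
    new HtmlWebpackPlugin(),
    // Injects <link rel="preload"> tags into the generated HTML for the
    // async route chunks, so bundles likely needed next start downloading early.
    new PreloadWebpackPlugin({
      rel: 'preload',
      include: 'asyncChunks'
    })
  ]
};
```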
What possible motivation is there for doing that? The only one that comes to mind is "I hate when people link to me from other sites," but that makes no sense. It's not hot-linking to a resource, it's just a page address.
(And apparently a lot of other people want to see that picture, as right now I'm just waiting for imgur to do its thing.)
I feel that this isn't so much a 'new pattern' as just an elaboration on 'pre-load stuff you know you'll need', which again isn't so much a pattern as just being common sense.
Hey there. Author of the article here. Flipkart implement a version of the PRPL pattern in production :) We've worked with them quite a bit in the past and I gave a talk with them on their architecture at Chrome Dev Summit last year in case you're interested in learning more about what they're doing.
I love this pattern. I've been studying web performance for a while in the context of working on my project Gatsby [0] — a React.js static site generator. The next version of Gatsby is explicitly patterned after PRPL.
Gatsby generates a static HTML version of each page at build time, which makes the initial load of a Gatsby site super fast. The browser then loads the minimum JavaScript necessary to make that page interactive. Then, in a service worker, it starts prefetching the JavaScript and data needed for other pages, so that when you click on a link it takes very little time to fetch the code/data and make the page transition.
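This isn't Gatsby's actual implementation, just a minimal sketch of the service-worker half of that idea; the cache name and the message shape are invented:

```javascript
// service-worker.js (sketch): the page posts a list of resources for linked
// pages, and the worker fetches them into a cache so later navigations are
// served locally.
self.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'PREFETCH') {
    event.waitUntil(
      caches.open('route-prefetch').then((cache) => cache.addAll(event.data.urls))
    );
  }
});

self.addEventListener('fetch', (event) => {
  // Serve from the prefetch cache first, fall back to the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```

On the page side, once the current page has loaded, you'd post something like navigator.serviceWorker.controller.postMessage({ type: 'PREFETCH', urls: [...] }) with the chunk and data URLs for the pages linked from the current one.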
I wrote up the performance plans for Gatsby in this issue [1].
One analogy I came up with for that, which I really like, is to just-in-time (JIT) or lean manufacturing.
Quoting myself:
"There's a close analogy to just-in-time manufacturing ideas. Companies found that the way to be the most responsive to customers is to actually avoid doing work ahead of time. When they did do work ahead of time this would paradoxically slow them down as the speculative work would get in the way of getting the work done that's actually necessary (resource contention).
For both manufacturing and web apps there's high inventory cost (unused code takes up memory) and a premium on responsiveness. The car customer wants their new car yesterday and the web app consumer wants their app running immediately. Any work you do ahead of time because "they might need it" gets in the way of the app being responsive to the user.
With both you want to wait until the user asks for something and then work overtime to get it to them as fast as possible."