Location, Privilege and Performant Websites (stephaniestimac.com)
15 points by OrwellianChild on Oct 31, 2019 | 5 comments



Two years ago I was working on optimizing the load time for a web page. My basis for the investigation was the Lighthouse tests that were then integrated into Google Chrome. To get data for each step of the process I tied this into our Selenium tests, which meant we could get page speed data (along with some other metrics) after every build.

During this process I came up with a thought that I haven't been able to shake: what if, for each page, we gathered the JavaScript code that is actually used and discarded everything else? In the end I came to the conclusion that, for us, the optimization that couldn't otherwise be reached was dead code in third-party JavaScript libraries. This does mean that what we define as usable by customers is strictly what is covered by Selenium, but for a company that has diligently implemented testing with, say, Selenium, this could work. Bear in mind that this probably wouldn't be an entirely automated process; it would require some small manual fixes here and there by a (very) competent JavaScript developer.
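A minimal sketch of the measurement side, assuming Puppeteer's Coverage API (which exposes roughly the same Chrome coverage data that Lighthouse's unused-JavaScript audit relies on); the URL is a placeholder:

    // Sketch: collect per-page JS coverage with Puppeteer, then report how much
    // of each script was actually executed during the page load.
    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();

      await page.coverage.startJSCoverage();
      await page.goto('https://example.com'); // placeholder URL
      const coverage = await page.coverage.stopJSCoverage();

      for (const entry of coverage) {
        // Each entry lists the byte ranges of the script that were executed.
        const usedBytes = entry.ranges.reduce((sum, r) => sum + (r.end - r.start), 0);
        const pct = ((usedBytes / entry.text.length) * 100).toFixed(1);
        console.log(`${entry.url}: ${pct}% of ${entry.text.length} bytes used`);
      }

      await browser.close();
    })();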

If anybody is interested in this or is aware of prior work, please let me know by commenting!

*) By using Google's metrics, this might also improve search engine ranking on Google, as they are probably using the same system to analyze pages while crawling.
*) I'm not saying that it's easy, but there are clearly optimizations left to be done.


If I'm not mistaken, this is actually the whole point of Webpack. Instead of serving 8,000 npm modules on each page load, it goes through and makes one mega-bundle that only includes the code that gets called.
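For reference, a minimal sketch of that setup; the entry path and filename are placeholders, and eliminating unused exports (tree shaking) additionally relies on ES module syntax and production mode:

    // webpack.config.js — minimal sketch; 'production' mode enables minification
    // and dead-code elimination of unused ES module exports.
    module.exports = {
      mode: 'production',
      entry: './src/index.js',          // placeholder entry point
      output: { filename: 'bundle.js' } // single bundle served on every page
    };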


That is exactly the purpose of webpack, but it requires that you rebuild your website with webpack. Some sites might even be decades old, and a rewrite would be a major undertaking.

Also, this method would work on the code within the modules themselves. Webpack works on a module basis, i.e. import statements. My approach would instead look at the code on a line or function basis and exclude unused code within the module itself. Every page is run through in different browsers, and what the client actually executes is measured, instead of relying on a higher-level dependency graph.
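A rough sketch of what that pruning could look like, building on the coverage entries from the Puppeteer example above; treating executed byte ranges as the keep-list is an assumption, and in practice the cut points would need manual review so scoping and hoisting aren't broken:

    // Sketch: keep only the byte ranges of a script that were actually executed,
    // replacing everything else with a placeholder comment for manual review.
    function pruneToUsedRanges(entry) {
      const { text, ranges } = entry; // one coverage entry from stopJSCoverage()
      let out = '';
      let cursor = 0;
      for (const { start, end } of ranges) {
        if (start > cursor) {
          out += `/* pruned ${start - cursor} bytes */\n`;
        }
        out += text.slice(start, end);
        cursor = end;
      }
      if (cursor < text.length) {
        out += `/* pruned ${text.length - cursor} bytes */\n`;
      }
      return out;
    }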

One thing that could happen with the webpack approach is that if, say, the front-page JS bundle and the login JS bundle are different, the bundles have to be downloaded separately and the cache is not really utilized.
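A sketch of one common mitigation, assuming webpack's splitChunks option; the entry paths and cache group name are placeholders:

    // webpack.config.js fragment — sketch: pull node_modules code into a shared
    // 'vendors' chunk so the front page and the login page reuse the same cached
    // file instead of each re-downloading overlapping code.
    module.exports = {
      mode: 'production',
      entry: {
        frontpage: './src/frontpage.js', // placeholder entries
        login: './src/login.js'
      },
      optimization: {
        splitChunks: {
          cacheGroups: {
            vendors: {
              test: /[\\/]node_modules[\\/]/,
              name: 'vendors',
              chunks: 'all'
            }
          }
        }
      }
    };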

I do understand that the approach of automagically slicing JavaScript files like this might be way trickier than it sounds, which is why I propose making it a semi-automated process, at least in the beginning.


Hah, even this very page has 76% unused code. Alas, that is only 9 KB out of 11.8 KB!


Yes, but you have to take into account that some of that might be used by another browser / JS engine.
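That caveat could be handled by only discarding ranges that no tested browser executed. A sketch, assuming each browser run yields the same { start, end } range shape as the Puppeteer entries above (collecting coverage from non-Chromium engines would need other tooling, which is an assumption here):

    // Sketch: union the executed byte ranges from several browser runs so code
    // only counts as dead if *no* tested engine executed it.
    function unionRanges(runs) {
      // runs: array of range lists, each [{ start, end }, ...] from one browser
      const all = runs.flat().sort((a, b) => a.start - b.start);
      const merged = [];
      for (const r of all) {
        const last = merged[merged.length - 1];
        if (last && r.start <= last.end) {
          last.end = Math.max(last.end, r.end); // overlapping or adjacent: extend
        } else {
          merged.push({ start: r.start, end: r.end });
        }
      }
      return merged; // feed this to the pruning step instead of a single run's ranges
    }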



