Things I learnt making a fast website (hackernoon.com)
385 points by inian on Dec 23, 2016 | 108 comments



Why?

I was recently speaking at a conference and got into an argument with another speaker (not publicly) because he had a technique to improve API response time. Thing is, said technique would delay the product's time to market and make the code needlessly complicated to maintain. I pointed out that it is probably better for the bottom line of most companies to do it the way that is slightly slower but easier to maintain and faster to get to market.

At which point he accused me of being another product shill (not a real software engineer) and shut down the debate.

Thing is, I have a computer science degree and take software engineering very seriously. I just also understand that companies have a bottom line and sometimes good is good enough.

So I ask, this "world's fastest web site"... how much did it cost to develop in time/money? How long is the return on investment? And is it going to be more successful because it is faster? Is it maintainable?

I'm guessing the answers are: too much, never, no and no.

With that said, I fully appreciate making things fast as a challenge. If his goal was to challenge himself, like some sort of game where beating the benchmarks is the only way to win, then kudos. I love doing stuff just to show I can, as much as the next person.


In this case you're probably right but in general, latency can make a big impact on profit and revenue.

For example Marissa Mayer claimed (though this was in 2006) that a 500ms delay directly caused a 20% drop in traffic and revenue: https://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20...

Optimizely found that for The Telegraph, adding a 4s delay led to a 10% drop in pageviews: https://blog.optimizely.com/2016/07/13/how-does-page-load-ti...

100ms cost Amazon 1% of sales: http://blog.gigaspaces.com/amazon-found-every-100ms-of-laten...

Akamai has a number of claims: https://www.akamai.com/us/en/about/news/press/2009-press/aka...

Of course the impact of latency is going to depend on your particular circumstances but there are certainly circumstances where it can make a big impact.


Google: 2006.

Amazon: 2006.

Akamai: 2009.

At least the Optimizely result is from this year.

I really want more evidence of the importance of performance in web systems (to justify my hobby, if for no other reason), but things from ten years ago just don’t cut it.

Can we please stop citing articles from ten years ago and cite some studies that aren’t more than two years old?

Do people have more such studies?

(Also I’d be interested in more geographically diverse results. Figures like “three seconds to abandonment” are absurd when you’re in Australia or India as I am. Most e-commerce sites—especially US/international ones—haven’t even started drawing at that point, even with the best Internet connectivity and computer performance available, simply because of latency if the site is hosted in the US only. I visited the US a couple of years ago and was amazed at just how fast the Internet was, simply due to reduced latency.)


> I really want more evidence of the importance of performance in web systems (to justify my hobby, if for no other reason), but things from ten years ago just don’t cut it.

> Can we please stop citing articles from ten years ago and cite some studies that aren’t more than two years old?

Could you explain why? New data is good, sure, but why should we disregard old data? I think instead of asking people to simply discard old data, you should point out the issues you believe the data has so that they can be addressed (with newer data if necessary).

Anyhow, from https://riteproject.files.wordpress.com/2014/10/fact-sheet.p...:

- "In 2008 the Aberdeen Group found that a 1-second delay in page load time costs 11% fewer page views, a 16% decrease in customer satisfaction, and a 7% decrease in searches that result in new customers."

- In 2009 Microsoft Bing found that introducing 500ms extra delay translated to 1.2% less advertising revenue.

I also found a whitepaper from 2010 done by Gomez.com that shows how speed impacts various metrics: https://github.com/JasonPaulGardner/po4ux/blob/master/InfoPa...

There's also an article from 2013 that's only loosely related but shows that reducing TTFB latency can increase search result rankings: https://moz.com/blog/how-website-speed-actually-impacts-sear...

> Also I’d be interested in more geographically diverse results.

I'd love these too, let me know if you find them.


The situation has changed. The nature of people’s consumption of the Web is different now, especially with the take-up of mobile browsing in the last couple of years. (The change in the nature of consumption on that count in particular is why I would like to see results from the last year, especially if they’re segmented by device.)

People depend on the web more than they did, so they might be willing to wait for things longer—or even just be resigned to the web being slow. Or perhaps the increased number of providers means that they won’t be willing to wait as long. I don’t know.

It does seem to me that the precious few studies that there are from recent times haven’t shown as severe a drop-off as ones from ten years ago did. But then the ranges of the results are so far out of my experience in non-US countries that I don’t feel I can judge it.

What I do know is that I have had people complain of the age of the data I cited within the last couple of years, when citing the traditional examples of Amazon and Google. Their age and the indubitable ecosystem changes since their time make me a little leery of citing them now, because after due thought, I agree with the objections.

As I say, I want performance to be a commercially valuable factor, because it would justify one of my favourite two topics in IT. I just want us to be using solid, dependable studies of the matter so that we can prove our point without reasonable doubt, and justify expenditure on improving performance, rather than aged studies which are open to quibble.


> People depend on the web more than they did, so they might be willing to wait for things longer—or even just be resigned to the web being slow. Or perhaps the increased number of providers means that they won’t be willing to wait as long. I don’t know.

It is hard for me to disregard those big names and their claims, especially since I have no direct experience working for any of those companies to say otherwise.

On the other hand, in my purely anecdotal world of friends, family, and colleagues, I have never seen/heard of such behaviors. I doubt any of them even realize what 100ms is, let alone abandon their purchase on Amazon because of that delay. Most people can't even react in 100ms to brake their cars.

I really have to wonder if we have causation vs correlation going on.


I think citing data from before the widespread uptake of mobile is non-representative of the modern use of the web.


But it's probably very reasonable to believe that people's expectations of fast response times have only risen since 10 years ago.

If I had to guess whether the same studies, run today, would show a larger or smaller drop in page views with slower response times, I'd guess that today you'd see even fewer page views than 10 years ago.


I don’t believe it’s reasonable to assume that, having presented an alternative view that I find plausible in other comments in this thread. I hope that people’s expectations have increased, but I don’t believe it’s a reasonable assumption.

We need data.


There is lots of recent data out there. The general trend is that those findings from the mid 2000s hold even more true today. People are much more sensitive to latency, while the problem has gotten harder due to the last mile flowing through congested air. https://www.doubleclickbygoogle.com/articles/mobile-speed-ma...


The entire premise of Google AMP and Facebook's Instant Articles is predicated on performance. Speed matters.


Why would you expect that visitors became more accepting of slower websites?

If anything I would imagine it to be the opposite, with connections generally being faster these days. Especially since Google's old result was about the observed effects of a 100ms slowdown, so not about generally slow websites.


I’m not trying to say it will go one way or another—just that the variables in the equation have all changed (expectations, the nature of access to the Internet, ubiquity, &c.), so results from several years ago are obsolete (I’d even call results from over a year ago obsolescent) and we need new figures for credibility.

People don’t necessarily become more demanding; maybe in the earlier days they said, “I don’t need this, it’s taking too long, I’ll just give up,” whereas now they are resigned to the Internet being slow.

Also of interest would be making a site massively faster (e.g. by an apposite service worker implementation), but A/B test on the rollout of it. Over several months, ideally.
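
For concreteness, a minimal cache-first service worker of the kind hinted at might look like the sketch below (TypeScript; the cache name and asset paths are made up, and this is not any particular site's implementation). Registering it for only a fraction of users is what would make the A/B test on the rollout possible.

  // sw.ts: minimal cache-first service worker sketch; CACHE and ASSETS are hypothetical.
  const CACHE = "static-v1";
  const ASSETS = ["/", "/styles.css", "/app.js"];

  self.addEventListener("install", (event: any) => {
    // Pre-cache the shell so repeat visits can render without touching the network.
    event.waitUntil(caches.open(CACHE).then((c) => c.addAll(ASSETS)));
  });

  self.addEventListener("fetch", (event: any) => {
    // Serve from cache when possible, fall back to the network otherwise.
    event.respondWith(
      caches.match(event.request).then((hit) => hit ?? fetch(event.request))
    );
  });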


My personal mental model is this:

People have a threshold [1] on how long they stay on your site, say 5 minutes. If your site is faster, they will see more pages. More pages increases the likelihood that they find something to buy/comment/...

[1] would love this tested.


I tested the impact of page speed on ads about 3 years ago. The CTR impact was pretty dramatic - I can't recall the exact numbers but they were in the order of 50% CTR drop with 500ms delay.

That's obviously specific to ads, but there's clearly still some impact from page speed there.


You can fix that when the product is released. It's called optimization; you don't need to do it upfront.

Check if people like your software and you receive enough €€€, i.e. if there's market fit. Then optimize and make a blog post about it.


Ya, I have worked with the "optimize at the end" guys before. Really, you are gonna refactor a 100k Java codebase to be fast when it's done? Want to know how much easier it is to plan for speed from day 1?


As always, going too far either way is a bad idea. I've worked with "optimise everything ahead of time" people before and it can be an astonishing waste of time.

You shouldn't completely ignore performance all the way through and assume you can sprinkle it on top later, but you shouldn't spend months squeezing the last few ms out of a feature that the client will change / remove when they try it anyway.

> Really, you are gonna refactor a 100k Java codebase to be fast when it's done?

Maybe? Depends where the problems are. If the problems are huge architectural ones, no, that sounds terrible. If the problem can be solved later by improving an inner loop somewhere, then sure.


Speed is user-experience.

People used Instagram at first because it was fast.

And they were quickly bought out for a billion.


Speed is part of the user experience, but again it's more complex than that.

Nobody would have used Snapchat if it didn't take photos, even if the app loaded in 1ms. Nobody would use it if it had loads more features and took 2 hours.

Speed is a part of it, and must be traded off with all the other parts of the user experience.


I never heard someone say "I use Instagram because it's fast".


But how many people don't do X because it's slow? I don't browse on my phone because ads make it too slow.


It's helpful to separate performance from the ability to perform. It's generally not the greatest idea to prematurely optimize, but you should absolutely be making decisions that give you the flexibility to optimize in the future. For instance, adding a caching layer before it's needed may be a bad idea, but organizing your data in such a way that you could easily add a caching layer may be a good idea. Making your website in C because it's fast may be a bad idea, but choosing Python because you could easily add C bindings to the slow parts may be a good idea.
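
As a sketch of that "leave room for a cache" idea (TypeScript; ProductStore, queryDb and the other names are invented for illustration): callers depend on an interface, so a caching wrapper can be slotted in later without touching them.

  type Product = { id: string; name: string };
  declare function queryDb(id: string): Promise<Product>; // stand-in for the real data access

  interface ProductStore {
    get(id: string): Promise<Product>;
  }

  class DbProductStore implements ProductStore {
    get(id: string): Promise<Product> {
      return queryDb(id); // always hits the database
    }
  }

  // Added later, only if profiling says it is worth it; callers don't change.
  class CachedProductStore implements ProductStore {
    private cache = new Map<string, Product>();
    constructor(private inner: ProductStore) {}
    async get(id: string): Promise<Product> {
      const hit = this.cache.get(id);
      if (hit) return hit;
      const value = await this.inner.get(id);
      this.cache.set(id, value);
      return value;
    }
  }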


Your particular example made me smile.

I'm currently leading a team whose primary product is the front end service for the core SaaS business that does billions of monthly impressions (hundreds of millions of uniques). This product is over 10 years old, written in Spring, and has around 400,000 lines of Java.

A couple years ago, we underwent a massive overhaul of the system and refactored the application to use our realtime streaming platform instead of MySQL+Solr.

It took some time to get it done, mainly because at the time there were about 8 years of features baked into a monolithic web service.

Ultimately, we were successful in the endeavor. It took about 2.5 years and our response times are dramatically faster (and less expensive in terms of infrastructure cost).

So yeah, you can do it. It's expensive and takes time if you wait too long. And you need full support from the business. And some companies don't survive it. YMMV.


Yeah but if it takes 6 months longer, and is 30% harder to refactor, it doesn't matter. You're not selling anything when it's not running.


But sometimes that fix can only be done by switching to another tech stack or framework.

That is, a complete rewrite.

When that's not possible, we have to endure the constantly crashing and slow-as-molasses websites like the ones used in universities by students to enroll in courses.


This is just moving the goalposts. No one is saying you have to do these optimisations upfront, just that they are worth doing.


I wonder if Amazon has done other/more research since and softened their stance since that gigaspaces article...

... because I'm noticing, esp. over the last 1+ years, that Amazon's page loads are getting slower and slower (Win7, FF) with all of the "crud" they are loading with each/every page. I remember when it was all a lot snappier (not like an HN page load "snappy", but pretty good!)


The 'fast' is not increasing revenue by itself, I think. Faster pages => more products seen by the user => higher likelihood they buy something.

Amazon has shifted since then to pushing more products into your face, which for them seems to work better than the 'faster pages' approach.


Yep they sure put a lot of crap on their clunky pages. They could do much better but they don't seem to care.


My observation is that Amazon hardly cares about web page performance.


This is reasonable. Aside from must-reads, I don't read articles or news stories, funny posts, memes, gifs, etc. if they take way too long to load (3000ms+), or if they're TLDR (unless it's a must-read). Most of the time, stuff ends up being TLDR, and it's a quick skim to make sure I'm not missing a must-read. On websites like Washington Post, Bloomberg, etc. there's just too much shit going on. Too many ads. Too many popups. Too many "please give me your email!" requests. Shit's everywhere on the internet. I'm not about to go to Rocket Fizz for a Snickers Bar.


> 100ms cost Amazon 1% of sales

I wonder why we don't have free high-speed internet yet, financed by big companies.


Google tries with fiber and balloons, but it's a very hard market to enter. Telcos are terribly regulated and investment costs are high.


Why?

Because mobile.

An acceptable 2s load time on wifi might turn into 2 minutes on EDGE.

You might say, 95% of our customers have 3G, to which I'll reply, 100% of your customers sometimes don't have 3G.

And when your page takes a minute to load, it doesn't matter what your time to market was, because no one will look at it.

When your news website is sluggish every time I'm on the train, I'll stop reading it, and do something else, like browse hacker news, which is always fast.


Exactly. I'm at my parents' house, where there is only EDGE, and only if the wind goes in the right direction. :)

The supposedly "fastest web page" took 13 seconds to download and render, and that's really good compared to the rest. HN is slightly faster, but probably due to cached assets.


I wouldn't know what to do to make a page load a minute.

Overengineering is the biggest obstacle to finishing a product fast.

Fix problems as they come and try to foresee them, as long as it doesn't take twice as long to finish.


Have you never tried to browse the internet on a crowded hotel wifi, or on a train, or in the mountains? When you have network latencies measured in seconds and a bandwidth of a few KB/s, one-minute page load times are nothing special.

Most people just give up in those cases and think they have no signal. But it's not true: That bandwidth would be absolutely sufficient to transfer a few tweets or a newspaper article. It's just the bloated ad-tech infested websites used to serve the content that are breaking down.


Let's imagine for a moment that an engineer is designing a bridge, or an architect a new building. The companies that pay them are in a hurry and want to cut costs as much as any other.

Do you think it would be an ethical thing to build a less secure bridge or building just for the sake of getting them out quicker and/or cheaper?

So this is how I see it with software engineering. Of all the engineering branches, we take our job the least seriously and are not good at defending our decisions or taking the required time to build our software the way it should be built. We just assume that our customers know better and have better reasons to get to market, and that there is nothing we, as software engineers, can do about it.

So in a way, that guy you talked to was kind of right, because it is your responsibility to defend the need of fast, efficient and maintainable software. It is the customer's responsibility to take care of the product and plan accordingly.


> Do you think it would be an ethical thing to build a less secure bridge or building just for the sake of getting them out quicker and/or cheaper?

Yes, and we do this constantly. Nothing is ever designed to be completely safe because we'd never get anything done. Bridges aren't all designed to withstand magnitude 11 earthquakes. We arguably go too far because the costs of the final safety features could save far more lives if used for something like vaccinations, but that's not key here.

As always, the trick is where you draw the line.


When a bridge collapses, people could die and it would take weeks or months to rebuild.

When your app fails for some reason, you can sometimes fix it in under 3 minutes.

Azure, Amazon, Google all have status pages. A bridge doesn't.


Three minutes of Amazon being down is measured in the millions of dollars lost. People's careers (or businesses) are destroyed over such numbers.


> And is it going to be more successful because it is faster?

There's a lot of research out there about the link between page performance and user retention rate. And this makes sense: If newegg is taking forever to browse, I'll switch to amazon and newegg loses out on a decent chunk of change.

https://research.googleblog.com/2009/06/speed-matters.html

So, up to a point, yes, yes they are going to be more successful because it is faster. 200ms on my broadband-connected desktop isn't that much, but Google is able to measure its impact. And that might be a second or two on my cellular-connected phone.

> Is it maintainable?

A lot of optimizations I've seen involve simplifying stuff. Fast and maintainable don't have to be at odds. I wouldn't care to guess for the whole, but, for example, do you really think using system fonts instead of embedding your own is more complicated, harder to maintain, and more work? I doubt it, and that's one of the optimizations suggested.

Now, yes, with optimizing, there is a break even point where it's no longer worth it to push further, but it's also not necessarily obvious where that is if you're just taking it a task at a time. Keep in mind: some of this is research for effective and ineffective techniques for optimizing other websites, and evaluating which ones are maintainable (or not) for future projects. To know what to bother with and what not to bother with when implementing the rest of the codebase. If you're just worried about the next JIRA milestone, you'll be sacrificing long term gains for short term metrics.

Is it worth micro-optimizing everything before launch? Probably not.

Is it worth testing out what techniques and technologies perform well enough before launch? I've been part of the mad scramble to do some major overhauls to optimize things that were unshippably slow before launch. Building it out right the first time would've saved us a lot of effort. I'd say "probably yes."


You would be right if it took a lot of resources to make things fast. But most of the time it doesn't.

I made a lot of sites fast(er) in maybe 4 hours' time. Yes, slow frameworks are slow, so you cannot change that in 4 hours. But most frameworks aren't that slow. My work involved: rewriting a slow query used on every page, changing PNGs to JPEGs and reducing the resolution, moving an event that fired on every DOM redraw, and so on.

And every single time I was just fixing someone's lazy programming.

Of course I agree that there should be a limit to optimizations, but most of the time simple fixes will shave off seconds.


> And every single time I was just fixing someone's lazy programming.

Often, the kinds of mistakes you mention are not due to laziness, but instead to ignorance. Many web developers come from a graphic design or old school "webmaster" background and some never really mastered programming as a skill. They're comfortable writing and/or copy-pasting small scripts using jQuery or some favorite library for minor enhancements to a page, but struggle when it comes to building a cohesive, performant, well-designed application for the front-end.

I myself was a member of this group until about 2007-2008, when I made a concerted effort to upgrade my programming skills. I did intensive self-study of algorithms, data structures, low-level programming, functional programming, SICP (most of it) and K&R C (all of it), etc.

More recently, Coursera and EdX have been a great resource for me to continue advancing my software development skills.


An engineer's job is to solve a problem within real world restrictions. Cost, implementation time, maintainability are all parts of the equation an engineer has to solve.

Your approach was correct. Ideally you would take into account how response time affects a site's metrics and try to balance between all constraints.


Why?

Because it's all about making compromises to manage an app and achieve its goals. You are right that time to market matters and launching the product sooner should be the number one priority. But of all the factors that make your product worthwhile, performance is a pretty darn good one.

There are several websites on the internet today that have the potential to become great, if only they paid some heed to the performance factor. Take the Upwork freelancing site for example: its performance was really solid when it was oDesk, its predecessor. It's basically because of the earlier oDesk goodwill that it still has a sizable userbase today. Sometime in 2013, along the lines of your thinking, some management guru must have cut corners in the development of the repolished Upwork site, and the result was an absolute performance nightmare! As a freelancer, Upwork is a third or fourth priority for me now, whereas the former oDesk was actually number one.

Another example of nightmarish performance is Quora. It has a fantastic readership that supplies solid content to the site, and it's solid proof that really good content is highly valued in the online world: despite its lagging performance, people are willing to endure a site with good content. But that doesn't mean it's ideal. Quora still has a lot of potential; it could match or maybe even surpass the levels of Reddit and HN, or even Facebook and LinkedIn, if they paid heed to the performance factor, but I don't see that happening soon!


I consider developers senior if they can make trade-offs between technology and business.


Exactly. It's not like performance is or is not important. It's about the trade-off given the detail and the context.


I think performance is one of the reasons, if not the main reason why WhatsApp is the leading mobile chat application.

They can handle millions of messages per second, more than all their competitors.

The 'build slower applications much faster' mantra has some value, except when everyone can build that application in a month and the market is full of clones.


For anybody not versed in web design, here's a link to the mildly ungooglable Lighthouse tool: https://developers.google.com/web/tools/lighthouse/

(If you search for "lighthouse benchmark", "lighthouse speed test", or "lighthouse app" you get nothing. "lighthouse tool" and "lighthouse web" works.)


I think eventually we should start distinguishing between a website whose main purpose it is to present some minimally interactive hypertext to the user and an application that uses HTML as a GUI description language.

The former is trivial to make fast: just write it like it's 1999. You don't really need a database or Javascript, just make sure that your images are not huge and don't load too much stuff from other sites.

The latter requires a lot more work.


Does it though? Does the latter actually take a "lot" more work to optimize? That sounds too much like a convenient excuse.


I thought we had this already; website vs webapp


IMO the former is a superset of the latter.

In other words, all webapps are websites, but not all websites are webapps.

I think having another term for the things which are only "sites", like blogs, news articles, etc., would be useful.


5. Don’t server-render HTML

This is exactly what I do and why the average render time for my e-commerce site is less than 200ms. I use JS for minor DOM manipulations, everything else is static.

The secret ingredient is the Varnish cache server, which beats the pants off everything else and can fill a 10G pipe while still having plenty of capacity left on a single CPU core.


He lost me somewhere during #3. Simply too much storytelling around the facts for my liking. I don't want to know that somebody recently got a tattoo and stuff like that when I click on a link about making fast websites.


Counterpoint - I really enjoyed the style. It was entertainingly over-the-top and chatty. I'd probably have given up on a very dry "just the facts" article quicker, and I'm fairly sure I'll remember this one better because of the stylistic touches.


As an old binary file hacker I can't help but think that the performance would also be improved by using a different format than JSON, one that doesn't require 75,000 lines to be parsed on load.

One drawback of formats like JSON and XML is that they require reaching the ending token ("}" or "]" usually for JSON, and </root> for XML) before the file can be considered parsed. A properly designed binary format can be used like a database of sorts, with very efficient reads.
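
A rough illustration of that random-access property (TypeScript; the 8-byte record layout of a 4-byte id followed by a 4-byte price is invented): one field of one record can be read without scanning the rest of the payload, whereas JSON.parse has to reach the closing brace first.

  const RECORD_SIZE = 8; // hypothetical fixed-size record: 4-byte id + 4-byte price

  function readPrice(buf: ArrayBuffer, index: number): number {
    const view = new DataView(buf);
    const offset = index * RECORD_SIZE;
    // Skip the 4-byte id and read the price directly (little-endian assumed).
    return view.getUint32(offset + 4, true);
  }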


Well instead of JSON you can use Google's Protocol buffers: https://developers.google.com/protocol-buffers/

Haven't used it myself, but worked with people on a project which did, and boy does it fly compared to JSON!


The last time I looked into it, pure JS implementations of protobuf (that run in the browser) were lacking and destroyed much of the speed improvements.


The point is that the JSON parser runs in the browser as native code, so it's rather difficult to match its performance with some other parser written in JS, even if that one has an easier job to do.


It may be hard to match the performance of the parsing per se.

But the typical data flow with JSON goes like this:

1) Get a string. 2) JSON.parse it into an object graph. 3) Use that object graph to build the data structure you really care about (e.g. a DOM).

If you parse yourself you may be able to cut out the middleman object graph. That might be worthwhile; needs measuring.
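
To make those three steps concrete (TypeScript; buildDom is a hypothetical stand-in for whatever structure you actually build):

  declare function buildDom(graph: unknown): void; // hypothetical consumer of the data

  async function load(): Promise<void> {
    const text = await (await fetch("/data.json")).text(); // 1) get a string
    const graph = JSON.parse(text);                        // 2) intermediate object graph
    buildDom(graph);                                       // 3) the structure you care about
  }

  // A hand-rolled parser could emit the final structure directly while scanning
  // the string, skipping step 2; whether that actually wins needs measuring.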


There are less verbose, binary formats for storing XML information models -- like Fast Infoset [1], and formats listed on Wikipedia under 'Binary XML' [2]. HTML5 can optionally be written in a mostly-XML-conforming syntax [3].

[1] https://en.wikipedia.org/wiki/Fast_Infoset [2] https://en.wikipedia.org/wiki/Binary_XML [3] https://html.spec.whatwg.org/#xhtml


We really need a binary version of HTML, CSS, and JSON. (WebAssembly would be the binary version of JS)


There's MessagePack[0], which serves basically the same purpose a binary version of JSON would. It came up recently in an engineering article Uber posted.

[0]: http://msgpack.org/


According to their website, there is only one JavaScript implementation, and that one seems to be for the server side (Node.js), not the client side (browser).

And even if we had that, we would need some consensus (read: coding rules) on how to exactly encode HTML/CSS/etc. in MessagePack.

Looks doable to me, but there is a long way to go.


I have already fathered a child with Firebase and quite enjoyed myself

Our <scripts>, much like our feet, should be at the bottom of our <body>. Oh I just realised that’s why they call it a footer! OMG and header too! Oh holy crap, body as well! I’ve just blown my own mind.

Sometimes I think I’m too easily distracted. I was once told that my mind doesn’t just wander, it runs around screaming in its underpants.

Pure gold. :D


Pretty sure I have the fastest site in the world: https://www.futureclaw.com

Speed test: https://tools.pingdom.com/#!/FMRYc/http://www.futureclaw.com

Average page generation time from the database is 2-3ms & cached pages generate in 300 microseconds. Also, this includes GZip.

One day I'll write a blog post on it, but it's a Django app with a custom view controller. I use prepared statements and chunked encoding & materialized database views & all sorts of techniques I forgot that I need to write down. I also gzip data before storing in Redis, so now my Redis cache is automatically 10x bigger.

I just checked and it's faster than Hacker News: http://tools.pingdom.com/fpt/bHrP9i/http://news.ycombinator....
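
The gzip-before-Redis trick is easy to reproduce. Here is a sketch in Node/TypeScript using ioredis (the commenter's stack is Django, so the library choice and key names here are assumptions, not their code):

  import Redis from "ioredis";
  import { gzipSync, gunzipSync } from "zlib";

  const redis = new Redis(); // assumes a local Redis instance

  // Store compressed bytes so the same cache holds roughly 10x more pages.
  async function cachePage(key: string, html: string): Promise<void> {
    await redis.set(key, gzipSync(html));
  }

  async function readPage(key: string): Promise<string | null> {
    const buf = await redis.getBuffer(key); // raw bytes back out
    return buf ? gunzipSync(buf).toString("utf8") : null;
  }

If the client accepts gzip, the cached bytes can even be sent as-is with Content-Encoding: gzip, skipping the decompression step entirely.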


Impressive! You can do some more front-end stuff to improve the speed too, like adding a good CDN and optimising your images. For example, [1] is ~790KB, which is great for viewing on a 5K iMac, but is probably unnecessary when someone is viewing your website from a low-powered phone on a slow 3G connection. There are various tools like ImageOptim or Dexecure (which I work on) to solve this issue. See [1] and [2] for a comparison of how this single image can be compressed further.

[1] https://farm9.staticflickr.com/8380/29778359335_f59e0b4fcb_k... [2] https://bigdemosyncopt.dexecure.net/https://farm9.staticflic...

Note that [2] varies in size and format depending on your user agent, and it becomes ~70KB when viewed from a Chrome mobile device.


I do use PICTURE elements of various sizes, so it shouldn't be loading the 790kb version on a phone. Most likely either the 107kb version or the 220kb version.


Yep, the picture tag is really useful. I had tested on WPT and found a big image [1], so I assumed you hadn't taken care of this yet.

[1] ~500KB image on 3G mobile device and load time of ~15.9s (https://www.webpagetest.org/result/161224_M7_29Q/)


I don't mean to detract from the great back-end time that you've managed to achieve, but I think it's important to point out that web performance is about _so_ much more than the back-end.

Using WebPagetest, you can test your page with real devices to get an idea of how it will load for users who are not on overpowered desktop devices hooked up to the Internet over a fibre connection. If you look at the "Filmstrip view" for futureclaw[0], you don't even see any text until ~4.7 seconds in, and the page doesn't look finished until well after 13 seconds because that image takes so long to come in. Hacker News, on the other hand, has content visible at 2.3 seconds[1].

So yes, back-end times are important. But it's a small slice of the overall load time. Just keep in mind that the proportion of people on mobile connections is actually going _up_ in the developed world, and the average speed of mobile connections is not improving. It's really important that we (web developers) recognise this and build sites that can be used by everyone, no matter the speed of their device or connection.

[0] https://www.webpagetest.org/result/161224_TS_264/ [1] https://www.webpagetest.org/result/161224_H9_26Z/


One thing to keep in mind is the audience itself. You tuned the benchmark for one specific audience.

You'll find that in the fashion industry, not a lot of people are browsing expensive dresses on 3G connections on older devices in third-world countries.

Also, for connection speeds, that's going to be affected by the fact that Hacker News doesn't serve pictures, which you sorta have to do for fashion.


True that you need to keep your audience in mind, but it doesn't lend well to your claim of being the fastest site in the world if it is fast for only a subset of people in the world.


> Speed test: https://tools.pingdom.com/#!/FMRYc/http://www.futureclaw.com
>
> 580 ms

The backend number is nice, really, but the load time quoted by the tool isn't. My Jenkins loads almost as fast (and, according to the tool, faster than HN), and isn't even hosted in a DC. Some pages hosted on the same machine load in almost a quarter of that. Most of that is internet latency; the same pages requested from the same network take just a couple ms. Even Jenkins only takes 65 ms for /.


It's only good if you have good network conditions. For me, sitting in a hotel in Europe, it takes 19 seconds to load. 49 requests per page load is just awfully much. Also, the fastest website by far is blog.fefe.de, at 2 requests per page load. Also written in C.


That was really fast, congratulations!


Blog about it!


Will do in Spring, as it becomes more feature complete. (search, articles, user blogging, etc..) All compounded by the fact that I also do the content for the site.

Anyone want to turn this into a platform product?


This is a well written article. I only read it because he initially presented a link to his website, which ended up loading in 250ms. This begged the question "why did it load so fast?" which in turn begged the entire article. That presentation was genius, at some level. You can't not read the article.

Reading this article is like witnessing a race between a bunch of lions, only to see the winner unzip what appears to be a lion costume covering a cheetah. The website appears as if it has the overhead of a DB, other networks, etc., when it actually doesn't.

Instead the article reads as "a few tricks to make really fast websites that don't do anything". Of course, that's not entirely true, since the optimizations mentioned do apply to more involved websites, but with much lower efficacy. Anyways, funny article.


Hilarious :-) I don't care whether the lessons are useful or not. I just want to be able to write like this.


Took 4 seconds to load on my phone. About 2 seconds slower than hacker news.


If you want to see a quick site: blog.fefe.de


> I have a big fat i7 CPU that I do most of my work on, and a brand new Pixel XL: the fastest phone in the world. What do you think the phone performance will be? It’s only 10%. If I slow my i7 down by ten times, it is the same speed as the $1,400 phone in front of me.

That's the most surprising thing I learned from the article. But I'm still a little skeptical about this. One well-known CPU comparison site[0] gives the following scores:

- The fastest desktop system they rated[1], which happens to have an i7 CPU just as the author is using, is rated 9206.

- Google Pixel XL smartphone[2] gets a score of 8153.

In other words, the Pixel comes out as having 89% of the performance of the fastest desktop system.

I looked at some other metrics for comparing systems, and I'm not seeing how smartphones are only 10% as fast as desktops -- neither in average cases nor in extreme cases (fastest desktop vs fastest smartphone).

[0] PassMark Software Pty Ltd. Their PassMark rating is a composite of tests for CPU, disk, 2D & 3D graphics, and memory.

[1] http://www.passmark.com/baselines/top.html

[2] http://www.androidbenchmark.net/phone.php?phone=Google+Pixel...


Take a look at this talk on why mobile devices are so much slower than desktop devices ( they are indeed 10x slower in the traces I have looked at too) - https://youtu.be/4bZvq3nodf4

Also look at the speedometer benchmark (http://browserbench.org/Speedometer/) for a more realistic comparison of how slow your mobile device is compared to your desktop when running real world apps. Here are some numbers from the benchmark from some devices - http://benediktmeurer.de/2016/12/16/the-truth-about-traditio...


> the Pixel comes out as having 89% of the performance of the fastest desktop system.

Do you really think we're expending 20-30 times the power for a 12% compute speed-up?

Just to clarify, the two numbers you quote are different benchmarks. You can't compare them, let alone say "oh, they're only different by 10 percent".


I agree that the 20-30x greater power consumption is hard to reconcile with the benchmarks I quoted. So, yes, something doesn't make sense; the benchmark is not taking into account something or I don't know what.

> the two numbers you quote are different benchmarks

But I don't agree with that explanation. Both benchmarks are "PassMarks" (a proprietary measure devised by that company). I don't see them saying that their PassMarks for desktops are different from their PassMarks for smartphones.


That metric is likely multi-core performance. By contrast, since JS is (usually) single-threaded, the comparatively weak single cores in a mobile processor don't do as well.


The bottom line is the power consumption. A big fat i7 (desktop) can draw 91W, while a fanless, battery operated smartphone can pull from 5 to 10W.

Power optimizations come at a performance cost.


Android phones still compare well on CPU metrics but poorly on browser/JS metrics.

If a beefy desktop is 100, I'd guess the real-world (TodoMVC) benchmarks are 100/90/20 for desktop/iOS/Android, for an iPhone 7 and a recent Android.

So CPU power (sadly) doesn't translate directly to browser performance as well on android as it does on iOS.


Slightly OT: I was once like the OP. I shaved off every bit of my pages, almost no libs, CDN delivery, benchmarked and stress tested it with ab (Apache Bench) after every commit (!), this thing was fast like a beast but tricky to maintain. I did it for SEO reasons next to tons of other SEO techniques.

And you know what? My competitors still all ranked higher despite their clumsy, megabyte-heavy pages loading for more than ten seconds. I know that size and speed are not the only SEO signals, but they seem to be not as important as believed. Not sure though how AMP's importance will play out in the future.

I still like his post; some of his tips are good (e.g., start mobile first) and some depend heavily on the use case (e.g., don't render HTML on the server). As others here in the thread say, does the ROI reflect the extra mile? If yes, then go for it.


Looks like website performance optimization is becoming a high-valued and high-paid skill for the next 10 years at least.

10 years ago (and now, to a degree) it was database or C++ performance optimization. You could specialize in these to earn in the top 10-15 percent of the crowd.


That's actually a good thought. In most companies you could probably immediately make the site 10% faster by eliminating competing analytics. I've personally seen sites with 4 different Google 'UA-XXXXX-Y' sections! The hard part is talking to the various business owners and getting them to agree to one central account.


Wow, it did load super fast on a slow phone and over 3G; however, once loaded, and a couple of seconds after content skimming, interacting with the site was clumsy. In particular, touching a category element was unresponsive. On a second experiment I waited 10 seconds before interacting and it worked fine. I guess there is some UI-blocking JS scripting going on right after load.


This was very funny to read and has several good tips that I'll use, but the polyfilling tips need more testing: the site doesn't work at all on IE11 (on Windows 10 and 8.1).


don't ship too much synchronous code at your entry points.

go below 200kb at your entry points, so people see something fast.

now they have something to do and you can ship the rest of your app asynchronously in the background.

I think this is the most important. Even more than build time rendering or something. The 80% yield with 20% effort.
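
A sketch of that pattern with dynamic import() (TypeScript; the "./editor" module and the function names are hypothetical):

  declare function renderShell(): void; // stand-in for the small synchronous entry bundle

  renderShell(); // the sub-200KB entry paints something immediately

  document.getElementById("edit")?.addEventListener("click", async () => {
    // The heavy feature is fetched off the critical path, only when needed.
    const { initEditor } = await import("./editor");
    initEditor();
  });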


There are two reasons why I stopped reading this article: it is not fast (it took 20s to load compared to 3s for Hacker News; note that I'm on a crappy mobile network) and the site is non-responsive on mobile (I clicked things which looked like clickable stuff and nothing happened).


Hmm interesting. Yeah I just had that same issue. Mobile was unresponsive for about 3 seconds till everything loaded.


"Whenever I feel reluctant to throw out some work, I recall that life is pointless, and nothing we do even exists if the power goes out. There’s a handy hint for ya."

Worth reading just for this gem.


This article took almost 45 seconds to load. The reason is that I had the Medium app installed.

Just uninstalled that and it loads within 10 seconds.


Very cool thanks for sharing this.


[flagged]


Personal attacks are not allowed on Hacker News.

We detached this subthread from https://news.ycombinator.com/item?id=13249148 and marked it off-topic.


It wasn't a personal attack, but OK.


I believe you that you didn't intend it that way, but it is a cliché of personal swipes, so absent extra information one can hardly read it otherwise.


Context, dude! People go to parties to enjoyably waste time with and around other people. People read tech blogs to learn stuff to help them do better work. The author is mistaking his tech blog for a party.

He can do that if he wants, it's his blog after all. But I can easily sympathize with the grandparent's annoyance at having his attention abused to parse an extrovert's lame stand-up comedic fluff.


Really impressive. The programmer went through and made this page https://knowitall-9a92e.firebaseapp.com super fast for both mobile and desktop, and found a lot of interesting benchmarks in the process.


FTA:

> Hopefully that opened quickly and I have established credibility. If that sounds immodest it’s because I’m awesome.

Not by hosting it as a blog on Medium, which, while it may seem nice, loads a bunch of Medium links _in the header_, which causes the page to not render at all for anyone behind a moderately strict corporate firewall.

Eat dog food; don't just preach it. Consistency, not showing off a one-wheel skid on a scooter.



