I was recently speaking at a conference and got into an argument with another speaker (not publicly) because he had a technique to improve API response time. Thing is, said technique would delay the product's time to market and make the code needlessly complicated to maintain. I pointed out that it is probably better for the bottom line of most companies to take the approach that is slightly slower but easier to maintain and faster to get to market.
At which point he accused me of being another product shill (not a real software engineer) and shut down the debate.
Thing is, I have a computer science degree and take software engineering very seriously. I just also understand that companies have a bottom line and sometimes good is good enough.
So I ask, this "world's fastest web site"... how much did it cost to develop in time/money? How long until it returns its investment? Is it going to be more successful because it is faster? Is it maintainable?
I'm guessing the answers are: too much, never, no and no.
With that said, I fully appreciate making things fast as a challenge. If his goal was to challenge himself, like some sort of game where beating the benchmarks is the only way to win, then kudos. I love doing stuff just to show I can, as much as the next person.
For example Marissa Mayer claimed (though this was in 2006) that a 500ms delay directly caused a 20% drop in traffic and revenue: https://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20...
Optimizely found that for The Telegraph, adding a 4s delay led to a 10% drop in pageviews: https://blog.optimizely.com/2016/07/13/how-does-page-load-ti...
100ms cost Amazon 1% of sales: http://blog.gigaspaces.com/amazon-found-every-100ms-of-laten...
Akamai has a number of claims: https://www.akamai.com/us/en/about/news/press/2009-press/aka...
Of course the impact of latency is going to depend on your particular circumstances but there are certainly circumstances where it can make a big impact.
At least the Optimizely result is from this year.
I really want more evidence of the importance of performance in web systems (to justify my hobby, if for no other reason), but things from ten years ago just don’t cut it.
Can we please stop citing articles from ten years ago and cite some studies that aren’t more than two years old?
Do people have more such studies?
(Also I’d be interested in more geographically diverse results. Figures like “three seconds to abandonment” are absurd when you’re in Australia or India as I am. Most e-commerce sites—especially US/international ones—haven’t even started drawing at that point, even with the best Internet connectivity and computer performance available, simply because of latency if the site is hosted in the US only. I visited the US a couple of years ago and was amazed at just how fast the Internet was, simply due to reduced latency.)
> Can we please stop citing articles from ten years ago and cite some studies that aren’t more than two years old?
Could you explain why? New data is good, sure but why should we disregard old data? I think instead of asking for people to simply discard old data, you should point out the issues you believe the data has so that they can be addressed (with newer data if necessary).
Anyhow, from https://riteproject.files.wordpress.com/2014/10/fact-sheet.p...:
- "In 2008 the Aberdeen Group found that a 1-second delay in page load time costs 11% fewer page views, a 16% decrease in customer satisfaction, and a 7% decrease in searches that result in new customers."
- In 2009 Microsoft Bing found that introducing 500ms extra delay translated to 1.2% less advertising revenue.
I also found a whitepaper from 2010 done by Gomez.com that shows how speed impacts various metrics: https://github.com/JasonPaulGardner/po4ux/blob/master/InfoPa...
There's also an article from 2013 that's only loosely related but shows that reducing TTFB latency can increase search result rankings: https://moz.com/blog/how-website-speed-actually-impacts-sear...
> Also I’d be interested in more geographically diverse results.
I'd love these too, let me know if you find them.
People depend on the web more than they did, so they might be willing to wait for things longer—or even just be resigned to the web being slow. Or perhaps the increased number of providers means that they won’t be willing to wait as long. I don’t know.
It does seem to me that the precious few studies that there are from recent times haven’t shown as severe a drop-off as ones from ten years ago did. But then the ranges of the results are so far out of my experience in non-US countries that I don’t feel I can judge it.
What I do know is that I have had people complain of the age of the data I cited within the last couple of years, when citing the traditional examples of Amazon and Google. Their age and the indubitable ecosystem changes since their time make me a little leery of citing them now, because after due thought, I agree with the objections.
As I say, I want performance to be a commercially valuable factor, because it would justify one of my favourite two topics in IT. I just want us to be using solid, dependable studies of the matter so that we can prove our point without reasonable doubt, and justify expenditure on improving performance, rather than aged studies which are open to quibble.
It is hard for me to disregard those big names and their claims, especially since I have no direct experience in working for any of those companies for me to say otherwise.
On the other hand, in my purely anecdotal world of friends, family, and colleagues, I have never seen/heard of such behaviors. I doubt any of them even realize what 100ms is, let alone abandon their purchase on Amazon because of that delay. Most people can't even react in 100ms to brake their cars.
I really have to wonder if we have causation vs correlation going on.
If I had to guess how the same studies would turn out if performed today, i.e. whether the drop in page views from slower response times would be larger or smaller than 10 years ago, I'd guess that today you'd see even fewer page views than 10 years ago.
We need data.
If anything I would imagine it to be the opposite, with connections generally being faster these days. Especially since the old Google result was about the observed effects of a 100ms slowdown, so not about generally slow websites.
People don’t necessarily become more demanding; maybe in the earlier days they said, “I don’t need this, it’s taking too long, I’ll just give up,” whereas now they are resigned to the Internet being slow.
Also of interest would be making a site massively faster (e.g. by an apposite service worker implementation), but A/B test on the rollout of it. Over several months, ideally.
People have a threshold on how long they stay on your site, say 5 minutes. If your site is faster, they will see more pages. More pages increases the likelihood that they find something to buy/comment/...
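A toy model of that effect (the 5-minute budget and per-page times are made up for illustration):

```javascript
// Model a visit as a fixed time budget; each page costs load time plus
// reading time, so faster loads mean more pages seen per visit.
function pagesViewed(budgetSec, loadSec, readSec) {
  return Math.floor(budgetSec / (loadSec + readSec));
}

const budget = 5 * 60;                       // 5-minute visit
const slowSite = pagesViewed(budget, 5, 25); // 5 s loads -> 10 pages
const fastSite = pagesViewed(budget, 1, 25); // 1 s loads -> 11 pages
// fastSite > slowSite: shaving load time buys extra page views
```

A 10% bump in pages per visit from shaving 4 seconds is roughly in line with the Telegraph figure cited upthread.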
I would love this tested.
That's obviously specific to ads, but there's clearly still some impact from page speed there.
Check whether people like your software and whether you receive enough €€€, i.e. whether you have market fit. Then optimize and write a blog post about it.
You shouldn't completely ignore performance all the way through and assume you can sprinkle it on top later, but you shouldn't spend months squeezing the last few ms out of a feature that the client will change / remove when they try it anyway.
> Really, you are gonna refactor a 100k Java codebase to be fast when its done?
Maybe? Depends where the problems are. If the problems are huge architectural ones, no, that sounds terrible. If the problem can be solved later by improving an inner loop somewhere, then sure.
People used Instagram at first because it was fast.
And they were quickly bought out for a billion.
Nobody would have used Snapchat if it didn't take photos but loaded in 1ms. Nobody would use it if it had loads more features but took 2 hours.
Speed is a part of it, and must be traded off with all the other parts of the user experience.
I'm currently leading a team whose primary product is the front end service for the core SaaS business that does billions of monthly impressions (hundreds of millions of uniques). This product is over 10 years old, written in Spring, and has around 400,000 lines of Java.
A couple years ago, we underwent a massive overhaul of the system and refactored the application to use our realtime streaming platform instead of MySQL+Solr.
It took some time to get it done, mainly because at the time there were about 8 years of features baked into a monolithic web service.
Ultimately, we were successful in the endeavor. It took about 2.5 years and our response times are dramatically faster (and less expensive in terms of infrastructure cost).
So yeah, you can do it. It's expensive and takes time if you wait too long. And you need full support from the business. And some companies don't survive it. YMMV.
That is, a complete rewrite.
When that's not possible, we have to endure the constantly crashing and slow as molasses web sites like the ones used in universities by students to enroll in courses.
... because I'm noticing, esp. over the last 1+ years, that Amazon's page loads are getting slower and slower (Win7, FF) with all of the "crud" they are loading with each/every page. I remember when it was all a lot snappier (not like an HN page load "snappy", but pretty good!)
Amazon has shifted since then to pushing more products into your face, which apparently works better for them than the 'faster pages' approach.
I wonder why we don't have free high-speed internet yet, financed by big companies.
An acceptable 2s load time on Wifi might turn into 2 minutes on Edge.
You might say, 95% of our customers have 3G, to which I'll reply, 100% of your customers sometimes don't have 3G.
And when your page takes a minute to load, it doesn't matter what your time to market was, because no one will look at it.
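That Wi-Fi vs. EDGE gap is just arithmetic (the link speeds and page size here are rough, illustrative figures):

```javascript
// Transfer time scales inversely with throughput: the same ~2MB page
// that loads in ~2s on 8 Mbps Wi-Fi needs roughly two minutes on
// ~135 kbps EDGE.
function transferSeconds(pageMegabytes, linkKbps) {
  const kilobits = pageMegabytes * 8 * 1000; // MB -> kilobits
  return kilobits / linkKbps;
}

const wifi = transferSeconds(2, 8000); // 2 s
const edge = transferSeconds(2, 135);  // ~118 s, close to 2 minutes
```

And that ignores latency and packet loss, which make slow mobile links even worse in practice.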
When your news website is sluggish every time I'm on the train, I'll stop reading it, and do something else, like browse hacker news, which is always fast.
The supposedly "fastest web page" took 13 seconds to download and render, and that's really good compared to the rest. HN is slightly faster, but probably due to cached assets.
Overengineering is the biggest obstacle to finishing a product fast.
Fix problems as they come, and try to foresee them, as long as doing so doesn't take twice as long to finish.
Most people just give up in those cases and think they have no signal. But it's not true: that bandwidth would be absolutely sufficient to transfer a few tweets or a newspaper article. It's just the bloated, ad-tech-infested websites used to serve the content that are breaking down.
Do you think it would be an ethical thing to build a less secure bridge or building just for the sake of getting them out quicker and/or cheaper?
So this is how I see it with software engineering. Of all the engineering branches, we take our job the least seriously and are not good at defending our decisions or taking the time required to build our software the way it should be built. We just assume that our customers know better and have good reasons to rush to market, and that there is nothing we, as software engineers, can do about it.
So in a way, that guy you talked to was kind of right, because it is your responsibility to defend the need for fast, efficient, and maintainable software. It is the customer's responsibility to take care of the product and plan accordingly.
Yes, and we do this constantly. Nothing is ever designed to be completely safe because we'd never get anything done. Bridges aren't all designed to withstand magnitude 11 earthquakes. We arguably go too far because the costs of the final safety features could save far more lives if used for something like vaccinations, but that's not key here.
As always, the trick is where you draw the line.
When your app fails for some reason, you can sometimes fix it in under 3 minutes.
Azure, Amazon, and Google all have status pages. A bridge doesn't.
There's a lot of research out there about the link between page performance and user retention rate. And this makes sense: If newegg is taking forever to browse, I'll switch to amazon and newegg loses out on a decent chunk of change.
So, up to a point, yes, they are going to be more successful because it is faster. 200ms on my broadband-connected desktop isn't that much, but Google is able to measure its impact. And that might be a second or two on my cellular-connected phone.
> Is it maintainable?
A lot of optimizations I've seen involve simplifying stuff. Fast and maintainable don't have to be at odds. I wouldn't care to guess for the whole, but, for example, do you really think using system fonts instead of embedding your own is more complicated, harder to maintain, and more work? I doubt it, and that's one of the optimizations suggested.
Now, yes, with optimizing, there is a break even point where it's no longer worth it to push further, but it's also not necessarily obvious where that is if you're just taking it a task at a time. Keep in mind: some of this is research for effective and ineffective techniques for optimizing other websites, and evaluating which ones are maintainable (or not) for future projects. To know what to bother with and what not to bother with when implementing the rest of the codebase. If you're just worried about the next JIRA milestone, you'll be sacrificing long term gains for short term metrics.
Is it worth micro-optimizing everything before launch? Probably not.
Is it worth testing out what techniques and technologies perform well enough before launch? I've been part of the mad scramble to do some major overhauls to optimize things that were unshippably slow before launch. Building it out right the first time would've saved us a lot of effort. I'd say "probably yes."
I've made a lot of sites fast(er) in maybe 4 hours' time. Yes, slow frameworks are slow, so you cannot change that in 4 hours, but most frameworks aren't that slow.
My work involved: rewriting a slow query used on every page, changing PNGs to JPEG and reducing the resolution, moving an event that is fired on every DOM redraw, and so on.
And every single time I was just fixing someone's lazy programming.
Of course I agree that there should be a limit to optimization, but most of the time simple fixes will shave off seconds.
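As an example of the "slow query used on every page" class of fix, a one-layer cache is often all it takes (`fetchNavigation` here is a hypothetical stand-in for a slow database query):

```javascript
// Memoize an expensive per-request lookup so it runs once per key
// instead of on every page load.
function memoize(fn) {
  const cache = new Map();
  return (key) => {
    if (!cache.has(key)) cache.set(key, fn(key));
    return cache.get(key);
  };
}

let queries = 0;
const fetchNavigation = (locale) => { queries += 1; return `nav:${locale}`; };
const cachedNav = memoize(fetchNavigation);

cachedNav('en'); // hits the "database"
cachedNav('en'); // served from cache; queries is still 1
```

The usual caveat applies: this is only safe if the underlying data changes rarely or you invalidate the cache on writes.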
Often, the kinds of mistakes you mention are not due to laziness, but instead to ignorance. Many web developers come from a graphic design or old school "webmaster" background and some never really mastered programming as a skill. They're comfortable writing and/or copy-pasting small scripts using jQuery or some favorite library for minor enhancements to a page, but struggle when it comes to building a cohesive, performant, well-designed application for the front-end.
I myself was a member of this group until about 2007-2008, when I made a concerted effort to upgrade my programming skills. I did intensive self-study of algorithms, data structures, low-level programming, functional programming, SICP (most of it) and K&R C (all of it), etc.
More recently, Coursera and EdX have been a great resource for me to continue advancing my software development skills.
Your approach was correct. Ideally you would take into account how response time affects a site's metrics and try to balance between all constraints.
Because it's all about making compromises to manage an app and achieve its goals. You are right that time to market matters and that launching the product sooner should be the number one priority. But of all the factors that make your product worthwhile, performance is a pretty darn good one.
There are several websites on the internet today that have the potential to become great, if only they paid some heed to the performance factor. Take the Upwork freelancing site for example: its performance was really solid when it was oDesk, its predecessor. It's basically because of the earlier oDesk goodwill that it still has a sizable userbase today. Sometime in 2013, along the lines of your thinking, some management guru must have cut corners in the development of the repolished Upwork site, and the result was an absolute performance nightmare! As a freelancer, Upwork is a third or fourth priority for me now, whereas the former oDesk was actually number one.
Another example of nightmarish performance is Quora, which has a fantastic readership that supplies solid content to the site. It's solid proof that really good content is highly valued in the online world: people are willing to endure a site with good content despite its lagging performance, but that doesn't mean it's ideal. Quora still has a lot of potential; it could match or maybe even surpass Reddit and HN, or even Facebook and LinkedIn, if it paid heed to the performance factor, but I don't see that happening soon!
They can manage more millions of messages per second than all their competitors.
The 'build slower applications much faster' mantra has some value, except when everyone can build that application in a month and the market is full of clones.
(If you search for "lighthouse benchmark", "lighthouse speed test", or "lighthouse app" you get nothing. "lighthouse tool" and "lighthouse web" works.)
The latter requires a lot more work.
In other words, all webapps are websites, but not all websites are webapps.
I think having another term for the things which are only "sites", like blogs, news articles, etc., would be useful.
This is exactly what I do and why the average render time for my e-commerce site is less than 200ms. I use JS for minor DOM manipulations, everything else is static.
The secret ingredient is using the Varnish cache server, which beats the pants off everything else and can fill a 10G pipe while still having plenty of capacity left on a single CPU core.
One drawback of formats like JSON and XML is that they require reaching the closing token ("}" or "]" usually for JSON, "</root>" for XML) before the document can be considered parsed. A properly designed binary format can be used like a database of sorts, with very efficient reads.
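A toy illustration of that random-access property, with an invented fixed-width format (using Node's Buffer API):

```javascript
// Toy binary format: 8-byte records (uint32 id, uint32 price).
// Reading record i is one offset computation; unlike JSON, there is
// no need to scan ahead to a closing token.
const RECORD_SIZE = 8;

function writeRecords(records) {
  const buf = Buffer.alloc(records.length * RECORD_SIZE);
  records.forEach(({ id, price }, i) => {
    buf.writeUInt32LE(id, i * RECORD_SIZE);
    buf.writeUInt32LE(price, i * RECORD_SIZE + 4);
  });
  return buf;
}

function readRecord(buf, i) {
  const off = i * RECORD_SIZE;
  return { id: buf.readUInt32LE(off), price: buf.readUInt32LE(off + 4) };
}
```

Real-world formats like Cap'n Proto and FlatBuffers generalize this idea to variable-length, nested data.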
Haven't used it myself, but worked with people on a project which did, and boy does it fly compared to JSON!
But the typical data flow with JSON goes like this:
1) Get a string.
2) JSON.parse it into an object graph.
3) Use that object graph to build the data structure you really care about (e.g. a DOM).
If you parse yourself you may be able to cut out the middleman object graph. That might be worthwhile; needs measuring.
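As a toy version of "cut out the middleman", here is the same computation done both ways on a JSON array of non-negative integers (the direct scanner only handles that narrow input shape):

```javascript
// Route 1: the usual flow -- parse into an array, then reduce it.
function sumViaObjectGraph(json) {
  return JSON.parse(json).reduce((a, b) => a + b, 0);
}

// Route 2: scan the text directly, never materializing the array.
function sumDirect(json) {
  let sum = 0, num = 0, inNumber = false;
  for (const ch of json) {
    if (ch >= '0' && ch <= '9') {
      num = num * 10 + (ch.charCodeAt(0) - 48);
      inNumber = true;
    } else if (inNumber) {
      sum += num;
      num = 0;
      inNumber = false;
    }
  }
  return sum;
}
```

Whether the saved allocations actually matter is exactly the "needs measuring" point.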
https://en.wikipedia.org/wiki/Fast_Infoset
https://en.wikipedia.org/wiki/Binary_XML
https://html.spec.whatwg.org/#xhtml
And even if we had that, we would need some consensus (read: coding rules) on how to exactly encode HTML/CSS/etc. in MessagePack.
Looks doable to me, but there is a long way to go.
Our <scripts>, much like our feet, should be at the bottom of our <body>. Oh I just realised that’s why they call it a footer! OMG and header too! Oh holy crap, body as well! I’ve just blown my own mind.
Sometimes I think I’m too easily distracted. I was once told that my mind doesn’t just wander, it runs around screaming in its underpants.
Pure gold. :D
Speed test: https://tools.pingdom.com/#!/FMRYc/http://www.futureclaw.com
Average page generation time from the database is 2-3ms & cached pages generate in 300 microseconds. Also, this includes GZip.
One day I'll write a blog post on it, but it's a Django app with a custom view controller. I use prepared statements and chunked encoding & materialized database views & all sorts of techniques I forgot that I need to write down. I also gzip data before storing in Redis, so now my Redis cache is automatically 10x bigger.
I just checked and it's faster than Hacker News: http://tools.pingdom.com/fpt/bHrP9i/http://news.ycombinator....
Note that the page varies in size and format depending on your user agent; it becomes ~70KB when viewed from a Chrome mobile device.
A ~500KB image on a 3G mobile device gives a load time of ~15.9s (https://www.webpagetest.org/result/161224_M7_29Q/)
Using WebPagetest, you can test your page with real devices to get an idea of how it will load for users who are not on overpowered desktop devices hooked up to the Internet over a fibre connection. If you look at the "Filmstrip view" for futureclaw, you don't even see any text until ~4.7 seconds in, and the page doesn't look finished until well after 13 seconds because that image takes so long to come in. Hacker News, on the other hand, has content visible at 2.3 seconds.
So yes, back-end times are important. But it's a small slice of the overall load time. Just keep in mind that the proportion of people on mobile connections is actually going _up_ in the developed world, and the average speed of mobile connections is not improving. It's really important that we (web developers) recognise this and build sites that can be used by everyone, no matter the speed of their device or connection.
You'll find that in the fashion industry, not a lot of people are browsing expensive dresses on 3G connections on older devices in third-world countries.
Also, for connection speeds, that's going to be affected by the fact that Hacker News doesn't serve pictures, which you sorta have to do for fashion.
The backend number is nice, really, but the quoted load time by the tool isn't. My jenkins loads almost as fast, (and according to the tool faster than HN), and isn't even hosted in a DC. Some pages hosted on the same machine load in almost a quarter of that. Most of that is internet latency, the same pages requested from the same network take just a couple ms. Even Jenkins only takes 65 ms for /.
Anyone want to turn this into a platform product?
Reading this article is like witnessing a race between a bunch of lions, only to see the winner unzip what appears to be a lion costume covering a cheetah. The website appears as if it has the overhead of a DB, other networks, etc., when it actually doesn't.
Instead the article reads as "a few tricks to make really fast websites that don't do anything". Of course, that's not entirely true, since the optimizations mentioned do apply to more involved websites, but with much lower efficacy. Anyways, funny article.
That's the most surprising thing I learned from the article. But I'm still a little skeptical about this. One well-known CPU comparison site gives the following scores:
- The fastest desktop system they rated, which happens to have an i7 CPU just as the author is using, is rated 9206.
- Google Pixel XL smartphone gets a score of 8153.
In other words, the Pixel comes out as having 89% of the performance of the fastest desktop system.
I looked at some other metrics for comparing systems, and I'm not seeing how smartphones are only 10% as fast as desktops -- neither in average cases nor in extreme cases (fastest desktop vs fastest smartphone).
 PassMark Software Pty Ltd. Their PassMark rating is a composite of tests for CPU, disk, 2D & 3D graphics, and memory.
Also look at the speedometer benchmark (http://browserbench.org/Speedometer/) for a more realistic comparison of how slow your mobile device is compared to your desktop when running real world apps. Here are some numbers from the benchmark from some devices - http://benediktmeurer.de/2016/12/16/the-truth-about-traditio...
Do you really think we're expending 20-30 times the power for a 12 % compute speed-up?
Just to clarify, the two numbers you quote are from different benchmarks. You can't compare them, let alone say "oh, they're only different by 10 percent".
> the two numbers you quote are different benchmarks
But I don't agree with that explanation. Both benchmarks are "PassMarks" (a proprietary measure devised by that company), and I don't see them saying that their PassMarks for desktops are different from their PassMarks for smartphones.
Power optimizations come at a performance cost.
If a beefy desktop is 100, I'd guess the real-world (TodoMVC) benchmarks are 100/20/90 for desktop/Android/iOS, for a recent Android and an iPhone 7.
So CPU power (sadly) doesn't translate to browser performance as directly on Android as it does on iOS.
And you know what? My competitors still all ranked higher despite their clumsy, megabyte-heavy pages loading for more than ten seconds. I know that size and speed are not the only SEO signals, but they seem to be not as important as believed. Not sure, though, how AMP's importance will play out in the future.
I still like his post; some of his tips are good (e.g., start mobile first) and some depend heavily on the use case (e.g., don't render HTML on the server). As others here in the thread say: does the ROI justify the extra mile? If yes, then go for it.
10 years ago (and now, to a degree) it was database or C++ performance optimization. You could specialize in these to earn in the top 10-15 percent of the crowd.
Go below 200KB at your entry points, so people see something fast.
Now they have something to do, and you can ship the rest of your app asynchronously in the background.
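One common shape for "ship the rest asynchronously" is a lazy, load-once wrapper around a dynamic import. In a real bundler setup the thunk would be `() => import('./charts.js')` (module name hypothetical); a resolved promise stands in here so the sketch is self-contained:

```javascript
// Defer loading heavy code until first use, and load it only once:
// repeat calls reuse the same promise instead of re-fetching.
function lazy(loader) {
  let modulePromise = null;
  return () => (modulePromise ??= loader());
}

// Hypothetical heavy module, normally behind a dynamic import().
const loadCharts = lazy(() => Promise.resolve({
  render: (data) => `chart(${data.length})`,
}));
```

Bundlers split each dynamic import into its own chunk, so the entry bundle stays under the budget and the rest downloads in the background.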
I think this is the most important one, even more than build-time rendering or the like: the 80% yield for 20% effort.
Worth reading just for this gem.
Just uninstalled that and it loads within 10 seconds.
We detached this subthread from https://news.ycombinator.com/item?id=13249148 and marked it off-topic.
He can do that if he wants, it's his blog after all. But I can easily sympathize with the grandparent's annoyance at having his attention abused to parse an extrovert's lame stand-up comedic fluff.
> Hopefully that opened quickly and I have established credibility. If that sounds immodest it’s because I’m awesome.
Not by hosting via a blog on Medium, which, while it may seem nice, loads a bunch of Medium links _in the header_, causing the page not to render for anyone behind a moderately strict corporate firewall.
Eat your own dog food; don't just preach it. Consistency, not showing off a one-wheel skid on a scooter.