1) App Server Response. How long does it take your application to issue a successful response?
2) Network Speed. A server response for a 500MB file is fast, but downloading it isn't.
3) DOM Processing. Great, you've reached the user's browser; now how long does it take the page to load into the DOM?
4) Page Rendering. Only done after all the "assets" are downloaded. This is usually the final state of the page.
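The four phases above can be pulled apart using the browser's Navigation Timing data. A minimal sketch, assuming a timing object with simplified field names (the real `performance.timing` API uses longer names like `domContentLoadedEventEnd`; the sample values here are made up):

```javascript
// Break a page load into the four phases above, given a Navigation
// Timing style object (all values in ms since navigation start).
function breakdown(t) {
  return {
    appServer: t.responseStart - t.requestStart,       // 1) server response
    network: t.responseEnd - t.responseStart,          // 2) download time
    domProcessing: t.domContentLoaded - t.responseEnd, // 3) DOM processing
    rendering: t.loadEventEnd - t.domContentLoaded,    // 4) rendering (assets)
  };
}

// Sample values (made up) to illustrate the math:
const sample = {
  requestStart: 10,
  responseStart: 360, // ~350ms app server response
  responseEnd: 500,
  domContentLoaded: 900,
  loadEventEnd: 1500,
};
console.log(breakdown(sample));
// appServer: 350, network: 140, domProcessing: 400, rendering: 600
```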
Yes, if your Google Page Speed number is super high, you've got a problem. But it helps to be able to break that number down step by step, to identify where the bottleneck is.
Also, as the author mentioned, be aware that every computer is different. One really slow internet connection will drastically affect your page speed averages.
A small benchmark for you: one of my applications has an average app server response time of about 350ms, but one page has an average Page Speed of 5 seconds. I'm diagnosing why that page is so slow; it looks like a few external assets are causing the slowdown.
For example, something like Kayak.com takes a very long time (~10-15 seconds) to load a results list. However, the whole experience feels very fast because the data layer is separated from the presentation layer, and the presentation layer comes up very quickly, with data allowed to load incrementally "below the fold".
Put another way, part of "page speed" is also how fast the user can start interacting with any of it, even if it's just to start looking at the initial search results.
Actually, using your logic, #4 is right. I worded it wrong: "Only done after all the "assets" are downloaded" should be "Only completed after all the "assets" are downloaded". More often than not, you'll see high (6 seconds+) Page Speed scores because Page Rendering isn't completed due to missing or slow assets.
You are right. This is precisely the reason why we are looking at CDNing. Our server response time is under 300ms and our page speed score is 96/100.
I try to keep our numbers under 1 second, although we only target the USA.
The biggest improvement to our page load speed came from using a PHP cache (XCache) and specifying client cache lifetimes. The first got us to around 1 second, and the second meant that the best case (which happened often) was about 0.5s.
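Specifying client cache lifetimes usually comes down to sending the right `Cache-Control` headers. A minimal sketch of one possible policy (the lifetimes are illustrative, not the values the commenter actually used):

```javascript
// Hypothetical policy: long-lived caching for static assets,
// revalidation for everything else.
function cacheHeader(path) {
  if (/\.(css|js|png|jpg|gif)$/.test(path)) {
    return 'public, max-age=86400'; // static assets: cache for a day
  }
  return 'no-cache'; // HTML and dynamic pages: always revalidate
}

cacheHeader('/css/site.css'); // 'public, max-age=86400'
cacheHeader('/index.php');    // 'no-cache'
```

With headers like these, repeat visits skip the asset downloads entirely, which is what makes the best case so much faster than the first load.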
I just complained to one online cartoon about this: changing from strip to strip takes 1.5 seconds, and that's super slow afaik. It really annoys users.
Where is the speed issue coming into play? There are 3 areas of concern:
1) App server and backbone speed
- This is what the discussion is centering on. CDN is good.
2) Connection speed between client and ISP
- This is a critical issue when dealing with performance in
3rd world countries. CDN will NOT help here!
- As an example, there is likely a 200-300ms roundtrip from
your servers to the CDN and customer ISP. By using a CDN,
you remove this 200-300ms. If the customer is using GPRS
with 5000ms latency, your CDN has done almost nothing!
3) Customer's processing speed
- If your customer is using an old BlackBerry phone, and your
page is heavy, it could take several seconds for the page to
render for the user, even if the content were local.
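The round-trip arithmetic in point 2 can be sketched with a rough model (the numbers below are the illustrative figures from the example above, not measurements):

```javascript
// Rough model: total latency ~= round trips x (server RTT + last-mile RTT).
// A CDN shrinks the RTT to your servers, but cannot touch the
// last-mile latency between the client and its ISP.
function loadLatencyMs(roundTrips, serverRttMs, lastMileRttMs) {
  return roundTrips * (serverRttMs + lastMileRttMs);
}

loadLatencyMs(4, 250, 20);  // 1080 -- broadband, no CDN
loadLatencyMs(4, 30, 20);   // 200  -- broadband via CDN: big win
loadLatencyMs(4, 30, 5000); // 20120 -- GPRS user: the CDN barely matters
```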
2. For a site like http://syncfin.com where I have to use 4 bright images, is there any particular format I should keep the images in? Basically, when there are multiple heavy images to load, what would be the best strategy to optimize the page speed?
Thanks in advance for any pointers to relevant articles and tips.
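One common strategy for the multiple-heavy-images question above is to load only the above-the-fold images eagerly and defer the rest, the same "below the fold" idea mentioned earlier in the thread. A minimal sketch of the split (function and field names are hypothetical, not from any particular library):

```javascript
// Split images into eager (above the fold) and lazy (below),
// so the first paint isn't blocked by every heavy image.
function planImageLoading(images, foldPx) {
  return {
    eager: images.filter(img => img.top < foldPx).map(img => img.src),
    lazy: images.filter(img => img.top >= foldPx).map(img => img.src),
  };
}

const plan = planImageLoading(
  [{ src: 'hero.jpg', top: 0 }, { src: 'footer.jpg', top: 2000 }],
  800 // assumed fold height in px
);
// plan.eager -> ['hero.jpg'], plan.lazy -> ['footer.jpg']
```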
Use WebPageTest.org to carry out spot tests.
Btw, I found out some pretty nice online tools.
Make of that what you will.
Just gathered some stats from my browser by loading the link we are on: http://news.ycombinator.com/item?id=4973322
921ms (onload: 949ms, DOMContentLoaded: 927ms)
1.29s (onload: 1.30s, DOMContentLoaded: 1.12s)
Most weird. I think it might be a problem getting the CSS, but I simply can't see why.
Should add, I have not been notified in any way that this has or might have happened. I know of no reason why it should, as I have always tried to respect this community. If it has happened behind my back, then I would be quite appalled. At the very, very least, I would expect to be told. And yes, I use a proper email account. Besides, I emailed the site and got no reply.
The problem with hell-banning is that for users that post a lot of comments, they'll soon realize that they are hell-banned and trolls (in particular) can only be stopped if they are genuinely ignored.
For this reason, if you want to stop trolls, slow-banning could be a good alternative by making usage of the site unpleasant. This way they can still interact with others, but it will be painful for them to do so.
I agree that these methods are passive-aggressive and shouldn't be used on non-trolls, even if such people do not comply with the community's guidelines. The problem with this site is that user accounts do not have emails attached; a much more effective method for making most users behave according to the guidelines would be to send them a warning.
"why did that user get banned?"
"they were a troll"
"ah, okay. that's fine then, seeing how I am not banned this must be totally legit. damn trolls!"
What if users had a killfile for their account? Seems more meritocratic than hellbanning at least, if I can just choose who I want to ignore or not, and if you can tolerate someone, then they're alive for you.
Most sites have a capability where they can flash a message to a user the next time they log in. Such a message could link to the flagged comment in question and point out what was wrong about the comment.
But really, HN is not a democracy, or transparent, or really anything other than pg's playground. So it's up to him, and clearly he's chosen to put avoiding confrontation first.
Amazon's report: "Every 100ms delay costs 1% of sales"
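Taking that figure at face value, the back-of-the-envelope math looks like this (a sketch of the claimed rule of thumb, not Amazon's actual methodology):

```javascript
// If every 100ms of delay costs 1% of sales, an extra delayMs of
// page load costs roughly this percentage of sales.
function salesLostPct(delayMs) {
  return (delayMs / 100) * 1;
}

salesLostPct(100); // 1 (%)
salesLostPct(500); // 5 (%)
```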
Everything else IIRC needs a credit card and I am not willing to use that.
That said I am not 100% happy with CloudFlare.
When you make a new website today using modern tools, page load should be below 1 second.