workhere-io's comments

For PostgreSQL: RDS or Heroku Postgres.

-----


Does this mean that all FastMail accounts are now hosted in Iceland?

-----


No. Here's a response from an employee: https://news.ycombinator.com/item?id=7832207

-----


They've changed their IP address ranges, not their physical servers, from what I can tell in the announcement.

-----


Node's big advantage (some would say disadvantage) is that it uses JavaScript (or languages that compile to JavaScript), and there's a good ecosystem around it. However, let's face it: the whole asynchronous thing is a pain, even if you use the async library or promises or whatever. Node enthusiasts would say that asynchrony is necessary for great performance, but the reality is that there are plenty of languages that outperform Node without using asynchronous calls: Java, C++, Go, even raw PHP without a framework in some cases (see http://www.techempower.com/benchmarks/#section=data-r8&hw=i7...). Another reality is that most sites don't even need such high-performing languages. Instagram with its 200 million users uses Python, and Facebook used "normal" (uncompiled) PHP until it had around 500 million users.
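
To sketch the pain point (a hypothetical example, not from any particular codebase): three dependent I/O steps written callback-style quickly nest, which is exactly what the async library and promises try to tame.

    // Hypothetical example: three dependent async steps, callback style.
    // Each step can only start inside the previous callback, so the code nests.
    var fs = require('fs');

    fs.readFile('config.json', 'utf8', function (err, text) {
      if (err) return console.error(err);
      var config = JSON.parse(text);
      fs.readFile(config.inputFile, 'utf8', function (err, data) {
        if (err) return console.error(err);
        fs.writeFile(config.outputFile, data.toUpperCase(), function (err) {
          if (err) return console.error(err);
          console.log('done');
        });
      });
    });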

-----


> Don't think you are gaining any SEO-benefit from one-page JS-only applications, just because Google made it possible for you to start ranking.

No one is expecting to get any SEO benefits that "normal" pages don't have. We are expecting to get the same chance of ranking as normal pages.

You mentioned that single page apps might rank differently or worse than normal pages. Do you have any source for that? (A source that is current, since Googlebot's improvements are quite new).

-----


>We are expecting to get the same chance of ranking as normal pages.

Then you should probably adjust this expectation. You say in your article:

>While having this sort of HTML fallback was technically possible, it added a lot of extra work to public-facing single page apps, to the point where many developers dropped the idea...

A JS-driven site with an HTML fallback is a normal page. Then you don't need any tricks, and you don't have to force Google to run your application and hope it turns it into indexable pages. Start with the fallback and enhance.

Skipping the fallback is a serious mistake with consequences. The Tor Browser Bundle and Firefox ship with JavaScript enabled because disabling JS broke too much of the current web. It causes accessibility issues (remember when Twitter switched to hash-bang URLs?), if not for Googlebot, then for regular users. From the Webmaster Guidelines:

>Following these guidelines will help Google find, index, and rank your site.

>Use a text browser such as Lynx to examine your site, because most search engine spiders see your site much as Lynx would. If fancy features such as JavaScript, cookies, session IDs, frames, DHTML, or Flash keep you from seeing all of your site in a text browser, then search engine spiders may have trouble crawling your site.

>Make pages primarily for users, not for search engines.

I am still going on the assumption that you created a one-page application without a time-consuming fallback, and that you rely on Google to make rankable pages from it. Then you leave some users standing in the cold, so why should it deserve to rank equally with a user-friendly, accessible web page?

> ... single page apps might rank differently or worse than normal pages ...

From the original article, the most current source on this:

> Sometimes things don't go perfectly during rendering, which may negatively impact search results for your site.

> It's always a good idea to have your site degrade gracefully. This will help users enjoy your content even if their browser doesn't have compatible JavaScript implementations. It will also help visitors with JavaScript disabled or off, as well as search engines that can't execute JavaScript yet.

> Sometimes the JavaScript may be too complex or arcane for us to execute, in which case we can’t render the page fully and accurately.

> Some JavaScript removes content from the page rather than adding, which prevents us from indexing the content.

In the SEO community, Googlebot's improvements have been noted for a while now. See for example: http://ipullrank.com/googlebot-is-chrome/

Single-page websites, or application-as-content websites, are not popular among SEOs. One reason is that they don't allow fine-grained control over keyword targeting or over keeping the site canonical, and they can waste domain authority when you have fewer targeted pages in the index than you could rank for. Experiment and find out for yourself.

-----


Which I emphasized in the post :)

-----


> If anything but almost all of your website is static, you won't be saving all that much time.

Single page apps can easily be static (static HTML page + static JSON). The point of this would be to decrease the download size for each new page visited by the user.
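
A rough sketch of what that could look like (hypothetical file layout and element IDs, plain XMLHttpRequest so nothing but static files is involved):

    // Hypothetical sketch: the HTML shell and the per-page JSON files are all
    // static, so each navigation only downloads a small JSON payload.
    function showPage(name) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/pages/' + name + '.json');   // e.g. /pages/about.json
      xhr.onload = function () {
        var data = JSON.parse(xhr.responseText);
        document.getElementById('title').textContent = data.title;
        document.getElementById('content').innerHTML = data.bodyHtml;
      };
      xhr.send();
    }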

-----


I think you missed my point. In each web page downloaded there's a bunch of (basically constant) static data: the markup that pulls in your JavaScript files and sets up your document - your template, or similar. This is the only data that single page apps can eliminate; everything else must either be queried from the server or can already be cached.

Some sites obviously inline CSS or JavaScript, but that can be eliminated if necessary (and only affects the first page load anyway).

This information is free to generate on the server side, so it's not slowing down that computation at all (it's just a stringbuilder function, essentially). Furthermore, the transfer time is generally not the deciding factor - it's the server side time to put the rest of the information together.
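
To illustrate what I mean by a stringbuilder function (a made-up sketch, not any site's actual code):

    // Hypothetical illustration: the constant parts of the page are just
    // concatenated around the data that was expensive to fetch.
    function renderArticle(article) {
      return '<!doctype html><html><head><title>' + article.title + '</title>' +
             '<link rel="stylesheet" href="/site.css"></head><body>' +
             '<div id="content">' + article.bodyHtml + '</div>' +
             '</body></html>';
    }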

To give one example, I went to a typical website - the Guardian (a fairly standard high-traffic news site). Chrome informs me that requesting one article took 160ms to load the HTML: 140ms of waiting and 20ms of downloading. The RTT is about 14ms, so that's about 110ms of generating the web page and 20ms of actually downloading it. It's about 30kB of compressed HTML (150kB uncompressed), most of it 'static content': inlined CSS and JS.

Using the single page model would reduce their page download time (apart from the first page) by an absolute maximum of 20ms - which means the time to load each page would drop by about 12%.

This is fine, but almost all of that data is just the result of string concatenation and formatting - i.e. free processing (or at least almost-free processing). It's getting the rest of the data together (or a poor implementation) that's somehow taking the 100ms.

The cost of moving data around on websites is typically small compared to the actual production time of the content. That's why we see people preferring to inline huge amounts of CSS etc. on each web page and having people download it time after time - because it's only about 10kB compressed, the data transfer is inconsequential and is normally dominated by the RTT.

Spending all the time writing these frameworks because of performance benefits is a fallacy - the data still has to be generated somewhere, and if it happens dynamically it's slow as hell. The savings can never become that great - at most they lead to 20-30ms of improvements if bandwidth is acceptable.

Writing the frameworks because they make development easier is a much more reasonable argument.

All of this still distracts from the fact that non-static websites are typically dog slow, and they shouldn't be.

-----


> But, have Yahoo or Bing or DuckDuckGo made the transition to be able to crawl the web with a full JS & DOM rendering engine?

They can just use PhantomJS (http://phantomjs.org/), which is free and open source.
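
For example, a minimal PhantomJS script along these lines (just a sketch; the fixed one-second wait for client-side rendering is a crude assumption) loads a page, lets its JS run, and prints the resulting DOM:

    // Run as: phantomjs render.js http://example.com/some-page
    var page = require('webpage').create();
    var url = require('system').args[1];

    page.open(url, function (status) {
      if (status !== 'success') {
        console.log('Failed to load ' + url);
        phantom.exit(1);
      } else {
        // Crudely give client-side scripts a moment to finish rendering.
        setTimeout(function () {
          console.log(page.content);   // the rendered HTML a crawler could index
          phantom.exit();
        }, 1000);
      }
    });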

-----


They could, but I wonder what it'd take to scale it to crawling that number of pages.

I think only Bing would have the cash and resources to build that.

-----


What they were saying before was that you always need an HTML fallback for JS-generated content. Now it seems they're saying you don't necessarily need one.

-----


Author here. Some of you are saying that this will lead to bloated, JS-heavy websites. I disagree. The JS necessary for a single page app can be something like 10 lines (plus jQuery or something similar, but that's already included in most normal pages anyway).
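
For what it's worth, a sketch of what I mean (hypothetical URLs and element IDs, assuming jQuery and one static JSON file per page - not the exact code from the post):

    // Hypothetical core of a single page app: intercept internal links,
    // fetch the static JSON for that URL, fill the template, update the history.
    function load(path) {
      $.getJSON('/content' + path + '.json', function (data) {
        document.title = data.title;
        $('#main').html(data.bodyHtml);
      });
    }
    $(document).on('click', 'a[data-internal]', function (e) {
      e.preventDefault();
      history.pushState(null, '', this.href);
      load(location.pathname);
    });
    window.onpopstate = function () { load(location.pathname); };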

A single page app isn't JS-heavy by definition, and a "normal" page (with HTML generated on the server) can easily be JS-heavy. It all depends on how you program it. Just keep in mind that single page apps don't necessarily need to use heavy frontend frameworks such as Knockout, Ember or AngularJS.

-----


> One potential problem here is that google will use this to widen the gap between it and the 'one page apps' web and other search engines (such as duckduckgo) that can't match it in resources.

There are free and open source tools available that would help search engines parse pages containing JS (PhantomJS comes to mind).

-----


It's not just tools, it's the cost of all that parsing and executing in a mock browser environment.

-----
