
No, both receive HTML. The real problem is that websites built with modern JavaScript frameworks (e.g. React, Vue, Angular, etc.) render most of their content using the browser's JavaScript engine. Search engine crawlers generally don't execute JavaScript, so they see little more than a header and a nearly empty body (how much depends on your use case). Rendora solves this SEO problem by acting as a lightweight reverse HTTP proxy in front of your backend server. It detects crawlers by checking whitelisted user agents and paths; when it finds one, it instructs a headless Chrome instance to request and render the corresponding page, then returns the final server-side rendered HTML to the crawler (or to whatever else is whitelisted in your config file). This is called dynamic rendering, and it has recently been recommended by both Google and Bing (see the links in https://github.com/rendora/rendora#what-is-dynamic-rendering)
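To make that flow concrete, here's a minimal Go sketch of the idea (not Rendora's actual source; the crawler list, the renderPage helper, and the ports/URLs are all illustrative assumptions):

    package main

    import (
    	"net/http"
    	"net/http/httputil"
    	"net/url"
    	"strings"
    )

    // Illustrative whitelist; Rendora reads its own list from a config file.
    var crawlerAgents = []string{"googlebot", "bingbot"}

    func isWhitelisted(userAgent string) bool {
    	ua := strings.ToLower(userAgent)
    	for _, bot := range crawlerAgents {
    		if strings.Contains(ua, bot) {
    			return true
    		}
    	}
    	return false
    }

    // renderPage is a hypothetical stand-in for asking a headless Chrome
    // instance to load the URL, run its JavaScript, and return the final HTML.
    func renderPage(pageURL string) (string, error) {
    	// ... drive headless Chrome via the DevTools protocol ...
    	return "<html>...fully rendered DOM...</html>", nil
    }

    func main() {
    	backend, _ := url.Parse("http://localhost:8000") // assumed backend address
    	proxy := httputil.NewSingleHostReverseProxy(backend)

    	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    		if isWhitelisted(r.UserAgent()) {
    			// Crawler: return the server-side rendered HTML.
    			html, err := renderPage("http://localhost:8000" + r.URL.Path)
    			if err != nil {
    				http.Error(w, err.Error(), http.StatusBadGateway)
    				return
    			}
    			w.Header().Set("Content-Type", "text/html; charset=utf-8")
    			w.Write([]byte(html))
    			return
    		}
    		// Normal browser: pass the request through untouched.
    		proxy.ServeHTTP(w, r)
    	})

    	http.ListenAndServe(":3001", nil)
    }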



So if I'm hearing correctly: if a User Agent is white-listed, Rendora will render the page on the server and return fewer assets (no CSS, fonts, etc.)?

If it's not white-listed, it will just let the request pass through normally to the backend server (e.g. a static file server). Is this correct?


>and return fewer assets (no CSS, fonts, etc.)?

No, it returns the whole page; it's equivalent to rendering the page in your browser. Everything in the initial HTML is the same, plus the HTML that the JavaScript framework adds to the DOM once the webapp's scripts have loaded.
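If it helps, a headless Chrome session is what hands you that "HTML after the webapp's scripts have loaded". A minimal sketch of grabbing the final DOM using the chromedp Go library (illustrative only; the URL is assumed, and Rendora's internals may differ):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/chromedp/chromedp"
    )

    func main() {
    	ctx, cancel := chromedp.NewContext(context.Background())
    	defer cancel()

    	var html string
    	// Navigate, let the page's JavaScript run, then grab the full DOM,
    	// i.e. the same HTML a browser user would end up with.
    	err := chromedp.Run(ctx,
    		chromedp.Navigate("http://localhost:8000/some-spa-route"), // assumed URL
    		chromedp.WaitReady("body"),
    		chromedp.OuterHTML("html", &html),
    	)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(html) // includes nodes that React/Vue/Angular added at runtime
    }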


Okay, so it renders the JavaScript webapp (Angular/React/Vue) into HTML on the fly (just as it would appear in a client's browser), but only for whitelisted User Agents, which would be crawlers, right?


Yes, exactly :) Crawlers see the same HTML that browsers see after DOM load, not the initial incomplete HTML sent by the server.
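An easy way to see the difference yourself is to request the same page with a browser-like User-Agent and with a crawler one, then compare what comes back (a sketch; the address and UA strings are assumptions):

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    )

    // fetch requests url pretending to be the given client.
    func fetch(url, userAgent string) string {
    	req, _ := http.NewRequest("GET", url, nil)
    	req.Header.Set("User-Agent", userAgent)
    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return string(body)
    }

    func main() {
    	url := "http://localhost:3001/" // assumed Rendora address
    	browser := fetch(url, "Mozilla/5.0")
    	crawler := fetch(url, "Googlebot/2.1")
    	// The crawler response should be much larger: it contains the
    	// framework-generated markup instead of a nearly empty <body>.
    	fmt.Printf("browser UA: %d bytes, crawler UA: %d bytes\n",
    		len(browser), len(crawler))
    }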



