Netflix: Removing client-side React.js improved performance by 50% (twitter.com)
357 points by pjmlp on Oct 27, 2017 | 163 comments



Hey, I work on the team at Netflix that gave the talk on React in the signup flow in the tweet.

The full talks are available here if people want to watch them:

https://www.youtube.com/watch?v=V8oTJ8OZ5S0&t=11m30s

Thought I'd also provide some more context on some common questions that people have asked.

### Why are you using React to render a landing page?

The Netflix landing page is a lot more dynamic than most people think it is.

It's our most heavily A/B-tested page in the signup flow, with even some machine learning models being used to customize the messaging and imagery that you get depending on location, whether or not you were a previous Netflix member, device type, and a lot more. Even beyond that, Netflix supports almost 200 countries now, and there's a different combination of localization, legal challenges, and value messaging for each one. We end up sharing a lot of the logic and UI for these A/B testing and localization challenges throughout the signup flow, mainly through React components.

The example I always love to give is the <TermsOfUse/> component that we have, which to a Netflix customer signing up is literally one or two checkboxes on the UI, but has some of the most complicated logic in the codebase due to the vast number of countries and user states we support. Because of all this, it's more valuable for us to share these common React components across the entire signup process, both the landing page and the rest of the flow, which is a single-page React and Redux application.

We've seen a lot of conversion value, though, in improving the performance of the landing page, especially in countries with slower connections, but we also don't want to duplicate a lot of the shared UI logic that we have.

The tradeoff that we decided to make is to server-render the landing page using React, while also pre-fetching React, Redux, and the code for the rest of the signup flow while the user is on it. This optimizes first-load performance, and it also optimizes the time to load the rest of the signup flow, which has a much larger JS bundle to download since it's a single-page app.
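A minimal sketch of the pre-fetch half of that tradeoff (the bundle URLs here are made up, not our real ones): the landing page can emit `<link rel="prefetch">` hints so supporting browsers pull the signup-flow bundles down at idle priority, warming the cache before the user ever navigates.

```javascript
// Sketch of the pre-fetch idea (bundle URLs are hypothetical, not Netflix's real ones).
// Browsers that support <link rel="prefetch"> fetch the listed resources at idle
// priority, so the SPA bundle is usually already cached by the time it's needed.
function prefetchHintTags(urls) {
  return urls.map(function (url) {
    return '<link rel="prefetch" href="' + url + '">';
  }).join('\n');
}

// Example: emit hints for the signup-flow bundles after the landing page's own HTML.
var tags = prefetchHintTags(['/static/signup-flow.bundle.js', '/static/vendor.bundle.js']);
```

An XHR fetch of the same URLs works as a fallback for browsers without prefetch support, since it warms the HTTP cache the same way.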

### What's the performance metric that's being used?

It's TTI (Time to Interactive): the point when the user can fully interact with the page. This is different from TTR (Time to Render) for us, which is when the user can fully view the page. There's more information in the talk about the differences.

### Why not Service Workers or some other pre-loading / caching mechanism?

We have been experimenting with Service Workers, but the blocker is mainly the lack of browser support, with Safari being the biggest gap. Generally the Netflix signup flow needs to have more legacy browser support than the Netflix member experience. Lots of people sign up on a pretty old browser, but only ever watch Netflix on the native mobile apps or a TV device.


Feel free to comment here or tweet at Tony (https://twitter.com/tedwards947) or me (https://twitter.com/clarler) if there are any other questions that we can help answer.

Though from a lot of experience, 140 characters isn't always enough to provide enough context for JavaScript framework discussions. ;)

If these sorts of performance and UI challenges seem interesting to you, our team is also hiring for UI engineers and an engineering manager!

* Senior Software Engineer (React, Node): https://jobs.netflix.com/jobs/864767

* Senior Software Engineer (Android): https://jobs.netflix.com/jobs/864766

* Engineering Manager: https://jobs.netflix.com/jobs/865119

Cheers!


I work on React. We’d love to hear from your team sometimes and collaborate on this sort of thing. We’re solving many of the same problems but I rarely hear from Netflix engineers except at talks when announcing they’re avoiding React or have forked it, often for reasons we weren’t even aware of.


Hey spicyj!

I'd love to sit and talk React with you and your team. We're always particularly interested in performance optimizations so maybe we can swap some knowledge.

Not aware of any team that is actively avoiding React but we're a fairly prolific bunch when it comes to public speaking and knowledge sharing so perhaps I missed some announcement from another UI team. A few of my colleagues will be speaking at the SFHTML5 meetup[1] in a few weeks if you'd like to come hang out and talk shop or feel free to drop me a line at jem@netflix.com :).

1 - https://www.meetup.com/sfhtml5/events/244074642/


I had a terrible time trying to find React performance _monitoring_. There's plenty of performance _troubleshooting_ tooling once you already know which component is slow, but nothing that monitors.

Did I miss something? I see some `measureLifecycleperf` functions in the react source but those look like dead ends.


So in other words: the main reason removing the React code in the frontend in this case gave such an immense performance benefit is that the logic used to compute the rendered UI is significantly more complex than the logic necessary to make that UI interactive.

It's not React that's slow, it's the logic needed to render the page?


That's pretty close. For our use case (and many others), the biggest culprit for increased TTI is the sheer size of the JavaScript payload and the time the browser takes to parse the whole bundle[1]. We still use React and other libraries on the server to generate the HTML, but rather than sending all that JS down, Tony worked out exactly which client interactions still needed JS and wrote those in plain JS, which resulted in a big TTI win as the browser simply had less data to parse.
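As a rough sketch of what that looks like (simplified, not our actual code): instead of hydrating a whole React tree, the client just binds a handful of handlers to the server-rendered markup. The element here is anything with an addEventListener-style API, so the idea itself is framework-free.

```javascript
// Minimal sketch of swapping React hydration for targeted plain-JS enhancement
// (illustrative only). The server has already rendered the markup; the client
// only wires up the few interactions that truly need JS, e.g. a terms-of-use
// checkbox toggling the signup button.
function enhanceToggle(el, onChange) {
  el.addEventListener('change', function (event) {
    onChange(Boolean(event.target.checked));
  });
}
```

In the browser you'd call this with something like `enhanceToggle(document.querySelector('#tou'), fn)`; the function names and selectors are made up for the example.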

1- https://www.youtube.com/watch?time_continue=1297&v=M1qm-AWWu...


Hi, could you post some before-and-after traces (Chrome DevTools recordings) of the site? Performance is nuanced, and the original tweet only helps to put down others (the React team) unfairly.

I think the win here is less about react and more about not running as much javascript on the client. I hypothesize that the traces will show this, but it is impossible to know for sure without them :)

Thanks in advance!


> the original tweet only helps to put down others (the react team) unfairly.

There was nothing personal or malicious in that slide at all. Do we need a safe space to talk about performance metrics now?


Let’s say you think it’s not React or you have time to improve React to solve that problem: wouldn’t having some data help test that assertion and make changes?


We have the data: they removed React and performance increased by 50%.

Why should any more work be put into keeping React or improving it if their solution of removing it entirely already works?


Well, for one, because they did /not/ remove it entirely: they still use it on the server.

It doesn't "work", it's a hack.

50% isn't data. It's a claim without evidence. The evidence in this case may help fix React.


In the talk you say that you reduced JS and therefore lowered TTI and saw more clicks on the signup button. Is it possible that you are just _tracking_ more clicks, but the total is the same?

Does it really matter how much JS there is if it's loaded after HTML and the user can already move to the sign up step without it?


Please make the search textbox always visible. Most of the time I go to Netflix.com, it's to see if it has the movie I want to watch. (I live in Sweden; Netflix is pretty small here.)

And going to the front page of Netflix, clicking the Search icon is not a pretty experience. What happens next is a chuggy chuggy animation of the textbox gliding out, and typing in it is a disaster of input lag.

I bet you fire an event for each key I press while in there? Perhaps hold off on that until the rest of the page has loaded?


Thanks for taking the time to answer everyone's questions!

Do you simply use Node.js to do the SSR? I've seen some complaints about the difficulty of scaling Node to run well in a cluster, and about security issues. Have you had to deal with that?

I'd honestly love to see what you did there. People use Java because all of those questions have decent answers by now.


I've been ranting here occasionally about the infuriating unprofessionalism of a web app that can't operate without JS. There is very little that you can do with JS that can't also be done with the help of a server, unless you start making up overly-specific requirements about what technologies are used, or writing user stories for robots. As a diehard NoJS guy and a developer who uses React professionally, my community's acceptance of the problem, and obliviousness to the solutions established by the React devs themselves, are pretty embarrassing. And it's not like I hate the language; Node has been my go-to application server for years now (Clojure is displacing that for me, but I digress).

Last year, I was building a simple SVG based chart dashboard for internal usage. Being a NoJS guy, I would sometimes disable JS while developing, on purpose or otherwise, and aside from forcing me to manually hit refresh in the browser, things generally worked. I added a couple links (styled as buttons) for zooming, to supplement the JS based drag-window zoom, and let the browser scroll the potentially very wide SVG chart within a div. If necessary, I could have even embedded the whole thing in an iframe, to avoid triggering whole page reloads, but our caching story was tight enough to compensate. Also, the React-rendered SVG represented the bulk of the markup on the page anyway.

Interactive visualisation, no JS needed. It added maybe an extra 10% to my workload (we already had the SSR stack), and helped me to avoid a variety of little glitches that plague many client-only apps, glitches that users learn to tolerate with mild disgust. The satisfaction of seeing our in-house dashboard pop up "instantly" with data, while Parsely and New Relic were still churning spinners or stuttering while waiting on JS, or even waiting for initial data after waiting on JS, was very cathartic. TTI can equal TTR, we have the technology, we've had it for a decade or so.


Split code by routes? Split interactions and the view layer? fiber? Stream the render?


So just keep throwing more shit at it till it gets faster?


Yes, just like Intel. We engineered our way into this, we'll engineer our way out, and then smooth it all over with more engineering. Entropy wins again.


It is quite telling that you give the example of the <TermsOfUse/> component which has "one or two checkboxes on the UI" that "has some of the most complicated logic in the codebase".


I can't quite tell what you're implying here... but this makes total sense to me.

Every country has different regulations, and therefore would require different wording and agreements as a Terms of Use. Anyone who has worked at a multi-national company understands this is just The Way It Is.

At Nike there is a ton of logic on a per-country basis. For example, they might own the trademark on the word "FlyKnit" in most countries, but in Italy they don't and get fined if it's ever misused. There's a TON of development work that caters to legal / regulation problems like this.


I was at the All Things Open conference this week, and Yehuda Katz gave a talk on Glimmerjs[0].

It was enlightening. The size of your front-end application is generally dominated by view code. So, they precompile views into a super simple set of binary VM instructions (making your views very compact). These views don't go through the JS compile / parse phase on the client (saving hundreds of ms, up to seconds on slow devices). They are instantly executable by a small VM written as a JS library, and can be streamed in and rendered as they arrive (the way plain ol' HTML pages do). Turns out to be hella fast, but still gives the benefit of rich client-side rendering capabilities.

[0] https://glimmerjs.com/


For folks who want to play around with this, we've got an interactive playground at https://try.glimmerjs.com/ that lets you add components, templates and helper functions.

In a nod to The Net, you can click the π symbol in the bottom right corner where you'll get a debug view of the disassembled binary bytecode.


Tom Dale recently did a great talk at ReactiveConf about Glimmer that I can highly recommend watching: https://youtu.be/62xd25kEZ3o?t=7h40m


If I'm not mistaken Angular 2+ also has this. It's named AOT compilation (ahead of time).

Edit: so does Vue if you're using vueify (https://github.com/vuejs/vue/issues/4272).


It's not the same, though. Those are compiling textual templates into executable JavaScript. React/Preact do this automatically, too.

What Glimmer does is it converts your views into bytecode (not JavaScript) which can be streamed and run without a JavaScript compilation / parsing phase in the browser. Vue, React, and Angular views all ultimately are translated into JavaScript which has to be compiled / parsed in each browser.
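To make the distinction concrete, here's a toy sketch of the approach (this is emphatically not Glimmer's actual opcode format): the compiled template is pure data, and only a small interpreter ships as JavaScript, so the templates themselves never go through the browser's JS parser.

```javascript
// Toy illustration of a compile-to-opcodes view layer (made-up opcode format).
// A template like "Hello, {{name}}!" is compiled ahead of time into a compact
// list of instructions; a tiny interpreter turns instructions plus dynamic
// data into output at runtime.
var OP_TEXT = 0;   // emit a static string
var OP_VALUE = 1;  // emit a dynamic value looked up from the data object

function run(program, data) {
  var out = '';
  for (var i = 0; i < program.length; i++) {
    var op = program[i];
    out += op[0] === OP_TEXT ? op[1] : String(data[op[1]]);
  }
  return out;
}

// "Hello, {{name}}!" compiled ahead of time:
var compiled = [[OP_TEXT, 'Hello, '], [OP_VALUE, 'name'], [OP_TEXT, '!']];
var html = run(compiled, { name: 'world' });
// html === 'Hello, world!'
```

Because the program is plain data, it can be shipped in a binary encoding and streamed, which is the property the Glimmer talk emphasizes; reactivity would add re-run bookkeeping on top of this.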


I'll have a look at Glimmer later, but who executes this bytecode?


> They are instantly executable by a small VM written as a JS library

Guess you have download the VM on initial page load.


Yes. It's about 18KB of JavaScript.



Is it more similar to Vue server-side rendering (https://vuejs.org/v2/guide/ssr.html)? Or are Glimmer and server-side rendering two different beasts?


Two different beasts.


That's pretty exciting. I'm curious when we can just compile our views into some IR or byte code that can be natively run. Isn't that kinda what WASM is trying to do?


Why use binary VM instructions with a VM written in JS instead of just straight binary data? Why does it even need to be instructions in the first place?


It needs to be instructions because the views aren't static. They are reactive. So some of the content is static, some might change due to user interaction. The VM understands this fundamentally and optimizes for that.


hbar is no longer Planck's constant but the speed at which a metacircular hypecycle can JIT itself on your runtime.


I'm not clear on where the logic is needed. Even if there are cells or boxes that need to be a percentage of their parent and/or have min and max sizes etc, that can still just be binary data.


Anyone thinking of betting on Ember or Glimmer:

Word of advice / wisdom from having built multiple platforms / sites with it:

Don't.


Glimmer itself is made by Ember?


Yes, the Ember core team is behind Glimmer; you can use it today in Ember.


Wow that site loads instantly


It's a tiny page of static html.


Oh, I thought they were using their own library


[flagged]


If you can get all three of these-- good performance, simple application code, and a rich UI experience-- then yes, it's a win.


[flagged]


Honestly you're just making up non-genuine complaints and criticism in bad faith.


It's worth watching the video to understand the context: https://youtu.be/V8oTJ8OZ5S0?t=1012


Definitely worth it just to learn about the prefetch API and the XHR prefetch technique


Sounds like, from the comments, they still use React; they just defer loading it until after the page is otherwise usable.

I think this is a fairly common use case for server-side rendering in React, and I wonder how much of this improvement they would have seen just by SSRing the page and putting the React scripts at the end instead of in the header (or using defer). This is the strategy I use with any React marketing-type pages.

Edit:

Just to give more detail in case anyone is wondering: usually marketing pages are largely the same for every user (unless you are doing identity-based personalization or A/B testing), so often you can get away with pre-rendering all the HTML generated by React. Then you don't even need to run React server-side; you just serve up a static file.
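A minimal sketch of that build-time step (the page content below is a stand-in so the example is self-contained; in a real React setup you'd call ReactDOMServer.renderToStaticMarkup on your page component instead):

```javascript
// Build-time pre-rendering sketch. A real setup would render the React page
// component to markup once at build time; renderPage here is a hypothetical
// stand-in for that render call.
function renderPage(title, body) {
  return '<!doctype html><html><head><title>' + title +
         '</title></head><body>' + body + '</body></html>';
}

var staticHtml = renderPage('Sign Up', '<h1>Watch anywhere.</h1>');
// At build time you would then write the result to disk, e.g.:
// require('fs').writeFileSync('dist/landing.html', staticHtml);
```

The web server then serves `dist/landing.html` (a hypothetical path) as a plain static file, with no rendering work per request.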


Yep, and personalization and A/B testing can be accommodated with query params. By whitelisting those AB params in the caching layers, you can mostly prevent a performance difference between A and B, something that may have messed up our results with a 3rd party AB testing service.
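A rough sketch of that whitelisting idea (the param names are made up): the cache key keeps only the approved A/B params, so the cache stores one variant per test cell instead of one per arbitrary query string.

```javascript
// Sketch of a cache-key normalizer: strip every query param except a whitelist
// of A/B test params, so cache hit rates (and thus performance) stay the same
// for the A and B groups. Param names are hypothetical.
function cacheKey(path, query, allowedParams) {
  var kept = Object.keys(query)
    .filter(function (k) { return allowedParams.indexOf(k) !== -1; })
    .sort()  // stable ordering so ?a=1&b=2 and ?b=2&a=1 share a key
    .map(function (k) { return k + '=' + query[k]; });
  return kept.length ? path + '?' + kept.join('&') : path;
}

var key = cacheKey('/signup', { abCell: 'B', utm_source: 'email' }, ['abCell']);
// key === '/signup?abCell=B'
```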


Even facebook acknowledges that React should only be used when complex interactivity is more important than initial load-times: https://youtu.be/TMun9g5o7OA?t=47m28s


Well, of course it will. You are trading loading an entire application for loading a static page.

I'm always amazed at those devs who load entire application frameworks when their end goal is to have a slider at the top of a page.


"static page"



If you get your SSR right, you can just not serve javascript for landing pages and readonly dashboard type things. This is what Hyperfiddle does - all readonly things work 100% with javascript disabled (try disabling it in the dev tools)

http://hyperfiddle.net


I like how the people in replies to the top tweet seem to get angry at someone implying that React is not great.


For me they mostly show people asking for numbers and people making fun of them for using React for a boring landing page.


You find what you expect to find.


From the slides, it looks like the performance metric in question was TTI (Time to Interactive).


Today we learned: Removing an advanced library from a simple project increases performance.

Sorry for being sarcastic but don't you also find this too obvious?


I work on the team at Netflix that gave this talk.

Thought I'd share a bit more context on the reasons that we did this beyond a picture from a tweet:

https://news.ycombinator.com/item?id=15568305

Let me know if you have any questions!


I think it's the 50% part that we're meant to pay attention to. For me that's a pretty drastic, "why would you ever put this on the client side now that you know this" kind of thing.


The caveat is that this is in reference to a _landing page_ only. If you try to write an advanced application with vanilla.js and you aren't some Javascript guru, then you are going to really hate yourself later on -- I guarantee it.

Also, that 50% is only measuring the time to interactive on what I imagine is the first page visit. After everything becomes cached, I would think that the benefit is dramatically lower.


[flagged]


That site has been around a while, and is definitely a joke ;) No matter what you check off to include in the library, the generated file is 0 bytes.


Default choices on that page show the following file size:

  Final size: 0 bytes uncompressed, 25 bytes gzipped.


The gzip format has a 10-byte header, an 8-byte footer, optional additional headers, and the DEFLATE-compressed content.

So, yeah, +25 bytes seems reasonable.


Yeah, but it bloats up considerably when you add in the other features.


Dude, it's a joke. Lighten up.


Because it provides a robust, consistent, isomorphic framework. Many people have decided that certain benefits (for us: common tooling, maintenance, consistent implementation across the org, write-once-run-twice isomorphism) outweigh the costs (time to interactive, download size, etc.). Everyone has to make their own choices, but this news was not surprising and did not change our opinion.


Playing devil's advocate for a moment:

So, the developer experience trumps the user experience, in an industry where fractions of a second of load time can cost a company customers and conversions?


Not at all. The objective for a landing page is very different from all the other pages.

Landing Page: Make it load instantly so that a new user clicking on it doesn't get frustrated and close the window before it's ready to go. This is typically a static page. Note the tiny and heavily optimized google.com

Other Pages: It's sometimes necessary to load larger js libs gradually, but generally once they are loaded the browser cache makes their size a non-issue.

In summary: It's worthwhile to pay the cost of libraries, but that cost should not be paid by visitors to the landing page, and if the cost is high it might be necessary to load them in phases to preserve a good UX.


Life is full of trade-offs, gaining 1% more customers for a 10% increase in your engineering costs is not always a good deal. Being customer-centric is good, but not at all costs.


At 1 second, it's closer to a 5% loss. 5 seconds, 20%. 10 seconds is a whopping 50% loss.

And yes, I've seen a lot of landing pages which are not viewable, let alone interactable, for 5+ seconds, an effect exacerbated by mobile.


Better DX = more time to work on UX


This is a very salient point.

Ultimately many businesses will sacrifice polish and dev testing for features and deadlines on the skull-encrusted altar of short sprint cycles and rapid iteration.


Bad UX and buggy implementation can also cost a company customers and conversion. Sometimes the trade off is worthwhile.


You can't lose customers you never converted in the first place.

Does React really eliminate the potential for buggy implementations? Also I'm fairly certain that UX has more to do with design than your choice of frameworks.


You also have to consider what side benefits libraries provide. Eg, we're likely choosing React (first time in company using React) because we're choosing React Native for our mobile apps. This is because we're a very tiny shop, with very limited developers. We simply don't have the manpower to hire or learn both Android and iOS development.

Now, we reviewed a handful of iOS/Android abstractions, and React Native seemed competent - so we're using it for our first pass. Likely, we'll use it for the web frontend too, because we're using React Native. More familiarity can be a good thing.

As I discussed internally when we were choosing frameworks, if I was just choosing a web frontend it would never be React. It's not that I think React is bad, it's just that there are a dozen frameworks similar to it that are faster. It popularized a new good idea (vdom), but it's just not best in breed imo. Yet, having the same concepts and very similar codebase between mobile apps and frontend is a pretty big deal for us.


It may not eliminate them, but it's much better than my memory of developing interactive interfaces with lower level libraries like jQuery that force me to keep track of everything having to do with the DOM in my own logic.


If all their home page is is a bunch of images and hyperlinks, then I imagine removing React made a lot of sense.


"Premature optimization": build it first, optimize later. React helps a lot with the first part.


There is such a thing as premature pessimization: making choices based on intuition about productivity benefits or personal taste that force you down the road to poor performance.

I think it's smart to include performance requirements in your specifications before starting a project (or when taking on the maintenance of one too). Set an upper limit on your TTI or server response times. Turn it into a budget. That's not optimization that's just engineering.
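A sketch of what enforcing such a budget might look like in CI (the metric names and thresholds are made-up examples, not a standard): each metric gets a hard ceiling, and the build fails if any measurement exceeds it, which turns "be fast" into an enforceable requirement.

```javascript
// Hypothetical performance-budget gate for a CI pipeline. Returns the list of
// metrics that exceed their budgeted ceiling; a non-empty list fails the build.
function checkBudget(measurements, budget) {
  return Object.keys(budget).filter(function (metric) {
    return measurements[metric] > budget[metric];
  });
}

var violations = checkBudget(
  { ttiMs: 4200, bundleKb: 180 },   // measured on a representative device
  { ttiMs: 3000, bundleKb: 200 }    // the budget agreed up front
);
// violations === ['ttiMs']
```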


If you can't gain traction in a market due to lost conversions, you'll never have a chance to optimize it.

Not to mention that some infrastructure choices make it really hard to optimize later, if not impossible (sans re-writes).

Also worth noting: the phrase "Premature optimization is the root of all evil" had a very different connotation when first uttered:

http://ubiquity.acm.org/article.cfm?id=1513451


"'Premature optimization is the root of all evil' is the root of evil"

https://medium.com/@okaleniuk/premature-optimization-is-the-...

http://ubiquity.acm.org/article.cfm?id=1513451

http://www.joshbarczak.com/blog/?p=580

http://scottdorman.github.io/2009/08/28/premature-optimizati...

It's basically an excuse to not think about the performance or implementation of your entire product until the end and then, when you use a profiler and chop away "10%" here and "20%" there, you still end up with a slow-as-piss program with every function taking less than 1% total time. Then what do you do?

Oh yeah, what you should have done in the first place. _Think_ about how your program actually functions so you can remove all of the insane systemic architectural decisions you made that each bleed a fixed amount of time on every single function call. That's the difference between an engineer and a script kid. If you're not reasoning about _the entire platform_ (all the way down to the cache lines), you're not engineering. You can abstract all you want, but those abstractions don't magically prevent the underlying hardware and architectural decisions from affecting the product. Abstractions are tools (read: approximations!) for high-level reasoning. They're not magic "you don't have to think [about low-level implementation]" spells.

Actual experts keep trying to warn you guys about abusing that quote, but nobody apparently ever listens. Stop quoting it like a bible verse, because just like bible verses, everyone forgets the entire context and just uses it like a bloody bumper-sticker slogan.

In the words of Saul Williams: "You have wandered too far from the original source, and are emitting a lesser signal."

The full Donald Knuth quote:

>The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.

That quote sounds much less like a law handed down by God, and more a guideline from experience, no?

Optimizing the wrong things (things that get thrown away when you change your program as you work toward your goal) ends up wasting time. But just because that was applicable _in the 70's_ doesn't mean modern programmers have the same mentality of "optimize optimize optimize." Modern programmers have the opposite. They're so afraid of having to learn how cachelines work, they use that quote to prevent having to do any optimization (or understanding of architectural layout tradeoffs). I can't count how many programmers I've met that hear "cache" and their eyes glaze over like it's some magical, incalculable thing.

Watch a lecture from Mike Acton, where on consoles, they don't have the luxury of writing poorly written, un-optimized code. They _do_ have to understand the entire platform to ship a competitive title. Because any optimizations they do, for a fixed platform, means they can pump out a better experience with those extra CPU cycles. An experience that elevates them above their competitors.

https://www.youtube.com/watch?v=rX0ItVEVjHc


Calm down. This is for a single, static landing page.


> Calm down.

A rather unnecessary statement, don't you think?

> This is for a single, static landing page.

Which at one point wasn't a static landing page. Netflix has something of a captive market at this point in time; but could a newer startup scrambling to gain users afford to make the same choices?


I would hope that a startup scrambling to gain users would be able to explain in a simple static html site why their service is worth bothering with.

If I'm not particularly interested, I'm not going to wait around for mountains of JS to download and execute, and the typical content-free startup landing page full of hero images, stock photos, and vague bullet points to load up.


> Calm down.

Please don't break the HN guidelines by being uncivil. Your comment would be much better without that bit.

https://news.ycombinator.com/newsguidelines.html


> and did not change our opinion.

If this comment section is any clue, clearly it's not going to change a React programmer's opinion, because they completely lose their shit if you mention a flaw in React.


This isn’t a flaw in React; this is using the wrong tool for the job. React is great, but it doesn’t solve ALL problems.


I believe my case was pretty clearly articulated, as are some of the other comments, and gives a thumbnail sketch of the tradeoff comparison we have done and continue to do. I'm sorry you believe the worst in this community.


What the magnitude of the 50% is would be nice to know. If we're talking 50ms to 25ms, it would seem like less of a big deal than, say, 500ms to 250ms.


No this is the wrong conclusion to draw.


You are correct. This is such an obvious, simple thing that anyone with a small bit of experience optimizing a static website for render completion would understand.

React is designed to allow a server-side render and an async load of the js library to allow it to be used in this kind of scenario without impacting perceived load time or "time to interaction".


> Sorry for being sarcastic but don't you also find this too obvious?

If we pay attention to how bloated most web pages are, it seems it is not obvious at all.


I was thinking about the same thing. Well, you could remove CSS as well; I bet it would improve the performance.


Go next level: Remove the whole browser and make a native application - gain even more performance!


I just use lynx to browse and block downloading js files by default. I wrote this comment using ed and curl.


I don't know why you'd use such heavy duty applications for such a simple use case. I merely use a butterfly to induce cosmic rays to flip the bits of memory needed to send a http request.


A butterfly is way more complex than React.js.


How true, in fact butterflies are more complex than all software put together :P you just broke that whole branch of xkcd jokes dammit!


Ha, you kids and your fancy lynx browsers. I do just fine with Netcat and typing in the HTTP requests by hand. Works just fine!


I wrote this with a torch and a length of fibre optic cable.


Netflix website is a simple project!? Wha?


You would be right but they mean the damn landing page :)


it displays information about videos that are stored on a server - it's not that complicated.


oh yeah.. some jquery or vanilla JS would be fine. no need to abstract a workflow or anything.. it's literally like a scrolling marquee.. a couple of guys on it should be sufficient.. No need for Git either.


What's funny is that your conclusion is incorrect.


It would be much better to say how. Then your comment would be substantive and we would learn something.


True. I was replying to a sarcastic comment in a sarcastic way. Mea culpa.

> Removing an advanced library

They made it clear they didn't remove the library. They still load it later, and still use it.

> from a simple project

They make it clear it's not a simple project. In fact, it's probably one of the more complex pieces of the project.

> increases performance.

No, it increased specific metrics that they cared about.

> Sorry for being sarcastic but don't you also find this too obvious?

Your comment, or their results? Your comment is incorrect based on a single slide and taken out of context. As for the actual reasons, no, they are not obvious at all. Incredibly intelligent people didn't find it obvious, as this was something they clearly didn't see at the beginning, and instead relied on measurements.

Premature optimization and all that.


> Today we learned: Removing an advanced library from a simple project increases performance.

The library that slows down the UX terribly is "advanced", while the project that is the victim is "simple".


> ...improvement on our landing page

React on a landing page sounds like overkill to begin with.


In the talk they say they are still using React server-side to make this mostly static page, presumably with some cache in front of it.


So basically they write React on the front-end, then use react-renderer to generate static HTML and then serve it from the back-end?

Seems like, at this rate, JS frameworks are going full circle back to back-end rendering soon enough.


Everything is a circle in web development. It appears to take about five to ten years for developer personnel to turn over close to 100%.

And so the cycle continues, with old ideas dusted off or rediscovered, the original flaws in them rediscovered, the reaction to those flaws, traveling well-trodden paths, then the reaction to the reaction, "best practices" shaking out, an explosion of complexity, and then someone burns down the baroque cathedral of ideas with, if we're lucky, a slightly less flawed version of the original old idea.

I've now done ASP.NET long enough to see WebForms come in, reach high tide, recede before the new hotness of MVC, MVC decline in favor of WebApi with monstrosities of JS frameworks in front of it, and now Razor Pages are coming back in as a simpler alternative, with concepts that an old WebForms developer from fifteen years ago wouldn't raise an eyebrow at.


Actually that is exactly what I thought when they introduced RazorPages as well.

I really like .NET Core, but RazorPages is the single worst "improvement" to the .NET Core platform.

I even remember watching the presentation at the BUILD conference: nobody applauded when they graciously introduced this "feature", and you could feel the cringe. Scott even had to say "Isn't this awesome!?? Come on!".

No, Scott, it's not awesome. It's going backwards.


Except "react-renderer" is just React.

As some people who have been paying attention said from day one: other than the component model and unidirectional data flow, what makes React stand out is that it's ultimately a way to describe a UI in terms of functions that output a component tree. Whether you apply that component tree to the browser DOM, use it to generate static markup, or glue together native components as with React Native makes no difference.

A lot of the more complex features only make sense in an interactive context (i.e. when rendering in the browser), but React is a great choice for server-side rendering too. And unless your initial rendering logic is as complex as the Netflix landing page's and your interaction logic as trivial, you can just hydrate what the server spits out and go on from there.

The difference is that historically we used one code base to generate the initial markup and another to breathe life into it. With React we can do both with the same code. And in this case Netflix just skips the client-side rendering in favour of a tiny bit of JS, which means they're still using the same language, which is still better than the mess we had in the early '00s.
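A rough sketch of that "same code on both sides" idea, with no React and all names invented for illustration: a component can return both its markup and the behaviour to attach. The server uses only the markup; the client re-runs the same component to wire up handlers, which is roughly what hydration does.

```javascript
// One component function serves both server and client.
function SignupButton(state) {
  return {
    html: `<button id="join">${state.label}</button>`,
    // The client-side half: which handlers to attach to the existing markup.
    bind: (dispatch) => ({ '#join': () => dispatch('JOIN_CLICKED') }),
  };
}

// Server: render the markup only and ship it as static HTML.
const serverHtml = SignupButton({ label: 'Join now' }).html;

// Client: re-run the same component and attach its handlers ("hydration").
const events = [];
const bindings = SignupButton({ label: 'Join now' }).bind((e) => events.push(e));
bindings['#join'](); // simulate a click
```

The markup is never generated twice by two different codebases; the client simply picks up where the server left off.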


This is largely what we do, except with Angular. We generate the Angular markup dynamically server-side and then serve it up on a per-user basis.


This is what we need for LinkedIn. I am yet to see a slower site.



Yup, the Twitter replies are pretty much what I expected. This tit-for-tat JS pissing contest is so old. Just use what makes sense for your project, there are no silver bullets.


Everyone uses these types of headlines to justify their positions.

The statement was meant to provoke people, and honestly if it wasn’t Netflix nobody would give a damn. Because the real headline is “We chose the wrong tool for the job! Even pros make mistakes”.


This.


We learned this the hard way :P

We avoid fancy tools as much as possible and use basics as much as possible.

Laravel + Bootstrap + JQuery + MySQL works better than this modern stuff.

Easy to implement, deploy and manage.

P.S. Use the knife you really need. Try to use a simple knife as much as possible.


> Laravel + Bootstrap + JQuery + MySQL works better than this modern stuff.

It's interesting to remember that Facebook started out as a PHP application. I imagine that Facebook's original tech stack closely resembled the stack you're recommending. And scaling that stack is what led to the creation of React. If you choose to build a rich user interface (like Facebook or Netflix) on top of that stack, you will likely find yourself creating abstractions to manage the complexity. Those abstractions will be equivalent in their utility to React, Ember, Angular, etc.

I agree that a website does not always need or benefit from a React or an Ember. Small dynamic behaviors can be implemented with less powerful tools. But the power of your abstractions will need to scale with the complexity of your UI behavior. That's why, at a certain point, the abstractions provided by React _are_ the right choice and jQuery is not.

Regardless, the slide featured in this tweet does not comment on React working or not working well. As they say in a reply to that tweet, the application still loads React and renders React components. What they are presenting is an interesting optimization that has to do with the unique performance constraints of Netflix's UI.


That's completely untrue.

I've built both SPA and traditional fullstack applications similar to the mentioned architecture (although I prefer Python over PHP).

Fullstack is a fraction of the complexity, with much more power. You can even use a simple library like PJAX to get all the same speed boosts an SPA offers without introducing any of the additional complexity.

To be honest, I don't see what React offers beyond what you get with a templating language like Handlebars, and a few lines of jQuery for initializing components.

This is, of course, for database heavy applications where each action requires the backend. If you're building a game, or an app that's mostly offline, SPA can be helpful.


SPAs aren't meant to solve the issue of offline apps or of building games in web browsers. Their target was to address the difficulty of building, maintaining, and extending an ever-changing UI in a corporate development environment.

Having worked on a UI in this setting, built with ERB, jQuery, and PJAX... it is not about the degree of difficulty of building it, but of extending/modifying it, and of collaborating with multiple crews in the same codebase. These are problems enough teams have experienced that they have moved away from the jQuery + Handlebars approach.

Now granted, SPAs have a bandwagon effect because of where the job opportunities are, so you get people using them not to solve those problems, but to align their technical skills with employable skills.


I have a hard time believing that you built a non-trivial SPA that you consider easy to maintain without an abstraction beyond jQuery.

There's just inherent complexity in software, so it's like suggesting that you built an application without ever writing code outside of main() (functions are for hipsters).


I mean, you're welcome to have a look at it https://github.com/IEMLdev/Intlekt-editor

An example of how I put together a component is here https://github.com/IEMLdev/Intlekt-editor/blob/master/src/sc...

- My "view" function takes a template string and replaces the node's content whenever "render" is triggered.

- I create a "model" object which triggers a "render" event anytime it changes. You can also give the model object methods which act as Redux-style reducers.

- The controller method is a shorthand for registering multiple jQuery events at once.

That's pretty much all you need for a complex app. It's the entire React-Redux architecture à la jQuery + ES6.
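The model-with-reducers pattern described above can be sketched in plain ES6 with no libraries at all (all names below are mine for illustration, not taken from the linked repo):

```javascript
// A model whose reducer-style methods re-run a render callback on every change.
function createModel(initialState, reducers, onRender) {
  const model = { state: initialState };
  for (const [name, reducer] of Object.entries(reducers)) {
    model[name] = (payload) => {
      // Redux-style: each method computes the next state with a pure reducer...
      model.state = reducer(model.state, payload);
      // ...and every change triggers a re-render of the view.
      onRender(model.state);
    };
  }
  return model;
}

// The "view" is just a template string of the current state.
let lastHtml = '';
const model = createModel(
  { count: 0 },
  { increment: (s, n) => ({ ...s, count: s.count + n }) },
  (state) => { lastHtml = `<div>${state.count}</div>`; }
);

model.increment(2); // lastHtml is now '<div>2</div>'
```

In a browser the render callback would set `innerHTML` on the component's node instead of a variable, but the data flow is the same.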


That's quite a complicated, embedded framework.

If that's "just jQuery", then React is "just ES6". I don't understand the distinction. Every abstraction is built on the layer below. You just rolled your own bespoke one instead of using one that already existed.

> I don't see what React offers beyond what you get with a templating language like Handlebars, and a few lines of jQuery for initializing components.

You say that like that's all your framework does, but I see more than that.

Seems you're mistaking different trade-offs for objective improvements, something that rarely exists in software despite what you read in HN comments.


The idea behind React is pretty cool: all DOM transformations happen in the template. But that's all it is.

There's no need for a vdom for the most part, template strings and innerHTML are fast enough on their own.

I just find React to be an overly complicated way to achieve its goals.

   var model = { text: '' }

   var component = $( '.component' ).on( 'render', function() {
       $(this).html( `<div>${ model.text }</div>` )
   })

   update( 'text', 'Hello world' )

   function update( key, val ) {
       model[ key ] = val;
       component.trigger( 'render' )
   }
That's React in 8 lines of jQuery. My framework is just a minor extension to this concept.


> That's React in 8 lines of jQuery

No sir, that doesn't include server-side rendering, or diff rendering either (meaning changing a bit in the top component would re-render all the markup, not just what has changed).

Stop trying to over-simplify. The problem being discussed here is not React vs jQuery. It's what we do and the decisions we make as engineers to provide a good experience based on performance which is OUR side of UX.


I mean, you can use either of the morphdom or setdom libraries to do the diffing if you don't want to replace all the HTML, but the concept remains the same.

I'm not convinced by arguments such as "when you're a large team it's necessary". Scaling teams has more to do with project structure than what rendering library you use.

Likewise, it's much easier to tweak performance when you aren't depending on a monolith like React or Angular.
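At its simplest, the diffing idea mentioned above is "skip the DOM write when nothing changed"; a real library like morphdom goes further and patches only the changed nodes. A sketch of the cheap version (names are mine, for illustration):

```javascript
// Memoize the last rendered string and skip the DOM write when it's identical.
function createRenderer(template, applyHtml) {
  let lastOutput = null;
  return function render(state) {
    const next = template(state);
    if (next === lastOutput) return false; // nothing changed, no DOM touch
    lastOutput = next;
    applyHtml(next);                       // e.g. el.innerHTML = next
    return true;
  };
}

let writes = 0;
const render = createRenderer(
  (s) => `<div>${s.text}</div>`,
  () => { writes += 1; }
);

const first = render({ text: 'hi' });  // output changed, DOM write happens
const second = render({ text: 'hi' }); // identical output, write skipped
```

It's a string comparison rather than a tree diff, so it only helps when the whole output is unchanged, but it shows how little machinery the basic optimization needs.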


I'm sorry bigmanwalter but you're working on a team of 3 people, not 100+ engineers working in the same codebase.

And you've created your own abstraction beyond "jquery bits for each component".

This is proper vanillaJS engineering, I do the same, it's just not what you're selling here.


Well of course I've created my own abstraction... that's what programming is.

I prefer to keep my abstractions simple and light, and preferably built on lower-level libraries. If I can build a powerful component system in 150 lines of jQuery (and it would be barely more in vanilla JS), I'd much rather do that than import a 30kb+ library.

There are plenty of 4kb react alternatives to choose from. Most with vastly simpler APIs. Choo.js or Mithril come to mind :)


> I don't see what React offers...

It sounds like the sites you've worked on didn't have enough UI complexity to make something like React necessary. Even for a relatively simple admin or dashboard type site, using a more modern framework/library will make it much easier to build and maintain.


I use a modern batteries-included fullstack framework. It's incredibly easy to maintain :)


What do you use jQuery for in 2017? querySelector, classList, fetch, … — everything we used jQuery for in 2010 is built-in.


"resulted in a 50% performance improvement on our landing page"

What constitutes their landing page? The un-authenticated page or the authenticated page? If the latter, is the page with all the title art cards?


Was performance measured against fiber or the old rendering engine?


How are they doing this? This sounds surprisingly similar to GatsbyJS... except it's real-time.

Anybody know how drop-in this is for existing React code?


Wow, this is crazy. As a huge Vue.js fan I'd argue that the percentage value would be much lower with Vue. But I'm not totally sure.


I'm also a big fan of Vue, but why would it be different in this instance?


Quicker load time but a landing page can be more static. No framework needed.


I meant why would it be different than React.


The irony is that React was supposed to make everything faster...


Actually, it wasn't supposed to make everything faster, just the most important part: development time.


Did the same thing by removing CKEditor from a page it wasn't being used on. We all go through this at one point when taking another look at something with fresh eyes.


rolls eyes so hard he falls over

Not a chance that they were just, you know, using it wrong?


Please don't comment like this here.

If you have a substantive point to make, make it thoughtfully; if you don't, please don't comment until you do.

https://news.ycombinator.com/newsguidelines.html


Duh.


In other news, 1 cubic meter of air is lighter than 1 cubic meter of water. WHO KNEW?!


My experience with React has not been good. A gullible manager bought into "react everywhere" hype, flew in consultants for training and then proceeded to have a dozen pages on the website written in React. It's important to note that all but one was either completely static or a simple form with less than 5 fields.

This project did not go well. It took months to develop, has 0 test coverage, numerous bugs, doesn't work on Edge or IE, and isn't rendered on the server, so it's a bad experience.

Everyone on the team was either fired or quit less than 6 months after it launched.

After evaluating the level of effort it would take to get the code base in a maintainable state, it was decided to migrate it over to a more traditional stack (plain js, static html, html forms) and that was completed in less than 2 weeks. Customer metrics across the board also improved.

The lesson here is to use the appropriate technology for the appropriate problem. Unless your application is very complex, requires real-time updates, and you can afford the extra maintenance costs, React is probably the wrong choice.


Don't mean to be rude, and it certainly seems like React was overkill for your project, but 6 months to generate a react site like you're describing seems like the problem might not have been react. Maybe it was engineering, or maybe it was management, but cranking out a react app that's "completely static or a simple form with less than 5 fields" is something one developer should be able to do in a day or two after reading a basic react tutorial (assuming there are design guidelines)...

Which is not to say I'm not sympathetic to bandwagoning having a negative impact on engineering teams. Just that in this case, I find it hard to believe that react specifically was the problem.


> cranking out a react app that's "completely static or a simple form with less than 5 fields" is something one developer should be able to do in a day or two after reading a basic react tutorial

I've only done very basic web development during my career, and absolutely no javascript. But, I've been learning React for a week or so and that's something I could do in a few minutes. 6 months is insane, hyperbolic, and probably untrue.


Yeah, or something other than react is seriously screwed up. I said a couple of days just to have time to do styling (assuming design is already done.)


> Everyone on the team was either fired or quit less than 6 months after it launched.

If that team took 6 months to develop such a simple site with a framework that basically does everything for you, I'd fire them all, too. I guarantee the problem was not with the framework.


> A manager bought into "react everywhere" hype

...that actually exists? Seems crazy.

> My experience with React has not been good.

Certainly from your story it sounds like you had a bad experience, but it seems to be in the "had a bad experience with a hammer, terrible for hammering in screws" type of experience.


Why else would we have stuff like this? https://github.com/Izzimach/react-three

As if ThreeJS is too hard or cumbersome without React.

Or this: https://github.com/FormidableLabs/react-game-kit

Or this: https://www.npmjs.com/package/react-audio (a relatively unknown library, but it's surprising how often projects will import packages like this)

I generally do what I can to avoid software snobbery, but it's examples like those that really cause me to scratch my head and exclaim "Why?"

React makes sense when you actually have a DOM to deal with, but as soon as you start trying to write something that goes beyond that model, well, I really don't get you. ("you" being rhetorical)

Generally, though, I think a lot of junior programmers or those new to React get very enthusiastic about it and try to solve whatever problem they can with a tool that works well for them, and that does speak quite a bit to how good React is. In general, I would avoid using React for things it was not designed for.


> a dozen pages

> It took months to develop, has 0 test coverage, numerous bugs and doesn't work on edge or IE and isn't rendered on the server

This has nothing to do with React and everything to do with implementation. It does not take months to develop a dozen pages in React. React makes test coverage easier to write, works cross-platform, and supports server-side rendering.

While I agree that using React everywhere is a dumb choice, so is ruling out a technology because your company implemented it horribly.


I've seen this go down before. React was the wrong choice, but I'd put the blame right at the feet of the consulting firm. Let me guess: the budget was under $5M but the firm was one of the large ones, probably being used elsewhere in the company, right? I work for a very, very large consulting firm; anything under $30M is not worth getting out of bed for. Projects like this get staffed by nobodies and run by nobodies, and unless you have a PM and one dev who actually care they will always go down in flames.


I'd put the blame on the manager for not knowing better. I mean s/he is the gatekeeper for company resources.


Most middle managers at large companies are totally clueless. Consulting companies are built to take advantage of them and drain their budgets.


It reflects poorly on us that this is being downvoted. The parent shared a simple cautionary tale without baiting or flaming or anything like that. It is a good thing, to be aware of the worst case scenario; IME, projects that fail badly do so mainly for people-reasons / org-reasons, just like this.


It doesn't reflect poorly on anybody beyond said company that took six months to build a <form> with five <input>s which was clearly blighted by major issues regardless of which technology they chose for their <form>.

What's interesting is that, despite regurgitating the writing on the wall, OP still blames a JavaScript framework.



