Hacker News
Crafting a high-performance TV user interface using React (netflix.com)
389 points by dustinmoris on Jan 12, 2017 | 190 comments

Interface performance is one of the strangest problems to have in this age of crazy processing power, but it is extremely common.

Some of the delay is just plain silly and avoidable, like having long and synchronous opening animations in response to an action, which only serve to waste the user’s time. (Oh how I love being on a web site like AT&T and watching their JavaScript poorly zoom open a blank box from the center of the page for 2 whole seconds, when I KNOW they could just show me the damned page already.)

In other cases, the source of the slowdown is less clear. Is the physical device simply not delivering its signals any sooner?

I’ve played games where you have to walk to a very precise spot, hit a button, and wait literally one whole second before ANY response is visible onscreen or in audio. (And if it turns out you didn’t really take the action you thought you did, you have to walk in circles to try a slightly different spot, and wait again). Why should that ever be the case? How can a super-fast console not immediately display something or play some sound to show that you took the action?

At the hardware level, the author of BSNES recently wrote up an excellent rant on sources of latency in modern machines https://byuu.org/articles/latency/

But most of what you are talking about is software latency. At 1/30th of a second each, software pipelining stages seem cheap individually but pile up very quickly. Hit a button, read the button, react in AI, react in animation, react in physics, react in graphics, process in the GPU, process in the display device. These can easily add up to 5/30ths of a second with poorly planned software. On top of all that, the animation and audio have aesthetic requirements for smooth transitions that can insert a 1/2-second lag in the middle of the process. Now we're up to 20/30ths.
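Those numbers can be totted up in a back-of-envelope sketch (the stage split is illustrative, assuming a 30 Hz pipeline where each hop costs a full frame):

```javascript
// Back-of-envelope latency budget at 30 Hz. Hypothetical stage split:
// several sim steps often share a frame, so count pipeline hops, not steps.
const FRAME_MS = 1000 / 30;

// One hop each: input read, sim (AI + animation + physics), render
// submit, GPU, display — five pipelined frames in a poorly planned engine.
const pipelineHops = 5;
const pipelineMs = pipelineHops * FRAME_MS; // 5/30ths of a second

// A half-second "smooth transition" inserted mid-pipeline:
const transitionMs = 500; // 15/30ths

const totalMs = pipelineMs + transitionMs; // 20/30ths of a second
console.log(Math.round(pipelineMs), Math.round(totalMs)); // 167 667
```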

Regarding animations: I've been in convos with managers requesting character animations to be "Smoother, but more poppy!" because of the conflicting needs of aesthetics and control latency. The best compromise I've found is to design a smooth transition, but have the underlying representation pop and the visuals skip immediately to mid-way through the animation.
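That compromise can be sketched roughly like this (all names are hypothetical, not from any real engine): the logical position pops instantly so gameplay reacts with zero latency, while the visual position plays a smooth ease that starts partway through the animation.

```javascript
// Hypothetical sketch: logical state pops instantly; the visual eases
// toward the target, skipping straight to mid-way through the animation.
const SKIP_FRACTION = 0.5; // start the visual half-way into the transition

function makeMover(duration) {
  return {
    logical: 0,
    visualFrom: 0,
    target: 0,
    t: 1, // normalized animation time; 1 = finished
    duration,
    moveTo(x) {
      this.visualFrom = this.visual(); // animate from current visual pos
      this.logical = x;                // pops: gameplay reacts immediately
      this.target = x;
      this.t = SKIP_FRACTION;          // visuals skip to mid-animation
    },
    tick(dt) {
      this.t = Math.min(1, this.t + dt / this.duration);
    },
    visual() {
      // smoothstep easing between visualFrom and target
      const s = this.t * this.t * (3 - 2 * this.t);
      return this.visualFrom + (this.target - this.visualFrom) * s;
    },
  };
}
```

After `moveTo(10)`, `logical` is already 10 (the game state popped), while `visual()` reports the half-way point and glides the rest of the way over subsequent `tick` calls.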

I'm not sure what you are referring to about skipping halfway through a transition, but a sinusoidal transition is almost always what you want anyway. Possibly a power-curve one. That is, it should not just evenly slide from point A to B. You want some acceleration.
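For reference, both curves mentioned here are standard easing formulas (these are textbook definitions, not something from the thread):

```javascript
// t in [0, 1] maps to eased progress in [0, 1].
// Sinusoidal ease-in-out: slow start, fast middle, slow stop.
const easeInOutSine = (t) => (1 - Math.cos(Math.PI * t)) / 2;

// Power ("quadratic") ease-in-out as an alternative curve.
const easeInOutQuad = (t) =>
  t < 0.5 ? 2 * t * t : 1 - Math.pow(-2 * t + 2, 2) / 2;

// Both pass through the endpoints (0 -> 0, 1 -> 1) and accelerate
// away from them, instead of sliding evenly from A to B.
```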

Acceleration is important, but the “pop” that they wanted was probably some form of the “squash and stretch” effect that we all subconsciously associate with good quality animation[1].

[1] https://en.wikipedia.org/wiki/Squash_and_stretch

The conflict is that in the real world, people have momentum. They take a bit of time to change their velocities. In video games, we are accustomed to sprites that can instantly change velocity and sometimes go from motionless to moving significant distances in a single frame.

The publisher wanted both simultaneously. They wanted the human player character to instantly change direction in response to controls. But, they also wanted the character to move like a semi-realistic human who has momentum and takes a while to change directions instead of like a sprite that instantly changes direction. :/

That makes sense. Curious that we think it is quality animation.

Why is it "curious"? It's a direct consequence of attempting to emulate how non-rigid bodies (including people) move in the real world.

For a contrast of what happens when there's no squash and stretch in animation, take a look at pretty much everything ever made by Hanna-Barbera before 1990. Everything remains almost pathologically on-model all the time to reduce animation costs.

Most rigid things do not squash and stretch. Phones, paper, books.

People will act that way. Some things will compress. Certainly not all things, though. So it is curious that it is always seen as better.

But, in animated media, "people" can include things like phones, papers, and books.

Apologies for missing this. I took the claim to mean not just animated movies, but animations of our devices. I am specifically remembering the silly animation that Ubuntu used to have where a window would shimmer and shake as you moved it around. Or how it will "pop" onto the corner of the screen.

Windows that are flimsy are just annoying to me, which is why I would find the view that they are quality curious.

More realistic animation is typically described as more realistic. Not "popping and snappy."

I can see the it-adds-up argument but there are also plenty of techniques to deal with that. (Maybe it is an education issue for developers.)

For instance, in a lot of cases, a human cannot reasonably observe a particular type of change on every frame so you can skip frames. What I mean is, suppose you have tasks A, B, C and D to perform “each frame”: you might be able to perform tasks A and B on odd-numbered frames and C and D on even-numbered frames, with the user no wiser, as long as the result seems fine.
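A minimal sketch of that interleaving (task names hypothetical): run half the per-frame work on odd frames and the other half on even frames, halving the cost of any single frame.

```javascript
// Hypothetical frame-skipping scheduler: tasks A+B run on even frames,
// C+D on odd frames, so each task runs at half the loop rate.
function makeScheduler(tasks) {
  let frame = 0;
  return function tick() {
    const half = Math.ceil(tasks.length / 2);
    const batch = frame % 2 === 0 ? tasks.slice(0, half) : tasks.slice(half);
    batch.forEach((task) => task());
    frame++;
  };
}

// Usage: each task now runs at 30 Hz inside a 60 Hz loop.
const log = [];
const tick = makeScheduler([
  () => log.push("A"), () => log.push("B"),
  () => log.push("C"), () => log.push("D"),
]);
tick(); tick(); // two frames
console.log(log.join("")); // "ABCD"
```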

Another technique is to prioritize the start and finish but not in-between. Often, intermediate frames are relatively crappy from a “niceness” or even correctness standpoint, and nobody really notices because the frames go by quickly. As long as the end frame looks as nice as possible and everything is in exactly the right place, you can get away with a lot of short-cuts for the steps taken to get there.

> Another technique is to prioritize the start and finish but not in-between.

The problem with techniques like these is that it's almost impossible to fully generalize them (e.g. in the case of intermediate frames, if some of them are really wrong then you get sudden clipping or jumpiness).

So if your 'fast' technique only works for a certain set of parameters, then you have just introduced an implicit dependency into your system: things are fast enough while the app looks like X, but go a bit beyond that and it suddenly breaks.

In games, everything works in frames. Usually, AI, animation and physics can be completed in a single frame. But, it is very common to pipeline graphics to a separate thread that runs a frame behind everything else. The GPU frame and display frame (multiple frames on some TVs...) are pretty much impossible to eliminate.

Ideally, a game would sample input multiple times per frame, go with wide parallelism for every step (very difficult for graphics until DX12/Vulkan came along), start some GPU work before physics is completed, render in less than 1/60th of a second and the users would enable no-processing "game mode" on their lag-optimized TVs. But, that's all not common practice.

> Why should that ever be the case?

You can solve any problem by adding more software and layers of abstraction, except the problem of too much software. That's the state we're in now.

The BBC Micro could have a word processor in ROM that would boot almost instantly and respond to keypresses immediately. This was because the software was written in assembler and had to fit in a small ROM. The choice of using a TV system running (say) Android and a web browser means that, although the software is slightly easier to write and the processor is 100 times faster, it has to execute 10,000 times more machine instructions in order to render the UI.

This is partly why people like Maciej campaign against multi-megabyte text pages. Another way is possible.

Especially the lag you see on some brand-new cars: only BMW and Audi seem to have lag-free interfaces; anything else that involves a touch interface is just horrid! I recently sat in my friend's brand-new Honda SUV and the interface lag is just plain silly, for a car that costs $30,000+. Why is that?

Because the software development was started 3 years ago on a platform that was spec'd out 5 years ago.

Over those 5 years, the software requirements slowly crept upward while the hardware performance stayed the same and couldn't be changed.

How did quality control pass it? The lag on some of these car infotainment systems just to change the sound is abysmal; someone definitely saw that and should have said something. You're paying $30,000+ for something that takes 4 seconds to respond to a volume change. Whoever is responsible for that should not work there...

Quality control came last in the process, so when they 'finished' not long before the delivery date was due, QA got a ton of political pressure not to make the date slip.

That's why test-driven design is valuable -- you iterate while testing.

What's the point of quality control then if it means nothing in that case?

Making sure it doesn't burst into flames

... whilst it's still in warranty.

If only we'd work together then the car could have a common bus system and you'd just swap out the control console on the front and choose your level of [stupid, annoying, distracting] graphics and what have you.

There's no competition in these systems because people choose the car and get lumbered with the UI on the console. Kinda like if houses had unique electrical systems and you couldn't change the white-goods.

Cars already have CAN bus as a standard thing, so it shouldn't be difficult.

I think the CarPlay/MirrorLink/Android Auto thing is probably a better model though. Make the console dumb and let me connect my upgraded-every-year phone that's far more powerful.

Believe it or not, the software quality in these things is often quite crude, and even a mediocre JavaScript framework might perform well against it. I've seen enough things like hand-coded UI frameworks in C++ (in order to be fast) that then do things like blocking network calls on the main thread.

Some newer systems are based on Qt or Android. These typically perform better, because the underlying frameworks have at least a decent design.

I wish there were car reviews out there that take software quality into account, in particular for things like lane-keep assist. When I was in the market for a car, the reviews I saw only mentioned whether a car had the feature, not how well it actually worked.

For what it's worth, there's a specific distinction for software systems like lane assist. The bar for "working" is so high that if it doesn't essentially work perfectly we can't say it works at all.

We would expect reviews to point out if a feature such as lane assist fails or has noteworthy failures (such as rapid weaving inside the lane) but maybe not so much if it works properly.

I currently have a Honda Civic with lane keep assist. It doesn't slow down before curves and disengages frequently. Tesla's autopilot works much better from what I have seen.

I have a 2016 Volkswagen (Tiguan II), whose Lane Assist also leaves a lot to be desired. It handles nearly straight roads with no traffic quite OK, but it would surely crash the car on nearly every irregularity (lane narrowing/widening, obstacles, tighter corners, ...) without manual intervention.

That's a reason why I don't believe in seeing safe autonomous cars during the next few years at all. But maybe Tesla is that much better - haven't ridden one.

It is much better, but a LONG way from autonomous.

I recently got a Mazda3, their newer MazdaConnect system is running an iMX6 (dual CortexA9 w/GPU and video accelerators) and uses Opera as the interface. All of the core UI is written in Javascript.

It's also highly hackable. =)

That's interesting. I don't know too much about what Mazda uses. Is there any documentation in the web about it?

There's a PCB teardown here


It was designed by Johnson Controls (JCI) but the IVI group was recently sold to Visteon, which probably explains the sudden lack of momentum from Mazda on new features (like, cough, Carplay...which was announced 2 years ago and never showed up).

Most of the people hacking on the unit hang out at mazda3revolution.com. Here's a page indexing their work so far:


But how's the lag?

The UI is simple, there's nothing swipable by design and animations are minimal. It works quite well.

I don't have the navigation enabled, so I can't speak to how responsive the map display is.

I found this advertisement from Mercedes interesting: https://i.redd.it/wwxk8nqh88ex.gif

I can't speak for the quality of Mercedes interface (and this is obviously marketing for non-programmers) but LOC seems like an odd thing to be emphasising.

Everyone needs to write 100,000 LOC this week to hit our target from marketing

Here is one line of code to solve all of their software problems:

rm -rf /

As a programmer who strongly believes in less being more, remind me to never buy a Merc.

Because some marketing guru thought it was a great idea to advertise it that way? Come on... they are great cars

These numbers are quite random, because most likely they also counted all the dependencies, including LoC for the OS kernel, all the libraries, etc. And how many LoC is Boost alone? :D

At one undiscovered bug per KLOC, that's basically advertising that their software is a disastrous ball of mud.

KLOC are something you spend, a debt you accumulate.

Sounds like it's full of spaghetti code.

Same with in-flight entertainment systems on planes. They always have terrible, slow interfaces.

I always want to turn it “off” because it’s too distracting, and the only way is to turn the Brightness down to off. It goes like this: BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, oh now it’s off. Then 5 minutes later, the airline starts its welcome mostly-advertising video which TURNS THE DAMNED THING BACK ON AT FULL BRIGHTNESS. Then it resumes DirecTV at which point I have to say: BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, wait, BrightnessDown, annnnnnnnd now it’s off.

This causes people to press harder in reaction, and then they are bouncing the head of the person in front of them. It's comical if it's not happening to you.

I recently took a flight with Virgin Atlantic and was actually pleasantly surprised. It still wasn't perfect, but it was by far the best I've seen from in-flight entertainment.

Mostly I'd agree, but Emirates ICE system is pretty decent https://www.emirates.com/au/english/flying/inflight_entertai...

Most of the newer ones I see these days are just Android tablets with capacitive digitizers and a pretty decent shell.

VW's newer interfaces are also quite lag-free. But then again, it is the same concern as Audi, so I guess it makes sense.

I have a 2016 car that I hate the screen's ui/ux... it's laggy, and looks like something from 2007. That doesn't even get into the fact that the onboard wireless is 3G, not LTE on something in 2016.

Really makes me wish the whole thing was more hackable.

Have you seen BMW's Display Key or however it's called? Ridiculous.

I've used it, the lag on it is abysmal, whoever approved it should be fired right away.

Just to be clear: That is a KEY with a DISPLAY. THAT is ridiculous.

I think it will get worse with time as younger generations have never experienced lag-free appliance technology.

I'd love to see a cost breakdown of a modern car, especially Tesla.

Car is say 100k, how much of the componentry is the cost of the wiring harness, nervous system and screen?

Our 2013 Honda Accord needed several updates just for the radio to work reliably. You would go to change a channel, the screen would blank out and come back on with all presets set to 97.3. Of course, if you "rebooted" the car, they came back correctly.

And the lag when hitting a touch-screen button is incredibly frustrating. I want my buttons!

are these non-Android/Apple infotainment systems running a high-level language runtime?

Because Honda at its core is not a software company?

Honda isn't a tire or windshield company either, it does not prevent them from ordering high quality tires or windshields...

Actually, a lot of the Japanese car companies are built on vertically integrated groups where a parent company (usually a bank) owns both the primary company (Honda) and a set of complementary companies that provide things like windshields or tires. It's called a keiretsu.

Then maybe they should keiretsu their way to software because whether they're a software company or not they make shitty software. And either they can make good software or they can buy good software, but if they make shitty software they're a software company, just a shitty software company.

I've worked at several companies where at least one manager/exec says "We are not a software company, we're a ___(their core product/service)___ company".

If an organization creates and utilizes software as part of its ongoing concern, it is, at some level, a software company.

Indeed. Yahoo pretending it wasn't a software company is what led to a billion user accounts getting compromised in the largest data breach in history.

Most manufacturers are only responsible for powertrain components, contracting and assembly.

They're not experts in much, but contracting for quality parts IS something they're supposed to be experts in.

It should be as simple as stipulating a requirement of "sub 100ms response time to all user input".

> like having long and synchronous opening animations in response to an action, which only serve to waste the user’s time.

Oooooh, don't get me started on DVD menu screen navigation. What shambling, drooling idiot decided that it was critical for me to watch an unskippable spoiler-rich montage of scenes from the entire length of the movie before I can click "Play Film", followed by another unskippable montage afterward? Insanity.

There's a great infographic showing the UX of playing a DVD vs playing a pirated video. Exactly this.

That reminds me:

I have a Siemens washing machine, and the interface has a latency of >500 ms. How they fucked it up is way beyond me. It consists of nothing more than a rotary switch, four buttons and three 7-segment LED displays.

I've played with the thought of disassembling the firmware just to see how they fucked this up this bad. I could never make something this unresponsive even if I tried.

It's utterly fascinating, and it pisses me off every time I do the laundry.

Why you can't just toss the clothes in, close the door and walk away is beyond me. Wish I could empty the whole 150oz of Tide into the machine and have it dispense over 96 loads too.

I've thought this many times, and I suspect it's somewhat complicated, engineering-wise — but solvable.

For one, detergent comes in at least three forms: Powder, liquid and those little plastic pouches. Powder would be pretty easy (but the dosage would be brand-specific) and liquid would be messy (flow rate would be a challenge).

The easiest way would be if all machines could accept a "standard pellet" which gets loaded in some kind of completely fool-proof way so the machine cannot mechanically choke on them, ever, or accidentally add too few/many to a load.

Same thing with dishwashers.

As for why you can't just close the door and walk away: Setting the program is an important step in washing clothes. Modern machines do have a single "start" button.

GE has both under the name "SmartDispense". They use a peristaltic pump to dispense liquid detergent. I owned the dishwasher for a few years and enjoyed the convenience.

In 20 years, I've owned 3 washing machines. I dealt with maybe 3 or 4 malfunctions over those 20 years, and each of them required only buying some spare part and installing it, or cleaning something inside.

Surely this kind of reliability is a good trade-off vs. having to pour some detergent for each wash?

The Coke touchscreen fountain machines. There should be zero animation. Just let me pour my drink.

Input lag is a common complaint of mine, but absolutely _NOTHING_ comes close to the Sony BDP S300 https://esupport.sony.com/p/model-home.pl?mdl=BDPS300.

Key-press response time can frequently be more than 30 seconds, depending on what the action is. Of course, you might say that is because of Blu-ray bloat on more recent discs, but I can assure you that it's been that way from the day I purchased it. Sure, some discs were better than others, but the multi-minute boot-up, disc load times, player menu popup times, etc. have been there since the beginning. I used to use it as a demo against my HD DVD player of why Blu-ray wasn't ready for prime time, and it was a 3rd-generation Blu-ray player.

Given it's a BDP-series player, if you were interested you may be able to get a Linux shell of some kind on it and find out what's taking so long to run on it.

There's a whole lot of Philips and Sony players that are based on some ancient Mediatek SDK.

Edit: Wow, that's old: Sigmatek, not Mediatek. A makefile in the GPL source suggests there's a similar Pioneer player somewhere, too.

> like having long and synchronous opening animations in response to an action

I love animations when they make the UI more understandable. I can't stand them when they are more than a couple hundred milliseconds though. I don't even think "synchronous" when I think of animations. That sounds terrible. If they are quick animations it doesn't seem as big of a deal as the long running ones though.

For me the most insulting part is the inversion of priorities in these designs. The top priority of a UI designer SHOULD be to make the user as productive as possible, yet making me wait for something that is by definition not necessary (like an animation) is nonsensical. A related backward trend is this idea of pushing something in front of my face as a modal panel, with complete disregard for the fact that I was working on something and am now (a) distracted, (b) completely unable to continue doing what I chose to do, and (c) will have problems even after the modal goes away, taking extra time to figure out how to refocus on whatever I was originally trying to do before being interrupted.

That is even true for actions I triggered, for example: I click 4 icons on my desktop consecutively and thus 4 binaries start in the background. The order in which they appear is determined by their startup time: One pops up after another. BUT: whenever one is open and I am USING it, all the ones coming up should not pop up over the current one! This bad behaviour even happens in Windows 10 and many desktop environments.

To UI designers: Have some consideration for the f user!

As far as the less-clear cases, this is basically CAP theorem, with some physics thrown in for good measure. In some sense there is always a partition of some length between two points, thanks to the speed of light: the theoretical limit of information propagation through space.

So in the presence of this delay "partition," you have three choices, really, and the choice you make depends on the application.

A) You can choose to be available and responsive. Show the user feedback immediately and never concern yourself with global state. Technically, I'd call this an illegal choice because you must have some sort of state to even be executing code. Unless you simply don't write the code, in which case your job is easy!

B) You can choose to be immediately available and eventually consistent. You calculate the response quickly with the assumptions you have most available (local memory, disk), all while transmitting events and waiting for the further-away less-available state to become available.

This is the way many online games that need quick feedback to be fun are done. [1] Unfortunately, this is also the source of the lag jumps that you see. You're always running with [partition-size in ms] outdated global state, so the assumptions you made when calculating outcomes are going to be incorrect. This is why your headshot might turn into a total miss when the player jumps five feet to his right and, oh yeah, you also died.

C) Don't react to events until the global state has been updated. This means a full round-trip plus processing remotely and locally before that click event performs the action it is supposed to. This can be anything from a crappy experience (I shot into the ground, why should I wait), all the way to the only sensible choice (if integrity is highly important, say in transactions and avoiding double-spend).

Really, it's so much more than this too. On top of availability vs consistency you have to account for some trust model (the game client says it was a headshot, but how do I know I can trust the client) and information security (confidentiality, availability, integrity).

So TL;DR there are lots of very hard problems in distributed systems and sometimes people just default to one stance or the other to balance their cognitive load or for any number of reasons (ranging from legit to ridiculous). Sometimes they default to consistency. That's probably the case for your button-click example.

[1] https://developer.valvesoftware.com/wiki/Source_Multiplayer_...
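Option B can be sketched as an optimistic client update with later reconciliation (names hypothetical; real netcode like the Source engine's is far more involved):

```javascript
// Hypothetical option-B client: apply input locally right away, then
// reconcile when the authoritative server state arrives.
function makeClient() {
  let confirmed = { x: 0 }; // last authoritative state from the server
  const pending = [];       // inputs not yet acknowledged
  return {
    input(dx) {
      pending.push(dx);     // optimistic: show the move immediately
    },
    // local prediction = confirmed state + unacknowledged inputs
    predicted() {
      return pending.reduce((x, dx) => x + dx, confirmed.x);
    },
    // server ack: adopt its state, drop the inputs it has seen
    reconcile(serverX, ackedCount) {
      confirmed = { x: serverX };
      pending.splice(0, ackedCount);
    },
  };
}

const c = makeClient();
c.input(1); c.input(1);
console.log(c.predicted()); // 2 — shown before any round-trip completes
c.reconcile(1, 1);          // server only accepted the first input
console.log(c.predicted()); // 2 — 1 confirmed + 1 still pending
```

The lag jumps described above are exactly what happens when `reconcile` brings back a state that disagrees with the prediction: the replayed pending inputs snap the player to the corrected position.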

I've never connected this to the CAP theorem, it's a good way of looking at it.

I see it as two different questions: "Did the computer hear me?" and "Does the computer have a response for me?" Most people only make an effort to answer the latter for the user, since it indirectly answers the first question anyway. But you can easily answer the first by quickly doing some sort of update, such as a progress indicator, animating a button staying depressed, etc. You don't have to mess with your real model until you get a response (or an error), but then you also don't leave the user confused for 12 seconds while your app loads search results or whatever.

Obviously it doesn't work in every situation. In most video games you want your actions to affect the gameworld immediately, even if the server doesn't know about it yet. However, for most applications just adding fast indicators that the client is aware of your actions (and staying off the UI thread) will make it feel more responsive.

I think B is the way to go. Most apps aren't as competitive as CS:GO.

Companies don't make money off of performance.

That's just patently false. One of many, many articles to the contrary: http://blog.gigaspaces.com/amazon-found-every-100ms-of-laten...

Uh.. they've already _bought_ the car. What percentage of people will return a car due to a little lag when adjusting radio stations?

e-commerce findings from a major retailer are, alas, not applicable to every domain

Consumer Reports has found that the functionality of the in-car entertainment system is the #1 indicator of customer satisfaction. You might not return the car, but you can bad mouth the system to anyone who will listen and purchase a different brand car next time.

Someone may choose to not buy a car after test driving it and experiencing the laggy touchscreen...or after reading reviews by people who did already buy the car.

Lack of performance is an attribute that would contribute to a potential consumer's attitude towards the whole car. It may not have as huge of an impact as it would for Amazon, but saying it has no effect is definitely wrong.

Well he did say Companies, not Auto Companies. There are a lot of companies that make money by having better performance.

They can. I bought a Roku 3 after seeing how much more responsive it was compared to my Sony BDP-S790 for streaming apps.

I've got a Sony BDP-S1700 which has an equally sluggish interface, the "powered by Java" badge on the back makes me laugh.

Though they might not necessarily be able to use it in marketing (most customers might think that a UI should always be fast) or charge a premium for a "fast" UI, they might lose potential customers due to a bad reputation.

I built my early career entirely around CRO / testing and moved over time into product / ux / app optimization.

Huge, crazy, insane amounts of time are WASTED by humans dickering around with interfaces that they don't understand and that are not personally optimized.

One of the things I don't hear many people talk about, but I am particularly interested in, is the coming and continued improvement of adaptive & personal interface design.

A challenge that any single interface has is that it's difficult to set up and qualify a test at a small cohort-group level ("men over 70 years old who wear glasses, are homeowners, drink wine, and live in California" is an actual target class we can easily devise from current ad tech, for instance).

It's challenging because there's NOT ENOUGH DATA — it's very hard to run experiments and achieve statistical significance, let alone bifurcate your already limited resources to drive to that level of granularity.

But imagine an adaptive UX or set of UX preferences.

EG - Take the same inputs -> eye tracking / natural language feedback (speech!) / interface observation / time to goal / etc <- and then let a flush ML / AI come up with a set of experiments and pathway.

Key to not completely confuse and blow users off path will be some kind of throttling mechanism - adaptations that settle you into the UX like your body's settling into the couch cushions.

I disagree there. Thinking back to the days when Office 'customized' its UI to how you used the product (constantly moving menu items, shudder!), I'd rather have a consistent UX that didn't exactly fit my patterns than an adaptive one.

The problem with many interfaces, especially on consumer products, is that they're not discoverable, and oftentimes hide things behind inane levels of menu. Interface isn't a competitive advantage (although it should be!) so manufacturers don't invest in it.

Yup, it's also important to be able to walk over to your friend's machine and help them do something.

I spend a good amount of time doing support. I still play around with plugins, tools, and interfaces, but I try really hard to stick with defaults. One thing I often do is remap Caps Lock to Ctrl, and it surprised me how often this catches people (and drives me nuts when I'm using their computer).

I have a Logitech Harmony 700 (a very mainstream universal remote), I don't care for it but it's the best I could find, because I use a receiver and Apple TV. Whenever I have guests it's always a mystery for them how to use it.

This alone is why I wouldn't consider Dvorak. If we have to live in a top-down-driven world, I'd at least be open to standards being driven that way.

> But imagine an adaptive UX or set of UX preferences.

I can only imagine how difficult support from friends and family would become.

"Click on the widget." "I don't see the widget." "I'm on the same page and I see the widget." "Oh, I have to click 'Show all' to see the widget."

> EG - Take the same inputs -> eye tracking / natural language feedback (speech!) / interface observation / time to goal / etc <- and then let a flush ML / AI come up with a set of experiments and pathway.

I like that... but that's going to take a lot of work to keep it from becoming the contemporary equivalent of "microsoft clippy" but from Hell.

Has anyone pursued or published about such an approach yet?


That's a completely orthogonal issue. I'm curious how you even came up with that post considering the thread context.

Netflix's A/B testing has really screwed me in the past in terms of performance, so much so that I reached out to support to ask if there was a way to manually remove me from the testing group at that time. I'd log in, and the "new" interface throttled my 13 inch retina Macbook's CPU immediately to 100% (or more).

Last time it was so bad that I stopped using Netflix on my computer until the testing stopped.

That said, the "normal" Netflix player is great and I've always been impressed with its performance and responsiveness for such a large application handling video streaming.

You can opt out of A/B testing. Head to netflix.com, go to your account settings > Test Participation > Include me in tests and previews, and switch it off.

I had to do that after I got put in a group where there was no way to view the description or episode list without also starting to play the selection (in browser). On a console app the other day it would start playing an audio preview of anything selected making scrolling through my options extremely annoying. I quit the app almost immediately. I don't mind being in the testing group but I would definitely like the option to quit a test group. Wouldn't that feedback be valuable to them?

> Wouldn't that feedback be valuable to them?

The feedback is generally implicit in that if it causes problems for a lot of people then the test fails because you stop watching Netflix with that device.

That being said, if you call in to customer service they can remove you from the test if it turns out that's actually the problem (sometimes the problem is that you are in two conflicting tests for example) and they do mark that down as feedback. But they want you to call in so they can better understand and record the failure mode.

You don't even have to call, you can do it via chat, if you're not the kind of person who likes to talk on a phone. :)

This is great to know! I'm surprised support hasn't mentioned this to me. I'm assuming it's because I reached out while the test was happening, but being able to opt-out for the future is great. Thank you!

Wow, thanks. I always seem to get screwed and placed in the B group. That, or their UI is just frequently terrible.

I hope someone from amazon is reading this. Their new app on the roku is unbelievably slow, 5+ seconds for transitions.

I don't work in that group, but I'll forward this link over. Thanks for the note and sorry for the bad experience.

Any chance you could recommend someone to finally get us different logins for the different profiles? It's kind of ridiculous to have parental controls when they can just change the profile to watch whatever they want. I also don't really like sharing a main account password with kids.

Weird because I watch Top Gear^W^W The Grand Tour with the prime app on my roku and it works great.

The show is pretty awesome too.

Amazon's video streaming apps are pretty bad across most platforms, iOS excluded. It's extremely slow on the Xbox One and their search for some reason is absolutely awful.

I don't think Roku apps are written in JavaScript/HTML. I believe they use BrightScript (at least on the older models), so I don't think React would help.

It is possible to work directly with C++ though, which I think Netflix must be doing with their Gibbon framework because their interface is very different from those obviously using the Roku tools (and looks exactly like their interface everywhere else). As the article states, Netflix aren't necessarily working with HTML when they make these React views - it sounds very much like how React Native works.

Do you have a cite for that? If there is a C++ SDK it isn't easy to find, though I can easily believe that companies on the scale of Netflix have access to better tools.

Amazon is a company that absolutely does not care about UI, as seen/proven by their website alone.

The commerce side of their site is acceptable, what is not is using the same cluttered interface for their music and video services.

Similar problem with their not-so-new PS4 app. I think it's less the rendering and more the fact that it doesn't prefetch anything, ever. Either way it's painful to drill into a particular show from the top level.

Why is it so hard to just say, "precompile your code"? Just-in-time means taking the same code and doing the same transformations to it, parsing and reparsing it, every time you start the app. Yes, React may be a nice templating engine, but why make everyone go through the same process over and over again when you can compile once and not have to worry about it anymore?

This is the idea behind Svelte: https://svelte.technology/.
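A sketch of that compile-ahead idea in plain JavaScript, with a mock element standing in for a DOM node (the names and shape here are illustrative, not Svelte's actual compiler output):

```javascript
// Conceptually, a compiler like Svelte turns a template into a function
// that already knows which node to touch when a value changes, instead of
// diffing a virtual DOM at runtime.
function createCounterView(target) {
  let current;
  return {
    // Only write to the (expensive) DOM when the value actually changed.
    update(count) {
      if (count !== current) {
        current = count;
        target.textContent = "Count: " + count;
      }
    },
  };
}

// Mock element standing in for a real DOM node.
const el = { textContent: "" };
const view = createCounterView(el);
view.update(1); // writes "Count: 1"
view.update(1); // no DOM write at all
```

The point is that the per-update work is a direct assignment guarded by one comparison, with no tree diff in sight.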

The templates are indeed compiled. All the optimisations they describe happen at runtime, when applying changes to the UI. Browser layout engines are complex beasts and writes can be very expensive.

Link is down for me, so I'm reacting to the title. I don't have an issue with the performance of the Netflix UI on my PS4. I have a huge problem with the design of the UI. First, there is no reason we can't have more configuration flexibility. I never want a program to auto-play, and I have no use for the video loops that auto-play when I hover over a program. Let me disable these features. I don't want "my programs" to move. Stop moving it so I have to scroll past promotional content to locate it. In general, the UI should just be "my programs", "continue watching", and access to search. Netflix is trying too hard to make the UI something we don't need. KISS.

Agreed! I was just looking over their web interface; under Playback settings they at least now have a way to disable Auto Play.

That's not the same thing. That setting enables/disables the playing of the next episode after you complete watching an episode. Even with that setting disabled, when you click on a series Netflix starts playing the next episode, or if it's not a series, the video.

Mildly unrelated - does anyone else get frustrated with the ui/ux for HBO now/go? The menu switches from the right side of the screen to the left, the scrolling is wonky, etc.

Man their content is good, but the app sucks.

Netflix is on another level with the care that goes into designing their streaming product. I didn't truly appreciate the Netflix app until also subscribing to HBO Now and using their Roku app.

HBO is missing the most basic UX features like discoverability, recommending the next episode of shows you've been watching, ratings, etc. Great content, but basically no effort in their apps that consume their content.

Netflix is a tech company that got into content, and HBO is a content company that got into tech.

I actually find the Netflix UX frustrating. First they removed the straightforward ability to simply say that I never want to watch a show, and then hide it from me. Beyond that I can rate a show 2 stars and yet it continues to come up in recommendations. This gives me the feeling that it doesn't really matter how I rate shows because they are going to shove whatever they want at me regardless.

The only competitor I can compare it to is Amazon. In that regard I agree that Netflix obviously puts more into the interface. Amazon's splitting shows into seasons and not even grouping them together is maddening.

It's crazy how big of a difference the various Roku apps have in terms of quality. Amazon Video and Shudder (two I have used recently) were lacking in so many ways compared to the polish that Netflix has.

I was watching Goliath last weekend on Amazon Video via the Roku player, and 6 minutes prior to the ending of an episode, they dim the screen to start a countdown of playing the next episode. While Netflix has had some hiccups in cutting off some cold opens of some shows, it's vastly better and consistent in behaviour.

Yeah their app and streaming quality are sub par in comparison to Netflix and Amazon. The app is quite slow on the Xbox One and confusing to use overall. Streaming is usually fine, but occasionally it stutters and hiccups severely despite my internet connection being fine. I'd rather pay for HBO and download copies of their shows from a torrent server for better quality.

For folks who really care about performance, the easiest win is just switching to Inferno or Preact. You can pretty much leave your React code unmodified and get massive performance gains.

Easiest win for me was trying out re-frame[1]. Not only do you get the best performance out of React, to the point where the actual virtual-DOM implementation doesn't matter, but you also gain productivity; and since it forces you to build apps out of 95% pure functions and immutable data, reasoning about them even at scale is far easier than with anything else I've ever used to build GUIs.

[1]: https://github.com/Day8/re-frame

I've maintained a ~20k LoC re-frame app for a year and a half. Clojure's persistent data structures with pervasive sCU generally give you good performance, but it's not a panacea. I've debugged a hundreds-of-milliseconds freeze on laptops caused by a poorly written hierarchical menu, and I get pauses of 70-100ms if enough data comes in on our main view.

Using it with cljs-time and storing times in the app state is a really easy way to shoot yourself in the foot perf-wise, since that's based on Closure's date object and two equal date objects are not identical. This fails the fast identity check but passes the structural equality check, so no vdom gets generated, but the check is not much cheaper than a diff.
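The failure mode is easy to reproduce in plain JavaScript (Closure/cljs specifics aside): two structurally equal date values are distinct objects, so a cheap reference-identity shortcut never fires and every comparison falls through to the expensive path. The `sameInstant` helper below is a made-up illustration, not part of any library:

```javascript
// Two dates representing the same instant are equal by value...
const a = new Date("2017-01-12T00:00:00Z");
const b = new Date("2017-01-12T00:00:00Z");
// a.getTime() === b.getTime() is true (structural equality),
// but a === b is false (identity check fails).

// So a memoization layer that tries identity first gains nothing here:
// it must fall back to a field-by-field comparison every time, which
// costs nearly as much as just diffing.
function sameInstant(x, y) {
  return x === y || x.getTime() === y.getTime();
}
```

Storing primitive timestamps (numbers) in the app state instead sidesteps this, since primitives compare by value in the fast path.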

This is also a B2B app where we have no demand for mobile use so I haven't had to do load time or mobile perf optimization. If I did, the first thing I'd be concerned about is the bundle size since the cljs runtime plus React represents a fairly sizeable amount of overhead.

Not to say that reagent+re-frame is bad; it's just not so amazing that you don't have to care about perf. I think Reagent would run particularly well on top of Inferno, since the library provides lifecycle events to function components, and I did experiments back in April and June on Inferno with persistent data structures with good results. I just don't want to maintain it.

I tend to abuse the fact that subscriptions can compose in re-frame to lower the work needed to react to app-db changes. The fact that you can namespace keywords makes it easy to scale as well.

It's also easy to optimize if you integrate day8/re-frame-tracer. I've lowered the number of views touched by updates quite a bit using it. I barely put any pressure on React anymore.

The bundle size usually isn't that bad with full optimizations enabled. An empty cljs project strips the entire cljs runtime for one, minus one defonce. You can also pass a compilation constant to React to strip its debugging features; that saves you a few dozen kilobytes as well.

I'm currently building a small app to display Twitch chat as a personal side project and it handles GamesDoneQuick's chat effortlessly. Performance has been great so far.

re-frame might be technically interesting, but you're switching to writing in a lisp, which is a very niche choice. That's not going to be an overall win for many people.

Considering Clojure's impressive adoption I wouldn't call it a niche choice anymore; it even seems to be a win for most people adopting it.

Can you provide stats on "Clojure's impressive adoption"? By my understanding it's still a very niche language.

The TIOBE index [0] fwiw (please debate) suggests by going from JS to Clojure you'd be switching from the 7th most popular language to the 47th.

In that 'top 47' there are only 3 lispy langs present: 'Clojure', 'Lisp' and 'Scheme'. That suggests programming in Lisps is a very unpopular idea.

[0] http://www.tiobe.com/tiobe-index/

TIOBE is a terrible indicator generally. More activity doesn't mean more people are using a language. I wouldn't work for anyone using it to make decisions :)

The Cognitect website[1] lists quite a few success stories using Clojure. I would trust the names in that list far more than TIOBE.

But really what sold me on the language was the quality of the libraries, the incredibly helpful community and its shared focus on simplicity. Reading about a thing is no substitute for hands-on experience; it's hard to judge the trade-offs you're making without it.

[1]: http://cognitect.com/clojure

A simple warning: switching to Inferno using inferno-compat can currently give worse performance than react. I tested this recently when Inferno 1 was released and think it's related to https://github.com/infernojs/inferno/issues/548 The techniques outlined by Netflix should apply to users of Preact and Inferno too, so it's still an interesting article.

It would be awesome if you could give the Inferno team your feedback so we can help improve Inferno. :)

Inferno sounds interesting, I wish I could use it for projects. But React has grown to the point where there are components for it you can npm install and import in whatever components you want.

Should the Inferno community start porting over stuff to their ecosystem it'll be easier to sell to clients.

You can point 'react' at Inferno at build time. I believe there's a Webpack plugin that does this.
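If memory serves, it's module-resolution config rather than a plugin — something along these lines with webpack and Inferno's compatibility package (exact package names worth verifying against the Inferno docs):

```javascript
// webpack.config.js (sketch): resolve 'react' imports to Inferno's
// compatibility layer so existing React code runs on Inferno unmodified.
module.exports = {
  resolve: {
    alias: {
      react: 'inferno-compat',
      'react-dom': 'inferno-compat',
    },
  },
};
```

Every `import React from 'react'` in the codebase then silently pulls in Inferno at build time, with no source changes.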

I assume that the plain HTML DOM would be fastest. Statically generated.

And that working w/ DOM/CSS would make it easier for team's designers to be more engaged.

Gibbon is Netflix's proprietary form of the DOM. They found that the standard DOM was too hard to optimize for embedded hardware, but that it was much easier if they removed the parts they didn't need.

I believe Jafar gave a talk at one of the React Confs about it.

It's quite easy to do server-side rendering of React pages, then only mount the dynamic views once in the browser.

It's possible but 'quite easy' does it a disservice as it is often not the case at any reasonable scale. Pinterest wrote a great article [1] on how they've handled it recently.

[1] https://engineering.pinterest.com/blog/how-we-switched-our-t...

It really depends on your tech stacks. Running python on the backend and JS on the frontend will create enough impedance mismatch to indeed make server-side rendering of react rather difficult.

Clojure's reagent and re-frame remove most of the complexity and tools from the equation. You run the same (mostly) code on the backend and frontend. This is what I meant by quite easy :)

http://davidtanzer.net/server_side_rendering_with_re_frame http://yogthos.net/posts/2015-11-24-Serverside-Reagent.html

I often hear that approach mentioned. But it seems so counter to my experience - that computers are plenty fast to run even complex display logic and that I'd never notice the difference. Is there really a benefit for well-written code? Or is this just an easy workaround for poor code?

I really don't like that argument because it assumes your program is important enough to be the only one running on your user's machines; it rarely ever is.

Everything easily wastes 90% of the CPU resources it touches and the task manager is completely oblivious to it, happily reporting high usage. When you have 20+ tabs open and 10+ apps, all those "it's fast enough" apps combine to create their own variant of hell.

And that isn't even a big workload. It's no wonder computers have increased many orders of magnitude in performance over the last decade, yet user experiences are still generally mediocre.

We're talking here about the time to render JSON to HTML for one page - the page that this user is presumably looking at. If that takes 90% of your CPU for more than a few milliseconds, then it's time to refactor.

It's a bit more complicated.

It's that JSON data, the request to API endpoint(s) to get it, the JavaScript to drive it, the request to fetch that JS, and whatnot. That may not waste much CPU, but it wastes bandwidth and time instead. This is quite easy to notice on mobile devices.

Your servers now have to serve these API endpoints; static pages can be deferred to proper CDNs. For larger deployments this can drive server costs up enough to eat your profits rather quickly. And that's not even considering that the dynamic route took much more development work than the static one in the first place.

About the 90% wasted CPU: I was talking about how the CPU constantly waits for memory, because very, very few programmers optimize for cache misses and lots of dynamic languages make it impossible to. Waiting on memory still shows as activity in the task manager, but the CPU isn't computing anything.

Well, sometimes it's not so much that it's poor code as that there's a lot of data to render. For example, imagine rendering a form with 100 elements. If it takes 0.1 seconds to render that on an average computer, you can make it 0.01 by rendering it on the server instead.

And then you can cache it. What's the point of downloading megabytes of JavaScript, calling 80 different endpoints and then generating what could've been statically saved from the beginning and sent in a single request?

But your server has to render it for the thousands of visitors to your site. I still don't get the rationale.

Smart rendering would involve using a mostly static template that's updated with the relatively few dynamic elements as the page is being readied to be sent over the wire. If 90% of your dynamic content is the same, why put that through your app processes at all?

Static stuff is static at either end, and doesn't add to rendering time.
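A minimal sketch of that split, assuming a Node-style server; the placeholder scheme and function names are made up for illustration:

```javascript
// The static shell is rendered once (at startup or build time); serving a
// page is then just splicing in the few dynamic values.
function buildStaticShell() {
  // Imagine this being the expensive templating/render pass.
  return '<html><body><header>My Site</header><main>%DYNAMIC%</main></body></html>';
}

const shell = buildStaticShell(); // expensive work happens exactly once

// Per-request cost is a single string substitution.
function renderPage(dynamicHtml) {
  return shell.replace('%DYNAMIC%', dynamicHtml);
}
```

Real stacks do this with template partials or edge-side includes rather than string replace, but the cost structure is the same: the 90% that never changes is paid for once.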

Most computers, sure, but Netflix runs on a variety of devices which includes pretty slow/dumb "smart" devices. An example is the Roku stick, where your resources might be quite limited.

Disheartening that A/B testing and user feedback led to video previews. One of those "features" that's so annoying you wonder how it ever made it past the brainstorming stage.

Still worse is auto-starting the program when you're trying to figure out what to watch. They have my money already; why do they need to inflate their view numbers?

That's what they call "video previews." Sounds so innocuous huh?

Semi-OT, but does anyone know how to make the new Netflix high-performance React-based TV UI stop playing selections automatically while scrolling through them, when you pause on one for more than 3 seconds?

What possible reason(s) would they have for doing this? Doesn't initiating playback of a title cost them money for bandwidth and/or license fees? It's just plain infuriating, in a first-world-problem kind of way.

Different kind of TV interface, but still: I have a Sony 46HX853 and it really bugs me that after turning it on from standby it still takes many seconds before it will respond to the input select button on the remote. UI responsiveness from startup will definitely be on my list of things to test the next time I buy a TV.

My TV controls the standby LED from software, thus it takes a 5-6 second wait to determine if you actually turned it on

I'm still waiting for them to support Firefox and Linux. Their support says they are working on it, but I am positive that is a lie. At least I can work around it by masking my user-agent string as Chrome (DRM needs to be enabled).

Netflix works on Firefox. I don't remember about Linux. ~3 years ago yes Netflix wouldn't work on Linux without plugins, but I thought they fixed it or provided a workaround.


Nope, you have to change your user-agent string to get around it. From what I can tell Netflix is filtering on it. But I may be wrong; I'm not sure how this stuff works. But I can do it on Firefox and Linux as long as I have DRM enabled (the change in FF49+) and change my user-agent string to Linux Chrome (see image http://i.imgur.com/twjpXgv.png). It WILL NOT work without changing the user-agent string.

Talking to their support, Netflix says it is FF's problem. From what I can tell it isn't actually FF's problem anymore (because then this workaround wouldn't work, right?).

Anyone know if this TV interface can be set up on a HTPC?

The Netflix experience on a Windows HTPC is quite frankly, abysmal (whether it's the Windows Store app or through the web browser).

On which TVs does this interface run? How can I check? I didn't know TV interfaces used HTML.

As far as I am aware, most modern ones.

Panasonic, LG, etc.

Whilst the Netflix UI is great compared to a huge number of apps, I do find it has actually deteriorated over the past couple of years, in terms of interface speed.

Nowadays, when I launch the app on my 2013 Panasonic TV, it stalls at the profile selection screen for about 5 seconds, and once again once a profile is loading.

It never used to do this, and I presume it has a lot to do with precaching data; it is highly annoying as it consumes keypresses during this stall, meaning you can quite accidentally start watching something you never intended to.

I'd really love to see more high quality posts like this on React. Thank you for sharing the knowledge!

If you want performance, why use a 90k JavaScript library?

What do you use to measure key input responsiveness?

How many people still use TVs these days? No one I know even owns one anymore, though my friend circle might not be that representative of society as a whole.

Practically every UK household.

I know some people without a TV but that's because they choose not to consume mass media, we've done it in the past.

Where do you live? Presumably your circle still consume mainstream media but do so via laptop/desktop/handhelds?

What's with everyone's obsession with React recently? All I see is a template engine and not a particularly good one. You're still left with the same problem mixing your HTML into JavaScript.

The thinking--and I'm not sure I agree with it, but it doesn't seem that objectionable--is that, while the HTML/CSS/JS separation makes a lot of sense for documents, it does not necessarily make as much sense for applications. HTML is just being used as a rendering language, not as a content description language, if that makes sense? And so the split between them might not make as much sense.

As a template engine (which I'm not totally sure is the correct term for it), I will cape up for it (more from a React Native perspective than a React perspective, and with the caveat that I think the JS world is mostly a crime-in-progress). I haven't found something that I've enjoyed working with in the JavaScript world nearly as much as React.

So 100% agree. I haven't found something I've enjoyed working with in JavaScript.

The environment I work on usually has very clean MVC and the issue is always that the designers I work with don't know javascript but need control over the HTML so I'm always taking their raw HTML with almost no javascript and basically reworking the HTML & CSS to do the animations or w/e is necessary.

I've tried React but generally I just get very dirty .JS files with mixed HTML/CSS in them. It's honestly not as clean as people make it out to be unless you're working with UI/UX people who know JavaScript already.

Literally every time I have an HTML/CSS person mess with the code I have to fix the JavaScript afterwards.

I think the implications of UI/UX people who don't know JavaScript is probably not great to begin with, yeah? I'm not sure that React isn't just exposing a flawed assumption that maybe we shouldn't have had in the first place.

"but generally I just get very dirty .JS files with mixed HTML/CSS in them"

Just put the CSS/Sass in .css/.sass files and import the classes from the React JSX file. No need to mix CSS and HTML.

I don't think I could write a comment that's a stronger explanation than what Guillermo Rauch wrote:


TL;DR: It's not about the HTML in JavaScript; it's about being able to declare all the states of your UI in a stateless way. The result is that React components are like pure functions: you pass them inputs and you get predictable outputs.
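The "pure function" framing can be sketched without React itself — a component is just a function from props to a description of the UI. Plain objects stand in for React elements here; the `Greeting` component and its fields are invented for illustration:

```javascript
// A component as a pure function of its props: same input, same output,
// no side effects, no hidden state.
function Greeting({ name, unread }) {
  return {
    type: 'div',
    children: [
      { type: 'h1', children: ['Hello, ' + name] },
      {
        type: 'p',
        children: [unread > 0 ? unread + ' unread messages' : 'No new messages'],
      },
    ],
  };
}

// Every UI state is just another input, so all states are enumerable
// and testable without a browser:
const idle = Greeting({ name: 'Ada', unread: 0 });
const busy = Greeting({ name: 'Ada', unread: 3 });
```

The framework's job is then only to turn successive return values into DOM mutations; your code never mutates anything directly.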

You can do that with other solutions; it's not limited to React. I use Polymer, but you can do it with Vue or others.

True, but that's beside the point. React, along with others, exists to solve an architecture problem that regular HTML/CSS/JS is notorious for.

If it's about state I feel like ExtJS is a better alternative especially with the way it handles Controllers, Events, and "Components" with state.

We use ExtJS a lot. But I think the exact opposite is the case. With ExtJS, you have the usual double-binding, e.g. if you set this property of that Model somewhere in your view hierarchy, some little snippet of HTML is inserted/removed from the DOM. It's basically side effects only.

If I understood it correctly, one can think of a React app/page as a pure function taking data and producing a DOM.

Reasoning about and testing pure functions should be much easier than doing so for a set of components which partly rely on side effects.

I've used both React and ExtJS. Overall React is much easier. ExtJS becomes a mess of event handlers, just take a look at all the possible events when working with their trees. In theory it looks great, in practice it's hard to follow the code and performance is terrible. The two way binding causes all sorts of headaches including the usual remove listener, update model, add listener type code. It's difficult to follow what code updates what part of the DOM. With React I only need to look at a component's render function. Same with the component's state.

You don't have to use JSX if you're a purist - but why? JSX is just an easier way to write React components.

Yep I wish Elm people would wake up to this too



(EDIT: Help - how do you post 2 links on adjacent lines without HN breaking the formatting and sticking them on the same line?)

React is not about templating. It is about swapping an O(n^2) problem of transitioning between all N possible states of your app for an O(1) problem of describing what the UI looks like in any state and letting a machine do the transitions.

JSX is convenient but all React really requires is putting all of the logic to render a component under one method. You could carefully construct a piece of XML or something like that in render and then pass it to a templating engine of your choice°, instead of any JSX.

But JSX is as nice a language as any for templates, and side effects in render are clearly wrong in React, so the old issues of Turing-complete templating are ameliorated.

° You'd have to do a tiny bit of wiring to make sub-components go back into the react pipeline.
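For what it's worth, JSX compiles down to plain function calls, so "no JSX" just means writing those calls yourself. A minimal stand-in for the call is sketched below (React's real `createElement` returns element objects with the same rough shape; `renderTodoList` is a made-up example):

```javascript
// JSX is only sugar: <li key={1}>text</li> compiles to a call like
// createElement('li', { key: 1 }, 'text').
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// All React really asks is that a component's rendering logic lives in
// one method returning such a tree; a templating engine of your choice
// could emit the same structure.
function renderTodoList(todos) {
  return createElement(
    'ul',
    { className: 'todos' },
    ...todos.map(function (t) { return createElement('li', { key: t.id }, t.text); })
  );
}

const tree = renderTodoList([{ id: 1, text: 'write docs' }]);
```

Whether the tree comes from JSX, nested calls, or a template engine, the reconciler only ever sees the resulting element objects.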

I found the whole model of updating state and having the UI automatically re-render on state changes very useful. You only need to define the UI once, and the real-time rendering is left to the state watcher.

It's a useful model, but I've found that it can break down a bit on really large, complex apps. If a piece of state is being passed to multiple places via props (either manually or using something like Redux), it can be difficult to see at a glance every place in an app that a single bit of state is being used.

Going back quite a few years now, I once worked on a GWT app that kept its state in a central store, and updated it via firing events on an event bus. In practice, it worked somewhat like Redux, but instead of mapping state to props, in each component you'd subscribe to the update event and then update the relevant bits of the component in the event handler. Being Java, it was easy to find all references to the update event to get a quick understanding of exactly where it was being used.

Come to think of it, it would be easy enough to use React that way too, especially if using TypeScript and maybe something like RxJS.

Just wondering on this bit:

> it can be difficult to see at a glance every place in an app that a single bit of state is being used

What problems has this caused you? Personally, I find it useful to be oblivious to what specifically needs to update, but I'm interested in situations where that's not the case.

> It's a useful model, but I've found that it can break down a bit on really large, complex apps. If a piece of state is being passed to multiple places via props (either manually or using something like Redux), it can be difficult to see at a glance every place in an app that a single bit of state is being used.

This is why query-based approaches to a single UI app state (ex: om.next) tend to be a bit easier, but they can still suffer from what you describe (especially if queries are dynamically built by a user or something). At a minimum, you can either reuse the same query or at least know through the query syntax where your code is touching some bit of state, as it should be available for analysis by an IDE and easily searchable as plain text. Still, there's no really perfect solution I've seen, and eventually things can get messy if you aren't careful and as you increase the number of developers touching the code.

In general, updating a UI only based on changes is a great approach. The challenge has always been identifying what changed and minimizing the tracking of those changes in a scalable way. I saw many old approaches use things like dirty flags everywhere or field-by-field comparisons. React-like frameworks in ClojureScript make this much easier today because you can do a very fast identity check using immutable structures like those in Clojure. If the check itself is expensive, the benefits of a delta-based approach are limited versus giving up and doing full re-renders. I've hit this in game programming either way, but usually the change-based approach wins unless there's some very specialized case or design issue, or simply so much raw power that it's not an issue anyway.

Where events themselves suck is predictability. This is doubly so for systems that introduce event hierarchies i.e. inheritance-like constructs for events. I strongly prefer deterministic approaches to rendering when possible. That's not to say you re-render at a fixed interval, but rather attempt to re-render changes only if they exist. It makes debugging, optimizing, and understanding the system so much easier.

From a performance and debugging point of view, events or signals-and-slots tend to cause situations where you're jumping around the code so much you lose all kinds of CPU cache or GPU benefits (buffers, batching, pipelining, etc.) depending on what you are doing. Also, some event systems allocate a lot of objects, and in systems requiring heap allocation and/or garbage collection this can become really ugly if the app runs for a while. Events make things easy for small projects but tend to create spaghetti for larger ones, even with a single event bus. Approaches using loops, deltas and possibly queues/mailboxes tend to scale a lot better for games, and also for apps that have performance issues. A side benefit is that your state tends to become a functional reduction, which has its own benefits, such as making undo/redo, logging, and error handling easier.

Sometimes I'm rather confused by all the UI issues created in app dev. I understand them, but as someone with decades of experience doing both game and app development, I'm like: wtf, app programmers. React and things like it are, or at least can be, a bit closer to the loop-based, pipelined, time-stepped approach used in modern game architectures precisely to achieve consistent performance and do sane things in the face of GPUs.

React is not a template engine.
